When a security issue is reported, the first relevant question is whether it is an implementation bug or an architectural flaw. Implementation bugs are fairly trivial to fix (though they may have a lot of short-term impact). Architectural flaws are big-time trouble.
Ajax security issues tend to fall into neither category. In fact, they usually are not even Ajax security issues. At all.
These issues generally focus on the interesting things an attacker can accomplish once security has already been breached through some form of malicious code injection.
There is nothing inherently insecure about Ajax applications. Their security (or insecurity) depends entirely on how the service provider manages the content delivered to the client.
So the Ajax security alerts don’t really tell us anything about the security of Ajax itself. It is by now a well-established maxim in the security community that once an attacker manages to execute arbitrary code on your computer, it’s not your computer anymore. Why would anyone assume the rules have changed just because the web jumped up a version number in the media?
As an attacker, if you manage to inject your code into a page, you can pretty much go hog-wild with the user’s session. But this is nothing new. For Ajax applications the attack surface is fairly limited; once you get past it, of course, the sky’s the limit. There are any number of ways to get around the same-origin (single domain) policy found in all modern browsers. Some of these have even become a de-facto standard for Ajax developers, which has led to comments that such policies are pointless (I respectfully disagree, but more on that later).
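The best-known of those de-facto standard workarounds is the dynamic script tag (the pattern later named JSONP): `<script>` elements are exempt from the same-origin policy, so a page can pull executable data from any domain that cooperates. A minimal sketch follows; the endpoint and callback names are hypothetical, and only the URL-building part is shown as a testable function, with the browser-side DOM step in comments.

```javascript
// Build a JSONP-style URL: the server is expected to wrap its JSON
// response in a call to the named callback function.
// (endpoint and callback names here are purely illustrative.)
function buildJsonpUrl(endpoint, callbackName) {
  var sep = endpoint.indexOf('?') === -1 ? '?' : '&';
  return endpoint + sep + 'callback=' + encodeURIComponent(callbackName);
}

// In a browser, the cross-domain fetch happens by inserting a script
// tag, which the same-origin policy does not restrict:
//
//   window.handleData = function (data) { /* use the response */ };
//   var s = document.createElement('script');
//   s.src = buildJsonpUrl('https://api.example.com/data', 'handleData');
//   document.head.appendChild(s);
```

Note what this pattern really is: deliberately executing whatever code the remote server sends you, which is exactly why the trust questions discussed here matter.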
The content provider has a trusted role. A function of that role is to sanitize the content. LISP programmers and security specialists have long been comfortable with the idea that code is data and vice versa. For everybody else this is a rather novel concept, but it’s high time they caught up.
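Sanitizing in practice mostly means making sure that data stays data when it reaches the page. A minimal sketch of the idea, assuming user-supplied text is about to be inserted into HTML: escape the characters that would let the browser reinterpret it as markup or script.

```javascript
// Escape the five characters that can turn plain text into HTML/script
// when inserted into a page. A minimal illustration, not a complete
// sanitizer for every injection context (URLs, CSS, etc. need their own rules).
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Unsafe:  element.innerHTML = userInput;            // data becomes code
// Safer:   element.innerHTML = escapeHtml(userInput); // data stays data
```

The point is not this particular function but the discipline: every place where untrusted data crosses into an executable context needs an explicit decision about how it is neutralized.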
And shifting blame onto end-users (a popular activity in the IT world) is simply not acceptable. As developers we have to shield end-users from technological complexity. This includes security. When people use our applications they implicitly trust us. As technology developers we have to honour that trust.
It makes for some interesting food for thought.
The bottom line is that security is hard and we can’t expect end-users to ever comprehend the complexities involved. So dumping security responsibility on end-users is fairly stupid.
It’s up to us to write decent systems.