We have a simple blog system that allows users to input HTML and JavaScript to build a blog page. I'm aware that allowing JavaScript opens the door to XSS attacks. We do, however, need to allow users to insert JavaScript; an example would be allowing the user to insert a Google ads code snippet, which contains JavaScript. The question is how to go about allowing JavaScript while preventing XSS. I was assuming HTTPS would secure cookies and prevent cookie stealing, but users on Stack Overflow said otherwise.
Does the JavaScript input need to be available right away? This may be too burdensome, but what about reviewing and approving user-submitted JavaScript? – this.josh Jun 30 '11 at 23:17
"as an example would be allowing the user to insert a google ads code" <- isn't this the definition of cross site scripting? You just want to prevent cross site scripting vulnerabilities? – jrwren Jul 07 '11 at 12:15
Actually you have an auditing issue that @jrwren really opens right up for you. At a bare minimum, you're going to want some kind of versioning on any source code that a user submits and that you store. If you don't, and that code IS later used to launch attacks, you will at least be able to say when that piece of code was added, and be able to roll back to a known "good" state. – avgvstvs Oct 27 '14 at 15:07
6 Answers
HTTPS is all about encryption and ensuring server identity so that other people cannot listen in on the traffic. It does not help at all against malicious JavaScript code running within the bounds of the same-origin policy.
There is a flag on cookies called HttpOnly which prevents simple JavaScript code in modern browsers from accessing the cookie. But this does not fix the issue, it just makes exploits a bit less simple: the JavaScript code can directly trigger form submissions with the permissions of the current user. Since the JavaScript runs on the same domain, it has access to CSRF tokens, and the browser will include the cookie on the form submission without the JavaScript needing to read it. (And there are a number of bugs around HttpOnly; for example, some browsers allow reading the cookie from an XMLHttpRequest object.)
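For completeness, here is a minimal sketch of setting a cookie with the HttpOnly and Secure flags, assuming a plain Node.js server (the cookie name and value are only illustrative); as explained above, this limits but does not eliminate the risk:

    const http = require('http');

    http.createServer((req, res) => {
      // HttpOnly: the cookie is not readable via document.cookie in compliant browsers.
      // Secure:   the cookie is only sent over HTTPS.
      // Note: injected script can still trigger requests that carry this cookie
      // automatically, which is why HttpOnly alone does not solve the problem.
      res.setHeader('Set-Cookie',
        'sessionId=opaque-random-value; HttpOnly; Secure; Path=/; SameSite=Lax');
      res.end('ok');
    }).listen(8080);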
Furthermore, the JavaScript author may replace the current page content with "Your session has timed out, please log in again" and his own login form. But the form he adds does not point to the login URL of your server; it points to a server he controls. Or, even more subtly, there may be a cross-domain Ajax call in the event handler of the submit button.
There is a rather simple solution in two steps:
- Filter all untrusted markup, including all JavaScript. If you are using PHP, you can use HTML Purifier. There are similar libraries for all common languages.
- Provide harmless placeholders ("widgets"). Instead of letting your users embed JavaScript code directly, they must write something like `<div id="googleadd">identifier</div>`. A piece of trusted global JavaScript code can replace those placeholders with the real ad code (a minimal sketch follows below). Make sure to properly escape input parameters. This will limit your users a little in what they can do, but it is pretty easy to cover all common use cases.
It does make sense to google a bit, as there are already a number of widget libraries out there, although most of them still require JavaScript calls.
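As a rough illustration of the placeholder idea, the trusted, site-owned script could look something like the following. The data attributes, the validation pattern and the ad URL are my own assumptions for the sketch, not any real ad network's API:

    // Trusted script shipped by the site owner: it scans for placeholder elements
    // written by blog authors and replaces them with the real (trusted) embed code.
    document.querySelectorAll('div[data-widget="googlead"]').forEach(function (el) {
      var slotId = el.getAttribute('data-slot-id') || '';

      // Whitelist-validate the user-supplied parameter instead of echoing it back.
      if (!/^[A-Za-z0-9_-]{1,32}$/.test(slotId)) {
        el.textContent = 'Invalid ad slot identifier';
        return;
      }

      var script = document.createElement('script');
      // Escape the parameter before putting it into a URL.
      script.src = 'https://ads.example.com/show.js?slot=' + encodeURIComponent(slotId);
      el.appendChild(script);
    });

The blog author only ever writes the placeholder `div`; everything that actually executes comes from the site owner.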
@Hendrik, this does not answer the question -- you must have missed @Hussein's statement that users must be allowed to include Javascript. If you filter all markup (e.g., with HTML Purifier), that will strip all Javascript -- but the original question-asker said Javascript must be allowed. Putting the Javascript in a `div` does nothing for security; the untrusted Javascript can still do everything, regardless of where on the page it is placed. – D.W. Jul 02 '11 at 07:18
@D.W., no, I have not missed that part of the question. But I looked at the reasoning he gave for that statement and provided an answer for that instead. The idea is NOT to put the JavaScript into a div. The div only contains information that is interpreted by trusted JavaScript provided by the site owner. As I said, this does not give authors the full power of JavaScript, but it can be used for all common use cases, such as embedding known advertisement systems. – Hendrik Brummermann Jul 02 '11 at 08:03
HTTPS doesn't prevent stealing cookies, the HttpOnly flag does. I would recommend (if it's possible) having a library of "widgets" users can include in the page and configure. Then you can just insert the JavaScript code with the configured parameters without allowing the user to write JavaScript code.
If you enable your users to insert their own JavaScript code, they can do things like modifying the login form of your blog system to send credentials to another server, so that if some user visits the malicious blog page and decides to log in to your system on that page, his credentials could be compromised.
"_HTTPOnly flag does [prevent stealing cookies]_" but it doesn't prevent **sending them to the original website**, and reading resources obtained by sending them to the original site, so HTTPOnly adds very little security (if it adds any security at all). Note that in most cases the security sensitive cookies are opaque values (like a session ID), so that **the only possible use of them by an attacker is to send them back to the the original website** anyway. – curiousguy Jun 20 '12 at 02:50
(...) And some websites only allow interaction from the original IP address (a terrible idea IMO). IOW, HTTPOnly prevents the attacker from doing something he wouldn't do anyway. So if someone thinks HTTPOnly is the solution to a security problem (any security problem really), the odds that he is wrong are 99:1. – curiousguy Jun 20 '12 at 02:52
No, HTTPS doesn't stop this threat.
If your users must be allowed to include Javascript they have written into the page, this is a very challenging problem. What you want is some kind of Javascript sandbox. @Mike Samuel's recommendation of Caja is a good choice of Javascript sandboxing technology. Other possibilities in this space include Microsoft's Web Sandbox, Yahoo's AdSafe, Facebook's FBJS, and probably others I have missed.
Please understand that while solutions do exist, they won't be entirely trivial to deploy. You've stumbled upon a hard problem in web security; the web just wasn't designed to support this kind of thing, and as a result, the solutions necessarily end up being a pretty complex piece of technology. So you will probably need support from a capable developer to deploy and integrate one of these solutions into your web site.
As D.W. said, this gets really complex, and the wildcard certificates that are required to isolate one piece of untrusted JavaScript code from another are quite expensive. On a small site it is unlikely that you will make enough money with advertisement to recover those costs. So you should think in detail about whether arbitrary JavaScript code is really, really needed. – Hendrik Brummermann Jul 02 '11 at 08:13
@Hendrik, what did you mean by "wildcard certificates that are required to isolate one untrusted JavaScript code from another untrusted JavaScript code"? Usually, the biggest drawback of wildcard certs (besides the price) is *lack* of isolation. I think I didn't understand what you mean... – AviD Jul 05 '11 at 00:21
@AviD, you are right that a compromise of one server, which exposes the private key, affects all other servers using the same key. In this case, however, there is only one server: the isolation is enforced by the JavaScript engine in the client according to the [same origin policy](http://de.wikipedia.org/wiki/Same-Origin-Policy). Using individual certificates is not an option unless the user base is extremely small, because it requires one IP address for every certificate to be compatible with common browsers. – Hendrik Brummermann Jul 05 '11 at 05:51
@Hendrik ah, I think I understand now what you mean. Not that the wildcard enforces the isolation, but that in certain scenarios it is necessary in order to implement things the way you describe. – AviD Jul 05 '11 at 07:12
Shameless, but relevant plug:
From code.google.com/p/google-caja/
Caja allows websites to safely embed DHTML web applications from third parties, and enables rich interaction between the embedding page and the embedded applications. It uses an object-capability security model to allow for a wide range of flexible security policies, so that the containing page can effectively control the embedded applications' use of user data and to allow gadgets to prevent interference between gadgets' UI elements.
If you allow Google tracking, you already allow XSS, because Google Analytics is by definition cross-site scripting.
You could set up a template where the user only pastes his ID, which you can sanitize properly.
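A minimal sketch of that idea, assuming a Node.js backend and a Google Analytics style tracking ID (the ID pattern and the snippet template are illustrative assumptions, not an exact specification):

    // The user only supplies a tracking ID; the surrounding snippet is a fixed,
    // trusted template, so no user-written JavaScript ever reaches the page.
    function renderAnalyticsSnippet(userSuppliedId) {
      // Whitelist the expected shape of the ID instead of trying to strip "bad" input.
      if (!/^(UA|G)-[A-Za-z0-9-]{4,20}$/.test(userSuppliedId)) {
        throw new Error('Invalid tracking ID');
      }
      return '<script async src="https://www.googletagmanager.com/gtag/js?id=' +
             encodeURIComponent(userSuppliedId) + '"></script>';
    }

    // Usage: renderAnalyticsSnippet('UA-12345-1') returns the trusted embed markup.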
You can never allow JavaScript input and be safe. Try working with the code below. Do you think it's safe? Does it have any of the keywords that suggest danger? :)
$=''|'',_=$+!"",__=_+_,___=__+_,($)[_$=($$=(_$=""+{})[__+__+_])+_$[_]+(""+_$[-__])[_]+(""+!_)[___]+($_=(_$=""+!$)[$])+_$[_]+_$[__]+$$+$_+(""+{})[_]+_$[_]][_$]((_$=""+!_)[_]+_$[__]+_$[__+__]+(_$=""+!$)[_]+_$[$]+"("+_+")")();
Spoiler: it's alert(1)
It can be anything. You could replace `$` in the code above with anything, including any of these: `øµª`, and it will do the same. It's just a character. – naugtur Apr 11 '12 at 16:13
+1 for showing one of the most non-obfuscated but still obfuscated XSS attacks I've ever seen. – avgvstvs Oct 27 '14 at 15:18
By definition, storing JavaScript and rendering it in the browser is XSS.
1) Add identity, so that every submitted script can be attributed to a specific user.
2) Add a use policy: https://www.google.com/analytics/tag-manager/use-policy/
3) I am against server-side filtering (whitelisting/blacklisting) because it is a workaround to the main problem, it is difficult to do right, and it can drop input that should not be dropped. With HTML5, CSS3 and JS constantly evolving, you will need to keep updating your whitelist/blacklist to the point where the sanitizer becomes an enormous piece of code that allows just about everything, and therefore defeats its purpose. But if you insist on this track, the OWASP HTML Sanitizer and JSoup are further examples apart from those mentioned above.
4) Instead of 3), use a Content Security Policy (https://content-security-policy.com/) and drop support for ancient browsers. This is the proper way of fixing this: the server just stores the content, and the policy is enforced when it executes in the browser context (a minimal sketch follows after this list).
5) You can add templating tags/placeholders to reduce risk (e.g. Markdown). This will also help your pentesting tools not to echo back `;alert(1);`, as the server would expect different input in order to store and send the JavaScript.
6) Run AV and a static code analyzer such as Fortify on the server, and move on.
7) Build in validation for errors: JSLint/JSHint/CSSHint, etc.
8) Finally, if you don't trust any of the above automation, add the human factor and review and approve the content before publishing.
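For item 4, a minimal sketch of sending such a header, assuming a plain Node.js server (the exact policy string and the whitelisted host are only examples and would need tuning for a real site):

    const http = require('http');

    http.createServer((req, res) => {
      // Only scripts from our own origin and an explicitly whitelisted host may run;
      // inline scripts injected into stored blog content are blocked by the policy.
      res.setHeader('Content-Security-Policy',
        "default-src 'self'; " +
        "script-src 'self' https://www.googletagmanager.com; " +
        "object-src 'none'");
      res.end('<html><body>stored blog content would be rendered here</body></html>');
    }).listen(8080);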