
Why your Content Security Policy isn't as secure as you think

Forrest Allison · 8 min read

I'm a normal, non-security-focused developer, but I work for a security company. I wasn't surprised to learn that our services use a good, well-thought-out CSP to protect our users.

What did come as a big surprise to me is that most companies, even huge, well-regarded ones, have CSPs that don't protect against anything. Some have given up and don't even use them at all (like www.google.com). For the most part, it's not their fault and there's not much they can do about it. More on that below.

How CSPs work

The Content Security Policy (CSP) is a response header or <meta> tag set by the server when a webpage first loads. It controls what the page is allowed to load and, most importantly, from where. The idea is to act as a last line of defense against loading something malicious onto your page.

If you've never seen one before, they're actually pretty simple.

Example small CSP
Content-Security-Policy: 
default-src 'self';
script-src 'self' www.google.com;
For a more complete overview of CSP directives and values, this is a great article.

The CSP above means your page can only load content from your own domain, with the exception that scripts can also load from Google. It's pretty common to include a few additional domains (like Google) in here that you need to load things from, such as analytics or some third-party auth scripts.

That CSP also means that inline scripts like <script>alert("Hi, I'm a script")</script> won't be executed, which is great for security.
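If you serve pages from a Node backend, a policy like this is typically attached as a response header in middleware. Here's a minimal sketch, assuming a hypothetical Express app (the header value mirrors the example above):

import express from "express";

const app = express();

// Attach the CSP from the example above to every response.
// Directives are separated by semicolons; 'self' means "this page's origin".
app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self' www.google.com;"
  );
  next();
});

app.get("/", (_req, res) => {
  res.send("<h1>Hello</h1>");
});

app.listen(3000);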

So, if we trust our server and Google's server not to do something malicious, we would expect the website to be secure.

Unfortunately, this CSP is still totally insecure, because:


CSP is Swiss cheese

Studies have found that the vast majority of sites using CSPs are still vulnerable to script injection. Here are some of the most common attacks.

JSONP Bypass

JSONP is a hacky way to load cross-domain JavaScript that became popular in the mid-2000s. More of a pattern than a protocol, it lets the requester tell the server to wrap whatever data it sends back in whatever snippet you pass as a URL parameter. JSONP has since been superseded by CORS, but it's still very much alive in the wild.
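To make the pattern concrete, here's a rough sketch of what a typical JSONP endpoint does on the server. This is a hypothetical Express handler, not any real Google endpoint; the key point is that it reflects the callback parameter, unescaped, into an executable JavaScript response:

import express from "express";

const app = express();

// A classic JSONP endpoint: whatever string arrives in ?callback=
// is reflected, unescaped, into a script the browser will execute.
app.get("/api/user.js", (req, res) => {
  const callback = String(req.query.callback ?? "callback");
  const data = { name: "Alice" };

  res.type("application/javascript");
  // e.g. ?callback=alert('pwned') produces: alert('pwned')({"name":"Alice"});
  res.send(`${callback}(${JSON.stringify(data)});`);
});

app.listen(3000);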

So this script tag:


<script
  src="https://accounts.google.com/o/oauth2/revoke?callback=alert('This is a malicious alert')">
</script>

Will return this:

// API callback
alert('This is a malicious alert')({
  // ...Google's intended payload
});

Go ahead and try the URL from that script tag in your browser. You'll quickly see the problem. Now an attacker who can embed HTML can execute arbitrary JavaScript by having Google's server reflect it back into your page. Any un-sanitized field that leaks HTML into your page is now an immediate vulnerability.

There are a great number of these vulnerable legacy endpoints, not just on Google's domains but across the web.
Here is a list of just some of them.

Because this isn't a specific protocol but just a server-side (anti)pattern, there's no way for the browser to detect this behavior and disable it. The only solution is to not allow scripts to load from any vulnerable domain, and to host them yourself instead.

The bigger your website, the harder this is to deal with.

If you have a small site and immediate control over the codebase, this could be an easy problem to patch.

For large sites at big companies that load analytics payloads, third-party auth, and other scripts from these vulnerable domains, it can be difficult to mitigate. Heck, a lot of big companies have legacy JSONP endpoints on the same domain as the site. There's not much you can do about that.

File Upload Bypass

Let's say your CSP is locked to only allow scripts from your domain.

If your site allows file uploads and downloads and doesn't discriminate against files that end in .js or are served with the content type application/javascript, a user can simply upload a file malicious-code.js and load it with a script tag, directly from your domain.


<script src="https://www.yourdomain.com/myfiles/malicious-code.js"></script>

You have to be very careful about what your users can upload. In some cases the browser will try to automatically detect and fix MIME types in a process called sniffing, which could cause a file type you didn't block to be interpreted as application/javascript when it's downloaded. File downloads need to include the header X-Content-Type-Options: nosniff to prevent that.
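As a sketch of what a safer download route can look like (assuming Express again; the upload directory and route are made up), serve user uploads with nosniff, a non-executable content type, and as an attachment:

import express from "express";
import path from "path";

const app = express();

// Serve user uploads as opaque downloads, never as scripts.
// "/srv/uploads" and the route are hypothetical.
app.get("/myfiles/:name", (req, res) => {
  const file = path.join("/srv/uploads", path.basename(req.params.name));

  res.setHeader("X-Content-Type-Options", "nosniff");        // no MIME sniffing
  res.setHeader("Content-Type", "application/octet-stream"); // never text/javascript
  res.setHeader("Content-Disposition", "attachment");        // download, don't render
  res.sendFile(file);
});

app.listen(3000);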

As you can see, these are nuanced details that are easy to miss.

Other vulnerabilities

These are just a few of the potential ways to bypass the CSP's protections. The list goes on.

Mitigations

For large sites, there may be just too many gaps in the armor to treat the CSP as a functional line of defense, and many big sites have disabled it entirely. This paper found that, of the sites that do use a CSP, over 94% have ineffective ones!

Long story short, there's a good chance your CSP isn't doing much to protect your site from XSS. This tool made by Google can check your CSP for common vulnerabilities, but note that it will miss both of the vulnerabilities I discussed above.

Nonces and Strict CSP

There is an approach known as Strict CSP that leans heavily on the built-in nonce feature. In a nonce-based CSP, the server adds a random nonce attribute to every script tag and declares the same nonce in the CSP header, which can help mitigate the above attacks. An injected script tag can't copy the nonce onto itself without already running JavaScript.

CSP:
script-src 'nonce-randomString';

Script tag:
<script nonce="randomString" ...

Unfortunately this strategy is the most work to implement and is rarely used. Getting it to work with CDNs and caching can be a challenge.
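For the curious, the per-request plumbing looks roughly like this (again assuming Express; a minimal sketch, not a production setup). Because the nonce has to be freshly random on every response, the HTML changes on every request, which is exactly what makes CDN and cache integration painful:

import express from "express";
import crypto from "crypto";

const app = express();

app.get("/", (_req, res) => {
  // Fresh nonce per response; it must be unpredictable and never reused.
  const nonce = crypto.randomBytes(16).toString("base64");

  res.setHeader(
    "Content-Security-Policy",
    `script-src 'nonce-${nonce}'; object-src 'none'; base-uri 'none';`
  );

  // Every legitimate script tag carries the same nonce.
  res.send(`<!doctype html>
    <script nonce="${nonce}" src="/app.js"></script>`);
});

app.listen(3000);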

info

Defining both a nonce and a domain in script-src means that a script matching either the domain or the nonce will be allowed, which seems like a gotcha that could catch a lot of developers.

How LunaSec and other tools use CSP effectively

LunaSec is an Open Source security product we wrote, designed to help normal web apps handle sensitive data. The main idea with LunaSec is to be a secure "sidecar" for every part of a full-stack web app.

The core of its frontend security is that it embeds sensitive form elements inside an iFrame on a different domain than the main site. This is very similar to how payment processors like Stripe embed an iFrame to collect payment information, just more generic.

The above problems with CSP are a great demonstration of why we've worked to isolate and secure a small portion of our users' pages, instead of trying to completely secure the main sites themselves. Big production web apps are just too large of an attack surface for the average company to fully audit for vulnerabilities.

LunaSec and frameworks like it embed a tiny, stripped-down, domain-specific webpage that does one thing, totally focused on security. Because it's on a different domain (in LunaSec's case, usually a subdomain you own), the main site can be vulnerable to the above issues without exposing the sensitive data to XSS.
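The pattern itself can be sketched without any LunaSec-specific APIs (the origins and element IDs below are made up): the main page embeds an iframe served from a separate origin, the sensitive input only ever lives inside that frame, and the frame only reports non-sensitive state back.

// On the main page: embed the isolated frame.
// "secure-frame.your-site.com" and "signup-form" are hypothetical.
const frame = document.createElement("iframe");
frame.src = "https://secure-frame.your-site.com/ssn-field";
document.getElementById("signup-form")?.appendChild(frame);

// The frame can report non-sensitive state (e.g. "field is valid")
// via postMessage, but never the value itself.
window.addEventListener("message", (event) => {
  if (event.origin !== "https://secure-frame.your-site.com") return;
  console.log("secure field valid:", event.data.valid);
});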

Here is an example CSP header that loads with one of LunaSec's iFrames:

Content-Security-Policy: 
style-src 'unsafe-inline' 'nonce-r54BDFpZ8Nkb9IffY7JwJA';
script-src 'nonce-r54BDFpZ8Nkb9IffY7JwJA';
object-src 'none';
frame-ancestors https://your-site.com;
base-uri 'none';
report-uri http://lunasec-secure-backend.your-site.com/cspreport;
connect-src 'self' http://lunasec-secure-backend.your-site.com;
default-src 'none';
require-trusted-types-for 'script'

We expect this to be secure and stay that way because this CSP is never going to get bent out of shape by the changing requirements and slip-ups of a big web app. The scope here is so small.

Moving the security-critical pieces onto a separate domain is one of the easiest ways for large sites to employ a CSP that actually works as a last line of defense.

Thanks for reading, and if you'd like to give back, we are still growing and could always use another star on our GitHub repo.