
Extensions are powerful pieces of software in modern browsers, and as such, you should help ensure that your extensions are not susceptible to security exploits. If an attacker manages to exploit a vulnerability in an extension, it’s serious business because they may gain access to the same privileges that the extension has.

The Chrome extensions system has a number of built-in protections to make it more difficult to introduce exploitable code, but certain coding patterns can still open up the risk of exploits like a cross-site scripting (XSS) attack. Many of these mistakes are common to web programming in general, so it wouldn’t be a bad idea to check out one of the many good articles on the net about XSS bugs. Here are some of our recommendations for writing healthier extensions.

Minimize your permissions

The most important thing to consider is whether you’re declaring the minimal set of permissions that you need to function. That way, if you do have a security bug in your code, the amount of privilege you’re exposing to the attacker is minimal as well. Avoid requesting broad host permissions such as */* if you only need to access a couple of hosts, and don’t blindly copy and paste your manifest from example code. Review your manifest to make sure you’re declaring only what you need. This applies to permissions like tabs, history, and cookies in addition to host permissions. For example, if all you’re using is chrome.tabs.create, you don’t actually need the tabs permission.
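As a sketch (the extension name and host below are made up for illustration), a manifest that requests only what it needs might look like:

```json
{
  "name": "News Reader",
  "version": "1.0",
  "permissions": [
    "http://news.example.com/*"
  ]
}
```

Note that there is no "tabs" entry even if the extension calls chrome.tabs.create, and the host permission names one specific site rather than */*.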

Use content_security_policy in your manifest

Starting in Chrome 14, we will begin supporting Content Security Policy in extensions via the content_security_policy manifest field. This allows you to control where scripts can be executed, and it can be used to help reduce your exposure to cross-site scripting vulnerabilities. For example, to specify that your extension loads resources only from its own package, use the following policy:

"content_security_policy": "default-src 'self'"

If you need to include scripts from specific hosts, you can add those hosts to this property.
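For instance (the host below is illustrative), a policy that also allows scripts from one specific HTTPS origin might look like:

```json
"content_security_policy": "script-src 'self' https://apis.example.com; object-src 'self'"
```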

Don’t use <script src> with an HTTP URL

When you include JavaScript in your pages using an HTTP URL, you’re opening your extension up to man-in-the-middle (MITM) attacks. When you do so in a content script, you expose the pages you’re injected into in the same way. An attacker on the same network as one of your users could replace the contents of the script with content that they control. If that happens, they could do anything that your page can do.

If you need to fetch a remote script, always use HTTPS from a trusted source.

Don’t use eval()

The eval() function is very powerful and you should refrain from using it unless absolutely necessary. Where did the code come from that you passed into eval()? If it came from an HTTP URL, you’re vulnerable to the same issue as the previously mentioned <script> tag. If any of the content that you passed into eval() is based on content from a random web page the user visits, you’re vulnerable to escaping bugs. For example, let’s say that you have some code that looks like this:

function displayAddress(address) {
  // address was detected and grabbed from the current page
  eval("alert('" + address + "')");
}


If it turned out that you had a bug in your parsing code, the address might wind up looking something like this:

'); dosomethingevil();

There’s almost always a better alternative to using eval(). For example, you can use JSON.parse if you want to parse JSON (with the added benefit that it runs faster). It’s worth the extra effort to use alternatives like this.
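As a small sketch of the difference (the response contents here are our own illustration): eval() executes anything embedded in the string, while JSON.parse() only ever builds a data structure and throws on anything that is not valid JSON, so a malformed or malicious payload can never run as code.

```javascript
// Parsing untrusted JSON safely: JSON.parse builds data only and
// rejects anything that is not valid JSON.
var response = '{"street": "1600 Amphitheatre Pkwy", "zip": "94043"}';

var data = JSON.parse(response);           // safe, and faster than eval
// var data = eval('(' + response + ')');  // dangerous: runs arbitrary code

console.log(data.zip); // "94043"
```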

Don’t use innerHTML or document.write()

It’s really tempting to use innerHTML because it’s much simpler to generate markup dynamically than to create DOM nodes one at a time. However, this again sets you up for bugs in escaping your content. For example:

function displayAddress(address) {
  // address was detected and grabbed from the current page
  myDiv.innerHTML = "<b>" + address + "</b>";
}


This would allow an attacker to make an address like the following and once again run some script in your page:

<script>dosomethingevil();</script>

Instead of innerHTML, you can manually create DOM nodes and use innerText to insert dynamic content.
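A sketch of the safer pattern (the function shape is our own, not a Chrome API): build the element with createElement and assign the untrusted string via innerText, so it is never parsed as markup. The doc parameter stands in for the page's document.

```javascript
// Safe alternative to innerHTML: the address is assigned as text,
// so "<script>" in it stays literal text and never executes.
function displayAddress(doc, container, address) {
  var bold = doc.createElement('b');
  bold.innerText = address; // textContent is the standardized equivalent
  container.appendChild(bold);
}
// In a page: displayAddress(document, myDiv, address);
```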

Beware external content

In general, if you’re generating dynamic content based on data from outside of your extension (such as something you fetched from the network, something you parsed from a page, or a message you received from another extension, etc.), you should be extremely careful about how you use it. If you use this data to generate content within your extension, you might be opening your users up to increased risk.

You can also read a few more examples of the issues discussed here in our extension docs. We hope these recommendations help you create better and safer extensions for everyone.

In Chrome 13, we added some new capabilities to content scripts and proxy management.

First, you can now make cross-origin XMLHttpRequest calls with the privileges of the extension directly from your content script. You will no longer need to relay these requests through a background page; this should simplify your code. In some cases, it may even eliminate your need to use a background page. Here’s a sample extension which demonstrates the old way a content script could make a cross domain request. As you can see, the extension required a background page to route the cross-origin calls through a chrome.extension.onRequest listener. Using the new content script cross-origin capabilities, we were able to successfully rewrite the extension to completely eliminate the background page requirement. This reduces the memory required to run the extension, and reduces code complexity as well. This also means that Greasemonkey scripts that use GM_xmlhttpRequest - such as the classic Book Burro - will now work in Chrome.
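As a sketch of the new pattern (the URL and function names are our own; the host must still be listed under "permissions" in the manifest), a content script can now issue the request itself with no background-page relay. The xhrFactory parameter is injected only so the sketch runs outside a browser; in a real content script you would pass function() { return new XMLHttpRequest(); }.

```javascript
// Cross-origin fetch directly from a content script (Chrome 13+):
// no chrome.extension.onRequest round-trip through a background page.
function fetchCrossOrigin(xhrFactory, url, onDone) {
  var xhr = xhrFactory();
  xhr.open('GET', url, true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText); // handle the response in place
    }
  };
  xhr.send();
}
```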

Second, we improved how match patterns work. Until this release you could specify a matches array for your content script - the URLs over which it should operate. In Chrome 13 you can now also specify an exclude_matches array, where you can indicate the pages in which your content scripts should not work. This should allow more precise targeting of your content script.
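For example (hosts and file name illustrative), a manifest fragment combining the two arrays might look like:

```json
"content_scripts": [{
  "matches": ["http://*.example.com/*"],
  "exclude_matches": ["http://admin.example.com/*"],
  "js": ["script.js"]
}]
```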

Finally, we added support for the @run-at command for imported Greasemonkey scripts, so you can control when your script is loaded in the same way you’ve been able to do for content scripts. Running scripts at different points in a page's lifecycle can enable additional functionality. For example, we've written a script which runs at "document-start" and lists all of the HTTP resources included in the current page.
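For example (the script name is illustrative), an imported user script can request early injection with a metadata block like:

```javascript
// ==UserScript==
// @name      Resource lister
// @match     http://*/*
// @run-at    document-start
// ==/UserScript==
```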

In addition to these improvements to scripts, we’ve been working hard to allow extensions to manage Chrome’s proxy settings using different configuration options. With the Proxy Extension API, you can now configure proxy settings by choosing from several options including auto detection, the host OS’s system default, PAC scripts or fixed ProxyRules.

These new configuration options allow for more fine grained proxy controls, which we invite you to try out. There are already several 3rd party extensions available in the Chrome Web Store that showcase the API and its new capabilities, including Proxy Anywhere and Proxy SwitchyPlus.

Let us know what you think of these new features and what else you might like to see us do by joining the discussion at chromium-extensions@chromium.org.

A few weeks ago, we became aware of a security issue with WebGL: shaders could be used to indirectly deduce the contents of textures uploaded to the GPU. As a result, the WebGL specification was updated to be more restrictive when it comes to using cross-domain images and videos as WebGL textures.

As a result, Chrome 13 (and Firefox 5) will no longer allow cross-domain media as a WebGL texture; attempting it throws a SECURITY_ERR exception by default. However, applications may still use images and videos from another domain with the cooperation of the server hosting the media, via Cross-Origin Resource Sharing (CORS).

CORS support for media elements has also been implemented in WebKit via a new .crossOrigin attribute. This means that sophisticated applications that were already using cross-origin textures can continue to do so, provided the server hosting the image grants the necessary cross-origin permission using CORS. If you want to enable a CORS request on an image, all you have to do is add one line of code:

var img = document.createElement('img');
img.onload = function() { … };
img.crossOrigin = ''; // no credentials flag. Same as img.crossOrigin = 'anonymous'
img.src = 'http://other-domain.com/image.jpg';


Another nice property that we gain from this new setting is the ability to read cross-domain image data set on a 2D canvas. Normally, filling a canvas with a remote image (e.g. ctx.drawImage()) flips the origin-clean flag to false. Attempting to read back the pixels using ctx.toDataURL() or ctx.getImageData() throws a SECURITY_ERR. This is to prevent information leakage. However, when .crossOrigin is set (and the remote server supports CORS), the read is possible. For example:

var img = document.createElement('img');
img.onload = function() {
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
  var url = canvas.toDataURL(); // Read succeeds, canvas won't be dirty.
};
img.crossOrigin = '';
img.src = 'http://other-domain.com/image.jpg';


Unfortunately, this new restriction in WebGL means that some existing content will break. We’ve already started working with external image and video hosting services like Flickr to evangelize the use of CORS on their images.

You can test this new behavior today using images from Picasa, which already sends a CORS header allowing cross-origin requests, and the Chrome dev channel.

When we talk Chromebooks with our developer friends, a common reaction we get is “I can see why my [insert-relative-here] would use it, but I need my PC for coding”. Over the last few years, browser-based coding has grown from a research topic to a viable practice. You can already find many development apps on the Chrome Web Store today. Some are conventional code editors and IDEs, built right into the browser. Others are oriented more around prototyping and design. There are also many tools for project management.

First up, IDEs. You can now code, debug, and deploy real programs from the browser. A popular example at Google I/O was Cloud9, an IDE for JavaScript, Python, PHP, and Ruby. Cloud9 uses the HTML5 FileSystem capability and AppCache to sync files, so you can even code offline. There are many other IDEs in the web store too, such as Kodingen, Codey, Akshell, eXo Cloud IDE, and PHPAnywhere.


It’s not all about coding though. There are also apps focusing on web design, for people who want to make a web page without coding or perhaps experiment with a few concepts early on. Being able to edit and design web pages inside the tool that will display them is a very powerful concept. BuildorLite and BuildorPro let you construct a web page via a graphical user interface, and publish it straight on their servers. Handcraft and Mockingbird are two apps aimed at design and prototyping. And if you want a scratchpad to try a few coding experiments, check out JSFiddle.


Launching software isn’t just about designing and coding your apps; it’s also about managing the entire workflow, from planning release schedules to triaging bug reports. One example is GitHub Issues, providing a quick, app-like, way to track project issues. Another is Launchlist Pro, a checklist you can use to launch your website.


Chrome aims to bring simplicity, speed, and security to all users, and that includes developers. Being cloud-based means these tools are always up to date, and running inside the browser’s sandbox minimises the security risk to your machine. There’s no complicated install process and the only dependency is Chrome itself, which is automatically kept up to date. Just install the app and get coding.

We’re especially excited about what this means for new developers, as programming tools have never been more accessible to everyone. So whether you’re a seasoned veteran or just looking to get started, visit the Chrome Web Store today and build something awesome in your browser!

We released Google Chrome Frame in September 2009 to expand the reach of modern web technologies and help developers take advantage of HTML5's capabilities. Since then, we've seen great adoption of the plug-in by end users and developers. Even more exciting, we’ve heard from developers that Google Chrome Frame is enabling them to create legacy-free apps that are easier to build, maintain, and optimize.

However, there was one remaining obstacle to making Chrome Frame accessible to users of older browsers - users needed to have administrative privileges on their machines to install Chrome Frame. At this year's Google I/O we announced this obstacle has finally been removed.

Non-Admin Chrome Frame runs a helper process at startup to assist with loading the Chrome Frame plug-in into Internet Explorer. The helper process is designed to consume almost no system resources while running. Once installed, non-admin users will have the same no-friction experience that admin users of Chrome Frame have today.

You can try it yourself by installing the new non-admin version of Chrome Frame here. This is now available in our developer channel and is coming to stable channel very soon.

For more technical details, please see the Chrome Frame FAQ. Please share your feedback in our discussion group and if you encounter any bugs while using Chrome Frame, please file them on Chromium's issue tracker. We’ll be working hard to bring Non-Admin Chrome Frame up to the beta and stable channels over the coming weeks. You can help us move this up to stable as quickly as possible by trying out the current release and sending us your feedback!

Valgrind is a great tool for detecting memory errors. We are running many Chromium tests under Valgrind and it has helped us find hundreds of significant bugs. However, when we run binaries under Valgrind, testing becomes at least 10 times slower. This huge slowdown costs us more than just machine time; our trybots and buildbots can’t provide fast feedback and some tests fail due to timeouts.

A month ago we released AddressSanitizer (aka ASan), a new testing tool. ASan consists of two parts:
  • A compiler which performs instrumentation - currently we use a modified LLVM/Clang and we're trying to contribute our code to the core LLVM package.
  • A run-time library that replaces malloc(), free() and friends.

The custom malloc() allocates more bytes than requested and “poisons” the redzones around the region returned to the caller. The custom free() “poisons” the entire region and puts it into quarantine for some time. The instrumented code produced by the compiler checks if the address being accessed is poisoned and if so, reports an error. The compiler also inserts poisoned redzones between objects on stack to catch stack buffer overrun/underrun.
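The redzone idea above can be illustrated with a toy model (this is our own simplified sketch, not ASan's real implementation, which works on shadow memory at machine level): a poison map marks which addresses are off-limits, every access consults it first, and freed regions stay poisoned while quarantined.

```javascript
// Toy redzone allocator: out-of-bounds and use-after-free accesses
// land on poisoned addresses and are reported immediately.
var HEAP_SIZE = 256, REDZONE = 8;
var heap = new Uint8Array(HEAP_SIZE);
var poisoned = new Uint8Array(HEAP_SIZE).fill(1); // everything starts poisoned
var next = 0;

function asanMalloc(n) {
  var addr = next + REDZONE;                       // poisoned redzone before...
  for (var i = addr; i < addr + n; i++) poisoned[i] = 0;
  next = addr + n + REDZONE;                       // ...and after the user region
  return addr;
}

function asanFree(addr, n) {
  for (var i = addr; i < addr + n; i++) poisoned[i] = 1; // quarantined: stays poisoned
}

function store(addr, value) {
  if (poisoned[addr]) throw new Error('invalid access at ' + addr);
  heap[addr] = value;
}

var p = asanMalloc(16);
store(p, 42);        // fine
// store(p + 16, 1); // one past the end: lands in the redzone and throws
```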

ASan helps us find a subset of bugs that are detectable by Valgrind like heap buffer overrun/underrun (out-of-bounds access) and “Use after free.” It can also detect bugs that Valgrind can not find, such as stack buffer overrun/underrun. Last month alone, ASan helped us find more than 20 bugs in Chromium including some that could have potentially led to security vulnerabilities.

What makes ASan even more powerful than other comparable tools is its speed. On SPEC CPU2006 benchmarks the average slowdown is about 2x. On Chromium’s “browser_tests”, the slowdown is about 20%. If you are curious to learn why ASan is faster than comparable tools read this article.

Today ASan works only on Linux (x86 and x86_64) and ChromiumOS, but we're planning to port it to other platforms in the near future. In the coming months we also plan to set up various ASan buildbots and trybots for Chromium.

The AddressSanitizer home page has the instructions for running it with your favorite project outside of Chromium. If you are working on Chromium, refer to this page for instructions. If you have any questions or suggestions, feel free to contact address-sanitizer@googlegroups.com.

When the Google Chrome Security Team isn’t busy giving prompt attention to finding and fixing bugs, we’re always looking for new security features to add and hardening tweaks to apply. There are some changes worth highlighting in our current and near-future Chromium versions:

Chromium 11: strong random numbers for the web
We added a new JavaScript API, window.crypto.getRandomValues, which gives web pages access to a good source of system entropy. Web pages should never use Math.random for anything sensitive; with this API, sites can now generate cryptographically strong random numbers entirely on the client instead of making a round-trip to the server.
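A minimal sketch of using the API (the helper function and token format are our own): fill a typed array with strong entropy, then encode it however the application needs. The crypto object is passed in so the same function works wherever a WebCrypto-style object is available.

```javascript
// Generate a hex session token from strong entropy, unlike Math.random().
function randomToken(cryptoObj, numBytes) {
  var bytes = new Uint8Array(numBytes);
  cryptoObj.getRandomValues(bytes);  // fills the array with random bytes
  var hex = '';
  for (var i = 0; i < bytes.length; i++) {
    hex += (bytes[i] < 16 ? '0' : '') + bytes[i].toString(16);
  }
  return hex;
}
// In a page: var token = randomToken(window.crypto, 16); // 32 hex chars
```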

Chromium 12: user-specified HSTS preloads and certificate pins
Advanced users can enable stronger security for some web sites by visiting the network internals page: chrome://net-internals/#hsts


You can now force HTTPS for any domain you want, and even “pin” that domain so that only a more trusted subset of CAs is permitted to identify it.

It’s an exciting feature but we’d like to warn that it’s easy to break things! We recommend that only experts experiment with net internals settings.

Chromium 13: blocking HTTP auth for subresource loads
There’s an unfortunate conflict between a browser’s HTTP basic auth dialog, the location bar, and the loading of subresources (such as attacker-provided <img> tag references). It’s possible for a basic auth dialog to pop up for a different origin from the origin shown in the URL bar. Although the basic auth dialog identifies its origin, the user might reasonably look to the URL bar for trust guidance.

To resolve this, we’ve blocked HTTP basic auth for subresource loads where the resource origin is different from the top-level URL bar origin. We also added the command-line switch --allow-cross-origin-auth-prompt in case anyone has legacy applications that require the old behavior.

Chromium 13: Content-Security-Policy support
We added an initial implementation of Content Security Policy, which was first introduced in Firefox 4. You can use the X-WebKit-CSP header to experiment with our implementation. We’re working with Mozilla and others through the W3C to finish the standard. Once that’s done, we’ll remove support for the X-WebKit-CSP header and add support for the final header name. Please feel encouraged to kick the tires and let us know how we can improve this feature!
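For example (the policy value here is only an illustration), a server can opt in to the experiment by sending a response header like:

```
X-WebKit-CSP: default-src 'self'; img-src *
```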

Chromium 13: built-in certificate pinning and HSTS
We’re experimenting with ways to improve the security of HTTPS. One of the sites we’re collaborating with to try new security measures is Gmail.

As of Chromium 13, all connections to Gmail will be over HTTPS. This includes the initial navigation even if the user types “gmail.com” or “mail.google.com” into the URL bar without an https:// prefix, which defends against sslstrip-type attacks.

The same HSTS technology also prevents users from clicking through SSL warnings for things such as a self-signed certificate. These attacks have been seen in the wild, and users have been known to fall for such attacks. Now there’s a mechanism to prevent them from doing so on sensitive domains.

In addition, in Chromium 13, only a very small subset of CAs have the authority to vouch for Gmail (and the Google Accounts login page). This can protect against recent incidents where a CA has its authority abused, and generally protects against the proliferation of signing authority.

Chromium 13: defenses for self-XSS via javascript URLs
Working together with Facebook and other browser vendors, we’re trialing a self-XSS defense that makes it harder for users to shoot themselves in the foot when they are tricked into pasting javascript: URLs into the omnibox.

This is an interesting area because it’s hard to know what detail of instruction it is possible to trick a user into following. It is also hard to measure success until a large percentage of installed browsers have the defense (thus forcing the attackers to adapt their approach).

Still hiring!
We are always looking to expand the Google Chrome Security Team, and we’re looking for a wide range of talents for both Chrome and ChromeOS. We can promise exciting and varied work, working to protect hundreds of millions of users and working alongside the best in the industry. Why not have a look at our job posting?