
Recently a number of articles have discussed the security of browsers' password managers. There are many ways to build a secure password manager, and each browser uses a slightly different approach. In this edition of Security in Depth, we'll look at some of the key security decisions that went into designing the password manager for Google Chrome. As always, we welcome your feedback and suggestions.

Password managers improve security in two ways. First, they let users use more complex, harder-to-guess passwords because the password manager does the work of remembering them. Second, they help protect users from phishing pages (spoof pages that pretend to be from another site) by carefully scrutinizing the web page's URL before revealing the password. The key to the security of a password manager is the algorithm for deciding when to reveal passwords to the current web page. An algorithm that isn't strict enough can reveal users' passwords to compromised or malicious pages. On the other hand, an algorithm that's too strict won't function on some legitimate web sites. This may cause users to use more memorable (and less secure) passwords. Worse, users typically assume the browser is "broken," and become more willing to supply passwords to any page (including harmful ones), since they no longer trust the browser to make correct distinctions. The same side effects are possible if the password manager produces spurious warnings on legitimate sites; this simply trains users to ignore the warnings.

The password manager's algorithm is based on the browser's same-origin policy, which we've touched on before. The password manager supplies a password to a page only if the page is from the same origin (same scheme, host, and port) as the original page that saved the password. For example, this algorithm protects passwords from active network attackers by not revealing passwords saved on HTTPS pages to HTTP pages.
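
As a concrete illustration, here is a minimal sketch of that comparison (the types and helper names are ours for illustration, not Chromium's actual implementation):

```cpp
#include <string>

// Illustrative only. An origin is the (scheme, host, port) triple;
// two origins match only if all three components are equal.
struct Origin {
  std::string scheme;  // e.g. "https"
  std::string host;    // e.g. "www.example.com"
  int port;            // e.g. 443

  bool operator==(const Origin& other) const {
    return scheme == other.scheme && host == other.host &&
           port == other.port;
  }
};

// Fill a saved password only if the page asking for it has exactly the
// same origin as the page on which the password was saved. Note that an
// HTTP page can never match a password saved on an HTTPS page, which is
// what protects users against active network attackers.
bool ShouldFillPassword(const Origin& saved_on, const Origin& current_page) {
  return saved_on == current_page;
}
```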

Because the same-origin policy does not distinguish between different paths, it's tempting to think that we could further improve security by requiring the paths to match as well; for example, passwords saved at https://example.com/login would not be sent to https://example.com/blog. However, this design works poorly with sites where users can log in from several places (like Facebook), as well as sites which store dynamically-generated state in the path. Furthermore, creating this "finer-grained" origin wouldn't actually improve security against compromised sites because other parts of the browser (like the JavaScript engine) still obey the same-origin policy. Imagine that example.com has a cross-site scripting vulnerability that lets an attacker inject malicious content into https://example.com/blog. An attacker would not need users to log in to this page; instead, the attacker could simply inject an <iframe> pointing to https://example.com/login and use JavaScript to read the password from that frame.

Besides checking the page hosting the password field, we can also check where password data is going to be sent when users submit their information. Consider a scenario that occurred a few years ago on a popular social networking site that let users (or in this case, attackers) customize their profile pages. At the time, an attacker could not include JavaScript on his profile page, but could still use malicious HTML — a password field set to send data back to the attacker's web server. When users viewed the attacker's profile, their password managers would automatically fill in their passwords because the profile page was part of the same origin as the site's login page. Lacking JavaScript, the attacker could not read these passwords immediately, but once the users clicked on the page, their data was sent to the attacker's server. Google Chrome defends against this subtle attack by checking the page to which the password data is submitted, once again using the same-origin policy. If this check fails, the password manager will not automatically fill in passwords when the page is loaded. The downside is that this can trip up legitimate web sites that dynamically generate their login URLs. To help users in both cases, the password manager waits for users to type their user names manually before filling in any passwords. At this point, if a page is really malicious, these users have most likely already fallen for the scam and would have proceeded to type in their passwords manually; continuing to refuse to fill in passwords would merely give the impression that the browser is "broken."

A number of other proposals to improve password manager security seem reasonable but don't actually make users more secure. For example, the password manager could refuse to supply passwords to invisible login fields, on the theory that legitimate sites have no need to do this and invisible fields are used only by attackers. Unfortunately, attackers trying to hide password fields from users can make the fields visible but only one pixel tall, or 99% transparent, hidden behind another part of the page, or simply scrolled to a position where users don't normally look. It is impossible for browsers to detect all the various ways password fields can be made difficult to notice, so blocking just one doesn't protect users. Plus, a legitimate site might hide the password field initially (similar to Washington Mutual), and if it does, the password manager wouldn't be able to fill in passwords for this site. 

We've put a lot of thought into the password manager's design and carefully considered how to defend against a number of threats including phishing, cross-site scripting, and HTTPS certificate errors. By using the password manager, you can choose stronger, more complex passwords that are more difficult to remember. When the password manager refuses to automatically fill in your password, you should pause and consider whether you're viewing a spoof web site. We're also keen to improve the compatibility of the password manager. If you're having trouble using the password manager with your favorite site, consider filing a bug.

Since we open-sourced Chromium, we've had patches contributed from all over the world. Once we published guidelines on how contributors can become committers, Paweł Hajdan Jr. was the obvious first choice for nomination.

Paweł is a computer science student at the University of Warsaw, and in his free time he's managed to write a ton of high-quality code towards making Chromium work on non-Windows platforms. Those of us who have worked on committing his near-daily patches are relieved to see that he'll now be able to commit them himself!

Today we announced that with Google Chrome's fifteenth release, we are taking off the "beta" label. We set goals around improving the stability and performance of the browser, and many of you have helped us meet these goals. We are grateful to the many people around the world who have contributed patches to Chromium (we had our first come from Korea after only five hours), the dedicated volunteers who have filed and triaged bugs in our issue tracker, the loyal users who braved the sometimes rocky dev channel releases, and everyone who took the simple step of opting-in to sending usage statistics and crash reports. 

Google Chrome wouldn't be where it is today without your help. Of course, there's still a ton more to do, so keep the feedback, patches, bug reports, and moral support coming!

The foundation of the browser's security model is the same-origin policy, which protects web sites from one another. For example, the same-origin policy stops a news site from reading the contents of your Gmail inbox (even if you open both web sites at the same time). But what if a web page comes from your local file system rather than from the Internet? Consider the following hypothetical attack if your browser did not limit the power of local pages:

  1. You receive an email message from an attacker containing a web page as an attachment, which you download.
  2. You open the now-local web page in your browser.
  3. The local web page creates an <iframe> whose source is https://mail.google.com/mail/.
  4. Because you are logged in to Gmail, the frame loads the messages in your inbox.
  5. The local web page reads the contents of the frame by using JavaScript to access frames[0].document.documentElement.innerHTML. (An Internet web page would not be able to perform this step because it would come from a non-Gmail origin; the same-origin policy would cause the read to fail.)
  6. The local web page places the contents of your inbox into a <textarea> and submits the data via a form POST to the attacker's web server. Now the attacker has your inbox, which may be useful for spamming or identity theft.

There is nothing Gmail can do to defend itself from this attack. Accordingly, browsers prevent it by making various steps in the above scenario difficult or impossible. To design the best security policy for Google Chrome, we examined the security policies of a number of popular web browsers.

  • Safari 3.2. Local web pages in Safari 3.2 are powerful because they can read the contents of any web site (step 5 above succeeds). Safari protects its users by making it difficult for a web page from the Internet to navigate the browser to a local file (step 2 becomes harder). For example, if you click a hyperlink to a local file, Safari won't render the local web page. You have to manually type the file's URL into the location bar or otherwise open the file.
  • Internet Explorer 7. Like Safari 3.2, Internet Explorer 7 lets local web pages read arbitrary web sites, and stops web sites from providing hyperlinks to local files. Internet Explorer further mitigates local-file based attacks by stopping local web pages from running JavaScript by default (causing step 5 to fail). Internet Explorer lets users override this restriction by providing a yellow "infobar" that re-enables JavaScript.
  • Opera 9.6. Instead of letting local web pages read every web site, Opera 9.6 limits local web pages to reading pages in the local file system (step 5 fails because the <iframe>'s source is non-local). This policy mitigates the most serious attacks, but letting local web pages read local data can still be dangerous if your file system itself contains sensitive information. For example, if you prepare your tax return using your computer, your file system might contain last year's tax return. An attacker could use an attack like the one above to obtain this data.
  • Firefox 3. Like Opera, Firefox 3 blocks local web pages from reading Internet pages. Firefox further restricts a local web page to reading only files in the same directory, or a subdirectory. If you view a local web page stored in, say, a "Downloaded Files" directory, it won't be able to read files in "My Documents." Unfortunately, if the local web page is itself located in "My Documents", the page will be able to read your (possibly sensitive) documents.

When we designed Google Chrome's security policy for local web pages, we chose a design similar to that used by Opera. Google Chrome prevents users from clicking Internet-based hyperlinks to local web pages and blocks local web pages from reading the contents of arbitrary web sites. We chose not to disable JavaScript with an "infobar" override (like Internet Explorer) because most users do not understand the security implications of re-enabling JavaScript and simply re-enable it to make pages work correctly; even those that do understand frequently wish to override the warning, e.g. to develop web pages locally.

There is more to the security of local web pages than simply picking an access policy. A sizable number of users have more than one browser installed on their machine. In protecting our users, we also consider "blended" threats that involve more than one browser. For example, you might download a web page in Google Chrome and later open that page in Internet Explorer. To help secure this case, we attach the "mark of the web" to downloaded web pages. Internet Explorer then treats these pages as if they were on an "unknown" Internet site, which means they can run JavaScript but cannot access the local file system or pages on other Internet sites.

Other blended threats are also possible. Consider a user who uses Google Chrome but has Safari set as his or her default browser. When the user downloads a web page with Google Chrome, the page appears in the download tray at the bottom of the browser window. If the user clicks on the downloaded file, Google Chrome will launch the user's default browser (in this case Safari), and Safari will let the page read any web page, bypassing Safari's protections against step 2 in our hypothetical attack. Although this scenario is also possible in other browsers, downloading a file in those browsers requires more steps, making the vector less appealing to attackers. To mitigate this threat, we recently changed Google Chrome to require the user's confirmation when downloading web pages, just as we do for executable files.

In the future, we hope to further restrict the privileges of local web pages. We are considering several proposals, including implementing directory-based restrictions (similar to Firefox 3), or preventing local web pages from sending sensitive information back to Internet sites (blocking step 6 above), as proposed by Maciej Stachowiak of the WebKit project. Ultimately, we'd like to see all the browser vendors converge on a uniform, secure policy for local web pages.

Historically, testing hasn't gotten much respect in the world of software development.  As the old saying goes, "It compiles! Ship it!" Only a joke — but like most jokes, it hides a grain of truth.

Not so for the Chromium project. Our philosophy is to test everything we possibly can, in as many ways as we can think of.

Test drive: why test?

It's easy to find arguments against testing. Writing tests takes time that developers could be using to write features, and keeping the test hardware and software infrastructure running smoothly isn't trivial.  (I'm one of the people largely responsible for the latter for Chromium, along with Nicolas Sylvain, so I know how time-consuming it can be.)  But in the long run, it's a big win, for at least two reasons.

A well-established set of tests that developers are expected to run before sending changes in makes it a lot easier to avoid causing problems, which lets other developers stay productive rather than chasing down regressions.  And testing submitted changes promptly keeps the code building cleanly and minimizes trouble in the longer term.

But even more importantly, an extensive set of automated tests gives us more confidence that Chromium is reliable, stable, and correct.  We're not afraid to rewrite major portions of the code, because verifying correctness afterward is easier. And we have the flexibility to iterate faster and produce releases more often, because we don't need a 6-month QA cycle before each one.

The test of time: performance testing

We run a lot of different tests. Tests of security. Tests of UI functionality. Tests of startup time, page-load speed, DOM manipulation, memory usage.  Tests for memory errors using Rational Purify. WebKit's suite of layout tests. Hundreds of unit tests to make sure that individual methods are still doing what they should. At last count, we run more than 9100 individual tests, typically 30-40 times every weekday.[1] You can find the full list in the developer documentation, but I'll talk more about one broad category here: performance testing.

With every change made in the tree, we keep track of Chromium's page-load time, memory usage, startup time, the time to open a new tab or switch to one, and more.  All these data points are available in graphs like this one:

[Graph: startup time by build; gold = tip-of-tree, blue = startup loading gears.dll, green = reference build]

Here the top, gold trace shows the startup time on XP for the tip-of-tree build; the green, bottom trace shows the startup time for a reference build so we can discount variation in the test conditions; and the blue, middle trace shows the startup time along a different code path that includes loading gears.dll. The light blue horizontal line is a reference marker. As you can see, whatever changed between the previous build and r3693, it increased the startup time (gold trace) by more than 8%. The developer responsible was able to see that and fix the problem a few builds later.

This graph also shows the usefulness of running a reference build. The spike in startup time that lasted only a single build also shows up in the reference-build time (the green trace). We can assume that it was something temporarily affecting the build machine, rather than a code change. (The problem must have cleared up by the time the Gears startup test ran.)

With so many performance graphs, it can be hard to watch them all, so there's also a summary page.

One final note about Chromium's performance graphs: they're written in HTML and JavaScript, and we're looking for someone to make them easier to use.  If you're interested, grab the code and start hacking!

Test bed: the Chromium buildbot

Nearly all of this testing is controlled by Chromium's buildbot, which automates the build/test cycle.  Every time a change is submitted, the buildbot master builds the tree, runs the tests on all the different platforms, and displays the results.  For a complete guide to the buildbot and its "waterfall" result page, see the Tour of the Chromium Buildbot in the developer docs.

Pro-test

Of course, once you have lots of tests running, the second important aspect of good tree hygiene is to keep them all passing.  But that's a subject for another post.

[1] It's hard to put a single number on it, because certain tests only apply to some parts of the code.  But however you count it, it's a lot of tests.

After the beta launch in early September, from the first wave of feedback, we realized that a large number of users were facing plugin compatibility issues in Google Chrome. These included Adobe Flash videos not playing, as well as various browser performance issues with Adobe Flash and Adobe PDF document loading. There was even an issue where the browser consumed 100% CPU when users interacted with plugins.

This is exactly the kind of feedback we were expecting from a beta launch. We have invested a lot of effort into automating compatibility testing for a large number of web pages, but there is nothing like actual user feedback. We are impressed by the user response to the beta and the quality of the bug reports filed. There's nothing more motivating than a lot of users waiting for your work. :)

One of the big issues was support for PDF Fast WebView, which lets a web server byte-serve a PDF document: the client requests only the specific byte ranges it needs, skipping pages that are not being viewed. This is supported generically by the seekable-streams specification in NPAPI, which Google Chrome now implements. This should improve performance with large PDF files, or any other content served using Fast WebView.
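
For readers unfamiliar with seekable streams, here is a rough sketch of how a plugin asks the browser for specific byte ranges through NPAPI (illustrative offsets; error handling omitted):

```cpp
#include "npapi.h"  // NPAPI headers from the plugin SDK

// Sketch: request two byte ranges from a stream the plugin opened in
// NP_SEEK mode (i.e., it returned NP_SEEK as the stream type from
// NPP_NewStream). The browser delivers the requested ranges through
// NPP_WriteReady/NPP_Write and, for servers that support Fast WebView,
// fetches them with HTTP Range requests instead of the whole file.
void RequestPdfByteRanges(NPStream* stream) {
  NPByteRange second;
  second.offset = 65536;  // a 16 KB slice starting at 64 KB
  second.length = 16384;
  second.next = NULL;

  NPByteRange first;
  first.offset = 0;       // the first 8 KB (e.g., header and xref data)
  first.length = 8192;
  first.next = &second;   // ranges are passed as a linked list

  NPN_RequestRead(stream, &first);
}
```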

We had a lot of fun fixing other issues too, and here are the stories behind a couple of them. 
YouTube videos stop after six seek attempts:

We received several reports of YouTube videos failing to play. Many reports indicated that this symptom had something to do with using the slider while playing the video. However, we didn't have a reliable scenario to reproduce this in-house.

Darin Fisher observed that if you moved the slider many times, the video stopped playing. In fact, he found that if the slider was moved exactly six times, the video would stop. This was interesting, because Google Chrome uses a maximum of six HTTP connections per host.

A quick check of the 'I/O Status' in about:network revealed that all connections were active. The question then became why the existing connections weren't canceled when the slider was moved.

Darin found that the Flash plugin would return an error when we supplied it data after the slider was moved. In that case the browser is supposed to cancel the connection; doing so fixed the bug.

Google Finance chart dragging:

This report was very interesting because it occurred only on single-core machines. Of course, in the end we found that there was no direct connection between the root cause and single-core machines. In Google Chrome, plugin windows live in a separate plugin process, so a plugin has little or no influence on the browser thread. Or so we thought.

After some inspection, we found that when a plugin is receiving mouse input, the browser's main thread spins at 100% CPU for some time. The twist to the story is that since a plugin window is a child of the browser window, the thread inputs of the browser and the plugin are attached.

We started looking at the browser message loop more closely. Soon we discovered that the MsgWaitForMultipleObjects and PeekMessage APIs behave strangely when thread inputs are attached. The MsgWaitForMultipleObjects API is typically used to block until an event or a Windows message, such as an input event, is available for processing. In this case, we found that it was returning an indication that an input event was available for processing, while PeekMessage indicated that no event was available.

This behavior occurs because, with thread inputs attached, GetQueueStatus (called internally by MsgWaitForMultipleObjects) reported that input was available on the browser thread when in reality that input was intended for the plugin. This prevented MsgWaitForMultipleObjects from performing its intended function of waiting, and caused the browser thread to spin.
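
To make the failure mode concrete, here is a simplified sketch of the waiting pattern involved (illustrative code, not Chromium's actual message loop):

```cpp
#include <windows.h>

// Simplified sketch of the wait/dispatch pattern described above. With
// AttachThreadInput in effect, MsgWaitForMultipleObjects can report that
// input is available (because input is queued for the *plugin's* thread),
// yet PeekMessage on this thread finds nothing, so the loop spins at
// 100% CPU instead of blocking.
void WaitForWorkOrMessage(HANDLE wake_event) {
  for (;;) {
    DWORD result = MsgWaitForMultipleObjects(
        1, &wake_event, FALSE, INFINITE, QS_ALLINPUT);
    if (result == WAIT_OBJECT_0) {
      return;  // Our event was signaled; go do the pending work.
    }
    // result == WAIT_OBJECT_0 + 1: a message is supposedly available.
    MSG msg;
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
      TranslateMessage(&msg);
      DispatchMessage(&msg);
    }
    // If PeekMessage found nothing, we fall straight back into
    // MsgWaitForMultipleObjects, which returns immediately again.
  }
}
```
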
These are just a few examples of bugfixes we've made to Google Chrome to address performance issues related to plugins. We continue to look closely at the performance of Google Chrome, both as a whole and in relation to interaction with plugins, to make sure that users are getting the best browsing experience that we can deliver.

Google Chrome uses a library called Skia, which is also the graphics engine behind Google's Android mobile OS. The two projects share code that implements WebKit's porting API in terms of Skia. Google Chrome also uses Skia to render parts of the user interface such as the toolbar and tab strip. I'm going to talk about some of the history that led us to choose Skia, as well as how our graphics layer works. 

WebKit is designed to work on multiple operating systems. It abstracts platform-specific functions into the "port," which an embedder application such as Google Chrome implements specifically for their system. This relatively clean abstraction has helped WebKit to be adopted on a wide variety of devices and systems. One of the parts of the port we had to consider when developing Google Chrome was the graphics layer, which is responsible for rendering text, images, and other graphics to the screen.

Which graphics library?

One question that people often ask is, why not use OpenGL or DirectX for accelerated rendering? First, on Windows, we use a sandbox that prevents us from displaying windows from our renderer processes. The image data must be transferred to the main browser process before it can be drawn to the screen, which limits the possible approaches we can take. If image data needs to be read off the video card only to be copied back to the video card in another process, it is usually not worthwhile to use accelerated rendering in the first place.

Second, drawing graphics is actually a very small percentage of the time we spend rendering a page. Most of the time is spent in WebKit computing where things will be placed, what styles to apply to them, and using system routines to draw text. Accelerated 3D graphics would not give us enough overall improvement in speed to balance out the extra work and compatibility problems that we would encounter.

If we aren't going to be using OpenGL or DirectX, what about other graphics libraries? We considered a number of options when we first started work on our Windows port of WebKit:
  • Windows GDI: GDI is the basic, low-level graphics API in Microsoft Windows. It is used to draw buttons, window controls, and dialog boxes for every Windows application, so we know that it's tested and works well. However, it has relatively basic capabilities. Although most web pages can be drawn using only these basic primitives, parts of <canvas> or SVG would need to be implemented separately, either using a different graphics library, or our own custom code.
  • GDI+: GDI+ is a more advanced graphics API provided on newer versions of Windows. Its API is cleaner and it supports most 2D graphics operations you could think to use. However, we had concerns about GDI+ using device independent metrics, which means that text and letter spacing might look different in Google Chrome than in other Windows applications (which measure and draw text tailored to the screen device). Additionally, at the time we were making the decision, Microsoft was recommending developers use newer graphics APIs in Windows, so we weren't sure how much longer GDI+ would be supported and maintained.
  • Cairo: Cairo is an open-source 2D graphics library. It is used successfully in Firefox 3, and the Windows port of WebKit at that time already had a partially complete Cairo-based graphics implementation. Cairo is also cross-platform, a key advantage over GDI and GDI+ when building a cross-platform browser.
We ended up choosing Skia over these options because it is cross-platform (meaning our work wouldn't have to be duplicated when porting to other systems), because there was already a high-quality WebKit port using it created for Android's browser, and because we had in-house expertise. The latter point is critical because we expected to (and did) need additional features added to the graphics library as well as some bugs fixed.

So far, we've been very happy with our choice. Skia has proved to be effective at handling all the graphics operations we've needed, has been fast enough despite being software-only, and we've gotten great support from the Skia team. Thanks!

System-specific features

Android has the advantage of controlling the entire operating system graphics layer. Skia's font layer implements all text rendering for the Android system, so all text looks consistent. However, we wanted to match the host OS's look and feel. This means using native text rendering routines so that, for example, we can get ClearType on Windows.

To solve this problem, we create a wrapper around Skia's SkDevice (an object representing a low-level drawing surface) which we call PlatformDevice. The object is both a bitmap in main memory that Skia can draw into, and a "Device Independent Bitmap" that the Windows GDI layer can draw into. Lines, images, and patterns are all drawn by Skia into this bitmap, while text is drawn directly by Windows. As part of our porting efforts, we are currently working on creating similar abstractions for OS X and Linux.
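
To give a flavor of how one pixel buffer can serve both libraries, here is a hedged sketch using a GDI "DIB section" (an illustrative helper with error handling omitted; the real PlatformDevice code lives in the Chromium tree):

```cpp
#include <windows.h>

// Allocate a 32-bit top-down DIB section. GDI can select the HBITMAP
// into a memory DC and draw ClearType text into it, while Skia can wrap
// the very same pixel memory as one of its bitmaps and draw lines,
// images, and patterns.
void* CreateSharedBackingStore(int width, int height, HBITMAP* out_bitmap) {
  BITMAPINFO info = {};
  info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
  info.bmiHeader.biWidth = width;
  info.bmiHeader.biHeight = -height;  // negative height = top-down rows
  info.bmiHeader.biPlanes = 1;
  info.bmiHeader.biBitCount = 32;     // 8 bits per channel
  info.bmiHeader.biCompression = BI_RGB;

  void* pixels = NULL;
  *out_bitmap = CreateDIBSection(NULL, &info, DIB_RGB_COLORS,
                                 &pixels, NULL, 0);
  // 'pixels' now backs both the HBITMAP (for GDI text) and whatever
  // bitmap object the graphics library wraps around it (for Skia).
  return pixels;
}
```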

In user-experience lingo, 'chrome' refers to the frame of an application - the toolbars, titlebars and buttons that surround your primary content. In Google Chrome, we strove to eliminate as much of this as possible - not just because it leads to a simpler, cleaner design, but because we felt that your web applications should not appear to be constrained within the bulky cruft of a browser - they should feel like first-class applications on your desktop. 

This notion of "content, not chrome" was the mostly-quiet, sometimes-loud guiding principle behind our design; in combination with our tab-dragging work, it led us to think of Google Chrome as a lightweight, tabbed window manager for the web. You may have noticed, for example, that Google Chrome doesn't use the traditional "browser titlebar - navigation toolbar - tabstrip" layout common in browsers today; in the Google Chrome world, we think of tabs as the equivalent of titlebars for web pages: they deserve top-level placement and prominence, and should be the container for everything related to them - title, toolbar and content.

To achieve the streamlined feel we were after, we knew we would have to cut some things, and while we had our own intuitions about what was and wasn't useful in current browsers, we had no idea how those ideas matched to reality. So in typical Google fashion, we turned to data; we ran long studies of the browsing habits of thousands of volunteers, compiled giant charts of what features people did and didn't use, argued over and incorporated that data into our designs and prototypes, ran experiments, watched how our test users reacted, listened to their feedback, and then repeated the cycle over and over and over again.

Even the more subtle parts of our first-level UI were subjected to similarly intense scrutiny - "what shade of blue best suits XP users?", "should the tabs start 18 or 19 pixels below the top of the window?", "what's the correct offset between our buttons?". The answers to these questions were debated and tested for our entire development cycle, and we saw that opinions consistently differed greatly depending on whether we had been Windows 3.1, OS7 or even NeXT users and developers.

We realize that browser UI is controversial, and that despite our data-driven approach, much of it remains subjective, so we've documented many of the major UI decisions and thought processes behind Google Chrome on our UX site. We encourage you to read about our work, challenge our assumptions, and let us know how you think things could be improved.

In my last post, I wrote about how we handle I/O in the browser process to keep the main thread of Google Chrome free from hiccups. This time, I'll write about how we keep our sub-processes from interfering with the main ("browser") process.

As you may recall, Google Chrome is a multi-process application, with HTML rendering happening in separate processes we call the "renderers," and plugins running in separate "plugin" processes. Our priority is to always keep the browser process, and especially its main thread, running as smoothly as possible. If a plugin or renderer is interfering with the browser process, the user's interaction with all other tabs and plugins, as well as all the other features of Google Chrome, would also be interrupted. The user might even be prevented from terminating the offending sub-process, negating a key benefit of our multi-process architecture.

The first and most obvious approach is never to block while waiting for information from a renderer process, in case that renderer happens to be busy or hung. Although the renderers may sometimes synchronously wait on the browser for some requests, there is deliberately no easy way for the browser process to wait on a renderer. Unfortunately, on Windows, this doesn't cover all cases.

The basic primitive of Microsoft Windows is the "window," which is much more general than just a top-level window with a title bar. Buttons, toolbars, and text controls are usually expressed as sub-windows of a floating top-level window. Windows in this hierarchy are not restricted to a single process, and early versions of Google Chrome used this feature to implement our cross-process rendering architecture. Each tab contained a sub-window owned by a renderer process. The renderer received input and painted into its child window just like any other. The browser and renderer processes each ran their own message processing for things like painting.

A problem arises for some types of Windows messages. The system will synchronously send them to all windows in a hierarchy, waiting for each window to process the message before sending it to parent or child windows. This introduces an implicit wait in the browser process on the renderer processes. If a renderer is hung and not responding to messages, the browser process will also hang as soon as one of these special messages is received. To solve this problem, we no longer allow the renderers to create any windows. Instead, the renderer paints the web page into an off-screen bitmap and sends it asynchronously to the browser process where it is copied to the screen.

Once we made this change, everything ran great. That is, until we implemented plugins. The NPAPI plugin standard that Google Chrome implements allows plugins to create sub-windows, and for compatibility, we can't avoid it. Sometimes a plugin may hang, or more commonly, block waiting on disk I/O. All the hard work we did to insulate the user interface from I/O latency is occasionally undone by our plugin architecture through this long chain of dependencies. To mitigate this problem, we periodically check plugins for responsiveness. If a plugin is unresponsive for too long, we know that the user-interface of Google Chrome might also be affected, and the user might not even be able to close the page that is hosting the plugin. To allow the user to regain control of the browser, we pop up a dialog that offers to terminate the plugin.

If you are doing something that saturates your hard drive (such as compiling Google Chrome), now you know one of the reasons why the interface may occasionally hang and give the "hung plugin" dialog box. Sometimes you may not even realize that a page has loaded plugins when you get this message. You can terminate the plugin immediately, but most of the time it also works to just wait longer for the plugin's I/O to complete.

One of our early goals for Google Chrome was to make the browser as fast as we possibly could. But in addition to raw speed, we wanted it to be highly responsive. After all, it doesn't matter how fast pages can be loaded if the user interface is locked up!

To understand our holistic approach to performance in Google Chrome, it helps to know some background. Processing speed per dollar has rapidly increased over the last 40 years, but hard drives, which are based on moving parts, do not improve nearly as fast. As a result, a modern processor can execute millions of instructions in the same time that it takes to read just one byte off disk. We knew that building a fast, responsive browser for modern systems would require extra attention to disk I/O usage.

Developers are ideal testers for I/O performance since the load of compiling a very large application like Google Chrome will bog down even the most powerful system. This soon led us to a rule that the main thread of Google Chrome—the thread that runs the entire user interface—is not allowed to do any I/O. After fixing the obvious cases, we ran a program that thrashes the disk while we profiled common operations in Google Chrome to find latent I/O hotspots. We even ran a test where we removed the privileges of the main thread to read or write to disk, and made sure that nothing stopped working.

We moved the I/O onto a number of background threads which allow the user-interface to proceed asynchronously. We did this for large data sources like cookies, bookmarks, and the cache, and also for a myriad of smaller things. Writing a downloaded file to disk, or getting the icons for files in the download manager? The disk operations are being called from a special background thread. Indexing the contents of pages in history or saving a login password? All from background threads. Even the "Save As" dialog box is run from another thread because it can momentarily hang the application while it populates.
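
The pattern itself is simple, even if the bookkeeping adds up. Here is a generic sketch of the idea using standard C++ threading (an illustration only, not Chromium's actual thread and task classes):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A minimal background I/O thread: the UI thread enqueues closures
// (e.g. "write this download to disk") and returns immediately; the
// worker thread pops and runs them, so the UI never waits on the disk.
class IOThread {
 public:
  IOThread() : done_(false), worker_(&IOThread::Run, this) {}

  ~IOThread() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      done_ = true;
    }
    wakeup_.notify_one();
    worker_.join();
  }

  // Called from the UI thread; never blocks on I/O.
  void PostTask(std::function<void()> task) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      tasks_.push(std::move(task));
    }
    wakeup_.notify_one();
  }

 private:
  void Run() {
    for (;;) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        wakeup_.wait(lock, [this] { return done_ || !tasks_.empty(); });
        if (done_ && tasks_.empty()) return;
        task = std::move(tasks_.front());
        tasks_.pop();
      }
      task();  // The actual disk read or write happens here.
    }
  }

  std::mutex mutex_;
  std::condition_variable wakeup_;
  std::queue<std::function<void()>> tasks_;
  bool done_;
  std::thread worker_;  // Declared last so it starts after the members above.
};
```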

Startup poses a different type of problem. If all the subsystems simultaneously requested their data on startup, even if it was from different threads, the requests would quickly overwhelm the disk. As a result, we delay loading as much data as possible for as long as possible, so the most important work can get done first.

Our startup sequence works like this: First chrome.exe and chrome.dll are loaded. Then the preferences file is loaded (it may affect how things proceed). Then we immediately display the main window. The user now can interact with the UI and feels like Google Chrome has loaded, even though there has been remarkably little work done. Immediately after showing the window, we create the sub-process for rendering the first tab. Only once this process has loaded do subsystems like bookmarks proceed, since any I/O contention would slow down the display of that first tab (this is why you may see things like your bookmarks appear after a slight delay). The cache, cookies, and the Windows networking libraries are not loaded until even later when the first network request is issued.

We carefully monitor startup performance using an automated test that runs for almost every change to the code. This test was created very early in the project, when Google Chrome did almost nothing, and we have always followed a very simple rule: this test can never get any slower. Because it's much easier to address performance problems as they are created than fixing them later, we are quick to fix or revert any regressions. As a result, our very large application starts as fast today as the very lightweight application we started out with.

Our work on I/O performance sometimes complicates the code because so many operations have to be designed to be asynchronous: managing requests on different threads with data that comes in at different times is a lot of extra work. But we think it has been well worth it to help us achieve our goal of making Google Chrome not only the fastest, but the most responsive browser we could build.

Building a secure browser is a top priority for the Chromium team; it's why we spend a lot of time and effort keeping our code secure. But as you can imagine, code perfection is almost impossible to achieve for a project of this size and complexity. To make things worse, a browser spends most of its time handling and executing untrusted and potentially malicious input data. In the event that something goes wrong, the team has developed a sandbox to help thwart any exploit in two of the most popular vectors of attack against browsers: HTML rendering and JavaScript execution.

In a nutshell, a sandbox is a security mechanism used to run an application in a restricted environment. If an attacker is able to exploit the browser in a way that lets him run arbitrary code on the machine, the sandbox helps prevent this code from causing damage to the system. The sandbox also helps prevent this exploit from modifying or even reading your files or any other information on the system.

We are very excited to be able to launch Google Chrome with the sandbox enabled on all the platforms we currently support. Even though the sandbox in Google Chrome uses some of the new security features on Windows Vista, it is fully compatible with Windows XP.

What part of Chromium is sandboxed?

Google Chrome's multi-process architecture allows for a lot of flexibility in the way we do security. All HTML rendering and JavaScript execution is isolated to its own class of processes: the renderers. These are the ones that live in the sandbox. We expect to work with plug-in vendors in the near future to securely sandbox plug-ins as well.

How does the sandbox work?

The sandbox uses the security features of Windows extensively; it does not reinvent any security model.

To understand how it works, one needs a basic understanding of the Windows security model. In this model, every process has an access token. This access token is like an ID card: it contains information about the owner of the process, the list of groups that it belongs to, and a list of privileges. Each process has its own token, and the system uses it to grant or deny access to resources.

These resources are called securable objects. They are securable because they are associated with a security descriptor, which holds the security settings of the object: an access control list spelling out which users and groups have access to the resource, and what kind of access they have (read, write, execute, etc.). Files, registry keys, mutexes, pipes, events, and semaphores are examples of securable objects.

The access check is the mechanism by which the system determines whether the security descriptor of an object grants the rights requested to an access token. It is performed every time a process tries to acquire a securable object.

The process access token is almost entirely customizable. It's possible to remove privileges and disable some groups. This is exactly what the sandbox does. 

Before launching the renderer process we modify its token to remove all privileges and disable all groups. We then convert the token to a restricted token. A restricted token is like a normal token, but the access checks are performed twice, the first time with the normal information in the token, and the second one using a secondary list of groups. Both access checks have to succeed for the resources to be granted to the process. Google Chrome sets the secondary list of groups to contain only one item, the NULL user. Since this user is never given permissions to any objects, all access checks performed with the access token of the renderer process fail, making this process useless to an attacker.
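
For the curious, the token manipulation looks roughly like this in Win32 terms (a simplified sketch with error handling omitted; the production sandbox code is considerably more thorough):

```cpp
#include <windows.h>

// Build a renderer token restricted to the NULL SID, which is never
// granted access to anything, so every access check will fail.
HANDLE CreateNullRestrictedToken() {
  HANDLE process_token = NULL;
  OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &process_token);

  // The secondary ("restricting") SID list contains only the NULL SID.
  BYTE sid_buffer[SECURITY_MAX_SID_SIZE];
  DWORD sid_size = sizeof(sid_buffer);
  CreateWellKnownSid(WinNullSid, NULL, sid_buffer, &sid_size);

  SID_AND_ATTRIBUTES restricting_sids[1] = {};
  restricting_sids[0].Sid = sid_buffer;

  // DISABLE_MAX_PRIVILEGE strips every privilege except SeChangeNotify.
  // (The real sandbox also disables the token's groups at this point.)
  HANDLE restricted_token = NULL;
  CreateRestrictedToken(process_token,
                        DISABLE_MAX_PRIVILEGE,
                        0, NULL,            // SIDs to disable
                        0, NULL,            // privileges to delete
                        1, restricting_sids,
                        &restricted_token);
  CloseHandle(process_token);
  return restricted_token;  // Used when launching the renderer process.
}
```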

Of course, not all resources on Windows follow this security model. The keyboard, the mouse, the screen, and some user objects, like cursors, icons, and windows, are examples of resources that don't have security descriptors; no access check is performed when trying to access them. To prevent the renderer from accessing those, the sandbox uses a combination of job objects and alternate desktops. A job object applies restrictions to a group of processes. Among other things, we forbid the renderer process from accessing windows created outside the job, reading or writing the clipboard, and exiting Windows. We also use an alternate desktop to prevent the renderer from seeing the screen (screen scraping) or eavesdropping on the keyboard and mouse (keylogging). Alternate desktops are commonly used for security; for example, the Windows login screen runs on a separate desktop, which ensures that your password can't be stolen by applications running on your normal desktop.
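
In Win32 terms, those two mechanisms look roughly like this (again an illustrative sketch, with error handling omitted):

```cpp
#include <windows.h>

// Apply UI restrictions to the renderer via a job object, and create an
// alternate desktop to isolate it from the user's real screen and input.
void RestrictRendererUi(HANDLE renderer_process) {
  HANDLE job = CreateJobObject(NULL, NULL);

  JOBOBJECT_BASIC_UI_RESTRICTIONS ui = {};
  ui.UIRestrictionsClass =
      JOB_OBJECT_UILIMIT_HANDLES |          // no USER handles outside the job
      JOB_OBJECT_UILIMIT_READCLIPBOARD |    // no reading the clipboard
      JOB_OBJECT_UILIMIT_WRITECLIPBOARD |   // no writing the clipboard
      JOB_OBJECT_UILIMIT_EXITWINDOWS |      // no logging the user off
      JOB_OBJECT_UILIMIT_DISPLAYSETTINGS |  // no changing display settings
      JOB_OBJECT_UILIMIT_SYSTEMPARAMETERS;  // no SystemParametersInfo
  SetInformationJobObject(job, JobObjectBasicUIRestrictions, &ui, sizeof(ui));
  AssignProcessToJobObject(job, renderer_process);

  // The alternate desktop keeps the renderer from reading the screen or
  // snooping keyboard and mouse input on the user's real desktop. (In
  // practice it must be assigned before the renderer's first thread runs.)
  CreateDesktopW(L"sandbox_desktop", NULL, NULL, 0,
                 DESKTOP_CREATEWINDOW, NULL);
}
```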

What are the limitations?

As we said earlier, the sandbox itself is not a new security model; it relies on Windows to achieve its security. Therefore, it is impossible for us to protect against a flaw in the OS security model itself. In addition, some legacy file systems, like FAT32, used on certain computers and USB keys, don't support security descriptors. Files on these devices can't be protected by the sandbox. Finally, some third-party vendors mistakenly configure files, registry keys, and other objects in a way that bypasses the access check, giving everyone on the machine full access to them. Unfortunately, it's impossible for the sandbox to protect most of these misconfigured resources.

To conclude, it is important to mention that this sandbox was designed to be generic. It is not tied to Google Chrome, and it can easily be used by any other project with a compatible multi-process architecture. You can find more information about the sandbox in the design doc, and we will post here again about the details of our token manipulation and our policy framework for configuring the sandbox's security level.

We hope you will feel safe browsing the web using Google Chrome, and we are looking forward to your feedback and code contributions!

A number of people have asked about the relationship between Google Chrome, Chromium, and Google, specifically with regard to what data is sent to Google or other providers. This post is meant to provide a complete answer to that question, and as you will see below, almost all such communication can be disabled within the options of the product itself. Before getting too deep into the question, though, it is helpful to have a common set of terminology. Chromium is the name we have given to the open source project and the browser source code that we released and maintain at www.chromium.org. One can compile this source code to get a fully working browser. Google takes this source code and adds the Google name and logo, an auto-updater system called GoogleUpdate, and RLZ (described later in this post), and calls this Google Chrome. As such, everything that applies to Chromium below also applies to Google Chrome, while some things apply to Google Chrome (such as the auto-updater) that do not apply to Chromium.

Communications between Chromium (and Google Chrome) and service providers

Search Suggest: If you type a few letters into the address bar and pause, Google Chrome will send the letters you have typed to your default search provider so it can return a list of suggestions. These suggestions will appear in the drop-down that appears below the address bar.  The letters you have typed are only sent if your search provider provides a suggest service. As an example, suppose your search provider is Google and you're located in the United States. If you type "presid" and pause, you might get suggestions like "Search Google for Presidential Polls", "Search Google for Presidents", as well as a suggested website (a page listing the Presidents of the United States on www.whitehouse.gov). Your provider for search suggestions may log these suggest queries. In the case of Google, we log 2% of these suggest queries, and anonymize these logs within approximately 24 hours as described in an earlier blog post.

If you choose to accept a search query suggestion, that query will be sent back to your search provider to return a results page. If you choose to accept a suggested website, the accepted suggestion is not sent back to your search provider unless you've opted-in to stats collection. If you have, Google may collect the suggestion you accepted along with the letters you had typed so far in order to improve the suggest service. If you are in a different part of the world, or are using a different search provider, you may get different suggestions. 

If you do not wish this data to be sent to your search provider, you have a number of options. The first is to use incognito mode, in which the suggest feature is automatically disabled. You may still get suggestions from your local history stored on your computer, but no suggest queries are sent to your search provider. You can also turn off search suggestions permanently. Finally, you can change your search provider. If your new search provider supports suggest functionality (such as Yahoo!), suggest queries will be sent to this new provider. If it does not support suggest functionality then suggest queries will not be sent.

Safe Browsing: Safe Browsing is a feature designed to help protect users from phishing and malware. The way Safe Browsing works is that roughly every half hour, an updated list of suspected phishing and malware websites is downloaded from Google and stored locally on your computer. As you browse the web, URLs are checked against these locally stored lists. If a match against the list is found, a request to Google is sent for more information. This request is formed by taking a 256-bit hash of the URL and sending only the first 32 bits of that hash. To be clear, requests are not sent for each page you visit, and we never send a URL in plain text to Google. More information on how this feature works is available in the Google Chrome help center. This feature can also be disabled, although disabling this feature means that you will not be warned before you visit a suspected phishing website, or a website suspected of downloading and installing malware onto your computer.
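
As a hedged sketch, the prefix computation looks something like this; the Sha256 helper is assumed to come from a crypto library and is only declared here:

```cpp
#include <cstdint>
#include <string>

// Assumed helper: returns the 32 raw bytes of a standard SHA-256 digest.
// Any crypto library provides an equivalent; it is not defined here.
std::string Sha256(const std::string& data);

// Hash the (canonicalized) URL and keep only the first 32 bits. When the
// local list signals a possible match, this prefix -- never the URL
// itself -- is what gets sent to the server.
uint32_t SafeBrowsingPrefix(const std::string& canonical_url) {
  const std::string digest = Sha256(canonical_url);
  return (static_cast<uint32_t>(static_cast<uint8_t>(digest[0])) << 24) |
         (static_cast<uint32_t>(static_cast<uint8_t>(digest[1])) << 16) |
         (static_cast<uint32_t>(static_cast<uint8_t>(digest[2])) << 8) |
          static_cast<uint32_t>(static_cast<uint8_t>(digest[3]));
}
```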

Suggestions for Navigation Errors: By default, Chromium (and Google Chrome) offer smarter error messages when you encounter unavailable websites. These error messages automatically generate suggestions for webpages with web addresses similar to the one you're trying to access. This feature involves sending the URL that failed to Google, to obtain suggested alternatives. This feature can be disabled in the options dialog.

Which Google Domain: Shortly after startup, Chromium (and Google Chrome) send a request to Google to determine which localized version of Google to use, e.g. whether to direct queries to google.com, google.de, google.co.uk, or another localized version of Google. This is important because sending queries to the right localized Google domain ensures that you get results that are more relevant. For instance, a user searching for "Football" on google.co.uk would get results for European football ("soccer"), while a search on google.com might favor results on American football. Currently (in Google Chrome version 0.2.149), this request is sent regardless of what your default search engine is set to. This was an oversight, and this information was never used to track users.  In Google Chrome 0.3, this request will only be sent if your default search provider is set to Google. This change has already been made in the Chromium source code. 


Communications between Google Chrome (but not Chromium) and service providers

Usage Statistics and Crash Reports: This option is opt-in, and is disabled by default. Users can elect to send Google their usage statistics and crash reports. This includes statistics on how often Google Chrome features, such as accepting a suggested query or URL in the address bar, are used. Google Chrome doesn't send other personal information, such as name, email address, or Google account information. This option can be enabled or disabled at any time.

GoogleUpdate: When you install Google Chrome, GoogleUpdate is also installed. GoogleUpdate makes sure that your copy of Google Chrome is kept up to date, so that when updates are released your version is automatically updated without any action required on your part. This is especially important to make sure that users are protected by the latest security fixes that Google releases. As part of these update checks, two unique, randomly generated IDs are sent, along with information such as version number, language, operating system, and other install or update-related details. This information helps us accurately count total users, and is not associated with you or your Google Account. More information is available in the Google Chrome help center. GoogleUpdate cannot be disabled from within Google Chrome. GoogleUpdate is automatically uninstalled on the next update check (typically every few hours) after the last Google product using it is uninstalled. The GoogleUpdate team is working on functionality to allow GoogleUpdate to be uninstalled immediately after the last app using it is uninstalled.

RLZ: When you do a Google search from the Google Chrome address bar, an "RLZ parameter" is included in the URL. It is also sent separately on days when Google Chrome has been used or when certain significant events occur, such as a successful installation of Google Chrome. RLZ contains some encoded information, such as the week when you downloaded Google Chrome and where you got it from. This parameter does not uniquely identify you, nor is it used to target advertising. This information is used to understand the effectiveness of different distribution mechanisms, such as downloads directly from Google vs. other distribution channels. More information is available in the Google Chrome help center. This cannot be disabled so long as your search provider is Google. If your default search provider is not Google, then searches performed using the address bar will go to your default search provider, and will not include this RLZ parameter.

Updates to Google Chrome's privacy policy
 
In addition to explaining what communications take place between Google Chrome and service providers, we want to let you know that we are updating the Google Chrome privacy policy to reflect a change that we have made to the browser to protect user privacy.

Since the release of Google Chrome, we have modified the way its browsing history works so that the searchable index of pages you visit does not include sites with "https" web addresses. However, thumbnail-sized screenshots of these pages will be captured for local use, such as on the new tab page. The updated version of the privacy policy reflects this change. As before, your browsing history stays on your own computer for your convenience and is not sent back to Google. And remember: You can delete all or part of your browsing history at any time, or you can conduct your browsing in incognito mode, which does not store browsing history. 

We hope this new language makes it clearer how Google Chrome works. For more information, check out the Google Chrome privacy video on our YouTube Google Privacy Channel.

A lot of smart people are doing some serious tire kicking on Google Chrome. Now with several days of testing under their belts, we're seeing many observations about Google Chrome's memory usage. I've just posted a techie document about memory over on the developer website as an initial brain-dump of our current thinking about memory usage within Google Chrome. This article is a quick summary.

Measuring memory

If you're measuring memory in a multi-process application like Google Chrome, don't forget to take shared memory into account. If you simply add up the size of each process in the Windows XP task manager, you'll count the shared memory once per process. If there are a large number of processes, this double-counting can inflate the apparent memory usage by 30-40%.

To make it easy to summarize multi-process memory usage, Google Chrome provides the "about:memory" page which includes a detailed breakdown of Google Chrome's memory usage and also provides basic comparisons to other browsers that are running.

Multi-process Model Disadvantages

While the multi-process model provides clear robustness and performance benefits, it can also be a setback in terms of using the absolute smallest amount of memory. Since each tab is its own "sandboxed" process, tabs cannot share information easily. Any data structures needed for general rendering of web pages must be replicated to each tab. We've done our best to minimize this, but we have a lot more work to do.

Example: Try opening the browser with 10 different sites in 10 tabs. You will probably notice that Google Chrome uses significantly more memory than single-process browsers do for this case.

Keep in mind that we believe this is a good trade-off. For example, each tab has its own JavaScript engine. Thanks to process separation, an attack compromising one tab's JavaScript engine is much less likely to gain access to another tab (which may contain banking information). Operating system vendors learned long ago that there are many benefits to not having all applications load into a single process space, despite the fact that multiple processes do incur overhead.

Multi-process Model Advantages

Despite the setback, the multi-process model has advantages too. The primary advantage is the ability to partition memory for particular pages. So, when you close a page (tab), that partition of memory can be completely cleaned up. This is much more difficult to do in a single-process browser.

To demonstrate, let's expand on the example above. Now that you have 10 tabs open in both a single-process browser and Google Chrome, try closing 9 of them and check the memory usage. Hopefully, this will demonstrate that Google Chrome is able to reclaim more memory than the single-process browser generally can. We hope this is indicative of general user behavior, where many sites are visited on a daily basis; when the user leaves a site, we want to clean up everything associated with it.

You can find even more details in the design doc on our Chromium developer website.

A major goal of Google Chrome was to improve user enjoyment and value in web surfing. Critical to that is increasing the responsiveness of the browser to user input, or reducing user-perceived latency. Measurements in the browser have shown that a significant amount of time is traditionally spent waiting for DNS to resolve domain names. To speed up browsing, Google Chrome resolves domain names before the user navigates, typically while the user is viewing a web page. This is done using your computer's normal DNS resolution mechanism; no connection to Google is used. As a result, user navigation time in Google Chrome when first visiting a domain is on average about 250ms faster than traditional browsing, and the occasional but painful 1-second-plus delays are almost never experienced.

How it works, and how much it helps.

First off, DNS resolution is the translation of a domain name, such as www.google.com, into an IP address, such as 74.125.19.147. A user can't go anywhere on the internet until the target domain has been resolved via DNS.
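
For the curious, this is what a single blocking lookup looks like at the C level, using the standard POSIX getaddrinfo() call. This is a simplified sketch; a real browser's resolver also handles IPv6, errors, and caching:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <cstdio>

    int main() {
      addrinfo hints = {};
      hints.ai_family = AF_INET;       // IPv4 only, for simplicity.
      hints.ai_socktype = SOCK_STREAM;

      addrinfo* result = nullptr;
      if (getaddrinfo("www.google.com", "80", &hints, &result) != 0)
        return 1;  // Resolution failed.

      char ip[INET_ADDRSTRLEN];
      auto* addr = reinterpret_cast<sockaddr_in*>(result->ai_addr);
      inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof(ip));
      std::printf("www.google.com -> %s\n", ip);
      freeaddrinfo(result);
    }

Until that call returns, navigation is stuck; that wait is exactly what prefetching hides.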

The histograms at the end of this post show actual resolution times encountered when computers needed to contact their network for DNS resolutions. The data was gathered during our pre-release testing by Google employees who opted-in to contributing their results. As can be seen in that data, the average latency was generally around 250ms, and many resolutions took over 1 second, some even several seconds.

DNS prefetching just resolves domain names before a user tries to navigate, so that there will be no effective user delay due to DNS resolution. The most obvious example where prefetching can help is when a user is looking at a page with many links to various unexplored domains, such as a search results page. Google Chrome automatically scans the content of each rendered page looking for links, extracting the domain name from each link, and resolving each domain to an IP address. All this work is done in parallel with the user's reading of the page, hardly using any CPU power. When a user clicks on any of these pre-resolved names to visit a new domain, they will save an average of over 250ms in their navigation.  
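
The following sketch shows the basic idea (it is not Chrome's actual code, which throttles lookups and manages its own work queue): resolve each extracted hostname on a background thread and simply discard the answer, since the useful side effect is that the OS resolver cache is now warm.

    #include <netdb.h>
    #include <string>
    #include <thread>
    #include <vector>

    void Prefetch(const std::string& host) {
      addrinfo hints = {};
      hints.ai_socktype = SOCK_STREAM;
      addrinfo* result = nullptr;
      if (getaddrinfo(host.c_str(), nullptr, &hints, &result) == 0)
        freeaddrinfo(result);  // Discard; the OS cache now holds the entry.
    }

    int main() {
      // In the browser, these come from scanning the rendered page's links.
      std::vector<std::string> hosts = {"www.wikipedia.org", "news.bbc.co.uk"};
      std::vector<std::thread> workers;
      for (const auto& h : hosts)
        workers.emplace_back(Prefetch, h);
      for (auto& w : workers)
        w.join();
    }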

If you've been running Google Chrome for a while, be sure to try typing "about:dns" into the address bar to see what savings you've accrued! Humorously, this prefetching feature often goes unnoticed, as users simply avoid the pain of waiting, and tend to think the network is just fast and smooth. To look at it another way, DNS prefetching removes the variance from surfing latency that is induced by DNS resolutions. (Note: If about:dns doesn't show any savings, then you are probably using a proxy, which resolves DNS on behalf of your browser.)

There are several other benefits that Google Chrome derives from DNS prefetching. During startup, it pre-resolves domain names, such as your home pages, very early in the startup process. This tends to save about 200-500 ms during application startup. Google Chrome also pre-resolves the host names in URLs suggested by the omnibox while the user is typing, but before they press enter. This feature works independently of the broader omnibox logic, and doesn't utilize any connection to Google. As a result, Google Chrome will generally navigate to a typed URL faster, or reach a user's search provider faster. Depending on the popularity of the target domain, this can save 100-250 ms on average, and much more in the worst case.

If you are running Google Chrome, try typing "about:histograms/DNS.PrefetchFoundName" into the address bar to see details of the resolution times currently being encountered on your machine.

The bottom line to all this DNS prefetching is that Google Chrome works overtime, anticipating a user's needs and making sure they have a very smooth surfing experience. Google Chrome doesn't just render pages and run JavaScript at remarkable speed; it also gets users to their destinations quickly, and generally sidesteps the pitfalls surrounding DNS resolution time.

Of course, the best way to see this DNS prefetching feature work is to just surf.

Sample of DNS Resolution Times Requiring Network Activity (i.e., over 15 ms resolution)

The following is a recent histogram of aggregated DNS resolution times observed during tests of Google Chrome by Googlers, prior to the product's public release. The samples listed are only those that required network access (i.e., took more than 15 ms). The left column lists the lower bound of each bucket. For example, the first bucket holds samples between 14 and 18 ms inclusive. The next three columns show the number of samples in that range, the fraction of samples in the range, and the cumulative fraction of samples at or below that range. For example, the first bucket contains 31,761 samples, or about 0.51% of all 6,228,600 samples shown. Looking at the cumulative percentage column (far right), we can see that the median resolution took around 90 ms (more precisely, 52.71% took less than 118 ms, while 43.63% took less than 87 ms). Reading from the top of the chart, the average DNS resolution time was 271 ms, and the standard deviation was 1.13 seconds. The "long tail" may have included users that lost network connectivity and eventually reconnected, producing extraordinarily long resolution times.


Count: 6,228,600 samples; Sum of times: 1,689,207,135 ms; Mean: 271 ms ± 1130.67 ms
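
As a quick sanity check, the stated mean follows directly from the summary line above: dividing the sum of times by the sample count gives back roughly 271 ms.

    #include <cstdio>

    int main() {
      const double sum_ms = 1689207135.0;  // Sum of times, in ms.
      const double count = 6228600.0;      // Number of samples.
      std::printf("mean = %.1f ms\n", sum_ms / count);  // Prints: mean = 271.2 ms
    }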


Unlike most current web browsers, Google Chrome uses many operating system processes to keep web sites separate from each other and from the rest of your computer.  In this blog post, I'll explain why using a multi-process architecture can be a big win for browsers on today's web.  I'll also talk about which parts of the browser belong in each process and in which situations Google Chrome creates new processes.

1. Why use multiple processes in a browser?

In the days when most current browsers were designed, web pages were simple and had little or no active code in them.  It made sense for the browser to render all the pages you visited in the same process, to keep resource usage low.

Today, however, we've seen a major shift towards active web content, ranging from pages with lots of JavaScript and Flash to full-blown "web apps" like Gmail.  Large parts of these apps run inside the browser, just like normal applications run on an operating system.  Just like an operating system, the browser must keep these apps separate from each other.

On top of this, the parts of the browser that render HTML, JavaScript, and CSS have become extraordinarily complex over time.  These rendering engines frequently have bugs as they continue to evolve, and some of these bugs may cause the rendering engine to occasionally crash.  Also, rendering engines routinely face untrusted and even malicious code from the web, which may try to exploit these bugs to install malware on your computer.

In this world, browsers that put everything in one process face real challenges for robustness, responsiveness, and security.  If one web app causes a crash in the rendering engine, it will take the rest of the browser with it, including any other web apps that are open.  Web apps often have to compete with each other for CPU time on a single thread, sometimes causing the entire browser to become unresponsive.  Security is also a concern, because a web page that exploits a vulnerability in the rendering engine can often take over your entire computer.

It doesn't have to be this way, though.  Web apps are designed to be run independently of each other in your browser, and they could be run in parallel.  They don't need much access to your disk or devices, either.  The security policy used throughout the web ensures this, so that you can visit most web pages without worrying about your data or your computer's safety.  This means that it's possible to more completely isolate web apps from each other in the browser without breaking them.  The same is true of browser plug-ins like Flash, which are loosely coupled with the browser and can be separated from it without much trouble.

Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself.  This means that a rendering engine crash in one web app won't affect the browser or other web apps.  It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won't lock up if a particular web app or plug-in stops responding.  It also means we can run the rendering engine processes in a restrictive sandbox that helps limit the damage if an exploit does occur.
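
A tiny POSIX-only sketch illustrates the robustness point (Chrome itself uses platform-specific process machinery; this only shows the principle): a fatal crash in a child "renderer" process leaves the parent "browser" process running.

    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
      pid_t pid = fork();
      if (pid == 0)
        std::abort();  // Child: stands in for a renderer hitting a fatal bug.

      int status = 0;
      waitpid(pid, &status, 0);
      if (WIFSIGNALED(status))
        std::printf("renderer died (signal %d); browser still running\n",
                    WTERMSIG(status));
    }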

Interestingly, using multiple processes means Google Chrome can have its own Task Manager, which you can get to by right-clicking on the browser's title bar.  This Task Manager lets you track resource usage for each web app and plug-in, rather than for the entire browser.  It also lets you kill any web apps or plug-ins that have stopped responding, without having to restart the entire browser.


For all of these reasons, Google Chrome's multi-process architecture can help it be more robust, responsive, and secure than single process browsers.

2. What goes in each process?

Google Chrome creates three different types of processes: browser, renderers, and plug-ins.

Browser.  There's only one browser process, which manages the tabs, windows, and "chrome" of the browser.  This process also handles all interactions with the disk, network, user input, and display, but it makes no attempt to parse or render any content from the web.

Renderers.  The browser process creates many renderer processes, each responsible for rendering web pages.  The renderer processes contain all the complex logic for handling HTML, JavaScript, CSS, images, and so on.  We achieve this using the open source WebKit rendering engine, which is also used by Apple's Safari web browser.  Each renderer process is run in a sandbox, which means it has almost no direct access to your disk, network, or display.  All interactions with web apps, including user input events and screen painting, must go through the browser process.  This lets the browser process monitor the renderers for suspicious activity, killing them if it suspects an exploit has occurred.

Plug-ins.  The browser process also creates one process for each type of plug-in that is in use, such as Flash, Quicktime, or Adobe Reader.  These processes just contain the plug-ins themselves, along with some glue code to let them interact with the browser and renderers.

3. When should the browser create processes?

Once Google Chrome has created its browser process, it will generally create one renderer process for each instance of a web site you visit.  This approach aims to keep pages from different web sites isolated from each other.

You can think of this as using a different process for each tab in the browser, but allowing two tabs to share a process if they are related to each other and are showing the same site.  For example, if one tab opens another tab using JavaScript, or if you open a link to the same site in a new tab, the tabs will share a renderer process.  This lets the pages in these tabs communicate via JavaScript and share cached objects.  Conversely, if you type the URL of a different site into the location bar of a tab, we will swap in a new renderer process for the tab.

Compatibility with existing web pages is important for us.  For this reason, we define a web site as a registered domain name, like google.com or bbc.co.uk.  This means we'll consider sub-domains like mail.google.com and maps.google.com as part of the same site.  This is necessary because there are cases where tabs from different sub-domains may try to communicate with each other via JavaScript, so we want to keep them in the same renderer process.
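
A naive version of this "same site" test is easy to sketch.  Note this is illustrative only: a real implementation must consult a public-suffix list to handle registries like .co.uk correctly, which the two-label shortcut below gets wrong.

    #include <cstdio>
    #include <string>

    // Keep only the last two labels of the hostname, e.g. "google.com".
    std::string RegisteredDomain(const std::string& host) {
      size_t last = host.rfind('.');
      if (last == std::string::npos || last == 0) return host;
      size_t prev = host.rfind('.', last - 1);
      return (prev == std::string::npos) ? host : host.substr(prev + 1);
    }

    bool SameSite(const std::string& a, const std::string& b) {
      return RegisteredDomain(a) == RegisteredDomain(b);
    }

    int main() {
      // 1: may share a renderer process.  0: gets a separate one.
      std::printf("%d\n", SameSite("mail.google.com", "maps.google.com"));
      std::printf("%d\n", SameSite("www.google.com", "www.example.com"));
    }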

There are a few caveats to this basic approach, however.  Your computer would start to slow down if we created too many processes, so we place a limit on the number of renderer processes that we create (20 in most cases).  Once we hit this limit, we'll start re-using the existing renderer processes for new tabs.  Thus, it's possible that the same renderer process could be used for more than one web site.  We also don't yet put cross-site frames in their own processes, and we don't yet swap a tab's renderer process for all types of cross-site navigations.  So far, we only swap a tab's process for navigations via the browser's "chrome," like the location bar or bookmarks.  Despite these caveats, Google Chrome will generally keep instances of different web sites isolated from each other in common usage.

For each type of plug-in, Google Chrome will create a plug-in process when you first visit a page that uses it.  A short time after you close all pages using a particular plug-in, we will destroy its process.

We'll post future blog entries as we refine our policies for creating and swapping among renderer processes.  In the meantime, we hope you see some of the benefits of a multi-process architecture when using Google Chrome.

Chromium helps protect your computer from malware by running some parts of the browser in a sandbox.  The sandbox tries to limit what an attacker can do after exploiting a bug.  In particular, the sandbox aims to prevent malicious web sites from automatically installing software on your computer and from reading confidential files on your hard drive.

The two main modules of Chromium are the browser process and the rendering engine.  The browser process has the same access to your computer that you do, so we try to reduce its attack surface by keeping it as simple as possible.  For example, the browser process does not attempt to understand HTML, JavaScript, or other complex parts of web pages.  The rendering engine does the heavy lifting: laying out web pages and running JavaScript.

To access your hard drive or the network, the rendering engine must go through the browser process, which checks to make sure the request looks legitimate.  In a sense, the browser process acts like a supervisor that double-checks that the rendering engine is acting appropriately.  The sandbox doesn't prevent every kind of attack (for example, it doesn't stop phishing or cross-site scripting), but it should make it harder for attackers to get to your files.
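
In (heavily simplified) code, the supervisor idea looks something like the sketch below; the function names and the allowlist are invented for illustration and bear no relation to Chromium's real IPC interface.

    #include <cstdio>
    #include <set>
    #include <string>

    // Paths the user explicitly granted, e.g. via an "Open File" dialog.
    bool BrowserAllowsRead(const std::string& path) {
      static const std::set<std::string> user_granted = {"/home/user/upload.txt"};
      return user_granted.count(path) > 0;
    }

    // The sandboxed renderer can't touch the disk itself; it must ask us.
    void HandleRendererReadRequest(const std::string& path) {
      if (BrowserAllowsRead(path))
        std::printf("granted: %s\n", path.c_str());
      else
        std::printf("denied:  %s\n", path.c_str());  // Suspicious request.
    }

    int main() {
      HandleRendererReadRequest("/home/user/upload.txt");   // granted
      HandleRendererReadRequest("/home/user/.ssh/id_rsa");  // denied
    }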

To see how well this architecture might mitigate future attacks, we studied recent vulnerabilities in web browsers.  We found that about 70% of the most serious vulnerabilities (those that let an attacker execute arbitrary code) were in the rendering engine.  Although "number of vulnerabilities" is not an ideal metric for evaluating security, these numbers do suggest that sandboxing the rendering engine is likely to help improve security.

To learn more, check out our technical report on Chromium's security architecture.

Google Suggest is one of the things that makes the omnibox so cool. Just type a few letters and Google Chrome will often point you at the search query or the page that you were trying to type. Based on feedback from privacy groups, the Suggest team is making some changes to the information they log. You can read more about it on the Google Blog.

Our recent launch of Google Chrome simply would not have been possible were it not for the awesome WebKit rendering engine and the amazing team behind it. We want to take a moment to recognize their excellent work (past and present!) and talk about how we arrived at incorporating WebKit into Google Chrome. By the way, that excellent web inspector tool is actually a component of WebKit ;-)

At the outset of the project, we knew we didn't want to create yet another rendering engine. After all, web developers already have enough to worry about when it comes to making sure that all users can access their web pages and web applications. Being inside Google, where we develop lots of pages and web apps, we were very familiar with this problem!

Yet, we also knew that we wanted to create a multi-process browser, which meant that our rendering engine needed to be very lightweight as we were going to be running many of them. Furthermore, in order to achieve our sandboxing objectives, the rendering engine needed to be stripped of any access to the local file system and native widget system.

Our final constraint involved our open source ambitions for Google Chrome. We needed a rendering engine that was open source.

WebKit became the obvious solution after talking to fellow engineers working on the Android project. They were already using WebKit (as it is a great option for mobile devices), and they trumpeted its speed, flexibility and simplicity. We routinely heard comments like "It's so easy to hack!" and "It didn't take me long to find my way around the code base."

Our next step was to put together a test app that allowed us to try out WebKit in a basic multi-process configuration. We were blown away by how fast WebKit could render pages! You can see a simple example of this in our press conference video (advance to the 38:30 mark). The bottom line: WebKit is a big reason why Google Chrome feels fast.

We continued tracking the WebKit tip-of-tree during the development of Google Chrome. Now that Chromium.org is live, all of our source code is available there, and we are busily working to contribute our modifications back upstream to WebKit. We are excited about all the cool things coming in WebKit and can't wait to start helping out in a big way.

Thanks again to everyone who worked on WebKit. You guys rock!

Ever since we opened the Google office in Aarhus, Denmark, I've been bombarded with the same question: what kind of virtual machine are you working on? Finally, I'm able to answer. It is an open source JavaScript engine, and it is fast.

A core part of any web browser is its JavaScript engine. Web applications cannot be responsive and stable without a fast and reliable JavaScript engine. Google Chrome features a new JavaScript engine, V8, that has been designed for performance from the ground up. In particular, we wanted to remove some common bottlenecks that limit the amount and complexity of JavaScript code that can be used in Web applications.

The cornerstones of the V8 design are:
  • Compilation of JavaScript source code directly into native machine code.
  • An efficient memory management system resulting in fast object allocation and small garbage collection pauses.
  • Introduction of hidden classes and inline caches that speed up property access and function calls.

Virtual machines for object-oriented languages have long used inline caching to speed up execution. However, inline caching relies on objects with similar structure sharing the same runtime type. By dynamically creating hidden classes for JavaScript objects, V8 can apply optimizations that were previously possible only in virtual machines for languages with runtime types.
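
To make the hidden-class idea concrete, here is a greatly simplified C++ sketch (V8's real machinery is far more elaborate, and the names here are invented): objects that add the same properties in the same order end up sharing a class, so a property's slot index can be cached per class instead of being looked up in a dictionary on every access.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    struct HiddenClass {
      std::map<std::string, int> offsets;               // property -> slot index
      std::map<std::string, HiddenClass*> transitions;  // shared transition edges
    };

    struct JSObject {
      HiddenClass* klass;
      std::vector<double> slots;
    };

    void AddProperty(JSObject& obj, const std::string& name, double value) {
      HiddenClass*& next = obj.klass->transitions[name];
      if (!next) {  // First object to take this transition: build the new class.
        next = new HiddenClass(*obj.klass);  // (Leaked; this is only a sketch.)
        next->transitions.clear();
        next->offsets[name] = static_cast<int>(next->offsets.size());
      }
      obj.klass = next;  // Same additions in the same order -> same class.
      obj.slots.push_back(value);
    }

    int main() {
      HiddenClass root;
      JSObject a{&root, {}}, b{&root, {}};
      AddProperty(a, "x", 1); AddProperty(a, "y", 2);
      AddProperty(b, "x", 3); AddProperty(b, "y", 4);
      // Both objects share a hidden class, so a cached load of "y" needs only
      // one class check followed by a direct slot access.
      std::printf("same class: %d, y slot: %d\n",
                  a.klass == b.klass, a.klass->offsets.at("y"));
    }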

More design details can be found here: http://code.google.com/apis/v8/design.html.

Along with V8 we have released a benchmark suite that reflects the kind of code we want to run fast: well-structured object-based applications with abstraction layers and many property accesses. As Web applications grow, we believe this suite will be representative of how Web developers write JavaScript code.

The V8 benchmark suite consists of five medium-sized standalone JavaScript applications (Richards, DeltaBlue, Crypto, RayTrace, and EarleyBoyer) totaling more than 11,000 lines of JavaScript code. Web applications often spend considerable time waiting for the network, manipulating the DOM, and rendering pages; the V8 benchmark suite measures only pure JavaScript execution. Visit http://code.google.com/apis/v8/benchmarks.html to see how to run the suite.

I hope the web community will adopt the code and the ideas we have developed to advance the performance of JavaScript. Raising the performance bar of JavaScript is important for continued innovation of web applications.

V8 is an open source project and we encourage developers to visit http://code.google.com/p/v8.

Today, Google launched a new web browser called Google Chrome. At the same time, we are releasing all of the code as open source under a permissive BSD license. The open source project is called Chromium - after the metal used to make chrome.

Why did Google release the source code?

Primarily it's because one of the fundamental goals of the Chromium project is to help drive the web forward. Open source projects like Firefox and WebKit have led the way in defining the next generation of web technologies and standards, and we felt the best way we could help was to follow suit, and be as open as we could. To be clear, improving the web in this way also has some clear benefits for us as a company. With a richer set of APIs, we can build more interesting apps, allowing people to do more online. The more people do online, the more they can use our services. At any rate, we have worked on this project by ourselves for long enough - it's time for us to engage with the wider web community so that we can move on to the next set of challenges.

We believe that open source works not only because it allows people to join us and improve our products, but also (and more importantly) because it means other projects are able to use the code we've developed. Where we've developed innovative new technology, we hope that other projects can use it to make their products better, just as we've been able to adopt code from other open source projects to make our product better.

How will we be working with the open source community?

To begin with, we are engaging with the WebKit community to integrate our patches back into the main line of WebKit development. Because of Chromium's unique multi-process architecture, the integration of the V8 JavaScript engine, and other factors, we've built a fairly significant port of WebKit on Windows, and are developing the same for Mac OS X and Linux. We want to make sure that we can find a productive way to integrate and sync up with the WebKit community in this effort as we move forward.

Today, you can visit our project website at www.chromium.org, where you can get the latest source code or the freshest development build. If you're interested in keeping track of what's going on, you can join one of our discussion groups, where you can participate in development discussions and keep track of bugs as they're filed and fixed. Maybe you'll want to fix a few, too! You'll also find information on reporting bugs and all the various other aspects of the project. We hope you'll check it out!

This is the Chromium blog. The posts here will be of a mostly technical nature, discussing the design theory and implementation details of work we've done or are doing. Over the next few weeks there'll be a number of posts that give a high level tour of the most important aspects of the browser.

Finally, if you've not yet done so, take Google Chrome for a spin. You can download it from http://www.google.com/chrome/.