
Years ago, I remember watching a webcast of the introduction of the Aqua user interface when Mac OS X Public Beta was first demoed. The part I distinctly remember was realizing the brilliance of sheets. Like many great innovations, they were simple in retrospect and solved a problem you didn't realize you had: the modality problem — the fact that dialog boxes blocked interacting with the whole application even though only one window needed the information that you, as the user, had to provide. I watched in wonder as a save dialog blocked only the one window that needed saving, leaving all the other windows free. Finally, a solution to limit the modality.

Because modality sucks.

Back in 2000, sheets worked well because the smallest unit of user interaction with an application was a window. Soon after, though, things started to change. Web browsers in particular were among the first to start using tabs to put more than one document in a window. This caused a snag. A web page can require modal interaction from the user: picking a file, or supplying a username and password. Yet we don't want to prevent the user from switching to a different tab and continuing to interact with other websites. If the finest-grained modality control we have is per-window, how can we achieve that outcome?

Chromium's current answer comes from combining Cocoa's child window support with sheets to get tab-modal sheets:

While this looks like a normal sheet, you can switch between open tabs while the password request is up. You can't, however, interact with the web page.

The implementation, like all of the code used in Chromium, is open source, and can be found in the Google Toolbox for Mac, a collection of reusable components from the Mac developers at Google. The technical details of the GTMWindowSheetController can be found on the Google Mac blog. The other thing to note is that right now tab-modal sheets are only used for website authentication. The other sheets we use (for file selection, etc.) are currently window-modal; we hope to convert them over soon.

The fate of tab-modal sheets, however, isn't certain. A way to enforce tab-modal interaction is clearly needed. But is attaching sheets to the tabs the right way to achieve that goal? At the last WWDC, I talked to some graphic designers who were opposed to the idea. "Reusing sheets in a context that isn't window modality will only confuse the user!" On the other hand, my position is that the concept of modality is the same, and the context is similar enough that users will find that sheets help them understand the modality in which they must interact.

So the story isn't over. Tab-modal sheets are our contribution to the ongoing discussion, an experiment to see what works and what doesn't. Together we can work out the best way to help users interact with their computers.

The seven days since our beta launch have been busy and exciting for the Google Chrome Extensions team.

Besides having fun trying out some of the 800+ new extensions in our gallery, we hosted an event for developers on our Mountain View campus to discuss the design principles of Google Chrome's extensions system and to present the team's roadmap. Approximately 140 developers attended, representing more than 50 companies. Aaron Boodman and Erik Kay, technical leads for the extensions platform, provided insights across several topics, including the UI design and the security model for the extensions system. They also demonstrated the platform's flexibility by building and publishing an "Email this page" extension in less than 5 minutes.





Aaron and Erik were joined on stage by the Xmarks, eBay and Google Translate teams, who discussed their own experiences with Google Chrome Extensions, highlighting the ease of development and the advanced capabilities that HTML5 provides to extension developers. Finally, Nick Baum, product manager for Google Chrome Extensions, closed the event by walking through the extensions gallery approval process, tips for successful extensions, as well as the team's near-term goals.

To learn more about these topics, you can check out the videos from the event below:



We also met many extensions developers last week at Add-on Con, an annual conference for browser add-ons. Erik and Aaron presented a quick overview of the extension system's design for those who had missed our earlier event. In addition, Aaron shared his thoughts on a panel about cross-browser extension development while Linus Upson, Google's engineering lead for client products, presented his views on a panel about the future of the browser.

We'd like to thank developers for building and uploading some great extensions in our gallery and for giving us plenty of feedback. This week, we plan to continue our discussions with the developer community by hosting several online tutorial sessions. You can still sign up for one of these sessions, but if you aren't able to attend, we encourage you to submit your questions through our discussion group.

In our earliest discussions about the extension system, we knew we wanted to raise the bar for security, but how can we secure the platform while still letting developers create awesome extensions that have rich interactions with web pages? During our threat analysis, we realized there were two main security concerns: malicious extensions and "benign-but-buggy" extensions.

A malicious extension is an extension written by an ill-intentioned developer. For example, a malicious extension might record your passwords and send them back to a central server. The tricky part about defending against malicious extensions is that there are well-intentioned extensions that do exactly the same thing (a password manager, for instance). Our defenses against malicious extensions focus on helping the user avoid installing malicious extensions in the first place:
  1. We expect most users to install extensions from the gallery, where each extension has a reputation. We expect malicious extensions will have a low reputation and will have difficulty attracting many users. If a malicious extension is discovered in the gallery, we will remove it from the gallery.

  2. When installing extensions outside the gallery, the user experience for installing an extension is very similar to the experience for running a native executable. If an attacker can trick the user into installing a malicious extension, the attacker might as well trick the user into running a malicious executable. In this way, the extension system avoids increasing the attack surface.
To help protect against vulnerabilities in benign-but-buggy extensions, we employ the time-tested principles of least privilege and privilege separation. Each extension declares the privileges it needs in its manifest. If the extension is later compromised, the attacker will be limited to those privileges. For example, the Gmail Checker extension declares that it wishes to interact with Gmail. If the extension is somehow compromised, the attacker will not be granted the privilege to access your bank.
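
As a rough illustration (this is not the actual Gmail Checker manifest; the names and match patterns below are invented, and the fields reflect the early manifest format these posts describe), a manifest might declare only the hosts and APIs the extension needs:

{
  "name": "Mail Checker (example)",
  "version": "1.0",
  "description": "Illustrative only: requests just what it needs.",
  "permissions": [
    "tabs",
    "http://*.mail.example.com/"
  ],
  "background_page": "background.html",
  "content_scripts": [{
    "matches": ["http://*.mail.example.com/*"],
    "js": ["content.js"]
  }]
}

If such an extension were somehow compromised, the attacker's reach would be limited to the tabs API and the single declared mail host.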

To achieve privilege separation, each extension is divided into two pieces, a background page and content scripts. The background page has the lion's share of the extension's privileges but is isolated from direct contact with web pages. Content scripts can interact directly with web pages but are granted few additional privileges. Of course, the two can communicate, but dividing extensions into these components means a vulnerability in a content script does not necessarily leak all of the extension's privileges to the attacker.
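
As a sketch of how the two halves might cooperate (the file names and message fields below are invented; the calls are the chrome.extension messaging APIs from this generation of the platform):

// content.js -- runs alongside the web page with few privileges.
// It can read the DOM, but asks the background page for anything privileged.
chrome.extension.sendRequest(
    {action: "countLinks", count: document.links.length},
    function(response) {
      console.log("Background page replied: " + response.status);
    });

// In the background page -- holds the extension's privileges,
// but never touches web page content directly.
chrome.extension.onRequest.addListener(
    function(request, sender, sendResponse) {
      if (request.action === "countLinks") {
        // A privileged call (updating a badge, making a cross-origin
        // request, etc.) would go here; the content script can't make it.
        sendResponse({status: "ok"});
      }
    });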

Finally, we utilize our multi-process architecture and sandboxing technology to provide strong isolation between web content, extensions, and the browser. Extensions run in a separate operating system process from the browser kernel and from web content, helping prevent malicious web sites from compromising extensions and malicious extensions from compromising the browser kernel. To facilitate rich interaction, content scripts run in-process with web content, but we run content scripts in an "isolated world" where they are protected from the page's JavaScript.
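
A hypothetical example of what that isolation means in practice: the content script sees the same DOM as the page, but not the page's JavaScript state.

// In the web page's own script:
window.secretToken = "only the page's scripts can see this";

// In a content script injected into the same page:
console.log(document.title);      // Works -- the DOM is shared.
console.log(window.secretToken);  // undefined -- the page's JavaScript
                                  // world is invisible to the content script.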

Of course, attackers will write malicious extensions and well-intentioned developers will write buggy extensions. The extension system improves security by making it easier for developers to write secure extensions. If you would like to learn more about the security of the extension system, you can watch our video or read our academic paper describing all the details.

I'm excited to announce that four new tech talks on the guts of Chromium have been posted to YouTube! These should be especially useful for developers who work on Chromium, whether they're fairly new to the project or have been around the block.

We've done tech talks before, but this time we asked Chromium developers what they'd most like to hear about. Once we knew what was most in demand, we found experts on each subject and asked them to make a presentation. The talks were given before a live studio audience of Googlers last Friday with extra attention paid to creating high quality recordings. Now we're excited to make these widely available to all Chromium contributors!

The WebKit API
with Darin Fisher

Darin Fisher talks about the recently upstreamed Chromium WebKit API. The API is a critical step on our path to becoming completely integrated into the WebKit project. Like the other WebKit APIs, ours is a veneer that shields developers (including many of our own) from the internal details of WebKit (WebCore). Darin talks at a high level about the API, dives into some code examples, and discusses the history and future of the API.

Layout Tests
with Pam Greene

Layout Tests are the tests we inherit from the WebKit project, and they are a very important part of Chromium's testing infrastructure. Pam Greene talks about what they are, how to run them, how to debug problems within them, and even touches on how to write your own. She also covers advanced (but easy to use) tools for rebaselining and tracking flakiness. Any Chromium developer who works on WebKit really should check this out!

Painting in Chromium
with Brett Wilson

Because of Chromium's multi-process architecture, painting within Chromium is far from typical. In this talk, Brett Wilson starts from Skia and the WebKit render tree, follows the bits across the process boundaries, and continues all the way to your screen. He also details many of the differences in painting between platforms, how things work in test shell, and interesting corner cases like resizing.

WebKit's Guts
with Eric Seidel

A large percentage of Chromium's code (and part of what makes it so fast) is WebKit. In this talk, Eric Seidel gives us a 30,000-foot view of how WebKit actually renders a page. He starts with how resources are loaded, explains how they're parsed into a DOM tree, and then talks about the various trees involved in rendering. In addition, he touches on many other important topics like hit testing (figuring out what you're hovering over and clicking on). This is a must-see for anyone working on the guts of WebKit.

Also note that all the tech talks are posted to the Chromium developer website.

The last couple of weeks since we open sourced Chromium OS have been pretty exciting. The discussion groups have been buzzing, and a number of sites have put up Chromium OS builds for download. While we're happy that developers have been building Chromium OS, there are a few things we would like to clarify:
  1. This is not ready for consumers yet — everything you see can and probably will change by the time Google Chrome OS-based devices are available late next year.
  2. Please note that Google has not released an official Chromium OS binary, so if you download Chromium OS binaries, please ensure that you trust the site you are downloading them from.
  3. While we will try our best to help you through the Chromium discussion forums, we will not be officially supporting any of these builds. Remember that you are downloading that specific site/developer's build of Chromium OS.
We have also received a number of questions that we wanted to answer directly, so we put together the following FAQ to clarify some of these issues.

One of the top questions has been around the distinction between Google Chrome OS and Chromium OS. Google Chrome OS is to Chromium OS what the Google Chrome browser is to Chromium. Chromium OS is the open source project, used primarily by developers, with code that anyone can check out, modify, and build their own version from. Google Chrome OS, meanwhile, is the Google product that OEMs will ship on netbooks next year. Therefore, dear developers who have built and posted Chromium OS binaries: you're awesome and we appreciate what you are doing; however, we have to ask you to call the binaries you've put up for download "Chromium OS" and not "Google Chrome OS".

Thanks!

Google is one of the Khronos member companies helping to develop the WebGL specification, which brings hardware-accelerated 3D rendering to the web via the canvas element. Today marks the release of the initial public draft of the WebGL spec. We're happy to announce that Chromium contains provisional WebGL support on Linux (32- and 64-bit), Mac, and Windows. This implementation was developed in close collaboration with Apple and uses much shared code from WebKit.

See Getting a WebGL Implementation for instructions on getting a Chromium build and enabling WebGL support. This is an early version with many caveats, but with it you can get a taste of the new functionality coming to the web.
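
If you want to poke at the API directly once it's enabled, a minimal sanity check looks something like the sketch below; note that the context name has varied across early builds, so this tries a couple of likely strings.

var canvas = document.createElement("canvas");
// Early implementations expose the context under experimental names.
var gl = canvas.getContext("experimental-webgl") ||
         canvas.getContext("webkit-3d");
if (gl) {
  gl.clearColor(0.0, 0.0, 0.0, 1.0);  // opaque black
  gl.clear(gl.COLOR_BUFFER_BIT);      // cleared on the GPU
} else {
  // WebGL isn't available; see the instructions above.
}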

Here are a few demos to whet your appetite:
The WebGL wiki is the central location for information about the evolving specification, including the draft spec, introductory articles, tutorials, mailing lists and forums. See the WebGL demo repository for more demos and instructions on how to check out their source code.

We're looking forward to finalizing the WebGL specification and making this functionality available to web developers, and we welcome your feedback. For Chromium-specific questions, use the Chromium-dev mailing list; for more general WebGL questions, use the WebGL forums or the WebGL public mailing list.

Starting in the Google Chrome developer channel release 4.0.249.0, Web Sockets are available and enabled by default. Web Sockets are "TCP for the Web," a next-generation bidirectional communication technology for web applications, being standardized as part of Web Applications 1.0. We've implemented this feature as described in our design docs for WebKit and Chromium.

The Web Sockets API enables web applications to handle bidirectional communications with a server-side process in a straightforward way. Developers have been using XMLHttpRequest ("XHR") for such purposes, but XHR makes developing web applications that communicate back and forth with the server unnecessarily complex. XHR is basically asynchronous HTTP, and because you need to use a tricky technique like long-hanging GET to send data from the server to the browser, simple tasks rapidly become complex. As opposed to XMLHttpRequest, Web Sockets provide a real bidirectional communication channel in your browser. Once you get a Web Socket connection, you can send data from browser to server by calling the send() method, and receive data from server to browser via an onmessage event handler. A simple example is included below.

if ("WebSocket" in window) {
var ws = new WebSocket("ws://example.com/service");
ws. {
// Web Socket is connected. You can send data by send() method.
ws.send("message to send"); ....
};
ws. (evt) { var received_msg = evt.data; ... };
ws. { // websocket is closed. };
} else {
// the browser doesn't support WebSocket.
}

In addition to the new Web Sockets API, there is also a new protocol (the "web socket protocol") that the browser uses to communicate with servers. The protocol is not raw TCP because it needs to provide the browser's "same-origin" security model. It's also not HTTP because web socket traffic differs from HTTP's request-response model. Web socket communications using the new web socket protocol should use less bandwidth because, unlike a series of XHRs and hanging GETs, no headers are exchanged once the single connection has been established. To use this new API and protocol and take advantage of the simpler programming model and more efficient network traffic, you do need a new server implementation to communicate with — but don't worry. We also developed pywebsocket, which can be used as an Apache extension module, or can even be run as a standalone server.

You can use Google Chrome and pywebsocket to start implementing Web Socket-enabled web applications now. We're more than happy to hear your feedback, not only on our implementation but also on the API and/or protocol design. The protocol has not been completely locked down and is still under discussion in the IETF, so we are especially grateful for any early adopter feedback.

These last few days, it seems that the extensions team has developed a newfound love for the F5 key. We all keep refreshing the "Most recent" page of our new gallery, obsessively checking the newest amazing extensions that developers have uploaded. Today, we get to share this nervous tic with millions of Google Chrome users. We're launching extensions in the beta channel for Windows and Linux (Mac is in progress). We're also opening our gallery, which, as of now, contains more than 300 extensions!

An extension system has been one of our most requested features for Google Chrome. It's a tribute to Mozilla and the Firefox project that nowadays, users just expect all browsers to have built-in extensibility.

We started the project by presenting a design doc that outlined our vision to create an extensions system based on web technologies - a system that is easy to use, stable, more secure and that wouldn't slow down Google Chrome. It wasn't always easy to balance our goals, and sometimes we had to make tough trade-offs.

Since we built all of this in the open, we had tons of help. Developers started using our code shortly after the first check-in, and have been sending us feedback on our mailing list ever since. Being able to see the extensions people were trying to build and the problems they faced made it more fun to design the system, and motivated us to keep fixing the bugs.

Today, we're really happy to release a beta of extensions that begins to deliver on our initial vision. Extensions are as easy to create as webpages. Users can install and uninstall them quickly without restart, and extensions have a great polished look that fits in with Google Chrome's minimalist aesthetic. When developers upload an extension it is available to users immediately, with limited restrictions and manual reviews only in a few situations.

On the technical side, we've been able to use Google Chrome's multiprocess architecture to help keep extensions stable and safe. And Chromium's extensive performance monitoring infrastructure has helped us ensure extensions affect Google Chrome's speed as little as possible. You can learn more details about the internals of our system in the videos below.



We still have a long way to go - next up, we're going to be working hard to get extensions to all Google Chrome users, and we're already brainstorming the next set of API improvements. Oh, and we should also fix some bugs ;-).

For those of you who want to learn more about extensions, let us know if you want to join us in a small get-together tomorrow on our campus in Mountain View. Space is limited - we'd love to see many of you there, so do RSVP early, and we'll email you more information if you are selected to attend. You can also meet with our team at Add-on Con, where we are going to participate in a couple of panels. Finally, for those of you who are far away, we are planning some online developer tutorial sessions. If you are interested in attending these, please fill in this form.

Google Chrome for Linux is finally ready for beta. Like the Windows version, it's fast, secure, stable, simple, extensible, and embraces open standards like HTML5.

But bringing Google Chrome to Linux wasn't just a straight port -- it was a labor of love. Google Chrome works well with both GNOME and KDE, and is updated via the normal system package manager. It has also been developed as a true open source project, using public mailing lists, IRC channels, a bug tracker, a code repository, and a continuous build and test farm -- following in large part the trail blazed by Mozilla. Where we noticed problems in system libraries, we pushed fixes upstream and filed bugs. This open approach to development seems to be working: so far, about 50 developers outside Google have contributed code (for instance, thanks to Ibrar and Paweł for our FTP stack), and several Linux distributions even maintain preliminary open source builds of Chromium.

In short, we really love Google Chrome for Linux, and we think you will, too. Please try it and let us know what you think.

(One more thing: if you've already installed the dev channel version, you may need to uninstall that before installing the beta version -- we tried to make that work smoothly, but a few rough edges remain.)

We've introduced a new way for web developers to take advantage of Google Chrome's multi-process architecture, as of version 4.0.229.1. Google Chrome already uses separate OS processes to isolate independent tabs from each other in the browser, so that crashes or slowdowns in one tab won't affect the others. Google Chrome even switches a given tab's process if you type a different site's URL into the Omnibox.

In many cases, though, Google Chrome needs to keep pages from related tabs in the same process, since they may access each other's contents using JavaScript code. For example, clicking links that open in a new window will generally cause the new and old pages to share a process.

In practice, web developers may find situations where they would like links to other pages to open in a separate process. For example, it would be nice to isolate links from messages in your webmail client from the webmail client itself. This is now easy to achieve, thanks to new support in WebKit for HTML5's "noreferrer" link relation.

To cause a link to open in a separate process from your web page, just add rel="noreferrer" and target="_blank" as attributes to the <a> tag, and then point it at a URL on a different domain name. For example:

<a href="http://www.google.com" rel="noreferrer" target="_blank">Google</a>

In this case, Google Chrome knows that the page will be opened in a new window, that no referrer information will be passed to the new page, and that the window.opener value will be null in the new page. As a result, the two pages cannot script each other, so Chrome can load them in separate processes. Google Chrome will still keep same-site pages in the same process, to allow them to share caches and minimize overhead.
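
You can verify the effect from the newly opened page itself; a couple of lines like these (illustrative only) should show that no script connection or referrer survives the jump:

// Script running in the newly opened page.
console.log(window.opener);      // null when the link carried rel="noreferrer"
console.log(document.referrer);  // "" -- no referrer information was passed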

We hope you find this useful on your own sites!

Earlier this year, we heard from many of you on how important speed is to your daily activities on the web. We kicked off a series of discussions with the Internet community on ways to make the web faster: from Internet protocols and best practices in website development, to improvements in the browser itself.

A lot of engineering effort is involved in making sure that a browser continually provides a fast, responsive, and satisfying experience on the web. We're excited to see modern browsers continue to push the envelope in designing and optimizing browser architecture for speed and performance.

We've often been asked what makes Google Chrome so fast -- from its snappy start-up time and fast page-loading, to the ability to run complex web applications quickly. To walk through some of the thought processes and technical decisions involved in making Google Chrome a fast browser, we've put together three technical interviews on DNS pre-resolution, the V8 JavaScript engine, and DOM bindings. In a future post, we'll also cover other important areas like WebKit and UI responsiveness.



DNS pre-resolution
with Jim Roskind


  1. What is DNS pre-resolution, and how does it make Google Chrome even faster?
  2. Why is DNS pre-resolution difficult to do?
  3. Explain in more detail how adaptive pre-resolution works.
  4. How else is DNS pre-resolution beneficial? Can it help with browser start-up time?
  5. How do we measure and benchmark the benefits of DNS pre-resolution?
  6. What's next for DNS pre-resolution?



V8 JavaScript engine
with Mads Ager



  1. What is V8?
  2. What are we currently doing to speed up JavaScript performance on V8?
  3. How do we achieve big boosts in JavaScript speed, such as the recent 150% improvement since our initial launch?
  4. How do we measure V8's performance?


DOM bindings and more
with Mike Belshe



  1. What are DOM bindings?
  2. What are the most recent improvements in DOM bindings, for Google Chrome as well as other browsers?
  3. The Google Chrome beta release in August 2009 included improvements in DOM bindings. Tell us more.
  4. How do we measure and benchmark improvements in DOM bindings?
  5. In general, what are the biggest performance impediments for a browser?
  6. What are some of the performance benefits of Google Chrome's multiprocess architecture?




Since we first introduced Google Chrome's developer tools, we've been busy adding more functionality to them.

First, our tools benefited from improvements that the WebKit team made to Web Inspector (our developer tools are partially based on Web Inspector). Second, from our end, we recently released the heap profiler and the timeline tab in Google Chrome's Developer Channel.

With the heap profiler you can now take a snapshot of the JavaScript heap at any point in time. A heap snapshot helps you understand memory usage, and by comparing snapshots you can also follow memory usage over time. You will find the heap profiler in the profiles tab along with the sample-based CPU profiler.
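
For example, a small, contrived leak like the one below is easy to track down: take a snapshot, click the (hypothetical) button a few times, take another snapshot, and the retained strings stand out in the comparison.

// Contrived leak: every click retains another large string forever.
var leaked = [];
var button = document.getElementById("leakButton");  // hypothetical button
if (button) {
  button.addEventListener("click", function() {
    leaked.push(new Array(100000).join("x"));  // roughly 100,000 characters
  }, false);
}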

The new timeline view gives you a complete overview of where time is spent when loading a web app. All events -- ranging from loading resources, through parsing and executing JavaScript, to calculating styles and repainting -- are plotted on a timeline.

Besides these product improvements, we've tried to make the Google Chrome Developer tools easier to find and understand by putting together a mini site with tutorials and videos.



To take our newest release for a spin, get Google Chrome from the Developer Channel and you'll automatically be brought up to date. We welcome your feedback and your contributions to improve developer tools in WebKit and Google Chrome even more.


During the last few months, our team has been working hard to support extensions in Google Chrome's beta channel. Today, we are getting one step closer to this goal; developers can now upload their extensions to Google Chrome's extension gallery. We are making the upload flow available early to make sure that developers have the time to publish their extensions ahead of our full launch.

You can find all the info to write an extension in our docs. Once your extension is ready for the gallery, you'll need to upload a zip file of your code and an icon that helps users distinguish your extension. You'll also have the option to submit text, screenshots and/or YouTube videos that describe the functionality of your extension. All types of extensions are welcome in the gallery, provided they comply with our Terms of Service.

For most extensions, the review process is fully automated. The only extensions we'll review manually are those that include an NPAPI component and those whose content scripts affect "file://" URLs. For security reasons, developers of these types of extensions will need to provide some additional information before they can post them in the gallery.

Once an extension is uploaded, our gallery takes care of packaging and signing. Updating an extension is also incredibly easy — all a developer needs to do is to upload a new file in the gallery. Finally, to further help developers, in the next few days, we plan to open up the gallery to a small group of trusted testers. They will provide developers with insights and bug reports that will help them polish their extensions ahead of our beta launch.

We can't wait to share all the great extensions that you'll submit with all of Google Chrome's users. In the meantime, we encourage you to submit any bugs you find in the upload process to our Issue Tracker and to ask all relevant questions in our discussion group.

Today we announced the Chromium OS project on the Official Google Blog. This release of Chromium OS includes:
We are doing this early, almost a year before Google Chrome OS will be ready for users, because we are eager to engage with open source developers. There are many of you who share our passion for creating a new model of computing. Chromium OS makes it possible for any interested developer to contribute code, ideas and designs to help shape the future of personal computing.

Speed, simplicity and security are fundamental to Chrome OS. We wanted to talk about these areas in a bit more detail.

Speed


Simplicity


Security


Open Source


We expect to publish additional design docs and documentation in the upcoming few months. You can track what we're doing on this blog and we hope you will join us in this effort.

Today we'd like to share with the web community information about SPDY, pronounced "SPeeDY", an early-stage research project that is part of our effort to make the web faster. SPDY is at its core an application-layer protocol for transporting content over the web. It is designed specifically for minimizing latency through features such as multiplexed streams, request prioritization and HTTP header compression.

We started working on SPDY while exploring ways to optimize the way browsers and servers communicate. Today, web clients and servers speak HTTP. HTTP is an elegantly simple protocol that emerged as a web standard in 1996 after a series of experiments. HTTP has served the web incredibly well. We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support.

So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance - pages loaded up to 55% faster. There is still a lot of work we need to do to evaluate the performance of SPDY in real-world conditions. However, we believe that we have reached the stage where our small team could benefit from the active participation, feedback and assistance of the web community.

For those of you who would like to learn more and hopefully contribute to our experiment, we invite you to review our early stage documentation, look at our current code and provide feedback through the Chromium Google Group.


This post is cross-posted at the Google Research Blog

Today, we're releasing an early version of Google Chrome Frame, an open source plug-in that brings HTML5 and other open web technologies to Internet Explorer.

We're building Google Chrome Frame to help web developers deliver faster, richer applications like Google Wave. Recent JavaScript performance improvements and the emergence of HTML5 have enabled web applications to do things that could previously only be done by desktop software. One challenge developers face in using these new technologies is that they are not yet supported by Internet Explorer. Developers can't afford to ignore IE — most people use some version of IE — so they end up spending lots of time implementing work-arounds or limiting the functionality of their apps.

With Google Chrome Frame, developers can now take advantage of the latest open web technologies, even in Internet Explorer. From a faster JavaScript engine, to support for current web technologies like HTML5's offline capabilities and <canvas>, to modern CSS/Layout handling, Google Chrome Frame enables these features within IE with no additional coding or testing for different browser versions.

To start using Google Chrome Frame, all developers need to do is to add a single tag:

<meta http-equiv="X-UA-Compatible" content="chrome=1">

When Google Chrome Frame detects this tag, it switches automatically to using Google Chrome's speedy WebKit-based rendering engine. It's that easy. For users, installing Google Chrome Frame will allow them to seamlessly enjoy modern web apps at blazing speeds, through the familiar interface of the version of IE that they are currently using.
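
If your site wants to confirm on the client side whether the plug-in is actually present before relying on it, one rough, unofficial check is to look for the token Google Chrome Frame adds to IE's user agent string:

// Rough check: Google Chrome Frame advertises itself in IE's user agent.
function hasChromeFrame() {
  return /chromeframe/i.test(navigator.userAgent);
}

if (!hasChromeFrame()) {
  // Keep your existing IE code paths, or invite the user to install
  // Google Chrome Frame.
}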

We believe that Google Chrome Frame makes life easier for web developers as well as users. While this is still an early version intended for developers, our team invites you to try out this for your site. You can start by reading our documentation. Please share your feedback in our discussion group and file any bugs you find through the Chromium issue tracker.




Good news for extension developers: as of today, extensions are turned on by default on Google Chrome's dev channel.

Extensions are small pieces of software that developers can write to customize the way Google Chrome works. We've been working on enabling extensions for a while, but until now, they were hidden behind a developer flag. As of today, this is no longer true. If you're on the dev channel, you can try installing some of our sample extensions.

Removing the flag is the first step in our launch process, and it means we're ready for a few more people to start using extensions -- the kind of adventurous people who populate the dev channel. For this release, we focused on getting most of the basic infrastructure and security pieces in place, in particular our new permission system. However, you should still be cautious and only install extensions from developers you trust.

Going forward, we are working hard towards a release on the Beta Channel. The UI is likely to change as we bring it up to Google Chrome's high standard, and we're still finishing up a few APIs. We've also enlisted some help to get extensions up to speed on Mac and Linux.

From the beginning, we've strived to make extensions super easy to develop. If you'd like to give it a try, you'll find everything you need to get started in our brand new documentation. If you've already written an extension, make sure to read this post about some recent changes.

Posted by Aaron Boodman, Software Engineer

Starting in the Dev channel release 4.0.203.2, we are using our new FTP implementation by default on Windows. (It was already enabled by default on Linux and Mac.) This switchover is an important milestone in the development of our network stack. We'd like to acknowledge two Chromium contributors who made this possible.

The new FTP implementation was initially written by Ibrar Ahmed single-handedly. It was a long journey for him because he worked on it in his spare time. Ibrar has a master's degree in computer science from International Islamic University. After working as a software engineer and associate architect at other companies, he recently started his own tele-medicine company. We thank Ibrar for his contribution to the Chromium network stack!

Paweł Hajdan Jr. started to work on the new FTP code in July as one of his summer intern projects at Google. Paweł added new unit tests, fixed bugs and compatibility issues, and is taking the lead in bringing the new FTP code to production quality.

Finally, we used Mozilla code for parsing and formatting FTP directory listings (ParseFTPList.cpp), which was originally written by Cyrus Patel.

In the near term, the original WinInet-based FTP implementation will still be available as an option on Windows. Specify the --wininet-ftp command-line option to enable it. (The original --new-ftp option is now obsolete and ignored.) During this period we will fix FTP bugs only in the new FTP implementation. When we're happy with the quality of the new FTP code, we will remove the original WinInet-based implementation, finally eliminating our dependency on WinInet.

Please help us achieve that goal by testing FTP with a Dev channel release and filing bug reports. Follow these guidelines when reporting bugs:
  • Please don't add a comment like "Here is another URL that doesn't work for me" to a bug. Always open a new bug, and give a link to another bug if you think they are similar.
  • Make the steps to reproduce as detailed as possible, and always include the version number of Chrome.
  • Check if the problem can be reproduced with --wininet-ftp on Windows and include that information in the bug report.

As of today's dev channel build, we're adding a brand new feature to Google Chrome: bookmark sync. Many users have several machines, one at home and one at work for example. This new feature makes it easy to keep the same set of bookmarks on all your machines, and stores them alongside your Google Docs for easy web access.

To activate this feature, launch Google Chrome with the --enable-sync command-line flag. Once you set up sync from the Tools menu, Chrome will then upload and store your bookmarks in your Google Account. Anytime you add or change a bookmark, your changes will be sent to the cloud and immediately broadcast to all other computers for which you've activated bookmark sync (using the same XMPP technology as Google Talk).

For more information on this, please see this email to chromium-dev.

Happy syncing!

There's been some public discussion lately about memory usage in Google Chrome. We think about our memory usage quite a bit, so we're happy to see other people paying attention too. This has been a topic of discussion before, but our multi-process architecture makes measuring memory utilization difficult with the standard set of tools. The crux of the problem is that Chromium goes to great lengths to share memory between processes. However, that shared memory is difficult to account for in the Windows Task Manager. On Windows XP, using the default Task Manager measurement of memory leads to double-counting. On Vista, using the default view leads to under-counting.

There are a couple of more accurate ways to measure memory utilization in Chromium (or Google Chrome). The easiest is to crack open the task manager that is built into Chromium, which tries to account for our memory usage more holistically. If you want even more detail, you can click on "Stats for nerds", which is a link to about:memory.


If you don't fully trust Chromium's task manager or about:memory, the gold standard for measuring memory usage is to look at the system's total commit charge before, during, and after using Chromium. It's a little tricky to get right because you'll need to shut down other services that may kick in while you are running your test. Here's the basic procedure:
  1. Shut down any unnecessary services
  2. Reboot your computer
  3. Using the windows task manager, measure the Total Commit Charge of the system*
  4. Run the application you are seeking to test, in this case, Chromium
  5. Measure the Total Commit Charge again
  6. Close the application
  7. Measure the Total Commit Charge one more time
  8. Subtract your first measurement from your second, and you should have the memory used by Chromium
  9. To validate your test, make sure that the first and last measurement are nearly identical
*On XP, Commit Charge shows up on the bottom of the Windows Task Manager. On Vista, look at the Performance tab of the Windows Task Manager and use the "Memory" number.

For more information on memory usage and how to measure it, check out the Memory Usage Backgrounder on chromium.org.

We recently announced the availability of developer tools for Google Chrome. We are now releasing ChromeDevTools, which enables JavaScript debugging using Eclipse.

You can set breakpoints, inspect variables and evaluate expressions all from within Eclipse. The screenshot shows the debugger in action stopped at a breakpoint.


The project is fully open source under a BSD license and consists of two components: an SDK and a debugger. The SDK provides a Java API that enables communication with Google Chrome over TCP/IP. The debugger is an Eclipse plugin that uses the SDK and enables you to debug JavaScript running in Google Chrome from the Eclipse IDE.

We hope this project will help web app developers and welcome feedback as well as contributions.

Since we began work on an extensions system for Chromium, we've received a lot of positive feedback. While the system is not yet complete, we've noticed that a lot of you have started creating and installing extensions for daily use. This is really encouraging, and it motivates us to quickly finish things up, to enable extensions by default on all Google Chrome releases.

If you're using extensions now, you should keep in mind that they are powerful software. Extensions integrate with your browser, so they can access and change everything that happens in it. For example, the same technology that enables an extension to periodically check the number of messages in your Gmail inbox could also be used to read all your personal mail and tweet it to your mom! This can happen because of malicious intent or simply because of a bug.

To help protect your experience when using extensions, we recently enabled auto-update for extensions on the dev channel release. Like Chrome's auto-update mechanism, extensions will be updated using the Omaha protocol, giving developers the ability to push out bug fixes and new features rapidly to users of their extensions. This is an important step towards a v1 release of extensions for all users, so we're pretty excited.

In addition, when we turn the extension system on, we plan to offer a gallery with ratings and comments that you can use to judge whether you want to install a particular extension. We will also have processes in place that, combined with reports from users, should help limit the number of malicious extensions that get uploaded and distributed to users. These processes will include removal of extensions that we have reason to believe are malicious. Until these things are in place and the extension system is officially launched, we recommend that you only install extensions that you built yourself.

We have just started using a new compression algorithm called Courgette to make Google Chrome updates small.

We have built Google Chrome to address multiple factors that affect browser security. One of the pillars of our approach is to keep the software up to date, so we push out updates to Google Chrome fairly regularly. On the stable channel these are mainly security bug fixes, but the updates are more adventurous and numerous on the developer channel.

It is anathema to us to push out a whole new 10MB update to give you a ten-line security fix. We want smaller updates because it narrows the window of vulnerability. If the update is a tenth of the size, we can push ten times as many per unit of bandwidth. We have enough users that this means more users will be protected earlier. A secondary benefit is that a smaller update will work better for users who don't have great connectivity.

Rather than push out a whole new 10MB update, we send out a diff that takes the previous version of Google Chrome and generates the new version. We tried several binary diff algorithms and have been using bsdiff up until now. We are big fans of bsdiff - it is small and worked better than anything else we tried.

But bsdiff was still producing diffs that were bigger than we felt were necessary. So we wrote a new diff algorithm that knows more about the kind of data we are pushing - large files containing compiled executables. Here are the sizes for the recent 190.1->190.4 update on the developer channel:
  • Full update: 10,385,920 bytes
  • bsdiff update: 704,512 bytes
  • Courgette update: 78,848 bytes
The small size in combination with Google Chrome's silent update means we can update as often as necessary to keep users safe.

More information on how Courgette works can be found here.

Today we're releasing the Sputnik JavaScript test suite. Sputnik is a comprehensive set of more than 5000 tests that touch all aspects of the JavaScript language as defined in the ECMA-262 standard.

Soon after the V8 project started we also began work on what would become the Sputnik tests. The goal was to create a test suite based directly on the language spec that checked the behavior of every object, function and individual algorithm in the language. The task was given to a team in Russia – hence the name "Sputnik" – which went about systematically producing tests. As the test suite grew we used it to ensure that V8 conformed to the spec and to detect unexpected changes in our behavior.

Now that the test suite is complete, we're happy to be able to release it as an open source project under the BSD license. We hope Sputnik can be as useful to other implementers of JavaScript as it has been to us, particularly at a time when implementations change rapidly.

The goal is not that all implementations should pass all tests. V8 set out with that intention and we learned the hard way that sometimes you have to be incompatible with the spec to be compatible with the web. Rather, we want Sputnik to be a tool for identifying differences between implementations.

One of the biggest challenges for web developers today is the many incompatibilities between browsers. Finding these differences is the first step towards removing them. In an ideal world web developers would not have to worry about which browser is being used to view their site and users would not have to worry about whether a site supported their browser. We hope the Sputnik tests will make the browser community take another step towards making that a reality.

Since the initial launch of Google Chrome back in September, we have had the Elements and Resources tabs of WebKit's Inspector available. We are now ready to present the Inspector's Scripts and Profiles panels, built on top of the V8 engine, providing web developers with a full-featured JavaScript debugger and a sample-based profiler in the dev channel release of Google Chrome. We are also re-introducing the Elements and Resources tabs running out of process for better robustness, security, and support for the new debugger and profiler setup.

You can invoke the new developer tools by selecting "JavaScript console" from the Developer menu (or using Ctrl+Shift+J). For example, running the statistical profiler on the V8 benchmark suite (screenshot below) gives exact information about the actual code execution, as the data is generated straight from running the optimized code in V8.


As with the rest of Google Chrome, the developer tools are open source and built upon WebKit, in particular WebKit's Inspector. We would love to get feedback - both bug reports and feature requests - on the Chromium public issue tracker. Or better yet, we would love contributions that improve the developer tools further in WebKit and Google Chrome.

We're excited to see many people are experimenting with the upcoming extension features of Chrome in the dev channel. We're getting a lot of great feedback and are working hard to bring extensions to the stable channel as quickly as possible.

First of all, we've set up a new discussion group for extension-related topics. Going forward, chromium-extensions will be your one-stop shop for extension development news, feedback and questions. If you're interested in developing extensions, we invite you to join us at chromium-extensions.

Second, as part of the latest dev channel release, we've had to make a breaking change to the crx format. This change adds signatures to our package format, which are necessary to enable automatic updates. Unfortunately, this means that any existing extensions will stop working, and will have to be repackaged.
  • If you've developed an extension, you can learn how to repackage your extensions for Chrome v 3.0.189.0 in the packaging doc on our developer site. Note that your extension ID will now be your public key, so you'll have to change any code that uses that.
  • If you're using an extension someone else has developed, you will have to reinstall it once the developer has repackaged it (as described above). We've already updated our sample extensions.
Even though the whole point of the dev channel is to make our APIs available early while they're still changing, we don't make these changes lightly. Once we push the extension system to the stable channel, breaking changes should be very rare (we'd like to say non-existent, but we don't want to jinx ourselves).

With the release of Mac Chrome to the dev-channel, I wanted to talk about open source and expectations. What was the point of releasing at this stage, you might ask? It's clearly not finished. Clearly. It's missing a large number of features, some half implemented, others not at all. Why even bother? Doesn't it just make us look bad?

Open source projects aren't simply about a runnable binary, they're about the community of users, testers, and developers who devote their time and skills to working on a product they believe in. They go hand in hand: there's no binary without the community and there's no community without the binary. At some point in the life-cycle of a project, you have to stop thinking solely about your small band of developers and start growing the larger supporting community that will become your users, testers, localizers, documentation writers, and possibly even new coders.

In "The Cathedral and the Bazaar", Eric Raymond writes:
"When you start community-building, what you need to be able to present is a plausible promise. Your program doesn't have to work particularly well. It can be crude, buggy, incomplete, and poorly documented. What it must not fail to do is (a) run, and (b) convince potential co-developers that it can be evolved into something really neat in the foreseeable future."
We in the Chromium project feel like our Mac and Linux builds are at this stage, if not beyond it. They run pretty well and demonstrate the fundamental architecture that sets Chromium apart from other browsers. Sure, the bells and whistles aren't all there, but the core functionality of web browsing is. We feel that we've delivered on ESR's "plausible promise" and that it's enough to start attracting those who really want to help make this the best product it can be. We're not done yet, nor is it ready for the average user. It is, however, ready for those who want to live on the bleeding edge and help lend their talents towards completing it.

The community we build today is what will make it a better product down the road, and without that community the product will ultimately suffer. ESR describes testers as "a project's most valuable resource," and my first-hand experience with Camino and Mozilla bears this out. A web browser is a program that accepts an infinite number of inputs, and having people who can test webpages the developers wouldn't normally encounter is a tremendous aid. Testing on diverse hardware and software setups is also invaluable, as developers tend to only run the latest and greatest (and fastest!). Eventually we might uncover many of these issues on our own, but probably not.

Another pillar of open source, along with releasing early, is releasing often. To that end, the dev channel will automatically receive weekly updates as development continues. You will be able to see the product improving from week to week and help immediately identify when things break. Getting feedback on new features as soon as they are completed helps the developers know if they hit the mark and helps close the feedback loop with the community. The community benefits by being more involved and connected and promoting further transparency in the development process. This wouldn't be possible if we only teased users with releases at widely-spaced intervals when most decisions had been set in stone (end-users who want that can use the beta or release channels).

Right now we need your help, and it doesn't take a PhD in computer science. Read the bug reporting guidelines for Mac and Linux and get involved.

In order to get more feedback from developers, we have early developer channel versions of Google Chrome for Mac OS X and Linux, but whatever you do, please DON'T DOWNLOAD THEM! Unless of course you are a developer or take great pleasure in incomplete, unpredictable, and potentially crashing software.

How incomplete? So incomplete that, among other things, you won't yet be able to view YouTube videos, change your privacy settings, set your default search provider, or even print.

Meanwhile, we'll get back to trying to get Google Chrome on these platforms stable enough for a beta release as soon as possible!

Posted by Mike Smith and Karen Grunberg, Product Managers

Google Chrome is moving fast. Version 2.0 was stabilized just six months after 1.0, and auto-updates have ensured that nearly all users are using the newest version of the browser within days of a release. As a web developer, it can be a bit daunting that the browser version changes so fast: What if the new version breaks something? How can I be prepared for changes that will affect my sites?

To answer these questions, it's helpful to know how Google Chrome releases are made, the relationship between "dev," "beta," and "stable" update channels, and how you can test new versions. In this post, we'll be expanding on Mark Larson's earlier explanation of the update channel system.
  • Stable channel. As Mark outlines, the Stable channel is, well, stable. As a web developer, that means that as long as the major version — the "2" in "version 2.0.181.1" — doesn't change, you can count on Stable channel builds to use the same versions of WebKit (CSS, layout, etc.), V8 (JavaScript), and other components that might affect how a page loads or renders. Stable updates between major version releases are generally focused on addressing security issues, fixing egregious bugs, and improving stability. The big developer-facing bits of the browser won't change on the Stable channel until the next major version is released, and you can always preview upcoming changes using the Beta channel.
  • Beta channel. As a web developer, being on the Beta channel will ensure that you can test your sites with the next version of Google Chrome's rendering behavior before it's sent to the Stable channel and into the hands of most users. Whenever a major version lands in the Beta channel, the versions of WebKit, V8, networking, and the other systems that affect how web pages load and render generally become fixed. These versions may change during the major version's beta cycle, but changes are usually incremental fixes to help stabilize a feature rather than changes in behavior. New versions of WebKit may be introduced during a beta period, but those versions are always accompanied by a new build number (e.g. 2.0.169.xx vs. 2.0.172.xx) and are unlikely to differ drastically. As this major version moves closer to a stable release, these kinds of changes become more and more infrequent. Since Google Chrome development moves so quickly, you should stay on the Beta channel to catch compatibility issues ahead of time.
  • Dev channel. The Dev channel is where the sausage gets made. Dev releases happen frequently, and they track what's happening upstream in WebKit, V8, and other relevant systems very closely. This means that changes that might affect rendering, performance, and layout are likely to occur on the Dev channel on a regular basis. We don't recommend that you install the Dev channel if you're looking to maintain site compatibility, since tracking breaking changes as they happen can be a major headache. You should be able to spot any problems early enough via the Beta channel.
Users are on the Stable channel by default. To get onto the Beta or Dev channel, follow these instructions. Once you change to a less stable channel, e.g. from Stable to Dev, there isn't a supported "downgrade" path. If you change from the less stable channel back to a more stable one, Google Chrome will simply stop updating until your new channel "catches up" with the installed build. To force an immediate downgrade, uninstall and reinstall using an appropriate installer. This may occasionally cause errors when your more stable (older) version tries to read the newer user data left over from the previous installation.

Once you have a copy of Google Chrome, you can test your site's compatibility. Google Doctype has a helpful FAQ on best practices for Google Chrome compatibility. In short, prefer object detection over userAgent string parsing; don't rely on pixel-accurate font and element sizes; declare your pages' encodings correctly; double check <object> and <embed> parameters; check for illegal markup; and avoid browser-specific CSS.

The Chromium and WebKit teams work hard to ensure compatibility with websites. If after reading the above you discover browser problems, please don't hesitate to file a bug. If a particular problem with your site occurs in both Google Chrome and a corresponding version of Safari, it may be due to a WebKit issue, which you can file in the WebKit bug tracker. If the problem only happens in Google Chrome, log an issue in the Chromium bug tracker.

Sandboxing is a technique that Google Chrome employs to help make the browser more secure, and was discussed in a previous blog post. On Windows, getting a process sandboxed in a way that's useful to us is a pretty complicated affair. The relevant source code consists of over 100 files and is located under the sandbox/ directory in Chromium's open source repository. But for our Mac and Linux ports, sandboxing is a very different story. On Linux there are a number of different sandboxing mechanisms available; different distributions ship with different (or no) sandboxing APIs, and finding a mechanism that is guaranteed to work on end users' machines is a challenge. Fortunately, on Mac OS X, the OS APIs for sandboxing a process are straightforward and easy to use.

Sandboxing on the Mac

Starting a sandbox involves a single call to sandbox_init() specifying which resources to block for a specific process. In our case we lock down the process pretty tightly. That means no network access, and very limited or no access to files and Mach ports.
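
To make that concrete, here is a minimal sketch of turning the sandbox on with one of the stock named profiles declared in <sandbox.h>. Chromium's real setup uses more elaborate profiles and error handling; the profile constant and flag below are just the standard ones from the system header.

```cpp
// Minimal sketch: enable the OS X sandbox for the current process using a
// stock named profile from <sandbox.h>. Not Chromium's actual profile.
#include <sandbox.h>
#include <cstdio>

bool EnableSandbox() {
  char* error = NULL;
  // kSBXProfilePureComputation forbids network access and almost all
  // file system and Mach port access.
  if (sandbox_init(kSBXProfilePureComputation, SANDBOX_NAMED, &error) != 0) {
    fprintf(stderr, "sandbox_init failed: %s\n", error ? error : "(no detail)");
    if (error)
      sandbox_free_error(error);
    return false;
  }
  return true;  // From this point on, open() and connect() will fail.
}
```

Because the profile is enforced by the kernel, anything the process legitimately needs after this point has to be acquired, or at least opened, before the call.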

When Chromium starts a renderer process, we open an IPC channel (a UNIX socketpair) back to the browser process before turning on the sandbox. Any resources a process owns before turning on the sandbox stay with the process, so this channel can still be used after the sandbox is enabled. When we want to pass a shared memory area between processes, we send it over as an mmapped file handle using the sendmsg() API. We don't need to do anything else, as Apple's sandbox API is smart enough to allow access to file descriptors passed between processes in this manner even if the receiving process itself is forbidden from calling open().
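
The descriptor-passing trick relies on the standard SCM_RIGHTS control message. The sketch below shows the general shape of sending a file descriptor over the socketpair; the function and variable names are illustrative, not Chromium's actual IPC code.

```cpp
// Sketch: pass a file descriptor (e.g. one backing a shared memory region)
// across a UNIX socketpair so the sandboxed peer can mmap() it.
#include <sys/socket.h>
#include <sys/uio.h>
#include <cstring>

bool SendFileDescriptor(int channel_fd, int fd_to_send) {
  char payload = 'x';  // sendmsg() needs at least one byte of regular data.
  struct iovec iov = { &payload, sizeof(payload) };

  char control[CMSG_SPACE(sizeof(int))];
  memset(control, 0, sizeof(control));

  struct msghdr msg = {};
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = control;
  msg.msg_controllen = sizeof(control);

  struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;  // "The payload of this message is an fd."
  cmsg->cmsg_len = CMSG_LEN(sizeof(int));
  memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

  return sendmsg(channel_fd, &msg, 0) != -1;
}
```

The receiving side pulls the descriptor back out with recvmsg() and can then mmap() the region, even though it can no longer open() files by path.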

One sticky point we run into is that the sandboxed process still calls through to OS X system APIs, and there is no documentation about which privileges each API needs: whether it reads files from disk, or calls other APIs that the sandbox restricts. Our approach to date has been to "warm up" any problematic API calls before turning the sandbox on: we call through to the API so that it can cache whatever resources it needs. For example, color profiles and shared libraries can be loaded from disk before we "lock down" the process. To get a more complete understanding of our use of the sandbox on OS X, you can read the OS X sandboxing design doc.
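
As an illustration of that warm-up pattern, the sketch below forces Core Graphics to load a color profile from disk before the sandbox is enabled. The specific call is only an example of the kind of API that needs warming; it is not meant as a list of what Chromium actually warms up.

```cpp
// Sketch: warm up an API that lazily reads files from disk, then lock the
// process down. After sandbox_init(), the cached data keeps working even
// though the file system is no longer reachable.
#include <ApplicationServices/ApplicationServices.h>
#include <sandbox.h>

void WarmUpAndLockDown() {
  // Touching the generic RGB color space makes Core Graphics read the
  // color profile from disk now, while we still can.
  CGColorSpaceRef color_space =
      CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
  if (color_space)
    CGColorSpaceRelease(color_space);

  char* error = NULL;
  if (sandbox_init(kSBXProfilePureComputation, SANDBOX_NAMED, &error) != 0) {
    // In a real renderer process this failure would be treated as fatal.
    if (error)
      sandbox_free_error(error);
  }
}
```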

As we continue the porting efforts for Chromium on the Mac, it's very satisfying to see the puzzle pieces fit into place alongside the native system APIs. It's important to us that the Mac port of Chromium feels and performs like a native Mac application, and that it provides the kind of high-quality experience Mac users expect.

Today I gave a presentation at Google I/O explaining some of the cool ideas that lie at the heart of our upcoming extension system. For those who didn't get a chance to attend the conference, you can check out the slides below:

The actions menu, visible in full-screen mode, will let you show speaker notes. We'll also post a video of the talk as soon as it's available.

As some of you know, it's already possible to write extensions using the latest developer build of Google Chrome. You can find out more about the system, and learn how to write your first extension, by reading our HOWTO document. We've really focused on making extensions as easy as possible to write, so you'll be up and running in no time.

We're still pretty early in the development of the extensions system, and we're constantly adding new features and tweaking the APIs based on your feedback. So if you try it out, we'd love to hear from you at chromium-discuss@chromium.org.

Happy coding!

Update: Video of this talk is now available, as are videos of a number of other Google Chrome-related talks.

The V8 JavaScript engine has been designed for scalability. What does scalability mean in the context of JavaScript and why is it important for modern web applications?

Web applications are becoming more complex. With the increased complexity comes more JavaScript code and more objects. An increased number of objects puts additional stress on the memory management system of the JavaScript engine, which has to scale to deal efficiently with object allocation and reclamation. If engines do not scale to handle large object heaps, performance will suffer when running large web applications.

In browsers without a multi-process architecture, a simple way to see the effect of an increased working set on JavaScript performance is to log in to Gmail in one tab and run JavaScript benchmarks in another. The objects from the two tabs are allocated in the same object heap, so the benchmarks run with a working set that includes the Gmail objects.

V8's approach to scalability is to use generational garbage collection. The main observation behind generational garbage collection is that most objects either die very young or are long-lived. There is no need to examine long-lived objects on every garbage collection because they are likely to still be alive. Introducing generations to the garbage collector allows it to only consider newly allocated objects on most garbage collections.

Splay: A Scalability Benchmark

To keep track of how well V8 scales to large object heaps, we have added a new benchmark, Splay, to version 4 of the V8 benchmark suite. The Splay benchmark builds a large splay tree and modifies it by creating new nodes, adding them to the tree, and removing old ones. The benchmark is based on a JavaScript log processing module used by the V8 profiler and it effectively measures how fast the JavaScript engine can allocate nodes and reclaim unused memory. Because of the way splay trees work, the engine also has to deal with a lot of changes to the large tree.

We have measured the impact of running the Splay benchmark with different splay tree sizes to test how well V8 performs when the working set is increased:


The graph shows that V8 scales well to large object heaps, and that increasing the working set by more than a factor of 7 leads to a performance drop of less than 17%. Even though 35 MB is more memory than most web applications use today, it is necessary to support such working sets to enable tomorrow's web applications.

Since starting work at Google, I've formed a deep appreciation for the number of high-quality talks we have access to here (both technical and not). Reading code and documentation is pretty much unavoidable when you're a developer, but you really can't beat hearing about a topic you're interested in straight from an expert.

Last Wednesday, 5 Chromium experts gave mini tech talks on subjects ranging from the network stack to hacking on WebKit. Armed with 2 video cameras, a microphone, and a whiteboard, we did the best we could to capture these talks and make them available to Chromium developers around the world. Whether you're a seasoned Chromium contributor or just getting started, I think these videos have a lot to offer.

Here's a rundown of the videos:

Darin Fisher talking about Chromium's multi-process architecture
Brett Wilson talking about the various layers of Chromium
Dimitri Glazkov talking about hacking on WebKit
Ben Goodger talking about Views (and how to write good tests for them)
Wan-Teh Chang and Eric Roman talking about Chromium's network stack (and its history)

I hope these are just the first of many tech talks we can offer to you, the Chromium community.

Today, we shared with the open source community an early version of O3D, a new shader-based API for 3D graphics in the browser. We are excited about this release: we believe that a 3D API for the web will allow web developers to create powerful, immersive 3D apps that offer an experience comparable to client applications and game consoles. This will make the web better, not to mention more fun!

O3D is still at an early stage and is not part of the Chromium code base. However, we hope that, combined with projects like Mozilla's Canvas 3D, it will encourage discussion within the graphics and web communities about a new open standard for 3D graphics on the web. With JavaScript (and browsers) becoming faster every day, we believe it is the right time for such a standard to emerge. To help you participate in this broader discussion, Google has created a forum where you can submit suggestions on what features a 3D API for the web should have.

If you are interested in learning more about O3D, you can visit us at code.google.com/apis/o3d.

A video of the O3D Beach Demo