



Whether you prefer organizing your browser with tab groups, naming your windows, tab search, or another method, you have lots of features that help you get to the tabs you want. In this The Fast and the Curious post, we describe how we use information about which windows are visible to you to optimize Chrome, leading to 25.8% faster startup and 4.5% fewer crashes.


Background
For several years, to improve the user experience, Chrome has lowered the priority of background tabs[1]. For example, JavaScript is throttled in background tabs, and these tabs don't render web content. This reduces CPU, GPU and memory usage, which leaves more memory, CPU and GPU for foreground tabs that the user actually sees. However, the logic was limited to tabs that weren't focused in their window, or windows that were minimized or otherwise moved offscreen.

Through experiments, we found that nearly 20% of Chrome windows are completely covered by other windows, i.e., occluded. If these occluded windows were treated like background tabs, our hypothesis was that we would see significant performance benefits. So, around three years ago, we started working on a project to track the occlusion state of each Chrome window in real time, and lower the priority of tabs in occluded windows. We called this project Native Window Occlusion, because we had to know about the location of native, non-Chrome windows on the user’s screen. (The location information is discarded immediately after it is used in the occlusion calculation.)

Calculating Native Window Occlusion
The Windows OS doesn't provide a direct way to find out if a window is completely covered by other windows, so Chrome has to figure it out on its own. If we only had to worry about other Chrome windows, this would be simple because we know where Chrome windows are, but we have to consider all the non-Chrome windows a user might have open, and know about anything that happens that might change whether Chrome windows are occluded or not.

There are two main pieces to keeping track of which Chrome windows are occluded. The first is the occlusion calculation, which consists of iterating over the open windows on the desktop, in z-order (front to back) and seeing if the windows in front of a Chrome window completely cover it. The second piece is deciding when to do the occlusion calculation.

Calculating Occlusion
In theory, figuring out which windows are occluded is fairly simple. In practice, however, there are lots of complications, such as multi-monitor setups, virtual desktops, non-opaque windows, and even cloaked windows(!). This needs to be done with great care, because if we decide that a window is occluded when in fact it is visible to the user, then the area where the user expects to see web contents will be white. We also don't want to block the UI thread while doing the occlusion calculation, because that could reduce the responsiveness of Chrome and degrade the user experience. So, we compute occlusion on a separate thread, as follows:
  1. Ignore minimized windows, since they’re not visible.
  2. Mark Chrome windows on a different virtual desktop as occluded.
  3. Compute the virtual screen rectangle, which combines the display monitors. This is the unoccluded screen rectangle.
  4. Iterate over the open windows on the desktop from front to back, ignoring invisible windows, transparent windows, floating windows (windows with style WS_EX_TOOLBAR), cloaked windows, windows on other virtual desktops, non-rectangular windows[2], etc. Ignoring these kinds of windows may cause some occluded windows to be considered visible (false negatives) but importantly it won’t lead to treating visible windows as occluded (false positives). For each window:
    • Subtract the window's area from the unoccluded screen rectangle.
    • If the window is a Chrome window, check if its area overlapped with the unoccluded area. If it didn’t, that means the Chrome window is completely covered by previous windows, so it is occluded.
  5. Keep iterating until all Chrome windows have been accounted for.
  6. At this point, any Chrome window that we haven’t marked occluded is visible, and we’re done computing occlusion. Whew! Now we post a task to the UI thread to update the visibility of the Chrome windows.
  7. This is all done without synchronization locks, so the occlusion calculation has minimal effect on the UI thread, e.g., it will not ever block the UI thread and degrade the user experience.
For more detailed implementation information, see the documentation.
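The front-to-back walk above can be sketched in a deliberately simplified model. Here the screen is a coarse boolean grid and every window is an opaque rectangle; the struct and function names are illustrative only, and Chromium's real implementation works with native window regions obtained from the Windows API rather than a pixel grid:

```cpp
#include <cstddef>
#include <vector>

struct Rect { int x, y, w, h; };
struct Window {
  Rect bounds;
  bool is_chrome;
};

// Walks windows front to back (zorder[0] is frontmost) and returns one
// occlusion flag per entry; flags for non-Chrome windows stay false.
std::vector<bool> ComputeOcclusion(const std::vector<Window>& zorder,
                                   int screen_w, int screen_h) {
  std::vector<bool> covered(screen_w * screen_h, false);
  std::vector<bool> occluded(zorder.size(), false);
  for (size_t i = 0; i < zorder.size(); ++i) {
    const Window& win = zorder[i];
    bool any_visible = false;
    for (int y = win.bounds.y; y < win.bounds.y + win.bounds.h; ++y) {
      for (int x = win.bounds.x; x < win.bounds.x + win.bounds.w; ++x) {
        if (x < 0 || y < 0 || x >= screen_w || y >= screen_h) continue;
        if (!covered[y * screen_w + x]) any_visible = true;
        covered[y * screen_w + x] = true;  // Subtract from unoccluded area.
      }
    }
    // A Chrome window with no overlap left in the unoccluded area is
    // completely covered by the windows in front of it.
    if (win.is_chrome) occluded[i] = !any_visible;
  }
  return occluded;
}
```

A window partially hanging off every covering window stays visible, because at least one of its cells is still unoccluded when it is reached in the walk.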


Deciding When to Calculate Occlusion
We don't want to continuously calculate occlusion because it would degrade the performance of Chrome, so we need to know when a window might become visible or occluded. Fortunately, Windows lets you track various system events, like windows moving or getting resized/maximized/minimized. The occlusion-calculation thread tells Windows that it wants to track those events, and when notified of an event, it examines the event to decide whether to do a new occlusion calculation. Because we may get several events in a very short time, we don't calculate occlusion more than once every 16 milliseconds, which corresponds to the time a single frame is displayed, assuming a frame rate of 60 frames per second (fps).
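That 16 ms rate limit can be sketched as a small helper. The class and method names here are made up for illustration, and this is a simplification of Chrome's actual event handling:

```cpp
#include <chrono>

// Rate-limits occlusion recalculation to at most once per 16 ms.
// Taking the current time as a parameter keeps the sketch testable.
class OcclusionThrottler {
 public:
  // Returns true if a recalculation is allowed at time `now`.
  bool ShouldRecalculate(std::chrono::steady_clock::time_point now) {
    if (has_run_ && now - last_run_ < std::chrono::milliseconds(16))
      return false;  // Too soon after the last calculation: skip.
    has_run_ = true;
    last_run_ = now;
    return true;
  }

 private:
  bool has_run_ = false;
  std::chrono::steady_clock::time_point last_run_{};
};
```

In practice a burst of move/resize events collapses into at most one calculation per displayed frame at 60 fps.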

Some of the events we listen for are windows getting activated or deactivated, windows moving or resizing, the user locking or unlocking the screen, turning off the monitor, etc. We don’t want to calculate occlusion more than necessary, but we don’t want to miss an event that causes a window to become visible, because if we do, the user will see a white area where their web contents should be. It’s a delicate balance[3].

The events we listen for are focused on whether a Chrome window is occluded. For example, moving the mouse generates a lot of events, and a blinking cursor generates an event for every blink, so we ignore events that aren't for window objects. We also ignore events for most popup windows, so that showing a tooltip doesn't trigger an occlusion calculation.

The occlusion thread tells Windows that it wants to know about various Windows events. The UI thread tells Windows that it wants to know when there are major state changes, e.g., the monitor is powered off, or the user locks the screen.





Results
This feature was developed behind an experiment to measure its effect and was rolled out to 100% of Chrome Windows users in October 2020 as part of the M86 release. Our metrics show significant performance benefits with the feature turned on.
One reason for the startup and first-contentful-paint improvements is that when Chrome restores two or more full-screen windows at startup, one of the windows is likely to be occluded. Chrome will now skip much of the work for that window, saving resources for the more important foreground window.

Posted by David Bienvenu, Chrome Developer

Data source for all statistics: Real-world data anonymously aggregated from Chrome clients.
[1] Note that certain tabs are exempt from having their priority lowered, e.g., tabs playing audio or video.
[2] Non-rectangular windows complicate the calculations and were thought to be rare, but it turns out non-rectangular windows are common on Windows 7, due to some quirks of the default Windows 7 theme.
[3] When this was initially launched, we quickly discovered that Citrix users were getting white windows whenever another user locked their screen, due to Windows sending us session changed notifications for sessions that were not the current session. For the details, look here.



Chrome is fast, but there's always room for improvement. Often, that's achieved by carefully crafting the algorithms that make up Chrome. But there's a lot of Chrome, so why not let computers do at least some part of our work? In this installment of The Fast and the Curious, we'll show you several changes in how we build Chrome to achieve a 25.8% higher score on Speedometer on Windows and a 22.0% increase in browser responsiveness.


Why speed?
So why do we care about performance benchmarks? It's not a simple "higher numbers are better" chasing of achievements - performance is so important to Chrome that it's embedded in our core principles, the "4Ss": Speed, Security, Stability, Simplicity. Speed matters because we want to build a faster, more responsive browser. And by improving the speed of the browser, there's the additional benefit of extending battery life, so you don't have to charge your laptop or devices as often.


Speed? Size? Something Else?
Let's look at a typical optimization.


int foo();
int fiver(int num) {
  for(int j = 0; j < 5; j++)
    num = num + foo();
  return num;
}


The compiler can either compile this as a loop (smaller), or turn it into five additions in a row (faster, but bigger).
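In source form, the unrolled version might look like the sketch below. The compiler performs this transformation on its internal representation, not on source code, and foo() is given a dummy definition here only so the sketch is self-contained:

```cpp
// In the fiver() example, foo() lives in another source file; it is
// defined here (arbitrarily returning 3) only so this compiles standalone.
int foo() { return 3; }

// fiver() after the compiler unrolls the loop: no loop counter and no
// end-of-loop check, but five separate calls to foo() in the binary.
int fiver_unrolled(int num) {
  num = num + foo();
  num = num + foo();
  num = num + foo();
  num = num + foo();
  num = num + foo();
  return num;
}
```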

You save the cost of checking the end of the loop and incrementing a variable on every iteration. But in exchange, you now have five calls to foo() in a row. If the compiler does this for many loops across the codebase, that is a lot of space.

And while speed matters a lot, we also care about binary size. (Yes, we see your memes!) And that tradeoff - exchanging speed for memory, and vice versa, holds for a lot of compiler optimizations.

So how do you decide if the cost is worth it? One good way is to optimize for speed in areas that run often, because your speed wins accumulate each time a function runs. You could just guess at what to inline (your compiler does this with a heuristic - an educated guess), and then measure speed and code size.

The result: Likely faster. Likely larger. Is that good?

Ask any engineer a question like that, and they will answer “It depends”. So, how do you get an answer?


The More You Know… (profiling & PGO)
The best way to make a decision is with data. We collect data on what gets run a lot, and what gets run a little. We do that for several different scenarios, because our users do lots of different things with Chrome, and we want all of them to be fast.

Our goal is to collect performance data in various scenarios and use it to guide the compiler. Three steps are needed:
  1. Instrument for profiling
  2. Run that instrumented executable in various scenarios
  3. Use the resulting performance profile to guide the compiler.




But we can do more (ThinLTO)
That's a good start, but we can do better. Let's look at inlining - the compiler takes the code of a called function and inserts all of it at the callsite.


inline int foo() { return 3; }
int fiver_inline(int num) {
  for(int j = 0; j < 5; j++)
    num = num + foo();
  return num;
}


When the compiler inlines foo(), it turns into


int fiver_inline(int num) {
  for(int j = 0; j < 5; j++)
    num = num + 3;
  return num;
}


Not bad - that saves us the function call and all the setup that goes with having a function. But the compiler can in fact do even better - because now all the information is in one place, it can deduce that fiver_inline() adds the number 3, and does so five times - and so the entire body boils down to


return num + 15;


Which is awesome! But the compiler can only do this if the source code for foo() and the location where it is called are in the same source file - otherwise, the compiler does not know what to inline. (That's the fiver() example). A trivial way around that is to combine all the source files into one giant source file and compile and link that in one go.





There's just one downside - that approach needs to generate all of the machine code for Chrome, all the time. Change one line in one file, compile all of Chrome. And there's a lot of Chrome. It also effectively disables caching build results and so makes remote compilation much less useful. (And we rely a lot on remote compilation & caching so we can quickly build new versions of Chrome.)

So, back to the drawing board. The core insight is that each source file only needs a few functions from other files - it doesn't need to see every single other file. All that's needed is "cleverly" mixing the right inline functions into the right source files.





Now we're back to compiling individual source files. Distributed/cached compilation works again, small changes don't cause a full rebuild, and since "ThinLTO" only mixes in a few functions, it adds relatively little overhead.

Of course, the question of "which functions should ThinLTO inline?" still needs to be answered. And the answer is still "the ones that are small and called a lot". Hey, we know those already - from the profiles we generated for Profile Guided Optimization (PGO). Talk about lucky coincidences!


But wait, there's more! (Callgraph Sorting)
We've done a lot for inlined function calls. Is there anything we can do to speed up functions that haven't been inlined, too? Turns out there is.

One important factor is that the CPU doesn't fetch data byte by byte, but in chunks. So if a chunk doesn't just contain the function we need right now, but also the ones we'll need next, the CPU has to go out and fetch chunks less often.

In other words, we want functions that are called one right after the other to also live next to each other in memory ("code locality"). And we already know which functions are called close to each other - because we ran our profiling and stored performance profiles for PGO.

We can then use that information to ensure that the right functions are next to each other when we link.


For example, this:


g.c
  extern int f1();
  extern int f2();
  extern int f3();
  int g() {
    f1();
    for(..) {
      f3();
    }
    f1();
    f2();
  }


could be interpreted as "g() calls f3() a lot - so keep that one really close. f1() is called twice, so… somewhat close. And if we can squeeze in f2, even better". The calling sequence is a "call graph", and so this sorting process is called "call graph sorting".
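That reasoning can be sketched as a simple sort over profiled call counts. This is a toy model with hypothetical names and numbers; in reality the linker orders every function in the binary using the whole program's call graph from the PGO profiles:

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Given (callee name, profiled call count) pairs for one caller, return the
// callees in the order they should be laid out: hottest first, i.e. closest
// to the caller in memory.
std::vector<std::string> LayoutOrder(
    std::vector<std::pair<std::string, long>> call_counts) {
  std::stable_sort(call_counts.begin(), call_counts.end(),
                   [](const auto& a, const auto& b) {
                     return a.second > b.second;  // Hotter edges first.
                   });
  std::vector<std::string> order;
  for (const auto& entry : call_counts) order.push_back(entry.first);
  return order;
}
```

For the g.c example, a profile showing f3() called far more often than f1(), which is in turn called more than f2(), yields exactly the layout "f3, f1, f2" described above.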

Just changing the order of functions in memory might not sound like much, but it leads to a ~3% performance improvement. And to know which functions call which other ones a lot… yep. You guessed it. Our profiles from the PGO work pay off again.


One more thing.
It turns out that the compiler can make even more use of that PGO profile data. (Not a surprise - once you know exactly where the slow spots are, you can do a lot to improve!) To make use of that, and to enable further improvements, LLVM has something called the "new pass manager". In a nutshell, it's a new way to run optimizations within LLVM, and it helps a lot with PGO. For much more detail, I'd suggest reading the LLVM blog post.

Turning that on leads to another ~3% performance increase, and ~9MB size reduction.


Why Now?
Good question. One part of the answer is that PGO & profiling unlock an entire new set of optimizations, as you've seen above. It makes sense to do that all in one go.

The other reason is our toolchain. We used to have a colorful mix of different technologies for compilers and linkers on different platforms.





And since this work requires changes to compilers and linkers, that would mean changing the build - and testing it - across 5 compilers and 4 linkers. But, thankfully, we've simplified our toolchain (Simplicity - another one of the 4Ss!). To do this, we worked with the LLVM community to make clang a great Windows compiler, and partnered with that same community to create new ELF (Linux), COFF (Windows), and Mach-O (macOS, iOS) linkers.





And suddenly, it's only a single toolchain to fix. (Almost. LTO for lld on macOS is still being worked on.)

Sometimes, the best way to get more speed is not to change the code you wrote, but to change the way you build the software.

Posted by Rachel Blum, Engineering Director, Chrome Desktop

Data source for all statistics: Speedometer 2.0.









Posted by Theodore Olsauskas-Warren

At Chrome, we’re always looking for ways to help users better understand and manage privacy on the web. Our most recent change provides more clarity on controlling site storage settings.

Starting today with the M97 Beta, we are rolling out a change that reconfigures the Privacy and Security settings related to the data a site can store (e.g., cookies). Users can now delete all data stored by an individual site by navigating to Settings > Privacy and Security > Site Settings > View permissions and data stored across sites, where they'll land on chrome://settings/content/all. We will be removing the more granular controls found by navigating to Settings > Privacy and Security > Cookies and other site data > See all cookies and site data at chrome://settings/siteData. This capability remains accessible in DevTools for developers, the intended audience for this level of granularity.
Old: We are removing this page. The controls for web-facing storage are now available at chrome://settings/content/all.

New: Here, in chrome://settings/content/all, users will be able to delete web-facing storage.





Why the change?
We believe that removing these granular controls from Settings creates a clearer experience for users. When users delete individual cookies, they can accidentally change the implementation details of a site and break their experience on it, in ways that are difficult to predict. Even more capable users run the risk of compromising some of their privacy protection by incorrectly assuming the purpose of a cookie.
We see this functionality being primarily used by developers, and therefore remain committed to providing them with the tools they need in DevTools, where they can continue to access more technical detail on a per-cookie or per-storage level as needed.
Granular cookie controls remain available in DevTools.
As always, we welcome your feedback as we continue to build a more helpful Chrome. Our next step is working to remove this functionality from Page Info to keep all granular cookie controls in DevTools. If you have any other questions or comments on Storage Controls, please share them with us here.

Unless otherwise noted, changes described below apply to the newest Chrome beta channel release for Android, Chrome OS, Linux, macOS, and Windows. Learn more about the features listed here through the provided links. Chrome 97 is beta as of November 18, 2021.

Preparing for a Three Digit Version Number

Next year, Chrome will release version 100. This will add a digit to the version number reported in Chrome's user agent string. To help site owners test for the new string, Chrome 96 introduces a runtime flag that causes Chrome to return '100' in its user agent string. This flag, chrome://flags/#force-major-version-to-100, is available from Chrome 96 onward. For more information, see Force Chrome major version to 100 in the User-Agent string.

Features in this Release

Auto-expand Details Elements

Closed details elements are now searchable and can be linked to. These hidden elements automatically expand when find-in-page, ScrollToTextFragment, or element fragment navigation is used.

Content-Security-Policy Delivery via Response Headers for Dedicated Workers.

Dedicated workers are now governed by Content Security Policy. Previously, Chrome incorrectly applied the Content Security Policy of the owner document.

CSS

font-synthesis Property

The font-synthesis CSS property controls whether user agents are allowed to synthesize oblique, bold, and small-caps font faces when a font family lacks oblique, bold, and small-caps faces, respectively. Without the font-synthesis property, some web pages that do not have font families with the required variations may display unnatural forms of fonts.

transform: perspective(none)

The perspective() function now supports the value 'none' as an argument. This causes the function to behave as though it were passed an argument that is infinite. This makes it easier (or, in some cases, possible) to do animations involving the perspective() function where one of the endpoints of the animation is the identity matrix.

Feature Policy for Keyboard API

Chrome supports a new keyboard-map value for the allow list of a feature policy. Keyboard.getLayoutMap(), which helps identify the key pressed across different keyboard layouts such as English and French, was previously unavailable in iframe elements. With the new feature policy value, web apps built with iframes (such as Excel, Word, and PowerPoint) that could not use the Keyboard API can now do so.

HTMLScriptElement.supports() Method

The HTMLScriptElement.supports() method provides a unified way to detect new features that use script elements. Previously, there was no simple way to know which types can be used for the type attribute of HTMLScriptElement.

Late Newline Normalization in Form Submission

Newlines in form entries are now normalized the same way as in Gecko and WebKit, solving a long-standing interoperability problem where Gecko and WebKit normalized newlines late, while Chrome did it early. Starting in Chrome 97, early normalization is removed and late normalization is extended to all encoding types.

Standardize Existing Client Hint Naming

Chrome 97 standardizes client hint names by prefixing them with Sec-CH-. Affected client hints are dpr, width, viewport-width, device-memory, rtt, downlink, and ect. Chrome will continue to support existing versions of these hints. Nevertheless, web developers should plan for their eventual deprecation and removal.

WebTransport

WebTransport is a protocol framework that enables clients constrained by the Web security model to communicate with a remote server using a secure multiplexed transport.

Currently, Web application developers have two APIs for bidirectional communications with a remote server: WebSockets and RTCDataChannel. WebSockets are TCP-based, thus having all of the drawbacks of TCP (head of line blocking, lack of support for unreliable data transport) that make it a poor fit for latency-sensitive applications. RTCDataChannel is based on the Stream Control Transmission Protocol (SCTP), which does not have these drawbacks; however, it is designed to be used in a peer-to-peer context, which causes its use in client-server settings to be fairly low. WebTransport provides a client-server API that supports bidirectional transfer of both unreliable and reliable data, using UDP-like datagrams and cancellable streams. WebTransport calls are visible in the Network panel of DevTools and identified as such in the Type column.

For more information, see Experimenting with WebTransport.

JavaScript

This version of Chrome incorporates version x.x of the V8 JavaScript engine. It specifically includes the changes listed below. You can find a complete list of recent features in the V8 release notes.

Array and TypedArray findLast() and findLastIndex()

Array and TypedArray now support the findLast() and findLastIndex() methods. These functions are analogous to find() and findIndex() but search from the end of an array instead of the beginning.

Deprecations and Removals

This version of Chrome introduces the deprecations and removals listed below. Visit ChromeStatus.com for lists of current deprecations and previous removals.

Remove SDES Key Exchange for WebRTC

The SDES key exchange mechanism for WebRTC has been declared a MUST NOT in the relevant IETF standards since 2013. The SDES specification has been declared historic by the IETF, and its usage in Chrome has declined significantly over the past year. Consequently, it is removed as of Chrome 97.

Remove WebSQL in Third-Party Contexts

WebSQL in third-party contexts is now removed. The Web SQL Database standard was first proposed in April 2009 and abandoned in November 2010. Gecko never implemented this feature and WebKit deprecated it in 2019. The W3C encourages Web Storage and Indexed Database for those needing alternatives.

Remove SDP Plan B

The Session Description Protocol (SDP) used to establish a session in WebRTC has been implemented with two different dialects in Chromium: Unified Plan and Plan B. Plan B is not cross-browser compatible and is hereby removed.



Mobile devices are generally more resource constrained than laptops or desktops. Optimizing Chrome’s resource usage is critical to give mobile users a faster Chrome experience. As we’ve added features to Chrome on Android, the amount of Java code packaged in the app has continued to grow. In this The Fast and the Curious post we show how our team improved the speed and memory usage of Chrome on Android with Isolated Splits. With these improvements, Chrome on Android now uses 5-7% less memory, and starts and loads pages even faster than before.

The Problem
For Android apps (including Chrome on Android), compiled Java code is stored in .dex files. The user's experience in Chrome on Android is particularly sensitive to increases in .dex size due to its multi-process architecture. On Android, Chrome will generally have 3+ processes running at all times: the browser process, the GPU process, and one or more renderer processes. The vast majority of Chrome's Java code is used only in the browser process, but the performance and memory cost of loading the code is paid by all processes.
 
Bundles and Feature Modules
Ideally, we would load the smallest chunk of Java necessary for a process to run. We can get close to this by using Android App Bundles and splitting code into feature modules. Feature modules allow splitting code, resources, and assets into distinct APKs installed alongside the base APK, either on-demand or during app install.

Now, it seems like we have exactly what we want: a feature module could be created for the browser process code, which could be loaded when needed. However, this is not how Android loads feature modules. By default, all installed feature modules are loaded on startup. For an app with a base module and three feature modules “a”, “b”, and “c”, this gives us an Android Context with a ClassLoader that looks something like this:





Having a small minimum set of installed modules that are all immediately loaded at startup is beneficial in some situations. For example, if an app has a large feature that is needed only for a subset of users, the app could avoid installing it entirely for users who don't need it. However, for more commonly used features, having to download a feature at runtime can introduce user friction -- for example, additional latency or challenges if mobile data is unavailable. Ideally we'd be able to have all of our standard modules installed ahead of time, but loaded only when they're actually needed.

Isolated Splits to the Rescue
A few days of spelunking in the Android source code led us to the android:isolatedSplits attribute. If this is set to "true", each installed split APK will not be loaded during start-up, and instead must be loaded explicitly. This is exactly what we want to let our processes use fewer resources! The ClassLoader illustrated above now looks like this:





In Chrome’s case, the small amount of code needed in the renderer and GPU processes can be kept in the base module, and the browser code and other expensive features can be split into feature modules to be loaded when needed. Using this method, we were able to reduce the .dex size loaded in child processes by 75% to ~2.5MB, making them start faster and use less memory.

This architecture also enabled optimizations for the browser process. We were able to improve startup time by preloading the majority of the browser process code on a background thread while the Application initializes, leading to a 7.6% faster load time. By the time an Activity or other component that needed the browser code was launched, the code would already be loaded. And by optimizing how features are allocated into feature modules, features can be loaded on demand, deferring the memory and loading cost until a feature is actually used.

Results
Since Chrome shipped with Isolated Splits in M89, we now have several months of data from the field, and are pleased to share significant improvements in memory usage, startup time, page load speed, and stability for all Chrome on Android users running Android Oreo or later:
  • Median total memory usage improved by 5.2%
  • Median renderer process memory usage improved by 7.9%
  • Median GPU process memory usage improved by 7.6%
  • Median browser process memory usage improved by 1.2%
  • 95th percentile startup time improved by 7.6%
  • 95th percentile page load speed improved by 2.3%
  • Large improvements in both browser crash rate and renderer hang rate
Posted by Clark Duvall, Chrome Software Engineer

Data source for all statistics: Real-world data anonymously aggregated from Chrome clients.

The big day is finally here. Today, at Chrome Dev Summit 2021 we shared some of the highlights of what we've been working on — the latest product updates, vision for the web's future and examples of best-in-class web experiences. Over the past year, we've also had a lot of feedback that you want to spend more time learning from and working with the Chrome team and other industry experts. I'm excited to share with you that we've opened up a lot of spaces for 1:1 office hours, workshops and learning lounges to give you more opportunity to connect with the Chrome team.

It's been a busy year for us all. With the continued shift of people moving more of their lives online, it has been more important than ever for us to continue investing in Web Compat, and we've been amazed to see improvements in compatibility across the board that make it easier for you to build sites that work in all browsers, for everyone who uses the web.

We've also got a number of important updates to core topics that are important to every developer:

  • An update on how we're helping to shift the web towards more privacy-safe technologies and give you more visibility into that process.
  • A showcase of how many major companies have bet on the web and brought advanced app-like experiences to anyone who can use a browser.
  • An update on Core Web Vitals and some new tools that will make it easier for you to measure your sites.
  • A dive into the "New Responsive" with highlights of new tools and capabilities for designers that make it easier than ever to build experiences your users love.

This post is an overview of the latest updates from this year's Chrome Dev Summit keynote.


Paving a Path Toward a More Secure Web

The Privacy Sandbox continues to be a cornerstone of our ongoing efforts to collaboratively build privacy-preserving technologies for a healthy web. Our development timeline, which we'll update monthly, shares when developers and advertisers can expect these technologies to be ready for testing and scaled adoption.
This timeline reflects three developmental phases for Privacy Sandbox proposals:

1) Discussion

Dozens of ideas for privacy-preserving technologies have been proposed by Chrome and others for public discussion in forums such as the W3C and GitHub. For example, more than 100 organizations are helping to refine FLEDGE, a proposal for privacy-preserving remarketing.

2) Testing

Success at this stage depends on developers engaging in hands-on testing then sharing their learnings publicly. Yahoo! JAPAN's analysis of the Attribution Reporting API and Criteo's machine learning competition for evaluating privacy concepts are examples we're grateful for.

This kind of feedback is critical to getting solutions right. For instance, we're currently improving FLoC — a proposal for anonymized interest groups — with insights from companies such as CafeMedia.

3) Scaled Adoption

Some Privacy Sandbox proposals are already live, such as User-Agent Client Hints, which are meant to replace the User-Agent (UA) string. We'll start to gradually reduce the granularity of information in the UA string in April 2022. We know implementing these changes takes time, so companies will have the option to use the UA string as is through March 2023 via an origin trial.

Stepping up In-Browser Experiences

With Project Fugu, we've been introducing APIs that elevate web apps so they can do anything native apps can. We've also been inspired by brands building more immersive web experiences with Progressive Web Apps (PWAs) and modern APIs.

Take Adobe, a brand we've been partnering with for more than three years. Photoshop, Creative Cloud Spaces, and Creative Cloud Canvas are now in Public Beta and available in browsers—with more flagship apps to follow. This means creatives can view work, share feedback, and make basic edits without having to download or launch native apps.

PWAs have given online video and web conferencing platforms an upgrade too. TikTok found a way to reach video lovers across all devices while YouTube Premium gives people the ability to watch videos offline on laptops and hybrid devices.

Meet drastically improved the audio and video quality in their PWA, and Kapwing focused on making it easy for users to edit videos collaboratively, anytime, anywhere. Zoom replaced their Chrome App with a PWA, and saw 16.9 million new users join web meetings, an increase of more than seven million users year over year.

Developers who want to learn more or get started with Progressive Web Apps can check out our new Learn PWA course on web.dev. Three modules were launched today, with many more coming.

Continuously Improving Your Web Experience

Measuring site performance is a key part of building for the web as it evolves, which is where Core Web Vitals come in. Compared to a year ago, 20% more page visits in Chrome fully meet the recommended Core Web Vitals thresholds, bringing the total to 60% of all Chrome visits.

Content management systems, website builders, e-commerce platforms, and JavaScript frameworks have helped push the Web Vitals initiative forward. As we shared in our Core Web Vitals Technology Report, sites built on many of these platforms are hitting Core Web Vitals out of the park.

While this kind of progress is exciting, optimizing for Core Web Vitals can still be challenging. That's why we've been improving our tools to help developers better monitor, measure, and understand site performance. Some of these changes include:

  • Updates in PageSpeed Insights which make the distinction between "field data" from user experiences and "lab data" from the Lighthouse report more clear.
  • Capabilities in Lighthouse to audit a complete user flow by loading additional pages and simulating scrolls and link clicks.
  • Support for user flows, such as a checkout flow, in DevTools with a new Recorder panel for exporting a recorded user journey to a Puppeteer script.

We're also experimenting with two new performance metrics: overall input responsiveness, and scrolling and animation smoothness. We'd love to get your feedback, so take a spin through web.dev/responsiveness and web.dev/smoothness.

Expanding the Toolbox for Digital Interfaces

We've got developers and designers covered with tons of changes coming down the pipeline for UI styling and DevTools, including updates to responsive design. Developers can now customize user experiences in a component-driven architecture model, and we're calling this The New Responsive:



With the new container queries spec—available for testing behind a flag in Chrome Canary—developers can access a parent element's width to make styling decisions for its children, nest container queries, and create named queries for easier access and organization.
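As a sketch of what this enables (the syntax below follows the draft spec as implemented behind the Chrome Canary flag, and may still change):

```css
/* Mark a parent element as a query container and give it a name. */
.card-container {
  container-type: inline-size;
  container-name: card;
}

/* Style children based on the container's width, not the viewport's. */
@container card (min-width: 400px) {
  .card {
    display: grid;
    grid-template-columns: 2fr 1fr;
  }
}
```

Because the query targets the container rather than the viewport, the same component can lay itself out differently in a sidebar than in a main column.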

This is a huge shift for component-based development, so we've been providing new DevTools for debugging, styling, and visualizing CSS layouts. To make creating interfaces even easier, we also launched a collection of off-the-shelf UI patterns.

Developers who want to learn more can dive into free resources such as Learn Responsive Design on web.dev—a collaboration with Clearleft's Jeremy Keith—and six new modules in our Learn CSS course. There are also a few exciting CSS APIs in their first public working drafts, including:

  • Scroll-timeline for animating an element as people scroll (available via the experimental web platform features flag in Chrome Canary).
  • Size-adjust property for typography (available in Chromium and Firefox stable).
  • Accent-color for giving form controls a theme color (available in Chromium and Firefox stable).

One feature we're really excited to build on is Dark Mode, especially because we found indications that dark themes use 11% less battery power than light themes for OLED screens. Stay tuned for a machine-learning-aided, auto-dark algorithm feature in an upcoming version of Chrome.

Buckling Down for the Road Ahead

Part of what makes the web so special is that it's an open, decentralized ecosystem. We encourage everyone to make the most of this by getting involved in shaping the web's future.

We can't wait to see what the web looks like by next year's summit. Until then, check out our library of learning resources on the Chrome Dev Summit site and the Chrome Developers YouTube channel, and sign up for the web.dev newsletter.

Users want frequently used applications such as Email, Chat, and other productivity apps to automatically start when they log in to their devices. Auto-starting these apps at login streamlines the user experience as users don't have to manually start apps after logging into their devices.
Windows, Mac, and Linux devices allow users to configure native apps to launch automatically on startup. In Chrome 91, we introduced the Run on OS Login feature, which lets users configure desktop web apps to launch automatically when they log in on Windows, Mac, and Linux. Installed apps will not be permitted to automatically enable themselves to run when the user logs in; a manual user gesture will always be required.
To configure apps to run on OS login, open the Chrome browser and navigate to chrome://apps, or click the ‘Apps' icon in your bookmark bar (example below).

To configure an app to start at login, right-click on it and select ‘Start app when you sign in' from the context menu. The next time you log in to your device, the app will launch automatically. To disable this for an app, navigate to chrome://apps, right-click on the app to bring up the context menu, and deselect ‘Start app when you sign in'.

Apps launched through Run on OS Login start only after the user has logged in to the device. ‘Run on OS Login' is a browser-only feature and doesn't expose any launch source information to app developers.

We're continuously improving the web platform to provide safe, low-friction ways for users to get their day-to-day tasks done. Support for running installed web apps on OS login is a small but significant step toward simplifying the startup routine for users who want apps like chat, email, or calendar clients to start as soon as they turn on their computer. As always, we're looking forward to your feedback. Your input will help us prioritize next steps!


Chrome makes long-term investments in performance across many projects, and in today's The Fast and the Curious post we are pleased to share improvements in speed, memory, and unexpected hangs. One in six searches is now as fast as a blink of an eye, Chrome OS browsing now uses up to 20% less memory thanks to our PartitionAlloc investment, and we've resolved some thorny shutdown hangs on Chrome OS and Windows.
Omnibox
You’ve probably noticed that potential queries are suggested to you as you type when you’re searching the web using Chrome’s omnibox (as long as the “Autocomplete searches and URLs” feature is turned on in Chrome settings). This makes searching for information faster and easier, as you don’t have to type the entire query: once you’ve entered enough text for the suggestion to be the one you want, you can quickly select it.





Searching in Chrome is now even faster, as search results are prefetched if a suggested query is very likely to be selected. This means that you see the search results more quickly, as they’ve been fetched from the web server before you even select the query. In fact, our experiments found that search results are now 4X more likely to be shown within 500 ms!

Currently, this only happens if Google Search is your default search engine. However, other search providers can trigger this feature by adding information to the query suggestions sent from their servers to Chrome, as described in this article.
Chrome OS PartitionAlloc
Chrome’s new memory allocator, PartitionAlloc, rolled out on Android and Windows in M89, bringing improved memory usage (up to 22% savings) and performance (up to 9% faster responsiveness). Since then, we have also implemented PartitionAlloc on Linux in M92 and Chrome OS in M93. We are now pleased to announce that M93 field data from Chrome OS shows a total memory footprint reduction of 15%, in addition to a 20% browser process memory reduction, improving the Chromebook browsing experience for both single-tab and multi-tab use.

Resolving the #1 shutdown hang
Often software engineers add a cache to a system with the goal of improving performance. But a frequent corollary of caching is that the cache may introduce other problems (code complexity, stability, memory consumption, data consistency), and may even make performance worse. In this case, a local cache was added years ago to Chrome's history system with the goal of making startup faster. The premise at the time, which seemed to bear out in lab testing, was that caching Chrome's internal in-memory history index would be faster than reindexing the history at each startup.

Thanks to our continuing systematic investigation into real-world performance using crash data in conjunction with anonymized performance metrics, we uncovered that not only did this cache add code complexity and unnecessary memory usage, but it was also our top contributor to shutdown hangs in the browser. This is because on some OSes, background priority threads can be starved of I/O indefinitely while there is any other I/O happening elsewhere on the system. Moreover, the performance benefits to our users were minimal, based on analysis of field data. We've now removed the cache and resolved our top shutdown hang. This was a great illustration of the principle that caching is not always the answer!

Stay tuned for many more performance improvements to come!

Posted by Yana Yushkina, Product Manager, Chrome Browser

Data source for all statistics: Real-world data anonymously aggregated from Chrome clients.


Unless otherwise noted, changes described below apply to the newest Chrome beta channel release for Android, Chrome OS, Linux, macOS, and Windows. Learn more about the features listed here through the provided links or from the list on ChromeStatus.com. Chrome 96 is beta as of October 21, 2021.

Preparing for a Three Digit Version Number

Next year, Chrome will release version 100, adding a digit to the major version number reported in Chrome's user agent string. To help site owners test for the new string, Chrome 96 introduces a runtime flag, chrome://flags/#force-major-version-to-100, that causes Chrome to report '100' in its user agent string.
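Sites that extract the major version from the UA string should verify their parsing survives the jump to three digits. A minimal sketch (the helper function and sample strings are illustrative, not an official parser; UA sniffing is discouraged in favor of User-Agent Client Hints):

```javascript
// Extract Chrome's major version from a user agent string.
// Hypothetical helper for illustration only.
function chromeMajorVersion(ua) {
  const match = ua.match(/Chrome\/(\d+)\./);
  return match ? Number(match[1]) : null;
}

const ua99 = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36';
const ua100 = ua99.replace('Chrome/99.0.4844.51', 'Chrome/100.0.4896.60');

console.log(chromeMajorVersion(ua99));  // 99
console.log(chromeMajorVersion(ua100)); // 100
```

A pattern like this, anchored on the digits before the first dot, keeps working at three digits; parsers that assume a fixed two-character version slice will break.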

Origin Trials

This version of Chrome introduces the origin trials described below. Origin trials allow you to try new features and give feedback on usability, practicality, and effectiveness to the web standards community. To register for any of the origin trials currently supported in Chrome, including the ones described below, visit the Chrome Origin Trials dashboard. To learn more about origin trials in Chrome, visit the Origin Trials Guide for Web Developers. Microsoft Edge runs its own origin trials separate from Chrome. To learn more, see the Microsoft Edge Origin Trials Developer Console.

New Origin Trials

Conditional Focus

Applications that capture other windows or tabs currently have no way to control whether the calling item or the captured item gets focus. (Think of a presentation feature in a video conference app.) Chrome 96 makes this possible with a subclass of MediaStreamTrack called FocusableMediaStreamTrack, which supports a new focus() method. Consider the following code:

// Capture a window or tab (must be triggered by a user gesture).
const stream = await navigator.mediaDevices.getDisplayMedia();
const [track] = stream.getVideoTracks();

Where formerly, getVideoTracks() would return an array of MediaStreamTrack objects, it now returns FocusableMediaStreamTrack objects. (Note that this is expected to change to BrowserCaptureMediaStreamTrack in Chrome 97. At the time of this writing, Canary already does this.)

To determine which display media gets focus, the next line of this code would call track.focus() with either "focus-captured-surface" to focus the newly captured window or tab, or with "no-focus-change" to keep the focus with the calling window. On Chrome 96 or later, you can step through our demo code to see this in action.

Priority Hints

Priority Hints introduces a developer-set "importance" attribute to influence the computed priority of a resource. Supported importance values are "auto", "low", and "high". Priority Hints indicate a resource's relative importance to the browser, allowing more control over the order resources are loaded. Many factors influence a resource's priority in browsers including type, visibility, and preload status of a resource.
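In markup, the hints might look like this (a sketch using the attribute name from the origin trial; the paths are illustrative, and the attribute may change as the proposal evolves):

```html
<!-- Load the hero image ahead of other images. -->
<img src="/hero.jpg" importance="high" alt="Hero image">
<!-- Deprioritize below-the-fold styling and analytics. -->
<link rel="stylesheet" href="/below-the-fold.css" importance="low">
<script src="/analytics.js" importance="low" async></script>
```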

Other Features in this Release

Allow Simple Range Header Values Without Preflight

Requests with simple range headers can now be sent without a preflight request. CORS requests can use the Range header in limited ways (only one valid range) without triggering a preflight request.
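The shape of a "simple" range value can be sketched as follows (the function and regex are illustrative; the authoritative grammar lives in the Fetch specification):

```javascript
// Returns true if a Range header value looks "simple" enough to be
// CORS-safelisted: a single bytes range such as "bytes=0-499" or "bytes=500-".
// Illustrative sketch only; see the Fetch spec for the exact grammar.
function isSimpleRange(value) {
  const match = /^bytes=(\d+)-(\d+)?$/.exec(value);
  if (!match) return false;
  const [, start, end] = match;
  // If an end position is present, it must not precede the start.
  return end === undefined || Number(start) <= Number(end);
}

console.log(isSimpleRange('bytes=0-499'));     // true
console.log(isSimpleRange('bytes=500-'));      // true
console.log(isSimpleRange('bytes=0-99,200-')); // false: multiple ranges
```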

Back-forward Cache on Desktop

The back-forward cache stores pages to allow for instant navigations to previously-visited pages after cross-site navigations.

Cross-Origin-Embedder-Policy: credentialless

Cross-Origin-Embedder-Policy has a new credentialless option that causes cross-origin no-cors requests to omit credentials (cookies, client certificates, etc.). Similarly to COEP:require-corp, it can enable cross-origin isolation.

Sites that want to continue using SharedArrayBuffer must opt in to cross-origin isolation. Doing so with COEP: require-corp is difficult to deploy at scale and requires all subresources to explicitly opt in. This is fine for some sites, but creates dependency problems for sites that gather content from users (Google Earth, social media generally, forums, etc.).

CSS

:autofill Pseudo Class

The new :autofill pseudo-class enables styling of autofilled form elements. This is a standardization of the :-webkit-autofill pseudo-class, which is already supported in WebKit. Firefox supports the standard version.
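A usage sketch (the rules are kept separate on purpose: a selector list containing a pseudo-class the browser doesn't recognize would invalidate the whole rule):

```css
/* Highlight autofilled fields; one rule per variant for compatibility. */
input:-webkit-autofill {
  border: 2px solid darkorange;
}
input:autofill {
  border: 2px solid darkorange;
}
```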

Disable Propagation of Body Style to Viewport when Contained

Some properties like writing-mode, direction, and backgrounds are propagated from body to the viewport. To avoid infinite loops for CSS Container Queries, the spec and implementation were changed to not propagate those properties when containment is applied to HTML or BODY.

font-synthesis Property

The font-synthesis CSS property controls whether user agents are allowed to synthesize oblique, bold, and small-caps font faces when a font family lacks faces.

EME MediaKeySession Closed Reason

The MediaKeySession.closed property now uses an enum to indicate the reason the MediaKeySession object closed. The closed property returns a Promise that resolves when the session closes. Where previously the Promise simply resolved, it now resolves with a string indicating the reason for closing: one of "internal-error", "closed-by-application", "release-acknowledged", "hardware-context-reset", or "resource-evicted".

HTTP to HTTPS Redirect for HTTPS DNS Records

Chrome will always connect to a website via HTTPS when an HTTPS record is available from the domain name service (DNS).

InteractionID in EventTiming

The PerformanceEventTiming interface now includes an attribute called interactionId. This is a browser-generated ID that enables linking multiple PerformanceEventTiming entries when they correspond to the same user interaction. Developers can currently use the Event Timing API to gather performance data about events they care about. Unfortunately, it is hard to link events that correspond to the same user interaction. For instance, when a user taps, many events are generated, such as pointerdown, mousedown, pointerup, mouseup, and click.
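Grouping entries by this ID is straightforward. A sketch with mock data (in the browser, entries would come from a PerformanceObserver for the "event" entry type; the entry shapes here are simplified):

```javascript
// Group event timing entries that belong to the same user interaction.
function groupByInteraction(entries) {
  const groups = new Map();
  for (const entry of entries) {
    if (!entry.interactionId) continue; // 0 means "not part of an interaction"
    if (!groups.has(entry.interactionId)) groups.set(entry.interactionId, []);
    groups.get(entry.interactionId).push(entry.name);
  }
  return groups;
}

// Simplified mock entries for a single tap plus an unrelated mousemove.
const entries = [
  { name: 'pointerdown', interactionId: 7 },
  { name: 'mousedown', interactionId: 7 },
  { name: 'pointerup', interactionId: 7 },
  { name: 'click', interactionId: 7 },
  { name: 'mousemove', interactionId: 0 },
];
console.log(groupByInteraction(entries).get(7));
// ['pointerdown', 'mousedown', 'pointerup', 'click']
```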

New Media Query: prefers-contrast

Chrome supports a new media query called 'prefers-contrast', which lets authors adapt web content to the user's contrast preference as set in the operating system (specifically, increased contrast mode on macOS and high contrast mode on Windows). Valid options are 'more', 'less', 'custom', or 'no-preference'.
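A usage sketch (the selector and colors are illustrative):

```css
/* Strengthen borders and text when the user asks for more contrast. */
@media (prefers-contrast: more) {
  .card {
    border: 2px solid #000;
    color: #000;
    background: #fff;
  }
}
```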

Unique id for Desktop PWAs

Web app manifests now support an optional id field that globally identifies a web app. When the id field is not present, a PWA falls back to start_url. This field is currently only supported on desktop.
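A manifest can pin the app's identity explicitly (field values are illustrative):

```json
{
  "name": "Example App",
  "id": "/",
  "start_url": "/?source=pwa",
  "display": "standalone"
}
```

With id set, the app keeps its identity even if start_url changes in a later update.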

URL Protocol Handler Registration for PWAs

Web applications can now register themselves as handlers of custom URL protocols/schemes using their installation manifest. Operating system applications often register themselves as protocol handlers to increase discoverability and usage. Web sites can already register to handle schemes via registerProtocolHandler(). This feature takes that a step further by letting installed web apps be launched directly when a link using a custom scheme is invoked.
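In the manifest, the registration might look like this (the app name, scheme, and URL are illustrative; %s is replaced with the invoking URL):

```json
{
  "name": "Example Music Player",
  "protocol_handlers": [
    {
      "protocol": "web+music",
      "url": "/play?track=%s"
    }
  ]
}
```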

WebAssembly

Content Security Policy

Chrome has enhanced Content Security Policy to improve interoperability with WebAssembly. The new wasm-unsafe-eval source keyword controls WebAssembly execution (with no effect on JavaScript execution). Additionally, script-src policies now also apply to WebAssembly.
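A policy that permits compiling WebAssembly without also re-enabling JavaScript eval() might look like this (a sketch; adjust the source list to your site):

```
Content-Security-Policy: script-src 'self' 'wasm-unsafe-eval'
```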

Reference Types

WebAssembly modules can now hold references to JavaScript and DOM objects. Specifically, they can be passed as arguments, stored in local and global variables, and stored in WebAssembly.Table objects.
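On the JavaScript side, this surfaces as externref support in WebAssembly.Table. A sketch (requires an engine with reference-types support, such as a recent Chrome or Node.js):

```javascript
// A table of external (JavaScript) references; a Wasm module that
// imports this table can pass these values around opaquely.
const table = new WebAssembly.Table({ element: 'externref', initial: 2 });

const user = { name: 'Ada' };
table.set(0, user); // store a reference to an arbitrary JS object
table.set(1, null); // externref slots may also hold null

console.log(table.get(0) === user); // true: the same object comes back
```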

Deprecations and Removals

This version of Chrome introduces the deprecations and removals listed below. Visit ChromeStatus.com for lists of current deprecations and previous removals.

The "basic-card" Method of PaymentRequest API

The PaymentRequest API has deprecated the basic card payment method. Its usage is low and declining. It underperforms when compared to other payment methods in time-to-checkout and completion rate. Developers can switch to other payment methods as an alternative. Examples include Google Pay, Apple Pay, and Samsung Pay.

Removal timeline:

  • Chrome 96: the basic-card method is deprecated in the Reporting API.
  • Chrome 100: the basic-card method will be removed.

The Payment Request API is a soon-to-be-recommended web standard that aims to make building low-friction and secure payment flows easier for developers. The browser facilitates the flow between a merchant website and "payment handlers". A payment handler can be built into the browser, a native app installed on the user’s mobile device, or a Progressive Web App. Today, developers can use the Payment Request API to access several payment methods, including “basic-card” and Google Pay in Chrome on most platforms, Apple Pay in Safari, Digital Goods API on Google Play, and Secure Payment Confirmation in Chrome.


Last year, we announced that we would deprecate the "basic-card" payment handler in Chrome on iOS, followed by other platforms in the future. "basic-card" is a payment method typically built into the browser to help users enter credit card numbers without having to remember or type them. It was designed to ease the transition from form-based credit card payments to app-based tokenized card payments. To better pursue the goal of app-based payments (among a few other reasons), the Web Payments Working Group decided to remove it from the specification.


Starting from version 96, Chrome will show a warning message in the DevTools Console (and create a report via the Reporting API) when the "basic-card" payment method is used. In version 100, the "basic-card" payment method will no longer be available, and canMakePayment() will return false unless other capable payment methods are specified. This applies to all platforms including Android, macOS, Windows, Linux, and Chrome OS.


If you are using the Payment Request API with the "basic-card" payment handler, we suggest removing it as soon as possible and using an alternative payment method such as Google Pay or Samsung Pay.


Posted by Eiji Kitamura, Developer Advocate on the Chrome team