

We are constantly working to meet the needs of our enterprise customers, including enhanced security for their communications. Our aim is to offer a secure way to transport sensitive information over today's insecure email channels, without compromising Gmail's extensive protections against spam, phishing, and malware.

Why hosted S/MIME?
Client-side S/MIME has been around for many years. However, its adoption has been limited because it is difficult to deploy (end users have to manually install certificates to their email applications) and the underlying email service cannot efficiently protect against spam, malware and phishing because client-side S/MIME makes the email content opaque.

With Google’s new hosted S/MIME solution, once an incoming S/MIME-encrypted email is received, it is stored using Google's encryption. This means that all normal processing of the email can happen, including extensive protections for spam/phishing/malware, admin services (such as Vault retention, auditing, and email routing rules), and high-value end user features such as mail categorization, advanced search and Smart Reply. For the vast majority of emails, this is the safest solution - giving the benefit of strong authentication and encryption in transit - without losing the safety and features of Google's processing.

Using hosted S/MIME provides an added layer of security compared to using SMTP over TLS to send emails. TLS only guarantees to the sender’s service that the first hop transmission is encrypted and to the recipient that the last hop was encrypted. But in practice, emails often take many hops (through forwarders, mailing lists, relays, appliances, etc). With hosted S/MIME, the message itself is encrypted. This facilitates secure transit all the way down to the recipient’s mailbox.

S/MIME also adds verifiable account-level signature authentication (versus only domain-level signatures with DKIM). This means that email receivers can verify that incoming email was actually sent from the sending account, not just a matching domain, and that the message was not tampered with after it was sent.
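To make the signing side concrete, here is a minimal sketch of producing a detached S/MIME (PKCS#7) signature with the Python cryptography package. The file paths and message are hypothetical, and with hosted S/MIME the equivalent signing happens on Google's servers using the uploaded certificate and key rather than in client code:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

# Hypothetical paths to the sender's S/MIME certificate and private key.
with open("sender_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("sender_key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# Produce a detached signature over the message body; recipients verify
# it against the public key in the sender's certificate.
signed_message = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(b"The quarterly report is attached.")
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)
```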

How to use hosted S/MIME?
S/MIME requires every email address to have a suitable certificate attached to it. By default, Gmail requires the certificate to be from a publicly trusted root Certificate Authority (CA) which meets strong cryptographic standards. System administrators will have the option to lower these requirements for their domains.

To use hosted S/MIME, companies need to upload their own certificates (with private keys) to Gmail, which can be done by end users via Gmail settings or by admins in bulk via the Gmail API.
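As a sketch of what a bulk upload might look like, the Gmail API exposes an smimeInfo resource under each user's send-as alias. The credential setup, file name, and password below are assumptions for illustration, not a drop-in admin tool:

```python
import base64
from googleapiclient.discovery import build

# Assumes `creds` is an authorized credential with access to the target
# user's Gmail settings (for example, via domain-wide delegation).
service = build("gmail", "v1", credentials=creds)

with open("alice.p12", "rb") as f:  # hypothetical PKCS#12 bundle
    p12_b64 = base64.urlsafe_b64encode(f.read()).decode()

service.users().settings().sendAs().smimeInfo().insert(
    userId="alice@example.com",
    sendAsEmail="alice@example.com",
    body={"pkcs12": p12_b64, "encryptedKeyPassword": "p12-password"},
).execute()
```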

From there, using hosted S/MIME is a seamless experience for end users. When receiving a digitally signed message, Gmail automatically associates the public key with the contact of the sender. By default, Gmail automatically signs and encrypts outbound messages if there is a public S/MIME key available for the recipient. Although users have the option to manually remove encryption, admins can set up rules that override their action.

Hosted S/MIME is supported on Gmail web/iOS/Android, on Inbox and on clients connected to the Gmail service via IMAP. Users can exchange signed and encrypted emails with recipients using hosted S/MIME or client-side S/MIME.

Which companies should consider using hosted S/MIME?
Hosted S/MIME provides a solution that is easy to manage for administrators and seamless for end users. Companies that want security in transit and digital signature/non-repudiation at the account level should consider using hosted S/MIME. This is a need for many companies working with sensitive/confidential information.

Hosted S/MIME is available for G Suite Enterprise edition users.


Despite constant advancements in online safety, phishing — one of the web’s oldest and simplest attacks — remains a tough challenge for the security community. Subtle tricks and good old-fashioned con-games can cause even the most security-conscious users to reveal their passwords or other personal information to fraudsters.
New advancements in phishing protection

This is why we’re excited about the news for G Suite customers: the launch of Security Key enforcement. Now, G Suite administrators can better protect their employees by enabling Two-Step Verification (2SV) using only Security Keys as the second factor, making this protection the norm rather than just an option. 2SV with only a Security Key offers the highest level of protection from phishing. Instead of entering a unique code as a second factor at sign-in, Security Keys send us cryptographic proof that users are on a legitimate Google site and that they have their Security Keys with them. Since most hijackers are remote, their efforts are thwarted because they cannot get physical possession of the Security Key.
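To sketch why this resists phishing: Security Keys implement the FIDO U2F protocol, in which the key signs a payload that bakes in a hash of the application identity (the origin) and of the browser-supplied client data. An assertion captured by a look-alike site therefore cannot be replayed against the real one. A simplified server-side check, with the field layout following the U2F spec, might look like this (illustrative, not Google's implementation):

```python
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_u2f_assertion(pub_key: ec.EllipticCurvePublicKey, app_id: str,
                         client_data: bytes, flags: bytes, counter: bytes,
                         signature: bytes) -> None:
    # The origin (app_id) and client data are hashed into the signed
    # payload, binding the assertion to the legitimate site.
    signed_payload = (hashlib.sha256(app_id.encode()).digest()
                      + flags + counter
                      + hashlib.sha256(client_data).digest())
    # Raises InvalidSignature if the assertion was forged or replayed
    # against a different origin.
    pub_key.verify(signature, signed_payload, ec.ECDSA(hashes.SHA256()))
```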

Users can also take advantage of new Bluetooth low energy (BLE) Security Key support, which makes using 2SV Security Key protection easier on mobile devices. BLE Security Keys, which work on both Android and iOS, improve upon the usability of other form factors.
A long history of phishing protections

We’ve helped protect users from phishing for many years. We rolled out 2SV back in 2011, and later strengthened it in 2014 with the addition of Security Keys. These launches complement our many layers of phishing protections — Safe Browsing warnings, Gmail spam filters, and account sign-in challenges — as well as our work with industry groups like the FIDO Alliance and M3AAWG to develop standards and combat phishing across the industry. In the coming months, we’ll build on these protections and offer users the opportunity to further protect their personal Google Accounts.



We created our Vulnerability Rewards Program in 2010 because researchers should be rewarded for protecting our users. Their discoveries help keep our users, and the internet at large, as safe as possible.

The amounts we award vary, but our message to researchers does not: each reward represents a sincere ‘thank you’.

As we have for 2014 and 2015, we’re again sharing a yearly wrap-up of the Vulnerability Rewards Program.


What was new?

In short — a lot. Here’s a quick rundown:

We opened up Chrome's Fuzzer Program, previously invitation-only, to submissions from the public. The program allows researchers to run fuzzers at large scale, across thousands of cores on Google hardware, and receive reward payments automatically.


On the product side, we saw amazing contributions from Android researchers all over the world, less than a year after Android launched its VRP. We also expanded the overall VRP to cover more products, including OnHub and Nest devices.


We increased our presence at events around the world, like pwn2own and Pwnfest. The vulnerabilities responsibly disclosed at these events enabled us to quickly provide fixes to the ecosystem and keep customers safe. At both events, we were able to close down a vulnerability in Chrome within days of being notified of the issue.


Stories that stood out

As always, there was no shortage of inspiring, funny, and quirky anecdotes from the VRP in 2016.
  • We met Jasminder Pal Singh at Nullcon in India. Jasminder is a long-time contributor to the VRP, but this research is a side project for him. He spends most of his time growing Jasminder Web Services Point, the startup he operates with six other colleagues and friends: two web developers, one graphic designer, one Android developer, one iOS developer, one Linux administrator, and a content manager/writer. Jasminder’s VRP rewards fund the startup. The number of reports we receive from researchers in India is growing, and we’re expanding the VRP’s presence there with additional conference sponsorships, trainings, and more.

Jasminder (back right) and his team
  • Jon Sawyer worked with his colleague Sean Beaupre from Streamlined Mobile Solutions, and friend Ben Actis to submit three Android vulnerability reports. A resident of Clallam County, Washington, Jon donated their $8,000 reward to their local Special Olympics team, the Orcas. Jon told us the reward was particularly meaningful because his son, Benji, plays on the team. He said: “Special Olympics provides a sense of community, accomplishment, and free health services at meets. They do incredible things for these people, at no cost for the athletes or their parents. Our donation is going to supply them with new properly fitting uniforms, new equipment, cover some facility rental fees (bowling alley, gym, track, swimming pool) and most importantly help cover the biggest cost, transportation.”
  • VRP researchers sometimes attach videos that demonstrate the bug. While making a great proof-of-concept video is a skill in itself, our researchers raised it to another level this year. Check out this video Frans Rosén sent us. It’s perfectly synchronized to the background music! We hope this trend continues in 2017 ;-)

Researchers’ individual contributions, and our relationship with the community, have never been more important. A hearty thank you to everyone that contributed to the VRP in 2016 — we’re excited to work with you (and others!) in 2017 and beyond. 

*Josh Armour (VRP Program Manager), Andrew Whalley (Chrome VRP), and Quan To (Android VRP) contributed mightily to help lead these Google-wide efforts.


In support of our work to implement HTTPS across all of our products (https://www.google.com/transparencyreport/https/), we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third party. This has been a key element enabling us to more rapidly handle the SSL/TLS certificate needs of Google products.

As we look forward to the evolution of both the web and our own products it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services (https://pki.goog/), the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.

The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.

We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority. This change will enable us to begin the process of migrating to our new, independent infrastructure.

Google Trust Services now operates the following Root Certificates:

| Root | Public Key | Fingerprint (SHA1) | Valid Until |
| --- | --- | --- | --- |
| GTS Root R1 | RSA 4096, SHA-384 | e1:c9:50:e6:ef:22:f8:4c:56:45:72:8b:92:20:60:d7:d5:a7:a3:e8 | Jun 22, 2036 |
| GTS Root R2 | RSA 4096, SHA-384 | d2:73:96:2a:2a:5e:39:9f:73:3f:e1:c7:1e:64:3f:03:38:34:fc:4d | Jun 22, 2036 |
| GTS Root R3 | ECC 384, SHA-384 | 30:d4:24:6f:07:ff:db:91:89:8a:0b:e9:49:66:11:eb:8c:5e:46:e5 | Jun 22, 2036 |
| GTS Root R4 | ECC 384, SHA-384 | 2a:1d:60:27:d9:4a:b1:0a:1c:4d:91:5c:cd:33:a0:cb:3e:2d:54:cb | Jun 22, 2036 |
| GS Root R2 | RSA 2048, SHA-1 | 75:e0:ab:b6:13:85:12:27:1c:04:f8:5f:dd:de:38:e4:b7:24:2e:fe | Dec 15, 2021 |
| GS Root R4 | ECC 256, SHA-256 | 69:69:56:2e:40:80:f4:24:a1:e7:19:9f:14:ba:f3:ee:58:ab:6a:bb | Jan 19, 2038 |


Due to timing issues involved in establishing an independently trusted Root Certificate Authority, we have also secured the option to cross sign our CAs using:


| Root | Public Key | Fingerprint (SHA1) | Valid Until |
| --- | --- | --- | --- |
| GS Root R3 | RSA 2048, SHA-256 | d6:9b:56:11:48:f0:1c:77:c5:45:78:c1:09:26:df:5b:85:69:76:ad | Mar 18, 2029 |
| GeoTrust | RSA 2048, SHA-1 | de:28:f4:a4:ff:e5:b9:2f:a3:c5:03:d1:a3:49:a7:f9:96:2a:82:12 | May 21, 2022 |


If you are building products that intend to connect to a Google property moving forward, you need to, at minimum, include the above Root Certificates. That said, even though we now operate our own roots, we may still choose to operate subordinate CAs under third-party operated roots.


For this reason, if you are developing code intended to connect to a Google property, we still recommend you include a wide set of trustworthy roots. Google maintains a sample PEM file at https://pki.goog/roots.pem, which is periodically updated to include the Google Trust Services owned and operated roots as well as other roots that may be necessary now, or in the future, to communicate with and use Google products and services.
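For example, a client can pin its trust store to this bundle when connecting to Google properties. A minimal sketch using Python's standard library, assuming roots.pem has already been downloaded locally:

```python
import ssl
import urllib.request

# Trust only the roots in Google's sample bundle (fetched separately
# from https://pki.goog/roots.pem) for this connection.
ctx = ssl.create_default_context(cafile="roots.pem")
with urllib.request.urlopen("https://www.google.com/", context=ctx) as resp:
    print(resp.status)  # 200 if the chain validates against the bundle
```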

Posted by Rahul Mishra, Android Security Program Manager
[Cross-posted from the Android Developers Blog]

In April 2016, the Android Security team described how the Google Play App Security Improvement (ASI) program has helped developers fix security issues in 100,000 applications. Since then, we have detected and notified developers of 11 new security issues and provided developers with resources and guidance to update their apps. Because of this, over 90,000 developers have updated over 275,000 apps!
ASI now notifies developers of 26 potential security issues. To make this process more transparent, we introduced a new page where developers can find information about all these security issues in one place. This page includes links to help center articles containing instructions and additional support contacts. Developers can use this page as a resource to learn about new issues and keep track of all past issues.

Developers can also refer to our security best practices documents and security checklist, which are aimed at improving the understanding of general security concepts and providing examples that can help tackle app-specific issues.

How you can help:
For feedback or questions, please reach out to us through the Google Play Developer Help Center, or email us at security+asi@android.com.

Posted by Megan Ruthven, Software Engineer

[Cross-posted from the Android Developers Blog]

In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.

But sometimes devices stop checking up with Verify apps. This may happen for a non-security related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with other security systems, to help determine whether an app is a PHA, in order to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system.

This blog post explores the Android Security team's research to identify the security-related reasons that devices stop checking in, and to prevent that from happening in the future.
Flagging DOI Apps

To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate.

Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to express how many standard deviations a value lies from the average is its Z-score:

Z = (x − Np) / √(Np(1 − p))

where:

  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability that a device downloading any app will be retained.

In this context, we call the Z-score of an app's retention rate its DOI score. A DOI score much less than -3.7 indicates a statistically significantly lower retention rate: if the null hypothesis were true, there would be much less than a 0.01% chance of the Z-score's magnitude being this large. Here, the null hypothesis is that the app is merely correlated with a lower retention rate, independent of what the app actually does.
This allows for percolation of extreme apps (with low retention rate and high number of downloads) to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs of the app.
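For concreteness, here is a minimal sketch of the score computation using the Z-score formula above; the numbers are invented for illustration:

```python
import math

def doi_score(n_downloads: int, n_retained: int, p_retained: float) -> float:
    """Z-score of an app's observed retention against the expected rate.

    n_downloads: N, devices that downloaded the app.
    n_retained:  x, of those devices, how many kept checking in.
    p_retained:  p, probability that a device downloading any app is retained.
    """
    expected = n_downloads * p_retained
    stddev = math.sqrt(n_downloads * p_retained * (1 - p_retained))
    return (n_retained - expected) / stddev

# A hypothetical app whose retention is clearly depressed:
print(doi_score(100_000, 93_000, 0.95))  # about -29, far below -3.7, so flagged
```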

Difference between a regular and DOI app download on the same device.


Results in the wild
Among others, the DOI score flagged many apps in three well known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three families of malware because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer, together with the rest of Android's anti-malware work, is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.



Encryption is a foundational technology for the web. We’ve spent a lot of time working through the intricacies of making encrypted apps easy to use and in the process, realized that a generic, secure way to discover a recipient's public keys for addressing messages correctly is important. Not only would such a thing be beneficial across many applications, but nothing like this exists as a generic technology.

A solution would need to reliably scale to internet size while providing a way to establish secure communications through untrusted servers. It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more.

The result is Key Transparency, which we’re making available as an open-source prototype today.

Why Key Transparency is useful

Existing methods of protecting users against server compromise require users to manually verify recipients’ accounts in-person. This simply hasn’t worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won’t use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge.

One of our goals with Key Transparency was to simplify this process and create infrastructure that makes it usable by non-experts. The relationship between online personas and public keys should be automatically verifiable and publicly auditable. Users should be able to see all the keys that have been attached to an account, and any attempt to tamper with the record should be publicly visible. This also ensures that senders will always use the same keys that account owners are verifying.
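The auditability rests on transparency-log techniques: account-to-key mappings are stored as leaves of a Merkle tree, so anyone can check that a served key is included under a published root. The sketch below shows a generic inclusion-proof check with RFC 6962-style hashing, as used in Certificate Transparency; it illustrates the idea rather than Key Transparency's exact wire format:

```python
import hashlib

# 0x00 / 0x01 domain-separation prefixes follow the RFC 6962 convention.
def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, audit_path: list, root: bytes) -> bool:
    """Recompute the root from a leaf and its audit path.

    audit_path is a list of (sibling_hash, side) pairs from leaf to root,
    where side is "L" if the sibling sits to the left.
    """
    h = leaf_hash(leaf)
    for sibling, side in audit_path:
        h = node_hash(sibling, h) if side == "L" else node_hash(h, sibling)
    return h == root
```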

Key Transparency is a general-use, transparent directory that makes it easy for developers to create systems of all kinds with independently auditable account data. It can be used in a variety of scenarios where data needs to be encrypted or authenticated, and to build security features that are easy for people to understand while supporting important user needs like account recovery.

Looking ahead
It’s still very early days for Key Transparency. With this first open source release, we’re continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone.

We’d also like to thank our many collaborators during Key Transparency’s multi-year development, including the CONIKS team, Open Whisper Systems, as well as the security engineering teams at Yahoo! and internally at Google.

Our goal is to evolve Key Transparency into an open-source, generic, scalable, and interoperable directory of public keys with an ecosystem of mutually auditing directories. We welcome your apps, input, and contributions to this new technology at KeyTransparency.org.


Last year we helped launch USENIX Enigma, a conference focused on bringing together security and privacy experts from academia, industry, and public service. After a successful first run, we’re supporting Enigma again this year and looking forward to more great talks and an ever-engaging hallway track!

Our speakers this year include practitioners from many tech industry leaders, researchers and professors at universities from around the world, and civil servants working at agencies in the U.S. and abroad. In addition to sessions focused specifically on software security, spam and abuse, and usability, the program will cover interdisciplinary topics, like the intersection of security and artificial intelligence, neuroscience, and society.

I’m also very proud to have some of my Google colleagues speaking at Enigma:

  • Sunny Consolvo will present results from a qualitative study of the privacy and security practices and challenges of survivors of intimate partner abuse. Sunny will also share how technology creators can better support the victims and survivors of such abuse.
  • Damian Menscher has been battling botnets for a decade at Google and has witnessed the evolution of Distributed Denial-of-Service (DDoS) scaling and attack ingenuity. Damian will describe his operation of a DDoS honeypot, and share specific things he learned while protecting KrebsOnSecurity.com.
  • Emily Schechter will be talking about driving HTTPS adoption on the web. She’ll go over some of the unexpected speed bumps major web sites have encountered, share how Chrome approaches feature changes that encourage HTTPS usage, and discuss what’s next to get to a default encrypted web.
As we did last year, my program co-chair, David Brumley, and I are developing a program full of short, thought-provoking presentations followed by lively discussion. Our program committee has worked closely with each speaker to help them craft the best version of their talk. Everyone is excited to share them, and there’s still time to register! I hope to see many of you in Oakland at USENIX Enigma later this month.

PS: Warning! Self-serving Google notice ahead… We’re hiring! We believe that most security researchers do what they do because they love what they do. What we offer is a place to do what you love—but in the open, on real-world problems, and without distraction. Please reach out to us if you’re interested.


We’re excited to announce the release of Project Wycheproof, a set of security tests that check cryptographic software libraries for known weaknesses. We’ve developed over 80 test cases which have uncovered more than 40 security bugs (some tests or bugs are not open sourced today, as they are being fixed by vendors). For example, we found that we could recover the private key of widely-used DSA and ECDHC implementations. We also provide ready-to-use tools to check Java Cryptography Architecture providers such as Bouncy Castle and the default providers in OpenJDK.

The main motivation for the project is to have an achievable goal. That’s why we’ve named it after Mount Wycheproof, the smallest mountain in the world. The smaller the mountain, the easier it is to climb!

In cryptography, subtle mistakes can have catastrophic consequences, and mistakes in open source cryptographic software libraries repeat too often and remain undiscovered for too long. Good implementation guidelines, however, are hard to come by: understanding how to implement cryptography securely requires digesting decades' worth of academic literature. We recognize that software engineers fix and prevent bugs with unit testing, and we found that many cryptographic issues can be resolved by the same means.

These observations have prompted us to develop Project Wycheproof, a collection of unit tests that detect known weaknesses or check for expected behaviors of some cryptographic algorithm. Our cryptographers have surveyed the literature and implemented most known attacks. As a result, Project Wycheproof provides tests for most cryptographic algorithms, including RSA, elliptic curve crypto, and authenticated encryption.

Our first set of tests is written in Java, because Java has a common cryptographic interface, which allowed us to test multiple providers with a single test suite. While this interface is somewhat low level and should not be used directly, we still apply a "defense in depth" argument and expect the implementations to be as robust as possible. For example, we consider weak default values to be a significant security flaw. We are converting as many tests as possible into sets of test vectors to simplify porting them to other languages.
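As a sketch of how such vectors can be consumed from another language, the snippet below replays AES-GCM vectors against a Python library. The JSON field names mirror those in the project's published vector files, but treat the details as illustrative rather than a specification:

```python
import json
from binascii import unhexlify

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("aes_gcm_test.json") as f:  # a Wycheproof test vector file
    suite = json.load(f)

for group in suite["testGroups"]:
    for test in group["tests"]:
        key, iv = unhexlify(test["key"]), unhexlify(test["iv"])
        aad, msg = unhexlify(test["aad"]), unhexlify(test["msg"])
        ct_and_tag = unhexlify(test["ct"]) + unhexlify(test["tag"])
        try:
            ok = AESGCM(key).decrypt(iv, ct_and_tag, aad) == msg
        except Exception:
            ok = False  # rejecting a malformed vector is the right outcome
        # A correct library must reject every vector marked "invalid".
        if test["result"] == "invalid" and ok:
            print("FAIL: accepted invalid vector, tcId", test["tcId"])
```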

While we are committed to developing as many tests as possible, and external contributions are welcome — if you want to contribute, please read CONTRIBUTING before sending us pull requests — Project Wycheproof is by no means complete. Passing the tests does not imply that a library is secure; it just means that it is not vulnerable to the attacks that Project Wycheproof tries to detect. Cryptographers constantly discover new weaknesses in cryptographic protocols. Nevertheless, with Project Wycheproof developers and users now can check their libraries against a large number of known attacks without having to sift through hundreds of academic papers or become cryptographers themselves.

For more information about the tests and what you can do with them, please visit our homepage on GitHub.


[Cross-posted from the Google Testing Blog and the Google Open Source Blog]

We are happy to announce OSS-Fuzz, a new Beta program developed over the past years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software.

Open source software is the backbone of the many apps, sites, services, and networked things that make up “the internet.” It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it.

Recent security stories confirm that errors like buffer overflow and use-after-free can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where fuzz testing comes in. By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly.

In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. AFL and libFuzzer), and we use them to fuzz various components of the Chrome browser. These fuzzers, when combined with Sanitizers, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc) and sometimes even logical bugs.
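The harness shape is simple: the engine repeatedly calls an entry point with generated inputs, and any crash or sanitizer report becomes a bug. OSS-Fuzz targets are C/C++ libFuzzer functions (LLVMFuzzerTestOneInput); to keep this sketch in one language, it shows the same shape with Atheris, Google's libFuzzer-style Python fuzzer, against a toy parser with a planted bug:

```python
import sys

import atheris

def parse(data: bytes) -> None:
    # Toy target: a specific prefix reaches the planted bug, which
    # coverage-guided fuzzing finds far faster than blind random input.
    if data[:3] == b"FUZ":
        raise ValueError("parser bug reached")

def TestOneInput(data: bytes) -> None:
    parse(data)  # uncaught exceptions are reported as crashes

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```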

OSS-Fuzz’s goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, AddressSanitizer) and provides a massive distributed execution environment powered by ClusterFuzz.
Early successes

Our initial trials with OSS-Fuzz have had good results. An example is the FreeType library, which is used on over a billion devices to display text (and which might even be rendering the characters you are reading now). It is important for FreeType to be stable and secure in an age when fonts are loaded over the Internet. Werner Lemberg, one of the FreeType developers, was an early adopter of OSS-Fuzz. Recently the FreeType fuzzer found a new heap buffer overflow only a few hours after the source change:
ERROR: AddressSanitizer: heap-buffer-overflow on address 0x615000000ffa 
READ of size 2 at 0x615000000ffa thread T0
SCARINESS: 24 (2-byte-read-heap-buffer-overflow-far-from-bounds)
   #0 0x885e06 in tt_face_vary_cvt src/truetype/ttgxvar.c:1556:31
OSS-Fuzz automatically notified the maintainer, who fixed the bug; then OSS-Fuzz automatically confirmed the fix. All in one day! You can see the full list of fixed and disclosed bugs found by OSS-Fuzz so far.

Contributions and feedback are welcome

OSS-Fuzz has already found 150 bugs in several widely used open source projects (and churns ~4 trillion test cases a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed. We believe that this approach to automated security testing will result in real improvements to the security and stability of open source software.

OSS-Fuzz is launching in Beta right now, and will be accepting suggestions for candidate open source projects. In order for a project to be accepted to OSS-Fuzz, it needs to have a large user base and/or be critical to global IT infrastructure, a general heuristic that we are intentionally leaving open to interpretation at this early stage. See more details and instructions on how to apply here.

Once a project is signed up for OSS-Fuzz, it is automatically subject to the 90-day disclosure deadline for newly reported bugs in our tracker (see details here). This matches industry’s best practices and improves end-user security and stability by getting patches to users faster.

Help us ensure this program truly serves the open source community and the internet, which rely on this critical software: contribute and leave your feedback on GitHub.