Announcing the Atheris Python Fuzzer

Friday, December 4, 2020

Fuzz testing is a well-known technique for uncovering programming errors. Many of these detectable errors have serious security implications. Google has found thousands of security vulnerabilities and other bugs using this technique. Fuzzing is traditionally used on native languages such as C or C++, but last year, we built a new Python fuzzing engine. Today, we’re releasing the Atheris fuzzing engine as open source.
What can Atheris do?
Atheris can be used to automatically find bugs in Python code and native extensions. Atheris is a “coverage-guided” fuzzer, which means that it repeatedly tries various inputs to your program while watching how it executes, attempting to find interesting code paths.

One of the best uses for Atheris is differential fuzzing: fuzzers that look for differences in the behavior of two libraries intended to do the same thing. One of the example fuzzers packaged with Atheris does exactly this, comparing the Python “idna” package to the C “libidn2” package. Both packages are intended to decode and resolve internationalized domain names. However, the example fuzzer idna_uts46_fuzzer.py shows that they don’t always produce the same results. If you ever decided to purchase a domain containing the Unicode codepoints U+0130 and U+1DF9, you’d discover that the idna and libidn2 libraries resolve that domain to two completely different websites.
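To make the pattern concrete, here is a minimal differential-fuzzing sketch (not the packaged idna_uts46_fuzzer.py, which drives the real libraries). The decode_a and decode_b functions are hypothetical stand-ins for the two implementations being compared, and the input string is derived from the fuzzer-provided bytes with atheris.FuzzedDataProvider:

import atheris
import sys


def decode_a(domain):
    # Hypothetical implementation A; in the real fuzzer this would be
    # the Python "idna" package.
    return domain.lower()


def decode_b(domain):
    # Hypothetical implementation B; in the real fuzzer this would be
    # a binding to the C "libidn2" library.
    return domain.lower()


def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    domain = fdp.ConsumeUnicodeNoSurrogates(64)
    try:
        result_a = decode_a(domain)
        result_b = decode_b(domain)
    except Exception:
        return  # Only disagreements are interesting here, not individual errors.
    if result_a != result_b:
        raise RuntimeError(
            "Disagreement on %r: %r vs %r" % (domain, result_a, result_b))


atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()

Any input on which the two implementations disagree raises an exception, which Atheris reports along with the input that triggered it.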

In general, Atheris is useful on pure Python code whenever you have a way of expressing what the “correct” behavior is - or at least what behaviors are definitely not correct. This could be as complex as custom code in the fuzzer that evaluates the correctness of a library’s output, or as simple as a check that no unexpected exceptions are raised. This last case is surprisingly useful. While the worst outcome from an unexpected exception is typically denial of service (by causing a program to crash), unexpected exceptions tend to reveal more serious bugs in libraries. As an example, the one YAML parsing library we tested Atheris on says that it will only raise YAMLErrors; however, yaml_fuzzer.py detects numerous other exceptions, such as ValueError from trying to interpret “-_” as an integer, or TypeError from trying to use a list as a key in a dict. (Bug report.) This indicates flaws in the parser.
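Checking for unexpected exceptions is simple to express in a fuzzer. The sketch below is in the spirit of the packaged yaml_fuzzer.py, but is not a copy of it; it assumes PyYAML (imported as yaml) as the parser under test and treats anything other than the documented YAMLError as a finding:

import atheris
import sys

import yaml  # PyYAML, assumed here as the parser under test.


def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(512)
    try:
        yaml.safe_load(text)
    except yaml.YAMLError:
        pass  # The documented failure mode; not a bug.
    # Any other exception propagates, and Atheris reports the input that
    # triggered it.


atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()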

Finally, Atheris supports fuzzing native Python extensions, using libFuzzer. libFuzzer is a fuzzing engine integrated into Clang, typically used for fuzzing C or C++. When used with Atheris, libFuzzer can still find all the bugs previously described, but can also find memory corruption bugs that only exist in native code. Atheris supports the Clang sanitizers Address Sanitizer and Undefined Behavior Sanitizer, which make it easy to detect corruption when it happens, rather than much later. In one case, the author of this document found an LLVM bug (now fixed) using an Atheris fuzzer.
What does Atheris support?
Atheris supports fuzzing Python code and native extensions in Python 2.7 and Python 3.3+. When fuzzing Python code, using Python 3.8+ is strongly recommended, as it allows for much better coverage information. When fuzzing native extensions, Atheris can be used in combination with Address Sanitizer or Undefined Behavior Sanitizer.

OSS-Fuzz is a fuzzing service hosted by Google, where we execute fuzzers on open source code free of charge. OSS-Fuzz will soon support Atheris!
How can I get started?
Take a look at the repo, in particular the examples. For fuzzing pure Python, it’s as simple as:

pip3 install atheris

And then, just define a TestOneInput function that runs the code you want to fuzz:

import atheris
import sys


def TestOneInput(data):
    if data == b"bad":
        raise RuntimeError("Badness!")


atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()

That’s it! Atheris will repeatedly invoke TestOneInput and monitor the execution flow, until a crash or exception occurs.

For more details, including how to fuzz native code, see the README.


By the Google Information Security team

From MLPerf to MLCommons: moving machine learning forward

Thursday, December 3, 2020

Today, the community of machine learning researchers and engineers behind the MLPerf benchmark is launching an open engineering consortium called MLCommons. For us, this is the next step in a journey that started almost three years ago.

[Image: MLCommons chart]
Early in 2018, we gathered a group of industry researchers and academics who had published work on benchmarking machine learning (ML) in a conference room to propose the creation of an industry-standard benchmark for measuring ML performance. Everyone had doubts: creating an industry standard is challenging under the best conditions, and ML was (and is) a poorly understood stochastic process running on extremely diverse software and hardware. Yet we all agreed to try.

Together with a growing community of researchers and academics, we created a new benchmark called MLPerf. The effort took off. MLPerf is now an industry standard with over 2,000 submitted results and multiple benchmark suites that span systems from smartphones to supercomputers. Over that time, the fastest result submitted to MLPerf for training the classic ML network ResNet improved by over 13x.

We created MLPerf because we believed in three principles:
  • Machine learning has tremendous potential: Already, machine learning helps billions of people find and understand information through tools like Google’s search engine and translation service. Active research in machine learning could one day save millions of lives through improvements in healthcare and automotive safety.
  • Transforming machine learning from promising research into widespread industrial practice requires investment in common infrastructure -- especially metrics: Much like computing in the ‘80s, real innovation is mixed with hype, and adopting new ideas is slow and cumbersome. We need good metrics to identify the best ideas, and good infrastructure to make adoption of new techniques fast and easy.
  • Developing common infrastructure is best done by an open, fast-moving collaboration: We need the vision of academics and the resources of industry. We need the agility of startups and the scale of leading tech companies. Working together, a diverse community can develop new ideas, launch experiments, and rapidly iterate to arrive at shared solutions.
Our belief in the principles behind MLPerf has only gotten stronger, and we are excited to be part of the next step for the MLPerf community with the launch of MLCommons.

MLCommons aims to accelerate machine learning to benefit everyone. MLCommons will build a common set of tools for ML practitioners, including:
  • Benchmarks to measure progress: MLCommons will leverage MLPerf to measure speed, but will also expand benchmarking to other aspects of ML, such as accuracy and algorithmic efficiency. ML models continue to increase in size, and consequently in cost. Sustaining growth in capability will require learning how to do more (accuracy) with less (efficiency).
  • Public datasets to fuel research: MLCommons’ new People’s Speech project seeks to develop a public dataset that, in addition to being larger than any other public speech dataset by more than an order of magnitude, better reflects diverse languages and accents. Public datasets drive machine learning like nothing else; consider ImageNet’s impact on the field of computer vision.
  • Best practices to accelerate development: MLCommons will make it easier to develop and deploy machine learning solutions by fostering consistent best practices. For instance, MLCommons’ MLCube project provides a common container interface for machine learning models to make them easier to share, experiment with (including benchmarking), develop, and ultimately deploy.
Google believes in the potential of machine learning, the importance of common infrastructure, and the power of open, collaborative development. Our leadership in co-founding, and deep support in sustaining, MLPerf and MLCommons has echoed our involvement in other efforts like TensorFlow and NNAPI. Together with the MLCommons community, we can improve machine learning to benefit everyone.

Want to get involved? Learn more at mlcommons.org.


By Peter Mattson – ML Metrics, Naveen Kumar – ML Performance, and Cliff Young – Google Brain

Accessibility best practices for virtual events

Tuesday, December 1, 2020

[Image: four hands together in red, blue, green, and yellow]
As everyone knows, most of our open source events have shifted from in-person to digital this year. However, due to the sudden change, not everything is accessible. We took this issue seriously and decided to work with one of our accessibility experts, Neighborhood Access, to share best practices with our community. We hope this will help you organize your digital events!