By Jennifer Lynch | July 2, 2020
Should the police be able to force Google to turn over identifying information on every phone within a certain geographic area—potentially hundreds or thousands of devices—just because a crime occurred there? We don’t think so. As we argued in an amicus brief filed recently in People v. Dawes, a case in San Francisco Superior Court, this is a general search and violates the Fourth Amendment.
The court is scheduled to hear the defendant’s motion to quash and suppress evidence on July 7, 2020.
In 2018, police in San Francisco were trying to figure out who robbed a house in a residential neighborhood. They didn’t have a suspect. Instead of using traditional investigative techniques to find the culprit, they turned to a new surveillance tool that’s been gaining interest from police across the country—a “geofence warrant.”
Unlike traditional warrants for electronic records, a geofence warrant doesn’t start with a suspect or even an account; instead it directs Google to search a vast database of location history information to identify every device (for which Google has data) that happened to be in the area around the time of the crime, regardless of whether the device owner has any link at all to the crime under investigation. Because these investigations start with a location before they have a suspect, they are also frequently called “reverse location” searches.
Google has a particularly robust, detailed, and searchable collection of location data, and, to our knowledge, it is the only company that complies with these warrants. Much of what we know about the data Google provides to police and how it provides that data comes from a declaration and an amicus brief it filed in a Virginia case called United States v. Chatrie. According to Google, the data it provides to police comes from its database called “Sensorvault,” where it stores location data for one of its services called “Location History.” Google collects Location History data from different sources, including wifi connections, GPS and Bluetooth signals, and cellular networks. This makes it much more precise than cell site location information and allows Google to estimate a device’s location to within 20 meters or less. This precision also allows Google to infer where a user has been (such as to a ski resort), what they were doing at the time (such as driving), and the path they took to get there.
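To make the shape of this data concrete, here is a purely illustrative sketch of what a single Location History record might contain, based only on what the filings describe: a pseudonymous device identifier, a timestamp, coordinates, an accuracy radius of roughly 20 meters, the signal source, and an inferred activity. The field names and structure are hypothetical, not Google's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    """Hypothetical shape of a single Location History entry.

    Field names are illustrative only; they are not Google's schema.
    """
    device_id: str          # pseudonymous numerical identifier
    timestamp: datetime     # when the location was recorded
    latitude: float
    longitude: float
    accuracy_m: float       # estimated error radius in meters (e.g., ~20 m)
    source: str             # "gps", "wifi", "bluetooth", or "cell"
    inferred_activity: str  # e.g., "driving", "walking"

# Example record: a device estimated to within 20 meters while driving.
example = LocationRecord(
    device_id="device-000123",
    timestamp=datetime(2018, 1, 1, 14, 30),
    latitude=37.7749,
    longitude=-122.4194,
    accuracy_m=20.0,
    source="gps",
    inferred_activity="driving",
)
```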
Location History is offered to users on both Android and iOS devices, but users must opt in to data collection. Google states that only about one-third of its users have opted in to Location History, but this represents “numerous tens of millions of Google users.”
Police have been increasingly seeking access to this treasure trove of data over the last few years via geofence warrants. These warrants reportedly date to 2016, and their use has grown rapidly: Google states that it received 1,500% more geofence warrants in 2018 than in 2017, and 500% more in 2019 than in 2018. According to the New York Times, the company received as many as 180 requests in a single week in 2019.
Geofence warrants typically follow a similar multi-stage process, which appears to have been created by Google. For the first stage, law enforcement identifies one or more geographic areas and time periods relevant to the crime. The warrant then requires Google to provide information about any devices, identified by a numerical identifier, that happened to be in the area within the given time period. Google says that, to comply with this first stage, it must search through its entire store of Location History data to identify responsive data—data on tens of millions of users, nearly all of whom are located well outside the geographic scope of the warrant. Google has also said that the volume of data it produces at this stage depends on the size and nature of the geographic area and length of time covered by the warrant, which vary considerably from one request to another, but the company once provided the government with identifying information for nearly 1,500 devices.
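As a rough illustration of what that first stage implies, the sketch below filters an entire (hypothetical) store of location records down to the de-identified device numbers that fall inside a rectangular geofence during the warrant's time window. It assumes the LocationRecord structure sketched above and a simple bounding box; the area described in a real warrant may be any shape, and Google's internal process is not public.

```python
from datetime import datetime

def stage_one_geofence(records, lat_min, lat_max, lon_min, lon_max,
                       start: datetime, end: datetime):
    """Return de-identified device IDs with a record inside the fence and window.

    Note: the filter has to examine every record in the store, i.e. data on
    all opted-in users, to find the handful that are responsive.
    """
    responsive = set()
    for r in records:  # scans the entire location store
        if not (start <= r.timestamp <= end):
            continue
        if lat_min <= r.latitude <= lat_max and lon_min <= r.longitude <= lon_max:
            responsive.add(r.device_id)
    return sorted(responsive)
```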
After Google releases the initial de-identified pool of responsive data, police then, in the second stage, demand that Google provide additional location history, outside of the initially defined geographic area and time frame, for a subset of users that the officers, at their own discretion, determine are “relevant” to their investigation. Finally, in the third stage, officers demand that Google provide identifying information for a smaller subset of devices, including the user’s name, email address, device identifier, phone number, and other account information. Again, officers rely solely on their own discretion to determine this smaller subset and which devices to target for further investigation.
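Continuing the same illustration, stages two and three might look like the following: officers pick a subset of the stage-one device numbers and receive wider location history for them, then pick a smaller subset and receive account-identifying details. The account fields mirror what the post lists (name, email address, device identifier, phone number); everything else here is hypothetical.

```python
def stage_two_expand(records, selected_device_ids, start, end):
    """Return additional location history for officer-selected devices,
    beyond the original geofence and over a (possibly wider) time window."""
    selected = set(selected_device_ids)
    return [r for r in records
            if r.device_id in selected and start <= r.timestamp <= end]

def stage_three_identify(accounts, selected_device_ids):
    """Return identifying account information for a smaller subset of devices.

    `accounts` is a hypothetical mapping from device_id to account details
    such as name, email address, and phone number.
    """
    return {d: accounts[d] for d in selected_device_ids if d in accounts}
```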
There are many problems with this kind of search. First, most of the information provided to law enforcement in response to a geofence warrant does not pertain to individuals suspected of the crime. Second, because not all device owners have opted in to Location History, search results are both over- and under-inclusive. Finally, Google has said there is only an estimated 68% chance that a user is actually where Google thinks they are, so the devices Google identifies in response to a geofence warrant may not even have been within the geographic area defined by the warrant (and are therefore outside its scope).
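One way to see why the results can be over-inclusive: a 68% confidence estimate means each reported point carries an error radius, so a device whose estimated point falls inside the fence may in fact have been outside it, and vice versa. The sketch below flags that ambiguity for a simple circular fence; the distance math is approximate and purely illustrative, not how Google actually scores its estimates.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters between two points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_is_ambiguous(record, fence_lat, fence_lon, fence_radius_m):
    """True when the record's error circle straddles the fence boundary,
    i.e. the device may be inside or outside regardless of the point estimate."""
    d = haversine_m(record.latitude, record.longitude, fence_lat, fence_lon)
    # Ambiguous if the fence edge lies within the record's accuracy radius.
    return abs(d - fence_radius_m) < record.accuracy_m
```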
Unsurprisingly, these problems have led to investigations that ensnare innocent individuals. In one case, police sought detailed information about a man in connection with a burglary after seeing his travel history in the first step of a geofence warrant. However, the man’s travel history had been recorded by an exercise-tracking app he used to log months of bike rides—rides that happened to take him past the site of the burglary. Investigators eventually acknowledged he should not have been a suspect, but only after the man had hired an attorney and had his life upended for a time.
This example shows why geofence warrants are so pernicious and why they violate the Fourth Amendment. They lack particularity because they don’t properly and specifically describe an account or a person’s data to be seized, and they result in overbroad searches that can ensnare countless people with no connection to the crime. These warrants leave it up to the officers to decide for themselves, based on no concrete standards, who is a suspect and who isn’t.
The Fourth Amendment was written specifically to prevent these kinds of broad searches.
As we argued in Dawes, a geofence warrant is a digital analog to the “general warrants” issued in England and Colonial America that authorized officers to search anywhere they liked, including people or homes—simply on the chance that they might find someone or something connected with the crime under investigation. The chief problem with searches like this is that they leave too much of the search to the discretion of the officer and can too easily result in general exploratory searches that unreasonably interfere with a person’s right to privacy. The Fourth Amendment’s particularity and probable cause requirements, as well as the requirement of judicial oversight, were designed to prevent this.
Reverse location searches are the antithesis of how our criminal justice system is supposed to work. As with other technologies that purport to pull a suspect out of thin air—like face recognition, predictive policing, and genetic genealogy searches—there’s just too high a risk they will implicate an innocent person, shifting the burden of proving guilt from the government to the individual, who now has to prove their innocence. We think these searches are unconstitutional, even with a warrant.
The defendant’s motion to quash the geofence warrant and motion to suppress the evidence will be heard in San Francisco Superior Court on July 7, 2020.
See EFF.org for the original article and links to court documents.