Apple and Google Are Introducing New Ways to Defeat Cell Site Simulators, But Is It Enough?

September 13, 2023 | By Cooper Quintin | Electronic Frontier Foundation (EFF) |

Cell-site simulators (CSS)—also known as IMSI catchers and Stingrays—are tools that law enforcement and governments use to track the location of phones, intercept or disrupt communications, spy on foreign governments, or even install malware. Cell-site simulators are also used by criminals to send spam and engage in fraud. We have written previously about the privacy implications of CSS, noting that a common tactic is to trick your phone into connecting to a fake 2G cell tower. In the U.S., every major carrier except T-Mobile has turned off its 2G and 3G networks.1
But many countries outside of the U.S. have not yet taken steps to turn off their 2G networks, and there are still areas where 2G is the only option for cellular connections. Unfortunately, almost all phones still support 2G, even those sold in countries like the U.S. where carriers no longer use the obsolete protocol. This is cause for concern: even if every 2G network were shut down tomorrow, the fact that phones can still connect to 2G networks leaves them vulnerable. Upcoming changes in iOS and Android could protect users against fake base station attacks, so let’s take a look at how they’ll work.

In 2021, Google released an optional feature for Android to turn off the ability to connect to 2G cell sites. We applauded this feature at the time. But we also suggested that other companies could do more to protect against cell-site simulators, especially Apple and Samsung, who had not made similar changes. This year more improvements are being made.

Google’s Efforts to Prevent CSS Attacks
Earlier this year Google announced another new mobile security setting for Android. This new setting allows users to prevent their phone from using a “null cipher” when making a connection with a cell tower. In a well-configured network, every connection with a cell tower is authenticated and encrypted using a symmetric cipher, with a cryptographic key generated by the phone’s SIM card and the tower it is connecting to. When the null cipher is used, however, communications are instead sent in the clear, unencrypted. Null ciphers are useful for tasks like network testing, where an engineer might need to see the content of the packets going over the wire. They are also critical for emergency calls, where connectivity is the number one priority, even if someone doesn’t have a SIM card installed. Unfortunately, fake base stations can also take advantage of null ciphers to intercept traffic from phones, such as SMS messages, calls, and non-encrypted internet traffic.

By turning on this new setting, users can prevent their connection to the cell tower from using a null cipher (except, if necessary, for a call to emergency services), thus ensuring that their connection to the cell tower is always encrypted.
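The policy this setting enforces can be sketched in a few lines. This is a conceptual illustration only: the cipher names follow 3GPP conventions (A5/0, GEA0, and NEA0 are null ciphers), but the function itself is our own, not actual Android code.

```python
# Conceptual sketch of the "disallow null cipher" setting described above.
# Connections negotiated with a null cipher are sent in the clear, so they
# are accepted only for emergency calls. Illustrative logic, not Android's
# actual implementation.

NULL_CIPHERS = {"A5/0", "GEA0", "NEA0"}

def accept_connection(cipher: str, is_emergency_call: bool = False) -> bool:
    """Decide whether the phone should proceed with a tower connection."""
    if cipher in NULL_CIPHERS:
        # Unencrypted links are permitted only for emergency calls,
        # where connectivity matters more than confidentiality.
        return is_emergency_call
    return True  # any real cipher (e.g. A5/3) is acceptable
```

A fake base station that offers only a null cipher would be refused outright, while emergency calls still go through.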

We are excited to see Google putting more resources into giving Android users tools to protect themselves from fake base stations. Unfortunately, this setting has not yet been released in vanilla Android, and it will only be available on newer phones running Android 14 or higher,2 but we hope that third-party manufacturers—especially those who make lower-cost Android phones—will bring this change to their phones as well.

Apple Is Taking Steps to Address CSS for the First Time
Apple has also finally taken steps to protect users against cell-site simulators after being called on to do so by EFF and the broader privacy and security community. Apple announced that in iOS 17, out September 18, iPhones will not connect to insecure 2G mobile towers if they are placed in Lockdown Mode. As the name implies, Lockdown Mode is a setting, originally released in iOS 16, that locks down several features for people who are concerned about being attacked by mercenary spyware or other nation-state-level attacks. This will be a huge step toward protecting iOS users from fake base station attacks, which have been used as a vector to install spyware such as Pegasus.

We are excited to see Apple taking active measures to block fake base stations and hope it will take more measures in the future, such as disabling null ciphers, as Google has done.

Samsung Continues to Fall Behind
Not every major phone manufacturer is taking the issue of fake base stations seriously. So far Samsung has not taken any steps to include the 2G toggle from vanilla Android, nor has it indicated that it plans to any time soon. Hardware vendors often heavily modify Android before distributing it on their phones, so even though the setting is available in the Android Open Source Project, Samsung has so far chosen not to make it available on their phones. Samsung also failed to protect its users earlier this year when for months it did not take action against a fake version of the Signal app containing spyware hosted in the Samsung app store. These failures to act suggest that Samsung considers its users’ security and privacy to be an afterthought. Those concerned with the security and privacy of their mobile devices should strongly consider using other hardware.

Recommendations
We applaud the changes that Google and Apple are introducing with their latest round of updates. Cell-site simulators continue to be a problem for privacy and security all over the world, and it’s good that mobile OS manufacturers are starting to take the issue seriously.

We recommend that iOS users who are concerned about fake base station attacks turn on Lockdown Mode in anticipation of the new protections in iOS 17. Android users with a Pixel 6 or newer phone should disable 2G, and should disable null ciphers as soon as their phone supports the setting.


1. T-Mobile plans to disable its 2G network on April 2, 2024.
2. Specifically, phones must be running the latest version of the hardware abstraction layer (HAL).


Read the original article HERE.

Mozilla: Here’s Why Your Connected Car’s Privacy Sucks

September 6, 2023 | By Rob Pegoraro | PC Mag |

The Mozilla Foundation’s latest report flunks all 25 car brands evaluated, with Tesla ranked worst.

Buying a new car means your privacy might as well be left up on blocks, according to a study released Wednesday by the Mozilla Foundation.

“Modern cars are a privacy nightmare,” researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald write (emphasis in the original) in their introduction to that report, published under the equally scathing headline “It’s Official: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy.”

The report, based on what the authors say was “over 600 hours researching the car brands’ privacy practices,” concludes that the 25 carmakers profiled might as well have been asleep at the wheel for the last 10 years of data breaches: They collect too much data from the sensors stuffed into their increasingly connected vehicles, share or sell too much of that, and grant drivers too little control over this collection and sharing.

Tesla fared worst of them all in Mozilla’s evaluation, with demerits in all five categories (data use, data control, track record, security, and AI), notwithstanding the upfront statement in Tesla’s privacy policy that it “never sells or rents your data to third-party companies.”

The researchers instead objected to the volume of data that Tesla vehicles collect, the history of it being misused (such as April’s report that employees shared video from Tesla car cameras), language that suggests Tesla won’t insist on a court order before handing over data to law-enforcement investigators, and what they regarded as opaque and untrustworthy “Autopilot” and “Full Self-Driving” systems.

Sixteen brands from eight companies—Ford and its Lincoln brand; Honda and its Acura subsidiary; Hyundai and Kia; GM’s Cadillac, Chevrolet, Buick, and GMC; Mercedes-Benz; Nissan; Toyota and Lexus; and Volkswagen Group’s Audi and VW—received a failing grade on the first four of those categories.

Nissan drew extra scorn from the researchers in an all-caps verdict that suggests this company, not Tesla, should have been at the far end of the junkyard: “THEY STINK AT PRIVACY!”

A key factor in that harsh judgment was a facepalm-inducing privacy policy that says Nissan may collect data points up to and including “sexual activity” (per the policy, if they somehow come up in conversations between customers and Nissan employees) and build a marketing profile that covers your “psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

Another six makes from three firms (BMW; Stellantis brands Chrysler, Dodge, Fiat, and Jeep; and Subaru) only got dinged in the data use, data control, and security categories. Two other makes, Renault and its subsidiary Dacia, escaped with failing marks in just data use and security, but since neither sells in the United States, that’s of little benefit to US customers.

(It’s unclear why Mozilla included those last two brands in a report with so many references to US law enforcement instead of, say, Mini, Rivian or Volvo.)

The report, the latest chapter in the “Privacy Not Included” series that the nonprofit behind the Firefox browser began publishing in 2017, says Mozilla contacted all of these companies with requests for comment. But it received vague-to-useless replies from only Ford, Honda, and Mercedes.

It further notes that all of these companies besides Renault and Tesla have signed the Consumer Privacy Protection Principles document (PDF) first released in 2014 by the Alliance for Automotive Innovation, but contends that none follow those terms. For example, that document says carmakers should require a warrant or court order before handing over location and other sensitive information to law enforcement, but Hyundai’s privacy notice suggests that “informal” requests from police may suffice.

That Washington-based trade group wrote those principles after an outcry over remarks at CES in January 2014 by Ford executive Jim Farley in which he seemed to brag about how much data the company collected from its cars. Apparently, nearly a decade hasn’t been enough time for this industry to learn about the virtues of data minimization and writing privacy policies that expressly limit their action instead of maximizing their future flexibility.

Mozilla’s report doesn’t offer much actionable advice to drivers beyond opting out of whatever categories of data collection are available and ensuring you factory-reset a car’s software before selling or trading it.

It does, however, advise voting for candidates who will enact stronger privacy regulations, something that has happened in Europe with the General Data Protection Regulation and in California with the California Consumer Privacy Act, as noted by the report’s nod to that 2018 law, but which has eluded the grasp of Congress to date.

Read the original article HERE

Connected cars and cybercrime: A primer

September 5, 2023 | By Rainer Vosseler | Help Net Security |

Original equipment manufacturers (OEMs) and their suppliers who are weighing how to invest their budgets might be inclined to slow-pedal investment in addressing cyberthreats. To date, the attacks they have encountered have remained relatively unsophisticated and not especially harmful.

Analysis of chatter in criminal underground message exchanges, however, reveals that the pieces exist for multi-layered, widespread attacks in the coming years. And given that the automotive industry’s customary development cycles are long, waiting for the more sophisticated cyberattacks on connected cars to appear is not a practical option.

What should the world’s automotive OEMs and suppliers do now to prepare for the inevitable transition from today’s manual, car-modding hacks to tomorrow’s user impersonation, account thefts and other possible attacks?

How connectivity is changing car crime
As our vehicles become more connected to the outside world, the attack surface available to cybercriminals is rapidly increasing, and new “smart” features on the current generation of vehicles worldwide open the door for new threats.

Our new “smartphones on wheels”—always connected to the internet, utilizing many apps and services, collecting tremendous amounts of data from multiple sensors, receiving over-the-air software updates, etc.—stand to be attacked in similar ways to how our computers and handheld devices already are today.

Automotive companies need to think now about those potential future threats. A car that an OEM is planning today will likely reach the market in three to five years. It will need to be already secured against the cyberthreat landscape that might be in existence by then. If the car hits the market without the required cybersecurity capabilities, the job of securing it will become significantly more difficult.

The complex attacks on connected cars that industry researchers have devised portend substantially more frequent, devious, and harmful attacks. Fortunately, attacks in the automotive industry have so far largely been limited to these theoretical exercises. Car modding – e.g., unlocking a vehicle’s features or manipulating mileage – is as far as real-world implementation has gotten.

Connectivity limits some of the typical options that are available to criminals specializing in car crime. The trackability of contemporary vehicles makes reselling stolen cars significantly more challenging, and even if a criminal can manage to take a vehicle offline, the associated loss of features renders the car less valuable to potential buyers.

Still, as connectivity across and beyond vehicles grows more pervasive and complicated, so will the threat. How are attacks on tomorrow’s connected cars likely to evolve?

Emerging fronts for next-generation attacks
Because the online features of connected cars are managed via user accounts, attackers may seek access to those accounts to attain control over the vehicle. Takeover of these car-user accounts looms as the emerging front for attack for would-be car cybercriminals and even criminal organizations, creating ripe possibilities for user impersonation and the buying and selling of the accounts.

Stealing online accounts and selling them to rogue collaborators who can act on that knowledge tee up a range of future possible attacks for tomorrow’s automotive cybercriminals:

– Selling car user accounts

– Impersonating users via phishing, keyloggers or other malware

– Remote unlocking, starting and controlling connected cars

– Opening cars and looting for valuables or committing other one-off crimes

– Stealing cars and selling for parts

– Locating cars to pinpoint owners’ residential addresses and to identify when owners are not home

The crime triangle takes shape
Connected car cybercrime is still in its infancy, but criminal organizations in some nations are beginning to recognize the opportunity to exploit vehicle connectivity. Surveying today’s underground message forums quickly reveals that the pieces could quickly fall into place for more sophisticated automotive cyberattacks in the years ahead. Discussions on underground crime forums around data that could be leaked and needed/available software tools to enable attacks are already intensifying.

A post from a publicly searchable auto-modders forum about a vehicle’s multi-displacement system (MDS) for adjusting engine performance is symbolic of the current activity and possibilities.

Another, in which a user on a criminal underground forum offers a data dump from a car manufacturer, points to the threats that are likely coming to the industry.

Though they still seem limited to ordinary stolen data, compromises and network access are already for sale in the underground. The crime triangle (as defined by crime analysts) for sophisticated automotive cyberattacks is solidifying:

          – Target — The connected cars that serious criminals will seek to exploit in the years ahead are becoming more and more prevalent in the global marketplace.

          – Desire — Criminal organizations will find ample market incentive to monetize stolen car accounts.

          – Opportunity — Hackers are steeped in inventive methods to hijack people’s accounts via phishing, infostealing, keylogging, etc.

Penetrating and exploiting connected cars
The ways of seizing access to connected-car users’ data are numerous: introducing malicious in-vehicle infotainment (IVI) apps, exploiting insecure IVI apps and network connections, taking advantage of insecure browsers to steal private data, and more.

Also, there’s a risk of exploitation of personally identifiable information (PII) and vehicle telemetric data (on a car’s condition, for example) stored in smart cockpits, to inform extremely personalized and convincing phishing emails.

Here’s one method by which it could happen:
– An attacker identifies vulnerabilities that can be exploited in a browser.

– The attacker creates a professional, attractive webpage offering hard-to-resist promotions to unsuspecting users (fast-food coupons, discounts on vehicle maintenance for the user’s specific model and year, insider stock information, etc.).

– The user is lured into visiting the malicious webpage, which bypasses the browser’s security mechanisms.

– The attacker installs backdoors in the vehicle IVI system, without the user’s knowledge or permission, to obtain various forms of sensitive data (driving history, conversations recorded by manufacturer-installed microphones, videos recorded by built-in cameras, contact lists, text messages, etc.)

 

The possible crimes enabled by such a process are wide ranging. By creating a fraudulent scheme to steal the user’s identity, for example, the attacker would be able to open accounts on the user’s behalf or even trick an OEM service team into approving verification requests—at which point the attacker could remotely open the vehicle’s doors and allow a collaborator to steal the car.

Furthermore, the attackers could use the backdoors that they installed to infiltrate the vehicle’s central gateway via the IVI system by sending malicious messages to electronic control units (ECUs). A driver could not only lose control of the car’s IVI system and its geolocation and audio and video data, but also the ability to control speed, steering and other safety-critical functions of the vehicle, as well as the range of vital data stored in its digital clusters.

Positioning today for tomorrow’s threat landscape
Until now there might have been reluctance among OEMs to invest in averting cyberattacks, which haven’t yet materialized in the real world. But a 2023 Gartner Research report, “Automotive Insight: Vehicle Cybersecurity Ecosystem Creates Partnership Opportunities,” is among the industry research documenting a shift in priorities.

Driven by factors such as the significant risk of brand and financial damage from cyberattacks via updatable vehicle functions controlled by software, as well as emerging international regulatory pressures such as the United Nations (UN) regulation 155 (R155) and ISO/SAE 21434, OEMs have begun to emphasize cybersecurity.

And today, they are actively evaluating and, in some cases, even implementing a few powerful capabilities:
– Security for IVI privacy and identity

– Detection of IVI app vulnerabilities

– Monitoring of IVI app performance

– Protection of car companion apps

– Detection of malicious URLs

– 24/7 surveillance of personal data

Investing in cybersecurity at the design stage, rather than after breaches, will ultimately prove less expensive and more effective at preventing or mitigating serious crimes – theft of money, vehicles, and identities from compromised personal data – by the world’s savviest and most ambitious criminals.

Read the original article HERE

Using technology to help find missing children

September 8, 2023 | Fox 5 Atlanta | By Denise Dillon |

ATLANTA – The National Center for Missing and Exploited Children has a new way to alert people to be on the lookout for missing kids in their area.

The organization has helped law enforcement across the country find more than 450,000 missing children since 1994. One thing it does in every case is push the child’s photo out to the public as quickly as possible. It has done this through posters, billboards, social media, and now QR codes.

“It puts the most recent missing child images literally in the palm of your hand,” said John Bischoff with the National Center for Missing & Exploited Children.

Just by scanning the QR code with your cell phone, you’ll be able to see photos of all the missing children in your area.

“These kids are out there. They need our help. It just takes the right person to help them out,” said Bischoff.

Bischoff says on any given day there are about 7,000 missing children in the U.S.

“It’s a scary number. These are kids where their families don’t know their whereabouts,” said Bischoff.

Bischoff understands the importance of putting out pictures of these children, in hopes that someone sees them or has information about them.

“We’d send out loads of printed posters just to keep engaged with the community to remind them this child is missing in your area,” said Bischoff.

The QR code gets the images out faster and lets you see images of missing children within 50 miles of your location.

I did it from Marietta, and photos of 51 missing children popped up with their names, ages, how long they’ve been missing, and the last place they were seen. By clicking on a photo, you can easily submit a tip or share the image.
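A location-radius lookup like this one can be sketched as a simple great-circle distance filter. This is our own illustration under assumed record fields and function names; it is not how NCMEC’s service is actually implemented.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    R = 3958.8  # mean Earth radius in miles
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

def nearby(records, lat, lon, radius_miles=50.0):
    """Keep only records (dicts with 'lat'/'lon' keys) within the radius."""
    return [r for r in records
            if haversine_miles(lat, lon, r["lat"], r["lon"]) <= radius_miles]
```

Scanning from Marietta (roughly 33.95 N, 84.55 W), a filter like this would keep Atlanta-area records and drop ones hundreds of miles away.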

“We know someone is going to use this QR code. We know they’re going to recognize something. All it takes is one set of eyes to be a hero,” said Bischoff.

The QR code was launched just a couple of weeks ago. Now the National Center for Missing & Exploited Children is working on partnerships to get the QR code on water bottles, chip bags, storefronts – anywhere it can.

You can scan the QR code at: https://www.missingkids.org/blog/2023/new-posters-qr-code

Read the original article HERE

Meta plans to roll out default end-to-end encryption for Messenger by the end of the year

August 23, 2023 | By Ivan Mehta | TechCrunch |

Meta said today that the company plans to enable end-to-end encryption by default for Messenger by the end of this year. The tech giant is also expanding its test of end-to-end encryption features to “millions more people’s chats.”

The company has been building end-to-end encryption features in Messenger for years now. However, most of them have been optional or experimental. In 2016, Meta started rolling out end-to-end encryption protection through a “secret conversations” mode. In 2021, it introduced such an option for voice and video calls on the app. The company made a similar move to provide an end-to-end encryption option for group chats and calls in January 2022. In August 2022, Meta started testing end-to-end encryption for individual chats.

There is increasing pressure on Meta to enable end-to-end encryption so that neither the company nor others can access users’ chat messages. Protecting individual communications has become more important since a girl and her mother in Nebraska pleaded guilty to abortion-related charges in July, after Meta handed over the girl’s DMs to police. Last year, police prosecuted the 17-year-old based on Messenger direct-message data provided by Meta, shortly after the Supreme Court overturned Roe v. Wade, the 1973 decision that legalized abortion.

In a letter to the digital rights advocacy group Fight for the Future (via The Verge) this month, Meta’s deputy privacy officer Rob Sherman said that it will roll out end-to-end encryption to Instagram DMs after the Messenger rollout. He also mentioned that “the testing phase has ended up being longer than we anticipated” because of engineering challenges.

In a blog post, the company explained that there were significant challenges in building out encryption features for Messenger. The company had to shed the old server architecture and build a new way for people to manage their chat history through protections like a PIN.

Meta added that it had to rebuild over 100 features, like showing link previews in conversations, to accommodate end-to-end encryption. The company’s popular messaging app WhatsApp has had end-to-end encryption for years, and in recent years it has figured out a way to support multiple devices on one account without breaking encryption. Meta said that the Messenger team is learning lessons from WhatsApp to implement end-to-end encryption.
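The principle behind end-to-end encryption can be illustrated with a toy sketch: encryption and decryption happen only on the two devices, so the relaying server never sees plaintext. A one-time pad stands in for the real cryptography here; actual deployments like WhatsApp’s build on the far more sophisticated Signal protocol.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy one-time-pad encryption: XOR each message byte with a key byte."""
    assert len(key) == len(plaintext), "pad must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# End-to-end: the key exists only on the two devices, never on the server.
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = encrypt(message, key)          # all the relay server ever sees
assert decrypt(ciphertext, key) == message  # only a key holder can read it
```

Because the server stores and forwards only `ciphertext`, even a subpoena served on the operator cannot yield the message content, which is exactly the property at stake in the Nebraska case above.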

After the Nebraska incident, multiple organizations, including Amnesty International, Access Now, and Fight for the Future, petitioned Meta and other platforms to enable end-to-end encryption for private chats.

Authorities around the world have been exploring rules that could put encryption in messaging apps at risk. While Meta has pushed back on these proposals through WhatsApp to support end-to-end encryption, it has yet to fully build out these protections for Messenger and Instagram DMs.

Read the original article HERE.