Mozilla: Here’s Why Your Connected Car’s Privacy Sucks

September 6, 2023 | By Rob Pegoraro | PC Mag |

The Mozilla Foundation’s latest report flunks all 25 car brands evaluated, with Tesla ranked worst.

Buying a new car means your privacy might as well be left up on blocks, according to a study released Wednesday by the Mozilla Foundation.

“Modern cars are a privacy nightmare,” researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald write (emphasis in the original) in their introduction to that report, published under the equally scathing headline “It’s Official: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy.”

The report, based on what the authors say was “over 600 hours researching the car brands’ privacy practices,” concludes that the 25 carmakers profiled might as well have been asleep at the wheel for the last 10 years of data breaches: They collect too much data from the sensors stuffed into their increasingly connected vehicles, share or sell too much of that, and grant drivers too little control over this collection and sharing.

Tesla fared worst of them all in Mozilla’s evaluation, with demerits in all five categories (data use, data control, track record, security, and AI), notwithstanding the upfront statement in Tesla’s privacy policy that it “never sells or rents your data to third-party companies.”

The researchers instead objected to the volume of data that Tesla vehicles collect, the history of it being misused (such as April’s report that employees shared video from Tesla car cameras), language that suggests Tesla won’t insist on a court order before handing over data to law-enforcement investigators, and what they regarded as opaque and untrustworthy “Autopilot” and “Full Self-Driving” systems.

Sixteen brands from eight companies—Ford and its Lincoln brand; Honda and its Acura subsidiary; Hyundai and Kia; GM’s Cadillac, Chevrolet, Buick, and GMC; Mercedes-Benz; Nissan; Toyota and Lexus; and Volkswagen Group’s Audi and VW—received a failing grade on the first four of those categories.

Nissan drew extra scorn from the researchers in an all-caps verdict that suggests this company, not Tesla, should have been at the far end of the junkyard: “THEY STINK AT PRIVACY!”

A key factor in that harsh judgment was a facepalm-inducing privacy policy that says Nissan may collect data points up to and including “sexual activity” (per the policy, if they somehow come up in conversations between customers and Nissan employees) and build a marketing profile that covers your “psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

Another six makes from three firms (BMW; Stellantis brands Chrysler, Dodge, Fiat, and Jeep; and Subaru) only got dinged in the data use, data control and security categories. Two other makes, Renault and its subsidiary Dacia, escaped with failing marks in data use and security–but since neither sells in the United States, that’s of little benefit to US customers.

(It’s unclear why Mozilla included those last two brands in a report with so many references to US law enforcement instead of, say, Mini, Rivian or Volvo.)

The report, the latest chapter in the “Privacy Not Included” series that the nonprofit behind the Firefox browser began publishing in 2017, says Mozilla contacted all of these companies with requests for comment. But it received vague-to-useless replies from only Ford, Honda, and Mercedes.

It further notes that all of these companies besides Renault and Tesla have signed the Consumer Privacy Protection Principles document (PDF) first released in 2014 by the Alliance for Automotive Innovation but contends that none follow those terms. For example, that document says carmakers should require a warrant or court order before handing over location and other sensitive information to law enforcement, but Hyundai’s privacy notice suggests that “informal” requests from police may suffice.

That Washington-based trade group wrote those principles after an outcry over remarks at CES in January 2014 by Ford executive Jim Farley in which he seemed to brag about how much data the company collected from its cars. Apparently, nearly a decade hasn’t been enough time for this industry to learn the virtues of data minimization and of writing privacy policies that expressly limit their actions instead of maximizing their future flexibility.

Mozilla’s report doesn’t offer much actionable advice to drivers beyond opting out of whatever categories of data collection are available and ensuring you factory-reset a car’s software before selling or trading it.

It does, however, advise voting for candidates who will enact stronger privacy regulations–something that’s happened in Europe with the General Data Protection Regulation and in California with the California Consumer Privacy Act, as noted by the report’s nod to that 2018 law, but which has eluded the grasp of Congress to date.

Read the original article HERE

Connected cars and cybercrime: A primer

September 5, 2023 | By Rainer Vosseler | Help Net Security |

Original equipment manufacturers (OEMs) and their suppliers who are weighing how to invest their budgets might be inclined to slow-pedal investment in addressing cyberthreats. To date, the attacks they have encountered have remained relatively unsophisticated and not especially harmful.

Analysis of chatter in criminal underground message exchanges, however, reveals that the pieces exist for multi-layered, widespread attacks in the coming years. And given that the automotive industry’s customary development cycles are long, waiting for the more sophisticated cyberattacks on connected cars to appear is not a practical option.

What should the world’s automotive OEMs and suppliers do now to prepare for the inevitable transition from today’s manual, car-modding hacks to tomorrow’s user impersonation, account thefts and other possible attacks?

How connectivity is changing car crime
As our vehicles become more connected to the outside world, the attack surface available to cybercriminals is rapidly increasing, and new “smart” features on the current generation of vehicles worldwide open the door for new threats.

Our new “smartphones on wheels”—always connected to the internet, utilizing many apps and services, collecting tremendous amounts of data from multiple sensors, receiving over-the-air software updates, etc.—stand to be attacked in similar ways to how our computers and handheld devices already are today.

Automotive companies need to think now about those potential future threats. A car that an OEM is planning today will likely reach the market in three to five years, and it will need to arrive already secured against the cyberthreat landscape that exists by then. If the car hits the market without the required cybersecurity capabilities, securing it afterward will become significantly more difficult.

The complex attacks on connected cars that industry researchers have devised portend substantially more frequent, devious, and harmful attacks. Fortunately, attacks to this point have largely been limited to such theoretical exercises; in the real world, car modding – e.g., unlocking a vehicle’s features or manipulating mileage – is as far as implementation has gotten.

Connectivity limits some of the typical options that are available to criminals specializing in car crime. The trackability of contemporary vehicles makes reselling stolen cars significantly more challenging, and even if a criminal can manage to take a vehicle offline, the associated loss of features renders the car less valuable to potential buyers.

Still, as connectivity across and beyond vehicles grows more pervasive and complicated, so will the threat. How are attacks on tomorrow’s connected cars likely to evolve?

Emerging fronts for next-generation attacks
Because the online features of connected cars are managed via user accounts, attackers may seek access to those accounts to gain control over the vehicle. Takeover of these car-user accounts looms as the emerging attack front for would-be car cybercriminals and even criminal organizations, creating ripe possibilities for user impersonation and for the buying and selling of the accounts themselves.

Stealing online accounts and selling them to rogue collaborators who can act on that access tees up a range of possible future attacks for tomorrow’s automotive cybercriminals:

– Selling car user accounts

– Impersonating users via phishing, keyloggers or other malware

– Remote unlocking, starting and controlling connected cars

– Opening cars and looting for valuables or committing other one-off crimes

– Stealing cars and selling for parts

– Locating cars to pinpoint owners’ residential addresses and to identify when owners are not home

The crime triangle takes shape
Connected car cybercrime is still in its infancy, but criminal organizations in some nations are beginning to recognize the opportunity to exploit vehicle connectivity. Surveying today’s underground message forums quickly reveals that the pieces could fall into place for more sophisticated automotive cyberattacks in the years ahead. Discussions on underground crime forums about data that could be leaked and about the software tools needed (or already available) to enable attacks are already intensifying.

A post from a publicly searchable auto-modders forum about a vehicle’s multi-displacement system (MDS) for adjusting engine performance is symbolic of the current activity and possibilities.

Another, in which a user on a criminal underground forum offers a data dump from a car manufacturer, points to the possible threats that are likely coming to the industry.

Though they still seem to be limited to ordinary stolen data, compromises and network accesses are already for sale in the underground. The crime triangle (as defined by crime analysts) for sophisticated automotive cyberattacks is solidifying:

– Target — The connected cars that serious criminals will seek to exploit in the years ahead are becoming more and more prevalent in the global marketplace.

– Desire — Criminal organizations will find ample market incentive to monetize stolen car accounts.

– Opportunity — Hackers are steeped in inventive methods to hijack people’s accounts via phishing, infostealing, keylogging, etc.

Penetrating and exploiting connected cars
The ways of seizing access to connected-car users’ data are numerous: introducing malicious in-vehicle infotainment (IVI) apps, exploiting insecure IVI apps and network connections, taking advantage of insecure browsers to steal private data, and more.

There is also a risk that personally identifiable information (PII) and vehicle telemetry data (on a car’s condition, for example) stored in smart cockpits could be exploited to craft extremely personalized and convincing phishing emails.

Here’s one method by which it could happen:
– An attacker identifies vulnerabilities that can be exploited in a browser.

– The attacker creates a professional, attractive webpage to offer hard-to-resist promotions to unsuspecting users (fast-food coupons, discounts on vehicle maintenance for the user’s specific model and year, insider stock information, etc.)

– The user is lured into visiting the malicious webpage, which bypasses the browser’s security mechanisms

– The attacker installs backdoors in the vehicle IVI system, without the user’s knowledge or permission, to obtain various forms of sensitive data (driving history, conversations recorded by manufacturer-installed microphones, videos recorded by built-in cameras, contact lists, text messages, etc.)

 

The possible crimes enabled by such a process are wide ranging. By creating a fraudulent scheme to steal the user’s identity, for example, the attacker would be able to open accounts on the user’s behalf or even trick an OEM service team into approving verification requests—at which point the attacker could remotely open the vehicle’s doors and allow a collaborator to steal the car.

Furthermore, the attackers could use the backdoors they installed to infiltrate the vehicle’s central gateway via the IVI system by sending malicious messages to electronic control units (ECUs). A driver could lose not only control of the car’s IVI system and its geolocation, audio, and video data, but also the ability to control speed, steering, and other safety-critical functions of the vehicle, as well as the range of vital data stored in its digital clusters.

Positioning today for tomorrow’s threat landscape
Until now, there might have been reluctance among OEMs to invest in averting cyberattacks that haven’t yet materialized in the real world. But a 2023 Gartner Research report, “Automotive Insight: Vehicle Cybersecurity Ecosystem Creates Partnership Opportunities,” is among the industry research documenting a shift in priorities.

Driven by factors such as the significant risk of brand and financial damage from cyberattacks via updatable vehicle functions controlled by software, as well as emerging international regulatory pressures such as the United Nations (UN) regulation 155 (R155) and ISO/SAE 21434, OEMs have begun to emphasize cybersecurity.

And today, they are actively evaluating and, in some cases, even implementing a few powerful capabilities:
– Security for IVI privacy and identity

– Detection of IVI app vulnerabilities

– Monitoring of IVI app performance

– Protection of car companion apps

– Detection of malicious URLs (a minimal sketch of this kind of check follows this list)

– 24/7 surveillance of personal data
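None of these capabilities is exotic. As a purely illustrative sketch of the malicious-URL detection item above, the Python snippet below checks a link requested by an in-vehicle browser against a hypothetical blocklist and a simple lure heuristic; the hostnames, keywords, and function name are assumptions for illustration, not any OEM’s or vendor’s actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical blocklist and keywords; a real IVI stack would pull these
# from a regularly updated threat-intelligence feed, not hard-coded sets.
BLOCKED_HOSTS = {"coupons-free-fastfood.example", "oem-service-login.example"}
LURE_KEYWORDS = ("login", "verify", "coupon", "promo")

def is_suspicious_url(url: str) -> bool:
    """Flag a URL if its host is blocklisted or it looks like a phishing lure."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if host in BLOCKED_HOSTS:
        return True
    # Crude heuristic: plain-HTTP pages using lure-style keywords.
    if parsed.scheme != "https" and any(k in url.lower() for k in LURE_KEYWORDS):
        return True
    return False

if __name__ == "__main__":
    for link in ("https://example.com/owners-manual.pdf",
                 "http://coupons-free-fastfood.example/promo"):
        print(link, "->", "block" if is_suspicious_url(link) else "allow")
```

In practice such a check would sit behind live threat-intelligence feeds inside the IVI’s browser or network stack, but the shape of the check is the same.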

Investing in cybersecurity at the design stage, rather than after breaches, will ultimately prove less expensive and more effective at avoiding or mitigating the serious crimes that the world’s most savvy and ambitious criminals can commit with compromised personal data, including theft of money, vehicles, and identities.

Read the original article HERE

Using technology to help find missing children

September 8, 2023 | Fox 5 Atlanta | By Denise Dillon |

ATLANTA – The National Center for Missing and Exploited Children has a new way to alert people to be on the lookout for missing kids in their area.

The organization has helped law enforcement across the country find more than 450,000 missing children since 1994. One thing they do for every missing child is push the child’s photo out to the public as quickly as possible. They’ve done this through posters, billboards, social media, and now QR codes.

“It puts the most recent missing child images literally in the palm of your hand,” said John Bischoff with the National Center for Missing and Exploited Children.

Just by scanning the QR code with your cell phone, you’ll be able to see photos of all the missing children in your area.

“These kids are out there. They need our help. It just takes the right person to help them out,” said Bischoff.

Bischoff says on any given day there are about 7,000 missing children in the U.S.

“It’s a scary number. These are kids where their families don’t know their whereabouts,” said Bischoff.

Bischoff understands the importance of putting out pictures of these children, in hopes that someone sees them or has information about them.

“We’d send out loads of printed posters just to keep engaged with the community to remind them this child is missing in your area,” said Bischoff.

The QR code gets the images out faster and lets you see images of missing children within 50 miles of your location.

I did it from Marietta, and the images of 51 missing children popped up with their names, ages, how long they’d been missing, and the last place they were seen. By clicking on a photo, you can easily submit a tip or share the image.

“We know someone is going to use this QR code. We know they’re going to recognize something. All it takes is one set of eyes to be a hero,” said Bischoff.

The QR code was launched just a couple of weeks ago. Now the National Center for Missing and Exploited Children is working on partnerships to get the QR code on water bottles, chip bags, storefronts, anywhere they can.

You can scan the QR code at: https://www.missingkids.org/blog/2023/new-posters-qr-code
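For readers curious about the mechanics, the QR code itself is simply an encoded URL that a phone’s camera app opens in the browser. Here is a minimal, purely illustrative sketch using the open-source Python qrcode package; this is an assumption for illustration, not NCMEC’s actual tooling, and the output filename is arbitrary.

```python
# Illustrative only; requires the open-source packages: pip install qrcode pillow
import qrcode

# Encode the NCMEC poster page linked above; scanning the resulting image
# with a phone camera opens that URL in the browser.
POSTER_PAGE = "https://www.missingkids.org/blog/2023/new-posters-qr-code"

img = qrcode.make(POSTER_PAGE)          # build the QR code image
img.save("missing-kids-poster-qr.png")  # arbitrary illustrative filename
print("Saved QR code pointing to", POSTER_PAGE)
```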

Read the original article HERE

Meta plans to roll out default end-to-end encryption for Messenger by the end of the year

August 23, 2023 | By Ivan Mehta | TechCrunch |

Meta said today that the company plans to enable end-to-end encryption by default for Messenger by the end of this year. The tech giant is also expanding its test of end-to-end encryption features to “millions more people’s chats.”

The company has been building end-to-end encryption features in Messenger for years now. However, most of them have been optional or experimental. In 2016, Meta started rolling out end-to-end encryption protection through a “secret conversations” mode. In 2021, it introduced such an option for voice and video calls on the app. The company made a similar move to provide an end-to-end encryption option for group chats and calls in January 2022. In August 2022, Meta started testing end-to-end encryption for individual chats.

There is increasing pressure on Meta to enable end-to-end encryption so the company or others can’t access users’ chat messages. Protecting individual communications has become more important since a girl and her mother in Nebraska pleaded guilty in July to abortion-related charges in a case built partly on DMs that Meta handed over to police. Last year, police prosecuted the 17-year-old based on Messenger direct-message data provided by Meta, soon after the Supreme Court overturned Roe v. Wade, the 1973 decision that had made abortion legal.

In a letter to the digital rights advocacy group Fight for the Future (via The Verge) this month, Meta’s deputy privacy officer Rob Sherman said that the company will roll out end-to-end encryption to Instagram DMs after the Messenger rollout. He also mentioned that “the testing phase has ended up being longer than we anticipated” because of engineering challenges.

In a blog post, the company explained that there were significant challenges in building out encryption features for Messenger. The company had to shed the old server architecture and build a new way for people to manage their chat history through protections like a PIN.

Meta added that it had to rebuild over 100 features, like showing link previews in conversations, to accommodate end-to-end encryption. The company’s popular messaging app WhatsApp has had end-to-end encryption for years, and in recent years it has figured out a way to support multiple devices for one account without breaking encryption. Meta said that the Messenger team is learning lessons from WhatsApp to implement end-to-end encryption.
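Whatever the specific protocol, the core property of end-to-end encryption is that only the two endpoints hold the private keys, so a relaying server stores and forwards nothing but ciphertext. Here is a minimal conceptual sketch using the open-source PyNaCl library (libsodium bindings); it is not Meta’s implementation, and the names and message are illustrative.

```python
# Conceptual sketch only (pip install pynacl); this is not Meta's protocol,
# just an illustration of the end-to-end property described above.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message that only Bob's private key can open.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"See you at 7?")

# A relaying server would store and forward `ciphertext`; without a
# private key it cannot read the message.

# Bob decrypts using his private key and Alice's public key.
receiver_box = Box(bob_key, alice_key.public_key)
print("decrypted:", receiver_box.decrypt(ciphertext).decode())
```

This also hints at why features such as link previews had to be rebuilt: a server that only ever sees ciphertext can no longer generate previews of message contents on users’ behalf.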

After the Nebraska case, multiple organizations, including Amnesty International, Access Now, and Fight for the Future, wrote a petition asking Meta and other platforms to enable end-to-end encryption for private chats.

Authorities around the world have been exploring rules that could put encryption in messaging apps at risk. While Meta has pushed back on these proposals through WhatsApp in defense of end-to-end encryption, it has yet to fully build out these protections for Messenger and Instagram DMs.

Read the original article HERE.

Cellebrite asks cops to keep its phone hacking tech ‘hush hush’

August 19, 2023 | By Lorenzo Franceschi-Bicchierai | TechCrunch |

The phone hacking tech company asks law enforcement users to keep the use of its technology as secret as possible

For years, cops and other government authorities all over the world have been using phone hacking technology provided by Cellebrite to unlock phones and obtain the data within. And the company has been keen on keeping the use of its technology “hush hush.”

As part of the deal with government agencies, Cellebrite asks users to keep its tech — and the fact that they used it — secret, TechCrunch has learned. This request concerns legal experts who argue that powerful technology like the one Cellebrite builds and sells, and how it gets used by law enforcement agencies, ought to be public and scrutinized.

In a leaked training video for law enforcement customers that was obtained by TechCrunch, a senior Cellebrite employee tells customers that “ultimately, you’ve extracted the data, it’s the data that solves the crime, how you got in, let’s try to keep that as hush hush as possible.”

“We don’t really want any techniques to leak in court through disclosure practices, or you know, ultimately in testimony, when you are sitting in the stand, producing all this evidence and discussing how you got into the phone,” the employee, who we are not naming, says in the video.

For legal experts, this kind of request is troubling because authorities need to be transparent in order for a judge to authorize searches, or to authorize the use of certain data and evidence in court. Secrecy, the experts argue, hurts the rights of defendants, and ultimately the rights of the public.

“The results these super-secretive products spit out are used in court to try to prove whether someone is guilty of a crime,” Riana Pfefferkorn, a research scholar at Stanford University’s Internet Observatory, told TechCrunch. “The accused (whether through their lawyers or through an expert) must have the ability to fully understand how Cellebrite devices work, examine them and determine whether they functioned properly or contained flaws that might have affected the results.”

“And anyone testifying about those products under oath must not hide important information that could help exonerate a criminal defendant solely to protect the business interests of some company,” said Pfefferkorn.

Hanni Fakhoury, a criminal defense attorney who has studied surveillance technology for years, told TechCrunch that “the reason why that stuff needs to be disclosed, is the defense needs to be able to figure out ‘was there a legal problem in how this evidence was obtained? Do I have the ability to challenge that?’”

The Cellebrite employee claims in the video that disclosing the use of its technology could help criminals and make the lives of law enforcement agencies harder.

“It’s super important to keep all these capabilities as protected as possible, because ultimately leakage can be harmful to the entire law enforcement community globally,” the Cellebrite employee says in the video. “We want to ensure that widespread knowledge of these capabilities does not spread. And if the bad guys find out how we’re getting into a device, or that we’re able to decrypt a particular encrypted messaging app, while they might move on to something much, much more difficult or impossible to overcome, we definitely don’t want that.”

Cellebrite spokesperson Victor Cooper said in an email to TechCrunch that the company “is committed to support ethical law enforcement. Our tools are designed for lawful use, with the utmost respect for the chain of custody and judicial process.”

“We do not advise our customers to act in contravention with any law, legal requirements or other forensics standards,” the spokesperson said. “While we continue protecting and expect users of our tools to respect our trade secrets and other proprietary and confidential information, we also permanently continue developing our training and other published materials for the purpose of identifying statements which could be improperly interpreted by listeners, and in this respect, we thank you for bringing this to our attention.”

When asked whether Cellebrite would change the content of its training, the spokesperson did not respond.

The Electronic Frontier Foundation’s senior staff attorney Saira Hussain and senior staff technologist Cooper Quintin told TechCrunch in an email that “Cellebrite is helping create a world where authoritarian countries, criminal groups, and cyber-mercenaries also are able to exploit these vulnerable devices and commit crimes, silence opposition, and invade people’s privacy.”

Cellebrite is not the first company to ask its customers to keep its technology secret.

For years, government contractor Harris Corporation made law enforcement agencies that wanted to use its cellphone surveillance tools, known as stingrays, sign non-disclosure agreements that in some cases suggested dropping cases rather than disclosing what tools the authorities used. These requests go as far back as the mid-2010s but are still in force today.

Here’s the full transcript of the training video…

Continue reading the full article HERE.