Fear, Cybersecurity, and Right to Repair
Massachusetts is the latest state to grapple with Right to Repair legislation. A ballot question in the 2020 election asked the state’s voters to decide whether automobile manufacturers must make the telematics data collected by cars’ on-board computers available to independent repair shops. What seems like a debate over who can access the data generated by a car has become part of a much larger issue with broad implications for cybersecurity.
On one side of the Massachusetts debate is the Coalition for Safe and Secure Data. This group, funded primarily by automobile manufacturers, released a series of television advertisements, at least two of which linked the passage of this ballot question to the possibility of hackers gaining access to vehicle data.
One ad, no longer on the Coalition’s YouTube channel but still viewable in another video critiquing it, depicts “anyone” accessing vehicle data and using it to stalk a victim. Another ad by the Coalition, still available on YouTube, describes foreign government hackers intentionally crashing a vehicle remotely.
We know these types of attacks are possible: in a high-profile 2015 demonstration, security researchers Charlie Miller and Chris Valasek remotely took control of a Jeep Cherokee. Whether they are a realistic threat to the average person is a different question, and deciding what we should do to prevent this scenario requires a much deeper understanding of how hackers steal data and how we can stop them.
If we assume the attacker is not physically breaking into the vehicle or tricking a person into providing access, both plausible methods of attack but not what the ads seem to imply, then the attacker must have remotely hacked into the car’s software. The good news is that being individually targeted by a hacker is not a very realistic scenario for the average person: finding a software flaw and figuring out how to use it to gather data or control something usually requires a high degree of technical skill and a lot of time. The bad news, at least for our stalking victim, is that remote GPS tracking devices are available on the Internet for about $40. A determined stalker would likely find it much faster and easier to buy one of these devices and hide it on the victim’s car.
There may be cases where a skilled attacker with plenty of time and resources, such as the foreign government described in one of the ads, is willing to put in the effort to target a high-profile individual. Another potential scenario, not outlined in any of the ads but also within the realm of possibility, is a criminal organization remotely disabling a fleet of vehicles and holding them for ransom. With these scenarios in mind, we need to consider how a hacker might gain access to a vehicle’s systems and how this issue could be addressed. This takes us into how vulnerabilities are found, reported, and fixed.
Sensitive computer software, including the software in automobiles, should be designed and programmed so that only authorized individuals are allowed access. If a hacker gains access to any sensitive system, whether it’s the computers in a car or an e-commerce website full of credit card numbers, it means there is an underlying security flaw in the authorization process that needs to be fixed. The flaw could be an oversight in the design of the software or a programming mistake that allows a hacker to bypass an otherwise well-designed system. Either way, we’ll call it a “vulnerability” in the software, which an attacker can find and “exploit” to access the sensitive system.
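To make that distinction concrete, here is a minimal sketch, in Python, of the second kind of flaw: a programming mistake that undermines an otherwise sound authorization design. Everything here (the data, the function names, the telematics records) is invented for illustration and not drawn from any real vehicle system.

```python
# A minimal, hypothetical sketch of an authorization bug, sometimes
# called an "insecure direct object reference." All names and data
# are invented for illustration; no real telematics API is shown here.

TELEMATICS = {
    "vehicle-001": {"owner": "alice", "gps": (42.36, -71.06)},
    "vehicle-002": {"owner": "bob", "gps": (40.71, -74.01)},
}

def get_location_vulnerable(session_user: str, vehicle_id: str):
    """The design says: authenticate the user, then authorize the request.
    The programmer authenticated the session but skipped the second step,
    so any logged-in user can read any vehicle's location."""
    return TELEMATICS[vehicle_id]["gps"]

def get_location_fixed(session_user: str, vehicle_id: str):
    """The fix: verify that this user is authorized for this vehicle."""
    record = TELEMATICS[vehicle_id]
    if record["owner"] != session_user:
        raise PermissionError("not authorized for this vehicle")
    return record["gps"]

# An attacker logged in as "bob" exploits the bug to track alice:
print(get_location_vulnerable("bob", "vehicle-001"))  # leaks (42.36, -71.06)
```

A design oversight, by contrast, would be a system whose specification never required the ownership check in the first place; no amount of careful programming fixes that.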
There are generally three categories of people who search for vulnerabilities in software:
- Criminals who are part of an economy of stealing and ransoming valuable data;
- Intelligence agencies who want to spy on targets of national interest or protect their own systems from their foreign counterparts; and
- Security researchers who want to fix vulnerabilities so criminals and hostile intelligence agencies can’t exploit them.
Dealing with vulnerabilities is essentially a race. The “Bad Folks” (criminals and hostile intelligence agencies) are constantly searching for vulnerabilities they can use to access data, while the “Good Folks” (security researchers and friendly intelligence agencies) are trying to find those vulnerabilities first and get the software developers to release a fix so attacks can be stopped or prevented.
Many software developers are receptive to reports of vulnerabilities in their products. In the best cases, software developers will work with security researchers so that a fix can be released in a timely manner. These developers may also offer “bug bounties” so that researchers receive compensation and recognition for their efforts. Tesla is an example of an automaker that has long had a bug bounty program and is known for quickly fixing flaws in its software.
On the other hand, some software developers have used legal threats to prevent or punish researchers for disclosing the existence of vulnerabilities and delayed or outright refused to release fixes for legitimate problems. This lets the developer avoid the time, expense, disruption, and potential embarrassment of acknowledging a vulnerability and releasing a fix, and often leaves the public vulnerable for longer than necessary. When a software developer conceals a vulnerability by silencing a security researcher, it does not prevent that vulnerability from being independently discovered and exploited by criminals or hostile intelligence agencies.
Right to Repair laws, along with copyright laws and computer crime laws, factor into this cat-and-mouse game between researchers and hackers by encouraging or discouraging the work of security researchers who are trying to find vulnerabilities and get them fixed. They also shape the balance between researchers who want vulnerabilities fixed and software developers who would rather conceal their existence entirely.
Civil and criminal cybercrime laws that can be interpreted as penalizing security research, as poorly written legislation sometimes can, make it easier for companies to conceal vulnerabilities through legal threats. So does the absence of laws explicitly allowing research: the chilling effect deters researchers who are unwilling to operate in a legal grey area. Conversely, regulations that allow broader access to data, such as research exceptions to existing cybercrime laws, encourage security researchers to find and report vulnerabilities to software developers. They are no longer operating in a legal grey area, and they may gain access to specifications that help them more quickly understand the systems they are analyzing.
This brings us back to the debate over Right to Repair legislation. Vulnerabilities that can be found and exploited by hackers already exist in automobiles and all of the other computerized electronics we use every day, with or without Right to Repair laws. Restricting access to these systems will only dissuade legitimate security researchers. If a vulnerability exists, a skilled attacker intent on committing a crime will still be able to find it regardless of the law.
Legally mandating broader access will allow security researchers to find these vulnerabilities more easily and try to get them addressed before they are exploited to steal or damage sensitive data. It will also limit the ability of companies that would rather ignore flaws in their software to silence legitimate vulnerability reports through intimidation.
There may be plenty of valid arguments for or against Right to Repair legislation. Auto manufacturers may be trying to limit access to their systems to boost profits by monopolizing the data, in turn boosting their dealer repair shops and preventing competitors from figuring out how their systems work. On the other hand, providing access to third parties may create legitimate privacy concerns as independent repair shops seek to monetize the data that auto manufacturers already access. Spreading fear about the remote possibility of stalkers hacking into cars is not a constructive part of that conversation. Opening access to security researchers should reduce this risk by increasing the number of vulnerabilities that are found and reported before they are exploited.
If a company is concerned about the number of vulnerabilities that would be revealed by allowing broader access to the systems it created, perhaps that is an indicator that security wasn’t properly addressed when the system was developed. Restricting access by an authorized party (i.e., an independent repair shop) will not solve the underlying problem. The security flaws in the software need to be found and addressed, just as a mechanical defect in the automobile would be addressed through a recall.
It would behoove a company that has doubts about the security of its software to hire skilled researchers or create a bug bounty program to help find its vulnerabilities before its real adversaries do.