This question is inspired from reading this article at the Huffington Post, where a US senator is very concerned about security issues in driverless cars. http://www.huffingtonpost.com/2013/05/17/driverless-car-hack_n_3292748.html.
Is the security technology at a sufficiently mature level now that the senator's security concerns are, for the most part, unreasonable?
There's quite a threat evolving for cars in general. Right now it isn't really financially interesting to attack vehicles, but this may change in the future. Besides that, there's always the risk of attacks designed to disrupt traffic (or crash a vehicle, in the extreme case). However, it should be remarked that this isn't specific to AI-controlled vehicles: it also applies to other connectivity like Bluetooth, Wi-Fi, tire pressure sensors and pretty much anything that communicates through the car's CAN bus. For examples, see the following resources, which include IEEE S&P and USENIX SEC publications:
http://www.autosec.org/
http://static.usenix.org/event/sec10/tech/full_papers/Rouf.pdf
Once the attacker controls the CAN bus, it seems extremely roundabout to attack the AI system instead of the engine or the brakes, if you want to do real damage. Yes, the AI system should be protected from attacks, but it isn't exactly the most interesting target. For driverless cars, the insurance question is much more interesting -- who is liable when the car crashes? In any case, I would say that current non-regulated use of driverless cars is not a good idea, also because the technology is still maturing.
There are some examples of hacking into a car through the entertainment system, and of hijacking medical equipment (insulin pumps, etc.), so there is a real issue there, but so far there doesn't seem to be enough reward compared to the effort. (One exception: apparently there is a big business in Europe of people hacking into their own cars to change the mileage and driving history after they have driven their cars to ruin, so they can take advantage of the warranties to get new parts.)
I don't have access to the relevant references just now, but I can find them later on.
The motive for hacking is often, but not always, financial. Imagine Anonymous targeting a political figure they disagree with. Or perhaps thousands of cars with components made by a foreign power that degrade the performance of the car enough to quietly cost the country millions of dollars.
Thanks for the feedback, Christopher & Rens. If you'd like to see some of the different known ways to hack into a car, check out this article: http://www.caranddriver.com/features/can-your-car-be-hacked-feature. The key fob hack, in particular, could be done by a 14-year-old very easily.
This said, I agree with both of you that the risk is probably low at the present time. But with an AI mind, rather than a human mind, controlling the car in the future, the vulnerability would probably be a lot greater. There would likely be more access points to break into the car. A hacker, for example, could try to inject false data via the car sensors/networks in a way that the AI believes is true, causing the AI to make bad decisions and putting people or cargo in danger.
Hard security measures, such as encryption and authentication, may be able to provide reasonable defenses against this type of vulnerability; but the weakness in these types of security measures lies in the fact that after you get past a hard security event (e.g., decrypt a message, enter a password), there are often no additional security measures in place to prevent someone from misusing the system.
I am personally working on soft security methods to address hard security weaknesses to ensure system behaviors stay within acceptable limits and expectations. My work is specific to trust algorithms at this time.
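To make the hard/soft split concrete, here is a minimal sketch (the message format, shared key and thresholds are all invented for the example, and this is not the trust algorithm mentioned above): a gateway first verifies a MAC, and then still applies a plausibility check, so even a correctly authenticated command can be rejected if it falls outside expected behavior.

```python
import hmac, hashlib, json

SHARED_KEY = b"demo-key"  # hypothetical pre-shared key, for the sketch only

def authenticated(message: bytes, tag: bytes) -> bool:
    """Hard security: reject messages whose MAC does not verify."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def plausible(cmd: dict, last_speed_kmh: float) -> bool:
    """Soft security: even an authenticated command must stay within
    physically reasonable limits (thresholds invented for the sketch)."""
    target = cmd.get("target_speed_kmh", -1.0)
    if not 0.0 <= target <= 130.0:
        return False
    return abs(target - last_speed_kmh) <= 30.0  # no implausible jumps

def handle(message: bytes, tag: bytes, last_speed_kmh: float) -> None:
    if not authenticated(message, tag):
        return  # hard security gate: silently drop forged messages
    cmd = json.loads(message)
    if plausible(cmd, last_speed_kmh):
        print("accept", cmd)
    else:
        print("authenticated but implausible, reject", cmd)

msg = json.dumps({"target_speed_kmh": 120}).encode()
tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
handle(msg, tag, last_speed_kmh=60.0)  # rejected: 60 -> 120 is too big a jump
```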
The most menacing attacks on cars require a physical connection, as the telematics (like OnStar) are on a different data bus and the gateway can filter any rogue messages out. But if you are connected to the CAN bus directly and are knowledgeable enough to send the right message at the right frequency, then you might be able to do some damage. The requirement of a physical connection, however, rules out your Anonymous-type hackers. As one of your sources suggests, it might not be impossible, but many things make it harder than hacking your PC. By the way, the runaway scenario is not so menacing today because, since the Toyota problem, most automatic-transmission cars have a software feature that disables acceleration when your foot is on the brake pedal.
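For a sense of why direct bus access is so dangerous, here is a hedged sketch using the python-can library. Classical CAN frames carry no sender authentication, so any node on the bus can emit any ID; the channel name, arbitration ID and payload below are made up, and real IDs and signal encodings are platform specific.

```python
import can  # pip install python-can

# Once you have bus access, "sending the right message" is just this:
# nothing in the classical CAN protocol distinguishes this node from
# the legitimate sender of the same arbitration ID.
bus = can.interface.Bus(channel="can0", interface="socketcan")

frame = can.Message(
    arbitration_id=0x123,            # hypothetical ECU message ID
    data=[0x01, 0x00, 0x00, 0x00],   # hypothetical signal payload
    is_extended_id=False,
)

# Receiving ECUs accept the frame purely on its ID.
bus.send(frame)
```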
Taking control of driverless cars will probably be most interesting for various kinds of terrorists, because of the possibility of massive damage and injuries, especially on highways. It can be much more effective than home-made bombs. It will become more attractive with the increasing number of driverless cars...
Rodrigo: have a look at the USENIX paper I posted. There, the (wireless) tire pressure meter is used to gain access to the CAN bus. Similarly, the autosec guys have used bluetooth (although, if memory serves, they needed physical access beforehand).
As for claims about "terrorism" and such -- I am very hesitant to resort to this kind of argument. Terrorism is already widely exploited as an excuse to enforce questionable laws; the last thing we need is security research motivated by preventing terrorism. Terrorism is a vague concept of some irrational attacker with (virtually) unlimited resources. It requires us to consider insider attackers at all levels, and it requires us to consider social engineering as an attack vector. Under those conditions, it is extremely difficult to provide a usable system -- the requirements make sense for high-security locations, but not for each and every vehicle. There's also the perspective of cost, as car companies are not willing to invest large amounts of money in car security without any visible gain. Building a cheap, usable, backwards-compatible and highly secure system* is not feasible.
To be clear: I'm not saying we should not do security research, or that we should never consider terrorism. However, we should keep in mind that our end result should provide a level of safety and security that is higher than the current one. The goal is not to build a perfect system. We don't need to defend against an aircraft crashing into the vehicle we're designing. We don't need to defend against some maniac crashing his car into everyone on the highway. We need to build secure communication protocols and properly compartmentalize the ECUs in a vehicle. We need to design a resilient AI, which operates on verified information.
(* system here refers to the entire transportation network, not to an individual vehicle)
Rens: TPMS tampering to gain access to the CAN network is terribly platform specific, and would require prior knowledge and experience to achieve. Again, not your run-of-the-mill hacker.
True, but it doesn't require the physical connection, which would make the attack (much) more difficult. The thing about software attacks is that they can be wrapped in a user interface and distributed or sold to people that can then execute the attack. Attacks from guys like Anonymous and Lulzsec mostly function using this paradigm -- see the LOIC tool [1], for example. Similarly, we have tools like metasploit, which allow both us security researchers and attackers to rapidly deploy software attacks. It only takes one guy to publish the code, and there is plenty of financial incentive to sell attacks in the current Internet. There is no reason to assume that this will somehow be different for either in-vehicle networks or vehicular networks. Of course, we're talking (mostly) about future systems here, so we can learn from what has gone wrong in the past, hopefully leading to more secure systems.
[1] there's a whole series of them on sourceforge, see e.g. http://sourceforge.net/projects/loic/
I mentioned the terrorists without any intention that we secure all vehicles against such a threat. It's just a potential one. The efforts (in particular scientific and financial) to fight such threats should be proportional to the level of real hazard. So nowadays it is insignificant, but not non-existent.
Why address only the negative aspects of poor security features in vehicle systems? I think you could also have a look at the advantages:
Ever thought of pimping your car software-wise? I think it is just a matter of time until some sort of hacker garage offers neat extra vehicle functions. The existing big players in the auto industry are quite conservative - they have a lot to lose. Therefore, it should not be surprising if extra functionalities and gimmicks like smartphone remote control or lane-keeping pilots are offered by small companies or even garages. That, of course, is only possible if the hardware is already onboard and can be hacked without too much effort.
Interesting point Thomas. That said, I'm sure that if someone chooses to modify their vehicle with their own software to control primary functions, it would likely void some warranty or even expose the owner to liability in the event the vehicle is involved in an accident. Companies selling these after-market software updates would likely need to go through some safety certification process to protect themselves and vehicle owners from litigation.
Regarding the "terrorism" discussion, I think considering the possibility of a terrorist attack as an extreme case is useful in hardening the underlying security mechanisms in a driverless vehicle. There may be some general solutions that cover extreme cases elegantly without impacting solutions to more common security issues.
Thomas,
a driverless car is not a satellite receiver or wireless router into which you can load hacked software. The difference is the possibility of hurting someone. I'm pretty sure that nobody hacks the software in aeroplanes, simply because it is too dangerous.
By the way, taking control of an unmanned platform similar to the one we are talking about happened in Georgia (the country, not the US state :) ) during the war with Russia. Russian forces remotely took control of a Georgian military Unmanned Aerial Vehicle. So it is possible (in this case probably with a little help from the UAV's Israeli producer).
@ Dariusz: Well, I am sure that you will lose all your warranties in case you pimp your car software-wise. Btw, the same applies today if you get your car tuned engine-wise because you would like to have some extra horse power. And engine tuning is already often done by modifying or changing the engine control unit.
@ Artur: Maybe my description gave the wrong impression. I am not talking about reckless hackers who fiddle around with your car electronics for an extra dollar. I am talking about people who make a business out of that and therefore make these efforts in good faith and with proper safety testing. Currently people pay a lot of money to get their vehicle tuned "conventionally" (e.g., Mugen, Brabus, RUF) even though they lose the warranty of the OEM. How is that possible? Because these tuning companies have proper quality standards; maybe they even give you a warranty on the changes they made.
My point is: often small companies bring changes to an existing market because they can act much more flexibly and thus take higher risks compared to the huge, well-established players. On the other hand, you as a customer also expect the big companies to offer you perfect quality. If your accelerator pedal gets stuck in a series-production car, you surely think of suing the manufacturer in case of an accident. If you ask a car tuner to modify your accelerator pedal and he tells you up front that he cannot guarantee proper function, and you still ask for the modification, you surely would not be as aggressive about bringing it to court.
In my view until the time a dominant software platform emerges (like an O/S for driverless cars), massive cyberattacks will be difficult. In the Internet world massive cyberattacks succeed by taking advantage of some fundamental weakness in one of two things: a common protocol used by almost everyone or a commonly used "implementation". Most attacks in the Internet world take advantage of weakness in implementations -- bugs in Windows, Linux, bind, IE, Safari. They succeed when the specific implementation exploited dominates the market. Some attacks take advantage of weaknesses in protocols as well -- such as, TCP handshake. But they usually decline over time as protocols become more secure.
In other words, I certainly expect to see exploits for specific makes/models of cars -- someone hacking into model X of a car from company Y. But those attacks will not translate to massive attacks. These niche attacks could still provide some underworld business -- say, ransom from a high-value person trapped in a car (similar to the Russians' hijacking of Georgia's UAVs). They will be a nuisance, but they will not bring society to a standstill. In current security terminology, they may be called "targeted persistent attacks", where an attacker studies the specific make of a vehicle driven by his/her target and crafts a very specific attack. Such attacks can be very expensive and would need to be well worth the financial costs.
I do see an avenue where there may be a need for strengthening protocols. Driverless cars are very likely to communicate with other cars and with the infrastructure. Ensuring that this communication is secure and trusted is likely to be a challenge. The current method of secure communication relies on cryptographic signatures. It requires a pretty significant infrastructure (PKI) to verify that the messages are signed by the person/entity who claims to have signed them. I do not know whether the current infrastructure can provide real-time verification of signatures. So if you are driving through a new location, and your car receives information that there is a detour ahead, can it verify in real time that the message can be trusted? Do you trust the message and find a new path, or do you keep going and verify whether there is really a broken bridge ahead?
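As a toy illustration of the verification step such an infrastructure would have to support in real time, here is a sketch using Python's cryptography package. The message content is invented, and the sketch assumes the signer's public key is already cached in the vehicle; distributing, chaining and revoking those keys is exactly the PKI problem described above.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Road operator's key pair; in a real deployment only the public half
# would reach the vehicle, via certificates chained to a PKI root.
operator_key = ec.generate_private_key(ec.SECP256R1())
operator_pub = operator_key.public_key()

message = b"DETOUR: bridge closed ahead, use exit 23"  # hypothetical payload
signature = operator_key.sign(message, ec.ECDSA(hashes.SHA256()))

def trusted(msg: bytes, sig: bytes) -> bool:
    """Verify the signature against the cached operator public key.
    The open question in the text is everything else: certificate-chain
    validation and revocation checks at highway speeds."""
    try:
        operator_pub.verify(sig, msg, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(trusted(message, signature))              # True
print(trusted(b"tampered message", signature))  # False
```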
Having said that, I think the scenario I have outlined is conceivable, but it makes the false assumption that driverless cars will operate without any human supervision. It also assumes that all communication will be only via electronic means, and that flashing yellow lights and signs will be history.
In reality though you can expect human in the loop -- a la computer assisted driving. Something like cruise control. You'd be able to turn the AI on and off.
So to address the original question -- I think that existing security technology is sufficient to ensure secure communication between components of the same vehicle -- such as between the steering wheel and the tires. However, I am not sure if the technology exists to ensure secure real-time communications between arbitrary actors, especially actors that are moving around at significant speeds.
This issue is very interesting and corresponds closely with modern driver-assistance systems. IMHO the AI in driverless cars will not, in its childhood, rely on electronic communication, but will use the standard human interface like flashing lights and road-sign recognition (similar systems are in use now). Such an approach is much more reliable and resistant to attacks.
Maybe later an electronic communication channel with the road infrastructure will be used in parallel (a source of information that is reliable and easy to verify, on the same basis as PKI works). Only a few authorities will be allowed to sign and send such messages. It can work safely.
The last (and nowadays sci-fi) stage will be direct real-time communication with other vehicles - information much less reliable, difficult to verify, and potentially very dangerous.
Imagine a safety system sending messages from car to car about the activation of emergency braking. Potentially it can protect against collisions, but sending such a message as a fake, with the subsequent activation of full braking in normal highway traffic - a horrible scenario...
Real-time communication between vehicles is not science fiction. Both in Europe and America, the first standardization projects are coming to an end. Day-one deployment for these systems is supposed to arrive in high-end vehicles this decade, according to the last discussions I had with other researchers who are more closely involved in standardization.
OK, maybe sci-fi is not a good word; standardization work is necessary. To establish communication, reliable protocols must be defined. But given the topic we are talking about, how much faith can we have in such information coming from another car? Security, verification, and selecting which information we can trust are probably big troubles for such standards. The problem is less serious if it's only informational activity for a human driver, who can filter the information himself (probably this is the purpose of the systems you mentioned), but for a driverless car it's a completely different issue. That is why I used the word "sci-fi".
Fair enough. From my perspective, a fully driverless car is further in the future than V2V communication, but I'm rather biased as the latter is my research topic.
With respect to protocols: the current state (at least for ETSI, the European standardization organisation) is that two message types are finalized: the CAM (cooperative awareness message) and the DENM (decentralized environmental notification message). Further types are under development (which I don't recall off the top of my head), but they are mainly sent by infrastructure. Messages are protected by signatures using changing pseudonyms (for privacy), which are issued by a special PKI (the Car-to-Car Communication Consortium is performing a one-year pilot run of this PKI in the near future). The message contents are mostly strictly defined and informative, rather than instructive. The idea is (at least in my research) that vehicles can use sensors and information from other vehicles to verify data. Presenting data to the user is more or less an open problem, though, as far as I am aware.
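To illustrate the "use sensors to verify data" idea, a minimal sketch follows. The simplified CAM fields, the radar interface and the tolerance values are all invented for the example; this is not the ETSI message layout.

```python
from dataclasses import dataclass

@dataclass
class CAM:  # simplified stand-in for a Cooperative Awareness Message
    sender_id: int
    distance_m: float   # distance derived from the sender's claimed position
    speed_mps: float    # the sender's claimed speed

def consistent(cam: CAM, radar_distance_m: float, radar_speed_mps: float) -> bool:
    """Cross-check the claim against the local radar track of the same
    object. Tolerances are invented; real ones depend on sensor noise."""
    return (abs(cam.distance_m - radar_distance_m) < 5.0 and
            abs(cam.speed_mps - radar_speed_mps) < 2.0)

cam = CAM(sender_id=42, distance_m=80.0, speed_mps=33.0)
if consistent(cam, radar_distance_m=15.0, radar_speed_mps=32.5):
    print("claim corroborated, raise trust in sender")
else:
    # Signature may be valid, yet the content contradicts our own sensors:
    print("claim inconsistent with radar, lower trust / ignore message")
```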
Yikes!
Re: "Is the security technology at a sufficiently mature level now that the senator's security concerns are, for the most part, unreasonable?"
No.
There is not a whisper of a doubt that security technology is not even close to that mature, certainly not as currently implemented. Your daily/weekly Windows, Java and Adobe Security updates should be ample proof of that.
It seems as if the majority here think there is no appreciable vulnerability. They are dead wrong. You will see all manner of nasty security breaches with these things before they begin in earnest to secure them.
My research into 'data packaging' involves security. Although I am not a security expert, I know enough to see how fundamental systems are open to security breaches. If I can see those flaws, you can bet professional crackers already have tools to exploit those weaknesses.
Security infrastructure is woefully inadequate all around and I do not exclude myself from that assessment. It is difficult to secure systems against well armed attackers. If there is a prize like being able to tell a car to drive itself to a chop shop and present itself for dis-assembly and sale as parts, you can bet attackers will present themselves. More worrying to me is that the cars can be used as weapons.
I am a software developer, and one of the reasons security is not deeply embedded into systems is that it is cheaper to temporarily tear it out during development so you don't spend your life debugging problems with security. Think about all the ridiculous bugs you have seen over the years. Trust me, if Microsoft can't even fix a message that says 'save' when it should say 'read' in its *backup* software for Server 2008 R2, you can bet they can't lock down systems, untested, after the fact. Again, I refer to the constant security updates. Fixing a single buffer overflow in a program does not fix a program so badly constructed that it has buffer overflows in the first place.
Unfortunately, that means:
1) Working programmers do not know much about security.
2) Crackers know *much* more about cracking than ordinary working programmers. Programmers who do not know much about security cannot secure a system even when they try.
3) Security is never tested nearly enough.
There are a million or more people involved in developing software in North America. When was the last time you saw a job posting for 'cracker'? If you want to test if your software is proof against crackers you have to test with at least one of them taking a run at it.
Many years ago, I designed and wrote the secure remote dial-in (sic -- long ago) software for one of our banks here in Canada. It was used by IT and senior executives to log right into the banking production system. It had to be secure. One of the first things I did was consult a cracker with respect to the design and implementation of the system.
The system protected a billion dollar banking system for six years without a breach.
If you do not even realistically *try* to secure your software against attack, what are the odds it will be secure against attack? In my not so uninformed opinion, the odds are very, very small.
Security is dependent on scenarios, and we do not know enough about the scope. However, from a general computer architecture perspective, we (almost) do not have hardware space limitations in a vehicle. Thus most of the control can be hardwired, leaving only a limited part to software. That way we can make it more secure. Software is easier to hack; hardware is more reliable.
Well, it took a few hours, but here's some news on that front:
http://www.theregister.co.uk/2013/06/06/electronics_skeleton_key_has_police_stumped/
Jiwan:
Most car manufacturers I know about have a pen-testing lab. They typically manage to attack the CAN bus and the connected ECUs without too much trouble, even in a black-box testing scenario. For example, fuzzing is a pretty useful technique. YouTube has numerous videos of people who steal cars by reprogramming them to accept a new (physical) key.
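For intuition, black-box fuzzing of a CAN bus can be as crude as the following sketch (python-can again; the channel, ID range and pacing are arbitrary, and this belongs on a bench network, never a real vehicle):

```python
import os, random, time
import can  # pip install python-can

# Black-box fuzzing sketch: spray random IDs and payloads and watch how
# the ECUs on a *test bench* network react. Channel name is hypothetical.
bus = can.interface.Bus(channel="vcan0", interface="socketcan")

for _ in range(1000):
    bus.send(can.Message(
        arbitration_id=random.randrange(0x800),  # random 11-bit ID
        data=os.urandom(random.randint(0, 8)),   # random payload, 0-8 bytes
        is_extended_id=False,
    ))
    time.sleep(0.01)  # pace the frames so the bus isn't saturated
```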
The idea that hardware is more secure than software is largely an illusion. Yes, in theory one could build a more secure system by physically separating the CAN into separate networks.
Currently, this seems to be "achieved" through a gateway, which is supposed to contain a firewall. However, this gateway can be accessed through the OBD-II port, and debug modes allow reading out every network (see also the discussion in Bob's post). It's probably not going away either; I'd speculate that it is mainly used to test ECU functionality, based on some hearsay from discussions with researchers in the area.
The issue isn't off in some distant future with driverless cars - it's here today. Cars can already be "driven" outside of the driver's input using current software and hardware (electronic throttle control, electronic stability control, etc.).
There are commercial businesses that routinely hack auto manufacturers' CAN and ECU security to provide non-factory-approved calibrations to customers. I saw a demonstration by one tuning company of completely remapped fuel, ignition and cam-phasing calibration on a recent Bosch MEDC-series controller, done to support an aftermarket turbocharging conversion. This was done without any factory support and used modified code in a factory ECU that passed checksum and did not throw an OBD-II code. The company had also hacked the body computer and had their corporate logo revolving inside the factory console LCD.
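That a modified image can still "pass checksum" is unsurprising once you see how weak a plain checksum is as an authenticity measure. A hedged sketch (the image layout is invented; real ECU formats differ):

```python
import zlib

# A firmware image "protected" only by a CRC32 appended at the end.
# Checksums detect accidents, not adversaries: after patching, the
# attacker simply recomputes the checksum over the new code.
firmware = bytearray(b"\x90" * 1024)          # stand-in code section
image = bytes(firmware) + zlib.crc32(bytes(firmware)).to_bytes(4, "little")

def valid(img: bytes) -> bool:
    body, stored = img[:-4], int.from_bytes(img[-4:], "little")
    return zlib.crc32(body) == stored

patched = bytearray(firmware)
patched[0:4] = b"\xde\xad\xbe\xef"            # hypothetical code change
hacked = bytes(patched) + zlib.crc32(bytes(patched)).to_bytes(4, "little")

print(valid(image))   # True
print(valid(hacked))  # True: the check passes even though the code changed
```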
Considering the use of ETC and ESC on most, if not all, current light-duty vehicles, this already poses serious implications with respect to vehicle safety. In my experience, some of the hacks used by even well-intentioned tuners (e.g., disassembling machine code and using trial-and-error probing of hardware functionality) are very risky, since there is far from a full understanding of the underlying control algorithms and the interrelations between calibratable parameters for hacks of that sort. Powertrain calibration is tricky enough even with full factory support via software documentation; many of those providing aftermarket "tuning" capability rely on only slightly educated guesswork to ply their trade, and some have very limited understanding of the underlying engineering involved in powertrain control. As you can imagine, beyond safety there are also potential vehicle emissions, durability and warranty implications.
So far I haven't heard of ECU hacks done purely for nefarious reasons, such as introducing a virus to cause a vehicle to go out of control, but if ECU hacking can be done by tuners for commercial hire, the potential is there for it to be done by others or for someone working for a commercial tuner to turn rogue.
I actually just got a call for papers for the CyCAR workshop on the security, privacy, and dependability of cyber-vehicles (http://cycar.trust.cased.de/). If you are interested you should look it up.
@Richard - I believe the motive for someone to hack into a car is kind of a separate issue from the car's security. If it weren't, then one far-out security mechanism could be to bait potential attackers with easier, more lucrative targets to keep them from putting in the energy to attack our vehicles. But I tend to agree with you that vulnerabilities will likely always exist that security technology cannot fully protect against, simply because it wouldn't be practical to do so.
Practicality is an interesting aspect. In your view, is the technical practicality the biggest constraint, or the financial one? In vehicular communication scenarios, at least those being standardized at the moment, the latter seems to be a core issue. In particular, because every message is (planned to be) signed with ECDSA-256, the crypto processor would be a financial constraint.
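A rough way to get a feel for that constraint is to time ECDSA-256 (NIST P-256) in software, as in the sketch below. The numbers you get on a PC are only indicative; an automotive ECU is far slower, which is why a dedicated crypto processor enters the cost discussion. The payload size is a guess.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())  # NIST P-256, as in ECDSA-256
pub = key.public_key()
msg = b"x" * 300  # roughly CAM-sized payload (size is an assumption)

n = 1000
t0 = time.perf_counter()
sigs = [key.sign(msg, ec.ECDSA(hashes.SHA256())) for _ in range(n)]
t1 = time.perf_counter()
for s in sigs:
    pub.verify(s, msg, ec.ECDSA(hashes.SHA256()))
t2 = time.perf_counter()

# At ~10 CAMs/s per neighbour, a car surrounded by dozens of senders must
# verify hundreds of signatures per second; hence dedicated hardware.
print(f"sign:   {n / (t1 - t0):.0f} ops/s")
print(f"verify: {n / (t2 - t1):.0f} ops/s")
```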
I think that, in the end, both autonomous and connected cars will be comparable to GSM: everyone knows it is broken, but no one wants to bear the financial burden of fixing the problem, and this burden is huge due to the massive deployment of the system. I hope I'm wrong -- at least so far, the standardization process is a lot more public.
Although driverless car technology may reduce the chances of accidents, automakers are developing complex systems that allow cars to drive themselves, and all of this rests on Artificial Intelligence. But there is a risk factor, and this technology may be dangerous. It could also be used for terrorist activities: someone who can hack a driverless car may use it to rush into a security zone. A terrorist could put explosives into the car and give it instructions; at the security checkpoint the car will not stop, the security personnel will aim at the driver's seat, but there will be no driver, and in the meantime the car will hit its target.
I've read the article on Sen. John Rockefeller's concern about unmanned cars or AI cars. To comment on issues like this, one needs to be an innovator/researcher with a product actually in use by at least one person or company. Reason: one will be able to see and know the big gap between science and reality. Frankly, if, in Nigeria, I develop and start using an unmanned car that does not have Internet access, GPS, UMTS or GPRS functionality, then there is NO WAY an angered child in Indonesia can hack into it. We cannot eat our cake and still have it. The first step should be to develop unmanned cars that run on systems that can only be modified by wired links, for a start. Then we observe the scenario for a few years. Then we can plan to take the next step of making such cars' systems remotely modifiable. It is very important not to try to JUMP THE GUN.
@Richard:
We live in a different time than when Windows overtook *NIX. In the 1980s (and even 1990s) people still left 'open relays' on their servers as a courtesy. The incidence of bad behavior was low enough back then that convenience for the good guys was worth more than the risk of trouble from the bad guys. Admins who left open relays up were considered mannerly. Eventually, these were abused enough by spammers that it became *unmannerly* to leave them up.
Automobiles are highly useful, big, dangerous and intrinsically valuable. As targets, they are tempting enough to guarantee attack.
Security has to be embedded deep into the DNA of the systems by design and this aspect of the design has to be done with and vetted by security experts. Non-experts have no idea of the challenges involved.
You have to assume that attack can come from anywhere. That includes things, people and organizations you trust. Every component of the system should be on a strict 'need to know' basis for any knowledge about the system. Any authority should be the barest minimum that works.
Murphy's law rules these things. Any avenue that you leave open to attack will eventually be used to attack -- probably sooner than you would think. To some extent, your system was under attack before you even knew you were going to design it.
Unless you are a very accomplished cracker yourself, you have virtually a zero chance of assessing the security of a system. You need to seek expert advice on this sort of thing and I think you would be best to seek more than one expert.
Look at how many security patches are required to keep modern systems even a little secure. Those systems were designed by teams of experts and vetted by other teams of experts. They have been field hardened against actual attack. Despite that, you will be applying patches to those systems next month. If systems carefully crafted by experts fail, you can be certain that systems crafted by non-experts will fail faster, more often and more completely.
[Note: securing systems, by design, during design is significantly more certain than doing it after the fact.]
Modern professional attacks are enormously sophisticated. Using a non-professional to evaluate your security is worse than bringing a knife to a gunfight. It is like attempting to use your bare knuckles against an opponent with nuclear weapons.
If I were involved in the design of such a system for a real production car, I would be inclined to offer prizes for successful attacks. I would start such a program with some low-hanging fruit (security weaknesses you expect a reasonable cracker would find) and let the community know that such things exist as an incentive. For some of the best crackers, this would be free money. It would also quickly identify the dumbest weaknesses you have overlooked and give you plausible deniability if your system has some extreme errors.
I should add:
We live in a time when disclaimers are the norm. To shield manufacturers against liability, instructions on appliances warn against every conceivable danger, no matter how remote.
For most things, the warnings have an unwritten subtext: "it is not very likely, but it is possible that such a danger exists"
My notes above should not be confused with the standard disclaimer. I am not only saying that these breaches are likely. I am saying that your system is guaranteed to be breached unless you take extraordinary measures to secure it.
Most of the original comments were to the effect that security was not much of a problem. It is telling that after my contrary comment that breach was likely it only took a few hours for such a breach to hit the news. The security of our systems and the public's apprehension of that security are a long way apart.
@Richard,
WRT 'need to know', I was not talking about restricting information about APIs, etc. from developers. What I was talking about was that the design of the software and hardware should be done with very strong encapsulation, such that it is not possible for code in one spot to interfere with code in another. We still routinely see software patches for things like buffer over-runs, which should be completely impossible. Something capable of a buffer over-run is insecure by design. What I am saying is that to be secure by design you need to compartmentalize the system at a fine level of detail such that it is effectively not possible to gain any kind of illegal privilege escalation.
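As a small illustration of boundary-level compartmentalization (the frame layout and ID whitelist are invented for the example), here is a parser that hands downstream code only fully validated, fixed-shape records, so malformed input is rejected at the edge instead of reaching any state it could corrupt:

```python
import struct
from typing import NamedTuple, Optional

class Frame(NamedTuple):
    msg_id: int
    value: int

FRAME = struct.Struct("<HI")  # hypothetical wire format: u16 id + u32 value
ALLOWED_IDS = {0x10, 0x11}    # whitelist: everything else is dropped

def parse(raw: bytes) -> Optional[Frame]:
    """Length check first, whitelist second; downstream code never sees
    anything but a well-formed Frame, so there is no path to escalate."""
    if len(raw) != FRAME.size:      # exact length, no trailing bytes
        return None
    msg_id, value = FRAME.unpack(raw)
    if msg_id not in ALLOWED_IDS:
        return None
    return Frame(msg_id, value)

print(parse(FRAME.pack(0x10, 42)))  # Frame(msg_id=16, value=42)
print(parse(b"\x00" * 64))          # None: wrong length, rejected
print(parse(FRAME.pack(0x99, 1)))   # None: unknown id, rejected
```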
I am on the fence as to whether or not existing tools can be trusted to build secure systems. In theory, you could build a perfectly secure system using hand coded machine code. I think I can agree, though, that the current generation of tools as wielded by most developers make the generation of secure systems less *probable* than I would like.
The drive to create provably correct systems is understandable and to some extent tools and techniques exist that work. However, their use is limited until a problem is well enough characterized to be specified correctly. Unfortunately, all of the hard work that requires skilled professional developers occurs *before* the code is in that state. Automated proofs are admirable, but they only represent a tiny portion of the development workload. Until you get to a certain point, you need skilled developers working with a security focused development ethic. You may ultimately bring the problem into focus enough to be amenable to formal proofs, etc, but you sure will not start out that way for any non-trivial problem. Creating workable secure systems for highly automated cars is decidedly non-trivial.
Re: offering prizes for vulnerabilities does not contribute to secure by design
I have been a professional software developer for more than thirty years. I emphatically disagree. Not only will hiring crackers help to create a secure design, it is the only way to create a secure design. There is no other way to have a hope of securing a non-trivial system. You cannot get 100% security test coverage. By definition, the unexpected is not something you expect.
The ways that a complex system like a network connected automobile can and will be attacked are fantastically varied.
In practice, there is a great deal of tension between security and usability. Ideally, you would like a car to effortlessly respond to any command given by its owner and to entirely resist responding in any way to an attacker. Getting a good balance is doable in my opinion but it is not going to be easy. It surely will not be done without a variety of experts involved and security experts are pretty much the only way you can hope to get security. Do you really want to bet the farm on a defensive 'security expert' unable to demonstrate knowledge of attack?
Re: But the last part - "unless you take extraordinary measures" - is not only unlikely to happen given the current vendor trajectory...
Windows has lots of bugs and many have been used to breach security. Many have led to significant losses in the aggregate as programs have crashed and taken people's data with them. You might think, given that dreadful track record, that it would be repeated with driver-less cars. It might be initially, but the agencies responsible for signing off on the safety of automobiles are not likely to allow many large scale thefts, kidnappings and deaths before they clamp down hard. Those same people won't let you have a baby in the car without a child car seat they have specifically approved (here in Canada, anyway).
If they won't allow you to risk injuring children in a car accident, what are the odds they will allow you to risk a car turning into a weapon of mass destruction?
The choice will not be between having a dangerous driver-less car or a safe one at a higher cost with less 'coolness'. It will be between a safe driver-less car and no driver-less car at all.
Consumers will demand driver-less cars as soon as they see it is reasonably possible. The people who approve vehicles will not let a known dangerous one on the road. Manufacturers will either build and sell driver-less cars or they will not sell cars. Soon, the only manufacturers left will be the ones that can build cars that successfully pass safety standards.
Security will soon reach a critical mass that will necessitate much better controls than we have now, across the board. It will soon not be practical for most things to remain unattached to the global network. This will result in a massive increase in the size of attack surfaces for broad classes of things, some of which can become quite dangerous if they are incompetent with respect to security.
I am interested in the emerging driver-less cars because they represent a near ideal leading edge for the automation of the rest of the world. They are large enough, expensive enough and important enough to support very sophisticated on-board systems. They generally travel along routes well served by the global network. There is an excellent business case for a competent driver less car and a supporting infrastructure. Savings in time, fuel, insurance costs, highway maintenance, labor, etc may pay not only for the automated systems but perhaps a big chunk of the car itself. Attaching cars to the network is already well underway.
I expect we will see a couple of nasty security breakdowns before we get serious about securing these systems, but I expect them to happen quickly and I expect the response to be dramatic and sure in favor of rational security.
BTW -- you might think that I am a 'security expert' or 'cracker' promoting myself here. I am neither. System security is an important aspect of the work I do in 'data packaging' and network systems. However, if I were responsible for actually producing prototypes of driver-less vehicles I would hire bona-fide experts rather than attempt to do it myself. I have a better than average knowledge of these things, but that is not saying much. A real expert would probably assess the possibility of attacking systems such as electro-magnetic monitoring and interference. They would be able to render a sound judgment as to the vulnerability of a system. Even with their help, your system will be vulnerable. You cannot make it invulnerable. However, they will help you make every known avenue of attack 'expensive enough' to meet reasonable security needs.
@Richard:
You are very right. Once a technology is out there on the street, you no longer have control of the agenda. People will want not just more and better, but will ask for customizations. Actually, I was talking about maintenance control by remote means, which is what I feel the Internet is all about. In real-life designs, it will be dangerous to allow operational control by remote means. But just as you said, we will have to wait and see what threats arise out of present usage, since Google staff are already using these types of cars; so I read.
@Bob:
NOBODY HOLDS A MONOPOLY OR SECRECY OVER KNOWLEDGE, SKILL, EXPERTISE OR AN APPROACH TO A SYSTEM DESIGN. The concept of a fool-proof secure system is only an IDEAL, not a REALITY. And let us not forget that NECESSITY IS THE MOTHER OF INVENTION. So how can every possible hack approach truly be exhausted by simulation, considering that there must be over ten million systems professionals around the world with different mind-sets? But, frankly, the issue is about driverless car maintenance control by remote means (the Internet). I make bold to say that the Internet IS ONLY SECURE BY AGREEMENT. In reality, IT IS NOT (supporting reference: Internet Insecurity, by Adam Cohen, TIME International Magazine, Sunday, Jun. 24, 2001). Furthermore, a car is a MACHINE that depends on many systems, among them the Automatic Braking System (a fuzzy logic system), the Automatic Gear-Selection System, the Air-Bag System for passenger safety, and much more. So an Operating System is in no way an analogy to a car. Also, security design within an OS and within a car are on two different PATHS. Software bugs and security issues are two different things: one is a symptom of a system flawed by design, though already in use (credits to Microsoft DOS 4.0), and the other is about access control and management in a trusted system. A technical article in PC Magazine, back in 1990, made an interesting observation: good software programs rarely become popular. What an irony: the investor wants the product in the market quickly, while the engineer is concerned about security, safety and performance, features that demand a longer design period. In my opinion, as researchers/developers, we need to let users out there know the stakes. Just as in justice: "You have the right to remain silent. Everything you say will be...".