What Psychology Means for the Future of Self-Driving Cars
Driving is a form of personal expression, which may deter automation.
Posted December 14, 2022 | Reviewed by Devon Frye
- Cars, for those who can afford them, are an extension of one’s personality.
- An individual who chooses to drive recklessly is one thing; programming a robot car to do so is another.
- Human psychology may be an impediment to fully automated self-driving systems.
Short of teleportation, few technological dreams are more appealing than getting from point A to point B in a self-driving car: a vehicle that whisks you away at a moment’s command and grants countless hours of newfound free time en route.
Humans have already figured out the “car” part of the equation, with millions of vehicles on the road today that come in myriad styles and sizes, which can run at very high efficiency and in almost any weather or terrain. The problem is the “self-driving” part.
Engineers have been working on self-driving technology for decades, and with some success. With the help of ubiquitous GPS, airliners are largely computer-controlled while in the air.
For other vehicles, engineers have a great variety of sensors and actuators at their disposal, from the radar-like sensors that dot your car’s bumper to video cameras integrated with powerful computer vision systems, as well as infrared sensors, accelerometers, and air flow gauges. But the challenge is not in sensing the environment; it is in ensuring that all vehicles on the roadways play nicely together. As long as we must stick to land travel—and the physics of air taxis makes them far too expensive to be widely adopted—there needs to be a system to coordinate the movement of individual cars on the roadways.
Engineers have not solved the coordination problem. But in my view, they face an even bigger challenge that is not technological. The issue is that driving is a highly personal experience. It is human psychology that will, in my view, deter large-scale adoption of self-driving cars.
Central vs. Local Control
There are two basic solutions to managing traffic with autonomous cars. You can either control the flow of traffic centrally or you can do it locally.
Central control means relinquishing decision-making power about how you will get to your desired destination to a higher authority, which coordinates all movements. Aside from the question of who the central authority would be—government, car maker, or another entity—central control would turn the entire system into something more like all-encompassing public transport than a fleet of private robot chauffeurs.
This could be a good solution, especially in more collectivist cultures. The system as a whole could be optimized for speed, safety, and efficiency, as long as all aspects of vehicle movement are centrally dictated.
But this would not be driving as we know it today. It is hard to see how we would transition to such a system from our current one.
To give individuals more freedom and untether them from central authorities, autonomous vehicle traffic can instead be controlled locally. We are already moving in this direction with driver assistance modes available in some vehicles. The car senses its environment and, in more advanced systems, its robot brain is “taught” the basics of road travel.
However, the AI in these systems is not very good at anticipating situations it has not been explicitly trained to handle. As described in a recent white paper co-authored by leading researchers in brain-inspired AI, “a self-driving car does not inherently know about the danger of a crate falling off a truck in front of it, unless it has literally seen examples of crates falling off trucks leading to bad outcomes. And even if it has been trained on the dangers of falling crates, the system might consider an empty plastic bag being blown out of the car in front of it as an obstacle to avoid at all cost rather than an irritant, again, because it doesn’t actually understand what a plastic bag is or how unthreatening it is physically.”
To make a fully automated, locally controlled system work, each vehicle will need to communicate directly with everything around it—including pedestrians, who might need to carry beacons that broadcast their movements. Vehicles would communicate with each other by broadcasting precise measurements of their movements, as well as local weather and traffic conditions. The system would be a bit like the internet, which also supports flexible communication without central control (similar also to the brain in this respect, as I describe in my book An Internet in Your Head).
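To make the broadcast idea concrete, here is a minimal sketch of what one vehicle’s periodic message and a neighbor check might look like. All names and fields are illustrative assumptions for this article, not any real vehicle-to-vehicle standard (actual systems, such as those built on SAE’s Basic Safety Message, define far richer formats):

```python
import json
import math
from dataclasses import dataclass


@dataclass
class VehicleBroadcast:
    """Hypothetical periodic broadcast from one vehicle (illustrative fields)."""
    vehicle_id: str
    lat: float           # latitude, degrees
    lon: float           # longitude, degrees
    speed_mps: float     # meters per second
    heading_deg: float   # 0 = north, clockwise
    road_condition: str  # e.g. "dry", "wet", "icy"

    def to_message(self) -> str:
        """Serialize the state for broadcast to nearby vehicles."""
        return json.dumps(self.__dict__)


def rough_distance_m(a: VehicleBroadcast, b: VehicleBroadcast) -> float:
    """Equirectangular approximation of distance; adequate at street scale."""
    dlat = math.radians(b.lat - a.lat)
    dlon = math.radians(b.lon - a.lon) * math.cos(math.radians(a.lat))
    return 6_371_000 * math.hypot(dlat, dlon)


def nearby(me: VehicleBroadcast, others: list, radius_m: float = 100.0) -> list:
    """Filter incoming broadcasts to those a vehicle should react to."""
    return [o for o in others if rough_distance_m(me, o) <= radius_m]
```

The point of the sketch is the architecture, not the fields: every participant both emits its own state and filters the stream of everyone else’s, with no central coordinator deciding who talks to whom.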
Designs for this kind of system are still in their infancy and some urban designers see them as a potential path to dystopia. There are privacy concerns if everyone’s whereabouts and movements are potentially broadcast to everyone in the vicinity. Regulation of such a system might start to resemble a centrally-controlled regime. There are also suggestions that fully automated systems would lead to catastrophic congestion, which might prove unbearable even if we can watch a movie while sitting in traffic.
Driving is Self-Expression
The challenges of governance and technology facing self-driving cars are daunting enough. But I believe the biggest impediment to any fully automated self-driving technology is human psychology.
Cars, for those who can afford them, are an extension of one’s personality. The way we drive is an expression of who we are. Any visitor to the roadways around my native city of Boston will apprehend this fact very quickly.
But there is a basic incompatibility between self-expression and automated personal transport. Any autonomous vehicle control system, whether centrally or locally controlled, will require a high level of safety and predictable behavior from whatever entity is operating a given vehicle. There is little room for self-expression. But the feeling of autonomy granted by the personal car may prove to be something we don’t want to part with, especially in strongly individualistic countries like the U.S.
Tesla demonstrates this problem all too well. In January 2022, the car-maker introduced an “assertive” driving mode in some Tesla models (along with normal and "chill" modes). “A**hole mode,” or “road-rage mode” as some called it, tuned the car’s driver assistance system so that it would tailgate the car ahead more closely and perform illegal rolling stops at stop signs, among other behaviors. By February, Tesla was forced by federal regulators to disable parts of the system because of the potential for increased crash risk.
Clearly, we want to customize more than just the look of our ride—we want robot cars to reflect how we feel and behave. But deliberately designing an autonomous driving system with this capability is difficult to justify when safer solutions exist. An individual who chooses to drive recklessly is one thing; programming a robot car to do so is another.
The desire to drive like Bostonians—or in any other style of one's choosing—may well derail large-scale transition to self-driving vehicles. It will take more than technological breakthroughs to change this.
Copyright © 2022 Daniel Graham. Unauthorized reproduction of any content on this page is forbidden. For reprint requests, email firstname.lastname@example.org.
Speck, J. (2022). Walkable city rules: 101 steps to making better places. Island Press.
Zador, A., Richards, B., Ölveczky, B., Escola, S., Bengio, Y., Boahen, K., ... & Tsao, D. (2022). Toward next-generation artificial intelligence: Catalyzing the NeuroAI revolution. arXiv preprint arXiv:2210.08340.