Breaking the Rules: Are Human Drivers Better Than Self-Driving Cars?

We’ve imagined a world where we are driven to and fro by our self-driving cars since 1935’s short story “The Living Machine” by David H. Keller. Could anything be better than taking a nap or streaming the latest episode of one’s favorite TV show while being driven directly to work by one’s smart car?
Well, in full honesty, yes, a lot of things could be. However, we can all admit that it would be pretty nice to have a chauffeur-car — carffeur?
People like to imagine that this world of self-driving cars will lead to safer roads, but will it really? How much of safe driving is intuitive, and how much can be coded?
Some people imagine self-driving cars will be the future of transportation. Ideally, they should increase safety, reduce traffic congestion, and offer more efficiency. However, one thing is often overlooked: AI is limited in how it can handle complex and potentially rule-bending scenarios.
AI’s Rigid Rule-Following
Autonomous vehicles operate based on a predefined set of rules and extensive data analysis. This programming dictates strict adherence to traffic laws and regulations. While ideal in theory, this doesn’t always work out so well in real-world driving.

For example, a self-driving car will always stop at a red light. Yes, this is good. However, there are situations where a human driver will choose to go through a red light. Maybe the light is malfunctioning in the middle of the night with no other cars in sight. Perhaps a police officer is directing traffic and motions you to proceed. Maybe a horde of zombies is chasing down your car.
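The contrast can be caricatured in code. The sketch below is purely hypothetical (the function names and context flags are invented for illustration, not drawn from any real autonomous-driving system); it shows why a rule that only looks at the signal cannot match a driver who also weighs context:

```python
# Hypothetical illustration: a rigid rule versus a context-aware judgment.
# None of this resembles a real autonomous-driving stack.

def rule_based_decision(light_color: str) -> str:
    """A rigid controller: the rule fires on the signal alone."""
    if light_color == "red":
        return "stop"
    return "proceed"

def human_decision(light_color: str, context: dict) -> str:
    """A human weighs circumstances the rule ignores."""
    if light_color == "red":
        # A person may judge that proceeding is the sensible choice.
        if context.get("light_malfunctioning") and context.get("road_clear"):
            return "proceed with caution"
        if context.get("officer_waving_through"):
            return "proceed"
        return "stop"
    return "proceed"

context = {"light_malfunctioning": True, "road_clear": True}
print(rule_based_decision("red"))      # always "stop", no matter what
print(human_decision("red", context))  # "proceed with caution"
```

The rigid version is simpler and more predictable, which is exactly the trade-off: predictability at the cost of judgment.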
AI programming can also get things wrong. After all, AI still hasn’t seemed to figure out that most humans only have two arms, two legs, and one set of teeth.
Humans' Situational Flexibility
We assess and react to nuanced situations. Our flexibility allows us to make split-second decisions that may technically bend traffic rules but are often safer or more practical in a given situation.
Driving manuals and AI training are very clear on the don't-cross-a-double-yellow-line rule. However, if the choice is between hitting a person, driving into a tree blocking the road, or crossing that double yellow line, crossing the line might be the safest option.
Even if no rules are being broken, there are still times when drivers communicate with hand signals, eye contact, or other gestures. At a four-way stop where drivers arrive at the same time, it can be difficult to decide who got there first and has the right of way. Drivers often let each other know they will yield in order to keep traffic moving. Self-driving cars currently struggle with this give-and-take due to their rigid programming and reliance on sensor data.
Plus, sometimes, it’s just nice to brighten another soul’s day by letting him or her go first.
AI’s Ethical and Moral Decision-Making
While an AI somehow “growing a soul” might make an interesting storyline for a movie, AI doesn’t have — and will never have — a soul. Self-driving cars will struggle to make ethical decisions. People will come up with their own solution to the “trolley problem.” Faced with a choice, a driver will break rules and might even choose to sacrifice his or her own safety to save the life of another person. AI does not have the ability to break rules — even if it’s only breaking a rule to minimize harm.
AI's devotion to rules comes with some issues. Those rules were set by someone, and that someone's ethics and morals might not match your own. Rules can also bring rigidity to situations that don't require it. For example, AI image generators have rules that don't allow them to create violent and graphic images. This is a great rule, but it can lead to frustration when an AI interprets an image prompt as likely to produce graphic content even though that wasn't the user's intent. Words like "shoot," "blade," and "crash" all have completely innocent contexts as well as not-so-innocent ones, so an AI might refuse to generate an image of a "young man shooting a basketball" because the prompt could potentially produce violent content.

Humans' Contextual Understanding
Drivers interpret context. We can read subtle cues from pedestrians, anticipate the actions of other drivers, and adapt to changing road conditions in ways AI cannot. This contextual understanding allows us to make informed decisions. Any human artist asked to produce a picture of a blade of grass will know that the request is innocuous; even an artist with a rule against producing violent art will still take the commission. Driving sometimes requires bending the rules when context shows it serves the greater good.
In an emergency situation, a driver might run red lights, speed, and take the right-of-way at a stop sign. Typically, he or she will use emergency flashers to let other drivers know that he or she is dealing with special circumstances. Other drivers will understand that an emergency is happening and excuse the wild driving.
Our ability to empathize even allows us to feel compassion for the situation that requires the erratic driver to ignore road rules.
And we can’t ignore that in some places traffic and other local customs require some creative driving to get anywhere.
Human Intuition
While road rules are rules for a reason and should be followed, our intuition often helps us spot hazards and dangers that a self-driving car would not anticipate. A child playing with a ball by the road or an unleashed dog will prompt human drivers to proceed with caution. Autonomous cars will probably just assume that everything follows the same strict rules and guidelines as they do.
That's not to say we'll never have cars that we can implicitly trust to get us where we need to go in perfect safety.
For now … we should proceed with caution.
And if you’re looking to proceed with developing a custom application, contact Swan Software Solutions to schedule a free assessment.