
Programming the morals of the self-driving car — 41 Comments

  1. There’s the old chestnut about the first all automated passenger airliner. The after takeoff announcement:

    “Ladies and gentlemen, welcome aboard the first all-automated, pilotless jetliner. We are presently climbing to our pre-planned altitude of 37,000 feet. When the air is smooth, the seat belt sign will be turned off. In the meantime, please keep your seat belts fastened for your own safety. Sit back, relax, and enjoy your pilotless flight. Remember that with this newly perfected technology nothing can go wrong, go wrong, go wrong, go wrong…”

  2. Cars are freedom. I can hop in my car and go thousands of miles with nobody’s permission and nobody knowing. I worry that several new automotive technologies, including self-driving cars, will erode that freedom. How about the device that the insurance company puts in your car to track your mileage and speeds? Or a network of road cameras that can keep track of individual cars’ activities? I fear this.

  3. I simply abhor the idea. The claim that it is safer and will result in fewer accidents is, at this point, highly speculative.

    Related to JJ’s post above: autopilots are a great tool and serve a fine purpose in airliners. They generally provide smoother handling for the passengers, and with modern ground-air technology they can typically land in lower visibility than a human pilot is legally allowed to attempt.

    On the other hand, there has been a rash of accidents that occurred simply because pilots (mostly foreign) have become so reliant on the autopilot and other technology that they are only marginally capable of actually flying the airplane when it becomes necessary. Human control does become necessary in airplanes, as it will in cars.

    I find it ironic that folks are willing to talk about driverless cars but get very upset at safety-enhancing technology such as red-light and speed cameras (both common in England). We badly need better traffic enforcement and should use technology to assist, since human monitoring is sparse. Most auto accidents could be avoided if folks simply obeyed the laws and paid attention to the business of driving.

  4. An argument can be made that computer-controlled cars make sense on freeways, where traffic is predictable. Not so on ordinary roads off the freeway.

    As for the morality of the given example, perhaps that’s not so hard after all, as proven by the admiration we all feel when a pilot steers his falling aircraft away from the homes below. We admire that person simply because they did the right thing despite the personal cost. That said, taking that decision away from the individual is wrong, because it disempowers the individual.

  5. Geoffrey Britain:

    I’m not sure I get your analogy.

    Of course a pilot in a plane that’s already going down is going to direct it away from homes if he has any control of it left at all, in order to minimize the casualties. But in that situation his plane is going down anyway. He’s not hurting himself by his action.

    Unless you’re talking about a situation where he is flying solo and has a chance to parachute out and save himself, but foregoes it in order to steer the plane away from homes, and therefore forfeits his life in order to save others?

  6. My question is, how freaking fast were you going around that corner if the result of striking a building, rather than the people, is your death or serious injury?

    Yes, I know that’s not the point, but really, if I am turning a 90-degree corner in the city, I’m probably going 20 mph or less, and I’m just not going to get killed. Now, if I hit pedestrians at 20 mph, they, being unprotected, will suffer some serious injuries, so the answer is: hit the building. But how fast are they going in this dumb question?

    And let’s not forget that a car programmed for in-city driving is going to, I hope, be programmed and equipped to watch out for jaywalking idiots and dummies on bikes and other annoying sorts of people.

    Perhaps it would be better if people come up with realistic scenarios to fret over.

  7. BTW, I deal with auto insurance claims so dumb and entirely preventable accidents make me crankier than they probably do the average bear.

  8. NNC – I think that’s exactly what he means. A similar though smaller-scale example is something that happened to a couple of friends of mine. They were formation-landing a pair of P-51s at an airshow. During the landing, the elder pilot found himself above and behind the younger pilot, such that if he had continued his landing, he would have landed *on top* of the other aircraft. He didn’t have a whole lot of time to react, but he did: he pulled up and added power, even though he had to know he was too close to stall speed at that point in the landing for the aircraft not to stall.

    It did stall. It dropped a wing, rolled into the grass, and killed the elder of my friends. I truly believe he knew there was a good chance that avoiding the coming collision would kill him, but he did it to save another life.

    No computer can or should make that decision for you.

  9. I think that this is a poor example of the kinds of decisions faced by automated cars, whoever wrote it. No decently programmed car would ever turn so quickly onto a blind, walled street that it couldn’t stop in time should it encounter an unexpected situation. You have to understand that these cars are continuously scanning the space around them and driving in such a way that they can avoid anything that might reasonably happen. When there is uncertainty (as in turning into a blind alley like this) they proceed slowly, or stop and turn over control.

    A more reasonable example of the kind of dilemmas that would cause issues would be things like the following:
    – An out-of-control car swerves across the road, heading straight for your automated car, and the car detects pedestrians on the sidewalk in the most obvious escape route. What does the car do? Most likely, try to turn into the other lane while stopping. If that’s not feasible, it will try to slow down to reduce the energy of the crash as much as possible and present the best side of the car (the most well protected, and farthest from the occupants) for this maniac (or incapacitated driver) to hit.

    – You are driving on a highway when a bus comes off the overpass above you, right in front of your car; essentially you have a giant wall of metal directly ahead of you.

    The interesting thing about these two situations is that in the first, unless the other driver was driving completely normally until the second he swerved into your lane, the car would start making speed adjustments and looking for alternative paths well before a human would. Frankly, current automated cars already predict what the cars around them will do, all the time, assuming worst cases, and they keep precomputed actions ready. In the second case, the LIDAR would most likely detect the bus before it even hit the ground, and the car would start reacting before the bus landed. It’s hard to appreciate how quickly these vehicles can react: they can act before a human would even realize there was a problem, and they can be programmed to take into account factors (such as pedestrians) that a human may not see or be able to process mentally in time. A rough sketch of that scan-predict-and-react loop follows this comment.

    Also, as to the original circumstance: what, exactly, would a human driver who was driving so recklessly do? Whatever it is, it’s likely to be a worse decision than the car’s.

    This isn’t to say that programming these cars for these situations is easy or that programmers won’t make a mistake. I’m really trying to point out that the situations we can easily think of to present ethical dilemmas are generally not actually hard cases. The actually hard cases might be things we wouldn’t imagine would be hard.
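
    Here is a very rough sketch of the scan/predict/precompute loop I mean, in Python. It is purely illustrative: the names, the numbers, and the one-dimensional world are my own simplifications, not anyone’s actual planner code.

    ```python
    # Hypothetical sketch only, not any vendor's real control code. A 1-D
    # world: we track a single dimension, "meters ahead of us on our path."
    from dataclasses import dataclass

    @dataclass
    class Track:
        """A nearby object as reported by the sensors."""
        gap_m: float        # current gap between us and it, meters
        speed_mps: float    # its speed in m/s, same direction as us
        worst_decel: float  # hardest braking we assume it might do, m/s^2

    def worst_case_min_gap(t: Track, our_speed: float, horizon_s: float = 2.0) -> float:
        """Smallest gap over the next few seconds if the object brakes as
        hard as we fear while we keep our current speed."""
        gap, dt, elapsed = t.gap_m, 0.05, 0.0   # stepped at 20 Hz
        while elapsed < horizon_s:
            obj_speed = max(0.0, t.speed_mps - t.worst_decel * elapsed)
            gap += (obj_speed - our_speed) * dt
            if gap <= 0.0:
                return 0.0
            elapsed += dt
        return gap

    def choose_action(tracks: list[Track], our_speed: float) -> str:
        """Each cycle, precompute the response to the worst plausible future."""
        if any(worst_case_min_gap(t, our_speed) < 5.0 for t in tracks):
            return "brake"    # act before the bad future can materialize
        return "maintain"
    ```

    Because a loop like this runs twenty or more times per second, the car’s “decision” in a crisis is mostly the retrieval of an answer it computed before a human would even have noticed the problem.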

  10. For those readers who think bad programming has never killed anybody, you’ve never heard of the Therac-25.

    However, the example given is on its face suspect. If I’ve just turned a corner, I’m not going so fast that I can’t hit the brakes and get stopped. Besides, swerving, like braking, bleeds off a vehicle’s speed. And ten people suddenly appearing all at once? Not going to happen. First one person becomes visible, and then another, and what do you know, the driver (or the software) realizes that caution is advised and doesn’t accelerate. And so on.

    This is why ethicists are held in such low regard by people who work real jobs. The “ethics” they come up with have no justification in the real world, where people make life-and-death decisions every day.

  11. They can also remotely turn off the brakes and then you end up in a “car crash” that is fatal, somehow, because the air bags refused to deploy.

  12. Tonestaple, Bill, Ed:

    Very interesting comments. I’m going to add an addendum alerting readers to the discussion going on here.

    I’m certainly no expert on the topic of self-driving cars, but my distrust of highly automated systems remains. Perhaps it’s irrational, but the surrender of autonomy feels dangerous to me in a different way.

  13. neo,

    I’m not as certain that “a pilot in a plane that’s already going down is going to direct it away from homes if he has any control of it left at all, in order to minimize the casualties.” If it were such a given, there would not be the outpouring of approval.

    But the larger point is that we all know that, if necessary, sacrificing ourselves to avoid the unintended killing of innocents is the right thing to do. Thus an algorithm guiding an autonomous vehicle through such a decision-making process would be morally correct, were it not that it takes away the driver’s autonomy and decision making. It is that to which we object.

  14. An old joke: after takeoff, passengers on a flight are greeted by the pilot’s voice announcing that they are part of a history-making event, the very first passenger jet without human pilots, and that they are aboard an autonomous aircraft controlled completely by computers. The pilot’s voice goes on to assure the nervous passengers that the system is tried and true and that “nothing can go wrong, nothing can go wrong, nothing can go wrong…”

  15. Is there any REAL demand for self-driving cars?

    Costs v benefits?

    Looks to me to be a nerd engineering project.

  16. I agree with the “this is foolish to worry about” crowd, but for different reasons. Have any of you objecting ever driven in traffic? People do the most dumb-assed things.

    If they are asserting any sort of moral agency, it is “to hell with the rest of you.” Eating, make-up, texting, talking, or fighting with the dog while driving are not exactly moral decisions.

    A human driver in this situation (assuming it would ever happen) is just going to slam on the brakes in a panic. There isn’t time to think about anything, and most drivers are stupid and ALWAYS slam on the brakes for any problem.

    I hardly think that computers will do worse – even if they do end up killing people or their passenger.

  17. It really gets scary when you can’t turn the automation off. Or when you do, the automation rats you out to somebody. At that point, I don’t want to play anymore.

  18. Cornhead: There is absolutely a demand for self-driving cars.

    The most basic reason: commuters in high-traffic areas. Commuting in dense traffic requires constant attention to drive very short distances at a time, in stop-and-go traffic or through repeated red lights. Self-driving cars can communicate with one another to make those small movements automatically, with a much lower likelihood that some idiot trying to text in stop-and-go traffic causes an accident. Because commuters could focus on things other than the road, much of the frustration of commuting in dense traffic would be eliminated. Stress levels would drop enormously, and road rage would be far more limited than it is now.

    I would bet (and the makers of self-driving cars are similarly betting) that almost everyone who drives to work in dense traffic would, if they could, shell out for a car with the option of automatically driving them to and from work.

    Self-driving cars also change the parking dynamic. I pay fairly high fees to park at my office, and I am limited in where I can park by walking distance to my office. If I had a self-driving car, it could drive me to work in the morning, drive back home, and drive back in the evening, removing the need for parking permits entirely. Or, I would have the choice of having my car park somewhere else, where the cost of parking was cheaper.

    Additionally, families with staggered schedules could more easily share a single car. Parent A is dropped off to work by the car, which then returns home, the kids get in and are dropped off at school, and when the car gets home, it is usable by Parent B until it needs to go back to pick up the kids or Parent A.

    Self-driving cars can also act as long-distance cabs. Say I want to go to a wedding in northern NH, but I live in Boston. There’s not really any way to get there and back except for driving, and I don’t want to get a hotel room–say I don’t know the happy couple well enough to shell out $400 for a room in their B&B. That means I can’t really drink that much, and I’d have to leave early-ish because I don’t want to be driving home at 3am. But if I had a self-driving car, I could go, get drunk and/or leave at 3am, and still get home safe without having to pay hundreds of dollars for a multi-hour cab ride.

    I should clarify that I think all self-driving cars should have the option of a human assuming control at any time, but I for one welcome this change.

  19. First, to those who are saying this particular situation could never happen: you’ve missed the point.

    The scenario is deliberately as improbable as one can imagine, because it is humanly impossible to envision every possible situation real life can throw at you. The point of the exercise is to look at how to deal with situations in which there is no good outcome.

    So here is a scenario where something like this could happen. It’s very improbable, but not impossible.

    An individual works for a logging or mining company and is headed out to a worksite. The road is a highway that passes through hilly/mountainous terrain. As the individual is about to exit an area where the road was cut through a hilltop, a group of environmental zealots jump into the road. These people have decided that the destruction of natural resources is the most terrible thing ever and giving their lives will help make it end.

    The real dilemma comes from the fact that they brought their children into the road with them.

    What is the right thing to do? Sacrifice the driver to save the children or strike the crowd and hope for the best?

    Here is where it gets sticky for a company producing the automated car. Whichever option is chosen, the car company decided who lived and died. The liability for the decision has been shifted from those directly involved to a group of businesspeople far removed from the actual incident.

    Do you trust Google to decide whether you live or die, whether your parents live or die, whether your wife and children live or die? Whom is the company going to put first when deciding the best outcomes of scenarios? It isn’t you, and it isn’t society; it is whatever is best for the company.

    While I don’t think half the people on the road should be operating tons of metal at high speeds, I still trust their sense of self preservation to minimize the risks over the self interest of some faceless corporation.

  20. Driving is fun, but not for city dwellers, who are mostly Democrats. Draw your own conclusions.

  21. Driverless cars will have their own protocols and formations for keeping traffic flowing smoothly … as long as there aren’t any humans in the way. Thus I predict that within five years of self-driving cars becoming the majority, human drivers will become the most hated creatures on the road. Unless you’re one of them you’ll hate them too — and hypocritically so, since which side of the line you’re on will depend on whether you happen to be driving that day or not.

    “But it’s different when I drive! I know what I’m doing! Those other jerks holding up traffic instead of letting the computer handle it are the REAL problem! I unlike most people can of course drive better than the computer.”

    Your amazing driving skills are how you’re able to focus through all the distractions like honking and waving of middle fingers …

  22. … assuming of course the computer-controlled driving system will still allow people to honk the horn.

  23. The three things that will have to be pried from my cold dead fingers: my guns, my frying pan of bacon, and my steering wheel.

  24. Obviously, in a life-and-death decision, the car will kill Republicans to save Democrats.

  25. Just another thing to consider: are these self-driving systems vulnerable to hacking? Are you sure? Are you “all the cars on the road suddenly crash into each other at high speed at one time due to a day-one vulnerability” sure?

  26. Your moral dilemma is quite similar to one proposed for a human driver by the philosopher Philippa Foot in 1967: the trolley problem.

    Foot being rather inventive, the discussion has since wandered off into related matters. Here is an overview:

    https://en.wikipedia.org/wiki/Trolley_problem

    You should probably rethink this statement:

    “any one-size-fits-all solution is a surrender of individual autonomy”

    This particular example is complex, with competing values at stake, so it’s easy to think that after you have reached a conclusion, someone else could plausibly reach a different one. But many moral decisions are quite clear, and the same result should occur each time they come up, without any loss of autonomy.

  27. I agree that driving is freedom, and I loooove to drive. Do not want driverless cars.

    Here’s the concern/issue I have with driverless cars, based on years of driving in places like Boston and DC rush hour: how do you program a car to respond to the other drivers? How would this be tweaked by region? For example, in Hawaii they drive below the speed limit; they are so laid back.

    In your scenario, I would assume that there may be other cars nearby (e.g., maybe the one behind is tailgating because yours is programmed not to exceed the speed limit) – so will the response be based solely on the pedestrians and the car’s occupant(s), or will it also factor in potential collisions with other cars, based on their proximity?

    So much of driving is situational – based on the current traffic conditions, the weather, road conditions, and the behavior of nearby drivers (are they texting? swerving? do they have loose items in their truck bed?). When I drove in Boston, I was much more aggressive (to fit in), and I learned to read the body language of drivers to anticipate things like being cut off.

    The only way I see driverless cars working is if the roads carry 100% driverless cars, or if the cars have separate roadways (like HOV lanes). The possibility of forced 100% driverless cars upsets me greatly.

  28. Driving in heavy traffic requires a certain amount of playing chicken. The algorithm will lose. Self-driving trucks will be the preferred target for pirates, since there is no unpredictable driver to deal with. Terrorists will no longer need a kamikaze driver when using a vehicle as a weapon.

    I am not betting on this technology just yet.

  29. What are we sacrificing for a predicted increase in physical safety, and is it worth it?

    When something protects you, it also has ownership and control over you, if it is not something you yourself built and control.

    The idea that people control their technology is quaint, since half of them have no idea how it was built or what makes it tick. Some “control” there.

  30. There aren’t even just the questions of morality, but those of practicality.

    Scenario:
    An autonomous car is driving someone down a two-lane road. 500 feet ahead, it “spots” (or detects, or whatever) a fallen tree limb that is obstructing its side of the road, but not the other side. The limb is too big to go over or through. What happens?

    Does the autonomous car wait until it’s “safe” to go into the other lane, crossing the double yellow line in clear violation of traffic laws? Or does it drive up to the tree limb and just stop? If it stops, does it just sit there until something happens to remove the obstruction? What if there are other autonomous cars coming up behind it?

    Simply stopping in the middle of the road is another violation of the law, so either way, we start programming computers to violate the law.

    There’s no place to turn around, so recalculating a route is a pointless exercise.

    There are hundreds of interactions on the road that require what is, in effect, human judgment, and that, absent actual intelligence, simply cannot realistically be broken down into math.

    The point about red-light cameras is a very valid one here. It’s easy to say “if people would just follow the laws,” shrug, and continue to support them. But the problem is that when things are broken down into computer terms, there’s no “2”; everything is binary. You were either in the intersection when the light turned red, or you weren’t.

    So what happens when the light turns yellow and you’re at that point of no return? If you go, you risk being ticketed by something fully automated that operates under a set of rules outside your own. If you decide not to go, you may have to slam on your brakes to avoid entering the intersection before the light turns red. And if you do slam on your brakes, maybe the person behind you was following too closely and rear-ends you, pushing you into the intersection, at which point everything is a whole lot worse than if you had just run the light as it changed at the last second. (A toy version of the go-or-stop arithmetic follows this comment.)

    Anyone who has driven a car has had to make that split-second decision at a yellow light: go or don’t go. It’s a conscious decision determined by a thousand variables that you have to process nearly instantaneously. Some of those variables carry different weights, and some of them were learned through trial and error.

    One of the things my dad taught me when I was learning to drive was, “Don’t make sudden movements that might take someone by surprise.” (Incidentally, I hate people who don’t use their signals; outside of your horn, they are your only means of communicating with the others around you who are driving 1.5+ tons of machine. Why not use them?) As such, I tend to think that slamming on your brakes falls squarely into that unexpected category and is a great way to cause an accident.

    Ever driven in a city like Chicago, where it’s largely bumper to bumper even at 45+ mph? I’d invite you to drive up Lakeshore Drive when the Cubs are playing and see how easy you think it would be to program the best way through that mess without improvisation. Or how best to deal with the red-light camera at the end of it. The intersection of Sheridan and Lakeshore is quite busy, and the interactions there are not always binary.

    Swerving to avoid an animal is similar: it’s unexpected. If there are oncoming headlights, the best bet is to just hit the animal (or the small child who just ran into the road). If not, your best bet may very well be to swerve into the oncoming lane to avoid the obstacle that appeared. The point is that we would need to program, in concrete terms, every single possible variation and determine what the “best” outcome is. That’s an impossibility. Things are further compounded when we allow both autonomous and human-controlled automobiles on the road at the same time, since humans are by nature largely unpredictable.

    If our computer models can’t even predict weather or climate beyond general trends, how on earth could we be vain enough to think we could program them to determine how to act in traffic?

    Chaos theory effectively tells us that this sort of automation is impossible unless every single possible scenario is mapped, weighted, and accounted for. We might be able to make it work 95–99% of the time, but should we just dismiss the outliers? My instinct says “no.”

    Unless we create ACTUAL artificial intelligence, and then completely trust it with this job, the autonomous car is a rich man’s pipe dream. This is also very, VERY different from an airplane autopilot, which assumes that you’re in the air and have a largely limitless ability to swerve in any direction without hitting a structure (or even the ground). You have near-free rein in x, y, and z. Applying that sort of AI to a city street, which is largely a series of enclosures that respect only x and y, is something else entirely.

    To quote a terrible movie called “The Core”: “Space is easy, it’s empty!”
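
    For what it’s worth, the physics half of that yellow-light decision is easy to write down; it’s the other 990-odd variables (the tailgater, the camera, the weather) that are hard. A toy version in Python, with purely illustrative parameter values of my own choosing:

    ```python
    # Toy go-or-stop check at a yellow light. All parameter values are
    # illustrative assumptions, not traffic-engineering standards.
    def yellow_light_choice(speed_mps: float,
                            dist_to_line_m: float,
                            intersection_m: float,
                            yellow_left_s: float,
                            reaction_s: float = 0.2,     # machine; a human needs ~1.5 s
                            decel: float = 4.0) -> str:  # comfortable braking, m/s^2
        stop_dist = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel)
        can_stop = dist_to_line_m >= stop_dist
        can_clear = dist_to_line_m + intersection_m <= speed_mps * yellow_left_s
        if can_stop and can_clear:
            return "either"        # pure policy choice
        if can_stop:
            return "stop"
        if can_clear:
            return "go"
        return "dilemma zone"      # neither option is clean

    # 20 m/s (~45 mph), 25 m from the line, 20 m of intersection, 2 s of yellow left:
    print(yellow_light_choice(20.0, 25.0, 20.0, 2.0))   # -> "dilemma zone"
    ```

    Even the toy version shows the commenter’s point: at some speeds and distances neither branch is clean, and the tailgater behind you appears nowhere in the formula.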

  31. My second career, for the last 25 years, has been programming. I am not so vain as to think I’m really good, but my company is still in business, built on my skill (or persistence). I would never knowingly put my life in the “hands” of a driverless car. Programmers cannot anticipate all of the variables of something as complex as traffic on the existing vehicular infrastructure.

  32. Those arguing that the scenario in the ethical challenge could never come to pass are naively optimistic. People end up in a LOT of places where they should not be. If our wondrous self-driving cars always drove at a speed from which they could stop before hitting anything, then they’d usually be doing less than 30 miles per hour (some back-of-envelope arithmetic on that follows this comment). These cars are NOT omniscient. They cannot see around corners. They cannot see THROUGH other vehicles. And they are NOT going to be programmed to drop to 10 mph every time they pass a group of people on the sidewalk, even though a kid walking there may dash in front of the car, with a parent chasing.

    I’m with our hostess on this one. I see the attraction of self-driving cars (it holds none for me), but I consider them to be more of a threat to freedom and autonomy for most, while being a boon to a very few.

    Remember, one of the first things the Soviets did when the Poles decided to give their Russian “friends” the boot was shut down public transit. Self-driving cars will depend on public wi-fi infrastructure…
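
    The “less than 30 miles per hour” estimate above is easy to sanity-check with the standard stopping-distance relation v² = 2ad. A back-of-envelope sketch, assuming hard dry-pavement braking of about 7 m/s² and negligible machine reaction time (both assumptions mine):

    ```python
    # Back-of-envelope check of the "under 30 mph" claim, assuming ~7 m/s^2
    # braking (a hard stop on dry pavement) and negligible reaction time.
    import math

    def max_safe_speed_mps(sight_distance_m: float, decel: float = 7.0) -> float:
        """Fastest speed from which the car can stop within the road it can
        actually see: v^2 / (2 * a) <= sight distance."""
        return math.sqrt(2 * decel * sight_distance_m)

    for sight_m in (10, 15, 30, 60):
        mph = max_safe_speed_mps(sight_m) * 2.237
        print(f"{sight_m:>3} m of visible road -> about {mph:4.1f} mph max")
    # 10 m -> ~26 mph, 15 m -> ~32 mph, 30 m -> ~46 mph, 60 m -> ~65 mph
    ```

    With urban sightlines chopped up by parked cars, corners, and other vehicles, “never outdrive your sensors” really does land near 30 mph.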

  33. “and what the cars are programmed to do in these situations could play a huge role in public adoption of the technology.”

    If MY car is not programmed to save MY ass in ALL situations I say it’s spinach and I say to hell with it.

  34. Allegedly real insurance claim:
    “The other guy was all over the road!
    I had to swerve three times before I hit him.”

  35. Tom Burhoe:

    Yes, of course. But right now we have freedom and autonomy in our cars, and some of us don’t want to give it up.

    And of course we don’t have total freedom and autonomy in our cars either—those of us who obey the laws, that is.

    Your point? That we don’t have complete freedom in every aspect of our lives?

  36. Driverless cars will work well when all vehicles in proximity can communicate position, velocity, direction, and capability to one another. If one vehicle goes rogue, the rest will respond accordingly. The same applies to the remaining human drivers, who will probably be given a wide berth by the driverless network of vehicles. (A toy sketch of such a broadcast message follows this comment.)

    We are already starting to see vehicles with driverless features to protect drivers from themselves – collision avoidance, snooze detection, lane departure warnings, emergency braking systems, and the like.

    One huge problem that driverless vehicles would solve is DUI. If the vehicle detects the driver is impaired, it can take over (good) or call the cops (maybe not so good).
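
    The position/velocity/capability broadcast this comment describes is roughly the idea behind vehicle-to-vehicle “basic safety messages.” Here is a minimal hypothetical version in Python; every field name is mine for illustration, not the actual SAE/DSRC spec:

    ```python
    # Hypothetical minimal vehicle-to-vehicle beacon, loosely in the spirit
    # of a DSRC basic safety message. Field names are illustrative only.
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class V2VBeacon:
        vehicle_id: str
        lat: float
        lon: float
        speed_mps: float
        heading_deg: float      # direction of travel
        max_brake_mps2: float   # "capability": how hard this car can stop
        autonomous: bool        # lets the network give human drivers a wide berth
        timestamp_s: float

    def encode(b: V2VBeacon) -> bytes:
        """Serialize for a ~10 Hz local radio broadcast."""
        return json.dumps(asdict(b)).encode()

    packet = encode(V2VBeacon("car-42", 41.8781, -87.6298, 13.4, 90.0,
                              7.0, True, time.time()))
    ```

    A car that gets no beacon from a nearby radar contact, or a beacon that contradicts what its own sensors see, could flag that vehicle as rogue or human-driven and widen its margins accordingly, which is exactly the “wide berth” behavior described above.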

  37. I’m sure everyone will have a pocket taxi driver on their shoulder, one who may or may not do what he’s told, whenever they set out driving home.

    You don’t have control in a taxi either, right? So what difference would it make?

