Several recent pieces raise what seems like a very silly question: If faced with the choice, should your self-driving car kill you or instead kill multiple other people? If, say, your car is careening toward a family of five, but it can swerve at the last minute and instead crash into a brick wall and kill the "driver", should it do so?
This seems categorically different from asking what a random person should do if he finds himself in a situation where he can choose to save himself or save five other people. (In the traditional trolley problem, the decision-maker is not contemplating self-sacrifice but is trading one stranger's life for several others.) I think this "problem" also goes away if we are capable of total automation. The reason is that we can specify our desired outcome ahead of time and pre-commit to it.
The trolley problem is an interesting mental exercise because the scenario is novel and unlikely to actually happen. A human being with limited capacity for rationality is plopped into this novel scenario with no context, no legal advice, and no information about similar cases and how they were adjudicated. And this person is asked to derive some moral philosophy on the spot and make a decision. But with the question of self-driving cars, we're discussing ahead of time what the default should be. It's not some mysterious moral dilemma presented in a vacuum.
Suppose you ask someone: "Occasionally there will be scenarios where one person can die or five people can die, depending on a split-second decision. You are equally likely to be any one of those six people. Should the one die, or the five?" It's a silly question. Even if you are completely selfish and your goal is to favor yourself at any cost, you opt to save the five rather than the one. Obviously we want to set the default rules so that the maximum number of people survive these kinds of scenarios. (That's even the "selfish" answer once we remove any mechanism for specifically favoring ourselves.) More generally, we want to set society's default rules to prevent as much harm as possible.

Of course, you can get people to flip their answers by stipulating, "By the way, you're not one of the five. You're the one." People are selfish, and they will impulsively make the selfish choice when presented with a split-second decision. I think this is what's happening when people raise the "Should self-driving cars sometimes kill their passengers?" question. The proper way to pose the question is not to tell the person whether they are a pedestrian or a passenger in the self-driving car. (Or, even more cleanly, make the hypothetical a choice between a self-driving car containing one passenger and another containing five.) We should try to answer this question from behind a Rawlsian Veil of Ignorance, which basically means not telling people who they are in the hypothetical. The Veil of Ignorance forecloses self-serving answers and the self-serving rationalizations that come with them. Doing otherwise is like asking, "Would you sometimes cheat to benefit yourself, even though the socially optimal rule is to be self-sacrificing?" Duh. Most would cheat a little, and many would cheat a lot.
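To put numbers on that "selfish" answer, here is a minimal sketch of the veil-of-ignorance arithmetic, assuming six people are involved and you are equally likely to be any one of them (the figures are purely illustrative):

    # Your survival odds under each default policy, assuming you are equally
    # likely to be any of the six people involved (illustrative only).
    people = 6

    # Policy A: the car sacrifices the one to save the five.
    p_survive_save_five = 5 / people   # you die only if you happen to be the one

    # Policy B: the car sacrifices the five to save the one.
    p_survive_save_one = 1 / people    # you live only if you happen to be the one

    print(f"Save the five: {p_survive_save_five:.0%} chance you survive")  # ~83%
    print(f"Save the one:  {p_survive_save_one:.0%} chance you survive")   # ~17%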
This video invites the viewer to "imagine shopping for a car and having the choice between a car that sacrifices the driver on occasion or a 'preserve the driver at all costs' car." If presented with this survey question, our courage may fail us and we may tick the "save my own ass" box. But this is the wrong hypothetical. Let's try to fix it. Imagine you're buying a car from an autonomous-car company that has done its due diligence on the "liability" question. It knows it will be liable for the occasional injury to passengers and pedestrians, so it has engineered its software to minimize overall injury (within other constraints, of course). Then another company offers to sell you a car that will favor the driver at all costs. It faces greater liability, because its cars occasionally plow into a crowd rather than kill the passenger, so it has to charge you more for the privilege.

Will you buy the more expensive car? It probably depends on the price difference. My best guess is that this would be an obscure feature and almost nobody would think to ask for it. I think even a modest price difference (say, a few hundred dollars) would deter most would-be purchasers. When actually shelling out for the privilege of saving themselves in the rare event of a trolley-like scenario, people would give a different answer from the one they gave on a bullshit survey.

I also think a company making that sort of car would be a lightning rod for lawsuits. It would incur far more than the expected cost of the additional liability it's intentionally assuming. ("Our vehicles kill an additional ten people per year; multiply ten by the average value of a wrongful death claim, divide by the number of vehicles... Our actuaries tell us this is the expected surcharge per vehicle.") No, I think every single accident involving this manufacturer's cars would be under a black cloud. Every marginal case would be adjudicated to their detriment, and then some. Plus there would be social censure for people buying such cars. "Oh, rich boy thinks he can shell out to save his own skin rather than plow into a group of nuns, does he?" Everyone would know that brand as the "coward who plows into nuns and orphans to save his own skin" brand. You'd likely see an escalated version of painting "EARTHKILLER" on Humvees.
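The actuarial quote above is a straightforward expected-cost calculation; here is a small sketch of it with made-up numbers (the claim value, vehicle count, and litigation multiplier are all hypothetical):

    # Back-of-the-envelope surcharge the "preserve the driver at all costs"
    # manufacturer might naively compute. All figures are made up for illustration.
    extra_deaths_per_year = 10            # additional deaths caused by the selfish policy
    avg_wrongful_death_claim = 5_000_000  # hypothetical average payout, in dollars
    vehicles_sold_per_year = 500_000

    naive_surcharge = extra_deaths_per_year * avg_wrongful_death_claim / vehicles_sold_per_year
    print(f"Naive expected surcharge per vehicle: ${naive_surcharge:,.2f}")  # $100.00

    # The post's point: the real cost would be far higher, because every marginal
    # case gets adjudicated against this manufacturer "and then some". A crude way
    # to model that is a litigation multiplier on the naive figure.
    litigation_multiplier = 5             # hypothetical
    print(f"More realistic surcharge: ${naive_surcharge * litigation_multiplier:,.2f}")  # $500.00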
Change the scenario this way. Cars are sold with the "correct" software, the kind that saves the greatest number of lives. But perhaps it's easy to tweak the code to favor the driver. (Maybe all you need to do is wire $10 worth of Bitcoin to someone on the dark web, and they send you a software patch. Your car connects to your laptop by Bluetooth and installs the patch. Insane hypothetical, I know.) Is anyone going to do this, even if it's super easy? In the rare event that you do plow into a crowd of nuns, everyone is going to suspect your car's software was tampered with. An investigation confirms the tampering, and suddenly you become liable for the actual harm caused. Anticipating this sequence of events, almost nobody tries it.
I think this particular trolley scenario would be extremely rare in practice, which makes the hand-wringing all the sillier. Why focus attention on something so inconsequential when there are bigger fish to fry? In any case, the "correct" answer is obvious enough, and the people raising this as some kind of serious moral dilemma are being kind of absurd.