1x54: The Trolley Problem

Nice episode, thanks!

Frameworks are made to get a product ready faster, but nearly all of them suffer from the same problem. Yes, the examples on each framework’s website are great, and they all make it look like this is exactly what you need. You start using one (or more) of these, and at some point decide to add a new feature, something a bit off the beaten path. And… you can’t. Or it’s very cumbersome. Or it’s very difficult, because the framework always gets in your way and you have to hack around it to make things work.

I’ve seen this complaint more and more often recently, especially in the web development world.

Video game development has been plagued by the Unity3D effect. A lot of game dev studios use Unity3D because they think developing a video game with traditional tools would be very complicated. A friend of mine worked at two different game dev companies, both using Unity3D exclusively, and as a developer he was very frustrated and sad to have to use it, because no matter what he wanted to do, he would always end up on script-kiddie forums trying to copy/paste meaningless pieces of code to get basic things to work. In the end he hated his job and left after a while.

I think Unity3D and the like can be very useful for people who want to develop a video game but have little to no coding abilities. A graphic designer with a very neat game idea could achieve something pretty nice using that. But if your game dev studio and/or your coding team start to grow, it’s probably better to switch to something else.

But… the new generation of developers is afraid of lower-level things (that’s why people like Casey Muratori have been doing incredible work showing how to code a game from scratch).

Frameworks and helper tools are like the training wheels on a bicycle: at some point you have to take them off, or you look ridiculous.

I’m thinking of a combination of computer control and human input. And who would be better than Microsoft for that? I can just see the interface scenario:

Car: A collision with pedestrians is imminent. Emergency Avoidance App wants to access the system. App made by MICROSOFT wishes to access the system. Do you trust apps from Microsoft? yes

Car: Do you wish to run Emergency Avoidance App? yes

Car: Do you wish to avoid the pedestrians? yes

Car: To avoid pedestrians, the car will go off of a cliff and you will die. Do you wish to die? NO

Car: So, you wish to kill pedestrians? no

Car: Uhhmmmm, would you like to play mines?

:smile:

An article about the cost of frameworks, from a conference held recently in Brighton. @sil, were you aware of this conf?

Full Frontal? Yep, I was aware of it. I didn’t go this year, but I spoke at the very first one. It’s run by Remy Sharp, who is a good chap and also built Confwall, which we used for the live Twitter wall at Live Voltage in Fulda in 2015.

What kind of decision would we hold a human driver accountable for? We don’t make the human driver of a car today responsible for choosing how to crash into the smallest number of bystanders possible.

I think as long as the car isn’t programmed to “kill the driver at all costs”, the safety features shouldn’t be *trying* to solve ethical problems. We don’t expect a hammer to choose between hitting our thumb and breaking a hole in the wall when our swing is off and we’re not hitting the nail anyway; it’s just a tool. Likewise, shouldn’t self-driving logic stop at “slow down when there’s stuff in front”?
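
If it helps to make that concrete, here’s a minimal sketch of that “just a tool” rule. Everything in it is hypothetical (the `Sensor` and `Brakes` interfaces, the deceleration figure); it’s an illustration of the idea, not anything like a real vehicle controller:

```typescript
// A minimal sketch of "slow down when there's stuff in front".
// The Sensor and Brakes interfaces and the deceleration figure are
// all hypothetical, invented purely for illustration.

interface Sensor {
  // Distance in metres to the nearest obstacle ahead, or null if clear.
  distanceToObstacle(): number | null;
}

interface Brakes {
  // Apply braking force as a fraction of maximum (0 = none, 1 = full).
  apply(fraction: number): void;
}

function reactToRoad(sensor: Sensor, brakes: Brakes, speedMps: number): void {
  const distance = sensor.distanceToObstacle();
  if (distance === null) {
    return; // Road is clear: no action, and no ethics involved.
  }
  // Estimate stopping distance from speed (v^2 / 2a), assuming a
  // braking deceleration of about 7.5 m/s^2, then brake harder the
  // closer the obstacle is relative to that distance.
  const stoppingDistance = (speedMps * speedMps) / (2 * 7.5);
  const urgency = Math.min(1, stoppingDistance / Math.max(distance, 0.1));
  brakes.apply(urgency);
}
```

Note there’s no valuation of what the obstacle *is*; the car treats a person and a bollard identically, which is exactly the hammer-like behaviour I’m arguing for.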

I think I’d dispute that. If I crash my car into a group of 10 people and kill them all, and my defence is “but I was trying to avoid crashing into the one person who ran out in front of my car”, then… well, maybe I’d get a favourable hearing in court, but I think it’s quite likely that I would not.

Certainly this whole discussion is centred around a contrived example – the car will inevitably hit group A or group B of pedestrians, and how should it choose between them? – and clearly the solution of “don’t crash into any pedestrians” is the best one if it’s available. But the reason I think this is an interesting thing to discuss is that, at some level, there will need to be some rating of the “worth” or “possible damage” of things a self-driving car might crash into. This is a cold-blooded business and people don’t like talking about it, to quote Cecil Adams, but it nonetheless needs discussion. How much is a human life worth? (If your answer is that human lives are priceless and worth any amount of money, then is it worth depriving hundreds or thousands of people of housing or food to keep one person alive? Etc, etc, etc.)

The interesting thing here is not necessarily the existence of ethical quandaries. It’s that, up until now, these ethical quandaries have been largely expressed in legal writings; that is, they’re for interpretation by human beings (normally judges or juries), where argument can be made for and against. Here, though, we’re talking about having them expressed in code, which does not admit of vagueness. Our self-driving car has to do something when confronted with the trolley problem; someone has to write the lines of code that express what it is to do; someone has to have written a spec for those lines. How does the spec writer decide what to do? If they go into court and say “we couldn’t decide what to do, so the car just reads Math.random() and uses that to make a decision”, will they be found guilty of causing negligent homicide? Probably.
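
To make that last point concrete, here’s a rough TypeScript sketch of what expressing the quandary in code forces on the spec writer. All the types and weights here are hypothetical, made up for illustration; the point is only that the code must contain *some* concrete, contestable valuation:

```typescript
// Hypothetical sketch: once the trolley problem is expressed in code,
// vagueness is impossible. None of these types or weights come from
// any real vehicle; someone simply has to pick concrete values.

interface Outcome {
  description: string;
  pedestriansHarmed: number;
  occupantsHarmed: number;
}

// The spec writer cannot avoid encoding a valuation. Even "weight
// everyone equally" is a concrete, contestable choice.
const PEDESTRIAN_WEIGHT = 1.0; // assumption, not any real standard
const OCCUPANT_WEIGHT = 1.0;   // assumption, not any real standard

function harmScore(o: Outcome): number {
  return o.pedestriansHarmed * PEDESTRIAN_WEIGHT +
         o.occupantsHarmed * OCCUPANT_WEIGHT;
}

function choose(outcomes: Outcome[]): Outcome {
  // Deterministic: pick the outcome with the lowest harm score. The
  // alternative mentioned above, outcomes[Math.floor(Math.random() *
  // outcomes.length)], compiles just as happily, which is the point.
  return outcomes.reduce((a, b) => (harmScore(b) < harmScore(a) ? b : a));
}
```

Whoever sets those two weights is making exactly the decision a court would ask about.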

Could this lead to an “Internet of Things” thing? There has been discussion of putting ‘chips in every paving brick’ at some point in the future. Would the responsible thing be to wait until sufficient tech comes into play that the road knows what is on it and communicates those things to the car?

This is an interesting connection. Technology always seems to make us aware of more problems than it solves, if not actually ‘enabling’ new problems or outright creating them. Any solution we come up with now, when we KNOW we don’t have the right answers, is sure to be made obsolete. So would we as a society, or Google as engineers, or I as a consumer, be liable for the inevitable faults in our technological solutions to ethical problems?


I’m glad you put it in such a way! It would be like saying “Hey, I invented this parachute, want to try it?” But when asked how it is deployed, it would be “We’ll figure that out on the way down!” Sometimes, we humans are just plain silly.

One of the arguments for using frameworks is that they allow people who are still learning to do something interesting fairly quickly. This is how I learned, and it caused me to develop some pretty bad habits. I developed the mindset that once an abstraction is in place you can’t change it, and I started working around abstractions, even the ones I wrote myself, instead of fixing them to fit my needs.

Frameworks are great for quick prototyping, but they’re absolutely awful for learning.

Why didn’t anyone propose running over the five people and then, in a bout of self-consciousness, careening over the cliff and killing the driver too!?

I think that’s what @bryanlunduke’s “Murder Mode” does…

When I think of what @bryanlunduke said about Pokemon Go, Google and then this on the ethics of driverless cars, maybe Larry here has a point!