2x34: Destroying Angel

Jeremy Garcia, Jono Bacon, and Stuart Langridge present Bad Voltage, in which vertical video is truly a thing, editing doesn’t happen as much as it should, and:

  • [00:03:22] News: Ubuntu release a first analysis of the data they collect about user hardware ... WHO add "Gaming Disorder" to their disease classification manual ... GitLab move from Azure to Google Cloud ... symbolics.com, registered in 1985, is the oldest .com domain, according to Frederic Cambus ... Akon wants to build a "techno city" in Senegal and fund it with cryptocurrency to make "a real-life Wakanda" ... Tencent joins the Linux Foundation ... the US Supreme Court rules that protection from searches applies to cell tower tracking ... Instagram are estimated to be worth more than $100 billion ...
  • [00:38:30] In January, Microsoft published a blog post proudly boasting of the contracts it had signed with US Immigration and Customs Enforcement (ICE). Recently, ICE have been in the news a lot over border control issues and the separation of children from parents; Microsoft therefore got a bunch of pushback about this, and Satya Nadella himself commented, walking it back. Google were part of a US military AI project, and in April 3,100 employees signed a petition requesting that they pull out; they walked it back. The Open Source Definition specifically says that there will be "No Discrimination Against Fields of Endeavor". Should tech companies take ethical stances, or is the software industry merely a neutral provider of technology, with users left to deal with all the ethical questions?

Come chat with us and the community in our Slack channel via https://badvoltage-slack.herokuapp.com/!

Download from https://badvoltage.org

Well, IG Farben (a tech company by any reasonable definition) developed sarin and Zyklon B prior to WWII and turned them over to the German government. I think a tiny bit of “discrimination against fields of endeavor” might have been appropriate, don’t you? True, they (the nerve agents) weren’t open source, but suppose Oggole, a developer of open source software, came up with a sure-fire way to track the movement of people irrespective of who, what and where they are. Do I think they ought to be compelled by a bit of lawyer-speak in some “land of unicorns”^* license agreement to hand that tech over to anyone who wants it? Heck no! Alas, in Yankistan corporations have all the rights of meat people, but none of the responsibilities.

  • A mythological world where only good things happen, all the children are above average, and ding-a-ling freedom-or-death types live.

Possibly not, indeed. I think the thing I at least was driving at in the discussion, or attempting to drive at, is not quite “should the government be able to demand this thing if it exists”, but instead “given that there are a whole bunch of bad consequences due to this thing existing once it does, should companies stop trying to develop it in the first place?”

The problem is that some developments (discoveries) are just that, serendipitous, and many if not most are multiple-use. Perhaps in software development the end product is more predetermined; however, the uses to which it may be put after the fact are less so. In other fields (of R&D) it is certainly more difficult to know in advance all the benefits and drawbacks a project may produce. It seems more realistic to control who gets their grubby paws on a bit of tech than to anticipate the uses to which it might be applied and decide in advance whether to pursue it or not.

Yup, and that’s been the argument for science since before computer technology existed: that research is open-ended and has no particular goal in mind, that discoveries are neutral in themselves and what matters is the use to which they are put, and that there are always potential benefits as well as deficits to discovery. This is certainly a view that I give a lot of time to… but I’m starting to wonder whether it might not actually be the case. It’s not an unanswerable knock-down argument, might be a better way of putting it. Yes, it’s possible that AI research into how to make drones better at targeting will also have useful applications for Google Home. Yes, it’s possible that building facial recognition systems to identify everyone in a protest crowd will also help implement face recognition to log into your phone. Yes, it’s possible that data analysis to identify enemies of the state by analysing their purchases will also help improve ad targeting. What I’m suggesting here for discussion is: maybe sometimes the companies developing this stuff should think, this has a bunch of bad uses, so perhaps we should explicitly forgo the good use in order to avoid the bad. Rather than developing a thing and then, when it’s put to bad uses, throwing up their hands and saying “not our fault, we just made it”. This is starting, to me at least, to seem less like an admirable spirit of neutral technological enquiry and more like a callous refusal to even think about the uses of a new idea in the rush to make it.

The sad thing about this discussion, and this kind of topic in general, is that no matter what we think, progress will happen and will be weaponized regardless of the intentions of the original creator. The harder question for me is what we can do to minimize the damage and maximize the responsibility of those turning technology against humanity.

Certainly. But I think that the original creator might sometimes consider not attempting to create things that can be, or are even likely to be, put to such use. I don’t think anyone is blaming Rutherford, who discovered radioactive half-lives, for nuclear bombs. Should we blame Bledsoe, Chan, and Bisson for the first work on computer face recognition in the 60s? Probably not, even though they were mostly funded by intelligence agencies. But if a government comes to a company and says “we want to be able to point a camera at a crowd and identify everyone in that crowd”, I think it behooves that company to at least have a discussion where they say “should we build this?” rather than merely “can we build this?”

That doesn’t preclude also saying to government, stop asking for this thing.

This. It doesn’t take a genius to suspect that if you work directly for intelligence, some sort of national secret service, or the military outright, whatever you do will end up as another means of doing harm (in the name of defense). The sad thing is, that’s where the money is, so sometimes it’s not the easiest option but the only option for a scientist or a team to carry out their experiments.

Scientists do it because they can, mostly for the glory of great discovery, not money. They are sponsored by governments that seek new ways to maintain or broaden control, either external (war) or internal (citizen surveillance). Corporations do it for money. To break the cycle, human nature would have to change.

It’s part of a bigger question, which is: what is needed to stop war and maintain peace? Because there will always be another Rutherford or Oppenheimer.

Yeah. I’m not really talking here about companies who know for a fact they’re doing this. If Lex Luthor rings you up and offers to pay you to develop synthetic kryptonite, you know what it’s for. I’m more thinking of companies where it’s ambiguous. I feel like there should be discussions which go like this:

[developer] we have come up with this cool idea where we can identify someone’s religion just by analysing a video of how they walk
[executive 1] ok let’s consider that as a project
[executive 2] we can make a ton of money from this licensing the technology to advertising agencies
[executive 3] um but there is a huge potential for misuse if people start using this to discriminate
[executive 1] good point; do we think that the benefit outweighs the potential for misuse here?
[executives] overall, yes. Also maybe there are things we can do to help the deficits not happen
[executive 1] ok let’s do it; here’s a budget

But actually, that middle part, where someone raises the potential for misuse and the company weighs it, never even happens. I’m not suggesting here that the potential for misuse should prevent a thing from being done. I’m suggesting that the feasibility study for every project should include someone saying “what terrible uses can this be put to, and are we OK with that? what will the reactions to this be, and how will the people affected feel about it?”, and I do not think that this currently happens at all. This doesn’t just apply to someone inadvertently making weapons of war. Look at the pushback Google got about their “tell a robot to ring restaurants for you” service; the Google people are intelligent and would surely, if asked the question “how will the booker at a restaurant feel about this?”, have answered “they won’t like it”… so my conclusion has to be that nobody even asked that question while they were putting the project together.

I apologize, I am way behind on podcasts and just got to 2x34 today. I had comments about the whole separation of children thing, and wanted to make several “food for thought” observations on the subject.

First of all, the reason for the separation is that the parents committed a crime by coming across the border illegally. They broke the law of the land. So it seems to me that there are five options:

  1. Change the laws so that it is no longer illegal to cross the border. (open borders)
  2. Incarcerate the children with their parents, in an adult prison. We can all probably see what would happen in a situation like that, especially since a greater-than-zero percentage of the people who are bringing these children across are smugglers, who are using them as props.
  3. Ignore the law of the land and let them come across anyway without repercussion, thus encouraging the commission of a crime.
  4. Separate the children from their parents, as has been done since the previous administration.
  5. Keep the families together, and send them right back where they came from.

Options 2 and 3 are not acceptable. If option 1 is what the majority wants, then we need to have that discussion. So it seems that options 4 and 5 are the only acceptable ones. There is a 6th option, and that would be to build a wall and keep them out except through legal means. Mexico has a wall on their southern border…

Another issue is that we are talking about 2,000 children. How many children of American citizens are separated from a parent who is incarcerated? And what about the 765,000 children who are separated from one or both parents whose only crime is serving their country?

I’m not trying to be disrespectful, but there are other factors to consider, such as the matter of scale. And since this went on during the last administration, and since most of the anger seems to be coming from the blame-Trump demographic, it seems to be a case of manufactured outrage, from where I sit.