Jeremy Garcia, Jono Bacon, and Stuart Langridge present Bad Voltage, in which there is unsceptical intensity, we are all eaten by a T-1000, and we have collected a list of concerns about Large Language Model-based AI, which we’re going to dive into!
- [00:00:00] Introduction
- [00:04:30] The "free software" concern: it normalises inhaling information for free and then renting out access to it. Upcoming EU regulation requires LLM creators to declare copyrighted material used in training corpora
- [00:09:30] The automation one: puts lots of mid-level workers out of a job, with zero provision for what should happen to those people
- [00:25:25] The costlessness one: if it’s free (or at least easy) to generate text (and images and sound and so on), it becomes easy to flood places with it, which leads to finding hotels online being even more difficult, more fake news sites, students submitting LLM-written essays, and so on
- [00:30:40] The inaccuracy one: LLMs “hallucinate”, meaning that they make things up and present them with as much confidence as actual information, so it’s very hard to tell the difference, which is problematic when they're relied upon and can be downright defamatory (as in the case of law professor Jonathan Turley, who appeared on an LLM-generated list of “lecturers who have sexually harassed their students” despite having done no such thing)
- [00:33:35] The magnification of existing bias one: generated text embeds and amplifies the biases present in its training data
- [00:53:30] The creativity one: where does the next generation of music and art and writing come from if this generation is written largely by LLMs?