3x58: Large Langridge Model

Jeremy Garcia, Jono Bacon, and Stuart Langridge present Bad Voltage, in which there is unsceptical intensity, we are all eaten by a T-1000, and we have collected a list of concerns about Large Language Model-based AI, which we’re going to dive into!

  • [00:00:00] Introduction
  • [00:04:30] The "free software" concern: it normalises inhaling information for free and then renting out access to it. Upcoming EU regulation requires LLM creators to declare copyrighted material used in training corpora
  • [00:09:30] The automation one: puts lots of mid-level workers out of a job, with zero provision for what should happen to those people
  • [00:25:25] The costlessness one: if it’s free (or at least easy) to generate text (and images and sound and so on), it becomes easy to flood places with it, which makes finding hotels online even more difficult and leads to more fake news sites, students submitting LLM-written essays, and so on
  • [00:30:40] The inaccuracy one: LLMs “hallucinate”, meaning that they make up falsehoods and present them with as much confidence as actual information, so it’s very hard to tell the difference; that’s problematic when the output is relied upon, and it can be downright defamatory (as in the case of Jonathan Turley, a law professor who appeared on an LLM-generated list of “lecturers who have sexually harassed their students” despite never having done so)
  • [00:33:35] The magnification of existing bias one: generated text embeds existing biases
  • [00:53:30] The creativity one: where does the next generation of music and art and writing come from if this generation is written largely by LLMs?

https://www.badvoltage.org/3x58

The thing about automation is that the problem isn’t really job loss. I mean, yes, a bunch of people doing low-value work (and many of us do low-value work, so that’s not a dig at the workers) will get pushed out of their careers, but we know how to solve that problem (give people money or expand government services to take care of them); whether we actually do so is another question, since we seem afraid that the “wrong people” might not need to scramble for work.

The problem is that it’s a rich-get-richer kind of technology. As frightened as the big companies are that open-source models are blowing them away (see the “we have no moat” Google memo), they can still afford to throw more hardware at problems than any of us can, to get more robust solutions. Jeff Bezos can change Amazon Prime to ship you behaviorally predicted items before you want them, for an extreme example, damaging what’s left of local retail, but your local supermarket can’t use AI predictions to wean you off of Amazon.

The inaccuracy issue is interesting, because it seems the companies treat it like a feature. After all, there should be straightforward ways to solve it (verify output against trusted sources in real time), but nobody has done so, and they try to underplay and humanize it by euphemizing with terms like “hallucination” instead of admitting that it’s only a coincidence when the output proves accurate.
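To make that parenthetical concrete: here is a minimal sketch, in Python, of what “verify output against trusted sources in real time” could look like, using Wikipedia’s public search API as a stand-in trusted corpus. The helper name find_supporting_sources is my invention, and keyword search is a crude proxy for real fact-checking, but it shows the plumbing isn’t exotic.

    import requests

    WIKI_API = "https://en.wikipedia.org/w/api.php"

    def find_supporting_sources(claim, max_hits=3):
        # Search a trusted corpus (here, Wikipedia) for pages that might
        # corroborate a model-generated claim. An empty result means the
        # claim found no support and deserves human scrutiny.
        params = {
            "action": "query",
            "list": "search",
            "srsearch": claim,
            "srlimit": max_hits,
            "format": "json",
        }
        resp = requests.get(WIKI_API, params=params, timeout=10)
        resp.raise_for_status()
        return [hit["title"] for hit in resp.json()["query"]["search"]]

    # Flag generated sentences that find no corroboration at all.
    for sentence in ["Jonathan Turley is a law professor."]:
        sources = find_supporting_sources(sentence)
        print(("OK" if sources else "UNVERIFIED"), "->", sentence, sources)

A production system would obviously need claim extraction, semantic matching, and a better corpus than keyword hits, but even “no source found” is a useful tripwire, which makes the absence of anything like it more telling.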

Mostly, though, I find it interesting that the models seem to have gotten worse, rather than better, and I don’t know whether that’s an isolated experience or just something nobody wants to talk about. I increasingly find myself getting different kinds of non-responses. For example, when I present ChatGPT with work, the odds are now far better that it will describe the problem back to me, try to educate me on how to solve it, or solve an entirely unrelated problem, instead of doing what I asked. And trying to get a straight answer out of it about its own answers, especially without multiple paragraphs of repeated boilerplate apologies and definitions of what a language model is, seems impossible.

Of course, that’s the other problem with companies using AI systems to replace employees: It’ll do the work, sure, but it’ll be far worse work that’ll cost just as much to rehabilitate, so we’ll probably just end up with worse software and storytelling…

LLMs, like ChatGPT, still surprise me by understanding my questions. That’s worth a lot to me.

When I first played with ChatGPT, it provided useful, accurate responses about a poem and a technical issue. Soon thereafter, relying on its suggestions for a backup script turned out to be a painful lesson.

I asked ChatGPT for music recommendations, and it gave descriptions of bands and albums I hadn’t heard of. When I excitedly shared this with a friend, we discovered that none of the listed albums or bands existed. Lesson learned again.

Still, ChatGPT and the like can be SO useful.

The Terminator/doomsday scenario: I don’t see a path for LLMs to become intelligent or dangerous in that way, but LLMs and ChatGPT remind us that big surprises can happen in tech. A real general AI might arrive tomorrow or a hundred years from now, but I don’t see it having ChatGPT as a direct ancestor.

Fun times. I’m excited to see where this tech goes next, and what it might inspire.

Interesting article, based on an interesting study (linked in the article).

Access to the tool increases productivity, as measured by issues resolved per hour, by 14 percent on average, with the greatest impact on novice and low-skilled workers, and minimal impact on experienced and highly skilled workers.