2x05: Bad Voltage Live at SCaLE 15x

The Bad Voltage live stage show, from SCaLE 15x in Pasadena, March 2017!

An epic time was had. Jeremy Garcia, Jono Bacon, and Stuart Langridge, live on stage, in which there was some downright unfair quizzing of Jono, a one-sh*t trumpet, the brightest suit that’s ever been seen, a machine to count eggs, Perl abuse, a hollow burrito, pies, more pies, hammer pants, the Phantom Zone, no air horns, the products of the Chevy company, and a reappearance of Bryan! As well as:

  • [00:07:00] The news! Featuring the Amazon S3 outage, Snapchat being worth $33 billion, System76 bringing manufacturing in-house, and how swimming pools have dustbins' worth of urine in them
  • [00:11:30] Cloudflare had a pretty serious security flaw identified by the Project Zero team at Google, where sensitive data from all sorts of Cloudflare sites was leaked -- passwords, auth tokens, and the like. What's the deal with this sort of issue? It's surprising how much of the internet turned out to be behind Cloudflare, and this sort of centralisation is a problem... but equally, there's a reason we go to experts in the field and outsource services to them! So, what's the best approach here?
  • [00:20:00] Quizmaster extraordinaire Jeremy plays Much Taboo About Nothing, in which team opensource Jono and Rikki team up to battle heroic ginger team Stuart and Hannah in a game of wit, erudition, vocabulary, guesswork, and trying to not be too nasty about Ruby people. Partially successfully, depending on your attitude on rule-bending and wide appreciation of cultures...
  • [00:33:20] Why are all our amazing technological advances being used to make stupid pointless gadgets that nobody should buy? Paper towel dispensers that magically detect your hand movement and then still dispense a bit of brown paper to dry your hands on; amazing iPad-based payment systems which still require you to sign your name with your finger; endless pointless stupid Internet of Things devices. Stuart rants, and Jono and Jeremy respond with various degrees of defence or agreement as to where we're going and what to do about incredible technology put to wasteful ends
  • [00:43:00] Ig-NOT -- it's like the Ignite talk series, but... not. The presenters each do a talk, on an unknown subject chosen by the audience, using unseen images suggested by the community and the other presenters. And... well, see how they get on. Featuring some properly unkind choices, a brief and magnanimous appearance by Bryan, and a very weird old guy with an axe...

Many thanks to the extra participants in the show: Chris Smith for warming up, the ever-wonderful and ever-supportive Ilan Rabinovitch, and show guests and guessers Rikki Endsley and Hannah Anderson!

We are hugely grateful to the companies who helped make this show happen: Linode for getting us there, SCaLE for inviting us again, Dell and Endless for providing prizes, and Ticketmaster for putting on the show and venue and tacos and many many beers and the band afterwards!

And… if you want to enter the competition announced in the outro and win that sweet Dell XPS laptop… you want badvoltage.org/fruit

Watch the show here:

Download the show now!


Good show guys
I went out last year but couldn’t go this time


Was that a forkbomb?


Good show. Pleased to see at least one of my submitted images in the slide decks (no I’m not going to admit which - I believe Jeremy used the word “reprobates” which is probably about right…)

On the whole Cloudflare/S3 thing: I think people building on top of these services need to do some defensive infrastructure design. That means, for one thing, maintaining and managing the business logic around scaling, traffic routing, etc. yourself, rather than just offloading it all to The Cloud, safe (ahem) in the knowledge that Amazon/Microsoft/Cloudflare/whoever will handle all of that for you.

Because cloud providers never go down, or deprecate services, or increase their prices or anything, right? Right.

Basically I think you want to be in a position where you can switch providers (or at the very least AWS zones or whatever) more or less at the drop of a hat. Cloud services should be a fungible commodity. If your provider has a problem, get a new one. If they raise their prices, get a new one. If it sucks in any way, just move. Plenty of good tooling has come out of the DevOps movement over the last five years that makes this a whole lot easier.
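One way to keep providers fungible, as a minimal sketch: hide each provider behind a small interface your application codes against, so switching clouds means writing one new adapter rather than rewriting everything. All names here (`ObjectStore`, `InMemoryStore`, `publish_asset`) are illustrative, not any real provider's API.

```python
# Hypothetical sketch: keep provider-specific details behind a thin
# interface so storage backends stay swappable.

from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Minimal contract every storage backend must satisfy."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in backend; a real one would wrap S3, GCS, or similar."""

    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def publish_asset(store: ObjectStore, key: str, data: bytes) -> None:
    # Application code only ever sees the interface, so moving to a
    # new provider is one new adapter class, not a migration project.
    store.put(key, data)
```

If your provider raises its prices, the blast radius of "just move" is then one adapter class plus your deployment config.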

To me the real value of cloud services is in IaaS, which is exactly the right sort of interchangeable commoditisation. I have something of an objection to PaaS: while convenient, it often ties you to one particular provider's platform in perpetuity. And that sounds an awful lot like a conversation we've probably all had before.

Holy moley, I can see why you were worried about being pulled over at customs looking like that… 🙂


Yeah, what was the thing I said about not standing out like the mutt’s nuts?

Loved the episode and the topics discussed! The background noise drove me a little crazy though. Needed a little more quiet to enjoy the discussion and fully absorb the wisdom of the presenters.


Jesus, THAT SUIT!



And… if you have never partaken in the Dragon Fruit, I give you:


I should have carved an LQ logo in it before uploading.

–jeremy


I would have thought the obvious way to stop clouds taking your service down is to use more than one of them: run your service on two or more clouds. This is what OpenStack’s CI did when we built it. Monty Taylor and I always used to say “clouds fail and you need to prepare for that”. That was over five years ago and things still haven’t changed. This is the age of throwaway computing, managing herds instead of pets.
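The "clouds fail, prepare for that" idea can be sketched as a simple failover loop: try each provider in order and fall back when one is down. The function names and provider labels below are illustrative, not any real SDK.

```python
# Hypothetical sketch: ordered failover across redundant providers.

def fetch_with_failover(providers, fetch):
    """Try each provider in turn; return the first successful result.

    providers: ordered list of provider identifiers/endpoints.
    fetch:     callable taking one provider; raises if it is down.
    """
    errors = []
    for provider in providers:
        try:
            return fetch(provider)
        except Exception as exc:  # a real system would catch narrower errors
            errors.append((provider, exc))
    # Only reached if every provider failed.
    raise RuntimeError(f"all providers failed: {errors}")
```

A real deployment would add health checks, timeouts, and retry budgets, but the shape is the same: no single provider is load-bearing.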
And yes, every largish service should use something similar to Chaos Monkey. I built something along those lines into one of HP Cloud’s services back when that was a thing (it was so good at self-repair that it took out three racks of servers one night due to an SDN issue, but that is another story).
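The core of a Chaos Monkey-style tool is tiny: periodically pick a random healthy instance and kill it, so self-repair gets exercised before a real outage does. A minimal sketch, with all names (`chaos_round`, the `kill` callable) purely illustrative:

```python
# Hypothetical sketch of one round of chaos testing: terminate a
# randomly chosen instance and let the orchestration layer recover it.

import random


def chaos_round(instances, kill, rng=random):
    """Pick one instance at random and terminate it.

    instances: list of instance identifiers currently in service.
    kill:      callable that performs the termination.
    rng:       random source, injectable for testing.
    Returns the victim's identifier, or None if there was nothing to kill.
    """
    if not instances:
        return None
    victim = rng.choice(instances)
    kill(victim)
    return victim
```

Run on a schedule against production (during business hours, so people are around to notice), this is the whole trick; the hard part is everything downstream that has to survive it.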