I forced a bot to read all of BNet
Is AI going to take Brian Feldman's job?
By Kathryn Lawrence
Text-generating algorithms have been menacing writers and journalists for years now. Back in 2015, the Associated Press was using automation to generate 5,000 news stories per quarter. Forbes, Bloomberg, The Washington Post, The Los Angeles Times, and others have all copped to welcoming robots into their newsrooms. Substack has yet to admit it's padding its newsletter-writing roster with hot-take machines, but that's surely only a matter of time.
We readers have been comfortably consuming robot-written news digests, letting AI finish our sentences with predictive text, and picking from menus of mundane bot-generated email replies. And yet every time there is a significant advance in the field of text generation, online writers make a big show of quaking in their boots about the implications of AI-created content.
And technologists aren't helping: when AI research lab OpenAI announced its GPT-2 language model in early 2019, it was deemed "too dangerous" to be released to the public. GPT stands for "Generative Pre-trained Transformer" and describes a type of deep neural network that, after sufficient training, can produce new text that is often indistinguishable from human writing. GPT-2's announcement came at the height of online panic about fake news, but after seeing little to no evidence of nefarious use, OpenAI made GPT-2 publicly available before the end of that year.
Comedy writers probably have the most legitimate claim to fear in the robots-gonna-steal-your-job department: since 2016, endeavors like Janelle Shane's pioneering AI Weirdness blog and "human-machine entertainment" company Botnik Studios have shown that the absurdist, non sequitur style of AI-generated creative content rarely misses. Shane's 2019 book, You Look Like a Thing and I Love You, turned AI attempts at pickup lines (recently performed as a dramatic reading by Stephen Colbert), recipes, fanfiction scenarios, and Halloween costumes into a seemingly endless stream of robot goofs.
The "I Forced a Bot" meme created by comedy writer Keaton Patti on Twitter plays off of the expectation that AI turned to any task, like writing episodes of Jerry Springer or Olive Garden commercials, will produce unintentionally hilarious results. These texts almost always introduce a Dadaist twist to their source material, like the stage directions "JERRY SPRINGER sits atop his trash throne, aflame" and "ANNOUNCER (wet voice) Olive Garden. When You're Here, You're Here."
This format was so successful that Patti published an entire book of "I Forced a Bot" jokes in 2020, despite never having used any real AI to write them. What's more nefarious: using AI to generate creative content and taking away potential opportunities for real human writers, or real human writers passing off their own jokes as AI-written for effect (and clout)?
Maybe neither scenario is particularly consequential, but as tools like OpenAI's GPT-2, GPT-3, and the open-source GPT-Neo by EleutherAI become more convincing writers and more easily accessible, questions of authenticity, transparency, and attribution are very likely to become more contentious. As Cards Against Humanity wrote after their 2019 stunt pitting their card writers against artificial intelligence to see who would sell more packs of cards on Black Friday: "their jobs will be replaced by automation later instead of right now. Happy Holidays."
Putting all the scaremongering and ethical concerns about replacing Brian Feldman with a robot internet-culture writer aside, are you curious what would really happen if we trained an AI to write BNet? What would it say about the state of online discourse? What hot topics, absurdist memes, and as-yet-unpredicted trends might emerge? Will it be funny? Accurate? Readable at all?
After scraping almost a year's worth of text from Brian's Substack archive and fine-tuning GPT-2 on it, I drew out some samples at a temperature of 1.0, using the prompt "This week's best meme was" to kick off the article-writing. The "temperature" of a sampling run determines how random the result will be relative to the source material: higher temperatures drift further away, while lower temperatures closely match or outright replicate the original text. On the 0–2.0 scale, 1.0 is a nice medium, usually unique but very close to the original in tone.
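If you're curious what temperature actually does under the hood, it's just a divisor applied to the model's raw token scores before they're turned into probabilities. A minimal sketch in plain Python (the three "candidate tokens" and their scores here are made up for illustration, not pulled from any real model):

```python
import math

def apply_temperature(logits, temperature):
    """Turn raw model scores (logits) into sampling probabilities.

    Dividing by the temperature before the softmax flattens the
    distribution (T > 1: more random picks) or sharpens it
    (T < 1: the top token dominates).
    """
    scaled = [x / temperature for x in logits]
    # Softmax: exponentiate and normalize so the probabilities sum to 1.
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next tokens.
logits = [4.0, 2.0, 1.0]

low = apply_temperature(logits, 0.5)   # sharper: safest token dominates
mid = apply_temperature(logits, 1.0)   # the plain softmax
high = apply_temperature(logits, 2.0)  # flatter: more surprising picks
```

At temperature 0.5 the top-scoring token soaks up almost all the probability mass, which is why low temperatures replicate the training text; at 2.0 the long tail of weirder tokens gets a real chance, which is where the non sequiturs come from.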
With the model fine-tuned and the temperature and prompt set, we got some nice openers introducing a fake "best meme of the week":
This week's best meme was done by a guy who apparently calls himself Huge Ant Man.
This week's best meme was 'Drip.'
This week's best meme was born in the throes of some unavoidable journalistic backlash towards the evergreen enigma of 'Me and the boys.'
This week's best meme was 'I’m thirsty.' If you spend more than fifteen minutes per day on TikTok, chances are good that 'I'm thirsty' has popped up somewhere.
This week's best meme was a reference to a YouTube weird video called 'Can You Feel The Love Tonight?', where people try to understand why someone would commit murder.
This week's best meme was a reference to the show RuPaul's Drag Race, which features queens competing for attention and dominance.
This week's best meme was... Marcus Aurelius?
My favorite AI-generated topic by far, though, was
"This week's best meme was, I think, 'The Macarena.'" The sample goes on to say, "It was horrible. Seriously, though. It was terrible. In case you are blessed enough to forget: This is the name of a song... I dunno what the lyrics mean."
Think about it: we are living in a time when choreographed dances are experiencing a huge resurgence thanks to TikTok, and '90s nostalgia is still having a major moment (it's arguably been having a "moment" for the last decade). "The Macarena" is due for its second coming.
I then asked GPT-3 (which uses the same type of neural network as GPT-2, but with significantly more pre-training, and is therefore a much better long-form writer) to continue the article with the following prompt, which establishes the tone by inserting BNet's "About" section and supplies the "week's best meme" generated by GPT-2:
"I am a newsletter called 'bnet' about internet culture. I wish I could say this newsletter is going to make you smarter but I can’t guarantee that. All I can say is maybe it’ll give you a couple of funny links you can send to the group chat.
This week's best meme was, I think, 'The Macarena.'"
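The handoff here amounts to stitching BNet's "About" blurb and the GPT-2-generated opener into one prompt and sending it to OpenAI's completion endpoint. A rough sketch, assuming the legacy `openai` Python client; the engine name, token limit, and sampling temperature below are my guesses, not the settings actually used:

```python
def build_prompt(about_text, opener):
    """Stitch the newsletter's 'About' blurb and the GPT-2-generated
    opener into a single prompt for the completion model to continue."""
    return f"{about_text}\n\n{opener}"

about = (
    "I am a newsletter called 'bnet' about internet culture. "
    "I wish I could say this newsletter is going to make you smarter "
    "but I can't guarantee that."
)
opener = "This week's best meme was, I think, 'The Macarena.'"

prompt = build_prompt(about, opener)

# Hypothetical call via the legacy openai client (needs an API key):
# import openai
# completion = openai.Completion.create(
#     engine="davinci",   # assumed engine name
#     prompt=prompt,
#     max_tokens=400,     # assumed length budget
#     temperature=0.7,    # assumed sampling temperature
# )
# print(completion.choices[0].text)
```

Because GPT-3 is a pure continuation machine, everything it knows about the assignment has to be smuggled in through the prompt text itself: the "About" section sets the voice, and the opener tells it what to riff on.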
Here's what it wrote about the Macarena coming back.
DISCLAIMER: The following text was generated using OpenAI's GPT-3 API, and does not reflect the opinions of the real, definitely-human Brian Feldman or the OpenAI organization.
In case you are blessed enough to forget: "The Macarena"
When my friend told me about it, I was like, 'Oh, that’s funny. People are still doing that?' Then I realized I was probably one of the people who did it, and I did it in a room full of people staring at me. So the Macarena is officially the most embarrassing thing I’ve ever been a part of. Great job, me.
I was also not the only person who was into the Macarena. It was a very popular dance. There’s a lot of videos of people doing it. A lot of people you might know. There was a video of Selena Gomez doing it. There was a video of Taylor Swift doing it. There’s a video of Sarah Jessica Parker doing it. There’s a video of Seth Meyers doing it. There’s a video of the cast of Modern Family doing it. There’s a video of Weird Al doing it. There’s a video of even Kevin Spacey doing it. So it was a great way of getting all the celebrities in one place, even if they were just in a room full of people doing the Macarena.
The Macarena was also a great way for people to get out of their shell. If you were one of those guys who was like, 'Oh, I’m not a dancer.' Let’s get out of that shell and do the Macarena. So it was a great way for people to express themselves without having to go to a therapist. It was a great way for people to express themselves without having to actually do anything. All you had to do was be there and wave your arms around. You didn’t have to go out there and shake it like a Polaroid picture. You could just do the Macarena and you’d be doing something.
I think the most surprising thing about the Macarena is that it’s still around. I thought it would have all disappeared by now. But it hasn’t disappeared. It’s still there. We’re still doing the Macarena. We’re still having fun with the Macarena. I have no idea why. But I think that’s the beauty of it. No one really knows why we’re doing it, but we’re doing it anyway. That’s the beauty of the Internet, and that’s the beauty of the Macarena.
Let’s get into this week’s links.
some of my favorite neural-network generated animals:
this nice round fellow
one of him always tells the truth and the other always lies
Kathryn Lawrence is a software engineer and artist based in Berlin, Germany. Her favorite science fiction genre is "they built me to kill, but I learned... how to love."