When we at Adweek launched our AI-powered Super Bowl Bot, designed to create ad pitches for the Big Game, we truly didn’t know what to expect. Would it be funny? Insightful? Disturbing? Offensive?
The answer to all of the above proved to be a definite “yes,” but the process of training an AI on such a specific, ad-centric task was also fascinating in ways we hadn’t anticipated. The bot—which you can find at @SuperBowlBot on Twitter or @adw.ai on Instagram—has generated more than 200 ad ideas so far and continues to evolve as we expand its data set.
During the brief calm before the storm of (actual) Super Bowl ads, the bot’s creators decided to touch base on how it’s going, what we’ve all learned and where this project hints AI is headed, at least in the realm of creativity.
Here’s the conversation between Adweek emerging tech reporter Patrick Kulp and creative and innovation editor David Griner:
David Griner: Patrick, thanks again for co-parenting Adweek’s Super Bowl Bot with me. Much like with my real children, I like to take 50% of the credit despite my partner doing 95% of the effort in actually creating them.
Patrick Kulp: I don’t think you’re giving yourself enough credit. You really sell the bot, for one thing.
Griner: I love this bot. It’s my Baby Yoda.
Kulp: Baby Yoda would actually be an excellent concept to feed the bot.
(Editor’s note: Patrick followed through and actually generated the Baby Yoda pitch.)
Griner: So we’ve written quite a bit about how the bot generally works, but I’m curious about a few things. Namely, how much work was it to build and get up and running? You handled all that on the back end after we came up with the concept.
Kulp: It was a gradual process building up the dataset. I used web scrapers to download a ton of descriptions of ads in bulk from various sources and just kept adding to it over time.
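(Editor’s note: For readers curious about the scraping step, here is a minimal sketch of the kind of parser that pulls ad descriptions out of a page’s HTML. The page structure and the “ad-description” class name are illustrative assumptions, not the actual sources Patrick scraped; a real scraper would first fetch each page over the network.)

```python
from html.parser import HTMLParser

class AdDescriptionParser(HTMLParser):
    """Collects text from <p class="ad-description"> tags.

    A hypothetical page layout for illustration only -- the real
    sources each needed their own parsing rules.
    """
    def __init__(self):
        super().__init__()
        self.in_description = False
        self.descriptions = []

    def handle_starttag(self, tag, attrs):
        # Flag that we are inside a description paragraph
        if tag == "p" and ("class", "ad-description") in attrs:
            self.in_description = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_description = False

    def handle_data(self, data):
        if self.in_description and data.strip():
            self.descriptions.append(data.strip())

# Stand-in for one downloaded page
sample_html = """
<div>
  <p class="ad-description">A talking dog sells light beer.</p>
  <p class="byline">Staff</p>
  <p class="ad-description">An astronaut tears up over a soda.</p>
</div>
"""

parser = AdDescriptionParser()
parser.feed(sample_html)
print(parser.descriptions)
```

Descriptions harvested this way would be appended to a single plain-text file, the training dataset that keeps growing over time.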
As far as the training of the bot goes, I found it surprisingly easy. I used a guide from a BuzzFeed data scientist named Max Woolf and ran it in a free cloud program called Google Colab, in part because I don’t have the hardware needed to handle the processing demands of AI—typically graphics processing units (GPUs). That backend guide is written in Python code, but you don’t have to understand much Python at all (I don’t) to make it work. The training process itself takes about an hour or two depending on how extensively you train it.
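(Editor’s note: The workflow in Woolf’s guide uses his gpt-2-simple library. The sketch below shows its basic shape; the file name, step count and generation settings are illustrative guesses, not the values Adweek used, and running it requires `pip install gpt-2-simple` plus a GPU runtime such as Google Colab.)

```python
def train_superbowl_bot(dataset_path="superbowl_ads.txt", steps=1000):
    """Fine-tune GPT-2 on a text file of ad descriptions.

    A minimal sketch of the gpt-2-simple workflow from Max Woolf's
    guide; parameter values are illustrative. Heavy imports live
    inside the function so the sketch can be read without the
    library installed.
    """
    import gpt_2_simple as gpt2

    # Download the smallest (124M-parameter) GPT-2 model
    gpt2.download_gpt2(model_name="124M")

    # Fine-tune on the scraped ad-description corpus
    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, dataset_path, model_name="124M", steps=steps)

    # Generate a pitch from a hand-fed prompt, as described above
    gpt2.generate(sess, prefix="Baby Yoda", length=100, temperature=0.7)
```

Generation from a prompt works the same way after training: `gpt2.generate` takes a `prefix` string and continues it in the style of the dataset.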
Griner: We chose not to automate this bot, meaning you hand-feed it prompts, or it just spits out a random Super Bowl ad idea based on the data you’ve fed into it. But I assume we could have made this an automated bot that shoots things directly onto Twitter without us moderating or curating it?
Kulp: Yeah, there is also an API section in the guide that explains how to build a web app interface for the bot. And I’m sure the Twitter automation would have been doable from there.
That would’ve taken a bit more figuring out, but the main reason we chose not to do it was that this bot is wildly unpredictable. It takes quite a bit of curating to sift out something presentable. To be clear, that’s selecting which generated output to post—I never actually adjust any of its text.
Griner: You get to see all the raw content it creates from a prompt. You and I originally worried that, without a moral compass, the bot might create some truly unfortunate or problematic ad concepts that we wouldn’t feel good about putting out in the world. I’m sure the word “Tay” still haunts AI folks from Microsoft’s failed 2016 experiment with a Twitter AI bot. But now that you’ve seen what it actually produces, were we right to worry? How often does it get legitimately … unpublishable?