A web of nodes modeled after brain neurons is trained into an incomprehensibly complex algorithm that knows one thing and one thing only: the patterns of fade and wear accumulated by old pairs of jeans.
So begins a recent Levi’s patent application detailing the use of neural networks in clothes manufacturing. In the scenario, the San Francisco apparel brand pits a second neural network (call it bot B) against the first (bot A), having bot B send over blobs of pixels and hone them through trial-and-error feedback until bot A can no longer distinguish photos of genuinely worn jeans from bot B’s imitations. Thus a blueprint is born for laser-printing a perfectly unique pair of rugged jeans.
The pairing is an example of a generative adversarial network, or GAN, a class of creative artificial intelligence. GANs are best known in the mainstream press for their role in producing deepfake videos and, occasionally, for their artistic output. But now brands like Levi’s are starting to examine less exciting but potentially more lucrative uses, like finishing jeans.
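For readers curious about the mechanics, the bot A / bot B feedback loop can be sketched in a few dozen lines of code. What follows is a deliberately tiny, hypothetical illustration, not Levi’s actual system: the "jeans photos" are reduced to single wear-intensity numbers, the generator and discriminator are one-parameter-deep stand-ins, and all names (`gen`, `disc`, the 0.8 target) are invented for the example. The structure, though, is the adversarial one the patent describes: the discriminator learns to tell real from fake, while the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "photos of worn jeans": scalar wear-intensity
# values clustered around 0.8 (a made-up target for illustration).
def real_samples(n):
    return rng.normal(0.8, 0.05, size=n)

# Generator ("bot B"): maps random noise z to a sample via learnable gw, gb.
# Discriminator ("bot A"): logistic score estimating P(sample is real).
gw, gb = 0.1, 0.0   # generator parameters
dw, db = 0.0, 0.0   # discriminator parameters
lr = 0.05           # learning rate for both players

def gen(z):
    return gw * z + gb

def disc(x):
    return 1.0 / (1.0 + np.exp(-(dw * x + db)))

for step in range(2000):
    z = rng.normal(size=32)
    fake, real = gen(z), real_samples(32)

    # Discriminator update: push disc(real) toward 1, disc(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        err = disc(x) - label            # gradient of cross-entropy w.r.t. logit
        dw -= lr * np.mean(err * x)
        db -= lr * np.mean(err)

    # Generator update: push disc(gen(z)) toward 1, i.e. fool bot A.
    err = disc(gen(z)) - 1.0
    gw -= lr * np.mean(err * dw * z)     # chain rule through disc's logit
    gb -= lr * np.mean(err * dw)
```

In a production system both players would be deep networks trained on images, and the finished generator, not the discriminator, is the valuable artifact: it becomes the blueprint machine that stamps out endless one-of-a-kind fade patterns.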
Finishing jeans the traditional way is resource intensive and environmentally harmful, requiring large volumes of bleach and as much as 20 to 60 liters of water per pair, according to the patent filing. Levi’s transitioned to cleaner laser-printing tech last year, around the same time it appeared to be considering, or may still be considering (the company declined to comment), this AI scheme for mass-producing distinctive looks.
The Levi’s filing is one of 251 patent applications processed in the United States this year that mention GAN, up from 42 last year and nine in 2017, the first year it made an appearance, according to a search of the U.S. Patent and Trademark Office’s database.
Granted, not all patent applications are intended for eventual use (some are filed for competitive purposes or to hedge bets), but even accounting for that, the spike can’t be written off, according to patent lawyer Chris Hutton, a partner at Silicon Valley powerhouse Cooley LLP. “There’s clearly more going on here,” he said.
The collection of ideas illustrates the oft-overlooked versatility of GANs. They run the gamut from eBay’s plan to generate images of products in user-submitted photos from different angles for visual search, to Ford and GM’s descriptions of generating streetscape data for self-driving training, to Verizon Media (still known as Oath at the time of filing) laying out how GANs might be used to create individualized online ads in real time.
These brands either declined to comment on pending patents or didn’t respond to interview requests.
Since the start of what’s widely seen as an ongoing boom of AI advances in 2012, machine learning has become adept at recognizing and classifying images. But the more recent push to have AI make things rather than sort them is still an exciting new frontier.
“Given how much people spend on creative services, I think there’s a lot of promise that [GANs] can dramatically improve not only the economics but even maybe some of the quality,” said Gartner analyst Andrew Frank. “When you see what it’s capable of, you can’t believe that’s really synthetic stuff.”
Frank was surprised to find that the marketers responding to a recent survey he conducted ranked “generative content creation” near the top of a list of emerging technologies they expect to have the most positive impact on their company’s marketing activities in the next five years.
This optimism might surprise you too if you’re familiar with some of the mangled output even the most sophisticated AI systems can sometimes produce. As often as GANs wow with uncannily photorealistic landscapes, they also render creepy, garbled human faces or awkward blotches of color.