
Trippy AI Art Jumps From Internet to TV Screens, Music Videos

Art-making artificial intelligence technology that started with Google's DeepDream will be featured in a new music video by Years & Years

Babies with dogs’ faces. Photographs of lakes where the sandy beach blends into trees that become cars. A rock-strewn valley that becomes a set of terraced, domed buildings.

No, these aren’t the descriptions of wild hallucinations. Instead, they’re images made with technology developed by the contemporary artificial-intelligence community. And soon this technology will be coming to a music video, a banner advert, and even an art gallery near you.

The first commercial music video to feature this technology is a remix of the song “Desire” by the band Years & Years. The band teamed up with Samim Winiger and Roelof Pieters, two European programmers who run a kind of AI agency called Artificial Experience, to bring to the video AI techniques that have been available for only a matter of months. Specifically, techniques with New Age names like “guided dreaming” help computers enhance and add to filmed scenes, said Winiger. The video will premiere “in the coming weeks,” record label Interscope said in an e-mail.

Examples from Years & Years’ video showing the raw footage (left) and the AI-altered images (right).
Source: Interscope

Ground zero for this type of art is Google, which spurred development of some of the techniques that have come to dominate the space. This summer a group of Google researchers ran an unusual experiment: They took the world-class picture-identifying systems that power tools like Google Photos and asked them to look at images a little differently and report back what they saw. Imagine playing a free-association word game and then having to draw what you’ve come up with.

The results surprised Google’s team. It turns out that much in the same way a child can look at a fluffy cloud and see a duck carrying a top hat, so can a computer. The resulting psychedelic images are like little drawings from the AI’s imagination.

Within days, the project, called DeepDream, was reimplemented by third parties as free code, spreading the warped aesthetic across the Web. Many people used the same free data set to teach their proto-brains how to interpret the world. The data set, known as ImageNet in the AI community, sorts images into 1,000 labeled categories, including 120 dog breeds. This explains the DeepDream aesthetic’s predilection for canine visages.
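In code terms, DeepDream’s trick is gradient ascent on the picture itself: rather than adjusting a network’s weights, you nudge the pixels until whatever a chosen layer faintly responds to gets amplified. Below is a minimal sketch of that idea, assuming PyTorch and torchvision are installed; the layer name follows torchvision’s GoogLeNet, and the step size and iteration count are illustrative rather than Google’s settings. (The “guided dreaming” Winiger mentions steers the same ascent toward the features of a second, guide image.)

```python
# A minimal DeepDream-style sketch, assuming PyTorch and torchvision.
# Layer choice, step size, and iteration count are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(pretrained=True).eval()  # an ImageNet-trained classifier
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one intermediate layer with a forward hook.
captured = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: captured.update(target=output)
)

img = T.Compose([T.Resize(512), T.ToTensor()])(Image.open("input.jpg")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):
    model(img)
    # Gradient ascent on the pixels: make the layer respond more strongly
    # to whatever it already (faintly) sees in the image.
    captured["target"].norm().backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)  # keep pixel values displayable

T.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")
```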

This is what happens when you run Bloomberg Business through DeepDream: Dogs and eyes appear in all your images.
Source: Deep Dream Generator

“I was really really surprised by DeepDream. I think everyone was,” self-described “wandering machine learning researcher” Christopher Olah, who’s done work at Google, wrote on a Reddit thread discussing the technology. (In later experiments, Olah discovered some other freaky characteristics of the network: “I was also really surprised when my own experiments with visualizing ‘what does a neural network think X looks like?’ started producing unexpected additions to the object,” he wrote. “Barbells have muscular arms lifting them. Balance beams have legs sprouting off the top. Lemons have knives cutting through them.”)

Looking at a DeepDream-inflected version of a familiar painting, like this one from Michelangelo, gives you a good sense of what can happen.
Source: Kyle McDonald

Within weeks, canny, commercially minded artists had cottoned to the technique, and today a search on Google yields numerous ways you can hand over some cash and get an AI-inflected picture back. There’s even a bustling cottage industry of artists selling their wares on Etsy.

“We have an opportunity here to shift the paradigm of the way we interact with machines in film,” said Brian Harrison, the director of Years & Years’ “Desire” video. He became fascinated by AI-born images when he saw an experimental clip of Fear and Loathing in Las Vegas that had been reinterpreted by the technology. The clip, coincidentally, was made by Winiger’s collaborator, Pieters. “I found it amazing that computers were thinking in a psychedelic way at first glance,” Harrison said.

A wildlife photo gets a little trippy when put through DeepDream.
Photographer: Zachi Evenor, Source: Google Research Blog

It might be the first music video created this way, but it won’t be the last. “I really want people to understand that times are changing,” Harrison said. “I think it’s an amazing time period.”

That’s because creatives around the world are abuzz with a new set of tools that lets them tap into the befuddling ways in which computers perceive the world, and then apply this perspective to their work. Winiger and the Artificial Experience team are already working on another AI-based project, with U.K. television network Channel 4, he said.

“Designing experiences with intelligent systems is a new paradigm—giving agency to creative machines and enabling collaborations between humans and machines as equals,” Winiger wrote in an e-mail. “The music video has been a truly interdisciplinary collaboration.”

A few examples of how DeepDream might interpret basic structures and turn them into something more complex.
Source: Google Research Blog

“I think we’re going to see an increasing interest from artists in machine learning in general,” said Michael Tyka, a Google software engineer and artist who helped create the technology underlying the system. “As a field, machine learning has really made incredible progress over the last few years and continues to forge ahead,” he said. “Like most other technologies in the past, artists experiment with new tools and find ways to use them in their artistic expression.”

The new visual style has already caught the attention of several creatives who want to expand on it, including Kyle McDonald, a Brooklyn (N.Y.)-based artist who spends about half his time using new technologies to create branded art for companies, and the other half giving workshops and selling his own work. He said there’s a “renaissance happening with neural networks,” as they can help artists create art more rapidly and with more variability than before. After working with various AI techniques for a while, “you get a feeling for how powerful these algorithms are,” McDonald said. “I find the hardest part isn’t even getting the stuff set up—it’s about getting your head wrapped around it in the right way.”

A cityscape explodes with life after a trip through DeepDream.
Source: Kyle McDonald

Interscope and Channel 4 aren’t the only media entities interested in applying AI to videos. Since producing artwork using the new techniques, McDonald has had interest from a few different music labels, he said, as well as some people within Google who want to try to collaborate. “Music videos have historically thrived on new aesthetics since they’ve been invented,” he said. “It’s the perfect place to apply some of these tools.”

The Google DeepDream work was followed by a paper from European researchers that outlined a technique called Style Transfer. This approach lets you take one image and apply its aesthetic style to another. Again, within weeks, the Internet became suffused with 1980s pornographic movies rendered in the style of Picasso’s cubist period, as well as photographs of forests drawn with the expressive slashes of color and tone made famous by Manet.
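Under the hood, the technique is another optimization over a single image: keep one photo’s content features while matching a second image’s style, where “style” is measured as the correlations (Gram matrices) between a network’s feature channels. Here is a compact sketch of that idea, assuming PyTorch and torchvision; the layer indices follow torchvision’s VGG-19, and the loss weights and step count are illustrative choices, not the paper’s exact settings:

```python
# A compact style-transfer sketch in the spirit of the paper the article
# describes. All hyperparameters here are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

load = T.Compose([T.Resize((384, 384)), T.ToTensor()])
content = load(Image.open("photo.jpg")).unsqueeze(0)
style = load(Image.open("painting.jpg")).unsqueeze(0)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv layers from early to late: texture
CONTENT_LAYER = 21                  # a deeper conv layer: scene structure

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
        if i == 28:  # nothing deeper is needed
            break
    return feats

def gram(f):
    # Correlations between feature channels are the paper's proxy for style.
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_style = {i: gram(f) for i, f in features(style).items() if i in STYLE_LAYERS}
target_content = features(content)[CONTENT_LAYER]

img = content.clone().requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.02)

for _ in range(300):
    opt.zero_grad()
    feats = features(img)
    loss = (feats[CONTENT_LAYER] - target_content).pow(2).mean()
    for i in STYLE_LAYERS:
        loss = loss + 1e4 * (gram(feats[i]) - target_style[i]).pow(2).mean()
    loss.backward()
    opt.step()

T.ToPILImage()(img.detach().clamp(0, 1).squeeze(0)).save("stylized.jpg")
```

Because every output is its own optimization run over hundreds of forward and backward passes, a single image can take minutes even on a GPU, which is the computational weight Champandard alludes to below.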

A real landscape looks like a painting after going through Style Transfer.
Source: Alex Champandard

Alex Champandard, an Austrian artificial-intelligence consultant, read the paper, found some code on the online code bazaar GitHub, and threw together a Twitter bot that lets people tweet a picture to it and get an AI-rendered image back. The Deep Forger, named for the “deep learning” techniques used by the AI and the implicit forgery of the artwork, has already produced hundreds of images. After his early experiments drew a positive reaction on social networks like Twitter, Champandard is now fiddling with applying the same technique to retro games, such as Id Software’s Quake. Unfortunately this type of AI is heavy-duty in terms of computation, so “the technology will need to be redesigned for current hardware to run at real-time rates,” Champandard said.

“I’ve been keeping a close eye on machine learning for creative applications in general, and progress has been very rapid,” he said. “It’s exciting because nobody really understands how the algorithm really works. It’s now about the artistic exploration of the space that the algorithm covers, trying to understand what works and what doesn’t. The Twitter bot helps a lot in this process, identifying patterns for things that are successful and those that aren’t.”

Winiger used a similar technique to add new colors and textures to work by a Japanese manga creator named Kedamami. “Taking the outputs of Creative A.I systems and re-processing them by human hand is fascinating,” he wrote in a post discussing the work. “If we can master these new tools, they will enable radical new aesthetics, processes and narratives.” Winiger and his Artificial Experience team soon plan to release open-source software, called DeepUI, that will make it easier to apply cutting-edge AI techniques to videos and images, he said.
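Applying these still-image techniques to film mostly comes down to processing footage frame by frame. Below is a hypothetical wiring of that pipeline, assuming OpenCV is installed and a user-supplied stylize() function along the lines of the sketches above; this is illustrative plumbing, not Winiger’s unreleased DeepUI:

```python
# Hypothetical frame-by-frame video pipeline, assuming OpenCV (cv2) and a
# user-supplied stylize() function. Not Winiger's DeepUI.
import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()  # frame is a BGR numpy array
    if not ok:
        break
    # Each frame is treated as an independent image; keeping the effect
    # stable from frame to frame is its own hard problem.
    out.write(stylize(frame))

cap.release()
out.release()
```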

Using different images as a model, Deep Forger can take a single photograph and alter it to match various styles.
Source: Alex Champandard

As this new form of AI-driven art makes its way from commercial and academic research labs onto the Internet, it’s sure to influence the aesthetics we see around us. “It reminds me of early ’80s music videos,” said David Klein, an independent artificial-intelligence engineer. “People got excited that we could do mirror reflections in real time and posterization. It added a tremendous amount to the music scene.”

These new effects were quickly adopted by the design community and helped create the visual language of the early ’90s: great splotches of color, heavy use of contrast, a frequent mingling of old and new styles. That language fed directly into our present day, first through early net-art communities based around GeoCities and clip art, and then into Tumblr, whose mishmash aesthetic has come to dominate marketing geared toward millennials.

Eventually, AI-made art will become prevalent and, like any other aesthetic tool, will be changed by the numerous artists who use it. But here’s the twist: Because this stuff uses an AI that responds unpredictably to the things it learns about, the art created will be fundamentally different from any that’s made purely by humans.

“We’re really in the embryo stages of AI,” said Harrison, director of the Years & Years video. “But when we start bringing in art and imagination and look at it from a historical perspective of when the dawn of consciousness was for humans, it seems like imagination is the key focal point.” Dream on, DeepDream.