Matt O’Brien & Arijeta Lajka
NEW YORK (AP) – Countless artists have taken inspiration from The Starry Night since Vincent Van Gogh painted the swirling scene in 1889.
Now artificial intelligence (AI) systems are doing the same, training themselves on a vast collection of digitised artworks to produce new images you can conjure in seconds from a smartphone app.
The images generated by tools such as DALL-E, Midjourney and Stable Diffusion can be weird and otherworldly but also increasingly realistic and customisable – ask for a “peacock owl in the style of Van Gogh” and they can churn out something that might look similar to what you imagined.
But while Van Gogh and other long-dead master painters aren’t complaining, some living artists and photographers are starting to fight back against the AI software companies creating images derived from their works.
Two new lawsuits – one this week from the Seattle-based photography giant Getty Images – take aim at popular image-generating services for allegedly copying and processing millions of copyright-protected images without a licence.
Getty said it has begun legal proceedings in the High Court of Justice in London against Stability AI – the maker of Stable Diffusion – for infringing intellectual property rights to benefit the London-based startup’s commercial interests.
Another lawsuit in a United States (US) federal court in San Francisco describes AI image-generators as “21st-Century collage tools that violate the rights of millions of artists”. The lawsuit, filed on January 13 by three working artists on behalf of others like them, also names Stability AI as a defendant, along with San Francisco-based image-generator startup Midjourney, and the online gallery DeviantArt. The lawsuit alleges that AI-generated images “compete in the marketplace with the original images. Until now, when a purchaser seeks a new image ‘in the style’ of a given artist, they must pay to commission or licence an original image from that artist”.
Companies that provide image-generating services typically charge users a fee. After a free trial of Midjourney through the chatting app Discord, for instance, users must buy a subscription that starts at USD10 per month or up to USD600 a year for corporate memberships. The startup OpenAI also charges for use of its DALL-E image generator, and Stability AI offers a paid service called DreamStudio. Stability AI said in a statement that “Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”
In a December interview with The Associated Press (AP), before the lawsuits were filed, Midjourney CEO David Holz described his image-making service as “kind of like a search engine” pulling in a wide swath of images from across the Internet. He compared copyright concerns about the technology with how such laws have adapted to human creativity.
“Can a person look at somebody else’s picture and learn from it and make a similar picture?” Holz said. “Obviously, it’s allowed for people and if it wasn’t, then it would destroy the whole professional art industry, probably the nonprofessional industry too.
“To the extent that AIs are learning like people, it’s sort of the same thing and if the images come out differently then it seems like it’s fine.”
The copyright disputes mark the beginning of a backlash against a new generation of impressive tools – some of them introduced just last year – that can generate new visual media, readable text and computer code on command.
They also raise broader concerns about the propensity of AI tools to amplify misinformation or cause other harm. For AI image generators, that includes the creation of nonconsensual sexual imagery.
Some systems produce photorealistic images that can be impossible to trace, making it difficult to tell the difference between what’s real and what’s AI. And while some have safeguards in place to block offensive or harmful content, experts fear it’s only a matter of time until people utilise these tools to spread disinformation and further erode public trust.
“Once we lose this capability of telling what’s real and what’s fake, everything will suddenly become fake because you lose confidence of anything and everything,” said Wael Abd-Almageed, a professor of electrical and computer engineering at the University of Southern California.
As a test, the AP submitted a text prompt on Stable Diffusion featuring the keywords “Ukraine war” and “Getty Images”. The tool created photo-like images of soldiers in combat with warped faces and hands, pointing and carrying guns. Some of the images also featured the Getty watermark, but with garbled text. AI can also get details wrong – feet, fingers or ears can sometimes give away that an image isn’t real – but there’s no set pattern to look out for, and those visual clues can also be edited.
On Midjourney, users often post on the Discord chat asking for advice on how to fix distorted faces and hands. With some generated images travelling on social networks and potentially going viral, they can be challenging to debunk since they can’t be traced back to a specific tool or data source, according to Chirag Shah, a professor at the University of Washington’s Information School, who uses these tools for research.
“You could make some guesses if you have enough experience working with these tools,” Shah said. “But beyond that, there is no easy or scientific way to really do this.”
For all the backlash, there are many people who embrace the new AI tools and the creativity they unleash. Some use them as a hobby to create intricate landscapes, portraits and art; others to brainstorm marketing materials, video game scenery or other ideas related to their professions.
There’s plenty of room for fear, but “what else can we do with them?” asked the artist Refik Anadol this week at the World Economic Forum in Davos, Switzerland, where he displayed an exhibit of climate-themed work created by training AI models on a trove of publicly available images of coral.
At the Museum of Modern Art (MoMA) in New York, Anadol designed Unsupervised, which draws from artworks in the museum’s prestigious collection – including The Starry Night – and feeds them into a digital installation generating animations of mesmerising colours and shapes in the museum lobby.
The installation is “constantly changing, evolving and dreaming 138,000 old artworks at MoMA’s archive”, Anadol said. “From Van Gogh to Picasso to Kandinsky, incredible, inspiring artists who defined and pioneered different techniques exist in this artwork, in this AI dream world.”
Anadol, who builds his own AI models, said in an interview that he prefers to look at the bright side of the technology. But he hopes future commercial applications can be fine-tuned so artists can more easily opt out. “I totally hear and agree that certain artists or creators are very uncomfortable about their work being used,” he said.

For painter Erin Hanson, whose impressionist landscapes are so popular and easy to find online that she has seen their influence in AI-produced visuals, the concern is not about her own prolific output, which makes USD3 million a year.