This new tool could give artists an edge over AI
Nightshade poisons the images used to train large AI models, causing those models to malfunction.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The artist-led backlash against AI is well underway. While plenty of people are still enjoying letting their imaginations run wild with popular text-to-image models like DALL-E 2, Midjourney, and Stable Diffusion, artists are increasingly fed up with the new status quo.
Some have united in protest against the tech sector’s common practice of indiscriminately scraping their visual work off the internet to train AI models. Artists have staged protests on popular art platforms such as DeviantArt and ArtStation, or left the platforms entirely. Some have even filed copyright lawsuits.
Right now, there is a total power asymmetry between rich and influential technology companies and artists, says Ben Zhao, a computer science professor at the University of Chicago. “The training companies can do whatever the heck they want,” Zhao says.
But a new tool developed by Zhao’s lab might change that power dynamic. It’s called Nightshade, and it works by making subtle changes to the pixels of an image, invisible to the human eye, that trick machine-learning models into thinking the image depicts something different from what it actually does. When artists apply it to their work and those images are then hoovered up as training data, these “poisoned pixels” make their way into the AI model’s data set and cause the model to malfunction. Images of dogs become cats, hats become toasters, cars become cows. The results are really impressive, and there is currently no known defense. Read more from my story here.
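To give a rough sense of how this kind of poisoning can work, here is an illustrative sketch, not Nightshade’s actual algorithm: the feature extractor (an off-the-shelf ResNet-18), the cosine-similarity loss, and the parameters are stand-ins I have chosen for the example. The idea is to optimize a perturbation small enough to be invisible to a person but large enough to pull the image’s machine-readable features toward a different concept.

```python
# Illustrative sketch only: a targeted, imperceptible perturbation that pulls
# an image's feature embedding toward a different concept (e.g. making a dog
# photo "read" as a cat to a model). This is NOT Nightshade's released
# algorithm; ResNet-18 features and cosine similarity are stand-in choices.
import torch
import torch.nn.functional as F
from torchvision import models

# Off-the-shelf feature extractor; use penultimate-layer features as the
# "embedding". (ImageNet normalization is omitted here for brevity.)
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()
extractor.eval()

def poison(image, target_image, epsilon=4 / 255, steps=200, lr=1e-2):
    """Return a copy of `image` (a 3xHxW tensor in [0, 1]) whose features are
    pulled toward `target_image`'s, with per-pixel changes capped at epsilon."""
    with torch.no_grad():
        target_feat = extractor(target_image.unsqueeze(0))

    delta = torch.zeros_like(image, requires_grad=True)  # the hidden change
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (image + delta).clamp(0, 1)
        feat = extractor(poisoned.unsqueeze(0))
        # Minimize distance to the target concept's embedding.
        loss = 1 - F.cosine_similarity(feat, target_feat).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation too small for a human to notice.
        delta.data.clamp_(-epsilon, epsilon)

    return (image + delta).detach().clamp(0, 1)
```

In practice, an artist-facing tool would wrap a step like this around each image before it is posted, so that anything scraped later carries the shifted features into a training set rather than the concept the image actually shows.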
Some companies, such as OpenAI and Stability AI, have offered to let artists opt out of training sets, or have said they will respect requests not to have their work scraped. But right now there is no mechanism to force companies to stay true to their word. Zhao says Nightshade could be that mechanism: generative AI models are extremely expensive to build and train, and scraping data that could break these crown jewels would be a very risky bet for tech companies.
Autumn Beverly, an artist who intends to use Nightshade, says she found her work had been scraped into the popular LAION-5B data set, an experience she found “violating.”
“I never would have agreed to that, and [AI companies] just took it without any consent or notification or anything,” she says.
Before tools like Nightshade, Beverly did not feel comfortable sharing her work online. She and other artists are calling for tech companies to move from opt-out mechanisms to asking for consent up front, and to start compensating artists for their contributions. Those demands would mean fundamental changes to how the AI sector operates, yet she remains hopeful.
“I’m hoping that it makes it where things have to be through consent—otherwise, they’re going to just have a broken system,” Beverly says. “That is the entire goal for me.”
But artists are the canary in the coal mine. Their fight belongs to anyone who has ever posted anything they care about online. Our personal data, social media posts, song lyrics, news articles, fiction, even our faces—anything that is freely available online could end up in an AI model forever without our knowing about it.
Tools like Nightshade could be a first step in tipping the power balance back to us.
Deeper Learning
How Meta and AI companies recruited striking actors to train AI
Earlier this year, a company called Realeyes ran an “emotion study.” It recruited actors and then captured audio and video data of their voices, faces, and movements, which it fed into an AI database. That database is being used to help train virtual avatars for Meta. The project coincided with Hollywood’s historic strikes. With the industry at a standstill, the larger-than-usual number of out-of-work actors may have been a boon for Meta and Realeyes: here was a new pool of “trainers”—and data points—perfectly suited to teaching their AI to appear more human.
Who owns your face: Many actors across the industry worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. Read more from Eileen Guo here.
Bits and Bytes
How China plans to judge generative AI safety
The Chinese government has a new draft document that proposes detailed rules for how to determine whether a generative AI model is problematic. Our China tech writer Zeyi Yang unpacks it for us. (MIT Technology Review)
AI chatbots can guess your personal information from what you type
New research has found that large language models are excellent at guessing people’s private information from chats. This could be used to supercharge profiling for advertisements, for example. (Wired)
OpenAI claims its new tool can detect images generated by DALL-E with 99% accuracy
OpenAI executives say the company is developing the tool after leading AI companies made a voluntary pledge to the White House to develop watermarks and other detection mechanisms for AI-generated content. Google announced its watermarking tool in August. (Bloomberg)
AI models fail miserably in transparency
When Stanford University researchers tested how transparent large language models are, they found that even the top-scoring model, Meta’s LLaMA 2, scored only 54 out of 100. Growing opacity is a worrying trend in AI: these models are going to have huge societal influence, and we need more visibility into them to hold them accountable. (Stanford)
A college student built an AI system to read 2,000-year-old Roman scrolls
How fun! A 21-year-old computer science major developed an AI program to decipher ancient Roman scrolls that were damaged by the eruption of Mount Vesuvius in AD 79. The program was able to detect about a dozen letters, which experts translated into the word “porphyras,” ancient Greek for purple. (The Washington Post)