March 6, 2026
People who know more about AI art find it less ethical
When people understand the system and process behind AI art, its moral implications become harder to accept
A year ago, at Christie’s auction house in New York City, auctioneers sold an unusual collection of art pieces: surreal portraits, photorealistic images and cartoon-inspired creations, all generated by artificial intelligence. The first-of-its-kind event sparked a backlash. More than 6,000 artists protested that the AI models used to create these works had been trained on copyrighted images without creator consent. While the auction house had argued that the works demonstrated “human agency in the age of AI,” critics saw the event as an example of an industry rushing to commercialize technology built on uncompensated creative labor.
Other artistic and professional communities have also been worried. A report released last November found that more than half of the novelists surveyed in the U.K. thought AI could end their careers. And audiences seem to have complicated feelings about the technology, too. As one survey found, many Americans are okay with AI as a tool for creative professionals but not as a replacement for their work.
A viewer’s comfort with AI art, however, may depend on how much they know about how it’s made. I study neuroaesthetics, a field that draws on neuroscience and psychology to understand how we perceive beauty and art. My colleagues and I have found that the more people learn about how AI’s back end works (the datasets, the training process, the prompting), the less comfortable they are with the moral implications of these creations and with the value placed on AI-generated pieces.
I became curious about AI because its rapid spread through the art world has started to expose a gap between what the technology is and what people know about it. Past research has shown that people tend to give AI art lower ratings for creativity, value and emotional depth. And in my own work, I had studied how knowledge about art changes the way we view it. This led me to wonder whether knowledge about AI shapes people’s judgments of AI-generated art and might help explain the often observed bias against it.
To investigate this, my colleagues and I conducted three experiments, each involving 100 participants. We started by presenting people with AI-generated art images and asking questions about the works’ morality and aesthetic value. For example, participants in two of these experiments had to rate how morally acceptable it was to use AI to produce such art, to earn money or prestige from these works and to label them as conventional art. People also had to rate how much they aesthetically appreciated the images we presented.
In the first experiment, we showed our participants 20 landscapes and 20 portraits that were generated using DALL-E 3 with prompts based on the Impressionist art of the Spanish painter Joaquín Sorolla. Half of the participants viewed this AI art with no added context. The other half received a short text that gave them more information. It read:
“This image was generated by an AI algorithm that produces images from textual descriptors. To accomplish that, several steps are required. First, the AI algorithm is trained by learning a large dataset of art images and their corresponding text descriptors, such as the artist’s name. Then, the AI algorithm is able to generate new images based on different textual prompts (e.g., artist’s name, artistic style, whether it depicts a seascape, landscape, or people).”
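For readers who want a concrete picture of that pipeline, here is a minimal Python sketch of how stimuli along these lines could be produced with OpenAI’s image API. The prompt wording, settings and output handling are illustrative assumptions on my part, not the actual prompts or code from our study.

```python
# A minimal sketch, not the study's actual code, of generating stimuli
# with OpenAI's image API. Prompts and settings are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

prompts = [
    "An Impressionist seascape in the style of Joaquín Sorolla, bright sunlight on water",
    "An Impressionist portrait in the style of Joaquín Sorolla, a figure on a sunlit beach",
]

for i, prompt in enumerate(prompts):
    result = client.images.generate(
        model="dall-e-3",   # the text-to-image model named above
        prompt=prompt,      # textual descriptors: artist, style, subject
        size="1024x1024",
        n=1,                # DALL-E 3 returns one image per request
    )
    print(f"stimulus {i}: {result.data[0].url}")  # URL of the generated image
```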
The additional information made a difference. When people knew how the AI system operated, they perceived the AI art images as less morally acceptable, especially when the creation of these images involved financial gain and artistic acclaim. But the aesthetic appeal of the images did not change, suggesting that learning how AI works made people reflect on ethics, not aesthetics.
Psychologists have found that people’s judgments about what is good or valuable can change when they learn something has earned awards or praise from experts. The authority bias, for example, makes us more inclined to agree with people who seem to be in charge or in the know. In addition, cues such as success or prestige can lead people to see something as more morally good. In our second study, we told a group of participants that some of the AI art images had been exhibited, sold or praised. But we were surprised to find that sharing a work’s success did not improve the moral acceptability of these images in the eyes of people who had learned how these works were created.
In a final experiment, we tested people’s automatic judgments of AI-made versus human-made art. We used a tool from psychology called a go/no-go association task, in which people are asked to very quickly link one kind of prompt, such as an image, with another, such as the words “good” or “bad.” In this experiment, we showed participants images (either AI-generated or human-created Impressionist paintings), along with object-category labels on the left (“AI art” or “human art”) and attribute labels on the right (such as “good” or “bad”). Participants had to click a button when the image and the labels matched and to refrain from responding when they did not. The task had to be done quickly and over many trials, a design that captures people’s most immediate associations. We worked with people who had not been given any additional education on AI to try to get a sense of what the average person might think.
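To make the paradigm concrete, here is a minimal Python sketch, under assumptions, of how such go/no-go trials could be structured and scored. The pairings, trial counts, timing and the simulated accuracy are illustrative; they are not our study’s actual parameters or code.

```python
# A minimal sketch of go/no-go association task (GNAT) trial logic.
# Each block designates a target pairing (e.g., "AI art" + "good"). On "go"
# trials the image's category and the attribute label match that pairing;
# on "no-go" trials they do not, and the participant should withhold a
# response. Sensitivity is summarized with d' from hits and false alarms.
import random
from statistics import NormalDist

CATEGORIES = ["AI art", "human art"]
ATTRIBUTES = ["good", "bad"]

def build_block(target_category, target_attribute, n_trials=40):
    """Half the trials show the target pairing (go); the rest show another pairing (no-go)."""
    pairings = [(c, a) for c in CATEGORIES for a in ATTRIBUTES]
    non_targets = [p for p in pairings if p != (target_category, target_attribute)]
    trials = []
    for i in range(n_trials):
        if i % 2 == 0:
            category, attribute = target_category, target_attribute
        else:
            category, attribute = random.choice(non_targets)
        trials.append({"category": category, "attribute": attribute,
                       "go": (category, attribute) == (target_category, target_attribute)})
    random.shuffle(trials)
    return trials

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity, with a small correction to avoid rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Simulate a participant who responds correctly on 80 percent of trials
# in the "AI art" + "good" block.
random.seed(1)
counts = {"hit": 0, "miss": 0, "fa": 0, "cr": 0}
for trial in build_block("AI art", "good"):
    responded = trial["go"] if random.random() < 0.8 else not trial["go"]
    if trial["go"]:
        counts["hit" if responded else "miss"] += 1
    else:
        counts["fa" if responded else "cr"] += 1

print("d' =", round(d_prime(counts["hit"], counts["miss"], counts["fa"], counts["cr"]), 2))
```

Stronger sensitivity in, say, the “human art” + “good” block than in the “AI art” + “good” block would indicate an automatic preference for human-made work; comparable scores across blocks, as we observed, indicate no strong automatic bias either way.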
We found no strong automatic tendency to see AI or human art as inherently better or worse. This finding tells us that people don’t yet have a knee-jerk reaction or deeply held opinion about AI as opposed to human art. It also underscores that, as our earlier experiments suggested, moral resistance to AI art is something people learn over time.
Overall, when people know how AI works, they become more careful in judging its moral fairness. This suggests that educating audiences, artists, curators and policy makers about how technology works could shape the future of the technology in the art world. Artists working with AI tools can help in this effort by sharing information about the models, data or prompts that they used and clarifying where their own human hand guided the process. Although such transparency may lead to critiques, it may also build credibility and equip people with the tools to think critically about technology.
