Naija Global News
Friday, March 6
    People who know more about AI art find it less ethical


    March 6, 2026


    When people understand the system and process behind AI art, its moral implications become harder to accept

By Ionela Bara, edited by Daisy Yuhas

    Malte Mueller/Getty Images

    A year ago, at Christie’s auction house in New York City, auctioneers sold an unusual collection of art pieces: surreal portraits, photorealistic images and cartoon-inspired creations, all generated by artificial intelligence. The first-of-its-kind event sparked a backlash. More than 6,000 artists protested that the AI models used to create these works had been trained on copyrighted images without creator consent. While the auction house had argued that the works demonstrated “human agency in the age of AI,” critics saw the event as an example of an industry rushing to commercialize technology built on uncompensated creative labor.

    Other artistic and professional communities have also been worried. A report released last November found that more than half of novelists surveyed in the U.K. thought AI could end their career. And audiences seem to have complicated feelings about the technology, too. As one survey found, many Americans are okay with AI as a tool for creative professionals but not as a replacement for their work.

A viewer’s comfort with AI art, however, may depend on how much they know about how it’s made. I study neuroaesthetics, a field that combines neuroscience and psychology to study our perception of beauty and art. My colleagues and I have found that the more people learn about how AI’s back end works—the datasets, the training process, the prompting—the less morally acceptable they find these creations and the value placed on them.


I became curious about AI because its rapid spread into the art world has started to expose a gap between what the technology is and what people know about it. Past research has shown that people tend to give AI art lower ratings of creativity, value and emotional depth. And in my own work, I had studied how knowledge about art changes the way we view it. This led me to wonder whether knowledge about AI shapes people’s judgments of AI-generated art and might help explain the often observed bias against it.

To investigate, my colleagues and I conducted three experiments, each involving 100 participants. We started by presenting people with AI-generated art images and asking questions about their morality and aesthetic value. For example, participants in two of these experiments had to rate how morally acceptable it was to use AI to produce such art, to earn money or prestige from these works and to label them as conventional art. Participants also rated how much they aesthetically appreciated the images we presented.

    In the first experiment, we showed our participants 20 landscapes and 20 portraits that were generated using DALL-E 3 with prompts based on the Impressionist art of the Spanish painter Joaquín Sorolla. Half of the participants viewed this AI art with no added context. The other half received a short text that gave them more information. It read:

    “This image was generated by an AI algorithm that produces images from textual descriptors. To accomplish that, several steps are required. First, the AI algorithm is trained by learning a large dataset of art images and their corresponding text descriptors, such as the artist’s name. Then, the AI algorithm is able to generate new images based on different textual prompts (e.g., artist’s name, artistic style, whether it depicts a seascape, landscape, or people).”

    The additional information made a difference. When people knew how the AI system operated, they perceived the AI art images as less morally acceptable, especially when the creation of these images involved financial gain and artistic acclaim. But the aesthetic appeal of the images did not change, suggesting that learning how AI works made people reflect on ethics, not aesthetics.
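The logic of this between-groups comparison can be sketched in a few lines. The ratings below are invented purely for illustration; they are not the study’s data, and the group names are assumptions, not the authors’ variable names.

```python
from statistics import mean

# Hypothetical 1-7 moral-acceptability ratings (illustrative only).
no_context = [5, 6, 5, 4, 6, 5]    # group that saw the images with no explanation
with_context = [3, 4, 3, 2, 4, 3]  # group that read how the AI model was trained

# The study's key contrast: did knowing how the system works lower ratings?
diff = mean(no_context) - mean(with_context)
print(round(diff, 2))  # 2.0
```

In the actual study the comparison would of course be run with inferential statistics over all 100 participants per experiment, not raw mean differences on toy numbers.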

    Psychologists have found that people’s judgments about what is good or valuable can change when they learn something has earned awards or praise from experts. The authority bias, for example, makes us more inclined to agree with people who seem to be in charge or in the know. In addition, cues such as success or prestige can lead people to see something as more morally good. In our second study, we told a group of participants that some of the AI art images had been exhibited, sold or praised. But we were surprised to find that sharing a work’s success did not improve the moral acceptability of these images in the eyes of people who had learned about how these works are created.

    In a final experiment, we tested people’s automatic judgments of AI-made versus human-made art. We used a tool from psychology called a go/no-go association task, in which people are asked to very quickly link one kind of prompt, such as an image, with another, such as the words “good” or “bad.” In this experiment, we showed participants images (which were either AI-generated or human-created Impressionist paintings), along with object-category labels on the left (“AI art” or “human art”) and attribute labels on the right (such as “good” or “bad”). Participants needed to click a button if the image and labels were in alignment, and to refrain from responding when they were not. This task needed to be done quickly and over many trials as a way to capture people’s most immediate associations. We worked with people who had not been given any additional education on AI to try to get a sense of what the average person might think.
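The scoring rule of a go/no-go trial can be sketched as follows. This is a simplified illustration under stated assumptions: it checks only whether the image’s origin matches the on-screen category label, whereas the real task also pairs attribute labels (“good”/“bad”) and measures response speed across many trials. All names here are hypothetical, not the researchers’ code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Trial:
    image_source: str    # actual origin of the image: "AI" or "human"
    category_label: str  # category label shown on screen for this trial
    responded: bool      # True if the participant pressed the button ("go")

def trial_correct(t: Trial) -> bool:
    # A press ("go") is correct only when image and label match;
    # withholding a response ("no-go") is correct when they mismatch.
    should_go = t.image_source == t.category_label
    return t.responded == should_go

def accuracy(trials: List[Trial]) -> float:
    # Fraction of trials where the participant's go/no-go choice was correct.
    return sum(trial_correct(t) for t in trials) / len(trials)

trials = [
    Trial("AI", "AI", True),      # hit: pressed on a match
    Trial("human", "AI", True),   # false alarm: pressed on a mismatch
    Trial("human", "AI", False),  # correct rejection: withheld on a mismatch
    Trial("AI", "human", False),  # correct rejection: withheld on a mismatch
]
print(accuracy(trials))  # 0.75
```

Because responses must come quickly, systematic errors on particular pairings (say, slower or less accurate linking of “AI art” with “good”) are what reveal automatic associations.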

    We found no strong automatic tendency to see AI or human art as inherently better or worse. This finding tells us that people don’t yet have a knee-jerk reaction or deeply held opinion about AI as opposed to human art. It also underscores that, as our earlier experiments suggested, moral resistance to AI art is something people learn over time.

    Overall, when people know how AI works, they become more careful in judging its moral fairness. This suggests that educating audiences, artists, curators and policy makers about how technology works could shape the future of the technology in the art world. Artists working with AI tools can help in this effort by sharing information about the models, data or prompts that they used and clarifying where their own human hand guided the process. Although such transparency may lead to critiques, it may also build credibility and equip people with the tools to think critically about technology.

