CHANGING THE CONVERSATION ON ART + AI: THE HARMS ARE REAL

In the beginning of this series I stated my intention to write “not just another piece about AI Art, or about AI and Art, but about the conversation around AI and Art.” So far I’ve spent a lot of time discussing historical precedent and flawed framing, and some readers might have been bracing themselves for an oncoming wave of apologetics. I wanted to point out the non-starters for productive conversation so there’s a stronger foundation for where we go next. Taking things apart is always fun, but the hope here is to learn something along the way so things can be put back together better. A flawed argument doesn’t mean there isn’t a problem, but it can make the problem worse by providing cover for the real issues, distracting from the things that actually need to be examined. If you were skeptical at the start, I hope the following provides some payoff for your patience.

It’s easy for critics to be wooed by “we’re all in this together” approaches, seeing strength in numbers as a helpful tool. If lots of people agree this is bad, then it must be bad and we should stop it. But this is an appeal to popularity, and more dangerously it’s conflation. Conflation isn’t solidarity, it’s cover. Current criticism lumps everything together into a single “NO AI” protest sign, treating non-consensual data training, lost work, unethical Terms of Service, deepfake porn, journalistic fraud, and more as all the same problem. Unfortunately the highest-profile, loudest, and most often repeated argument is aesthetic theft. As I discussed previously in “Is It Theft?” this is also the least legally substantial complaint, which means that most people are most often talking about the weakest argument. This absorbs energy and keeps attention away from more legally sound, more actionable problems. And since all the outrage is aimed elsewhere, the people responsible for the most serious actual harms (fraud, labor exploitation, and the like) simply hide behind the noise.

I don’t want to be pedantic here, but precision is important; that’s essentially my entire thesis for this series, after all. If we want to fix a problem, we have to be able to accurately identify the problem, its cause, and a solution. Falling back on “it’s bad, it should just go away” is not realistic, and honestly makes it seem like people don’t really want a solution because they enjoy having something to complain about.

Training data and consent is where this gets philosophically serious, and it’s a good place to start. The extraction issue is this: an artist’s distinctive style was not inherently valuable or recognizable before that artist spent years building that recognition and value. It only has recognition and value because of the dedication and effort of that artist; in fact, that value is inseparable from the work that built it in the first place. AI models replicating that style aren’t so much copying the style of the artist as benefiting from the artist’s investment of work. In many cases that work has been co-opted, transferred to a system the artist doesn’t own and doesn’t benefit from, without permission. Years of dedication and effort spent developing a style and building its recognition (and reputation) are now capitalized on by others. Using a knocked-off version of an artist’s work does not come with the same cachet as working with that artist, but an artist who chose not to lend their distinctive style to a brand or cause they personally objected to may find their style replicated and used for that project regardless, and suffer the same assumed association. This is effectively a loss of creative control, one which forces artists to respond after the fact to clarify misrepresentations, which still results in drawing attention to the thing they didn’t want to be associated with. This is not hypothetical: artists Matt Furie and Sarah Andersen have had their work co-opted and used for purposes they are ideologically opposed to, and both had to spend their own time and money to counter the narratives. While primarily telling a pre-AI story, the 2020 documentary Feels Good Man chronicles Furie’s battle, and it’s easy to see how in a post-AI world the ease and speed at which that can happen is significantly multiplied.

Screencrops: (L) Salle, 2025 / (R) Reemtsen, 2021

And while false association is a problem, uncredited usage can lead to situations like the one unfolding between David Salle and Kelly Reemtsen right now. Salle, a veteran of the 1980s New York Pictures Generation that legitimized appropriation as a conceptual practice, is no stranger to controversy. For his new exhibition My Frankenstein he used a custom-trained AI to determine subjects, and one piece, Hatchet (2025), very blatantly appropriated an image from Reemtsen’s Impact (2021). Salle claims this was unintentional. The gallery pulled the piece regardless. That we can already see this issue touching everyone from cartoonists to traditional artists shows that, in effect, no one is safe. If simply copying someone’s style is bad form and borderline unethical, using that copy to disenfranchise the original creator is unquestionably bad. Doing that on an industrial scale, to millions of people, is a very real problem.

While this is the most substantial of the harms, it’s also the most solvable. Opt-in/opt-out mechanisms, licensing frameworks, training data disclosure, legal clarity on what constitutes consensual usage: a number of tools are already being discussed, negotiated, and built to directly address this. The UMG/Udio settlement is a perfect example of a negotiated agreement that benefits artists, and Warner following it shows it wasn’t a one-off solution. Of course, having solutions does not mean solutions are universally applied, so even with these options available there’s significant work to be done convincing (or forcing) the public (both individual and commercial) to adopt best practices, along with an arsenal of social and legal levers that can be applied. The EU AI Act mandating opt-out mechanisms for training data as of August 2025 is an example of how new laws can support and enforce these policies.
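To make the opt-out mechanics concrete, here’s a minimal sketch of what honoring a reservation looks like from the crawler’s side, using Python’s standard library robots.txt parser. The bot name and URLs are hypothetical, and real opt-out signals (the EU’s TDM reservation protocol, “noai” meta tags, C2PA assertions) carry more nuance than this, but the basic mechanic is the same: check for a reservation before scraping, and skip the site if one exists.

```python
# A sketch of a training-data crawler that honors robots.txt-style
# opt-outs before scraping. "ExampleArtBot" and the portfolio URLs
# are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://artist-portfolio.example/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

page = "https://artist-portfolio.example/gallery/"
if rp.can_fetch("ExampleArtBot", page):
    print("No reservation found; a compliant crawler could proceed")
else:
    print("The artist has opted out; a compliant crawler skips the site")
```

The catch, of course, is the word “compliant”: nothing in the mechanism itself forces a scraper to run this check, which is exactly why the legal levers matter.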

The “AI will replace all artists” warning is both wrong and unhelpful, and it steals attention from a group much more at risk of being replaced. While copying famous artists is a real thing, it’s also fairly easy to identify. Having a Warhol painting on your book cover and having something in the style of Warhol are two very different things. But focusing on the most recognizable artists misses the demographic that is far more directly impacted: the mid-tier commercial and working creative professional. What is true is that illustrators, concept artists, stock photographers, commercial designers, motion graphics professionals, an entire ecosystem of working artists whose hourly-wage income depends on a steady volume of client work, are all at risk (a 2024 IATSE estimate states that 29% of all jobs in animation will be gone within 3 years). These are all positions that countless industries have always needed a human to do, and now they don’t. This is especially true of concept and production work that might only be used in house to help shape and refine an idea before being handed to high-end professionals to finalize for the public. These positions are disappearing, replaced by monthly AI subscriptions, a fact that not only impacts the people losing those jobs but could also lead to an existential crisis in these fields: how does one build a career in an industry if the entry-level positions at the bottom of the ladder no longer exist?

This also isn’t entirely new. The introduction of stock photography as a product essentially put an end to the on-staff assignment photographers who were previously the sole source of that imagery, and professional stock photographers have since had their own industry turned upside down by democratization, with sites like Shutterstock allowing anyone with a camera to sell their photos to those institutional clients (digital cameras and the ever-improving optics on mobile phones only multiplied that). Desktop publishing software severely impacted the market for typesetting, a very common service provided by print shops until the widespread adoption of Adobe PageMaker and Microsoft Word in the 1990s killed the demand for those skills. As I discussed in an earlier essay, there is a long historical precedent for progress impacting industries, but the scale and speed here make this feel very different. Whereas previously there were often years of gradual adoption allowing people to adapt, now people are waking up to find they’ve been let go from the job they thought was secure and reliable just last week. Worse still, they are unable to distribute resumes and portfolios to land a comparable spot elsewhere, because those positions no longer exist anywhere in the industry. And while historically some pen-and-paper illustrators did transition to computers and continue working in those fields, the speed of change is now so drastic that adaptation can’t keep up, and without entry-level jobs in these fields the “safety net” for those interested in learning new tools has all but disappeared.

This is real, and we need to recognize that in order to plan for and consider solutions, or perhaps interventions. Companies need to offer stronger protections for existing staff to buy time for and facilitate those transitions; unions need to prioritize negotiations around a practical and realistic future that includes AI-generated work; clients need new disclosure requirements; reselling platforms need to better distinguish between AI-generated and human-made work so that clients genuinely committed to working with humans can easily make those distinctions. Markets need accurate information in order to function. None of this stops AI, nor is stopping it a realistic goal, and focusing on that as the only acceptable outcome is a losing battle. What these measures do is help shape the effects and help direct the outcomes.

Speaking of tools, the next real harm worth talking about is platform concentration. While not often brought up, it’s no less serious than the others I’m including. Who controls the tools is as important as what the tools do, and as creative professionals begin using AI tools to remain competitive in new markets, it gets concerning when a small number of corporations control those tools. Dependency creates power imbalances: people lose the ability to make choices around pricing, terms of use, content policies, and the like, and instead have to accept whatever is handed to them. This ends up dictating who can use the tools, how they can use them, and for what purposes, which is incredibly problematic when talking about creative professions. Corporations with investors deciding what artists can do is never a good idea. If an entire industry suddenly has to conform to a software company’s policies, well, we’ve seen exactly how that plays out. Tumblr was one of the most vibrant online communities when Yahoo! acquired it for $1.1 billion in 2013; a few years later Verizon bought Yahoo! and got Tumblr as part of the deal. As part of an App Store dispute with Apple in 2018, Verizon changed Tumblr’s allowable content policy, which led to a 30% traffic drop in just a few months. Some creators were forced out, others left in solidarity. These were not people doing anything wrong; this was a policy change made by corporate management several layers detached from the community. Ironically, the argument for the change was to calm investor fears, but ultimately it led to a fire sale of Tumblr the following year for less than $3 million. And while the new owners tried to roll things back, the damage had largely been done. In this case people moved their content to other sites, but only because other sites existed. If there’s no alternative, things just disappear.

The “race to the bottom” is also a concern, and again we can see exactly how this plays out: the Getty/Shutterstock merger’s impact on the stock photography industry is a perfect example. Commercial photographers were punished for individuality, and even the financial reward for conformity began to shrink so that an ever-smaller pool of companies could show their investors increased profits. Beyond the creative chilling effect, this begins to average everything down to the lowest common denominator. AI has the potential to do all of this across countless industries, impacting creatives in just about every field you can imagine.

The 2024 Adobe Terms of Service change is another example to look at. Adobe rolled out a new ToS, and continued use of their products was considered agreement to the new terms. The problem, however, was that the new terms said anything you make with their products would be used to help train their in-house AI models, with no ability to opt out for confidential or sensitive material. People went ballistic and Adobe “clarified” (read: backpedaled) their terms, but had there been no pushback from users or threat to abandon the products, the terms likely would have remained. 2025 saw OpenAI relax its content policies around creating images of celebrities at the beginning of the year, then tighten restrictions again later on. Both policy changes happened silently, leaving users to try and figure out why one day’s rule-abiding workflow ran into walls the next. People will need to build dependencies on these tools, and silent policy changes make that difficult.

So how do we address this? Supporting open-source alternatives is important, as is early antitrust legislation to prevent all the indie tools from being absorbed into the majors. Users can push for tools that work with each other rather than opt in to walled gardens where they will eventually find themselves locked in. Recognizing where the power sits doesn’t require adopting any particular position on the output of these tools, and even people deeply opposed to the adoption of AI should be able to see that the ecosystem has a better chance of going in a positive direction if we reject centralization early on.

(This image is fake, circa 2023)

Shifting perspective, provenance confusion is a real and growing concern. The inability to tell the difference between a real image used to document reality and something AI-generated is genuinely dangerous. Photographs are used as proof in many circumstances, from current-event photojournalism and legal proceedings to insurance claims and historical records. Trusting that the photo being presented is real is crucial, and as that gets harder to do the risks grow. While I’ve talked about subjective aesthetic opinions, presenting an AI-generated image as real is objectively lying, and can have real consequences for real people. A notable example was the mid-2023 fake image of an explosion at the Pentagon. The seriousness of the report and the speed at which it could be distributed on Twitter had immediate impacts on financial markets. The problem at the time was that the image was posted by a “verified” account, and then reposted by many other verified accounts. I’ve written a lot in the past about the problem with account “verification,” and in this case people assumed that if a verified account posted it then it must be real, as many people didn’t know that verification was by then a paid upgrade available to anyone. Attempts to debunk the false image didn’t spread as quickly, so the fake image went viral while the fact-checking struggled to get noticed. And while we can draw ethical distinctions between (accidental) confusion and (intentional) deception, the impact is the same. Finding an AI-generated image while searching for a reference online and confusing it for a real document is not as malicious as generating a fake image to mislead people, but in the end both build false narratives, and intentionality only matters so much when actual damage is done.

This multiplies the existing problem of real images being mislabeled or misrepresented, for example war or natural-disaster imagery from years ago being presented as new images of current events. The US/Israel war with Iran is a solid recent example, with both AI imagery being passed along as real and recycled imagery from earlier (and in some cases entirely separate) conflicts being presented as new. It’s also a compounding problem: the more such images are created and posted online, the more likely it becomes that people will find them, misuse them, or pass them along, which further erodes public trust. Assuming (or asserting) that any image is fake is now commonplace, especially if the image provides evidence for something the viewer disagrees with. Labeling inconvenient images “fake news” is a regular occurrence, and people were photoshopping images long before AI streamlined the process. There are technical solutions such as metadata and watermarking, and policy work like new AP guidelines and C2PA standards, but ultimately these operate on a foundation of good faith: they assume people want to know what is real, which isn’t always the case. Bad actors intent on misleading people do not follow best practices, and well-written policy doesn’t fix motivated skepticism. The problem is not that people don’t know what is real; it’s that the epistemic environment degrades in both directions. People become selectively skeptical in ways that confirm what they already believe. The fake Pentagon image spread because people either wanted it to be true, or found it plausible enough not to question. The same dynamic runs in reverse: a real image of something genuinely terrible gets dismissed as AI-generated by people who don’t want it to be true. Trust collapses and credulity flourishes. Uncertainty gets weaponized and the public becomes increasingly manipulable.
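As a concrete illustration of how thin the metadata layer is, here’s a minimal Python sketch (using the Pillow imaging library; the filenames are hypothetical) showing that embedded tags are trivial to read and just as trivial to lose. This isn’t the C2PA mechanism itself, which uses cryptographically signed manifests, but it shows why anything that travels inside the file depends on every handler along the way choosing to preserve it.

```python
# Embedded image metadata is easy to read and easy to lose.
# Requires Pillow; "photo.jpg" and "copy.jpg" are hypothetical files.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("photo.jpg") as img:
    exif = img.getexif()
    for tag_id, value in exif.items():
        # Translate numeric EXIF tag ids into readable names
        print(TAGS.get(tag_id, tag_id), value)

    # Re-saving the pixels writes a brand-new file: Pillow drops the
    # EXIF block unless you explicitly pass exif=img.getexif(), so any
    # screenshot, crop, or casual re-upload strips provenance data.
    img.save("copy.jpg")
```

Signed approaches like C2PA are much harder to forge than a bare EXIF tag, but a stripped manifest looks the same as one that never existed, which is why these tools inform the willing rather than constrain the malicious.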

The spectrum has accidental confusion at one end and deliberate targeting at the other, with the harms scaling accordingly. Not knowing if a photo of a hurricane-damaged house is real is one problem; using that image for insurance fraud is significantly worse. Generating an AI image to make a vacation sunset selfie more picturesque might be a little deceptive, but it doesn’t harm anyone. Intentionally creating malicious deepfakes of someone is a different story altogether, and while photoshopped images have been a problem for decades, it’s much easier to use AI models to create non-consensual intimate imagery, and much harder to quickly identify it. The fake nudes of Taylor Swift that began circulating online in early 2024 brought this problem out of the shadows, with Sensity AI (makers of deepfake detection software) having already reported that by 2019, 96% of deepfakes circulating online were nonconsensual pornography, almost entirely targeting women. Celebrities have always had to wrestle with this kind of unwanted attention, but until recently it wasn’t something unsuspecting individuals needed to worry about. Over the 2025/2026 holidays xAI rolled out an update to its Grok model that enabled editing existing photos, without strong content restrictions. Almost immediately users started asking Grok to remove clothing, add props, and otherwise sexualize previously posted, benign imagery of women and children. Rather than immediately address this as a problem, X’s owner Elon Musk made public jokes about it. In less than two weeks Grok generated over 4 million images, close to half of which were sexualized depictions of women.

This is already criminal in most jurisdictions, and laws are being passed every day in places that are just now catching up. But it still ends up being a reactive game of cat and mouse, with laws written after the harm has been done. And while deepfake porn may feel like an entirely different topic than AI and Art, it’s where understanding the conflation problem becomes most relevant. When “an artist training an AI on their own work” and “sexual deepfakes of teenagers” are treated as the same problem, people who need immediate legal protection get lost in a discourse that’s fundamentally about aesthetics. A victim of impersonation does not care about proof of authorship or image provenance, and their case shouldn’t be bogged down in unrelated details. These are very different issues with very different concerns, and that distinction is crucial to ensuring they are handled appropriately.

At the beginning of this essay I argued that precision is important, and that lumping everything together into a single criticism of AI weakens all the individual criticisms, which are much stronger when given their own space in the conversation. Each of the “harms” I’ve brought up here sits at a different level of the big picture, and none of them benefits from being flattened into a single argument. Training data consent is a licensing question, labor displacement is a market structure question, platform concentration is an antitrust question, provenance confusion is a disclosure question, fraud is a law enforcement question. These are inherently different problems asking different questions. While they may all originate in the rapid growth and use of generative AI tools, they have very different solutions.

Legislators tend to respond to the loudest arguments, and when “aesthetic theft” or “AI steals from artists” is the loudest complaint, everything else gets lumped into it, sucking attention away from the more serious problems. This isn’t hypothetical hand-wringing: early legislative efforts have been primarily focused on style and training data, while fraud and deepfakes have been pushed aside on the assumption that existing law is probably fine to address them. It isn’t, and the new problems of the new world demand new considerations if we hope to find new solutions. That starts with being purposeful in these conversations and keeping the individual concerns distinct.

AI fatalists might argue that this is all inevitable, that things will change whether you like it or not, so there’s no reason to question it. That’s not clear-eyed realism; it’s surrender masquerading as pragmatism. The more realistic position is not to assume it can be stopped, but to recognize that people have agency and can help shape the course of progress. The environmental movement didn’t stop industrialization. The labor movement didn’t stop automation. The FDA didn’t stop the pharmaceutical industry. But clean air standards exist, the 40-hour work week exists, and drug safety requirements exist. These protections were built because concerned people put their efforts into movements that named specific harms and produced specific responses. Importantly, these solutions were not final; they are still challenged, and people are still engaged, still fighting for them. The hope is that the same can happen with AI, that people can identify a specific problem and fight for a specific solution. General objections to industrial capitalism are easy to dismiss; a specific problem with a specific solution is much easier to adopt.

(Ladies Tailors on Strike in New York, 1910
Bain News Service / Library of Congress)

That dynamic should be considered here. “NO AI” stickers and demands to stop all of it are easily ignored and can’t be taken seriously by anyone crafting policy. But talking about training practices, market effects, legal accountability, and infrastructure ownership gives us real details that structure can be built around, something only possible by being precise. Vague objections invite vague responses, while identifying specific harms paves the way for specific policy to address them. The future is not written, and outcomes are not predetermined. But there will be a future and there will be outcomes, and they can still be influenced by people who care enough to focus on the specific thing they want to change.

I’ve spent a lot of time in this essay talking about issues, but I want to make sure the larger point isn’t lost in the weeds: the people. There are real people, and there are real harms impacting them. These people are not served by any conversation that treats all of their concerns as a single thing; that obscures the problems and makes them harder to solve. The way to actually help is to take the time to name what’s actually happening so that solutions can actually be implemented. They deserve better than having their pain used as ammo in arguments that were never really about them.

[header image: “Depression” Mark Benedict Barry, 1934 / Library of Congress]

This essay is part four of a series; part one can be found here.


March 31, 2026 Sean Bonner
