
The Power and Ethical Dilemma of AI Image Generation Models

Will AI-based generative models provide an academic-to-commercial pipeline for big tech companies to get around copyrights and accountability?
Jan 6th, 2023 3:00am

The recent emergence of deep learning text-to-image platforms like Midjourney and Stable Diffusion is allowing people to conjure up incredible works of digital art within seconds, just by typing in a short descriptive text prompt, which can be as simple as “a wizard casting a spell on top of a mountain.”

Many of these new tools are relatively easy for the general public to use — all without needing to spend years learning the basics of drawing and painting.

Many of us are well aware of the potential benefits of machine learning, such as helping businesses manage data more smoothly, assisting healthcare professionals in making more accurate diagnoses, or sifting misinformation out of the daily news. Not surprisingly, however, there are also valid concerns about potential AI pitfalls, such as its misuse to create eerily convincing deepfakes, the societal impacts of algorithmic bias, or the mass surveillance anxieties surrounding AI technologies like facial recognition.

And now, these new generative tools are brewing up a whole wave of concerns around AI ethics.

Human Artists ‘Competing against Code’

These systems are known as diffusion models, a type of generative model that produces output resembling the data it was trained on.

First appearing in 2015, diffusion models work by progressively destroying training data with added Gaussian noise, and then learning to recover that data by reversing this “noising,” or diffusion, process, which makes them more powerful than generative adversarial networks (GANs) for image generation.
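For readers curious about the mechanics, here is a minimal, illustrative Python sketch of the forward “noising” half of that process. The `forward_diffusion` function name and the variance schedule are assumptions chosen purely for illustration, not the exact setup of any particular model; a real system such as Stable Diffusion trains a neural network to run the reverse, “de-noising” direction, guided by a text prompt.

```python
import numpy as np

# Illustrative forward diffusion: progressively corrupt an image with
# Gaussian noise. The variance schedule (betas) below is an assumption
# for demonstration, not the schedule any specific model uses.
def forward_diffusion(image, num_steps=50):
    betas = np.linspace(1e-4, 0.02, num_steps)   # per-step noise variances
    alphas_cumprod = np.cumprod(1.0 - betas)     # how much original signal survives at each step

    noised = []
    for t in range(num_steps):
        noise = np.random.randn(*image.shape)
        # Closed-form sample of the noised image at step t:
        # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
        x_t = (np.sqrt(alphas_cumprod[t]) * image
               + np.sqrt(1.0 - alphas_cumprod[t]) * noise)
        noised.append(x_t)
    return noised  # by the final step, x_t is essentially pure noise

# A trained diffusion model learns the reverse: starting from pure noise,
# it repeatedly predicts and removes noise until an image emerges.
example_image = np.random.rand(64, 64, 3)  # stand-in for a training image
steps = forward_diffusion(example_image)
```

It is that learned reverse pass, conditioned on text, that lets a short prompt steer pure noise toward a finished image.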

It is through this “noising” and “de-noising” diffusion process that one can even use these AI image generators to create images in the style of a particular artist, just by typing in their name. That’s because these models are trained on millions of images that have been scraped off the internet — which one study recently showed can contain harmful or even illegal content.

Image of “wizard casting spell on top of mountain,” generated by Stable Diffusion (via OpenArt.ai).

One particularly popular artist whose name and images are frequently used to help train and produce these imitative AI artworks is Polish concept artist Greg Rutkowski, who is well-known for his fantastical paintings, often done for the gaming industry.

But the problem is that Rutkowski himself never approved the use of his images in this way. Worse still, some of these AI-generated imitations even bear his signature.

“The way [AI art generation is] developing and the direction where it’s heading is terrifying,” said Rutkowski in an interview with Crypto Mile. “Right now it just takes five to ten minutes to create something that humans would only be able to create in two weeks. We have to wait probably a year until it gets so good that it will probably compete with living artists.”

And therein lies the ethical conundrum behind such AI image generators. In order to produce something in a particular artist’s style, works by that artist have to be scraped off the internet, and then fed into these AI training datasets.

Yet, none of the companies behind these image generators have explicitly asked permission from the artists themselves — nor have the artists been compensated.

“People who choose to use this technology need to understand that the vast majority of these algorithms are trained on uncontrolled datasets,” said Adobe’s creative director Vladimir Petkovic in a recent LinkedIn post.

“Copyrighted artworks, artist’s personal names and styles are simply ingested without any respect to the legitimate authors. This creates an environment where artists need to compete against the code, which is utilizing their hard labor. AI has its place as a powerful tool, which can enhance many creative workflows. However, until we have a proper system in place, which will [rightfully] attribute and compensate everyone whose work is being used to train these algorithms, I personally believe it is not ethical to use them to produce ‘art’ concepts.”

Unforeseen Impacts

Beyond copyright infringement and the potential threat to human artists’ livelihoods, AI will likely have wider, unforeseen ripple effects on the industry. For instance, widespread use of AI imagery might discourage would-be artists from pursuing a creative career, since they may believe it would be futile to compete in a market that might one day be dominated by machine-generated art.

Moreover, AI could also disrupt the educational pipeline within the art industry, where it is common for fledgling art creators to invest a sizeable chunk of cash in courses led by established artists and art schools, in order to gain marketable skills that help them move up within the industry.

Indeed, the threat of AI automating away jobs from professional artists and illustrators isn’t just a hazy prospect; some artists who typically take on smaller commissions are already noticing that work is drying up, especially from clients that have tighter budgets.

“Already this year I’ve personally lost almost $3,000 worth of freelance work,” as Springfield, Missouri-based artist Daniel Harris told CBC News. “[Clients] just flat-out told me that they will just get this AI to do it — it’s not as good, but it’s way cheaper.”

Conversely, there are already reports of clients being scammed: paying for what they believe is original work, but receiving something that is actually AI-generated. In a similar vein, one US-based artist recently won first prize in a state fair’s digital painting contest with a work that was actually generated by Midjourney and then printed on canvas.

So far, while a growing number of AI artists are supportive of the technology, others are speaking out against it. Recently, artists posted images reading “NO TO AI-GENERATED IMAGES” on one online portfolio site to protest having their original works displayed alongside AI-generated images. Other artists have come together to form collectives like Spawning, which is behind Have I Been Trained?, a site that lets users find out whether their work has been scraped for AI models, and to opt out. There are also suggestions that AI models should exclude images created by living artists.

A Form of Data Laundering?

Other experts speculate that these models are also being used as a form of data laundering, in which stolen data is converted so that it can be sold or used by purportedly legitimate databases. Essentially, it is an academic-to-commercial pipeline that lets big tech companies get around copyrights and accountability: they create and fund non-profits that build datasets and train models for “research purposes,” and those models are then shared with for-profit enterprises, which monetize them by selling access to commercial APIs.

It may seem like a stretch — until one makes an eye-opening comparison between how generative AI is being deployed in the art world, versus the music industry.

“Technically these models create something new, so they should be protected by fair use,” as Clientell’s head of AI and data science Devansh Devansh noted in a blog post. “However, I learned that Stability AI [the company behind Stable Diffusion] was creating diffusion-based models for music as well. Unlike with Stable Diffusion, this [Dance Diffusion] model uses no copyrighted data. It can’t be a coincidence that the models avoid copyrighted material from an industry that has much better lawyers.”

While art, literature, journalism and music may be the first testing grounds for a wide range of rapidly developing AI models, there are legitimate concerns that they will also negatively impact other industries like film, photography or fashion, where human actors, directors and models may be replaced by AI-generated imagery in the future.

In the end, it may seem fun to experiment with technologies that ostensibly “democratize” these industries, but it should be done carefully, transparently, and without harming human livelihoods. Beyond these practical issues, we also need to ask ourselves: is “art” made without soul actually art?
