Kin.art Launches Free Tool to Prevent GenAI Models from Training with Artworks

2024-01-25

Artificial intelligence, especially text-to-image AI models like Midjourney and OpenAI's DALL-E 3, can do amazing things. From realism to cubism, image generation models can turn any description, brief or detailed, into artwork that looks as if it just came off an artist's easel.

The problem is that many (if not most) of these models are trained on artwork without the knowledge or permission of the artists. While some vendors have started compensating artists or offering "opt-out" options for model training, many vendors have not.

Entrepreneurs and activists are now releasing tools aimed at allowing artists to modify their work to prevent it from being used for training GenAI models. One such tool, Nightshade, subtly alters the pixels of an image to make the model think that the depicted content is different from reality. Another tool, Kin.art, uses image segmentation (hiding parts of the artwork) and label randomization (swapping image metadata labels of artworks) to disrupt the model training process.
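Kin.art has not published its implementation, but the two ideas the article names can be sketched in a few lines. The sketch below is purely illustrative, assuming a toy in-memory dataset of `(image, label)` pairs and a pixel grid: `randomize_labels` swaps metadata labels across artworks, and `segment_image` hides blocks of the image, both of which corrupt the image-label pairs a scraper would collect.

```python
import random

def randomize_labels(dataset, seed=None):
    """Shuffle metadata labels across artworks, so each image is paired
    with a label that likely does not describe it (label randomization)."""
    rng = random.Random(seed)
    labels = [label for _, label in dataset]
    rng.shuffle(labels)
    return [(image, label) for (image, _), label in zip(dataset, labels)]

def segment_image(pixels, block=2):
    """Mask alternating horizontal blocks of a pixel grid, hiding parts
    of the artwork from a scraper (image segmentation)."""
    masked = []
    for y, row in enumerate(pixels):
        if (y // block) % 2 == 1:
            masked.append([(0, 0, 0)] * len(row))  # hidden block
        else:
            masked.append(list(row))  # visible block, copied unchanged
    return masked
```

A real tool would apply these transformations server-side before an image is exposed to crawlers; the hypothetical helpers here only show why either step breaks the clean image-label pairing that training depends on.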

The Kin.art tool was co-developed by Flor Ronsmans De Vry, who co-founded the art commission management platform Kin.art with Mai Akiyoshi and Ben Yu a few months ago.

As Ronsmans De Vry explained in an interview, art generation models learn associations between written concepts and images by training on labeled image datasets: they learn, for example, that the word "bird" refers not only to a bluebird but also to a parrot and a bald eagle (as well as more abstract concepts). He said that by "disturbing" either the images or the labels associated with a given artwork, the tool makes it harder for vendors to use that artwork for model training.
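The effect of disturbed labels on that concept-image association can be shown with a toy example. The helper below is a hypothetical stand-in for dataset curation, not anything Kin.art has described: it simply groups image files under their text labels, the pairing a model relies on to learn a concept like "bird".

```python
from collections import defaultdict

def build_concept_index(dataset):
    """Toy stand-in for training-set curation: group image files under
    their text labels, the pairing a model uses to learn a concept."""
    index = defaultdict(list)
    for image_file, label in dataset:
        index[label].append(image_file)
    return dict(index)

# With intact metadata, "bird" accumulates varied examples.
clean = [("bluebird.png", "bird"), ("parrot.png", "bird"), ("eagle.png", "bird")]

# With randomized labels, the same images scatter across unrelated
# concepts, so the learned association for "bird" is weakened.
poisoned = [("bluebird.png", "car"), ("parrot.png", "bird"), ("eagle.png", "chair")]
```

Indexing `clean` groups all three images under "bird", while indexing `poisoned` leaves "bird" with a single example and pollutes unrelated concepts, which is the disruption Ronsmans De Vry describes.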
Ronsmans De Vry said, "Designing an environment where traditional art and generative art can coexist has become one of the major challenges facing the art industry. We believe it starts with ethical approaches to AI training and respecting the rights of artists."

Ronsmans De Vry claimed that Kin.art's training-defense tool improves on existing solutions in one respect: it does not require cryptographically modifying images, which can be costly. He added, however, that it can also be used alongside those methods as an additional layer of protection.

Ronsmans De Vry said, "Other tools that help prevent AI training attempt to mitigate harm by poisoning your artwork after it has already been included in a dataset. We prevent your artwork from being inserted."

While the tool is free, artists must upload their work to Kin.art's portfolio platform to use it.

He said, "After testing our solution on our platform, we plan to offer it as a service so that any small website or large platform can easily protect their data from unauthorized use. Owning and being able to defend your platform's data in the age of AI is more important than ever... Some platforms are fortunate enough to protect their data by blocking non-users' access, but other platforms that provide services to the public do not have this luxury. That's where solutions like ours come into play."