The Joy and Dread of AI Image Generators Without Limits





For the past few months, Elle Simpson-Edin, a scientist by day, has been working with her wife on a novel, due out late this year, that she describes as a “grimdark queer science fantasy.”


As she prepared a website to promote the book, Simpson-Edin decided to experiment with illustrating its content using one of the powerful new artificial intelligence art-making tools that can create eye-catching and even photo-realistic images to match a text prompt. But most of these image generators are designed to restrict what users can depict, banning pornography, violence, and pictures showing the faces of real people. Every option she tried was too prudish. “The book is quite heavy on violence and sex, so art made in an environment where blood and sex is banned isn’t really an option,” Simpson-Edin says.

Happily for Simpson-Edin, she discovered Unstable Diffusion, a Discord community for people using unrestricted versions of a recently released, open source AI image tool called Stable Diffusion. Users share illustrations and simulated photographs that might be considered pornographic or horror-themed, as well as plenty of images that feature nude figures made grotesque by the software’s lack of any understanding of how bodies should actually look.

Simpson-Edin was able to use the unfiltered tool to create suitably erotic and violent images for her book. Although the images are relatively tame, featuring only limited nudity, other image generators would not have been able to make them. “The big selling point of the uncensored Stable Diffusion variants is that they allow so much more freedom,” Simpson-Edin says.



[Image: Simpson-Edin, an author, used an open source AI image generator to create images to promote her “grimdark queer science fantasy” novel.]

The world’s most powerful AI projects remain locked inside large tech companies that are reluctant to provide unfettered access to them—either because they are so valuable or because they might be abused. Over the past year or so, however, some AI researchers have begun building and releasing powerful tools for anyone to use. The trend has sparked concern about potential misuses, since the technology can be harnessed to many different ends. Some users of the notorious image board 4chan have discussed using Stable Diffusion to generate celebrity porn, or deepfakes of politicians as a way to spread misinformation. But it is unclear whether any effort has been made to actually do this.

Some fans of AI art worry about the effect of removing guardrails from image generators. The host of a YouTube channel dedicated to AI art, who goes by the name Bakz T. Future, claims that the Unstable Diffusion community is also creating content that might be considered child pornography. “These are not AI ethicists,” he says. “These are people from dark corners of the internet who have essentially been given the keys to their dreams.”

The provider of those keys is Emad Mostaque, an ex-hedge fund manager from the UK who created Stable Diffusion in collaboration with a collective that is working on numerous open source AI projects.

Mostaque says the idea was to make AI image generation more powerful and accessible. He has also created a company to commercialize the technology. “We support the entire open source art space and wanted to create something anyone could develop on and use on consumer hardware,” he says, adding that he has been amazed by the range of uses people quickly found for Stable Diffusion. Developers have created plugins that add AI image generation to existing applications, as well as new capabilities such as instantly applying a particular artistic style to an existing image.

The official version of Stable Diffusion does include guardrails to prevent the generation of nudity or gore, but because the full code of the AI model has been released, it has been possible for others to remove those limits.

Mostaque says that although some images made with his creation may be unsavory, the tool has not done anything different from more established image making technologies. “Using technology has always been about people’s personal responsibility,” he says. “If they use Photoshop for illegal or unethical use it is the fault of the person. The model can create bad things only if the user deliberately makes it do so.”



AI image generators like Stable Diffusion can create what look like real photographs or hand-crafted illustrations depicting just about anything a person can imagine. This is possible thanks to algorithms that learn to associate the properties of a vast collection of images taken from the web and image databases with their associated text labels. Algorithms learn to render new images to match a text prompt in a process that involves adding random noise to an image and then removing it.
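The add-noise/remove-noise idea above can be sketched numerically. This is a minimal toy illustration, not Stable Diffusion’s actual architecture: the noise level and the “oracle” denoiser that knows the noise exactly are illustrative assumptions, whereas a real diffusion model must learn to estimate that noise and therefore denoises gradually over many steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0, alpha_bar, noise):
    """Forward process: blend a clean image with Gaussian noise."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

def denoise(xt, alpha_bar, predicted_noise):
    """Invert the blend, given an estimate of the noise that was added."""
    return (xt - np.sqrt(1.0 - alpha_bar) * predicted_noise) / np.sqrt(alpha_bar)

x0 = rng.random((8, 8))              # stand-in for a tiny grayscale image
noise = rng.standard_normal((8, 8))  # the corruption to be learned away
alpha_bar = 0.3                      # heavily noised step (assumed value)

xt = add_noise(x0, alpha_bar, noise)

# With a perfect noise estimate, denoising recovers the original exactly;
# a trained model only approximates the noise, so generation is iterative.
recovered = denoise(xt, alpha_bar, noise)
print(np.allclose(recovered, x0))  # True
```

Training teaches a network to predict `noise` from `xt` (and a text prompt); sampling then starts from pure noise and repeatedly applies small denoising steps until an image emerges.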

Because tools like Stable Diffusion use images scraped from the web, their training data often includes pornographic images, making the software capable of generating new sexually explicit pictures. Another concern is that such tools could be used to create images that appear to show a real person doing something compromising, images that might spread misinformation.

The quality of AI-generated imagery has soared in the past year and a half, starting with the January 2021 announcement of a system called DALL-E by the AI research company OpenAI. It popularized the model of generating images from text prompts and was followed in April 2022 by a more powerful successor, DALL-E 2, now available as a commercial service.

From the outset, OpenAI has restricted who can access its image generators, providing access only through an interface that filters what can be requested. The same is true of a competing service, released in July of this year, that helped popularize AI-made art by being widely accessible.

Stable Diffusion is not the first open source AI art generator. Not long after the original DALL-E was released, a developer built a clone called DALL-E Mini that was made available to anyone and quickly became a meme-making phenomenon. DALL-E Mini, later rebranded as Craiyon, still includes guardrails similar to those in the official versions of DALL-E. Clement Delangue, CEO of Hugging Face, a company that hosts many open source AI projects, including Stable Diffusion and Craiyon, says it would be problematic for the technology to be controlled by only a few large companies.

“If you look at the long-term development of the technology, making it more open, more collaborative, and more inclusive, is actually better from a safety perspective,” he says. Closed technology is more difficult for outside experts and the public to understand, he says, and it is better if outsiders can assess models for problems such as race, gender, or age biases; in addition, others cannot build on top of closed technology. On balance, he says, the benefits of open sourcing the technology outweigh the risks.

Delangue points out that social media companies could use Stable Diffusion to build their own tools for spotting AI-generated images used to spread disinformation. He says that developers have also contributed a system for adding invisible watermarks to images made using Stable Diffusion so they are easier to trace, and built a tool for finding particular images in the model’s training data so that problematic ones can be removed.

After taking an interest in Unstable Diffusion, Simpson-Edin became a moderator on the Unstable Diffusion Discord. The server forbids people from posting certain kinds of content, including images that could be interpreted as underage pornography. “We can’t moderate what people do on their own machines but we’re extremely strict with what’s posted,” she says. In the near term, containing the disruptive effects of AI art-making may depend more on humans than machines.




Source: Wired
