
Uncensored AI art model prompts ethics questions – TechCrunch


A new open source AI image generator capable of producing realistic pictures from any text prompt saw strikingly swift uptake in its first week. Stability AI's Stable Diffusion, which is high fidelity yet can run on off-the-shelf consumer hardware, is now in use by art generator services like Artbreeder, Pixelz.ai and more. But the unfiltered nature of the model means that not all of that use has been entirely above board.

For the most part, the use cases have been above board. For example, NovelAI has been experimenting with Stable Diffusion to produce art that can accompany the AI-generated stories users create on its platform, and Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.

But Stable Diffusion has also been used for less savory purposes. On the infamous discussion board 4chan, where the model leaked early, several threads are dedicated to AI-generated nude art of celebrities and other forms of generated pornography.

Emad Mostaque, CEO of Stability AI, called the model's leak on 4chan “regrettable” and stressed that the company is working with “leading ethicists and technologists” on safety and other mechanisms around responsible release. One of those mechanisms is an adjustable AI tool, the Safety Classifier, included in the overall Stable Diffusion software package, which attempts to detect and block offensive or undesirable images.

However, the Safety Classifier – while enabled by default – can be disabled.

Stable Diffusion is very much new territory. Other AI art-generating systems, like OpenAI’s DALL-E 2, have implemented strict filters for pornographic material. (The license for the open source Stable Diffusion prohibits certain applications, such as exploiting minors, but the model itself is not restricted at a technical level.) Moreover, many of those systems cannot create art of public figures, unlike Stable Diffusion. Those two capabilities are risky when combined, allowing bad actors to create pornographic “deepfakes” that, in the worst case, could perpetuate abuse or implicate someone in a crime they did not commit.


A fake photo of Emma Watson, created by Stable Diffusion and posted to 4chan.

Women, unfortunately, are by far the most likely victims of this. A study carried out in 2019 found that, of the 90% to 95% of deepfakes that are non-consensual, about 90% depict women. That bodes poorly for the future of these AI systems, according to Ravit Dotan, an AI ethicist at the University of California, Berkeley.

“I worry about other effects of synthetic images of illegal content – that they will exacerbate the illegal behaviors depicted,” Dotan told TechCrunch via email. “For example, will synthetic child [exploitation] increase the creation of authentic child [exploitation]? Will it increase the number of attacks by pedophiles?”

Abhishek Gupta, a lead researcher at the Montreal AI Ethics Institute, shares this view. “We really need to think about the lifecycle of the AI system, which includes post-deployment use and monitoring, and think about how we can envision controls that minimize harm even in worst-case scenarios,” he said. “This is particularly true when a powerful capability [like Stable Diffusion] gets into the wild that can cause real trauma to those against whom such a system might be used, for example by creating objectionable content in the victim’s likeness.”

Something of a preview came last year, when a father, on the advice of a nurse, took pictures of his toddler’s swollen genital area and texted them to the nurse’s iPhone. The photos were automatically backed up to Google Photos and flagged by the company’s AI filters as child sexual abuse material, which resulted in the man’s account being disabled and an investigation by the San Francisco Police Department.

If a legitimate photo can trip such a detection system, experts like Dotan say, there’s no reason deepfakes generated by a system like Stable Diffusion couldn’t – and at a large scale.

“The AI systems that people create, even with the best of intentions, can be used in harmful ways that they don’t anticipate and can’t prevent,” Dotan said. “I think developers and researchers often underestimate this point.”

Of course, the technology to create deepfakes has existed for some time, AI-powered or otherwise. A 2020 report from deepfake detection company Sensity found that hundreds of deepfake videos featuring female celebrities were uploaded to the world’s biggest pornography websites every month; the report estimated the total number of deepfakes online at around 49,000, more than 95% of which were pornographic. Actresses including Emma Watson, Natalie Portman, Billie Eilish and Taylor Swift have been the targets of deepfakes ever since AI-powered face-swapping tools entered the mainstream several years ago, and some, including Kristen Bell, have spoken out against what they see as sexual exploitation.

But Stable Diffusion represents a newer generation of systems that can create incredibly convincing – if not perfect – fake images with minimal work by the user. It’s also easy to set up, requiring no more than a few setup files and a graphics card costing several hundred dollars on the high end. Work is underway on even more efficient versions of the system that can run on an M1 MacBook.
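
To give a sense of how little user effort is involved, here is a minimal sketch of running Stable Diffusion locally with the Hugging Face diffusers library; the library, checkpoint name and prompt are illustrative assumptions, since the article does not name a specific toolchain.

# A minimal sketch of local text-to-image generation with Stable Diffusion,
# using the Hugging Face diffusers library (an assumed toolchain).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,         # half precision fits consumer GPUs
)
pipe = pipe.to("cuda")                 # a single mid-range graphics card suffices

# The bundled safety checker runs by default and blanks out flagged images.
result = pipe("a watercolor painting of a lighthouse at dawn")
result.images[0].save("output.png")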


A fake photo of Kylie Kardashian posted to 4chan.

Sebastian Berns, a Ph.D. researcher in the AI group at Queen Mary University of London, thinks the automation and the possibility of scaling up customized image generation are the big differences with systems like Stable Diffusion – and the main problems. “Most harmful imagery can already be produced with conventional methods, but it is manual and requires a lot of effort,” he said. “A model that can produce near-photorealistic footage may give way to personalized extortion attacks against individuals.”

Berns fears that personal photos scraped from social media could be used to condition Stable Diffusion or any such model to generate targeted pornographic imagery or images depicting illegal acts. There is certainly precedent. After covering the rape of an eight-year-old Kashmiri girl in 2018, Indian investigative journalist Rana Ayyub became the target of Indian nationalist trolls, some of whom created deepfake porn with her face on another person’s body. The deepfake was shared by the leader of the nationalist political party BJP, and the harassment Ayyub received as a result became so bad that the United Nations had to intervene.

“Stable Diffusion offers enough customization to send automated threats against individuals to either pay or risk having fake but potentially damaging footage published,” Berns continued. “We already see people being extorted after their webcam was accessed remotely. That infiltration step might no longer be necessary.”

With Stable Diffusion out in the wild and already being used to generate pornography – some of it non-consensual – it may fall to the hosts of those images to take action. TechCrunch reached out to one of the major adult content platforms, OnlyFans, but did not hear back by publication time. A spokesperson for Patreon, which also allows adult content, noted that the company has a policy against deepfakes and disallows images that “repurpose celebrities’ likenesses and place non-adult content in an adult context.”

But if history is any indication, enforcement will likely be uneven – in part because few laws specifically protect against deepfakes as they relate to pornography. And even if the threat of legal action pulls some sites dedicated to objectionable AI-generated content offline, there’s nothing to stop new ones from popping up.

In other words, says Gupta, it’s a brave new world.

“Creative and malicious users can abuse the capabilities [of Stable Diffusion] to generate subjectively objectionable content at scale, using minimal resources to run inference – which is cheaper than training the entire model – and then publish it in venues like Reddit and 4chan to drive traffic and attract attention,” Gupta said. “There is a lot at stake when such capabilities escape ‘into the wild,’ where controls such as API rate limits and safety controls on the kinds of outputs returned from the system no longer apply.”


