OpenAI expands access to DALL-E 2, its powerful imaging AI system – TechCrunch





Today, DALL-E 2, an OpenAI artificial intelligence system that can generate images on demand or edit and enhance existing images, is becoming more accessible. The company announced on its blog that it will accelerate access for customers on the waiting list, with a goal of reaching approximately 1 million people over the next few weeks.

With this “beta” launch, DALL-E 2, which was previously free to use, will move to a credit-based payment structure. New users will receive a limited number of credits that can be spent on generating or editing an image or creating an image variant. (Generations return four images, while edits and variations return three.) Credits will refill each month: 50 in the first month and 15 in each month thereafter. Users can also purchase additional credits in $15 increments.

Here is a chart with the specifics:

[Chart: Prices for OpenAI DALL-E 2. Image credits: OpenAI]

Artists in need of financial assistance will be able to apply for subsidized access, OpenAI said.

DALL-E’s successor, DALL-E 2, was announced in April and made available to a select group of users earlier this year, recently crossing the threshold of 100,000 users. OpenAI says the wider access is enabled by new approaches to reduce bias and toxicity in DALL-E 2’s generations, as well as evolutions in the policies governing images created by the system.


An example of the types of images that DALL-E 2 can generate. Image credits: OpenAI

For example, OpenAI said it rolled out a technique this week that encourages DALL-E 2 to generate images of people that “more accurately reflect the diversity of the world’s population” when given a prompt describing a person with an unspecified race or gender. The company also said that it is now rejecting image uploads containing realistic faces and attempts to depict public figures, including prominent politicians and celebrities, while improving the accuracy of its content filters.

Generally speaking, OpenAI does not allow DALL-E 2 to be used to create images that are not “G-rated” or that could “cause harm” (such as images of self-harm, hateful symbols, or illegal activity). Previously, users were also forbidden from using generated images for commercial purposes. As of today, however, OpenAI is granting users “full usage rights” to commercialize the images they create with DALL-E 2, including the right to reprint, sell, and merchandise them – including images created during the early preview.

As DALL-E 2 derivatives such as Craiyon (formerly DALL-E mini) and the unfiltered DALL-E 2 itself have demonstrated, image-generating AI can very easily pick up on the bias and toxicity embedded in the millions of images from the web used to train it. Futurism was able to prompt Craiyon to create images of burning crosses and Ku Klux Klan rallies and found that the system made racist assumptions about identities based on “ethnic-sounding” names. OpenAI researchers noted in an academic paper that an open source implementation of DALL-E could be trained to make stereotypical associations, such as generating images of white men in business suits for terms like “CEO”.

While the version of DALL-E 2 hosted by OpenAI was trained on a dataset filtered to remove images that contained obviously violent, sexual, or hateful content, filtering has its limits. Google recently said it would not release the image-generating Imagen model it developed due to the risk of misuse. Meanwhile, Meta has limited access to Make-A-Scene, its art-focused image-generation system, to “prominent AI artists.”

OpenAI emphasizes that the hosted DALL-E 2 includes other safeguards as well, including “automated and human monitoring systems” to guard against things like the model memorizing faces that appear frequently on the internet. Still, the company acknowledges that there is work left to be done.

“Expanding access is an important part of our responsible deployment of AI systems because it allows us to learn more about real-world use and continue to improve our safety systems,” OpenAI wrote in a blog post. “We continue to explore how AI systems like DALL-E can reflect bias in their training data, and different ways to address it.”



