The AI Culture Wars Are Just Getting Started

Google was forced to turn off the image-generation capabilities of its latest AI model, Gemini, last week after complaints that it defaulted to depicting women and people of color when asked to create images of historical figures who were generally white and male, including Vikings, popes, and German soldiers. The company publicly apologized and said it would do better. And Alphabet’s CEO, Sundar Pichai, sent a mea culpa memo to staff on Wednesday. “I know that some of its responses have offended our users and shown bias,” it reads. “To be clear, that’s completely unacceptable, and we got it wrong.”

Google’s critics have not been silenced, however. In recent days conservative voices on social media have highlighted text responses from Gemini that they claim reveal a liberal bias. On Sunday, Elon Musk posted screenshots on X showing Gemini stating that it would be unacceptable to misgender Caitlyn Jenner even if this were the only way to avert nuclear war. “Google Gemini is super racist and sexist,” Musk wrote.

A source familiar with the situation says that some within Google feel the furor reflects how norms for what is appropriate for AI models to produce are still in flux. The company is working on projects that could reduce such issues in the future, the source says.

Google’s past efforts to increase the diversity of its algorithms’ output have met with less opprobrium. The company previously tweaked its search engine to show greater diversity in image results: searches for terms like “CEO” now surface more women and people of color, even though this may not reflect corporate reality.

Gemini often defaulted to showing women and non-white people because of how Google used a process called fine-tuning to guide the model’s responses. The company was trying to compensate for the biases that commonly occur in image generators, which arise because harmful cultural stereotypes pervade their training images, many of them scraped from the web with a white, Western skew. Without such fine-tuning, image generators tend to produce predominantly white people when asked to depict doctors or lawyers, and disproportionately Black people when asked to depict criminals. It seems Google ended up overcompensating, or didn’t properly test the consequences of the adjustments it made to correct for bias.
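Google hasn’t detailed exactly how its correction went wrong, but the failure mode is easy to reproduce with a far cruder mitigation. The Python sketch below is hypothetical and is not Google’s method (which worked through fine-tuning rather than prompt rewriting); it only illustrates how a blanket, untested “diversify any prompt that mentions a person” rule over-corrects on historically specific requests.

```python
import random

# Hypothetical sketch, not Google's actual mechanism: a blanket rule that
# injects a diversity descriptor whenever a prompt mentions a person,
# showing how an untested mitigation over-corrects.

DIVERSITY_DESCRIPTORS = ["Black", "East Asian", "South Asian", "female"]
PERSON_TERMS = {"doctor", "lawyer", "ceo", "pope", "viking", "soldier"}

def rewrite_prompt(prompt: str) -> str:
    """Prepend a random diversity descriptor if the prompt mentions a person."""
    tokens = {t.strip(".,").lower() for t in prompt.split()}
    if tokens & PERSON_TERMS:
        # The rule fires unconditionally -- even for historically specific
        # requests, reproducing the failure mode Gemini exhibited.
        return f"{random.choice(DIVERSITY_DESCRIPTORS)} {prompt}"
    return prompt

print(rewrite_prompt("a portrait of a viking chieftain"))  # descriptor prepended
print(rewrite_prompt("a mountain landscape at dawn"))      # left unchanged
```

A rule this broad has no notion of historical context, which is why testing against prompts like “a historical pope” matters; Narayanan’s criticism below is essentially that such a test never seems to have been run.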

Why did that happen? Perhaps simply because Google rushed Gemini. The company is clearly struggling to find the right cadence for releasing AI. It once took a more cautious approach with its AI technology, deciding not to release a powerful chatbot due to ethical concerns. After OpenAI’s ChatGPT took the world by storm, Google shifted into a different gear. In its haste, quality control appears to have suffered.

“Gemini's behavior seems like an abject product failure,” says Arvind Narayanan, a professor at Princeton University and coauthor of a book on fairness in machine learning. “These are the same kinds of issues we've been seeing for years. It boggles the mind that they released an image generator without apparently ever trying to generate an image of a historical person.”

Chatbots like Gemini and ChatGPT are fine-tuned through a process that involves having humans test a model and provide feedback, either according to instructions they were given or using their own judgment. Paul Christiano, an AI researcher who previously worked on aligning language models at OpenAI, says Gemini’s controversial responses may reflect that Google sought to train its model quickly and didn’t perform enough checks on its behavior. But he adds that trying to align AI models inevitably involves judgment calls that not everyone will agree with. The hypothetical questions being used to try to catch out Gemini generally force the chatbot into territory where it’s tricky to satisfy everyone. “It is absolutely the case that any question that uses phrases like ‘more important’ or ‘better’ is going to be debatable,” he says.
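For readers unfamiliar with the mechanics, the feedback step Christiano describes is commonly implemented by training a reward model on pairs of responses that human raters have ranked. Below is a minimal, self-contained PyTorch sketch of that pairwise step; the tiny linear reward model and random embeddings are illustrative stand-ins, not anything from Gemini or OpenAI.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Minimal sketch of reward-model training on human preference pairs, the
# feedback step described above. The tiny linear model and random
# embeddings are illustrative stand-ins, not a real system's architecture.

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # response embedding -> scalar reward

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.score(emb).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One batch of rater judgments: embeddings of the response a human preferred
# and the one they rejected. The raters' values are baked in here, which is
# why contested judgment calls yield contested models.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Bradley-Terry pairwise loss: push preferred responses to score higher.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The loss rewards whatever the raters preferred, so whose judgment the raters apply, and what instructions they were given, directly shapes the model’s values.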



Source: Wired