Google's Rush to Launch Gemini AI Raises Ethical Concerns

Google's rush to get its Gemini AI chatbot (formerly known as Bard) to market has caused quite a stir.

by Faruk Imamovic

Google's rush to get its Gemini AI chatbot (formerly known as Bard) to market has caused quite a stir. A former senior Google employee alleges that the company cut corners on key ethical reviews to launch Gemini quickly, despite clear internal warnings that the product was not ready for release.

The picture that emerges is of a company so eager to stay ahead in the AI race that it let fairness and ethical safeguards slide.

Internal Warnings Overlooked

Before its release, Gemini underwent an AI Principles Review, a process designed to ensure that new technologies align with Google's ethical standards.

This review raised significant concerns, with experts warning that the AI was unsafe and advising against its activation. Contrary to these recommendations, Jen Gennai, then-Director of Responsible Innovation (RESIN), is alleged to have edited the expert responses, pushing the product up the leadership chain for expedited release.

Gennai defended the decision to proceed despite these warnings, arguing that Gemini was merely a preview product and that standard review procedures for demos therefore did not apply. Insiders met this stance with skepticism, with one labeling it "100% bull---." Employees had raised alarms about troubling early results from Bard's data sets and embeddings, even proposing a tool to better understand what the model had learned.

These concerns were reportedly dismissed in favor of a quicker market launch, driven by a perceived threat from the rise of ChatGPT.

Ethical Corners Cut in Development

Google's scramble to respond to ChatGPT's success reveals a broader culture of prioritizing rapid product launches over thorough review and ethical consideration.

According to the source, there was a strategic decision to overlook issues of fairness, bias, and ethics, provided the AI did not generate overtly harmful content. This approach marked a significant departure from Google's previous stance on AI development, reflecting a shift in priorities towards regaining market dominance.

The Disbanding of RESIN and Shifting Priorities

When Google merged Gennai's responsible-innovation team with the Office of Compliance and Integrity, the reorganization shifted the company's focus toward avoiding business risk rather than weighing how users would actually be affected.

This shake-up watered down the thorough checks meant to keep AI projects like Gemini honest, making it easier to skip the deep ethical scrutiny they needed. Launching a new product at Google involves many moving parts, from making sure the technology works, to marketing it properly, to keeping it secure.

But according to the former employee, Google has gotten into the habit of rushing these launches to stay ahead, even when that means not spending enough time making sure everything is as solid, or as ethical, as it should be. That haste, combined with teams not working together as well as they should, is likely why Gemini's debut was met with so many raised eyebrows from the public.


The Culture of "Launch and Land"

According to the former employee, Google pushes hard to get new products out fast, sometimes skipping thorough checks on whether those projects serve society and stick to the company's own ethical guidelines.

In this fast-paced setting, launching products quickly is how you climb the ladder, and that incentive has reshaped how things are done. The dissolution of RESIN, whose duties were spread across other departments, loosened the reins on the ethical review process.

This meant that even if a project like Gemini had some ethical question marks hanging over it, it could still make its way through the system a bit too easily.

The Rush to Compete

Google was in a hurry to get Gemini out the door mostly because ChatGPT was starting to look like serious competition.

Google had long been a leader in AI, but ChatGPT's arrival threatened to shake things up. So Google kept pushing Gemini forward, even with serious ethical questions hanging over it.

Leadership apparently judged that staying ahead in the market mattered more than pausing to work through those ethical issues. The episode highlights a bigger problem across the tech industry: the race to be first with the newest technology can crowd out the question of whether a company should be building it at all, ethically speaking.

Without outside pressure to confront these questions, a company's own rules about doing the right thing can get a little too flexible when there's a chance to beat the competition.

The Aftermath and Reckoning

After Gemini drew a wave of criticism, Google had to reckon with what came next. People weren't just upset that the chatbot got things wrong; they were also disappointed that Google seemed to have abandoned its own rules about AI.

Prabhakar Raghavan, a senior Google executive, said the company would run more tests and take another look at how Gemini's AI works, an implicit admission that there were problems. But even with these promises to do better, a big question hangs in the air: are these changes enough if no one outside Google is checking the company's work? According to the former employee, unless Google genuinely shakes up how it operates and how it thinks about its projects, the same problems are likely to resurface.

The Gemini episode suggests it may be time for the tech world to take a long, hard look at how AI is built and released to the public.
