When an AI doesn't know history, you can't blame the AI. It always comes down to data, programming, training, algorithms, and every other human-built part of the technology, plus our own perceptions of the “intentions” of the AI on the other side.
When Google's recently rebranded Gemini (formerly Bard) started spitting out images of people of color to represent Caucasian historical figures, people quickly assessed that something was wrong. For its part, Google acknowledged the bug and pulled all of Gemini's people-generation capabilities until it could find a fix.
It wasn't too difficult to figure out what happened here. Since the early days of AI, and by that I mean 18 months ago, we've been talking about the inherent, built-in biases that, often unintentionally, creep in when programmers train large language and image models on data that reflects their own experiences and, perhaps, not those of the wider world. Sure, you'll get a smart chatbot, but it's likely to have significant blind spots, especially when you consider that the majority of programmers are still white and male (a 2021 study put the percentage of white programmers at 69% and found that only 20% of all programmers were women).
Still, we've learned enough about the potential for bias in AI training and outcomes that companies have become much more proactive about getting ahead of the problem before such biases show up in a chatbot or in generative results. Adobe told me earlier this year that it has programmed its Firefly generative AI tool to take into account where someone lives, along with the racial makeup and diversity of their region, to ensure that image results reflect their reality.
Doing too well
Which brings us to Google. The company probably programmed Gemini to be racially sensitive, but did so in a way that overcompensated. If there was a weighting system balancing historical accuracy against racial sensitivity, Google put its thumb on the scale for the latter.
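Google hasn't said how, or even whether, Gemini weighs these factors against each other, so treat the following as a purely hypothetical sketch: the weights and the combined_score function are invented here just to show how a scale tips when one weight dwarfs the other.

```python
# Purely hypothetical illustration, not Google's actual system.
# Two 0-to-1 signals for a candidate image are blended with fixed weights;
# if the diversity weight dwarfs the accuracy weight, the system will prefer
# a diverse but anachronistic depiction of a historical scene.

def combined_score(accuracy: float, diversity: float,
                   w_accuracy: float = 0.2, w_diversity: float = 0.8) -> float:
    """Weighted blend of the two signals (weights invented for illustration)."""
    return w_accuracy * accuracy + w_diversity * diversity

# An accurate-but-homogeneous image vs. a diverse-but-anachronistic one:
accurate = combined_score(accuracy=0.95, diversity=0.10)       # 0.27
anachronistic = combined_score(accuracy=0.30, diversity=0.95)  # 0.82
print(anachronistic > accurate)  # True: the thumb is on the scale
```

Flip the weights and the same function favors accuracy instead; the point is only that where the thumb lands is a programming decision, not an AI "opinion."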
The example I saw was Google Gemini offering a multicultural image of the Founding Fathers of the United States. Unfortunately, men and women of color were not represented in the group that drafted the American Declaration of Independence. In fact, we know that some of those men were enslavers. I'm not sure how Gemini could have accurately represented these white men while also adding that footnote. Still, the programmers got the bias training wrong, and I applaud Google for pulling Gemini's people-imaging capabilities rather than leaving them live to upset people even more.
However, I think it's worth exploring the significant backlash Google received for this bug. On X (the dumpster fire formerly known as Twitter), people, including X owner Elon Musk, decided that this was Google trying to impose some kind of prejudice against white people. I know, it's ridiculous. Pushing a bizarre political agenda would never serve Google, which is the search-engine home of the masses, regardless of their political or social leanings.
What people don't understand, despite how often developers get it wrong, is that we are still in the early days of the generative AI cycle. The models are incredibly powerful and, in some ways, are outpacing our ability to understand them. We run mad-scientist experiments every day with very little idea of what kind of results we'll get.
When developers release a new generative AI model to the world, I think they understand only about 50% of what the model can do, in part because they can't account for every prompt, conversation, and image request.
More mistakes ahead, until we get it right
If there is something that separates AIs from humans, it is that we have almost unlimited and unpredictable creativity. An AI's creativity is based solely on what we feed it, and even when its results surprise us, I think we are more capable of surprising programmers and AIs with our prompts.
However, this is how AI and the developers behind it learn. We have to make these mistakes. The AI has to create a hand with eight fingers before it can learn that we only have five. Sometimes AI hallucinates, misinterprets facts, and even offends.
When that happens, though, it's no reason to pull the plug. AI has no emotions, intentions, opinions, political stances, or goals to work toward. It is trained to give you the best possible result. It won't always be right, but eventually it will get a lot more right than wrong.
Gemini produced a bad result, which was a mistake by the programmers, who will now go back and push and pull various levers until Gemini understands the difference between political correctness and historical accuracy.
If they do their job well, a future Gemini will offer us an accurate image of the all-white Founding Fathers, along with that crucial note about their position on the enslavement of other humans.