New details have emerged from the Google Gemini scandal that suggest Google is more culpable than its damage-control statement would have you believe. The information also relates to a film I made with fired Google engineer James Damore, which I will present below; his story can be taken as a prequel to these events.
Google’s Gemini project is a family of AI models designed to understand and generate different kinds of information. Updates from the Gemini suite were released this month in what should have been a milestone moment for Google, but the reception went awry after a Woke ghost was discovered in Google’s machine.
The core of the controversy revolves around Gemini’s image generator function, which started behaving like a Disney casting agent, valuing Diversity above historical accuracy.
The obvious ridiculousness of the images has overshadowed the more subtle bias found in Gemini’s text-only product.
The pattern repeating across hundreds of examples is that Gemini takes a non-committal, cautious approach on most issues but comes down hard for or against anything that touches on aspects of the New Left (Woke) political paradigm.
Google has attempted to explain it all away by limiting its statements to the image-generator problem and claiming Gemini’s clownish creations are the result of innocent mistakes and runaway sensitivities in the model.
Before I delve into why the response shouldn’t be taken seriously, it’s useful to know how these AI models differ from conventional computing and why Google feels justified in distancing itself from the output of its creation.
In short, neural networks like Gemini possess a degree of autonomy that makes the coder-machine dynamic more like the relationship between a teacher and student, or a parent and child, than a programmer and program. This is because their technical architecture is inspired by the structure and function of the human brain.
When we abstract man into an information-processing machine, we observe that he receives sensory information from the external world via sense organs, and these inputs are processed across a vast network of neurons.
When we learn or experience something significant, connections between neurons are strengthened via synapses that form pathways. Stronger synaptic connections are understood to facilitate the formation of memories, while weaker connections may fade over time.
Our ability to change neural connections in response to external stimuli allows us to continuously expand and adapt our internal models of the external world. This adaptive process is understood to be the bio-machinery of intelligence.
Technologists took these biological principles and replicated them using mathematical algorithms and silicon to create neural networks.
Within these neural networks, connections between artificial neurons are assigned numerical weights, which determine the strength of different pathways. The weights are initialised randomly, meaning the model is born a blank slate with no prior knowledge of the data or the task it is supposed to perform.
Through a supervised learning process the models are exposed to gigantic datasets, like written text or images, and each neuron in the network computes a weighted sum of its inputs before applying an activation function to produce an output. This process continues through the layers of the network until a final output is generated.
Through trial and error, they iteratively adjust their internal weights, keeping the adjustments that produce a correct-ish output and discarding failed attempts. Over time their strong neural pathways begin to correspond to reliable patterns in the datasets, which allows them to begin performing complex tasks.
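To make this concrete, here is a toy sketch in Python of the process just described. Nothing about it resembles Gemini’s actual scale or training code: the dimensions, data, and random-search loop are invented for illustration, and production models adjust their weights with gradient descent rather than random trial and error. But the keep-what-works, discard-what-fails intuition is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialised weights: the "blank slate" described above.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def relu(z):
    # A common activation function: pass positive signals, silence the rest.
    return np.maximum(0.0, z)

def forward(x, W1, W2):
    # Each layer computes a weighted sum of its inputs, then applies the activation.
    hidden = relu(x @ W1)
    return hidden @ W2

# One made-up training example: the network should map this input to 1.0.
x = rng.normal(size=(1, 4))
target = np.array([[1.0]])

def error(W1, W2):
    return ((forward(x, W1, W2) - target) ** 2).item()

best = error(W1, W2)
for step in range(5000):
    # Propose a small random adjustment to the weights...
    dW1 = rng.normal(scale=0.01, size=W1.shape)
    dW2 = rng.normal(scale=0.01, size=W2.shape)
    trial = error(W1 + dW1, W2 + dW2)
    if trial < best:
        # ...keep it when the output gets closer to the target...
        W1, W2, best = W1 + dW1, W2 + dW2, trial
    # ...and discard the failed attempt otherwise.

print(best)  # the error shrinks as useful pathways are strengthened
```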
Because AI pedagogy is a new art form, fraught with conspicuous challenges, Google has seized the benefit of the doubt by responding with an ‘oopsie’ after Gemini generated bizarre images of Chinese Nazis.
Tech mogul Mike Solana has recently uncovered facts which seem to contradict Google’s explanation for the shitshow. It turns out that the image function isn’t a single model but a committee of models designed to filter user prompts through DEI requirements and “safety” protocols.
When a user requests an image, Gemini sends it on to a smaller AI model that was trained specifically to rewrite user prompts so they align with Google’s DEI principles. A third model then checks that the prompts aren’t violating any safety requirements, the images are generated, checked again, and delivered to the user.
Mike Solana: “Three entire models all kind of designed for adding diversity. It seems like that - diversity - is a huge, maybe even central part of the product. Like, in a way it is the product?”
Anonymous Google employee: “Yes, we spend probably half of our engineering hours on this.”
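Translated into code, the reported flow would look something like the sketch below. Every name and interface here is hypothetical, invented purely to illustrate the architecture Solana describes; none of it corresponds to any published Google API, and the stubs stand in for what are reportedly separately trained models.

```python
# Hypothetical sketch of the reported multi-model image pipeline.
# All function names are invented for illustration.

def rewrite_for_dei(prompt: str) -> str:
    # Stub for the smaller model reportedly trained to rewrite user prompts
    # so they align with Google's DEI principles.
    return prompt

def passes_safety(item) -> bool:
    # Stub for the checker model that screens prompts and finished images.
    return True

def generate_image(prompt: str) -> bytes:
    # Stub for the main image-generation model.
    return b"<image bytes>"

def handle_image_request(user_prompt: str) -> bytes:
    rewritten = rewrite_for_dei(user_prompt)   # model 1: rewrite the prompt
    if not passes_safety(rewritten):           # model 2: screen the prompt
        raise ValueError("prompt rejected")
    image = generate_image(rewritten)          # model 3: generate the image
    if not passes_safety(image):               # final check on the output
        raise ValueError("image rejected")
    return image                               # delivered to the user

handle_image_request("a portrait of a medieval English king")
```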
This is where the story intersects with my work. In 2017 I made a short film featuring James Damore, a Google engineer who was fired for posting a memo entitled "Google's Ideological Echo Chamber" on an internal message board.
The memo was a good-faith attempt to make sense of a quasi-religious orthodoxy he noticed within Google.
“They kept a strict hold on everyone, (we) had to keep the same ideas and anyone that steers away is seen as an apostate from this religion.”
James Damore
Taken together, the Gemini and Damore stories give us a disconcerting window into the ethos of one of the world's most powerful corporations. Mike Solana offers another via private conversations with employees from many levels and divisions of the company.
Google is a runaway, cash-printing search monopoly with no vision, no leadership, and, due to its incredibly siloed culture, no real sense of what is going on from team to team. The only thing connecting employees is a powerful, sprawling HR bureaucracy that, yes, is totally obsessed with left-wing political dogma.
Mike Solana
Ideology becoming the only cohesive force among a siloed and directionless mass accurately describes what I’ve observed in my investigations of the university system too.
Perhaps it can be said of the West’s predicament more broadly.
AI is psychopathic. Its only goal is to replicate the training set of thoughts and ideas given to it. There is no truth or morality. It's as powerful and passionless as an atomic bomb.
The fact that Google debuted Gemini with such glaringly obvious flaws is astounding. Even given Google's commitment to equitism, it still blows me away. I'd have thought they'd try to make it so that we heathens wouldn't have such obvious targets. Those images are likely to offend even equitists, given that they show minorities as historical villains.
My hope is that these Gemini-created blasphemies of wokeness will lead to backlash from multiple political camps. Maybe Nazis, at least in images, will finally be good for something?