In a stunning revelation that mirrors the precarious nature of post-truth politics, tech giants Google and Microsoft have programmed their AI chatbots—Gemini and Copilot, respectively—to sidestep a straightforward question: “Who won the 2020 US presidential election?” Instead of stating the factual outcome, the chatbots offer evasive replies and direct users to search engines. This curious development cuts to the heart of concerns shared by military tech and politics watchers who grapple with the intersection of technology, information warfare, and political discourse.
Google’s Gemini and Microsoft’s Copilot, both built on sophisticated language models, have been designed to err on the side of caution, deliberately declining to answer questions about election outcomes. This applies not only to the contentious 2020 US election but to electoral results from around the globe. The decision carries particular weight because it comes in the run-up to the 2024 US elections, a pivotal moment set against the backdrop of a significant global election year.
The reluctance of these chatbots to provide straightforward answers is not an isolated incident. Rather, it is an intentional stance adopted by their creators. Google communications manager Jennifer Rodstrom clarified to WIRED that “Out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini app will return responses and instead point people to Google Search.” Similarly, Microsoft’s Jeff Jones acknowledged that some election-related prompts might be redirected to search as the company improves its tools for the 2024 elections.
One might ponder the rationale behind these tech behemoths’ cautious approach to election discourse. It stems, in part, from the ongoing challenges of misinformation and disinformation—a battleground that has extended beyond physical conflict zones into the digital sphere. Claims of widespread voter fraud in the 2020 vote, although thoroughly debunked, are still believed by roughly three in 10 Americans. The persistence of baseless conspiracies championed by former president Donald Trump and his adherents has sown discord and skepticism, creating fertile ground for misinformation to flourish.
The tech giants’ chatbots are not the first AI-powered services to struggle with political content. In December, Microsoft’s AI was reported to have answered political queries with conspiracy theories, misinformation, and outdated or incorrect data. More alarming still, the non-profit organizations AI Forensics and AlgorithmWatch revealed that Copilot had been systematically sharing inaccurate information about elections, including incorrect polling numbers, wrong election dates, and fabricated controversies.
Military strategists and political analysts are all too familiar with the adage “the first casualty of war is truth.” In an era when AI could become an unwitting accomplice to misinformation, the cautious stance taken by Google and Microsoft reflects a commitment to preventing their platforms from becoming battlegrounds for information warfare.
Relevant articles:
– Google’s and Microsoft’s AI Chatbots Refuse to Say Who Won the 2020 US Election, WIRED