As AI capabilities advance rapidly, misleading content, once a problem confined to the fringes of the internet, has cascaded into mainstream social media platforms, notably Facebook. For older adults in particular, the proliferation of AI-generated content has become a source of confusion and deceptive enticement, significantly shaping how they perceive and interact with the platform.
Facebook, which has become the digital salon for older generations since younger users migrated to newer apps, is under scrutiny for AI-generated content that appears to target this demographic. Researchers from Stanford and Georgetown found that older users frequently mistake AI-made visuals for genuine images and lavish such posts with praise. In an era when AI can fabricate anything from hyperrealistic faces to audio clips, one must wonder whether the platform is doing enough to protect its users from digital chimeras.
This susceptibility is not merely anecdotal; it is backed by data. A preprint paper from those institutions pointed out that Facebook's algorithm may be complicit in inundating user feeds with AI-generated images to boost product sales and social followings. Older adults' comments on such content display genuine admiration, a concerning revelation given the potential for scams and misinformation.
Research into why such images hoodwink older adults more readily is ongoing. A study in Scientific Reports indicates that older participants are more prone to perceiving AI-generated images as human-made. Yet it is not just a matter of aging brains being outfoxed by new technology; lack of experience with AI may be a significant factor: only 17% of older adults report a substantial awareness of AI, according to a survey by AARP and NORC.
The divide between generations is stark. Older users' naivety about AI-generated content contrasts with younger users' skepticism, the latter having been steeped in a culture of digital distrust. "We have been living in a society that is constantly becoming more and more fake," noted Simone Grassini, one of the researchers studying how people perceive AI-generated media.
Amid this upheaval, initiatives are emerging to blunt the deceptive power of AI on social platforms. Meta, Facebook's parent company, has announced, through global affairs executive Nick Clegg, an effort to detect and label AI-generated images across its network, including Instagram and Threads. This push for transparency is especially pertinent with "important elections taking place around the world," as Clegg noted. Yet as generative AI tools grow more sophisticated, this is a race whose finish line incessantly recedes.
The adversarial landscape of AI content prompts bipartisan concern, particularly as the U.S. approaches a presidential election. A study from the University of Chicago Harris School of Public Policy reveals a consensus that AI's role in elections is perceived as more harmful than beneficial. With an overwhelming majority advocating preventative measures against AI-spun falsehoods, public apprehension is clearly widespread.
Facebook’s conundrum reflects a broader societal challenge: as AI-generated content becomes indistinguishable from reality, how do we equip all generations, especially those more vulnerable, with the acumen to discern fact from fiction? The collective effort must encompass education on digital literacy, increased transparency from social platforms, and perhaps most critically, regulation that holds these entities accountable for the content they disseminate.
Relevant articles:
– Facebook Is Filled With AI-Generated Garbage—and Older Adults Are Being Tricked
– Meta pushes to label all AI images on Instagram and Facebook in crackdown on deceptive content, The Guardian, 6 Feb 2024
– How AI-Generated Content Can Undermine Your Thinking Skills, Psychology Today, 27 Nov 2023
– Poll finds bipartisan concern about use of artificial intelligence in 2024 elections, UChicago News, 6 Nov 2023