
    OpenAI’s Broken Promise: Superalignment Team Denied Critical Compute Power


    OpenAI, a leading force in the field of artificial intelligence, is at the center of a controversy that strikes at the heart of its integrity and the safety measures it had vowed to uphold. Revelations have emerged that the company failed to allocate the promised 20% of its computing power to its Superalignment team—a group focused on preventing the existential risks posed by superintelligent AI systems. This breach of commitment raises serious questions about the company’s dedication to AI safety and the reliability of its public assurances.

    When OpenAI introduced the Superalignment team, it was presented as a crucial component in steering and controlling AI to prevent a rogue superintelligence that could potentially threaten humanity. At the time, the initiative was deemed so vital that OpenAI publicly committed a significant portion of its computing resources to the cause: “20% of the compute we’ve secured to date over the next four years” was earmarked for this purpose.

    However, insiders familiar with the team’s operations have disclosed that this commitment was never fulfilled. According to multiple sources, requests for access to graphics processing units—indispensable for training and operating AI applications—were consistently denied by OpenAI’s leadership. The total compute budget allocated to the Superalignment team never approached the promised threshold, leading to a bottleneck in research progress.

    This shortfall was not due to a lack of potential or ambition on the part of the Superalignment team. In December 2023, the team released a paper detailing successful experiments in getting one AI model to supervise another, more powerful one. But the constraints on compute resources stifled the team's ability to pursue more ambitious work.

    Jan Leike, a co-leader of the Superalignment team, expressed his frustrations on X, stating that "safety culture and processes have taken a backseat to shiny products" and highlighting the difficulties in obtaining computational resources. Multiple sources corroborate the picture of a team struggling to secure the resources its research required.

    The situation was further exacerbated by a high-profile corporate clash involving Sam Altman, OpenAI's CEO. The board of the OpenAI nonprofit foundation, which included Ilya Sutskever, a co-founder and former chief scientist of the company, initially voted to remove Altman. Although Altman was eventually reinstated and the dissenting board members, including Sutskever, stepped down, the turmoil cost the Superalignment team the political leverage it needed to secure its compute allocation.

    The challenges faced by the Superalignment team and subsequent reassignment of its members within the company underscore a disconcerting trend. The departure of several safety researchers in recent months, with some, like Daniel Kokotajlo, citing eroded trust in OpenAI’s leadership to responsibly handle artificial general intelligence (AGI), intensifies the scrutiny on OpenAI’s commitment to AI safety.

    Amid this controversy, OpenAI also faces backlash over its use of an AI-generated voice similar to that of actress Scarlett Johansson, further clouding its credibility. With OpenAI yet to respond substantively to these issues, stakeholders are left to ponder whether the company's other public commitments hold any weight.

    The disbanding of the team and the misallocation of resources not only undermine OpenAI's initial pledge but also raise alarms about the potential risks that unchecked AI systems might pose.

    Relevant articles:
    OpenAI promised 20% of its computing power to combat the most dangerous kind of AI—but never delivered, sources say, Fortune, 05/22/2024
    OpenAI didn't keep promises made to its AI safety team, report says, Quartz, 05/21/2024

