OpenAI’s bots getting ‘dangerously’ close to human-like intelligence may have led to Sam Altman’s ouster
While Sam Altman has been reinstated as the CEO of OpenAI, many questions still linger regarding his dismissal. While some say the dismissal was the result of a corporate coup intended to open up the AI studio to a hostile takeover, others suggest the reason may have been something far more serious.
Leading up to the recent ousting of Altman, a group of staff researchers reportedly penned a letter to the board of directors, expressing concerns about a powerful AI discovery that could pose a threat to humanity. The revelation adds a new dimension to the circumstances surrounding Altman’s temporary removal from his position.
According to Reuters, citing sources familiar with the matter, the letter highlighted potential dangers associated with the AI algorithm and was among the factors that contributed to Altman’s firing. The researchers expressed worries about the commercialization of AI advances without a full understanding of the consequences.
The letter and the AI algorithm, known as Q-Star, were pivotal developments leading up to the board’s decision to oust Altman. Some within OpenAI reportedly view Q-Star as a potential breakthrough in the quest for artificial general intelligence (AGI), which refers to autonomous systems surpassing humans in most economically valuable tasks.
Researchers at OpenAI also raised alarms about the capabilities and potential risks of AI. Q-Star demonstrated the ability to solve certain mathematical problems, a notable achievement in the field of generative AI.
Researchers believe that if an AI can excel at mathematics, it implies reasoning capabilities closer to human intelligence, with potential applications in novel scientific research.
The letter also revealed the existence of an “AI scientist” team, formed by merging earlier “Code Gen” and “Math Gen” teams. This team focused on optimizing existing AI models to enhance reasoning and potentially perform scientific work.
The circumstances surrounding Altman’s firing and the concerns raised by researchers highlight the ongoing debate within the AI community about the potential risks associated with highly intelligent machines. The revelation adds complexity to the narrative surrounding Altman’s leadership and the direction of OpenAI, a prominent player in the AI research landscape.
(With inputs from agencies)