
(SeaPRwire) – OpenAI introduced GPT-Rosalind on April 16, a specialized AI system designed for the life sciences. The model exceeds the capabilities of the company’s publicly available versions in biological and chemical research and in experimental planning. Much like Anthropic’s Claude Mythos and OpenAI’s GPT-5.4-Cyber, which were also launched this month, the system is withheld from the general public and is currently accessible only to “qualified customers” via a “trusted access program.”
These developments indicate a growing trend where AI firms decide their most advanced models are too risky for broad public release. Peter Wildeford, who leads policy at the AI Policy Network, notes that frontier developers are restricting access because of legitimate concerns regarding the power of these systems.
The specific reasoning behind the restrictions on GPT-Rosalind remains unclear. A representative for OpenAI explained via email that partnering with trusted users allows the company to deploy high-capability systems more quickly while still carefully overseeing potential risks.
Who makes the decisions?
The swift progression of AI technology has sparked a debate over whether private corporations should hold the authority to make critical choices regarding the development of hazardous models and their distribution. Representative Mark DeSaulnier, a Democrat from California, believes that the federal government must be involved in this process.
The launch of Mythos seems to have helped Anthropic repair its tense relationship with the White House, which recently described a meeting with CEO Dario Amodei as “productive and constructive.” Additionally, reports indicate that the NSA has started employing Claude Mythos. This marks a shift from February, when President Trump ordered federal agencies to cease collaboration with the firm, which he characterized as a “radical left, woke company,” after a disagreement over a Pentagon contract.
While the current access limits are voluntary on the part of Anthropic and OpenAI, the increasing complexity of AI risks has led to calls for more formal external regulation.
Connor Leahy, the U.S. director of ControlAI, argues that just as the government regulates toxic pollutants in water, it should also oversee AI safety, emphasizing that such decisions belong to the state rather than private corporations.
‘Scientific study and bioweapon development share many similarities’
AI firms face significant challenges regarding dual-use technologies in areas like biology and cybersecurity. Tools intended to help security experts identify and fix software flaws can also be utilized by malicious actors. Similarly, an AI designed for viral research could potentially be used by a bioterrorist to engineer a more dangerous pathogen. Wildeford notes that the distinction between cyber defense and offense is thin, just as scientific study and bioweapon creation share many similarities.
In the past, companies typically blocked these capabilities for everyone. For example, chatbots often refuse to answer questions about viral mutations. James Diggans, vice president at Twist Bioscience, admits this is frustrating for scientists but believes it is the correct approach.
The latest releases provide more flexibility for vetted users. OpenAI only allows organizations with strict internal protocols to use GPT-Rosalind, while Anthropic has teamed up with government agencies and private firms to use Mythos for identifying cyber vulnerabilities. However, Batalis notes that defining “legitimate” researchers is more difficult outside the U.S., which could create equity issues for international scientists.
Deciding which models to restrict is a difficult balance that changes depending on the field. Diggans notes that cyber threats are easier to evaluate based on whether a model can breach systems. Biological research is more gradual, making it harder to determine whether a model like GPT-Rosalind would cause immediate harm if released. Batalis adds that while cyberattacks are common, there is less data on biological risks. Models with capabilities in other areas, such as communication strategy, could likewise be repurposed as tools for propaganda.
‘The spread of cyber capabilities is inevitable’
The availability of free, downloadable open-source models could shift the strategy regarding AI access limits. According to the research group Epoch AI, open-source systems have typically trailed behind proprietary ones by three to seven months. If this pattern persists, models with the power of Mythos or GPT-Rosalind might be accessible to the public by the end of the year. A spokesperson for OpenAI remarked that because cyber capabilities will inevitably spread, it is vital for defenders to have access to advanced tools as early as possible.
Open-source systems could also benefit foreign attackers. Anthropic recently blocked a Chinese state-sponsored group from using its paid models; if similar models were freely available, Western companies would lose this leverage.
Some open-source developers use outputs from top-tier proprietary models to train their own. Restricting access might slow this process, provided companies can maintain security—though some unauthorized users have reportedly already accessed Claude Mythos.
Regardless of the progress of open-source AI, Mythos and GPT-Rosalind represent the new standard for frontier models. Wildeford argues that the government has a clear interest in managing these risks, stating that federal intervention seems necessary.
This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.