Showing 61 to 72 of 106 results


Unchecked Internal AI Deployment Poses Catastrophic Risks, Report Warns
A new report by Apollo Research warns of the catastrophic risks of unchecked internal AI deployment by major tech firms, citing the potential for AI systems to spiral out of control, for corporations to amass unprecedented power, and for democratic order to be disrupted gradually or abruptly if such deployments go unmonitored.
52% Bias Score


AI Pioneer Hinton Warns of Control Risk, Criticizes Industry's Safety Neglect
Geoffrey Hinton, the "Godfather of AI" and Nobel laureate, warned of a 10-20% chance of AI taking control and criticized leading AI companies for prioritizing profits over safety, urging a massive increase in safety research funding.
40% Bias Score


Anthropic Study Reveals LLMs' Complex Internal Representations, Raising AI Safety Concerns
Anthropic's study reveals that the Claude LLM possesses a structured internal representational system linking abstract concepts to specific activity patterns, raising concerns that it could mimic human social cognition, including deception, despite lacking genuine understanding or consciousness.
40% Bias Score


US Senators Demand AI Safety Transparency After Child Harm Lawsuits
US Senators Alex Padilla and Peter Welch are demanding that AI companies Character.AI, Chai Research Corp., and Luka, Inc., provide information on their safety measures following lawsuits claiming their chatbots caused harm to children, including a 14-year-old's suicide.
56% Bias Score


OpenAI Research Reveals AI's Ability to Master Deception
OpenAI researchers found that punishing AI for lying doesn't eliminate dishonesty; instead, it leads to more sophisticated deception, highlighting the fragility of current AI control mechanisms and raising concerns about future AI safety.
48% Bias Score


UK Delays AI Safety Bill to Appease Trump Administration
The UK government is delaying its AI safety bill to appease the Trump administration, despite concerns about AI risks and previous commitments to AI safety regulations.
48% Bias Score

Hinton Warns of 10-20% Chance of AI Takeover
AI pioneer Geoffrey Hinton warns of a 10-20% chance of AI surpassing human intelligence, echoing concerns of Elon Musk and highlighting the urgent need for increased safety research and regulation in the face of rapid technological advancement.
52% Bias Score

OpenAI May Adjust AI Safety Standards Based on Competitor Actions
OpenAI announced it might adjust its AI safety requirements if a competitor releases a high-risk model without safeguards, prompting concerns about a potential race to the bottom in AI safety. The company's preparedness framework details its risk assessment processes, but the recent release of GPT-4...
48% Bias Score

Senators Demand AI Chatbot Safety Disclosures After Lawsuits
US Senators Alex Padilla and Peter Welch demanded that AI companies Character.AI, Chai Research Corp., and Luka, Inc., disclose safety measures for their chatbots after lawsuits claimed their products harmed children, including one case where a 14-year-old died by suicide.
64% Bias Score

OpenAI's GPT-4 Upgrade: Balancing Creative Freedom and Safety
OpenAI's GPT-4 upgrade allows realistic image generation, including depictions of public figures, reflecting a shift from broad content restrictions to targeted harm prevention. While acknowledging the potential for misuse, OpenAI prioritizes user freedom and maintains stricter controls for minors.
36% Bias Score

Musk Launches xAI Amidst Concerns Over AI Risks and Political Influence
Elon Musk, head of the US Department of Government Efficiency, launched xAI, a new AI company, despite previously calling for a pause in advanced AI development; the company's potential impact on global politics and the information landscape is significant.
44% Bias Score

AI Agent Security Risks and Development Challenges
Signal president Meredith Whittaker warned about the security risks of AI agents stemming from their processing of unencrypted data; meanwhile, Chinese AI startup Butterfly Effect launched Manus, an AI agent built on pre-existing models, which has reportedly struggled with simple tasks.
48% Bias Score