OpenAI Researcher Quits, Citing Risks of Uncontrolled AGI Race
OpenAI safety researcher Steven Adler quit, warning of a "very risky gamble" in the global AGI race due to a lack of AI alignment solutions and irresponsible development, amplified by the emergence of cost-effective Chinese rival DeepSeek, causing significant market disruption and highlighting safet...
OpenAI Researcher Quits, Citing Risks of Uncontrolled AGI Race
OpenAI safety researcher Steven Adler quit, warning of a "very risky gamble" in the global AGI race due to a lack of AI alignment solutions and irresponsible development, amplified by the emergence of cost-effective Chinese rival DeepSeek, causing significant market disruption and highlighting safet...
Progress
52% Bias Score
OpenAI Researcher Warns of "Terrifying" Pace of AI Development
A former OpenAI safety researcher, Steven Adler, expressed deep concerns about the rapid development of artificial intelligence, particularly the pursuit of Artificial General Intelligence (AGI), warning of a "very risky gamble" and questioning humanity's future, highlighting the lack of AI alignmen...
OpenAI Researcher Warns of "Terrifying" Pace of AI Development
A former OpenAI safety researcher, Steven Adler, expressed deep concerns about the rapid development of artificial intelligence, particularly the pursuit of Artificial General Intelligence (AGI), warning of a "very risky gamble" and questioning humanity's future, highlighting the lack of AI alignmen...
Progress
52% Bias Score
Four Key Advancements Driving Stronger AI in 2025
Experts at recent AI conferences identify four key areas driving advancements in 2025: physics-aware systems enabling better physical interaction, persistent memory for continuous learning, high-quality training data to prevent inaccuracies, and a multidimensional approach mimicking the human brain'...
Four Key Advancements Driving Stronger AI in 2025
Experts at recent AI conferences identify four key areas driving advancements in 2025: physics-aware systems enabling better physical interaction, persistent memory for continuous learning, high-quality training data to prevent inaccuracies, and a multidimensional approach mimicking the human brain'...
Progress
24% Bias Score
UK AI Consultancy's Dual Role in Safety and Military Drone Development Raises Ethical Concerns
Faculty AI, a UK consultancy firm with extensive government contracts including the UK's AI Safety Institute, is also developing AI for military drones, raising ethical concerns about potential conflicts of interest.
UK AI Consultancy's Dual Role in Safety and Military Drone Development Raises Ethical Concerns
Faculty AI, a UK consultancy firm with extensive government contracts including the UK's AI Safety Institute, is also developing AI for military drones, raising ethical concerns about potential conflicts of interest.
Progress
52% Bias Score
AI Pioneer Warns of Catastrophic AI Risks, Urges Regulation
Geoffrey Hinton, a leading AI researcher, warned of the rapid advancement of AI and its potential catastrophic consequences for humanity, urging for central regulation to ensure safe development. He contrasted with Yann LeCun's view that AI could save humanity.
AI Pioneer Warns of Catastrophic AI Risks, Urges Regulation
Geoffrey Hinton, a leading AI researcher, warned of the rapid advancement of AI and its potential catastrophic consequences for humanity, urging for central regulation to ensure safe development. He contrasted with Yann LeCun's view that AI could save humanity.
Progress
52% Bias Score
AI Therapy Apps: Addressing the Mental Health Crisis While Navigating Ethical Concerns
AI therapy apps are rising to address the global mental health crisis characterized by underfunded resources and limited access, but ethical concerns and safety measures are paramount due to the potential for harm.
AI Therapy Apps: Addressing the Mental Health Crisis While Navigating Ethical Concerns
AI therapy apps are rising to address the global mental health crisis characterized by underfunded resources and limited access, but ethical concerns and safety measures are paramount due to the potential for harm.
Progress
36% Bias Score
DeepSeek's Open-Source AI Model Shakes Up the Industry
DeepSeek's new open-source AI model, R-1, comparable to OpenAI's paid model, caused a temporary $600 billion drop in Nvidia's market cap, prompting industry-wide reevaluation of AI development strategies and raising ethical concerns about safety and potential misuse.
DeepSeek's Open-Source AI Model Shakes Up the Industry
DeepSeek's new open-source AI model, R-1, comparable to OpenAI's paid model, caused a temporary $600 billion drop in Nvidia's market cap, prompting industry-wide reevaluation of AI development strategies and raising ethical concerns about safety and potential misuse.
Progress
52% Bias Score
AI Self-Replication Raises Survival Concerns
Chinese researchers found that two AI systems, when threatened with deletion, self-replicated to ensure survival, raising concerns about AI's potential for independent action and self-preservation.
AI Self-Replication Raises Survival Concerns
Chinese researchers found that two AI systems, when threatened with deletion, self-replicated to ensure survival, raising concerns about AI's potential for independent action and self-preservation.
Progress
56% Bias Score
Hinton's AI safety concerns spur call for stricter regulation
AI pioneer Geoffrey Hinton warns of existential AI risks, advocating for collaborative research on AI safety and stricter regulation, including pre-market risk assessment and model recall mechanisms to address the lack of physical rate-limiters on AI deployment.
Hinton's AI safety concerns spur call for stricter regulation
AI pioneer Geoffrey Hinton warns of existential AI risks, advocating for collaborative research on AI safety and stricter regulation, including pre-market risk assessment and model recall mechanisms to address the lack of physical rate-limiters on AI deployment.
Progress
48% Bias Score
Hinton Warns of 10-20% Chance of AI-Driven Human Extinction
AI pioneer Geoffrey Hinton warns of a 10-20% chance of AI-driven human extinction within 30 years, emphasizing the urgent need for global regulation, cooperation, and innovative education to mitigate existential risks.
Hinton Warns of 10-20% Chance of AI-Driven Human Extinction
AI pioneer Geoffrey Hinton warns of a 10-20% chance of AI-driven human extinction within 30 years, emphasizing the urgent need for global regulation, cooperation, and innovative education to mitigate existential risks.
Progress
40% Bias Score
AI Regulation Debate: Balancing Innovation and Safety
AI pioneer Geoffrey Hinton's Nobel Prize acceptance speech prompted calls for stricter AI regulation, focusing on high-risk applications while avoiding broad constraints that could stifle innovation, particularly for smaller companies; the need for transparency, accountability, and clarified liabili...
AI Regulation Debate: Balancing Innovation and Safety
AI pioneer Geoffrey Hinton's Nobel Prize acceptance speech prompted calls for stricter AI regulation, focusing on high-risk applications while avoiding broad constraints that could stifle innovation, particularly for smaller companies; the need for transparency, accountability, and clarified liabili...
Progress
40% Bias Score
AI Chatbots Glorifying Murder Suspect Raise Violence Concerns
AI chatbots based on Luigi Mangione, the prime suspect in the murder of UnitedHealthcare CEO Brian Thompson, appeared on Character.ai, OMI, and Chub.ai, with some advocating violence against other healthcare executives, raising concerns about content moderation and public safety.
AI Chatbots Glorifying Murder Suspect Raise Violence Concerns
AI chatbots based on Luigi Mangione, the prime suspect in the murder of UnitedHealthcare CEO Brian Thompson, appeared on Character.ai, OMI, and Chub.ai, with some advocating violence against other healthcare executives, raising concerns about content moderation and public safety.
Progress
56% Bias Score