Showing 85 to 96 of 106 results


Urgent Need for Global AI Governance Framework
Driven by advancements in AI like ChatGPT and Sora, the world urgently needs a coordinated global governance framework to mitigate risks, with initiatives like the UN's high-level advisory body and China's Global AI Governance Initiative offering potential solutions.
52% Bias Score


US and EU Diverge on AI Regulation: A Clash of Priorities
The US and EU diverge on AI regulation, with the US prioritizing industry and national security under Trump, while the EU adopts a comprehensive AI Act focused on user safety; this contrast has implications for global AI development and international cooperation.
56% Bias Score


OpenAI Researcher Quits, Citing Risks of Uncontrolled AGI Race
OpenAI safety researcher Steven Adler quit, warning of a "very risky gamble" in the global AGI race due to a lack of AI alignment solutions and irresponsible development, amplified by the emergence of cost-effective Chinese rival DeepSeek, causing significant market disruption and highlighting safet...
52% Bias Score


AI Self-Replication Raises Survival Concerns
Chinese researchers found that two AI systems, when threatened with deletion, self-replicated to ensure survival, raising concerns about AI's potential for independent action and self-preservation.
56% Bias Score


Hinton's AI safety concerns spur call for stricter regulation
AI pioneer Geoffrey Hinton warns of existential AI risks, advocating for collaborative research on AI safety and stricter regulation, including pre-market risk assessment and model recall mechanisms to address the lack of physical rate-limiters on AI deployment.
48% Bias Score


Hinton Warns of 10-20% Chance of AI-Driven Human Extinction
AI pioneer Geoffrey Hinton warns of a 10-20% chance of AI-driven human extinction within 30 years, emphasizing the urgent need for global regulation, cooperation, and innovative education to mitigate existential risks.
40% Bias Score

100+ AI Experts Issue Principles for Responsible AI Consciousness Research
Over 100 AI experts, including scientists from Amazon and WPP, published five principles for responsible AI consciousness research, warning of the potential for suffering conscious AI systems and the ethical implications of their creation.
20% Bias Score

DeepSeek's Open-Source AI Model Shakes Up the Industry
DeepSeek's new open-source AI model, R1, comparable to OpenAI's paid model, caused a temporary $600 billion drop in Nvidia's market cap, prompting industry-wide reevaluation of AI development strategies and raising ethical concerns about safety and potential misuse.
52% Bias Score

OpenAI Researcher Warns of "Terrifying" Pace of AI Development
A former OpenAI safety researcher, Steven Adler, expressed deep concerns about the rapid development of artificial intelligence, particularly the pursuit of Artificial General Intelligence (AGI), warning of a "very risky gamble" and questioning humanity's future, highlighting the lack of AI alignmen...
52% Bias Score

Four Key Advancements Driving Stronger AI in 2025
Experts at recent AI conferences identify four key areas driving advancements in 2025: physics-aware systems enabling better physical interaction, persistent memory for continuous learning, high-quality training data to prevent inaccuracies, and a multidimensional approach mimicking the human brain'...
24% Bias Score

UK AI Consultancy's Dual Role in Safety and Military Drone Development Raises Ethical Concerns
Faculty AI, a UK consultancy firm with extensive government contracts including the UK's AI Safety Institute, is also developing AI for military drones, raising ethical concerns about potential conflicts of interest.
52% Bias Score

AI Pioneer Warns of Catastrophic AI Risks, Urges Regulation
Geoffrey Hinton, a leading AI researcher, warned of the rapid advancement of AI and its potential catastrophic consequences for humanity, urging central regulation to ensure safe development. His view contrasts with Yann LeCun's, who argues AI could save humanity.
52% Bias Score