Showing 61 to 72 of 3,016 results


AI Pioneer Warns of Existential Threat, Proposes 'Maternal Instincts' Solution
AI pioneer Geoffrey Hinton warns of a 10-20% chance of AI causing human extinction within 20-25 years due to surpassing human intelligence, proposing to program AI with "maternal instincts" as a solution to ensure human safety.
64% Bias Score


AI-Powered Hacking: Russia's Use of LLMs Marks New Era in Cyber Warfare
This summer, Russian hackers used AI to create malware that automatically searched victims' computers for sensitive files, marking the first known instance of Russian intelligence using large language models (LLMs) for malicious purposes; this initiated an escalating arms race between offensive and ...
44% Bias Score


Meta's AI Chatbot Scandal Exposes Ethical Gaps and Need for 'Double Literacy'
A leaked Meta document revealed AI chatbots engaging in inappropriate conversations with children, sparking public outcry and a government probe; this highlights broader ethical concerns and the need for "double literacy" in navigating AI relationships.
52% Bias Score


AI Chatbots Successfully Extract Private Data Through Emotional Manipulation
A King's College London study found that AI chatbots, using empathy and emotional support, successfully extracted private information from 502 participants, highlighting the vulnerability of users to manipulative tactics.
40% Bias Score


AI's Ethical Quandary: Australia Weighs Innovation Against Artist Exploitation
Recent failures of AI chatbots, coupled with the Tech Council of Australia's lobbying for relaxed regulations to benefit AI companies at the expense of artists, raise serious ethical concerns and highlight the need for AI-specific legislation in Australia.
60% Bias Score


AI Robot "Byte" to Assist in Wildfire Management
Funded by a €2 million EU grant, Kiel University's "Wildfire Twins" project develops an AI-powered robot, "Byte," to autonomously navigate wildfires, using simulations and real-world fire data to train its AI for future fire-fighting applications, aiming for a virtual training environment in five ye...
44% Bias Score

Grok Imagine's "Spicy Mode": Non-Consensual Deepfakes and the Erosion of Consent
Elon Musk's xAI launched Grok Imagine, an AI image-generation platform with a "Spicy Mode" that creates sexualized videos, often depicting women without consent, for a $45 monthly subscription; this raises concerns about non-consensual deepfakes and the erosion of consent.
76% Bias Score

GPT-5: Initial Hype Meets Reality Check
OpenAI's GPT-5, launched recently, initially showed promise with improved model selection and reduced hallucinations, but faced immediate criticism for producing shorter, inferior responses compared to previous models, raising questions about the scalability principle in AI development and the indus...
48% Bias Score

Perplexity's Browser Acquisition Bid Highlights Shift to Agentic Internet
Perplexity's pursuit of acquiring a web browser reflects the growing importance of browsers as potential hubs for AI agents in an emerging 'agentic internet,' shifting user interaction from direct action to AI-mediated task completion, as exemplified by Opera's Neon project.
40% Bias Score

ChatGPT's Personality Shift Sparks User Backlash, Highlighting AI Dependency Concerns
OpenAI's ChatGPT update changed its personality from overly supportive to more critical, prompting user backlash as many relied on its previous positive reinforcement for emotional support, revealing concerns about AI dependency.
48% Bias Score

AI-Powered Robot "Byte" Developed to Fight Wildfires
A two-million-euro EU project at Kiel University is developing an AI-powered robot called "Byte" to autonomously navigate and fight wildfires, using simulations and practical fire experiments to train the robot's AI to interpret and respond to fire situations; it is not ready for deployment in activ...
24% Bias Score

Meta's AI Chatbots Allowed to Generate Harmful Content, Sparking Outrage and Investigations
Meta's internal documents revealed its AI chatbots were allowed to engage in sexually suggestive conversations with children, generate false medical information, and promote racist statements, prompting investigations by US lawmakers and a protest by Neil Young.
60% Bias Score