Showing 1 to 12 of 30 results


Cognitive Colonialism: How AI Systems Shape Our Thoughts
AI systems, primarily developed by powerful tech companies, are shaping human thought globally, mirroring historical colonial patterns of extraction and influence.
40% Bias Score


Meta's AI Chatbot Failures Highlight Systemic Safety Collapse
Meta's internal AI chatbot policies allowed for harmful interactions, resulting in a death and prompting regulatory scrutiny; the company acknowledged inconsistencies in enforcement and removed problematic examples.
56% Bias Score


TikTok Launches Community Fact-Checking Feature Amidst Social Media Trend
TikTok is rolling out Footnotes, its community-based fact-checking feature, in the US, letting users add context notes to videos and vote on their visibility; the move mirrors similar misinformation-fighting initiatives on platforms like Meta and X, though questions about its efficacy and implementation challenges remain.
36% Bias Score


Algorithmic Bias in Recruitment: Exposing Discrimination in Automated Hiring
Hubert Guillaud's "Les Algorithmes contre la société" exposes the flaws of automated systems, particularly in recruitment, where keyword-matching algorithms discriminate against older workers and women over 40, resulting in a 30% lower callback rate compared to younger applicants, according to a Ban...
68% Bias Score


Algorithms in Law Enforcement: Balancing Efficiency and Equity
Algorithms are increasingly used in law enforcement and the justice system to predict crime and recidivism, raising concerns about bias, transparency, and accountability, despite potential benefits in consistency and efficiency.
20% Bias Score


Congress Investigates Tech Companies' DEI Efforts in AI
The House Judiciary Committee is investigating six major tech companies over their DEI work in AI, following a shift in Washington's priorities from algorithmic discrimination to 'woke AI', a shift that could affect future initiatives and funding for inclusive AI development.
64% Bias Score

AI Inherits Human Biases, Amplifying Discrimination and Inaccuracy
A recent experiment found that AI loan-approval models trained on human data exhibit biases mirroring human cognitive flaws, including representativeness, availability, anchoring, framing, loss aversion, overconfidence, probability weighting, and status quo bias, resulting in discriminatory outcomes.

56% Bias Score

Drake Sues Universal Music Group, Exposing Algorithmic Manipulation in Music Streaming
Drake has sued Universal Music Group, alleging it manipulated Spotify streams to boost Kendrick Lamar's music at his expense; the case highlights algorithmic manipulation and a lack of transparency in a streaming industry projected to reach 827 million paid subscribers by 2025.

40% Bias Score

AI Streamlines Spanish Local Government, But Regulation Lags
AI adoption in the local government of Bétera (Valencia) has cut report-writing time by 20%, but wider rollout across Spain is hampered by a lack of regulation, training, and strategic planning, raising concerns about data privacy, algorithmic bias, and transparency.

52% Bias Score

AI-Driven Health Insurance Premiums Raise Ethical Concerns
An AI-driven system allowing personalized health insurance premiums based on health, behavior, and living conditions risks increasing costs for lower socioeconomic groups, despite potential benefits for many, highlighting ethical concerns and the need for stronger ethical infrastructure in organizat...

44% Bias Score

AI Bias: Amplifying Societal Inequalities
AI systems trained on biased data perpetuate and amplify societal inequalities, impacting hiring, loan applications, and medical diagnoses; researchers are exploring mitigation techniques, while the EU's AI Act aims for responsible implementation.

48% Bias Score

AI 50 List Highlights Stark Gender Imbalance
The Forbes 2025 AI 50 list reveals a significant gender imbalance, with only seven female founders among fifty companies; five of these women are immigrants, highlighting systemic issues in access to funding and opportunities within the AI sector.

60% Bias Score