As artificial intelligence permeates decisions about jobs, finances, security, and personal data, its ethical implications have become a core concern for businesses, regulators, and society at large. Among these concerns, human rights impacts are paramount. Issues like algorithmic bias, unjust discrimination, invasive surveillance, and erosion of privacy aren’t just technical glitches – they strike at fundamental rights to equality, freedom, and dignity. In the realm of corporate social responsibility (CSR), respecting human rights is non-negotiable, and this extends to how organizations develop and deploy AI.
ISO 26000, the international CSR standard, provides a useful lens to examine AI’s human rights challenges. ISO 26000 offers guidance on seven core subjects of social responsibility – including human rights – to help organizations operate ethically. It emphasizes due diligence, avoiding complicity in abuses, remedying grievances, and non-discrimination. In this third entry of our AI and CSR series, we leverage ISO 26000’s human rights framework to assess AI ethics issues and propose governance practices. (If you’re new to the series, consider reading our earlier post on AI and Organizational Governance for foundational principles.)
AI technologies can amplify human rights risks if not managed responsibly. Key areas of concern include algorithmic bias leading to discrimination, mass AI surveillance infringing on privacy and civil liberties, and opaque data practices violating individuals’ rights. Recent real-world cases highlight how these issues are playing out:
Algorithmic bias occurs when AI systems unfairly favor or disadvantage certain groups of people. Biased AI can inadvertently discriminate on the basis of age, gender, race, or other protected characteristics – undermining the right to equality and freedom from discrimination. This risk often stems from training data that reflect historical prejudices or flawed assumptions in algorithm design.
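To make this concrete, here is a minimal sketch of how an internal audit might quantify such bias, using the "four-fifths rule" common in US employment analysis: compare selection rates across groups and flag large gaps. The records, group labels, and threshold below are illustrative assumptions, not figures from any case discussed in this article.

```python
# Minimal sketch: flagging disparate impact in hiring decisions.
# All records, labels, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Under the four-fifths rule, values below ~0.8 warrant review."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit data: (age_group, hired)
decisions = (
    [("under_40", True)] * 45 + [("under_40", False)] * 55
    + [("40_plus", True)] * 20 + [("40_plus", False)] * 80
)

ratio = disparate_impact_ratio(decisions, protected="40_plus", reference="under_40")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.45 ≈ 0.44 -> flag for review
```

A ratio this far below 1.0 does not by itself prove discrimination, but it is exactly the kind of warning signal an internal audit should surface before a hiring system goes live.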
A stark example came in 2023, when a hiring algorithm used by an online education company was found to automatically reject older job applicants, leading to an age discrimination lawsuit. In a landmark case, the U.S. Equal Employment Opportunity Commission (EEOC) settled its first-ever AI bias lawsuit against iTutorGroup, whose recruitment software had been programmed to automatically reject female applicants aged 55 and over and male applicants aged 60 and over, in violation of anti-discrimination law. This case underscores that biased AI decisions can directly violate human rights (in this instance, the right to fair employment opportunity) and that regulators are now actively enforcing against such abuses.
Another real-world instance involves government benefits. In France, a welfare fraud detection algorithm came under fire for disproportionately flagging single mothers and people with disabilities as potential fraudsters. A coalition of human rights groups launched legal action in 2024, arguing that the automated risk-scoring system used by the French Family Allowance Fund (CNAF) is discriminatory. Because single-parent families (often headed by single mothers) and people with disabilities were flagged at higher rates, the algorithm effectively reinforced bias against already vulnerable groups. Such outcomes conflict with the right to social security and the principle of non-discrimination. Algorithmic bias in social services can mean that those who need help most are unjustly denied benefits or subjected to undue scrutiny – a clear human rights concern.
Even without intent to discriminate, bias can creep in through design flaws. In the UK, for example, the Universal Credit benefits system relies on automated calculations that have proven error-prone. Reports found that the algorithm frequently miscalculates claimants’ incomes and needs, cutting support for families who then struggle with hunger, rent arrears, and debt. Here, an algorithmic design flaw became a human rights issue – affecting rights to food, housing, and an adequate standard of living. These cases illustrate why AI ethics cannot be divorced from human rights: biased algorithms can cause real harm to people’s lives at scale.
Mass surveillance powered by AI is another flashpoint. Advances in facial recognition and predictive analytics enable monitoring of individuals on an unprecedented scale, posing threats to privacy, freedom of movement, and freedom of expression. Human rights advocates warn that without strict limits, AI-driven surveillance can create a “Big Brother” effect that chills democratic society.
Consider facial recognition technology (FRT), which has been deployed in public spaces by law enforcement and private companies. In a 2023 judgment, the European Court of Human Rights weighed in on FRT’s legality. In Glukhin v. Russia, the court underscored the “highly intrusive” nature of facial recognition and held that blanket use of FRT by police to identify protesters violates fundamental rights unless tightly regulated with safeguards. The case involved a lone peaceful protester in Moscow who was identified and arrested via CCTV linked to facial recognition. The ruling signaled that indiscriminate surveillance of the public – especially political demonstrations – infringes on the right to privacy and freedom of assembly. In essence, scanning everyone’s face in search of wrongdoing treats all citizens as suspects, a practice incompatible with human rights values.
Privacy regulators are also pushing back on invasive AI. Clearview AI, a US company that scraped billions of online photos to build a facial recognition database, has faced hefty fines in Europe for GDPR violations. In 2024 the Dutch Data Protection Authority fined Clearview €30.5 million, noting that “facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world.” This statement encapsulates the human rights standpoint: people should not have their biometric data captured and analyzed without consent or oversight. Several countries and cities have accordingly banned or restricted facial recognition in policing, citing racial biases and privacy rights. These developments show a growing consensus that AI surveillance must be constrained to protect the public’s rights.
Beyond surveillance cameras, AI also threatens privacy through how it collects and uses personal data. Generative AI models and large-scale data analytics often gobble up personal information, sometimes without individuals’ knowledge. If AI systems are trained on personal emails, social media, or sensitive records, this can violate the right to privacy and data protection. Furthermore, AI-driven content moderation or recommendation algorithms can impact freedom of expression – for instance, by unfairly censoring certain viewpoints – though that is a complex issue of its own.
A prominent case highlighting data privacy issues occurred in 2023 with OpenAI’s ChatGPT. Italy’s data protection authority (Garante) temporarily banned ChatGPT nationwide over privacy concerns, accusing the service of lacking a legal basis for its massive personal data collection and failing to protect minors. As the first Western country to block a popular AI chatbot, Italy sparked a global conversation about AI and privacy rights. Regulators questioned whether AI companies can harvest and use people’s data to train models without explicit consent or transparency. OpenAI was forced to implement new privacy disclosures and options to reinstate the service in Italy. The incident underscored that even cutting-edge AI must comply with fundamental rights and data protection laws – a clear example of AI governance catching up with technology.
Other examples abound: social media algorithms that fueled disinformation and hate speech (implicating rights to safety and information), or “predictive policing” tools that unfairly target minority neighborhoods (implicating rights to equal treatment and justice). In each scenario, an AI system left unchecked can perpetuate human rights violations at scale. These real-world cases from 2023–2024 reinforce the urgency of building human rights considerations into AI design, deployment, and oversight.
How can organizations prevent AI from undermining human rights? This is where ethical AI governance becomes crucial. Companies and governments deploying AI need robust frameworks to ensure their systems respect human rights by design. ISO 26000’s human rights principles offer strategic guidance for doing exactly that, helping translate high-level ethics into concrete practices.
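To illustrate what “by design” can look like in practice, the sketch below encodes a due-diligence checklist as a hard gate in a release process. It is a hypothetical example: the checklist items paraphrase ISO 26000 themes (due diligence, stakeholder engagement, non-discrimination, access to remedy) rather than quoting the standard, and a real review would involve far more than five booleans.

```python
# Hypothetical sketch: a pre-deployment gate built from ISO 26000-style
# human rights themes. Items and policy are illustrative, not the standard's text.
from dataclasses import dataclass

@dataclass
class HumanRightsReview:
    impact_assessment_done: bool     # due diligence: documented impact assessment
    stakeholders_consulted: bool     # affected groups were actually heard
    bias_audit_passed: bool          # non-discrimination check across groups
    grievance_channel_live: bool     # access to remedy for affected individuals
    human_override_available: bool   # a person can review and reverse AI decisions

def may_deploy(review: HumanRightsReview) -> bool:
    """Deployment is blocked unless every due-diligence item is satisfied."""
    return all(vars(review).values())

review = HumanRightsReview(
    impact_assessment_done=True,
    stakeholders_consulted=True,
    bias_audit_passed=False,   # e.g., a disparate impact ratio below threshold
    grievance_channel_live=True,
    human_override_available=True,
)
print("Cleared for deployment:", may_deploy(review))  # False: bias audit failed
```

The point is not the code itself but the design choice it represents: human rights criteria become blocking requirements in the release process rather than an after-the-fact review.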
AI’s influence on society will only grow, which is why anchoring AI development in human rights is critical for sustainable innovation. By looking to frameworks like ISO 26000, organizations can gain a roadmap for ethical AI governance – one that ensures technology serves people, and not the other way around. From preventing algorithmic bias to curbing excessive surveillance, a human rights-based approach helps avoid harm and build public trust in AI systems. In this article, we’ve seen how recent AI controversies (bias in hiring, welfare algorithms, facial recognition, data privacy breaches) all share a common lesson: without deliberate governance, AI can and will compromise human dignity and rights.
The good news is that companies are not starting from scratch. CSR standards and human rights principles provide time-tested guidance that can be adapted to the AI era. Many organizations are now establishing AI ethics committees, conducting impact assessments, and engaging with stakeholders to navigate these challenges. Regulators, too, are crafting laws (like the EU AI Act, adopted in 2024) that translate human rights norms into technical rules. Ethical AI is thus a collaborative effort – one that spans technologists, executives, policymakers, and civil society.
As part of our ongoing AI and CSR series, we will continue exploring how each aspect of social responsibility intersects with AI. (Our previous installments examined topics such as organizational governance in AI, and upcoming ones will delve into issues like labor practices and consumer impacts in the AI context.) By understanding these dimensions, businesses and readers alike can approach AI not just as a high-tech tool, but as a domain of social responsibility. Together, we can harness AI’s benefits while safeguarding the human rights that define our shared humanity.
Q1: How should companies think about AI and human rights?
They should treat AI as a potential risk to rights like privacy, equality, and due process—requiring the same ethical scrutiny as other business practices.
Q2: What kinds of harm can AI cause to individuals or communities?
AI can reinforce discrimination, enable mass surveillance, or misuse personal data without consent, often at large scale.
Q3: Why is algorithmic bias considered a human rights issue?
Because biased AI can unfairly deny opportunities or services based on race, age, gender, or disability—undermining equal treatment.
Q4: What’s a responsible way to deploy AI in sensitive areas?
Start with a human rights impact assessment, involve affected stakeholders, and ensure human review is always possible.
Q5: How are governments responding to human rights risks in AI?
They’re introducing laws like the EU AI Act and enforcing privacy or anti-bias regulations to hold AI systems accountable.