TL;DR: AI and Human Rights – ISO 26000 Guidance for Ethical AI Governance
- ISO 26000 & Human Rights in AI – ISO 26000 is a global CSR standard emphasizing due diligence, non-discrimination, and accountability in upholding human rights. Applying these principles to AI can mitigate risks like biased algorithms or unjust surveillance.
- Real-World AI Ethics Risks – Recent examples (2023–2025) show AI systems infringing on rights: hiring algorithms rejecting older applicants, welfare fraud detectors unfairly targeting single mothers, facial recognition enabling mass surveillance, and chatbots violating privacy.
- Ethical AI Governance Strategies – Businesses should integrate human rights into AI lifecycle management: conduct human rights impact assessments, ensure transparency and AI ethics oversight, and establish grievance mechanisms for those affected by automated decisions. This aligns with ISO 26000’s guidance on social responsibility and helps maintain public trust.
- Series Context – This article is part of a series on AI and CSR. Our previous article on Organizational Governance and AI discussed how ethical leadership and oversight set the stage for responsible AI. Here, we focus on human rights, building on that foundation and paving the way for upcoming discussions (e.g. on labour practices and consumer issues in AI).
As artificial intelligence permeates decisions about jobs, finances, security, and personal data, its ethical implications have become a core concern for businesses, regulators, and society at large. Among these concerns, human rights impacts are paramount. Issues like algorithmic bias, unjust discrimination, invasive surveillance, and erosion of privacy aren’t just technical glitches – they strike at fundamental rights to equality, freedom, and dignity. In the realm of corporate social responsibility (CSR), respecting human rights is non-negotiable, and this extends to how organizations develop and deploy AI.
ISO 26000, the international CSR standard, provides a useful lens to examine AI’s human rights challenges. ISO 26000 offers guidance on seven core subjects of social responsibility – including human rights – to help organizations operate ethically. It emphasizes due diligence, avoiding complicity in abuses, remedying grievances, and non-discrimination. In this third entry of our AI and CSR series, we leverage ISO 26000’s human rights framework to assess AI ethics issues and propose governance practices. (If you’re new to the series, consider reading our earlier post on AI and Organizational Governance for foundational principles.)
Human Rights Risks in AI: Bias, Surveillance, and Privacy
AI technologies can amplify human rights risks if not managed responsibly. Key areas of concern include algorithmic bias leading to discrimination, mass AI surveillance infringing on privacy and civil liberties, and opaque data practices violating individuals’ rights. Recent real-world cases highlight how these issues are playing out:
Algorithmic Bias and Discrimination in AI
Algorithmic bias occurs when AI systems unfairly favor or disadvantage certain groups of people. Biased AI can inadvertently discriminate on the basis of age, gender, race, or other protected characteristics – undermining the right to equality and freedom from discrimination. This risk often stems from training data that reflect historical prejudices or flawed assumptions in algorithm design.
A stark example came in 2023, when a hiring algorithm used by an online education company was found to automatically reject older job applicants, leading to an age discrimination lawsuit. In a landmark case, the U.S. Equal Employment Opportunity Commission (EEOC) settled its first-ever AI bias lawsuit against iTutorGroup, whose recruitment software automatically screened out older candidates in violation of anti-discrimination law. This case underscores that biased AI decisions can directly violate human rights (in this instance, the right to fair employment opportunity) and that regulators are now actively enforcing against such abuses.
Another real-world instance involves government benefits. In France, a welfare fraud detection algorithm came under fire for disproportionately flagging single mothers and disabled people as potential fraudsters. A coalition of human rights groups launched legal action in 2024, arguing that the automated system used by the French Family Allowance Fund (CNAF) is discriminatory. Because single-parent families (often single mothers) and people with disabilities appeared more frequently targeted, the algorithm effectively reinforced bias against vulnerable groups. Such outcomes conflict with the right to social security and the principle of non-discrimination. Algorithmic bias in social services can mean those who need help the most are unjustly denied benefits or subjected to undue scrutiny – a clear human rights concern.
Even outside of intentional discrimination, unintended bias can creep in. In the UK, for example, the Universal Credit benefits system uses automated calculations that have been prone to errors. Reports found its algorithm frequently miscalculates claimants’ incomes and needs, resulting in reduced support for families who then struggle with hunger, rent arrears, and debt. Here, an algorithmic design flaw turned into a human rights issue – impacting rights to food, housing, and an adequate standard of living. These cases illustrate why AI ethics cannot be divorced from human rights: biased algorithms can cause real harm to people’s lives at scale.
AI Surveillance and the Right to Privacy
Mass surveillance powered by AI is another flashpoint. Advances in facial recognition and predictive analytics enable monitoring of individuals on an unprecedented scale, posing threats to privacy, freedom of movement, and freedom of expression. Human rights advocates warn that without strict limits, AI-driven surveillance can create a “Big Brother” effect that chills democratic society.
Consider facial recognition technology (FRT), which has been deployed in public spaces by law enforcement and private companies. In a 2023 judgment, the European Court of Human Rights weighed in on FRT’s legality. In Glukhin v. Russia, the court underscored the “highly intrusive” nature of facial recognition and held that blanket use of FRT by police to identify protesters violates fundamental rights unless tightly regulated with safeguards. The case involved a lone peaceful protester in Moscow who was identified and arrested via CCTV linked to facial recognition. The ruling signaled that indiscriminate surveillance of the public – especially political demonstrations – infringes on the rights to privacy and freedom of expression. In essence, scanning everyone’s face in search of wrongdoing treats all citizens as suspects, a practice incompatible with human rights values.
Privacy regulators are also pushing back on invasive AI. Clearview AI, a US company that scraped billions of online photos to build a facial recognition database, has faced hefty fines in Europe for GDPR violations. In 2024 the Dutch Data Protection Authority fined Clearview €30.5 million, noting that “facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world”. This statement encapsulates the human rights standpoint: people should not have their biometric data captured and analyzed without consent or oversight. Several countries and cities have accordingly banned or restricted facial recognition in policing, citing racial biases and privacy rights. These developments show a growing consensus that AI surveillance must be constrained to protect the public’s rights.
Data Privacy, AI and Freedom of Expression
Beyond surveillance cameras, AI also threatens privacy through how it collects and uses personal data. Generative AI models and large-scale data analytics often ingest vast amounts of personal information, sometimes without individuals’ knowledge. If AI systems are trained on personal emails, social media posts, or sensitive records, this can violate the rights to privacy and data protection. Furthermore, AI-driven content moderation and recommendation algorithms can affect freedom of expression – for instance, by unfairly censoring certain viewpoints – though that is a complex issue of its own.
A prominent case highlighting data privacy issues occurred in 2023 with OpenAI’s ChatGPT. Italy’s data protection authority (Garante) temporarily banned ChatGPT nationwide over privacy concerns, accusing the service of lacking a legal basis for its massive personal data collection and failing to protect minors. As the first Western country to block a popular AI chatbot, Italy sparked a global conversation about AI and privacy rights. Regulators questioned whether AI companies can harvest and use people’s data to train models without explicit consent or transparency. OpenAI was forced to implement new privacy disclosures and options to reinstate the service in Italy. The incident underscored that even cutting-edge AI must comply with fundamental rights and data protection laws – a clear example of AI governance catching up with technology.
Other examples abound: social media algorithms that fueled disinformation and hate speech (implicating rights to safety and information), or “predictive policing” tools that unfairly target minority neighborhoods (implicating rights to equal treatment and justice). In each scenario, an AI system, if left unchecked, can perpetuate human rights violations at scale. These real-world cases from 2023–2025 reinforce the urgency of building human rights considerations into AI design, deployment, and oversight.
Guiding Ethical AI Governance with ISO 26000 Human Rights Principles
How can organizations prevent AI from undermining human rights? This is where ethical AI governance becomes crucial. Companies and governments deploying AI need robust frameworks to ensure their systems respect human rights by design. ISO 26000’s human rights principles offer strategic guidance for doing exactly that, helping translate high-level ethics into concrete practices.
- Human Rights Due Diligence: ISO 26000 emphasizes conducting due diligence to identify human rights risks in operations. For AI, this means proactively assessing how an algorithm or data practice could impact people. Before deploying an AI system, organizations should perform a Human Rights Impact Assessment – examining, for example, if a recruitment AI might discriminate by gender, or if a facial recognition deployment could infringe privacy in vulnerable communities. By involving diverse stakeholders (including those who might be affected) in testing and auditing AI, organizations can catch biased outcomes or rights risks early. Ongoing monitoring is also key, since AI behavior can change over time or be used in new contexts.
- Avoiding Complicity in Abuses: ISO 26000 warns companies to avoid complicity in human rights violations. In the AI context, this translates to careful oversight of where and how your AI tools are used. For instance, an AI provider should think twice before selling powerful surveillance software to regimes known for political repression, as that could facilitate abuses. Likewise, a social media company must consider how its AI-driven content algorithms might be misused to incite violence or discrimination. Ethical AI governance may require saying “no” to certain high-risk deployments, or building in safeguards (like watermarks on deepfakes, or limits on personal data collection) to prevent misuse.
- Fairness and Non-Discrimination: A core part of ISO 26000’s human rights guidance is non-discrimination and equality. Applied to AI, this means ensuring algorithms treat individuals fairly and do not exacerbate bias. Technical steps include using diverse and representative training data, applying algorithmic fairness techniques, and regularly auditing outcomes for disparate impact (a minimal audit sketch follows this list). Governance steps include setting up an AI ethics committee or review board that includes experts in civil rights. If biases are found (as in the hiring and welfare cases above), organizations must be ready to pause and fix the system – or even scrap it – to uphold the principle of equality. Inclusivity in AI design (bringing in voices from minority groups, for example) also helps in catching blind spots that homogeneous developer teams might miss.
- Transparency and Accountability: Human rights principles call for transparency and effective remedies when rights are harmed. For AI, transparency involves explaining how automated decisions are made and allowing those affected to challenge or appeal them. ISO 26000 suggests having clear grievance mechanisms, which in AI governance could mean providing users a way to contest an AI-driven decision – such as being denied a loan or content being taken down – and get a human review (a second sketch after this list illustrates one way such appeals might be recorded). Accountability also implies that organizations take responsibility for their AI’s actions. This can be achieved by internal policies that clarify who oversees AI ethics, external audits or certifications for high-stakes AI systems, and compliance with emerging AI regulations. By making AI decision-making more transparent, organizations build trust and enable oversight, aligning with the accountability aspect of CSR.
- Integrating Human Rights in AI Strategy: Ultimately, respecting human rights in AI is not just about avoiding bad outcomes; it’s about baking ethical considerations into the AI strategy and culture. ISO 26000 frames human rights as a cross-cutting issue that should influence governance, systems, and values of an organization. In practice, companies can adopt AI ethics charters that explicitly reference international human rights (like privacy, freedom of expression, non-discrimination) as guiding principles. Training AI engineers and product managers on human rights awareness is another step – they should understand, for example, the social impacts of false positives in a policing AI, or why consent is crucial for AI handling personal data. When human rights become a natural part of the AI development life cycle – from design to deployment and feedback – the result is more socially responsible AI aligned with CSR objectives.
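To make the fairness point above more concrete, here is a minimal sketch of the kind of disparate-impact check an AI ethics review might run, using the informal “four-fifths rule” as a screening threshold. The data, group labels, and threshold are illustrative assumptions rather than a prescribed method; a real audit would add statistical significance testing, intersectional analysis, and legal review.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the informal "four-fifths rule")."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes: (age_band, passed_automated_screen)
sample = ([("under_40", True)] * 62 + [("under_40", False)] * 38
          + [("40_plus", True)] * 35 + [("40_plus", False)] * 65)

print(disparate_impact_flags(sample))
# -> {'under_40': False, '40_plus': True}  # older group flagged for review
```

A flagged result does not by itself prove discrimination; it is a trigger for the deeper investigation, stakeholder consultation, and possible remediation described in the bullets above.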
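Similarly, the grievance-mechanism idea in the transparency bullet can be sketched as a simple record structure: every automated decision carries a plain-language reason and can be routed to a human-review queue when contested. All names and fields here are hypothetical; a production system would need identity verification, response deadlines, audit trails, and notification of outcomes.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str           # e.g. "loan_denied" or "content_removed"
    reason: str            # plain-language explanation given to the person
    appealed: bool = False

# Decisions contested by affected people wait here for a human reviewer.
human_review_queue: list[AutomatedDecision] = []

def file_appeal(decision: AutomatedDecision, note: str) -> None:
    """Record the appeal and hand the decision to human review."""
    decision.appealed = True
    decision.reason += f" | appeal note: {note}"
    human_review_queue.append(decision)

d = AutomatedDecision("applicant-17", "loan_denied", "debt-to-income ratio above limit")
file_appeal(d, "income figure on file was outdated")
print(len(human_review_queue), human_review_queue[0].appealed)  # 1 True
```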
Conclusion
AI’s influence on society will only grow, which is why anchoring AI development in human rights is critical for sustainable innovation. By looking to frameworks like ISO 26000, organizations can gain a roadmap for ethical AI governance – one that ensures technology serves people, and not the other way around. From preventing algorithmic bias to curbing excessive surveillance, a human rights-based approach helps avoid harm and build public trust in AI systems. In this article, we’ve seen how recent AI controversies (bias in hiring, welfare algorithms, facial recognition, data privacy breaches) all share a common lesson: without deliberate governance, AI can and will compromise human dignity and rights.
The good news is that companies are not starting from scratch. CSR standards and human rights principles provide time-tested guidance that can be adapted to the AI era. Many organizations are now establishing AI ethics committees, conducting impact assessments, and engaging with stakeholders to navigate these challenges. Regulators, too, are crafting laws (like the EU AI Act, adopted in 2024) that echo human rights norms in technical rules. Ethical AI is thus a collaborative effort – one that spans technologists, executives, policymakers, and civil society.
As part of our ongoing AI and CSR series, we will continue exploring how each aspect of social responsibility intersects with AI. (Our previous installments examined topics such as organizational governance in AI, and upcoming ones will delve into issues like labor practices and consumer impacts in the AI context.) By understanding these dimensions, businesses and readers alike can approach AI not just as a high-tech tool, but as a domain of social responsibility. Together, we can harness AI’s benefits while safeguarding the human rights that define our shared humanity.
🔍 FAQs – AI Ethics and Human Rights
Q1: How should companies think about AI and human rights?
They should treat AI as a potential risk to rights like privacy, equality, and due process—requiring the same ethical scrutiny as other business practices.
Q2: What kinds of harm can AI cause to individuals or communities?
AI can reinforce discrimination, enable mass surveillance, or misuse personal data without consent, often at large scale.
Q3: Why is algorithmic bias considered a human rights issue?
Because biased AI can unfairly deny opportunities or services based on race, age, gender, or disability—undermining equal treatment.
Q4: What’s a responsible way to deploy AI in sensitive areas?
Start with a human rights impact assessment, involve affected stakeholders, and ensure human review is always possible.
Q5: How are governments responding to human rights risks in AI?
They’re introducing laws like the EU AI Act and enforcing privacy or anti-bias regulations to hold AI systems accountable.