TL;DR: AI and CSR Series: AI’s Impact on Jobs, Workers, and Labour Ethics
- AI’s Impact on Jobs in Asia: Rapid AI adoption is automating routine tasks but also creating new roles, requiring businesses to proactively manage job transitions.
- Labour Ethics & Workplace Fairness: Ensuring fair treatment, non-discrimination, and worker well-being in an AI-driven workplace is crucial (e.g. transparent AI in hiring, algorithmic accountability).
- Responsible Automation Strategies: Aligned with ISO 26000, companies are investing in retraining and upskilling (e.g. Singapore’s SkillsFuture, Infosys’s reskilling pledge) to mitigate job displacement.
- CSR and Governance in AI: Effective corporate social responsibility in Asia involves governance frameworks and policies (from government and industry) that guide ethical AI deployment in the workplace.
Asia’s economies are witnessing profound shifts in employment as AI technologies become mainstream. A 2024 IMF analysis noted that Singapore – often seen as a bellwether for tech adoption – is “highly exposed” to AI-driven changes in the workplace due to its skilled workforce. About half of Singapore’s jobs could see significant productivity boosts from AI augmentation, while the other half may face disruption if workers lack complementary skills. This dual reality highlights a broader trend across Asia: AI can both displace and enhance jobs.
On one hand, repetitive, routine tasks are increasingly automated. Many companies report that AI tools have reduced administrative workloads – for example, Singapore’s AI Readiness Index case studies indicate up to a 50% reduction in repetitive tasks in some industries, freeing employees for higher-value work. On the other hand, new job roles and skills are emerging. Demand is rising for data analysts, AI specialists, and cybersecurity experts to build and manage these systems. The World Economic Forum’s Future of Jobs data (2023) suggests that while a significant percentage of roles may be eliminated or redefined by 2030, nearly an equal number of new roles could be created, especially in tech-forward regions like East and Southeast Asia. The net outcome for Asian labour markets will depend on how businesses and governments manage this transition.
Singapore’s approach exemplifies proactive adaptation. In Budget 2024, Singapore launched the SkillsFuture “Level-Up” programme, including a S$4,000 training credit for citizens over 40 to learn AI and digital skills. This initiative is a direct response to AI-driven job changes, aiming to reskill mid-career workers for new roles. As Deputy PM Lawrence Wong explained, the goal is to “ensure workforce competitiveness” by nurturing talent in AI and related fields. Such policies are essential, as studies warn that without “targeted training policies…leveraging the SkillsFuture program,” AI’s benefits may be unevenly distributed and could worsen inequality (e.g. if women or younger workers are left behind).
Meanwhile, countries like Japan face a different dynamic: AI and robotics are deployed to fill labour shortages rather than cut costs. Japan’s aging population and new work-hour regulations have led to a shortage of truck drivers – dubbed the “2024 problem”. In response, Japanese companies and the government are ramping up automation in logistics. An International Federation of Robotics report noted that automation and robotics are seen as key to addressing Japan’s labour shortfall, especially with caps on overtime for drivers aimed at improving working conditions. This is a case where responsible automation is aligned with labour practices: the government enforced better labour standards (limiting excessive work hours), and technology is used to uphold those standards without crippling the industry.
Collaborative warehouse robots being deployed to assist with logistics tasks. In Japan, such robotics solutions are helping businesses maintain productivity amid worker shortages, illustrating how AI and automation can support labour practices when aligned with employee well-being. Source: iotworldtoday.com
Not all AI-driven changes are positive. There have been high-profile instances of companies choosing automation in ways that raise CSR concerns. For example, in mid-2024, Media Chinese International in Malaysia announced plans to integrate AI for news production and downsize nearly 44% of its staff over five years. The media group will automate tasks like video content creation and even use AI newsreaders, anticipating a one-third reduction in headcount within two years. This drastic move, driven by financial losses, underscores the social responsibility dilemma: how should companies balance economic survival with the impact on employees? Stakeholders have questioned whether such transitions could be managed more gradually or with retraining of staff for new digital roles. This example serves as a cautionary tale that the pursuit of efficiency via AI must be weighed against its human costs – a core concern of labour practices under CSR.
Labour Ethics and Responsible Automation in the Workplace
As AI transforms the world of work, maintaining ethical labour practices becomes both more challenging and more crucial. ISO 26000’s guidance on labour practices – including fair employment, workplace safety, and worker development – provides a useful framework. It emphasizes that organizations should ensure fair and safe working conditions, transparent communication, and respect for worker rights in all circumstances. When AI systems are introduced into the workplace, these principles need fresh interpretation and enforcement.
Fair employment and non-discrimination: A key ethical concern is ensuring that AI tools used in HR do not perpetuate bias. Many companies now use Automated Employment Decision Tools (AEDTs) – from AI résumé screeners to algorithms that suggest who to interview or promote. In Singapore, such practices are under scrutiny. The Ministry of Manpower has made clear that “regardless of the tools used,” employers must comply with fair employment guidelines, and any AI-driven discrimination is unlawful. As Manpower Minister Dr. Tan See Leng noted in Parliament (Nov 2024), Singapore will hold employers accountable for AI outcomes in hiring or promotion: if an algorithm unfairly filters out candidates (for example, on the basis of age, gender, or race), affected parties can seek recourse through the Tripartite Alliance for Fair Employment Practices (TAFEP). To date, no official complaints have been filed, but the government is “closely monitoring” AI adoption in HR and working with industry partners to ensure guidelines remain adequate. This proactive stance exemplifies how governance and labour ethics intersect: clear policies and watchdogs are needed so that AI augments human decision-making without undermining equity or diversity in the workplace.
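One practical way to audit an AEDT for the kind of discrimination described above is a simple adverse-impact screen, such as the widely used “four-fifths rule”: a group whose selection rate falls below 80% of the best-off group’s rate is flagged for closer review. The sketch below is illustrative only – the group labels and outcome data are hypothetical, and a real audit would involve statistical testing and legal review, not just this rule of thumb.

```python
from collections import Counter

def selection_rates(candidates):
    """Selection rate per demographic group.

    `candidates` is a list of (group, selected) tuples, where
    `selected` is True if the AI tool advanced the candidate.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(candidates):
    """Return True per group if its selection rate is at least 80%
    of the highest group's rate; False means potential adverse impact."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical screening outcomes from an AI résumé tool
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)
print(four_fifths_check(outcomes))
# group_b's rate (0.20) is only 50% of group_a's (0.40), so it is flagged
```

A periodic check like this, run on each hiring cycle’s data, is the sort of self-audit the Singapore case studies later in this article describe.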
Worker well-being and algorithmic management: Beyond hiring, AI is increasingly used to manage workers’ day-to-day performance. In the gig economy and service sectors, algorithmic management systems (like those running ride-hailing or delivery apps) assign tasks, set prices, and evaluate performance with minimal human oversight. While this can improve efficiency, it also raises concerns about transparency and worker rights. Drivers and delivery riders in Southeast Asia, for instance, often struggle with opaque algorithms that dictate their incomes and work hours. A 2024 study highlighted how such platforms create an “opaque, unaccountable environment” with information asymmetry – companies have all the data, while workers are left in the dark. Furthermore, reduced human intervention means workers have little recourse to challenge automated decisions (such as sudden account suspension or pay cuts due to an algorithm).
There are emerging efforts to rein in algorithmic management in Asia. Notably, China’s Algorithmic Recommendation Regulation (effective 2022) includes provisions to protect gig workers. Under this law, food delivery platforms in China were required to adjust their algorithms to improve labour conditions – for example, giving drivers more time to complete deliveries and allowing them to request extensions without penalty. Major platforms complied by registering their algorithms and implementing these changes, demonstrating that governance can “nudge platforms to alter the priorities of their algorithms” in favor of worker welfare. This is a significant precedent in Asia for responsible automation governance: it shows that technology can be governed to respect fundamental labour rights, such as reasonable working hours and the right to due process in performance management.
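The algorithm changes China required of delivery platforms amount to adding worker-welfare constraints to a dispatch system: a floor on delivery-time budgets and a no-penalty extension right. A minimal sketch of that idea follows – every name and parameter here is hypothetical, not any platform’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class DeliveryJob:
    distance_km: float
    extensions_used: int = 0

MIN_MINUTES = 15.0        # floor: never assign an unrealistically tight deadline
MINUTES_PER_KM = 4.0      # assumed average courier pace, traffic included
EXTENSION_MINUTES = 10.0  # extra time granted per no-penalty extension
MAX_EXTENSIONS = 2        # cap on extensions per job

def assign_deadline(job: DeliveryJob) -> float:
    """Time budget in minutes, with a hard minimum so that
    optimising for speed cannot push deadlines below a humane floor."""
    return max(MIN_MINUTES, job.distance_km * MINUTES_PER_KM)

def request_extension(job: DeliveryJob) -> float:
    """Grant extra minutes without penalty, up to a cap -- mirroring
    the 'request an extension' right described above."""
    if job.extensions_used >= MAX_EXTENSIONS:
        return 0.0
    job.extensions_used += 1
    return EXTENSION_MINUTES

job = DeliveryJob(distance_km=2.5)
budget = assign_deadline(job)     # floor applies: 2.5 * 4 = 10 < 15, so 15.0
budget += request_extension(job)  # one no-penalty extension brings it to 25.0
```

The point of the sketch is the design choice, not the numbers: welfare rules enter the system as explicit constraints (`MIN_MINUTES`, `MAX_EXTENSIONS`) that the efficiency-optimising parts of the algorithm cannot override.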
Health, safety, and work conditions: Introducing AI and robotics into workplaces also brings physical and psychosocial safety considerations. ISO 26000 underscores the importance of health and safety in the workplace. In manufacturing and warehousing, collaborative robots (“cobots”) now work alongside humans. Ensuring these machines are safe – with proper fail-safes and training – is paramount. In offices, AI surveillance or monitoring tools (meant to track productivity) can infringe on privacy and create stress, affecting mental well-being. Ethical labour practice means finding the right balance: for instance, if AI monitors work patterns to boost efficiency, employees should be informed and consulted (aligning with the principle of worker consultation and social dialogue that ISO 26000 advocates). Some companies in Asia have begun establishing internal committees or feedback channels for employees to voice concerns about new AI tools, echoing ISO 26000’s call for grievance mechanisms in the workplace. The Tripartite Alliance in Singapore and various unions in Asia are similarly encouraging social dialogue on AI adoption, to ensure changes are made with worker input rather than imposed unilaterally.
ISO 26000 provides a comprehensive blueprint for social responsibility, and its section on Labour Practices is directly relevant to managing AI-driven workplace changes. Key aspects of ISO 26000’s labour guidance include: employment relationships, conditions of work, social protection, social dialogue, health & safety, and human development (training). Here’s how businesses can apply these principles to AI initiatives:
- Employment relationships and social dialogue: Organizations should treat employees as stakeholders in AI deployments. This means consulting employees (and unions where applicable) when introducing AI that affects jobs or work processes. For example, before implementing an AI system that reorganizes workflow or schedules, a company might hold briefings and gather employee feedback. Such dialogue not only aligns with ISO 26000 but also improves adoption, as workers are more likely to embrace AI if they understand its purpose and have a say in its implementation. Some forward-thinking firms in Asia have even created AI ethics committees that include employee representatives, ensuring human-centric oversight of workplace AI.
- Fair conditions of work: According to ISO 26000, fair labour practices include fair remuneration, job security, and reasonable working hours. Responsible use of AI should reinforce, not undermine, these conditions. If AI increases productivity, organizations could share the gains (e.g. through bonuses or reduced working hours for work-life balance) rather than simply downsizing. If AI enables 24/7 operations, companies must guard against creating an “always-on” expectation for employees. Asia’s culture of long working hours must not be exacerbated by AI. In practice, this might involve setting policies such as “right to disconnect” after hours, even as AI keeps systems running. Japan’s overtime regulation mentioned earlier is one policy-level example of guarding fair work conditions in an automated age.
- Training and human development: Perhaps the most critical ISO 26000 principle for the AI era is the emphasis on training and skills development. The standard encourages companies to invest in their people – and with AI, this translates to robust upskilling and reskilling programs. A shining example is India’s IT industry. In 2024, Infosys’s CEO Salil Parekh affirmed that the company did “not foresee any layoffs” due to AI; instead, Infosys had already trained over 250,000 employees in AI and pledged to continuously reskill staff for new tech roles. This corporate stance aligns perfectly with ISO 26000’s guidance to support employees through technological changes. Likewise, Tata Consultancy Services (TCS) and other major Asian tech employers have taken pride in a no-layoff philosophy, choosing to redeploy and retrain workers as tasks evolve. These cases illustrate how businesses can apply social responsibility by treating employees as assets to be developed, not costs to be cut, even when AI could technically replace certain jobs.
- Social protection and transition support: ISO 26000 highlights the importance of providing support when employment is affected – for instance, notice periods, severance, or assistance in finding new jobs. In the context of AI, if redundancies do occur, responsible companies in Asia are expected to handle them humanely. This could include offering generous retrenchment benefits, funding for retraining (perhaps through government programs like SkillsFuture or similar schemes in other countries), or phasing changes over time to allow natural attrition. The Media Chinese case, where nearly half the workforce may be cut for AI adoption, has sparked debate in Malaysia about whether the company will adequately support those employees in transitioning to new careers. A CSR-aligned approach would involve working with local authorities and industry groups to find alternative employment or training for affected staff over that five-year plan, rather than abrupt layoffs.
In summary, ISO 26000’s labour practices framework serves as a moral compass for AI integration. It reminds organizations that technological progress should not come at the expense of fundamental labour rights and dignity. By following its guidance – from fair treatment and dialogue to training and safety – businesses can ensure that AI becomes a tool for empowerment of workers, not a threat.
Case Studies: AI and Labour Practices in Action (Asia Focus)
To ground these concepts, let’s look at a few recent real-world examples in Asia where AI and labour practices intersect:
- Singapore – Ethical AI in HR: Singapore’s public and private sectors are actively guarding against AI-driven bias in employment. In late 2024, the government confirmed that the Tripartite Guidelines on Fair Employment apply fully to AI tools. One multinational firm in Singapore piloting AI for initial résumé screening found that the algorithm was unintentionally favoring certain university graduates. Upon audit (encouraged by the forthcoming Workplace Fairness Legislation), the company adjusted the tool to focus on skills rather than past affiliations – an example of self-regulation aligning with national fairness norms. This shows how a combination of policy, oversight, and corporate initiative can maintain ethics in hiring. It also hints at future requirements: we may soon see mandatory bias audits for AI hiring tools in the region, similar to regulations in New York or the EU, adapted to the Asian context.
- Malaysia – Balancing Automation with Social Responsibility: The Media Chinese International case (mentioned above) is a live example of the tension between digital transformation and labour impact. As the media group moves to automate news production, it faces public scrutiny. Industry commentators in Malaysia have urged that any productivity gains from AI be partially invested in the remaining workforce – for instance, retraining print journalists as digital content creators or upskilling employees to manage the new AI systems. This situation is being closely watched by CSR advocates as a test of whether corporate transformations in Asia will prioritize stakeholder responsibility or shareholder profit alone.
- China – Algorithmic Fairness for Gig Workers: China’s early steps to regulate algorithms stand out in Asia. Following the 2022 regulations, platforms like Ele.me and Meituan (food delivery) reportedly tweaked their dispatch algorithms to reduce undue pressure on delivery riders. There are anecdotal reports in 2023 of couriers in Beijing noticing slightly more generous delivery times and an option to appeal if they couldn’t meet a timer, which was not available before. While challenges remain, this is a case of a government using law to enforce responsible tech design that accounts for labour rights. It aligns with ISO 26000’s principle that businesses should ensure decent working conditions even when using advanced technology.
- India – IT Industry Reskilling: The big Indian IT service companies (Infosys, TCS, Wipro) provide a case study in strategic workforce upskilling. As generative AI made waves in 2023–2024, these companies launched massive internal training drives. Infosys, for example, integrated AI courses into its Lex platform and by mid-2024 had over 270,000 employees trained in AI skills. The company’s leadership publicly stated that growth in AI work “will ensure strong hiring growth” rather than layoffs, projecting a net positive job outlook. This commitment is both practical (keeping their talent relevant) and deeply aligned with CSR – investing in people to adapt to change. It also resonates culturally in Asia, where providing stable employment is often seen as a company’s social duty. Smaller firms and startups in Asia are now following suit, partnering with online learning providers to reskill their teams on AI, thus democratizing the benefits of technology.
- Japan – Human-Centric Robotics: In Japan’s manufacturing sector, an ethos of “coexistence” between humans and robots is emphasized. Companies like Toyota have long used the concept of “jidoka” (automation with a human touch), which in modern terms means machines are intelligent but workers are still empowered to manage and improve the system. In 2025, several Japanese factories implementing AI vision systems for quality control also launched an internal program to reposition affected line workers into quality assurance analyst roles, combining their domain experience with new tech training. This prevented layoffs and even improved quality outcomes, serving as a best-practice example of redesigning jobs in the age of AI rather than cutting them.
Each of these examples reinforces the article’s central message: responsible AI adoption in the workplace is achievable when guided by strong ethics and social responsibility principles. Companies in Asia that follow suit are likely to enjoy not just smoother transitions, but also enhanced reputation, employee loyalty, and sustainable growth.
Conclusion: Towards a Responsible AI-Driven Workplace in Asia
As AI continues to reshape the future of work, businesses in Asia – from Singapore’s financial hubs to China’s tech giants and India’s IT services – must anchor their strategies in ethical labour practices. ISO 26000 offers a valuable guide, reminding us that corporate responsibility to employees is as important as innovation. By proactively addressing the CSR dimensions of AI – be it through upskilling programs, transparent and fair AI systems, or stakeholder engagement – companies can turn a potential source of social risk into an opportunity for positive impact.
This article is part of the “AI and CSR Series” (Entry #4), building on previous discussions of AI governance and human rights. Together, these insights make it clear that AI and responsible business go hand in hand. For HR professionals, this means updating policies and training to handle AI ethically. For corporate governance leaders, it means asking tough questions about how AI decisions are made and who they affect. And for CSR practitioners, it means championing initiatives that ensure no one gets left behind in the AI revolution.
By keeping people at the center of AI integration, Asian companies can uphold the region’s rich legacy of community and progress. The future of work with AI doesn’t have to be a zero-sum game – with wise governance and a commitment to fair labour practices, it can create value for businesses and society, hand in hand.
FAQs
Q1: How is AI affecting jobs in Asia?
A1: AI is both creating and transforming jobs in Asia. It automates routine tasks (reducing some roles) but also generates new opportunities in tech, data, and AI system management. The net effect varies by industry, but with proactive training and adaptation, many workers can transition into new roles rather than be displaced.
Q2: What does ISO 26000 say about labour practices and AI?
A2: ISO 26000 provides broad guidance on fair labour practices – like fair treatment, safe work, and employee development – which can be applied to AI. It implies that companies should use AI in ways that uphold workers’ rights, involve employees in decisions, ensure safety, and invest in skills development to help workers adapt to technological change.
Q3: How can businesses practice responsible automation?
A3: Businesses can practice responsible automation by conducting impact assessments before deploying AI, consulting with employees, and putting in place measures like bias audits for AI decisions. They should also commit to retraining employees for new roles, maintain fair labour conditions (no excessive workload or surveillance), and provide support to any workers affected by automation.
Q4: Are there examples of ethical AI use in HR?
A4: Yes. Some companies use AI in recruitment or performance reviews with caution – for instance, using AI to assist (not replace) human decision-makers and regularly checking the AI for bias. In Singapore, employers follow Tripartite Guidelines to ensure AI doesn’t discriminate. Global firms like IBM have also developed principles for AI in HR to promote fairness and transparency.
Q5: Why should HR and CSR practitioners in Asia care about AI and labour practices?
A5: HR and CSR practitioners are key to ensuring that AI adoption aligns with ethical standards. They help shape policies that protect employees, promote upskilling, and maintain inclusive workplaces. By being involved, they can prevent legal or reputational issues, improve employee morale, and ensure that the introduction of AI ultimately benefits both the organization and its people – a win-win that supports sustainable business success.