Asia’s economies are witnessing profound shifts in employment as AI technologies become mainstream. A 2024 IMF analysis noted that Singapore – often seen as a bellwether for tech adoption – is “highly exposed” to AI-driven changes in the workplace due to its skilled workforce. About half of Singapore’s jobs could see significant productivity boosts from AI augmentation, while the other half may face disruption if workers lack complementary skills. This dual reality highlights a broader trend across Asia: AI can both displace and enhance jobs.
On one hand, repetitive, routine tasks are increasingly automated. Many companies report that AI tools have reduced administrative workloads – for example, Singapore’s AI Readiness Index case studies indicate up to a 50% reduction in repetitive tasks in some industries, freeing employees for higher-value work. On the other hand, new job roles and skills are emerging. Demand is rising for data analysts, AI specialists, and cybersecurity experts to build and manage these systems. The World Economic Forum’s Future of Jobs data (2023) suggests that while a significant percentage of roles may be eliminated or redefined by 2030, nearly an equal number of new roles could be created, especially in tech-forward regions like East and Southeast Asia. The net outcome for Asian labour markets will depend on how businesses and governments manage this transition.
Singapore’s approach exemplifies proactive adaptation. In Budget 2024, Singapore launched the SkillsFuture “Level-Up” programme, including a S$4,000 training credit for citizens over 40 to learn AI and digital skills. This initiative is a direct response to AI-driven job changes, aiming to reskill mid-career workers for new roles. As Deputy PM Lawrence Wong explained, the goal is to “ensure workforce competitiveness” by nurturing talent in AI and related fields. Such policies are essential, as studies warn that without “targeted training policies…leveraging the SkillsFuture program,” AI’s benefits may be unevenly distributed and could worsen inequality (e.g. if women or younger workers are left behind).
Meanwhile, countries like Japan face a different dynamic: AI and robotics are deployed to fill labour shortages rather than cut costs. Japan’s ageing population and new work-hour regulations have led to a shortage of truck drivers – dubbed the “2024 problem”. In response, Japanese companies and the government are ramping up automation in logistics. An International Federation of Robotics report noted that automation and robotics are seen as key to addressing Japan’s labour shortfall, especially with caps on overtime for drivers aimed at improving working conditions. This is a case where responsible automation is aligned with labour practices: the government enforced better labour standards (limiting excessive work hours), and technology is used to uphold those standards without crippling the industry.
Collaborative warehouse robots being deployed to assist with logistics tasks. In Japan, such robotics solutions are helping businesses maintain productivity amid worker shortages, illustrating how AI and automation can support labour practices when aligned with employee well-being. Source: iotworldtoday.com
Not all AI-driven changes are positive. There have been high-profile instances of companies choosing automation in ways that raise CSR concerns. For example, in mid-2024, Media Chinese International in Malaysia announced plans to integrate AI for news production and downsize nearly 44% of its staff over five years. The media group will automate tasks like video content creation and even use AI newsreaders, anticipating a one-third reduction in headcount within two years. This drastic move, driven by financial losses, underscores the social responsibility dilemma: how should companies balance economic survival with the impact on employees? Stakeholders have questioned whether such transitions could be managed more gradually or with retraining of staff for new digital roles. This example serves as a cautionary tale that the pursuit of efficiency via AI must be weighed against its human costs – a core concern of labour practices under CSR.
As AI transforms the world of work, maintaining ethical labour practices becomes both more challenging and more crucial. ISO 26000’s guidance on labour practices – including fair employment, workplace safety, and worker development – provides a useful framework. It emphasizes that organizations should ensure fair and safe working conditions, transparent communication, and respect for worker rights in all circumstances. When AI systems are introduced into the workplace, these principles need fresh interpretation and enforcement.
Fair employment and non-discrimination: A key ethical concern is ensuring that AI tools used in HR do not perpetuate bias. Many companies now use Automated Employment Decision Tools (AEDTs) – from AI résumé screeners to algorithms that suggest whom to interview or promote. In Singapore, such practices are under scrutiny. The Ministry of Manpower has made clear that “regardless of the tools used,” employers must comply with fair employment guidelines, and any AI-driven discrimination is unlawful. As Manpower Minister Dr. Tan See Leng noted in Parliament (Nov 2024), Singapore will hold employers accountable for AI outcomes in hiring or promotion: if an algorithm unfairly filters out candidates (for example, on the basis of age, gender, or race), affected parties can seek recourse through the Tripartite Alliance for Fair Employment Practices (TAFEP). To date, no official complaints have been filed, but the government is “closely monitoring” AI adoption in HR and working with industry partners to ensure guidelines remain adequate. This proactive stance exemplifies how governance and labour ethics intersect: clear policies and watchdogs are needed so that AI augments human decision-making without undermining equity or diversity in the workplace.
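What does a “bias audit” of an AEDT look like in practice? One common heuristic is to compare selection rates across demographic groups and flag large disparities. The sketch below is purely illustrative: the data, group labels, and the 0.8 threshold (the so-called “four-fifths rule”, a widely used rule of thumb rather than any Singapore-specific standard) are assumptions, not part of TAFEP’s guidelines.

```python
# Illustrative adverse-impact check for an automated screening tool.
# All data and the 0.8 threshold are hypothetical assumptions for the sketch.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results from an AI résumé screener.
results = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(results)
if adverse_impact_ratio(rates) < 0.8:  # four-fifths rule of thumb
    print(f"Possible adverse impact; selection rates: {rates}")
```

A failed check does not prove discrimination, but it is the kind of signal that should trigger human review of the tool before it is used in live hiring decisions.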
Worker well-being and algorithmic management: Beyond hiring, AI is increasingly used to manage workers’ day-to-day performance. In the gig economy and service sectors, algorithmic management systems (like those running ride-hailing or delivery apps) assign tasks, set prices, and evaluate performance with minimal human oversight. While this can improve efficiency, it also raises concerns about transparency and worker rights. Drivers and delivery riders in Southeast Asia, for instance, often struggle with opaque algorithms that dictate their incomes and work hours. A 2024 study highlighted how such platforms create an “opaque, unaccountable environment” with information asymmetry – companies have all the data, while workers are left in the dark. Furthermore, reduced human intervention means workers have little recourse to challenge automated decisions (such as sudden account suspension or pay cuts due to an algorithm).
There are emerging efforts to rein in algorithmic management in Asia. Notably, China’s Algorithmic Recommendation Regulation (effective 2022) includes provisions to protect gig workers. Under this law, food delivery platforms in China were required to adjust their algorithms to improve labour conditions – for example, giving drivers more time to complete deliveries and allowing them to request extensions without penalty. Major platforms complied by registering their algorithms and implementing these changes, demonstrating that governance can “nudge platforms to alter the priorities of their algorithms” in favour of worker welfare. This is a significant precedent in Asia for responsible automation governance: it shows that technology can be governed to respect fundamental labour rights, such as reasonable working hours and the right to due process in performance management.
Health, safety, and work conditions: Introducing AI and robotics into workplaces also brings physical and psychosocial safety considerations. ISO 26000 underscores the importance of health and safety in the workplace. In manufacturing and warehousing, collaborative robots (“cobots”) now work alongside humans. Ensuring these machines are safe – with proper fail-safes and training – is paramount. In offices, AI surveillance or monitoring tools (meant to track productivity) can infringe on privacy and create stress, affecting mental well-being. Ethical labour practice means finding the right balance: for instance, if AI monitors work patterns to boost efficiency, employees should be informed and consulted (aligning with the principle of worker consultation and social dialogue that ISO 26000 advocates). Some companies in Asia have begun establishing internal committees or feedback channels for employees to voice concerns about new AI tools, echoing ISO 26000’s call for grievance mechanisms in the workplace. The Tripartite Alliance in Singapore and various unions in Asia are similarly encouraging social dialogue on AI adoption, to ensure changes are made with worker input rather than imposed unilaterally.
ISO 26000 provides a comprehensive blueprint for social responsibility, and its section on Labour Practices is directly relevant to managing AI-driven workplace changes. Key aspects of ISO 26000’s labour guidance include: employment relationships, conditions of work, social protection, social dialogue, health & safety, and human development (training). Businesses can apply each of these principles to their AI initiatives.
In summary, ISO 26000’s labour practices framework serves as a moral compass for AI integration. It reminds organizations that technological progress should not come at the expense of fundamental labour rights and dignity. By following its guidance – from fair treatment and dialogue to training and safety – businesses can ensure that AI becomes a tool for empowerment of workers, not a threat.
To ground these concepts, let’s look at a few recent real-world examples in Asia where AI and labour practices intersect:
Each of these examples reinforces the article’s central message: responsible AI adoption in the workplace is achievable when guided by strong ethics and social responsibility principles. Companies in Asia that follow suit are likely to enjoy not just smoother transitions, but also enhanced reputation, employee loyalty, and sustainable growth.
As AI continues to reshape the future of work, businesses in Asia – from Singapore’s financial hubs to China’s tech giants and India’s IT services – must anchor their strategies in ethical labour practices. ISO 26000 offers a valuable guide, reminding us that corporate responsibility to employees is as important as innovation. By proactively addressing the CSR dimensions of AI – be it through upskilling programs, transparent and fair AI systems, or stakeholder engagement – companies can turn a potential source of social risk into an opportunity for positive impact.
This article is part of the “AI and CSR Series” (Entry #4), building on previous discussions of AI governance and human rights. Together, these insights make it clear that AI and responsible business go hand in hand. For HR professionals, this means updating policies and training to handle AI ethically. For corporate governance leaders, it means asking tough questions about how AI decisions are made and who they affect. And for CSR practitioners, it means championing initiatives that ensure no one gets left behind in the AI revolution.
By keeping people at the center of AI integration, Asian companies can uphold the region’s rich legacy of community and progress. The future of work with AI doesn’t have to be a zero-sum game – with wise governance and a commitment to fair labour practices, it can create value for businesses and society, hand in hand.
Q1: How is AI affecting jobs in Asia?
A1: AI is both creating and transforming jobs in Asia. It automates routine tasks (reducing some roles) but also generates new opportunities in tech, data, and AI system management. The net effect varies by industry, but with proactive training and adaptation, many workers can transition into new roles rather than be displaced.
Q2: What does ISO 26000 say about labour practices and AI?
A2: ISO 26000 provides broad guidance on fair labour practices – like fair treatment, safe work, and employee development – which can be applied to AI. It implies that companies should use AI in ways that uphold workers’ rights, involve employees in decisions, ensure safety, and invest in skills development to help workers adapt to technological change.
Q3: How can businesses practice responsible automation?
A3: Businesses can practice responsible automation by conducting impact assessments before deploying AI, consulting with employees, and putting in place measures like bias audits for AI decisions. They should also commit to retraining employees for new roles, maintain fair labour conditions (no excessive workload or surveillance), and provide support to any workers affected by automation.
Q4: Are there examples of ethical AI use in HR?
A4: Yes. Some companies use AI in recruitment or performance reviews with caution – for instance, using AI to assist (not replace) human decision-makers and regularly checking the AI for bias. In Singapore, employers follow Tripartite Guidelines to ensure AI doesn’t discriminate. Global firms like IBM have also developed principles for AI in HR to promote fairness and transparency.
Q5: Why should HR and CSR practitioners in Asia care about AI and labour practices?
A5: HR and CSR practitioners are key to ensuring that AI adoption aligns with ethical standards. They help shape policies that protect employees, promote upskilling, and maintain inclusive workplaces. By being involved, they can prevent legal or reputational issues, improve employee morale, and ensure that the introduction of AI ultimately benefits both the organization and its people – a win-win that supports sustainable business success.