Artificial intelligence (AI) is rapidly transforming the landscape of human resource management (HRM), presenting both unprecedented opportunities and significant challenges [6]. This review synthesizes current research on the application of AI in HRM, categorizing findings into key themes: risk management, task organization, human-AI collaboration, well-being, and governance. The analysis highlights the potential of AI to enhance efficiency, reduce bias, and improve employee experience, while also underscoring the crucial need for ethical considerations, robust governance frameworks, and a focus on human well-being to ensure responsible and effective AI integration.

Navigating the Risks of AI in HRM

The deployment of AI in HRM is not without risk, particularly where failures could have catastrophic consequences [1]. The US National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework (AI RMF) to guide AI developers and users in assessing and mitigating these risks [1]. Actionable guidance developed toward this framework emphasizes identifying and managing potential unintended uses and misuses of AI systems, including uses that could violate human rights, and underscores the need to include catastrophic risk factors in risk assessments and to report on AI risk factors effectively [1].
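To make the risk-assessment step concrete, the minimal sketch below shows what an AI risk register for an HRM system might look like. The field names, severity tiers, and severity-times-likelihood ordering are illustrative assumptions, not elements prescribed by the AI RMF or by [1].

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CATASTROPHIC = 4  # [1] argues this tier belongs in risk assessments


@dataclass
class RiskEntry:
    """One row of a hypothetical AI risk register for an HRM system."""
    system: str               # e.g. "resume-screening model"
    scenario: str             # identified use, unintended use, or misuse
    severity: Severity
    likelihood: float         # rough probability estimate in [0, 1]
    mitigations: list = field(default_factory=list)


def report(entries):
    """Print entries ordered by a simple severity-times-likelihood score."""
    ranked = sorted(entries, key=lambda e: e.severity.value * e.likelihood, reverse=True)
    for e in ranked:
        print(f"{e.system}: {e.scenario} "
              f"[severity={e.severity.name}, p~{e.likelihood:.2f}] "
              f"mitigations={e.mitigations or 'NONE DOCUMENTED'}")


register = [
    RiskEntry("resume screener", "systematic exclusion of a protected group",
              Severity.HIGH, 0.15, ["bias audit", "human review of rejections"]),
    RiskEntry("resume screener", "training data collected in violation of privacy law",
              Severity.CATASTROPHIC, 0.05),
]
report(register)
```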

The adoption of AI in HRM necessitates a proactive approach to risk management. This includes addressing potential biases in AI systems, ensuring data privacy, and preventing algorithmic discrimination [6]. The authors of [14] highlight that ethical concerns about AI deployment vary significantly based on implementation context and specific use cases within human service organizations. They propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing [14].
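As an illustration of how such a dimensional assessment might be operationalized, the sketch below scores a use case on the three dimensions named in [14] and combines them into a single index. The 1-to-5 scales, equal weights, and review threshold are assumptions made for illustration, not values from the paper.

```python
# Hypothetical dimensional risk scoring over the dimensions named in [14];
# the 1-5 scales, equal weights, and 0.6 threshold are illustrative assumptions.

DIMENSIONS = ("data_sensitivity", "oversight_required", "wellbeing_impact")

def dimensional_risk(scores, weights=None):
    """Combine per-dimension scores (1 = low, 5 = high) into a 0-1 risk index."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total = sum(weights[d] * scores[d] for d in DIMENSIONS)
    return total / (5 * sum(weights.values()))

# Example: an internal FAQ chatbot versus an AI system triaging benefit claims.
use_cases = {
    "FAQ bot":       {"data_sensitivity": 1, "oversight_required": 2, "wellbeing_impact": 1},
    "claims triage": {"data_sensitivity": 5, "oversight_required": 5, "wellbeing_impact": 4},
}

for name, scores in use_cases.items():
    risk = dimensional_risk(scores)
    tier = "enhanced review" if risk > 0.6 else "standard review"
    print(f"{name}: risk index {risk:.2f} -> {tier}")
```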

Re-Thinking Work in the Age of Generative AI

Generative AI is poised to reshape the nature of work, necessitating a re-evaluation of how tasks are organized and performed [2]. The "AI task tensor" provides a framework for understanding the impact of generative AI on human work [2]. It considers eight dimensions of tasks performed by a human-AI dyad, spanning task formulation, implementation, and resolution [2]. The tensor helps to organize emerging research on generative AI, offers a starting point for understanding how work will be performed as the technology spreads, and supports both analytical tractability and management intuition [2].
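The toy sketch below illustrates the idea of locating a task in a multi-dimensional space describing how a human-AI dyad divides the work. The dimension names and levels are placeholders invented for illustration; the actual eight dimensions are defined in [2].

```python
# Toy representation of a task as a point in a multi-dimensional space describing
# how a human-AI dyad divides the work. The dimension names and levels below are
# placeholders for illustration; the actual eight dimensions are defined in [2].

DIMENSIONS = {
    # formulation phase (hypothetical dimensions)
    "goal_origin":        ["human", "ai", "joint"],
    "task_specificity":   ["open", "constrained"],
    # implementation phase (hypothetical dimensions)
    "generation":         ["human", "ai", "joint"],
    "iteration_control":  ["human", "ai"],
    # resolution phase (hypothetical dimensions)
    "evaluation":         ["human", "ai", "joint"],
    "acceptance":         ["human", "ai"],
    "accountability":     ["human", "joint"],
    "learning_capture":   ["none", "human", "ai"],
}

# Every combination of levels is one way a dyad could configure a task.
n_cells = 1
for levels in DIMENSIONS.values():
    n_cells *= len(levels)
print(f"{len(DIMENSIONS)} dimensions, {n_cells} distinct task configurations")

# One concrete cell: the human frames the goal, the AI drafts, the human accepts.
example_task = {
    "goal_origin": "human", "task_specificity": "open",
    "generation": "ai", "iteration_control": "human",
    "evaluation": "joint", "acceptance": "human",
    "accountability": "human", "learning_capture": "human",
}
assert all(example_task[d] in levels for d, levels in DIMENSIONS.items())
```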

The integration of AI into HRM also requires a nuanced understanding of how practitioners use AI-related resources and guidelines [3]. Industry practitioners use the People + AI Guidebook to address AI design challenges, to educate colleagues, to support cross-functional communication, and to develop internal resources [3]. They report wanting more support for early-phase ideation and problem formulation in order to avoid AI product failures [3].

Fostering Effective Human-AI Collaboration

Successful AI integration in HRM hinges on effective human-AI collaboration [6, 7, 10]. Recent research has explored how AI can learn to decide whether to complete a task or delegate it to a human [7]. Studies show that task performance and satisfaction improve through AI delegation, regardless of whether humans are aware of the delegation [7]. Moreover, increased levels of self-efficacy have been identified as the underlying mechanism for these improvements in performance and satisfaction [7].
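A minimal confidence-threshold policy illustrates the basic shape of such delegation: the AI keeps instances it is confident about and routes the rest to the human. This is a generic sketch rather than the learned delegation mechanism studied in [7]; the stand-in model, threshold, and data are assumptions.

```python
# Confidence-based delegation sketch: the model keeps instances it is confident
# about and routes the rest to a human reviewer. Generic illustration, not the
# specific delegation policy trained in [7].

import random

def model_predict(instance):
    """Stand-in for a trained classifier returning (label, confidence)."""
    confidence = random.uniform(0.4, 1.0)
    return ("accept" if confidence > 0.7 else "reject"), confidence

def human_review(instance):
    """Stand-in for the human expert in the dyad."""
    return "accept"

def delegate(instances, threshold=0.8):
    decisions = []
    for x in instances:
        label, conf = model_predict(x)
        if conf >= threshold:
            decisions.append((x, label, "ai"))
        else:
            decisions.append((x, human_review(x), "human"))
    return decisions

random.seed(0)
results = delegate([f"case-{i}" for i in range(10)])
handled_by_ai = sum(1 for _, _, who in results if who == "ai")
print(f"AI handled {handled_by_ai}/10 cases; the rest were delegated to the human.")
```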

Human-guided training is a critical component of AI development and deployment [4]. Human oversight during training can alleviate technical and ethical pressures on AI, boosting AI performance with human intuition while addressing fairness and explainability [4]. The Model Mastery Lifecycle framework offers guidance on human-AI task allocation and on how human-AI interfaces need to adapt as AI task performance improves over time [10]. The framework identifies a zone of uncertainty in which the questions of human-AI task allocation and user interface design are likely to be most challenging [10].
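The illustrative rule below captures the framework's core intuition: allocation shifts from human-led to AI-led as measured AI performance improves, with a zone of uncertainty in between where shared control and richer interfaces are needed. The accuracy comparison and thresholds are assumptions, not values from [10].

```python
# Illustrative allocation rule in the spirit of the Model Mastery Lifecycle [10];
# the 0.05 margin and the accuracy numbers are assumptions for demonstration.

def allocate(ai_accuracy: float, human_accuracy: float, margin: float = 0.05) -> str:
    if ai_accuracy < human_accuracy - margin:
        return "human-led (AI suggests, human decides)"
    if ai_accuracy > human_accuracy + margin:
        return "AI-led (human audits samples)"
    return "zone of uncertainty: shared control, explanation-heavy interface"

for acc in (0.70, 0.82, 0.95):
    print(f"AI accuracy {acc:.2f} vs human 0.85 -> {allocate(acc, 0.85)}")
```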

AI can also be integrated into the design process itself to improve collaboration [15]. Endowing AI with awareness can make communication between human and AI more fluid, thereby improving collaboration efficiency [15].

Prioritizing Employee Well-being in the AI Era

The integration of AI into HRM processes has the potential to significantly impact employee well-being [6]. While AI can enhance efficiency and reduce bias, it raises concerns about job security, fairness, and privacy [6]. Transparency in AI systems emerges as a critical factor in fostering trust and positive employee attitudes [6]. AI systems can both support and undermine employee well-being, depending on how they are implemented and perceived [6]. Organizational strategies, such as clear communication, upskilling programs, and employee involvement in AI implementation, are crucial for mitigating negative impacts and enhancing positive outcomes [6]. The successful integration of AI in HR requires a balanced approach that prioritizes employee well-being, facilitates human-AI collaboration, and ensures ethical and transparent AI practices alongside technological advancement [6].

Governing AI in HRM: Frameworks and Challenges

The rapid development and broad application of AI necessitate robust governance frameworks [13, 18]. Adaptive governance, where AI governance and AI co-evolve, is essential for governing generative AI [13]. This approach emphasizes the need for regulators to promote proactive risk management and maintain ongoing vigilance and agility [18].

The use of AI in HRM raises concerns about bias and fairness [12]. Gender bias in leadership extends beyond human managers and towards AI-driven decision-makers [12]. As AI takes on greater managerial roles, understanding and addressing these biases will be crucial for designing fair and effective AI management systems [12].
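One concrete way to surface such bias in practice is a simple selection-rate audit of an AI manager's decisions across groups (a demographic-parity check). The sketch below is a generic audit on synthetic data, not the experimental design of [12].

```python
# Minimal fairness check on an AI manager's decisions: compare favorable-outcome
# rates across groups (demographic parity difference). Generic audit sketch with
# synthetic data, not the methodology of [12].

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favorable: bool) pairs."""
    counts, favorable = defaultdict(int), defaultdict(int)
    for group, fav in decisions:
        counts[group] += 1
        favorable[group] += fav
    return {g: favorable[g] / counts[g] for g in counts}

decisions = (
    [("women", True)] * 30 + [("women", False)] * 70
    + [("men", True)] * 45 + [("men", False)] * 55
)

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # a large gap flags the system for review
```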

Progressive decentralization has been proposed as a model for governing the agent-to-agent economy of trust [11]. Cryptoeconomic incentives can underpin decentralized governance systems that allow AI agents to interact and exchange value autonomously while preserving human oversight [11].

AI in HRM: Applications and Future Directions

AI is finding applications across various functions relevant to HRM, including talent analytics, performance evaluation, and resource optimization [5, 16]. For example, AI can optimize resources in Unmanned Aerial Vehicle (UAV)-assisted Internet of Things (IoT) networks, and incorporating generative AI models for real-time decision-making can further improve resource management within these networks [5].

AI techniques have revolutionized HRM, particularly in talent analytics [16]. The availability of large-scale talent and management-related data gives business leaders opportunities to understand organizational behaviors and gain tangible knowledge from a data science perspective, and a comprehensive survey of AI technologies for talent analytics in HRM documents these developments [16].
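A representative talent-analytics task of the kind surveyed in [16] is attrition prediction from routine HR features. The sketch below trains a logistic regression on synthetic data; the features, coefficients, and data-generating process are invented for illustration only.

```python
# Toy talent-analytics task: predicting attrition from a handful of HR features.
# The features and synthetic data-generating process are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
tenure_years = rng.exponential(4, n)
overtime_hours = rng.normal(5, 3, n).clip(0)
satisfaction = rng.uniform(1, 5, n)
# Synthetic ground truth: attrition rises with overtime, falls with tenure and satisfaction.
logit = 0.25 * overtime_hours - 0.3 * tenure_years - 0.8 * satisfaction + 1.5
left = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([tenure_years, overtime_hours, satisfaction])
X_train, X_test, y_train, y_test = train_test_split(X, left, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Attrition AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```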

Recommender Systems (RS), a field of AI focused on suggesting relevant items to users, can also be used to bridge the gap between AI data analytics and AI-based portfolio construction methods [9].

Researchers are also investigating the impact of generative AI on the U.S. federal workforce [17]. This work offers policy recommendations essential for workforce planning in the era of AI, aims to inform strategic workforce planning and policy development within federal agencies, and demonstrates how the approach can be replicated across other large employment institutions and labor markets [17].

Future Directions

Future research should focus on several key areas. First, more work is needed on the long-term impacts of AI on employee well-being, including mental health, job satisfaction, and career development [6]. Second, further investigation is required to understand and mitigate biases in AI systems used in HRM, ensuring fair and equitable outcomes for all employees [12]. Third, governance frameworks must become more robust and adaptable so that they can keep pace with the rapid evolution of AI technologies [13, 18]. Fourth, research should explore how to integrate AI into HRM processes in ways that strengthen human-AI collaboration and optimize task allocation [7, 10]. Fifth, the role of AI in training and development deserves closer study, including the use of AI-powered tools to personalize learning experiences and upskill the workforce [4]. Finally, continued effort is required to develop and refine ethical guidelines and best practices for the responsible and transparent use of AI in HRM, ensuring that human values and organizational goals remain aligned [1, 14]. By addressing these areas, researchers and practitioners can work together to harness the power of AI to transform HRM, creating more efficient, equitable, and human-centered workplaces.

==================================================

References

[1] Anthony M. Barrett, Dan Hendrycks, Jessica Newman, Brandie Nonnecke. Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks. arXiv:2206.08966v3 (2022). Available at: http://arxiv.org/abs/2206.08966v3
[2] Anil R. Doshi, Alastair Moore. Towards an AI task tensor: A taxonomy for organizing work in the age of generative AI. arXiv:2503.15490v1 (2025). Available at: http://arxiv.org/abs/2503.15490v1
[3] Nur Yildirim, Mahima Pushkarna, Nitesh Goyal, Martin Wattenberg, Fernanda Viegas. Investigating How Practitioners Use Human-AI Guidelines: A Case Study on the People + AI Guidebook. arXiv:2301.12243v2 (2023). Available at: http://arxiv.org/abs/2301.12243v2
[4] Cary Coglianese, Colton R. Crum. Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence. arXiv:2402.08466v2 (2024). Available at: http://arxiv.org/abs/2402.08466v2
[5] Sana Sharif, Sherali Zeadally, Waleed Ejaz. Resource Optimization in UAV-assisted IoT Networks: The Role of Generative AI. arXiv:2405.03863v1 (2024). Available at: http://arxiv.org/abs/2405.03863v1
[6] Soheila Sadeghi. Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes. arXiv:2412.04796v1 (2024). Available at: http://arxiv.org/abs/2412.04796v1
[7] Patrick Hemmer, Monika Westphal, Max Schemmer, Sebastian Vetter, Michael Vössing, Gerhard Satzger. Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction. arXiv:2303.09224v1 (2023). Available at: http://arxiv.org/abs/2303.09224v1
[8] Philip Feldman, Aaron Dant, Harry Dreany. War Elephants: Rethinking Combat AI and Human Oversight. arXiv:2404.19573v1 (2024). Available at: http://arxiv.org/abs/2404.19573v1
[9] Alicia Vidler. Recommender Systems in Financial Trading: Using machine-based conviction analysis in an explainable AI investment framework. arXiv:2404.11080v1 (2024). Available at: http://arxiv.org/abs/2404.11080v1
[10] Mark Chignell, Mu-Huan Miles Chung, Jaturong Kongmanee, Khilan Jerath, Abhay Raman. The Model Mastery Lifecycle: A Framework for Designing Human-AI Interaction. arXiv:2408.12781v1 (2024). Available at: http://arxiv.org/abs/2408.12781v1
[11] Tomer Jordi Chaffer. Governing the Agent-to-Agent Economy of Trust via Progressive Decentralization. arXiv:2501.16606v1 (2025). Available at: http://arxiv.org/abs/2501.16606v1
[12] Hao Cui, Taha Yasseri. Gender Bias in Perception of Human Managers Extends to AI Managers. arXiv:2502.17730v1 (2025). Available at: http://arxiv.org/abs/2502.17730v1
[13] Anka Reuel, Trond Arne Undheim. Generative AI Needs Adaptive Governance. arXiv:2406.04554v1 (2024). Available at: http://arxiv.org/abs/2406.04554v1
[14] Brian E. Perron, Lauri Goldkind, Zia Qi, Bryan G. Victor. Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s). arXiv:2501.11705v1 (2025). Available at: http://arxiv.org/abs/2501.11705v1
[15] Zhuoyi Cheng, Pei Chen, Wenzheng Song, Hongbo Zhang, Zhuoshu Li, Lingyun Sun. An Exploratory Study on How AI Awareness Impacts Human-AI Design Collaboration. arXiv:2502.16833v1 (2025). Available at: http://arxiv.org/abs/2502.16833v1
[16] Chuan Qin, Le Zhang, Yihang Cheng, Rui Zha, Dazhong Shen, Qi Zhang, Xi Chen, Ying Sun, Chen Zhu, Hengshu Zhu, Hui Xiong. A Comprehensive Survey of Artificial Intelligence Techniques for Talent Analytics. arXiv:2307.03195v2 (2023). Available at: http://arxiv.org/abs/2307.03195v2
[17] William G. Resh, Yi Ming, Xinyao Xia, Michael Overton, Gul Nisa Gürbüz, Brandon De Breuhl. Complementarity, Augmentation, or Substitutivity? The Impact of Generative Artificial Intelligence on the U.S. Federal Workforce. arXiv:2503.09637v1 (2025). Available at: http://arxiv.org/abs/2503.09637v1
[18] Cary Coglianese, Colton R. Crum. Regulating Multifunctionality. arXiv:2502.15715v1 (2025). Available at: http://arxiv.org/abs/2502.15715v1