The rapid development and broad adoption of artificial intelligence (AI) across industries has opened an era of significant technological opportunity. AI is reshaping education [1], professional coaching [3], human-computer interaction [11], and transportation [8], among many other fields, to varying degrees depending on the nature and maturity of the AI capabilities applied. This technological revolution, however, raises ethical, social, and legal challenges, and responsible AI (RAI) development and deployment therefore requires thorough examination. Drawing on recent literature, this review synthesizes the critical themes underpinning responsible AI research and practice. We then examine how AI is being adopted across domains through the lens of stakeholder perceptions, and we discuss barriers to responsible implementation and future directions for ensuring that AI benefits humanity while mitigating potential harms.

AI in Education and Skills Development

The integration of AI in education is rapidly evolving, with conversational AI tools emerging as a significant area of exploration [1]. These tools offer the potential to personalize learning experiences, provide tailored feedback, and enhance student engagement, but their effectiveness hinges on their ability to adapt to diverse educational contexts and instructional needs [1]. Educators are actively exploring how AI can aid in assessment, curriculum development, and making real-world connections for students [1], including how AI can adjust the cognitive demand of instruction to meet the unique needs of learners. Yet the capacity of AI to consistently adapt its responses across different educational settings remains a challenge, highlighting the need for ongoing development and refinement of these tools [1]; one way such adaptation might be mechanized is sketched below.
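
The following is a minimal sketch of what conditioning a conversational tutor on educational context and target cognitive demand could look like. It is not the system studied in [1]; the context labels, demand levels, and prompt wording are all hypothetical.

```python
# Hypothetical sketch: conditioning a conversational math tutor's prompt on
# educational context and target cognitive demand. Labels and wording are
# illustrative, not drawn from the tool studied in [1].

COGNITIVE_DEMAND = {
    "recall": "Ask the student to state the relevant fact or procedure.",
    "apply": "Ask the student to carry out the procedure on a new example.",
    "analyze": "Ask the student to explain why the procedure works and "
               "where it might fail.",
}

def build_tutor_prompt(context: str, demand: str, topic: str) -> str:
    """Compose a system prompt that adapts instruction to the setting."""
    if demand not in COGNITIVE_DEMAND:
        raise ValueError(f"unknown demand level: {demand}")
    return (
        f"You are a mathematics tutor working in a {context} setting. "
        f"Topic: {topic}. "
        f"{COGNITIVE_DEMAND[demand]} "
        "Give feedback tailored to the student's last answer, and raise or "
        "lower the demand level if they are consistently right or wrong."
    )

print(build_tutor_prompt("middle-school classroom", "apply", "linear equations"))
```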

Beyond formal education, the development of AI literacy is crucial for all stakeholders. Research-through-Design methodologies are gaining prominence in this space, particularly in the context of generative AI [2]. This approach involves hands-on interaction with AI technologies, fostering both practical and critical competencies among students [2]; such "critical responsivity" is essential for equipping individuals to navigate and shape the evolving landscape of AI [2]. Understanding public perceptions of AI is equally important, especially children's understanding and potential misconceptions. Studies reveal that children often hold misconceptions about AI, sometimes conceptualizing it as a human-like entity or a machine with pre-installed intelligence [9]. This highlights the need for tailored AI literacy curricula that address these misconceptions and promote a more accurate understanding of AI.

AI in Professional Domains: Coaching and Data Storytelling

The application of AI is also transforming professional practices, with significant implications for fields like coaching [3] and data storytelling [7]. In professional coaching, generative AI tools are being adopted for research, content creation, and administrative tasks [3]. Coaches report that AI tools are valuable aids, particularly in automating tasks and providing readily available information [3]. Ethical considerations are also paramount, with transparency and data privacy emerging as key concerns [3]. The perceived effectiveness of AI tools strongly influences their adoption, but the primary use case remains augmentation rather than replacement of human coaches [3]. This suggests the need for human-centered AI integration, prioritizing AI literacy training and ethical guidelines to ensure responsible implementation [3].

In data storytelling, AI is poised to assist data workers in creating compelling narratives from complex datasets [7]. Human-AI collaboration in this domain, however, reveals nuanced preferences: while data workers express enthusiasm for AI assistance, they also identify specific tasks and stages in the workflow where they prefer to retain control or oversight [7]. This preference is fueled by a desire for creative ownership and a reluctance to cede control over the narrative. Because the preferred collaboration pattern varies by task, AI tools must be designed to integrate seamlessly with human workflows and address specific needs [7], as the sketch below illustrates.
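
One way to operationalize such stage-specific preferences is to encode them explicitly in the tool's configuration. The stage names and role assignments below are hypothetical illustrations, not the taxonomy used in [7].

```python
# Hypothetical sketch: encoding per-stage human/AI role preferences for a
# data-storytelling tool. Stage names and defaults are illustrative, not
# the taxonomy from [7].

from enum import Enum

class Role(Enum):
    HUMAN_LED = "human leads, AI suggests"
    AI_ASSISTED = "AI drafts, human reviews"
    AI_AUTOMATED = "AI executes, human audits"

# Default collaboration pattern per workflow stage (illustrative).
DEFAULT_ROLES = {
    "data_analysis": Role.AI_ASSISTED,      # tedious, automation welcome
    "story_planning": Role.HUMAN_LED,       # creative control stays human
    "chart_generation": Role.AI_AUTOMATED,  # mechanical, low risk
    "narrative_writing": Role.HUMAN_LED,    # voice and framing matter
}

def role_for(stage: str, overrides: dict[str, Role] | None = None) -> Role:
    """Resolve the collaboration pattern for a stage, honoring user overrides."""
    overrides = overrides or {}
    return overrides.get(stage, DEFAULT_ROLES[stage])

# A worker who wants AI help drafting the narrative, unlike the default:
print(role_for("narrative_writing", {"narrative_writing": Role.AI_ASSISTED}))
```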

Stakeholder Perspectives and the Challenges of Responsible AI

A critical dimension of responsible AI involves understanding the perspectives of various stakeholders and the barriers they perceive in implementing RAI practices [4, 8]. In the transportation sector, for example, professionals exhibit mixed attitudes towards AI's impact [8]. While there is widespread optimism about AI's potential to improve efficiency and the traveler experience, concerns remain about equity and the potential for AI to exacerbate existing inequalities [8]. The study also finds that many respondents worry about AI ethics, and it points to the need for targeted education to improve AI understanding among transportation professionals [8]. The attitude groups themselves were identified with a latent class cluster analysis, sketched below.
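
For readers unfamiliar with the method, the following is a minimal sketch of grouping survey respondents into latent attitude classes. It does not reproduce the model specification in [8]: as a stand-in, it uses scikit-learn's GaussianMixture on numerically coded Likert responses, whereas a true latent class analysis models categorical indicators directly.

```python
# Minimal sketch: grouping survey respondents into latent attitude classes.
# A Gaussian mixture substitutes for a true latent class model, and the
# Likert-style data are randomly generated; this is not the analysis in [8].

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Fake survey: 300 respondents x 5 items (1-5 Likert), e.g. optimism about
# efficiency, equity concern, ethics concern, job impact, training need.
responses = rng.integers(1, 6, size=(300, 5)).astype(float)

# Fit candidate models and pick the class count by BIC, a common heuristic
# for choosing the number of latent classes.
best_model, best_bic = None, np.inf
for k in range(2, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(responses)
    bic = gm.bic(responses)
    if bic < best_bic:
        best_model, best_bic = gm, bic

labels = best_model.predict(responses)
print(f"chosen classes: {best_model.n_components}, sizes: {np.bincount(labels)}")
```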

Stakeholder perspectives outside of technology companies are crucial to ensure that AI is developed and deployed responsibly [4]. Legal, civil society, and government stakeholders play a vital role in governing and auditing AI deployments [4]. These stakeholders are increasingly reliant on RAI artifacts like model cards and transparency notes [4]. However, they also express concerns about the potential unintended consequences of these artifacts, including impacts on power dynamics and the ability of civil society to protect end-users from AI harms [4]. The study highlights the need for structural changes and improvements in the design, use, and governance of RAI artifacts to support more collaborative and proactive external oversight of AI systems [4].
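
As context for these artifacts, a model card is essentially structured documentation shipped alongside a model. The following is a minimal sketch of such a structure; the field names loosely follow common model card templates rather than any specific artifact discussed in [4].

```python
# Minimal sketch of a model card as structured documentation. Field names
# are illustrative, loosely following common model card templates rather
# than any artifact discussed in [4].

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = "undisclosed"
    evaluation_summary: str = "undisclosed"
    known_limitations: list[str] = field(default_factory=list)
    contact: str = ""

card = ModelCard(
    model_name="example-classifier",
    version="1.0",
    intended_use="Triaging support tickets by topic.",
    out_of_scope_uses=["employment decisions", "medical advice"],
    known_limitations=["English-only training data"],
    contact="ml-governance@example.org",
)

# External reviewers can consume the card as machine-readable JSON.
print(json.dumps(asdict(card), indent=2))
```

A machine-readable format like this is one way external stakeholders could audit deployments at scale, though the barriers reported in [4] concern governance and power dynamics as much as tooling.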

The Role of AI in Human-Computer Interaction and Affective Computing

The impact of AI on human-computer interaction (HCI) and user experience (UX) is substantial [11]. AI is transforming how user research is conducted, how UX is designed, and how users interact with computing systems, applications, and services [11]. AI-enabled capabilities can improve the overall UX, making this a key area for responsible AI research [11].

A particularly intriguing line of research concerns the potential of AI to recognize and respond to human emotions, particularly pain and empathy [5]. Computational pain recognition and empathic AI show promise for healthcare and human-computer interaction [5]. The integration of empathy into AI systems presents both opportunities and challenges [5]: while there is consensus on the importance of empathic AI, future research must still address the technical barriers to reliably recognizing affective states and responding appropriately. The responsible evaluation of cognitive methods and computational techniques is also crucial to ensure that such systems are developed ethically and effectively [5]; a simplified recognition pipeline is sketched after this paragraph.
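
To illustrate what a pain-recognition pipeline involves at its simplest, the sketch below trains a classifier on facial-action-style features. The features and data are synthetic placeholders, and the labeling rule is a toy; real systems surveyed in [5] use far richer facial, physiological, and vocal signals.

```python
# Minimal sketch of a pain-recognition pipeline: features -> classifier ->
# pain/no-pain label. Features and data are synthetic placeholders; real
# systems surveyed in [5] use richer facial, physiological, and vocal signals.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic "facial action unit" intensities for 400 frames, 6 features each.
X = rng.random((400, 6))
# Toy labeling rule: strong brow-lowering plus eye-tightening implies pain.
y = ((X[:, 0] + X[:, 1]) > 1.1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
print(f"toy accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```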

Tools, Governance, and the AI Lifecycle

Effective RAI implementation requires robust governance structures and appropriate tools [6, 10]. Implementing RAI within an organization is complex because multiple stakeholders carry distinct responsibilities across the AI lifecycle [6], and those responsibilities are often ambiguously defined, leading to confusion and inefficiency [6]. A systematic review of RAI tools reveals significant imbalances across stakeholder roles and lifecycle stages: the majority of available tools support AI designers and developers during the data-centric and statistical modeling stages, while other roles and stages are neglected [6]. Moreover, existing tools are rarely validated, leaving critical gaps in usability and effectiveness [6]. Together, these findings expose gaps in RAI governance research and practice, and they provide a starting point for creating more effective and holistic approaches to responsible AI development and governance [6]. One simple way to surface such gaps is an (actor, stage) coverage audit, sketched below.
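
The following sketch audits a tool inventory for uncovered (actor, stage) combinations. The actor and stage labels, and the example tools, are illustrative rather than the exact taxonomy used in [6].

```python
# Minimal sketch of an (actor, stage) coverage audit over a tool inventory.
# Actor and stage labels are illustrative, not the exact taxonomy of [6].

from collections import Counter
from itertools import product

ACTORS = ["designer", "developer", "auditor", "policy", "end_user"]
STAGES = ["data", "modeling", "deployment", "monitoring"]

# Each tool is tagged with the actors and lifecycle stages it supports.
tools = [
    {"name": "fairness-audit-lib", "actors": ["developer"], "stages": ["modeling"]},
    {"name": "datasheet-template", "actors": ["designer"], "stages": ["data"]},
    {"name": "drift-monitor", "actors": ["developer"], "stages": ["monitoring"]},
]

coverage = Counter()
for tool in tools:
    for actor, stage in product(tool["actors"], tool["stages"]):
        coverage[(actor, stage)] += 1

# Report the cells with no tool support, i.e. the governance gaps.
gaps = [cell for cell in product(ACTORS, STAGES) if coverage[cell] == 0]
print(f"{len(gaps)} uncovered (actor, stage) cells, e.g. {gaps[:3]}")
```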

Universities also play a crucial role in promoting the responsible use of AI in research [10]. Institutions must guide researchers in using generative AI responsibly while navigating a complex regulatory landscape [10]. A framework for the responsible use of generative AI in research can help universities establish a principles-based position statement and support initiatives in training, communication, infrastructure, and process change [10]. While there is a growing body of literature on AI's impact on academic integrity for undergraduate students, comparatively little attention has been paid to the implications of generative AI for research integrity and to the vital role of institutions in addressing those challenges [10].

Industry Engagement and the Future of Responsible AI

Despite the growing emphasis on responsible AI, industry's engagement with this critical subfield remains poorly understood [12]. An analysis of industry's responsible AI research output reveals that the majority of AI firms show limited or no engagement [12]. There is a stark disparity between industry's involvement in conventional AI research and its contributions to responsible AI: leading AI firms produce significantly less responsible AI research than conventional AI research, and less than leading academic institutions [12]. The scope of industry's responsible AI research is also narrower, with limited diversity in the key topics addressed [12]. Moreover, industry patents rarely build upon insights generated by the responsible AI literature, suggesting a disconnect between responsible AI research and the commercialization of AI technologies [12]. This gap raises the risk that AI development diverges from a socially optimal path, with unintended consequences due to insufficient consideration of ethical and societal implications [12]. A toy version of the kind of engagement metric behind such comparisons is sketched below.
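
The comparison above can be made concrete with a simple publication-share metric. The counts below are invented placeholders, and the metric is one plausible formulation rather than the exact measure used in [12].

```python
# Toy sketch of an engagement metric: a publisher's responsible AI (RAI)
# papers as a share of its total AI output. Counts are invented
# placeholders; the actual measure in [12] may differ.

def rai_share(rai_papers: int, total_ai_papers: int) -> float:
    """Fraction of a publisher's AI output devoted to responsible AI."""
    if total_ai_papers == 0:
        return 0.0
    return rai_papers / total_ai_papers

publishers = {
    "firm_a": (12, 900),        # hypothetical firm: many AI papers, few RAI
    "firm_b": (3, 450),
    "university_x": (80, 700),  # hypothetical university: higher RAI share
}

for name, (rai, total) in sorted(
    publishers.items(), key=lambda kv: rai_share(*kv[1]), reverse=True
):
    print(f"{name:>12}: {rai_share(rai, total):.1%} of AI output is RAI")
```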

Future Directions

The responsible development and deployment of AI requires a multifaceted approach, encompassing technological advancements, ethical considerations, and robust governance structures. Several key areas warrant further exploration:

  • Enhancing AI Adaptability and Responsiveness: Future research should focus on developing AI tools that are more adaptable to diverse contexts, particularly in education [1]. This includes improving the ability of AI systems to understand and respond to evolving needs, proactively anticipating challenges, and providing tailored support.
  • Promoting AI Literacy and Critical Thinking: Educational initiatives should prioritize the development of AI literacy across all age groups and professional domains [2, 9]. This includes addressing misconceptions about AI, fostering critical thinking skills, and empowering individuals to engage with AI technologies responsibly.
  • Designing Human-Centered AI Systems: A critical aspect of responsible AI involves designing systems that augment human capabilities and prioritize human well-being [3, 7]. This includes understanding the preferences and needs of users, designing AI tools that seamlessly integrate with human workflows, and ensuring transparency and accountability in AI decision-making.
  • Strengthening RAI Governance and Oversight: Robust governance frameworks are essential to ensure that AI is developed and deployed ethically [4, 6, 10]. This includes establishing clear guidelines, promoting transparency, and empowering stakeholders to participate in the AI lifecycle.
  • Fostering Industry Engagement in RAI Research: Industry must increase its engagement in responsible AI research to mitigate potential risks and ensure that AI development aligns with societal values [12]. This includes investing in research, sharing knowledge, and collaborating with academic institutions and civil society organizations.
  • Addressing Ethical and Societal Implications: Further research is needed to address the ethical and societal implications of AI, including issues related to bias, fairness, privacy, and security [5, 8]. This includes developing methods for evaluating the ethical impact of AI systems and establishing mechanisms for accountability.
  • Exploring the Potential of Empathic AI: Further research should focus on the potential of AI to recognize and respond to human emotions, particularly pain and empathy [5]. This includes addressing the technical challenges of developing empathic AI systems and exploring the ethical implications of these technologies.

In conclusion, the responsible development and deployment of AI is a complex and evolving endeavor. By addressing the challenges and opportunities outlined in this review, and by fostering collaboration across disciplines and sectors, we can navigate the AI revolution in a way that benefits humanity and creates a more equitable and sustainable future.

==================================================

References

  [1] Alex Liu, Lief Esbenshade, Min Sun, Shawon Sarkar, Jian He, Victor Tian, Zachary Zhang. Adapting to Educate: Conversational AI's Role in Mathematics Education Across Different Educational Contexts. arXiv:2503.02999v1 (2025). Available at: http://arxiv.org/abs/2503.02999v1
  [2] Jesse Josua Benjamin, Joseph Lindley, Elizabeth Edwards, Elisa Rubegni, Tim Korjakow, David Grist, Rhiannon Sharkey. Responding to Generative AI Technologies with Research-through-Design: The Ryelands AI Lab as an Exploratory Study. arXiv:2405.04677v1 (2024). Available at: http://arxiv.org/abs/2405.04677v1
  [3] Jennifer Haase. Augmenting Coaching with GenAI: Insights into Use, Effectiveness, and Future Potential. arXiv:2502.14632v1 (2025). Available at: http://arxiv.org/abs/2502.14632v1
  [4] Anna Kawakami, Daricia Wilkinson, Alexandra Chouldechova. Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders. arXiv:2408.12047v1 (2024). Available at: http://arxiv.org/abs/2408.12047v1
  [5] Siqi Cao, Di Fu, Xu Yang, Stefan Wermter, Xun Liu, Haiyan Wu. Can AI detect pain and express pain empathy? A review from emotion recognition and a human-centered AI perspective. arXiv:2110.04249v2 (2021). Available at: http://arxiv.org/abs/2110.04249v2
  [6] Blaine Kuehnert, Rachel M. Kim, Jodi Forlizzi, Hoda Heidari. The "Who", "What", and "How" of Responsible AI Governance: A Systematic Review and Meta-Analysis of (Actor, Stage)-Specific Tools. arXiv:2502.13294v1 (2025). Available at: http://arxiv.org/abs/2502.13294v1
  [7] Haotian Li, Yun Wang, Q. Vera Liao, Huamin Qu. Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling. arXiv:2304.08366v2 (2023). Available at: http://arxiv.org/abs/2304.08366v2
  [8] Yiheng Qian, Tejaswi Polimetla, Thomas W. Sanchez, Xiang Yan. How do transportation professionals perceive the impacts of AI applications in transportation? A latent class cluster analysis. arXiv:2401.08915v1 (2024). Available at: http://arxiv.org/abs/2401.08915v1
  [9] Pekka Mertala, Janne Fagerlund. Finnish 5th and 6th graders' misconceptions about Artificial Intelligence. arXiv:2311.16644v1 (2023). Available at: http://arxiv.org/abs/2311.16644v1
  [10] Shannon Smith, Melissa Tate, Keri Freeman, Anne Walsh, Brian Ballsun-Stanton, Mark Hooper, Murray Lane. A University Framework for the Responsible use of Generative AI in Research. arXiv:2404.19244v1 (2024). Available at: http://arxiv.org/abs/2404.19244v1
  [11] Wei Xu. AI in HCI Design and User Experience. arXiv:2301.00987v3 (2023). Available at: http://arxiv.org/abs/2301.00987v3
  [12] Nur Ahmed, Amit Das, Kirsten Martin, Kawshik Banerjee. The Narrow Depth and Breadth of Corporate Responsible AI Research. arXiv:2405.12193v1 (2024). Available at: http://arxiv.org/abs/2405.12193v1