
The Responsibility Conundrum: Moving Beyond Technology and Code

Tursunbayeva, A.; Moschera, L.
2023-01-01

Abstract

Rapid advances in digital technologies such as AI, robotics, and cloud computing are transforming organizations and societies. While these technologies offer benefits such as improved efficiency and enhanced well-being, there are concerns about their negative impacts on individuals, organizations, and society, known as the "dark side" of technology. As a result, numerous guidelines and normative statements for responsible AI governance have been proposed. However, a research gap remains regarding how these guidelines are implemented within organizations, particularly with respect to the responsibility of AI stakeholders and the skills they require. In this study, we investigate whether and how AI guidelines are implemented in organizational recruitment processes by analyzing approximately 2,800 AI-related job advertisements using natural language processing (NLP) techniques, which are rapidly gaining popularity in management research for their capability to analyze and understand human language automatically. The study aims to provide insights into the demand for responsible AI skills across contexts and labor markets. Our preliminary findings reveal that existing job descriptions largely overlook the concept of responsibility and neglect widely recognized responsibility-related concepts such as transparency, explainability, and human-centeredness. The paper provides a unique synthesis and original insights for future research on responsible technologies, stakeholders, and organizations, and offers a comprehensive taxonomy of the responsibility principles and competencies that organizational stakeholders should possess in truly responsible organizations. It also has practical implications for companies, raising awareness of the current lack of integration of responsible AI principles across organizational processes, including recruitment through job advertisements, which are critical pieces of information through which candidates and other stakeholders form their perceptions of organizational image and reputation. Finally, our findings underscore the need for immediate action by management scholars and organizational managers to develop profiles for AI professionals that incorporate responsibility-related skills alongside technical ones. Doing so would be a step "small" in effort but "huge" in impact toward advancing responsible AI, bringing us closer to the dream of ethical and trustworthy AI.
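For illustration only, the minimal Python sketch below shows how a keyword-based NLP scan of job advertisements for responsibility-related concepts might look. The sample ads, term list, and matching approach are assumptions for demonstration, not the study's actual pipeline.

    # Illustrative sketch (assumed approach, not the paper's actual pipeline):
    # count how many job advertisements mention responsibility-related terms.

    # Hypothetical sample of job-ad texts; the real study analyzed ~2,800 ads.
    job_ads = [
        "Seeking ML engineer skilled in Python, TensorFlow, and cloud computing.",
        "AI specialist to build explainable, human-centered models with transparency in mind.",
    ]

    # Assumed term stems drawn from the responsibility concepts named in the abstract.
    RESPONSIBILITY_TERMS = [
        "responsib", "transparen", "explainab", "human-cent", "ethic", "trustworth",
    ]

    def mentions_responsibility(text: str) -> bool:
        """Return True if the ad contains any responsibility-related stem."""
        lowered = text.lower()
        return any(term in lowered for term in RESPONSIBILITY_TERMS)

    hits = sum(mentions_responsibility(ad) for ad in job_ads)
    print(f"{hits}/{len(job_ads)} ads mention responsibility-related concepts")

In practice, a study at this scale would typically layer standard NLP preprocessing, such as tokenization and lemmatization, on top of such matching to catch morphological variants of each concept.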
ISBN: 978-0-9956413-6-5

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11367/119956