Simon Altmejd
October 1, 2024 - 6 min read
Management and governance

How can AI shape the future of self-management? Insights from the academic literature

In this article, I want to share some insights from the academic literature about the potential ways in which artificial intelligence (AI) may shape the future of self-management. AI is a set of emerging technologies built on learning algorithms, computational power and statistical techniques that can approximate the output of knowledge workers[1]. 

Technologies using AI are currently evolving at a dizzying speed. As organizations continue to experiment with them in unexpected ways, new organizational forms often emerge, as we have seen with Uberization and the gig economy[2]. And as public and political debate about the regulation of these technologies continues, multiple scenarios remain possible for the future of work, some more desirable than others[3].

In this article, I will synthesize the literature to propose key opportunities as well as risks that AI brings for managers seeking to implement self-management practices. I will divide my analysis into the three essential dimensions of organizing: decision-making, coordination, and control[4]. In doing so, I will briefly discuss how each dimension may be affected by AI in the context of self-management.

[💡article pick] What is Shared Governance?

Decision-making: increasing autonomy of workers by informing their courses of action

Self-management requires teams of employees to work with a high level of autonomy. When AI tools are successfully integrated, they can substantially improve the basis on which teams make decisions, for example by incorporating rich and diverse sources of data[4]. This can enable workers to draw insights from their organization's complete stock of knowledge and make more informed decisions. In turn, the capacity to make more informed decisions can increase the autonomy of teams, reducing the need for managerial oversight.

As recent months have shown, generative AI built on Large Language Models (LLMs) can be especially powerful for assisting employees. While management research on LLMs is still in its infancy, many studies show that using generative AI at work (such as ChatGPT) tends to act as a skill leveler[5][6]. Raising the performance of less skilled workers to put them on par with higher performers can flatten hierarchies of expertise in a way that encourages every employee to create value.

Furthermore, using generative AI to democratize organizational knowledge can also increase the transparency of key organizational processes[7], which has been established as a necessary condition for self-management[8].

While tools such as Holaspirit enable role transparency, combining them with LLM software may further facilitate self-management, for example by providing employees with recommended courses of action. Such personal assistance lowers the psychological barrier to engaging in self-management, making it a viable practice for a larger pool of workers and their diverse personality profiles[9].

To help employees save time, we have recently integrated an AI assistant into our platform. Instead of creating roles from scratch, members can now generate the content of a role or circle with AI, which means they can map their organization twice as fast as before. Want to know more? Click here to discover our new AI-powered org chart.
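To make the idea concrete, here is a minimal sketch of how LLM-assisted role drafting could work behind the scenes. The `call_llm` function, the prompt structure, and the field names are illustrative assumptions, not Holaspirit's actual implementation.

```python
# Illustrative sketch: drafting a role description with an LLM.
# `call_llm` is a placeholder for whichever LLM provider is used.

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its text response."""
    raise NotImplementedError("Plug in your LLM provider here.")

def draft_role(role_name: str, circle: str, purpose_hint: str) -> str:
    """Build a prompt asking the model to draft a role's purpose and accountabilities."""
    prompt = (
        f"You are helping a self-managing organization map its roles.\n"
        f"Draft a role description for '{role_name}' in the circle '{circle}'.\n"
        f"Context provided by the member: {purpose_hint}\n"
        f"Return a short purpose statement and 3-5 accountabilities."
    )
    return call_llm(prompt)

# Example usage: a member seeds the draft with one sentence instead of writing it from scratch.
# draft_role("Onboarding Buddy", "People & Culture", "help new hires feel at home in their first month")
```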

[💡article pick] How Does AI Impact Employees Within an Organization?

Coordination: improving large-scale task integration without managerial hierarchies

As we know, it is often easier for smaller organizations to remain fully self-managed over time than it is for large ones[10]. One reason for this is that managerial authority remains a very effective way to integrate complex, interdependent tasks across distant organizational units, especially in times of high uncertainty when a lot of mutual adjustment is necessary. In these situations, AI can play a key role in re-engineering workflows to reduce inter-team conflicts, thus enabling large-scale collaboration without the need for managerial authority[11].

For example, Amazon Mechanical Turk (MTurk), a crowdsourcing marketplace for outsourcing work to a distributed workforce, uses AI to reduce coordination costs by allocating tasks, checking for quality, and aggregating different outputs into a cohesive whole[12]. While MTurk functions through a centralized governance system, the mechanisms through which it enables large-scale collaboration may be of great value for self-managing organizations[13].
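The aggregation step can be illustrated with a simple majority-vote sketch: each micro-task is assigned to several workers, and an answer is accepted only when enough of them agree, with disagreements escalated for review instead of to a manager. This is a deliberately simplified illustration of the general mechanism, not MTurk's actual pipeline; the agreement threshold is an arbitrary example value.

```python
from collections import Counter
from typing import List, Optional

def aggregate_answers(answers: List[str], min_agreement: float = 0.6) -> Optional[str]:
    """Accept the majority answer only if enough workers agree; otherwise flag for review."""
    if not answers:
        return None
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer if count / len(answers) >= min_agreement else None

# Three workers label the same item; two agree, so the answer is accepted automatically.
print(aggregate_answers(["cat", "cat", "dog"]))   # -> "cat"
print(aggregate_answers(["cat", "dog", "bird"]))  # -> None (no consensus, escalate)
```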

In times of crisis, it can be especially difficult for organizations to maintain self-management practices, as old habits of centralized authority often tend to creep back in[14]. And while traditional hierarchies remain an efficient tool for solving intra-organizational conflicts, the need for conflict resolution in times of crisis can be diminished with a smoother and more intelligent integration of tasks, which AI can help achieve.

It is important to note, however, that AI technologies tend to perform especially poorly in times of crisis. This is because learning algorithms can only be trained on data from past occurrences, which usually reflect normal patterns of action. As new and unexpected events occur, such as the COVID-19 pandemic, AI models prove particularly error-prone[15]. Human intelligence therefore remains central for coordinating in times of crisis. When it comes to self-management, it is important to remember that dividing and integrating tasks within organizations often remains an informal process that requires proximity and mutual understanding. While AI can help, it is no panacea.

Control: providing real-time feedback to workers on their performance

Organizational control ensures that the work of departments, teams, and individuals remains aligned with the goals of the organization. In traditional organizations, this is often achieved through top-down, hierarchical control mechanisms.

Because AI can continuously and consistently monitor deviations from key benchmarks, it can be (and already has been) used to give employees and teams feedback on their performance and help them achieve their goals. This in turn can help employees and teams self-adjust and course-correct when needed.
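As a toy illustration of this kind of feedback loop, the sketch below compares a team's rolling metric against a benchmark and returns a self-correction prompt to the team rather than escalating to a manager. The metric, threshold, and example numbers are invented for the illustration.

```python
from statistics import mean
from typing import List

def feedback(recent_values: List[float], benchmark: float, tolerance: float = 0.1) -> str:
    """Compare the rolling average of a team metric to its benchmark and suggest a self-adjustment."""
    rolling = mean(recent_values)
    deviation = (rolling - benchmark) / benchmark
    if abs(deviation) <= tolerance:
        return "On track: no adjustment needed."
    direction = "above" if deviation > 0 else "below"
    return f"Rolling average is {abs(deviation):.0%} {direction} the benchmark: review and self-adjust."

# A team tracking cycle time (in days) against a benchmark of 5 days.
print(feedback([6.5, 7.0, 6.8], benchmark=5.0))
```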

While this can enable less hierarchical control mechanisms based on self- and peer-monitoring, there are also important privacy concerns that need to be addressed. For instance, AI is increasingly used in organizations to expand the scope of surveillance[16], creating what scholars have coined “algorithmic cages”[17]. One example is Uber, where drivers are nudged into compliance through rigid punishment and reward systems enabled by AI algorithms[18]. Such examples are not unique to the gig economy, as can be seen in the growing popularity of office surveillance technologies[19]. These labor-market trends pose a significant threat to the growth of self-management, as it is easy to imagine how constant algorithmic surveillance undermines psychological safety.

Therefore, whether AI promotes the growth of localized communities of self-management rather than leading to digital authoritarianism may ultimately depend on whether it is used in ways that protect the privacy of workers. This depends not only on the tools that managers choose to implement, but also on how these technologies are regulated in the political sphere[20]. While the further development of AI can create empowering conditions for employees, it is important to remember that how it will shape the future of work rests on the decisions of managers and policymakers across the globe. The story is still unfolding, and managers have a leading role.

Citations:

[1] Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62-70.
[2] Faraj, S., & Pachidi, S. (2021). Beyond Uberization: The co-constitution of technology and organizing. Organization Theory, 2(1), 2631787721995205.
[3] Bodrožić, Z., & Adler, P. S. (2022). Alternative futures for the digital transformation: A macro-level Schumpeterian perspective. Organization Science, 33(1), 105-125.
[4] Faraj, S., Renno, W., & Bhardwaj, A. (2022). AI and Uncertainty in Organizing.
[5] Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Available at SSRN: https://ssrn.com/abstract=4375283
[6] Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., ... & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, (24-013).
[7] Schildt, H. (2017). Big data and organizational design–the brave new world of algorithmic management and computer augmented transparency. Innovation, 19(1), 23–30.
[8] Martela, F. (2019). What makes self-managing organizations novel? Comparing how Weberian bureaucracy, Mintzberg’s adhocracy, and self-organizing solve six fundamental problems of organizing. Journal of Organization Design, 8(1), 1-23.
[9] Reitzig, M. (2022). How to get better at flatter designs: considerations for shaping and leading organizations with less hierarchy. Journal of Organization Design, 11(1), 5-10.
[10] Foss, N. J., & Klein, P. G. (2022). Why managers matter: The perils of the bossless company. PublicAffairs.
[11] Puranam, P. (2022). Deflating the rhetoric around “flat firms”. Journal of Organization Design, 11(1), 15-17.
[12] Schwartz, O. (2019). Untold history of AI: How Amazon’s mechanical turkers got squeezed inside the machine. IEEE Spectrum. Institute of Electrical and Electronics Engineers, New York.
[13] Lee, M. Y., & Edmondson, A. C. (2017). Self-managing organizations: Exploring the limits of less-hierarchical organizing. Research in organizational behavior, 37, 35-58.
[14] Lee, M. Y., & Green, P. (2022). Is Flat for Everyone? Investigating Who Thrives and Who Struggles in Decentralized Structures.
[15] Faraj, S., Renno, W., & Bhardwaj, A. (2021). Unto the breach: What the COVID-19 pandemic exposes about digitalization. Information and Organization, 31(1), 100337.
[16] Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
[17] Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62-70.
[18] Cameron, L. D., & Rahman, H. (2022). Expanding the locus of resistance: Understanding the co-constitution of control and resistance in the gig economy. Organization Science, 33(1), 38-58.
[19] de Vaujany, F.-X., Leclercq-Vandelannoitte, A., Munro, I., Nama, Y., & Holt, R. (2021). Control and Surveillance in Work Practice: Cultivating Paradox in ‘New’ Modes of Organizing.
[20] Bodrožić, Z., & Adler, P. S. (2022). Alternative futures for the digital transformation: A macro-level Schumpeterian perspective. Organization Science, 33(1), 105-125.

Revolutionize the way you work now!