
Publications

2024

Between Formal Authority and Authority of Competence: The Mechanisms of Algorithmic Conformity

Yotam Liel, Lior Zalmanson

Academy of Management Proceedings

Volume 2024, Issue 1

Academy of Management

In the era of AI-human collaborations within organizations, preserving independent human judgment is crucial. Recent studies reveal a concerning trend known as "algorithmic conformity," where individuals frequently adhere to flawed algorithmic recommendations. Despite the documented prevalence of this behavior, the underlying mechanisms have remained elusive. Addressing this gap, our study proposes two key mechanisms influencing individuals' conformity to algorithmic recommendations in work-related contexts: the perceived superior capabilities of algorithms ("authority of competence") and the belief that these algorithms may hold formal authority in the work environment, including the ability to enforce consequences for non-compliance ("formal authority"). Through realistic gig-work simulations involving 1,134 participants, our experiments demonstrate that algorithmic compliance is driven by perceived competence but is significantly amplified when workers attribute formal authority to algorithms. Additional experiments reveal that explicitly acknowledging AI's formal authority intensifies algorithmic conformity. Remarkably, this inclination persists even in task domains perceived as less suitable for AI, such as facial sentiment recognition. In this context, algorithmic conformity is primarily driven by perceptions of formal authority rather than competence. These findings underscore the inherent risks associated with algorithmic collaboration, emphasizing the imperative to educate, empower, and train humans-in-the-loop to exercise their judgment in gig-economy settings and beyond.
2023

Co-writing with opinionated language models affects users’ views

Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, Mor Naaman

Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

April 19, 2023

If large language models like GPT-3 preferentially produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others affects what users write, and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment-group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
2023

Turning Off Your Better Judgment: Conformity to Algorithmic Recommendations

Yotam Liel, Lior Zalmanson

Academy of Management Proceedings

Volume 2023, Issue 1

Academy of Management

In many practical settings, humans rely on artificial intelligence algorithms to facilitate decision processes: from choosing driving routes and picking leisure activities to defining investment plans and guiding judicial procedures. Given that algorithms are known to be vulnerable to systematic errors and biases, this research seeks to understand: To what degree do humans take algorithmic advice at face value, even when such advice might be expected to conflict with their better judgment? To address this question, we draw from classic studies of social conformity and explore the extent to which gig-economy workers engaged in simple and objective perceptual-judgment (image classification) tasks conform to algorithmic recommendations that are clearly erroneous. Results from three studies (n = 1,085) show that substantial percentages of workers follow erroneous algorithmic recommendations (on 8.8%–26.5% of tasks); workers who do not view such recommendations do not make similar mistakes independently (only on about 1% of tasks). Moreover, workers are more likely to conform to erroneous algorithmic recommendations than to identical recommendations generated by other humans. We further show that workers become less likely to conform to an erroneous algorithmic recommendation when it is presented alongside a second, conflicting (correct) recommendation. Finally, conformity diminishes when workers perceive the real-life impact of their decisions as high (versus low). Given our realistic setup, our findings are directly applicable to workers engaged in hybrid machine–human judgment tasks, in addition to providing broader insights into the nature of human reliance on algorithms—and the risks it might entail.
2021

The Online Community Management Triad: Managerial Dynamics of Community Admins in the Age of Algorithms

Yaara Welcman, Lior Zalmanson

aisel.aisnet.org

Online communities have gained prominence in recent years, and with their growth, enforcing rules and community guidelines has become increasingly challenging. To address this challenge, many community platforms have introduced AI to take over specific managerial duties from human admins. In this paper, we study this new form of hybrid online community management. To do so, we draw on classical sociological theory and employ Simmel's (1950) concept of the triad, which details the dynamics of the social relations among three actors. We modify the triad to include a non-human actor, namely machine learning algorithms, and employ it in a qualitative study of Facebook Groups. This triadic approach to analyzing current online community management dynamics can deepen our understanding of human–algorithm co-management, particularly in online communities, and shed light on broader challenges and opportunities in implementing algorithmic management in social environments.
2017

Hands on the wheel: Navigating algorithmic management and Uber drivers' autonomy

Marieke Möhlmann, Lior Zalmanson

38th ICIS Proceedings

With the rise of big data and networking capabilities, information systems can now automate management practices and perform complex tasks that were previously the responsibility of middle or upper management. These new practices, known as "algorithmic management," have been applied by ride-hailing platforms such as Uber, whose business model depends on overseeing, managing, and controlling myriad self-employed workers. This study seeks to understand the phenomenon from an information systems management perspective, highlighting the inherent paradox between workers' sense of autonomy and these systems' need for control. The paper offers a conceptualization of algorithmic management and draws on interviews with Uber drivers and forum data to identify a series of mechanisms that drivers use to regain their autonomy under algorithmic management, including guessing, resisting, switching, and gaming the Uber system.