2025
Mind Your Manners: The Dynamics of Politeness in Human-AI vs. Human-Human Interactions
Teddy Lazebnik, Lior Zalmanson, Osnat Mokryn
Proceedings of the ACM on Human-Computer Interaction
Volume 9, Issue 7, CSCW450
The rapid integration of artificial intelligence (AI) into communication systems has significantly altered how users interact with digital tools and collaborate with AI agents. This study investigates the dynamics of politeness in human-AI interactions through a controlled experiment with 1,684 participants, each completing sequential text-based tasks with a conversational AI system.

2025
Turning Off Your Better Judgment: Algorithmic Conformity in Artificial Intelligence-Human Collaboration
Yotam Liel, Lior Zalmanson
Journal of Management Information Systems
As artificial intelligence (AI) becomes increasingly integral to society, humans’ tendency to forgo their own judgment and adopt algorithmic advice is eliciting substantial concern. Prior research suggests that such overreliance is driven by informational influences (confidence in AI’s superior judgment) or by a desire to reduce attentional load.

2024
What is the Value of AI’s Creative Work? Human Appraisals and Consumer Choice of Computational Aided Product Design
Hilah Geva, Lior Zalmanson
45th International Conference on Information Systems (ICIS 2024 Proceedings)
In the evolving landscape of e-commerce, the emergence of AI in product design has sparked debate regarding the perceived value of AI-designed products versus those designed by humans. This research-in-progress examines consumer attitudes toward AI-designed products and the implications for e-commerce platforms’ disclosure policies.
Initial findings from two online experiments suggest that consumers allocate greater value and are more likely to choose human (vs. AI) designs. The authors further explore the mediating roles of perceived effort and creativity in shaping valuation and choice.

2024
Between Formal Authority and Authority of Competence–The Mechanisms of Algorithmic Conformity
Yotam Liel, Lior Zalmanson
Academy of Management Proceedings
Volume 2024, Issue 1
Academy of Management
We propose a new mechanism: normative pressure, stemming from the legitimacy afforded to algorithms within social or work-related structures. Using a setup inspired by social conformity research, we conducted four studies involving 1,445 crowd-workers performing straightforward image-classification tasks. Substantial percentages of participants followed erroneous AI recommendations on these tasks, despite being able to perform them perfectly without support. Conformity decreased when participants perceived their decisions’ real-life impact as high (versus low).

2021
The Online Community Management Triad-Managerial Dynamics of Community Admins in the Age of Algorithms
Yaara Welcman, Lior Zalmanson
aisel.aisnet.org
Online communities have gained prominence in recent years. With their growth in numbers, there has been an increasing challenge to enforce rules and community guidelines. Many community platforms have introduced AI to take over specific managerial duties from the human admins to address this challenge. In this paper, we study this new form of hybrid online community management. To do so, we draw on classical sociological theory and employ Simmel's concept of the triad (1950), which details the dynamics of the social relations between three actors, modifying it to include a non-human actor, namely machine learning algorithms, and employing it in a qualitative study of Facebook Groups. This triadic approach to analyzing current online community management dynamics can help deepen our understanding of human-algorithm co-management, particularly in online communities, and shed light on broader challenges and opportunities in implementing algorithmic management in social environments.

2021
Algorithmic Management of Work on Online Labor Platforms: When Matching Meets Control
Mareike Möhlmann, Lior Zalmanson, Ola Henfridsson, Robert Wayne Gregory
MIS Quarterly
Volume 45, Issue 4
Online labor platforms (OLPs) can use algorithms along two dimensions: matching and control. While previous research has paid considerable attention to how OLPs optimize matching and accommodate market needs, OLPs can also employ algorithms to monitor and tightly control platform work. In this paper, we examine the nature of platform work on OLPs, and the role of algorithmic management in organizing how such work is conducted. Using a qualitative study of Uber drivers' perceptions, supplemented by interviews with Uber executives and engineers, we present a grounded theory that captures the algorithmic management of work on OLPs. In the context of both algorithmic matching and algorithmic control, platform workers experience tensions relating to work execution, compensation, and belonging. We show that these tensions trigger market-like and organization-like response behaviors by platform workers.

2017
Hands on the wheel: Navigating algorithmic management and Uber drivers
Mareike Möhlmann, Lior Zalmanson
38th ICIS Proceedings
With the rise of big data and networking capabilities, information systems can now automate management practices and perform complex tasks that were previously the responsibility of middle or upper management. These new practices, known as “algorithmic management,” have been applied by ride-hailing platforms such as Uber, whose business model depends on overseeing, managing, and controlling myriad self-employed workers. This study seeks to understand this phenomenon from an information systems management perspective, highlighting the inherent paradox between workers’ sense of autonomy and these systems’ need for control. The paper offers a conceptualization of algorithmic management and employs interviews with Uber drivers and forum data to identify a series of mechanisms that drivers use to regain their autonomy under algorithmic management, including guessing, resisting, switching, and gaming the Uber system.



