Inproceedings

The Promise of Evaluative Algorithmic Advice: A Field Experiment on Writing Improvement

Academy of Management (AOM) Meeting, Chicago, IL, USA (2024)
DOI: 10.5465/AMPROC.2024.19749abstract

Abstract

The design and impact of algorithmic advice have become more important than ever with the diffusion of generative artificial intelligence (AI) in organizational and private realms. Unfortunately, challenges associated with the computational nature of AI-based systems and with human sensemaking can hinder the augmentation of human work. Prevalent forms of algorithmic advice commonly present a user with a single 'one best' solution, which can induce the user to fixate on the advice and neglect their own critical reasoning. To overcome such interaction challenges, we explore the potential of evaluative algorithmic advice that provides users with more open-ended and engaging feedback. In two controlled experiments, we find that humans prefer evaluative over contrastive advice for writing feedback. We then conducted a field experiment in the context of an educational business pitch writing task with two conditions: (i) contrastive algorithmic advice that improves users' writing without any further feedback; and (ii) evaluative algorithmic advice that provides feedback in the form of open-ended questions and pro and con arguments. Users' writing improved regardless of the type of algorithmic advice, yet we show that users exposed to evaluative advice engaged significantly more with the task and the advice. Our study explores how algorithmic advice may stimulate critical thinking rather than attenuate human agency in AI augmentation.
