Roger Saumure, PhD Candidate, The Wharton School; Robert Meyer
Abstract: Consumers are increasingly turning to large language models (LLMs) as an aid to everyday writing (e.g., emails, texts). While it is clear that LLMs can improve the grammar and syntax of written communication, might they also lead people to communicate things that depart from their original intentions? We explored this question with an experimental paradigm in which participants first composed an opinionated message and then viewed a suggested revision, generated by the GPT-4 LLM, that was either more positive or more negative than the original. Results of our experiment reveal that LLMs do exert a substantial influence on written communication, but this effect has important moderators. Notably, participants who initially conveyed negative (vs. positive) opinions were less resistant to persuasion by the LLM, and revisions that made a message more positive (vs. negative) were embraced more readily.

