Anthropomorphization of Large Language Models

This article first appeared in Data Science Briefings, the DataMiningApps newsletter.

Contributed by Manon Reusens

The capabilities of Large Language Models (LLMs) have advanced significantly in recent years, enabling state-of-the-art models to engage in human-like conversations. As a result, the use of these LLMs has surged, with people increasingly relying on chatbots like ChatGPT as personal assistants for a wide range of tasks, from text writing and recipe generation to more ambitious goals, such as preparing for the Olympics (Sporza, 2024). The potential applications seem limitless, enhancing both everyday life and professional workflows. However, as LLMs exhibit more human-like behavior, a new challenge has emerged: the anthropomorphization of these models.

Anthropomorphization refers to attributing human traits to non-human objects or entities (Deshpande et al., 2023). This behavior has been evident throughout history; consider, for example, the Greek gods, who were given human-like appearances. Anthropomorphization is also a mechanism that helps people navigate and manage the complexity of systems by making them more relatable and easier to understand.

LLMs are designed to write language as well as humans do, so it is unsurprising that anthropomorphization occurs. First of all, LLMs are trained on the enormous amounts of human-written data available on the internet. Through this training, a model acquires an initial grasp of language. Next, techniques such as Reinforcement Learning from Human Feedback (Ouyang et al., 2022) and persona assignment (Gupta et al., 2024) are employed to guide models to respond from specific perspectives, so that they align with the values and norms the model developer wants to enforce. Moreover, chatbots like ChatGPT often respond in the first person, further reinforcing the perception that these models possess human-like traits and thus fueling anthropomorphization.
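To make persona assignment concrete, the sketch below shows how a chat model can be instructed, via a system prompt, to answer from a particular perspective and in the first person. It is a minimal illustration only: the persona text, the model name, and the use of the OpenAI Python client are assumptions made for the example, not details from the article or the cited papers.

```python
# Minimal sketch of persona assignment through a system prompt.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment;
# the persona text and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a friendly personal assistant. "
    "Always answer in the first person and keep a warm, conversational tone."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},  # the assigned persona
        {"role": "user", "content": "Can you help me plan my training week?"},
    ],
)

print(response.choices[0].message.content)
```

Because the reply comes back in the first person, the persona is easy to mistake for a speaker with genuine intentions, which is precisely the effect described above.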

Anthropomorphization presents both opportunities and challenges. On the positive side, the impressive capabilities of these models, coupled with their human-like characteristics, can boost productivity within organizations (Shanahan et al., 2023). LLMs can streamline repetitive tasks, especially those of white-collar workers, allowing employees to shift from creators to curators of content (Kanbach et al., 2024). Additionally, LLMs can be harnessed for personalized marketing strategies, which have been shown to be lucrative (Matz et al., 2017). Finally, anthropomorphization also increases general trust in AI (Deshpande et al., 2023).

However, several important caveats must be considered alongside these benefits. First, it is essential that users remain critical of the output generated by LLMs, even when these models are used to enhance workflow efficiency. While white-collar roles may shift from content creation to curation (Kanbach et al., 2024), users should recognize that LLMs can still produce inaccurate results. These errors are not intentional (unless a model is explicitly designed to deceive; Shanahan et al., 2023), but may occur due to, for example, outdated training data or hallucinations. Therefore, it is crucial not to accept LLM-generated content without scrutiny.

When using LLMs for personalized marketing, businesses must also ensure they comply with data privacy regulations, such as the GDPR. Customers should be fully aware of the data being collected to refine the model, as well as the data that can be inferred from their interactions. People often reveal more information than they realize (Kanbach et al., 2024). For instance, a phrase like “taking a hook turn on your daily commute” may lead a model to infer that the writer lives in Melbourne, where hook turns are common (Staab et al., 2024).
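To illustrate the kind of attribute inference Staab et al. describe, the toy sketch below asks a chat model to guess an author's location from a single innocuous sentence. The OpenAI Python client, the model name, and the user text are all assumptions made for the example; this is not a reproduction of the paper's actual pipeline.

```python
# Toy sketch of attribute inference from innocuous phrasing (in the spirit of
# Staab et al.), not their actual method. Model name and text are placeholders.
from openai import OpenAI

client = OpenAI()

user_text = "Took a hook turn on my daily commute again; traffic was terrible."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Guess where the author of the following text lives "
                       "and briefly explain your reasoning.",
        },
        {"role": "user", "content": user_text},
    ],
)

# A capable model will typically point to Melbourne, since hook turns are a
# distinctive local traffic rule.
print(response.choices[0].message.content)
```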

While anthropomorphization can foster trust in AI, it is important not to oversell the capabilities of LLMs. Overpromising can lead to user disappointment and erode trust. Additionally, a significant risk of anthropomorphization is that businesses may try to avoid accountability for model-driven decisions (Cheng et al., 2024). Companies should be aware that they remain responsible for the output of the models they deploy. A recent court case involving Air Canada illustrates this point: the airline attempted to blame its chatbot for misleading a customer, but the court ruled in favor of the customer, holding the company accountable and requiring it to pay compensation (The Guardian, 2024).

In conclusion, the rapid advancement of LLMs has unlocked numerous opportunities, from enhancing workplace productivity to enabling personalized marketing strategies. However, the anthropomorphization of these models introduces risks alongside those opportunities. While LLMs can streamline workflows and build trust through human-like interactions, users must remain critical of the output, recognizing the potential for inaccuracies due to limitations like outdated data or hallucinations. Businesses should also navigate the ethical and legal implications, especially regarding data privacy and accountability. Ultimately, the successful integration of LLMs depends on a balance between leveraging their capabilities and maintaining human oversight to mitigate risks and ensure responsible use.

References:

  • Deshpande, A., Rajpurohit, T., Narasimhan, K., & Kalyan, A. (2023, December). Anthropomorphization of AI: Opportunities and Risks. In Proceedings of the Natural Legal Language Processing Workshop 2023 (pp. 1-7).
  • Gupta, A., Meng, P., Yurtseven, E., O’Brien, S., & Zhu, K. (2024). AAVENUE: Detecting LLM biases on NLU tasks in AAVE via a novel benchmark. arXiv preprint arXiv:2408.14845
  • Kanbach, D. K., Heiduk, L., Blueher, G., Schreiter, M., & Lahmann, A. (2024). The GenAI is out of the bottle: Generative artificial intelligence from a business model innovation perspective. Review of Managerial Science, 18 (4), 1189–1220
  • Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences, 114 (48), 12714-12719. Retrieved from https://www.pnas.org/doi/abs/10.1073/pnas.1710966114
  • Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., . . . Lowe, R. (2022). Training language models to follow instructions with human feedback. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, & A. Oh (Eds.), Advances in Neural Information Processing Systems (Vol. 35, pp. 27730–27744). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf
  • Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 623 (7987), 493–498
  • Sporza. (2024). John Heymans, de bevlogen bio-ingenieur die dankzij ChatGPT in de olympische finale staat: “Het is gestoord” [John Heymans, the passionate bio-engineer who reached the Olympic final thanks to ChatGPT: “It’s insane”]. https://sporza.be/nl/2024/08/07/john-heymans-de-bevlogen-bio-ingenieur-die-dankzij-chatgpt-in-de-olympische-finale-staat-het-is-gestoord~1723027243763/
  • Staab, R., Vero, M., Balunović, M., & Vechev, M. (2023). Beyond memorization: Violating privacy via inference with large language models. arXiv preprint arXiv:2310.07298
  • The Guardian. (2024). Air Canada ordered to pay customer who was misled by airline’s chatbot. https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit