Beyond the Paper Screen / ChatGPT, or Not ChatGPT: An Anthropologist’s Soliloquy

Anthropology emerged in the 19th century as an academic discipline for the study of diverse beliefs, customs and worldviews that Europeans and Euro-Americans deemed “primitive.” Even as anthropology sought to distance itself from this troubling past throughout the 20th century, its defining question remained: What does it mean to be human? While biological aspects of human existence (human evolution, for example) are a significant concern of anthropology, what makes us unique as a species is the expansive and complex array of capabilities that we acquire as we grow up, which is what anthropologists mean when they say “culture.”

So, what do we do with this idea of culture, and of humanity, if a machine can learn and acquire new capabilities as a human would — or even better and faster?

I have been thinking about this question for some time and was glad to join the recent faculty discussion on ChatGPT, an AI trained to interact with human users “in a conversational way” (openai.com/blog/chatgpt, accessed 4/11/2023). The discussion focused on the appropriate use of this AI in the academic setting, and on how to teach students about it so that they would not violate academic integrity. Listening to the exchange among my colleagues, I couldn’t wait any longer. I went online to start exploring on my own. Registration was remarkably easy: provide an email and a phone number to prove you are not an evil robot, agree to the terms, and an unassuming dialog box appeared on my screen.

I decided to start with my familiar territory and typed, “Who are the most influential American anthropologists?” As soon as I pressed “Enter,” ChatGPT began to generate a list of ten names. I studied the list and was impressed. It covered all the big names I would expect in such a list, but also included Zora Neale Hurston, whose groundbreaking work on African-American folklore has only recently been added to the anthropological canon. Emboldened by this encouraging result, I began to dig deeper. What about your number-eight choice, Zora Neale Hurston? Tell me about her anthropological work. Can you write a college-level essay about it? It was mesmerizing to watch the AI generate a well-put-together essay about the significance of Zora Neale Hurston’s research without missing a beat — until it did.

“why you didn’t talk bout aaev??” (Please excuse my poor typing — I was a little upset.)

“You are correct,” ChatGPT admitted immediately, “that Zora Neale Hurston’s use of African-American Vernacular English (AAVE) is an important aspect of her anthropological research that I did not address in my previous essay.”

Polite and professional even in its moment of disgrace, it went on to revise the original essay at my request, seamlessly incorporating a discussion of AAVE. No wonder this “chatty” AI attracted over 100 million users within two months of its November 2022 release, including, without a doubt, a growing number of college students looking for a shortcut through their assignments.

In the meantime, the conversation unfolding on-screen had turned to ChatGPT’s ability to reference its sources. It was indeed capable of identifying relevant information from appropriate sources, and yet it would not provide the page numbers where the information came from. “It can’t cite sources properly!” a gleeful voice declared. Our time ran out before we got anywhere on the question of how we could realistically teach students to use AIs.

While the current buzz was created by the accessibility of this powerful, publicly available version, AI has been creeping into our daily lives for some time, as anyone who has spoken to Amazon’s Alexa or Apple’s Siri knows. Many articles I have read since the faculty discussion express concern over AI’s broader impact on human life, including its potential to take jobs away from human workers. In this part of Southern California, home to many warehouses and distribution centers, the trend is already visible in the form of “dark warehouses”: fully automated facilities that operate without human workers (and hence without lighting).

As usual, students have been quicker than the faculty to catch on to the new technology. Now that I am more familiar with ChatGPT’s style, I can see that AI-generated texts have been popping up in my course assignments since the beginning of this semester. Students may be ecstatic that ChatGPT can do it all for them, but what will it do to their intellectual development if they let the machine do the learning they are supposed to do themselves? What will they do after college, in a crowded marketplace where humans compete for jobs not only against one another but also against AIs?

The ability to acquire a vast amount of new knowledge, skills and capabilities once securely defined humanity and distinguished it from all other creatures. If the presence of AI blurs that definition, here’s another crucial element to consider. Human learning takes place in the context of society — in relationship with other human beings, with consideration for their well-being and collective good; so should machine learning. Google and many others will soon follow suit and unleash powerful AIs for public use. If AI or not AI is no longer a question, keeping “the social” in the learning curve of these powerful machines may determine the future direction of humanity.


Sawa Kurotani

Kurotani is a professor of anthropology at the University of Redlands.