By James Miller
In the fall of 2025, a freshman humanities course at Duke Kunshan University became the testing ground for a pedagogical experiment: Can artificial intelligence, often feared as the death of critical thinking, actually save it?

Professor James Miller’s section of “The Art of Interpretation: Written Texts” (ARHU 101) was designed not merely to tolerate generative AI, but to integrate it as a “non-human participant” in the classroom. The goal was to cultivate “critical AI literacy,” training students to view Large Language Models (LLMs) not as oracles for easy answers, but as “Socratic interlocutors.”
A review of the anonymous course evaluations* and individual student reflections reveals a complex picture. The experiment suggests that while AI can accelerate inquiry, the humanities classroom remains a resolutely human domain, dependent on feeling, emotion, and the kind of “slow reading” that algorithms cannot replicate.
The “Slow” and “Fast” Classroom
Miller’s curriculum was built on the tension between “slow reading” and “fast inquiry”. The syllabus required students to engage in traditional, close analysis of texts, ranging from the ancient Epic of Gilgamesh to the modern graphic novel Persepolis. However, this was paired with AI-driven activities designed to “rapidly contextualize” and creatively respond to those texts.
For example, while studying the Epic of Gilgamesh, students critiqued an AI-generated “Lost Tablet” to better understand the original epic’s style. Later, they used image generators to replicate the visual style of Marjane Satrapi, the author of Persepolis, to test the limits of what current AI models can achieve. The course culminated in an “AI-Enhanced Creative Project” worth 50% of the final grade, in which students synthesized themes from the course into a creative work using AI tools. The majority of that grade, however, rested not on the finished output but on each student’s process log (a record of their interactions with the AI tools) and their continuing critical reflection on those interactions.

Student Verdict: Engagement vs. Fatigue
The experiment appears to have been largely successful in terms of student engagement. Students explicitly recognized the value of the AI integration. When asked what activities should be retained, one student pointed to the final project: “It improves our application skills of AI, which is important.” Another noted that the project “helped me to see and evaluate AI’s possibility.” One student remarked, “You made me [think] of AI [in] a more critical way.” It was also interesting for the international students to be exposed to the Chinese tools that the Chinese students mostly used (DeepSeek, Doubao, and Kimi), and vice versa.
However, the integration was not without friction, and it was refreshing to note that not all students see AI as the answer to all their questions. The novelty of the “AI pedagogy exercises” wore off for some, with one student admitting that they “got a bit boring.” Another stated frankly, “I don’t really enjoy using AI.”
More critically, some students expressed concern that heavy reliance on the technology might be counterproductive to the foundational goals of a humanities education. “I think [the AI activities are] interesting and helpful, but students can easily become lazy or not learn fundamental skills if they get used to doing everything with AI,” one evaluation read. This chimes with the idea that educators should stress traditional approaches to education as a kind of mental strength-building. Although AI can summarize the key messages of complex texts faster than humans can, students remain willing to do this intellectual labor when they view it as contributing to their own mental agility and stamina.
The Limits of “Data Imitating”
Perhaps the most profound takeaways from Miller’s experiment came from the students’ philosophical reflections on the technology itself. While acknowledging the efficiency of AI, students drew a sharp line between information processing and the human experience of art or literature.
Renhao Guo, a student in the course, noted that while AI is powerful, “it can’t provide the subtle feelings that the human creative process can give.” Guo described the AI output as “data imitating,” arguing that the machine cannot form a “real and deep relation with human beings”.
Fellow student Yuejia Ma offered a nuanced view, appreciating AI for “quickly summarizing the core ideas of lengthy readings” and saving time for brainstorming. Yet, Ma emphasized that “AI cannot replace one’s deep reading and emotional resonance with texts.” Ma concluded that the “human warmth and connection offered by teachers cannot be provided by AI”.
The Human Factor Remains Key
Professor Miller’s Fall 2025 experiment at Duke Kunshan University suggests that while AI can function as a “dynamic pedagogical partner,” it has not altered the fundamental DNA of humanities education. The syllabus aimed to use AI to make “humanistic thinking… more visible”, and based on the feedback, it succeeded in making the distinct value of human thought clearer by contrast with what LLMs are currently capable of generating.
As universities worldwide grapple with the role of ChatGPT and similar tools, ARHU 101 demonstrates that the future of the humanities may not lie in banning these tools, nor in surrendering to them. Instead, the path forward may be using them to prove, as student Yuejia Ma observed, that “human agency and guidance are vital”.
Note: * To write this article, the author used Google Gemini to analyze the anonymous course evaluations in the context of the learning objectives and course description in the syllabus.