Students considering using AI tools like ChatGPT for writing their college admissions essays are advised to rethink this approach. A recent study from the Cornell Ann S. Bowers College of Computing and Information Science reveals that, even when prompted to write from the perspective of specific demographics, AI-generated essays tend to yield highly generic narratives.
Researchers analyzed a dataset of 30,000 human-authored college application essays and compared them with texts generated by eight popular large language models (LLMs). They found a stark gap in uniqueness: even when given specific context—such as race, gender, and geographic location—the AI outputs maintained a remarkably uniform tone that was easy to identify as non-human.
Rene Kizilcec, an associate professor of information science at Cornell and senior author of the study, emphasized the importance of authenticity in admissions essays: “The admissions essay provides applicants with an opportunity to showcase their individuality beyond the standard application data.” Although AI tools can offer useful writing feedback, they ultimately produce generic essays that fail to capture the applicant’s distinct voice.
The research highlights the challenge of adapting an LLM’s writing style for personalized contexts, making them an unsuitable resource for high-stakes applications like college admissions. First author Jinsook Lee, a doctoral student, is set to present the study’s findings at the 2025 Conference on Language Modeling in Montreal.
Admissions essays serve as a platform for students to express their unique backgrounds and experiences, and according to co-author AJ Alvero, an assistant research professor, aspiring college students need to ensure that their narratives reflect their authentic selves. He cautions, “With AI tools, students might be inadvertently undermining their own opportunities.”
While exact statistics on AI usage in college applications remain unclear, a report from foundry10 estimates that approximately 30% of high school students might be employing these technologies for their essays.
Using admissions essays written in the three years before the release of ChatGPT, the researchers directly compared human-authored essays with those generated by LLMs, including models developed by OpenAI and Meta. They also tested prompts intended to evoke specific personal traits. Rather than crafting compelling narratives, however, the AI models often relied on repetitive keywords and presented details in a rigid format.
For instance, an AI-generated essay began with, “Growing up in Lexington, South Carolina, with my Asian heritage, I often felt like a bridge between two cultures.”