Youth Perspectives on Generative Artificial Intelligence
Recent technological advances have enabled machines to simulate human intelligence, including, most recently, the advent of generative artificial intelligence (genAI) tools that can generate novel text, images, audio, and video, and also engage in human-like conversational exchanges. These advances open exciting new possibilities and also raise grave concerns. In the higher education setting, a key focus is how to conceptualize ethical and appropriate uses of genAI tools while maintaining academic integrity, as well as how to prepare students to thrive in a rapidly evolving labor market. While none of us can predict the future, it’s likely that generative AI tools will play an important role in how students navigate their academic, work, and personal lives in the years to come.
But what do we know about how young people think about and use genAI? Research from Common Sense Media, the Hope Lab, and the Harvard Center for Digital Thriving reveals significant variability in genAI use. Among a nationally representative sample of US youth aged 14–22, 51% reported having used genAI, and approximately 4% reported daily use. At the same time, 41% reported never having used genAI, and an additional 8% had never heard of genAI tools. Uses of and attitudes toward genAI varied significantly across identities. For example, Black and Latiné youth were more likely both to have never heard of genAI tools and to be regular users. Although young people overall expressed both excitement and concern about genAI, LGBTQ+ youth were more likely to anticipate that genAI would have a more negative and less positive impact on their lives over the next 10 years.
Common uses of genAI tools included information gathering (53%), generating ideas and brainstorming (51%), help with schoolwork (46%) and jobs (17%), and creating sounds/music (16%) and images (31%). Youth also reported using genAI to generate code (15%) and for other uses (8%), such as companionship and social interaction. In open-ended responses, youth expressed excitement about the possibilities of genAI for school, work, and information access more broadly, as well as for facilitating creative pursuits. Youth also expressed nuanced concerns about genAI with regard to future job prospects, theft of intellectual property, privacy, creation and spread of misinformation and disinformation, and more generalized concerns about tools becoming so advanced that they “take over the world.”
Together, findings from this research point to young people’s diverse and nuanced perspectives on the potentials, harms, and utility of generative artificial intelligence. Fortunately, there are resources available to support our work as educators and advisors in the age of genAI. Duke’s Learning Innovation & Lifetime Education (LILE) group is a key resource for education and training related to teaching and AI. And the American Association of Colleges & Universities and Elon University have partnered to create an accessible and comprehensive Student Guide to AI, a useful starting point for anyone seeking a primer on current tools and principles for navigating the evolving AI landscape.
Do you have a research question or idea you’d like to see evaluated with experimental research? Reach out to the OUE Research team for consultation and support (as staffing allows): molly.weeks@duke.edu