
GenAI & ChatGPT Research

Our GenAI research is guided by the intersection of communication and higher education. We take a critical view of how perceptions of AI technologies are shaped by the communication practices of faculty, students, and staff within higher education.

Using the Information Inequity Framework to Study GenAI Equity: Analysis of Educational Perspectives

Introduction. Generative AI presents opportunities and challenges for higher education, particularly concerning equity. Understanding stakeholders’ perceptions of equity is crucial as AI increasingly influences teaching, learning, and administrative practices.

Method. The study was conducted at a large, research-intensive institution in the US. Participants (n=206) from diverse university roles responded to an open-ended question about how generative AI affects educational equity. The responses were analyzed using the information and equity dimensions (Lievrouw & Farb, 2003).

Analysis. Data were analyzed using a combination of deductive and inductive coding to identify key themes. The information inequity framework underscores how disparities in access, skills, and ethical considerations create uneven opportunities for stakeholders to benefit from generative AI, making these dimensions essential for understanding educational equity.

Results. Findings revealed differing focal points among the groups: faculty and staff concentrated on issues of physical and financial access to AI tools, while students placed greater emphasis on the ethical implications and value-based considerations of AI in education.

Conclusions. The study suggests that addressing AI equity in higher education requires a comprehensive approach that goes beyond improving access. AI literacy education should include skills development and address ethical considerations, ensuring that all stakeholders' concerns are addressed.

Perceptions About Generative AI and ChatGPT Use by Faculty and College Students

Approaches to ChatGPT by colleges and universities have varied, from updating academic integrity policies to outright banning its use (Clercq, 2023; Mearian, 2023; Schwartz, 2023). As this new technology continues to evolve and expand, colleges and universities are grappling with the opportunities and challenges of using such tools. Little literature exists on student and faculty perceptions of AI use in higher education, particularly related to generative AI tools. The present study aims to fill this gap by offering perceptions from both students and faculty at a large research university in the mid-Atlantic. Survey participants consisted of 286 faculty and 380 students. Participants completed a questionnaire that included open-ended responses, scaled items, and closed-ended questions. Overall, the reported use of ChatGPT is infrequent, though most respondents feel its use is inevitable in higher education. Faculty and students are familiar with generative AI tools and ChatGPT but remain uncertain about them. Institutions interested in developing policies around using ChatGPT on campus may benefit from building trust in generative AI among both faculty and students. Concerns with academic integrity are prevalent, and while both faculty and students agree that using ChatGPT violates institutional policy, they also agree that generative AI has value in education.

AI Monsters: An Application to Student and Faculty Knowledge and Perceptions of Generative AI

Research into perceptions of artificial intelligence (AI) by faculty and students outside of specific disciplines has been relatively sparse. Since the release of ChatGPT in November 2022, there have been numerous inquiries into the role of generative AI (GAI) in particular. While a timely response is important, so is ensuring that the responses universities and faculty are implementing are evidence based. In the spring 2023 semester, the authors surveyed 380 students and 276 faculty. The quantitative data were analyzed with implications for higher education, including student-faculty trust, academic integrity, and uncertainty. This chapter analyzes the open-ended responses, using "Monster Theory" as a framework for understanding the themes that underlie the perceptions evident in the responses. The authors propose "demonsterizing" AI through a mix of promoting literacy, encouraging ethical and transparent use, and developing language that is mindful of practices that may either empower or disempower individuals.

Research Team

Tiffany Petricini, Ph.D.

Assistant Teaching Professor

Penn State Behrend

tiffanypetricini.com

Chuhao Wu

Ph.D. Candidate, College of IST

The Pennsylvania State University

Google Scholar

Sarah T. Zipf, Ph.D.

Researcher, Teaching and Learning with Technology

The Pennsylvania State University

Google Scholar