A broader conversation about AI ethics in higher ed


In May, artificial intelligence (AI) use in the classroom once again took center stage when a student sued Northeastern University. The lawsuit arose after the student discovered that her professor had used ChatGPT to create or edit course materials without proper citation, while prohibiting students from using AI in any form, cited or otherwise.

This incident, reported by Kashmir Hill in The New York Times, underscores the double standard in AI usage and raises critical questions about the ethical use of AI in education.

While many tout AI as a tool to enhance efficiency and act as an unpaid teaching assistant to professors and graduate students, others fear it as the boogeyman lurking behind closed doors, waiting to undermine all human interaction in the classroom. The appropriate role of AI in higher education remains a complex issue with no single answer. Each institution must determine its ethical stance and be prepared to support it.

Editor's note: This article is part of a regular column provided by the Instructional Technology Council, an affiliated council of the American Association of Community Colleges.

For example, the Colorado Community College System recently adopted a system policy encouraging faculty and instructors to “actively incorporate discussions with students about their own use of AI in their work transforming that into teaching opportunities to students,” and to “take care to avoid the misuse or infringement of copyrighted materials, … and other protected content.” 

Similarly, Arkansas State University developed a policy in 2024 that emphasizes a human-centered approach and frames gen AI as a tool and assistant whose use must be cited, though even this policy focuses more on the student's need to cite than the instructor's.

Lacking clarity on policies/guidelines

Despite these efforts, many colleges lack clear ethical policies or guidelines for both faculty and students, and this absence leads to confusion and uncertainty. Transparency is crucial, not only for students but also for faculty, instructors and staff, and modeling proper standards is essential for building community in both online and traditional classrooms. Even setting ethical considerations aside, citing AI tools like ChatGPT or Gamma, which assist in refreshing lecture notes or creating presentations, is simply a best practice.


Institutions like Stanford University, MIT, Harvard, New York University and the University of Washington have taken steps to address AI ethics. Stanford’s Responsible AI initiative, MIT’s guidance on generative AI tools, and similar efforts at other universities emphasize transparency and responsible AI use. These institutions are at the forefront of developing comprehensive frameworks that uphold academic integrity, fairness, data privacy, transparency and inclusion.

However, much more conversation is needed around this issue. Until clear ethical policies and guidelines are in place for faculty and instructors, much less for students, the confusion and uncertainty will continue.

Raising awareness

The absence of clear guidelines and instructions is not because instructors and faculty don't want to adhere to ethical standards. Like students, they may simply not be aware of the need to do so. The original work is their own, so why should they have to provide a citation for a revision that used ChatGPT-4o's assistance to update the content? Why cite Gamma for PowerPoint slides when creating slide presentations is the platform's sole purpose?

But is it ethical not to disclose to your audience that you had assistance creating that presentation? And how difficult would it be to cite your work? A slide generated by Gamma, for example, can be ethically cited in the lower right-hand corner. Additionally, the presenter can verbally note to the audience that they've used gen AI and, in the interest of transparency, as noted in many of the policies and guidelines outlined above, explain how learners can do this for themselves and use the tools to the best of their ability in the future.

A broader conversation about AI ethics in higher education is necessary. Institutions must develop clear guidelines to ensure ethical AI use, fostering an environment of transparency and integrity. As AI continues to evolve and integrate into educational settings, the commitment to ethical standards will be crucial in shaping the future of teaching and learning.

About the Author

Cynthia Krutsinger
Cynthia Krutsinger is dean of online learning at Pikes Peak State College in Colorado Springs, Colorado. She is a board member of the Instructional Technology Council and serves as co-chair of its AI Affinity Group.