Career-ready in the age of AI


As artificial intelligence (AI) continues to reshape education, work and daily life, educators across disciplines face a timely and transformative opportunity: to equip students with the AI literacy they need to thrive in a rapidly evolving world. AI is no longer confined to computer science or engineering — it’s a cross-disciplinary competency that intersects with communication, critical thinking, ethical reasoning, creativity and collaboration.

This article offers a practical framework for building AI literacy into college courses across disciplines, with a focus on ethical use, critical thinking, real-world readiness and maintaining the human connection at the center of learning. It explores strategies to help students not just navigate but actively shape a world increasingly influenced by AI.

This article is part of a monthly column provided by the Instructional Technology Council, an affiliated council of the American Association of Community Colleges.

What is AI literacy and why does it matter?

AI literacy goes beyond simply knowing how to use popular AI tools like ChatGPT or Midjourney. It encompasses understanding how AI systems work, recognizing their potential and limitations, applying them critically and ethically, and engaging in informed dialogue about their societal impact.

In today’s landscape, AI literacy is becoming foundational for personal, academic and professional success. According to Cengage Group’s 2024 Employability Report, 47% of employers now expect candidates to have some level of AI skills, and 41% said familiarity with AI would make candidates more competitive in hiring decisions. Strikingly, 66% of leaders would not hire someone without AI skills.

At the same time, students are already immersed in a world shaped by artificial intelligence. According to the 2024 Digital Education Council’s Global AI Student Survey, 86% of students use AI tools to support their coursework, regardless of whether their instructors have introduced them. This widespread use signals a growing need for structured, ethical guidance. Students are not simply using AI — they are looking for support in understanding how to use it responsibly and thoughtfully.

However, there is a noticeable readiness gap. Fifty-eight percent of students report insufficient AI knowledge or skills, and 48% say they do not feel prepared for a workforce that relies heavily on AI technologies.

In short, students are already immersed in AI. Colleges must ensure they are not just using the tools but developing the judgment to apply them responsibly. AI literacy is now a fundamental career readiness skill, and one that students will need in every field.

Teaching with (not around) AI

Rather than focusing solely on compliance or control, forward-thinking educators are shifting from policy enforcement toward building AI literacy. Teaching with AI isn’t about having all the answers; it’s about preparing students to engage critically, ethically and creatively with technologies that are rapidly transforming the world around them.

This shift moves us away from relying on detection tools and toward meaningful dialogue. When we invite students into open conversations about how AI systems function, where they fall short and how they can be used responsibly, we create space for deeper learning. Instead of positioning AI as a threat to human work, we can reframe it as a tool that augments human capabilities — supporting students in becoming more effective thinkers, collaborators and problem-solvers.

Aligned with the AI student competencies outlined by the Modern Language Association — including critical analysis, ethical engagement, and intercultural awareness — AI can be integrated into coursework in ways that promote agency and skill-building across disciplines. Examples include:

  • Using AI to scaffold original thinking.
    Students might use generative AI to explore new ideas or draft early versions of projects, then refine their work through peer dialogue, critique and reflection.
  • Designing assignments for AI evaluation.
    Students can practice identifying bias, limitations or inaccuracies in AI-generated content, building their critical thinking and digital literacy skills.
  • Modeling responsible collaboration with AI.
    Faculty can demonstrate how AI supports — not replaces — human judgment. This includes showing how they use AI to brainstorm, summarize or analyze, while still relying on their own expertise and values to make decisions.
  • Applying AI in context-specific ways.
    In history courses, AI can help to analyze historical texts or reveal patterns across time. In business, it can help interpret market data. In healthcare, it might support the personalization of care plans. The goal is not to teach AI as a separate topic, but to embed it meaningfully within the content and practices of each discipline.

This kind of teaching aligns with what students are asking for. Survey data shows that learners want more than access to tools — they want hands-on experience, guidance on ethical use, and a chance to explore AI’s implications for their future careers.

By integrating AI literacy into teaching and learning, educators can help students build the critical skills they need to thrive in a world where AI is not replacing them, but working alongside them.

Career-connected learning

AI literacy is not just a technical skill; it enhances the broader transferable skills employers are seeking.

In Cengage’s 2024 survey, employers emphasized adaptability (62%), communication (59%) and problem-solving (57%) as top desired skills — competencies that can be strengthened through thoughtful, AI-supported assignments. The survey also found that employers are actively using generative AI — particularly for research, writing and data analysis — a sharp contrast with the 55% of colleges and learning programs that discourage AI use.

Examples of AI-enhanced, career-connected learning:

  • Communication
    Students revise AI-generated drafts to align with different professional tones and audiences. In journalism or marketing courses, this might involve tailoring messages for diverse platforms, from formal press releases to social media posts.
  • Collaboration
    Teams evaluate and select appropriate AI tools for group projects based on goals, limitations and ethical implications. In an engineering course, students might compare tools for rapid prototyping or modeling, making decisions together about what aligns with project needs and shared values.
  • Adaptability
    Students navigate case studies where evolving AI developments shift the scope, expectations or tools available for the assignment. In a public policy course, for example, students might respond to the release of new generative AI guidelines and assess how they impact an existing civic technology proposal.
  • Problem-solving
    Students use AI to generate potential solutions to real-world challenges, then analyze and refine those options using human judgment, stakeholder input and ethical reasoning. For example, in an environmental science course, students might compare AI-suggested sustainability strategies for a local community project.

Through experiences like these, AI literacy becomes a bridge — not a barrier — to career readiness. When students learn to use AI thoughtfully and ethically, they strengthen the very skills that employers value most — and become better equipped to thrive in a workforce that expects them to work alongside these tools, not in competition with them.

Ethics and academic integrity

As AI tools become more embedded in everyday academic and professional work, questions about academic integrity are becoming more complex. Instead of relying on bans or detection software, educators can adopt proactive strategies that center student learning and ethical decision-making. Students need guidance in thinking critically and ethically about how and when to use AI in their work.

To foster responsible AI use, faculty can implement the following strategies:

  • Develop course-specific AI guidelines.
    Begin by establishing clear expectations in your syllabus or course policies that articulate when and how AI tools can be used. Align these expectations with student AI competencies such as critical analysis, ethical reasoning and communication.
  • Promote transparency.
    Label assignments with clear indicators of acceptable AI use, using structured frameworks like the AI Assessment Scale (e.g., “No AI use,” “AI use with citation,” “AI collaboration encouraged”) from Perkins, Furze, Roe and MacVaugh. This helps students navigate AI expectations with confidence and clarity.
  • Model effective AI use.
    Demonstrate how AI can be used as a productive thinking partner in research, revision and ideation. When faculty openly share their own AI workflows, it reinforces responsible practices and sets the tone for ethical engagement.
  • Facilitate ethical reflection.
    Encourage students to reflect on their use of AI throughout the course. Assignments might include brief reflections on when they chose to use AI, how it helped or hindered their thinking, and what they learned from the process.

According to the Digital Education Council’s 2024 Global Faculty Survey, 66% of faculty believe AI will be essential to preparing students for the workforce. However, only 32% feel confident supporting students in its ethical use, revealing a significant professional development gap.

Supporting students in developing ethical AI literacy is not just about avoiding misconduct. It’s about fostering habits of integrity, reflection and accountability — skills that will serve them not only in the classroom, but throughout their careers.

Centering human connection

Even as AI becomes increasingly integrated into education, it is more important than ever to elevate the uniquely human aspects of teaching and learning. Faculty are encouraged to reflect on the distinctive qualities, skills, experiences and ways of knowing within their disciplines, and to intentionally design learning experiences that bring those qualities to the forefront.

Thoughtfully integrating AI invites educators to:

  • Design assessments that highlight human skills.
    Create assignments that prioritize empathy, creativity, critical reasoning and ethical decision-making — areas where human judgment and nuance cannot be replaced by AI.
  • Foster identity-affirming inquiry.
    Engage students in exploring how AI technologies impact different communities, challenging them to bring personal experience, cultural understanding, and diverse perspectives into academic work.
  • Prioritize authentic dialogue and feedback.
    Use AI to assist with routine tasks, freeing up more time for personalized discussions, mentorship, and community-building practices that deepen connection and trust.

Educators who intentionally center human capacities — curiosity, compassion, collaboration, critical inquiry — will be best equipped to build resilient, inclusive learning environments where students can thrive alongside, not under, emerging technologies.

Takeaways

By intentionally embedding AI literacy across the curriculum, educators can help students:

  • Build critical, ethical and adaptable skills for the AI era.
  • Gain digital agency and the confidence to use AI tools thoughtfully and responsibly.
  • Preserve and elevate the human elements of learning — creativity, empathy, communication — even in a technology-rich world.

This is not about adding another requirement or task to already full courses. It is about rethinking how existing learning outcomes — like critical thinking, collaboration, and communication — are approached in ways that acknowledge and leverage the evolving reality of AI in students’ lives.

In today’s AI-driven world, career readiness depends on AI literacy. Preparing students to engage with AI critically and ethically is essential — and just as important is helping them develop the uniquely human skills that will set them apart in an AI-influenced world. Our students are counting on us to lead with clarity, care, and purpose.

* * *

Kate Grovergrys, MA, is a full-time faculty member at Madison College in Wisconsin. She develops professional development on topics such as inclusive teaching practices and artificial intelligence.

Tina Rettler-Pagel, Ed.D., is also a full-time faculty member at Madison College. She spends most of her time on projects and initiatives focused on digital learning, but also supports faculty in exploring and planning for the pedagogical opportunities of generative AI.

Grovergrys and Rettler-Pagel are members of the ITC Affinity Group.
