What Your Syllabus Needs: An AI Policy Guide for Faculty

Your syllabus needs a clear AI policy. Not a vague paragraph copied from a colleague's syllabus or a blanket ban that you can't enforce—a policy that reflects your actual expectations, explains the reasoning behind them, and gives students clear guidance for every assignment.

If you don't have an AI policy in your syllabus, students are making their own rules. And those rules probably aren't the ones you'd choose.

The Spectrum of AI Policies

There is no single correct AI policy. Your policy should reflect your discipline, your learning objectives, and your teaching philosophy. Here are the three main approaches:

Restrictive: AI Use Prohibited

Appropriate when: You're assessing foundational skills, personal reflection, or disciplinary writing conventions that students must master independently.

Template language:

"In this course, the use of AI writing tools (including but not limited to ChatGPT, Google Gemini, Claude, and Copilot) is not permitted for any graded assignments unless explicitly stated otherwise. All submitted work must be entirely your own. The goal of this policy is to ensure you develop [specific skills] through your own practice and effort. If you have questions about whether a specific tool or use case is permitted, ask before submitting."

Moderate: AI as a Tool With Disclosure

Appropriate when: You want students to develop AI literacy while still doing their own thinking. This is the most common approach and the one I recommend for most courses.

Template language:

"AI tools may be used as aids in this course, but all submitted work must reflect your own understanding, analysis, and voice. Permitted uses include brainstorming, outlining, grammar checking, and generating initial ideas for revision. AI-generated text may not be submitted as your own work without substantial revision and original contribution. For any assignment where you use AI tools, you must include a brief disclosure statement describing how you used them (which tools, for what purpose, and how you modified the output). Assignments submitted without required disclosure will be returned for revision."

Integrated: AI Use Required or Encouraged

Appropriate when: AI literacy is an explicit learning objective, or when using AI is a professional skill in your discipline.

Template language:

"This course incorporates AI tools as part of the learning experience. Several assignments will require you to use AI and critically evaluate its output. You will be assessed on your ability to: (1) write effective prompts, (2) critically evaluate AI-generated content, (3) identify errors and limitations in AI output, and (4) integrate AI output with your own analysis and expertise. For each AI-integrated assignment, you must submit your prompt history alongside your final work."

What Every AI Policy Should Include

Regardless of where you fall on the spectrum, every effective AI policy covers these elements:

1. Scope Definition

Define what counts as "AI use." Is grammar checking with Grammarly considered AI? What about using Google's AI-generated search summaries for research? What about speech-to-text tools? Students need clarity on where the line is.

2. Assignment-Level Specificity

A blanket policy for the whole course is less effective than assignment-specific guidance. Consider adding a line to each assignment: "AI tools: [Not permitted / Permitted with disclosure / Required]." This lets you calibrate for each learning objective.

3. The Reasoning

Students comply more readily with policies they understand. Instead of just stating the rule, explain why. "I'm asking you to write this essay without AI because the goal is to develop your ability to construct an argument from scratch—a skill you'll need in [specific professional context]."

4. Disclosure Requirements

If you allow any AI use, require disclosure. A simple format works:

AI Disclosure: For this assignment, I used [tool name] to [specific purpose]. I then [how you modified/built on the output]. The final work reflects my own [analysis/argument/understanding].

This normalizes honest AI use and gives you insight into how students are working.

5. Consequences

State consequences clearly, and make them proportional. First offenses might warrant a revision opportunity with a reflective assignment. Repeated violations might result in a grade reduction. Whatever you choose, apply it consistently.

Common Mistakes to Avoid

  • Being too vague. "Use AI responsibly" means nothing to a student. What specific behaviors do you expect?
  • Relying solely on detection. As discussed in our article on academic integrity, detection tools are not reliable enough to serve as your primary enforcement mechanism.
  • Setting a policy you can't enforce. If you ban all AI use but assign take-home essays with no process documentation, your policy is unenforceable. Align your policy with your assessment design.
  • Forgetting to update. AI tools evolve rapidly. Review your policy each semester and adjust as tools and norms change.
  • Ignoring institutional guidance. Check whether your institution has an AI policy. Your syllabus policy should be consistent with (or stricter than) institutional guidelines, not contradictory.

Start Now

If your current syllabus doesn't address AI, add a policy before your next class meeting. Use one of the templates above as a starting point, customize it for your course, and discuss it with your students. A clear policy established now prevents ambiguity—and conflict—later.

And if you want hands-on help designing assignments that align with your policy, the free ARAD training walks you through the process step by step.

Tim Mousel, M.S.

Founder of Evolve AI Institute. White House AI Task Force invitee, Forbes-featured educator, and active faculty member with 30+ years in higher education.
