Last updated: Mar 25, 2026 00:28 UTC

Policy on use of AI

Our policy on use of AI can be found below:

For all written work:

We ask for written reflections to understand your own thought process and practice communication, not because we want to read lots of LLM output. Please show respect and do not use LLM-based tools to generate your written assignments.

For IP1, IP2, and Activities:

The use of Artificial Intelligence (AI) tools is forbidden.

  • This includes the use of Copilot or other tools that suggest the next line or few lines of code.
  • This includes the use of tools that take natural-language input and generate code, such as ChatGPT, Cursor, AugmentCode, chat modes in VS Code, and “/” commands in Copilot.
  • This includes chat-based tools that allow you to ask questions about your code (e.g., “where is the controller for the /user/ API endpoint?”).

You may use LLM-based tools like ChatGPT or Claude as shortcuts in situations where you might use Google Search, Stack Overflow, etc., for learning purposes only. You may never copy-paste code produced by online resources.

Remember: The basic policy is that you are responsible for the code you submit. We reserve the right to interview you orally to make sure that you understand everything in your submission.

For IP3 and the Team Project:

The following policy is a draft. We solicit feedback from students who have already used AI tools in their programming.

The use of Artificial Intelligence (AI) tools is permitted, subject to the following:

  • The basic policy is that you are responsible for the code you contribute to the project, and you are responsible for the code you review in the project. Telling a groupmate or course staff member “I don’t know, it’s what the AI produced” or “I don’t know, the AI said it made sense” is not in line with the minimal expectations of the course, and repeatedly failing to be accountable for the code you write or the code reviews you sign off on will result in failing the course. We reserve the right to interview you orally to make sure that you understand everything in your submission.
  • Each team should have a common policy about the use of AI in their project. It is not fair for some team members to use AI while others do not. Your team policy should take into account the relative experience of the team members with AI coding tools.
  • You will still have to debug your code and tests. These models are trained mostly on code that works, so they are generally bad at debugging code they have never seen before. If you don’t understand the code, then you will not be able to debug it, nor will your TA be able to help you.
  • Any monetary costs associated with these tools are to be borne by the student (sorry). We encourage students to share information about available student discounts.
  • Do your own reflections, assessments, and reports. The point of reflections is what happens in your brain, not in producing text that course staff gets to read.

If you do use such a tool, here are a few suggestions:

  • Don’t ask the AI to go far beyond what you can review and understand as you go. Remember, you are ultimately responsible for the code.
  • “Vibe” coding, in which the AI writes large chunks of code without supervision, is strongly discouraged (see the bullet above about your personal responsibility for the code).
  • Beware of letting the chatbot lead you on wild goose chases when its first or second suggestion doesn’t nail the problem.
  • Treat it like a very junior (but over-eager) engineer, who needs constant supervision and frequent correction.
  • Use rules (like .cursorrules) to set the ground rules for the AI. There are lots of sources for useful sets of rules. We encourage you to share such rulesets, both within your team and with other students in the course. (And with the course staff: we want to learn, too!)

© 2025-26 Adeel Bhutta, Robert Simmons and Mitch Wand. Released under the CC BY-SA license