Last updated: Aug 27, 2025, 23:38 UTC

Policy on use of AI

Our policy on the use of AI can be found below:

For IP1, IP2, and Activities:

The use of Artificial Intelligence (AI) tools is forbidden, subject to the following:

  • The use of auto-complete tools that suggest the next line or few lines of your code is permitted. These tools save you typing without much impact on your learning experience.
  • The use of tools that take natural-language input and generate code is forbidden. This includes tools like ChatGPT, Cursor, AugmentCode, etc. It also includes chat modes in VS Code and “/” commands in Copilot.

You may use LLM-based tools like ChatGPT or Claude as shortcuts in situations where you might otherwise use Google Search, Stack Overflow, etc., but for learning purposes only. You may never copy-paste code produced by online resources.

Remember: The basic policy is that you are responsible for the code you submit. We reserve the right to interview you orally to make sure that you understand everything in your submission.

For the Team Project:

The following policy is a draft. We solicit feedback from students who have already used AI tools in their programming.

The use of Artificial Intelligence (AI) tools is permitted, subject to the following:

  • The basic policy is that you are responsible for the code you contribute to the project, and for the code you review in the project. Telling a groupmate or course staff member “I don’t know, it’s what the AI produced” or “I don’t know, the AI said it made sense” does not meet the minimal expectations of the course. Repeatedly failing to be accountable for the code you write, or for the code reviews you sign off on, will result in failing the course. We reserve the right to interview you orally to make sure that you understand everything in your submission.
  • Each team should have a common policy about the use of AI in their project. It is not fair for one team member to be using AI and the others not (or vice versa). Your team policy should take into account the relative experience of the team members with AI coding tools.
  • You will still have to debug your code and tests. These models are trained mostly on code that works, so they are generally bad at debugging code they have never seen before. If you don’t understand the code, then you will not be able to debug it, nor will your TA be able to help you.
  • Any monetary costs associated with these tools are to be borne by the student (sorry). We encourage students to share information about available student discounts.
  • Do your own reflections, assessments, and reports. The point of reflections is what happens in your brain, not the text that course staff gets to read.

If you do use such a tool, here are a few suggestions:

  • Don’t ask the AI to go far beyond what you can review and understand as you go. Remember, you are ultimately responsible for the code.
  • “Vibe” coding, in which the AI writes large chunks of code without supervision, is strongly discouraged. (See the bullet above about your personal responsibility for the code.)
  • Beware of letting the chatbot lead you on wild goose chases if its first or second suggestion doesn’t nail the problem.
  • Treat it like a very junior (but over-eager) engineer, who needs constant supervision and frequent correction.
  • Use rules files (like .cursorrules) to set the ground rules for the AI; a sketch of what such a file might contain appears after this list. There are lots of sources for useful sets of rules. We encourage you to share such rulesets, both within your team and with other students in the course. (And with the course staff: we want to learn, too!)
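
For example, here is a minimal sketch of a .cursorrules file. The file name follows Cursor’s convention of reading project-wide rules from the repository root; every rule below is a hypothetical example that your team should replace with its own agreed policy:

    # .cursorrules (illustrative sketch only; replace with your team's own rules)
    You are assisting on a student team project. Follow these ground rules:
    - Make small, reviewable changes; never rewrite whole files unprompted.
    - Explain the reasoning behind every suggested change.
    - Match the existing style and conventions of the codebase.
    - Do not add new dependencies without asking first.
    - When you write code, also propose the tests that should cover it.

Other tools have similar mechanisms; GitHub Copilot, for instance, can read repository-wide instructions from .github/copilot-instructions.md.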

© 2025-26 Adeel Bhutta, Joydeep Mitra and Mitch Wand. Released under the CC BY-SA license