I've added a Logic Verification Protocol to our training and evaluation guidelines. The clause requires that any block of AI-assisted code (e.g., from GitHub Copilot or ChatGPT) longer than five lines carry a comment from the human author explaining why they chose that particular version over the alternatives, rather than simply restating what the code does. This simple requirement pushes learners to engage actively with the logic behind the AI's output.

The shift in behavior was immediate. Students no longer treat AI as a magic bullet; they have come to see it as a suggestion engine that supplies raw material, not finished products. We have also seen a dramatic drop in misplaced library imports and noticeably stronger performances in oral defenses, since students must validate and confirm the AI's logic before submitting their work.
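To make the requirement concrete, here is a minimal sketch of what a compliant rationale comment might look like. The function and its name are hypothetical illustrations, not part of the protocol itself; the point is that the comment defends the *choice* of implementation rather than narrating what the code does.

```python
# Hypothetical AI-suggested snippet with the required rationale comment.

def dedupe_preserve_order(items):
    # Rationale (why this version): dict.fromkeys was chosen over
    # list(set(items)) because sets do not preserve insertion order,
    # and over a manual loop with a "seen" set because this approach
    # is O(n) and idiomatic in Python 3.7+, where dicts keep order.
    return list(dict.fromkeys(items))

print(dedupe_preserve_order([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```

A comment like "removes duplicates from the list" would fail the check; the rationale above passes because it names the rejected alternatives and the reason for rejecting them.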