A Faculty Guide to A.I.

Thanks to the introduction of easily accessible artificial intelligence technologies, our information environment is changing rapidly. We've put together this guidance to help you and your students navigate this strange new world. If you need further assistance, please do not hesitate to email or schedule a one-on-one consultation with one of our experts.

A team from the CAT and SSC assessed the university-licensed Turnitin AI detection tool, along with four other freely available AI detectors, and found that these tools are fallible and must be used with significant caution. Because of their potential inaccuracy, summative scores should serve only as "indicators," not as definitive findings. An instructor would still need to talk with the student and consider other contextual information before making a decision about suspected academic misconduct.

Beware of falsely accusing students outright. Our ability to accurately identify the use of AI generative tools at present is quite weak.

Last update: 3 January 2024.

What is generative A.I.?

Generative AI tools (such as ChatGPT, Bard, Bing, and DALL-E) are a type of artificial intelligence capable of creating surprisingly coherent text and images. Text generators are built on large language models that can write and converse with users by drawing on an enormous corpus of text from a variety of sources—including books, web texts, Wikipedia, articles, internet forums, and more—on which they have been trained. These models are growing progressively larger: GPT-4, the successor to GPT-3, is reported to have vastly more parameters than its predecessor's 175 billion. Generative AI does not have cognition; that is, it can't think. Instead, much like the autocomplete function in applications you use every day, it works by finding and replicating the most common patterns of language in its training data. Note, however, that AI technology is constantly evolving and improving, and we cannot be sure what its future capabilities will be. For an expanded explanation of how these technologies work, see the article "A curious person's guide to artificial intelligence."

Learn more about what generative AI is, how it works and how to access AI tools.

What are the pros/cons of using A.I.?

Generative AI tools have the potential to revolutionize the way we interact with technology, opening new doors to faster, more efficient, and more creative learning processes. But that accelerated pace can run away with us unless we are aware of the downsides of these tools.


Learn more about possible benefits and pitfalls of generative AI.

Equity

Generative AI tools have the potential to widen the institutional performance gaps that impact learning in higher education, but also the potential to create a more equitable learning environment. Generative AI is prone to error and can perpetuate biases and stereotypes. Bias may also arise in faculty assessments of which students are making unauthorized use of AI. Students may over-rely on the tools and neglect to develop skills critical to success in higher education and beyond. Inequitable access to broadband internet and to previous education that promoted skills in prompt writing and related skills may exacerbate existing institutional performance gaps. However, generative AI can be thought of as an assistive technology that will support the learning of many of our students. Tools such as ChatGPT can provide useful suggestions for succeeding in college or getting started on assignments, offer immediate feedback on writing and help students develop communication and planning skills.
Learn more about using generative AI to promote equity in your classroom.

Mis/disinformation

As the efficacy and availability of generative AI tools advance, both we and our students will face a variety of information-related challenges. Generative AI can be used to automate the production of online misinformation and propaganda, flooding our information environment with disinformation, distracting from accurate information and increasing skepticism toward content produced by credible scholarly and journalistic sources. Source material used for training data, the design of a model’s algorithm, data labeling processes, product design decisions and policy decisions all contribute to the possible replication of biases and unfair stereotypes (Ferrara, 2). Content generated by AI tools can seem accurate but be entirely made up, a phenomenon known as AI hallucination. Our task as educators is to prepare our students to navigate an information environment characterized by the use of generative AI: inoculating them against disinformation, helping them develop the skill and habit of verifying information, and building a conception of what a healthy information environment looks like.


Learn more about inoculating our students against mis- and disinformation in the age of AI.

How can I deal with A.I. in my class?

Decision tree 

Instructors will have to decide whether, and to what extent, students will be permitted to use generative AI tools to support their learning. To help you make the decision in your classes, review our “Should I Allow My Students to Use Generative AI Tools?” decision tree.

Make it your friend 

One approach to addressing generative AI in your classes is to encourage students to use the tools to meet course learning goals, for example, by having students experiment with prompt writing or using text generation tools as part of their writing process. 

Learn more about productively using generative AI tools to support student learning in your classes.

Critical examination

AI stands to have a significant impact on how we live and work in coming years. One approach to addressing the existence of generative AI in our classes is to design assignments and activities that take AI as an object of critical inquiry.

Learn more about adopting AI as an object of inquiry

Creatively Work Around AI

If your goal is to disallow or discourage students from using generative AI tools, it will be necessary to design assessments and class activities that are either difficult to complete with the use of AI tools or that are completed in a context in which it is difficult to make use of these tools.

Learn more about strategies for creatively working around AI

And don't miss our series on using PI (Pedagogical Intelligence) to manage AI.

Talking to your students about AI 

Whatever approach we decide to take in our classes with respect to the use and study of generative AI tools, it will be critical to talk to our students about AI and learning. It is important that we’re transparent with students about the choices we’ve made and that we speak with rather than at our students about the impact of AI on their learning and lives. 

Learn more about talking with your students about AI.

Syllabus guidance

Once you have decided what authorized and appropriate use of AI tools in your classroom will look like, be sure to clearly indicate your expectations in your syllabus. The CAT has drafted syllabus language to get you started.

Detection tools 

Faculty may disallow the use of generative AI tools, or allow their use only in a limited way. How, then, are we to identify when students are using AI inappropriately? Many companies claim to have created tools that can consistently and accurately identify text or images generated or modified by AI, but as yet no tool can reliably differentiate AI-generated from human-generated content. If we choose to use an AI detection tool, we must do so with extreme caution. Temple currently recommends no tool other than Turnitin’s AI detector, and even Turnitin's tool raises concerns. For more on this issue, see Evaluating the Effectiveness of Turnitin’s AI Writing Indicator Model.

Learn more about AI detection tools

What do I do if I think students are using A.I. inappropriately?

There may be instances where you strongly suspect a student has inappropriately used generative AI to complete work for your course. In such cases:

  • Don’t take it personally! Cheating can often feel like a personal attack and a betrayal of all the work you’ve put into your teaching. Remember that a student’s decision to use AI to take shortcuts is probably about them, not about you. 
  • Check your biases. Is your suspicion of your student’s work well-founded? Would you have the same concerns if the work had been handed in by other students? 
  • Beware of falsely accusing students outright. Our ability to accurately identify the use of AI generative tools at present is quite weak.  
  • Ask the student to meet with you. Simply say something like “I have some concerns about your assignment. Please come to see me.” 
  • When you meet with the student, try not to be confrontational (remember that you may not be certain they used AI in an unauthorized manner). Instead, start by asking questions that give them a chance to tell the story of their writing process, such as: How were you feeling about the assignment? What did you find challenging about it? Why don’t you tell me what your process was for getting it done? If research was involved, you can ask what sources they used. If they were writing about something they were supposed to read or visit (an art exhibit, for instance), ask pointed questions that get at whether they actually engaged in that activity.
  • Then state your concerns: “I’m concerned because the writing in this assignment doesn’t seem to match the writing in your other assignments, and the AI detector flagged it as AI-written.” Review any inconsistencies, odd language, repetition, or hallucinated citations with the student.
  • Use developmental language. Remember that your student may have used generative AI without realizing it is considered cheating, or there may have been factors that made them feel that they needed to cheat. A conversation with your student can be a learning opportunity for them. 
  • Discuss with the colleagues in your department what a reasonable penalty might be for unauthorized use of generative AI. Consider also when it might be necessary to contact The Office of Student Conduct and Community Standards. (Remember, however, that speaking with your student is always the first step before taking further action.) If your conclusion is that the student cheated, you’ll have to decide whether you allow them to complete the assignment again on their own (perhaps with a penalty) or whether you’ll give no options to right the ship. Consider that we are in a developmental stage with these tools and it might be good to give the do-over if the student owns up to it.
  • Self-reflect. Given that students often take shortcuts for reasons related to the course structure, review our blog post on academic integrity and AI in order to take steps to promote academic integrity and consider whether your course is designed to reflect these best practices.

What is the university’s policy regarding generative A.I.?

In a letter to the faculty on August 17, 2023, Provost Mandel made clear that, ultimately, it will be important for faculty to determine the appropriate use of AI tools in their classrooms. However, the letter also clarified the following policies for fall 2023:


“Because we will all need time to fully explore the capabilities of generative AI tools, for the Fall 2023 semester, Temple has established a blanket policy that the use of generative AI tools is prohibited for students, unless an instructor explicitly grants permission. Students will be informed of this policy before the start of the semester, and Temple’s revised Student Code of Conduct now explicitly defines unauthorized use of generative AI tools as academic misconduct. In addition, you should add a statement to your syllabus outlining your stance on students’ potential use of such tools and discuss your decision with your students. These model syllabus statements, which can be adopted or modified as needed, have been developed to assist you in articulating this decision.

Temple has acquired a university-wide license for Turnitin’s recently released tool designed to detect AI-generated text (this is a different tool than the plagiarism detection tool that Temple has used for years). This AI-detection tool should be used with an abundance of caution, as it is often inaccurate and does not allow the ability to verify its results. However, it appears to be better than other tools available at this time. For Fall 2023, Turnitin’s AI detector tool will be available for you if you wish to use it. You must request access and complete a brief asynchronous training before the tool will be enabled. The training will provide guidance on how to interpret inaccurate and ambiguous results. Please login at TUhelp and select the "Turnitin AI Detector" option to submit your request.”

 

If you have questions about these policies, feel free to reach out to Stephanie Fiore at .