
A Faculty Guide to A.I.

Thanks to the introduction of easily accessible generative artificial intelligence tools, our information environment and even the way we do our work are changing rapidly.

It can feel overwhelming, and our tendency as faculty might be to hope the excitement dies down and does not impact our classes in any lasting way. However, we know that these tools are already being used in industry, education, health care, and elsewhere, so we must think about our responsibility as educators to prepare students for a world of work that leverages these tools’ power. Any new technology also brings promise with it, opening up novel ways to reach students and help them learn more deeply. And we don’t believe generative AI is going away anytime soon. Engaging with these tools so that we understand what they can do, how they might impact teaching and learning, and how we can move forward productively is therefore the only path that makes sense. We’ve put together this guidance to help you and your students navigate what might feel like an unfamiliar landscape, and we encourage you to dive in. Information is power, and we are putting that power in your hands.

If you need further assistance, please do not hesitate to email cat@temple.edu or schedule a one-on-one consultation with one of our experts.

Last update: 19 September 2024.

What is generative A.I.?

Generative AI tools (such as ChatGPT, Bard, Bing, DALL-E, etc.) are a type of artificial intelligence that can create surprisingly coherent text and images. Text generators are built on large language models, which can write and converse with users by drawing on an enormous corpus of text from a variety of sources—including books, web texts, Wikipedia, articles, internet forums, and more—on which they have been trained. These models are growing progressively larger: GPT-3, for instance, has 175 billion parameters, and its successor, GPT-4, is reported to be far larger still. Generative AI does not have cognition; that is, it can’t think. Instead, much like the autocomplete function in applications you use every day, it works by finding and replicating the most common patterns of language in its training data. Note, however, that AI technology is constantly evolving and improving, and we cannot be sure what its future capabilities will be. For an expanded explanation of how these technologies work, see the article “A curious person’s guide to artificial intelligence.”
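The autocomplete analogy can be made concrete with a toy example. The Python sketch below (the training text and function name are invented for illustration) builds a tiny "bigram" model that simply predicts the word most often seen after the current one. Real large language models learn vastly richer patterns from billions of documents, but the underlying idea of continuing text with statistically likely words is the same.

```python
from collections import Counter, defaultdict

# A deliberately tiny "language model": it counts which word most often
# follows each word in its training text, then predicts that word.
# (Illustrative only -- real models learn far more complex patterns.)
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the mouse"
)

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat"
print(predict_next("sat"))  # -> "on"
```

Notice that the model has no understanding of cats or rugs; it only reproduces the statistical patterns of its training data, which is also why such systems can confidently produce fluent but false output.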

 

Learn more about what generative AI is, how it works and how to access AI tools.

What are the pros/cons of using A.I.?

Generative AI tools have the potential to revolutionize the way we interact with technology, opening new doors to faster, more efficient, and more creative learning processes. But that accelerated pace can run away with us unless we are aware of the downsides of these tools.


Learn more about possible benefits and pitfalls of generative AI.

Equity

Generative AI tools have the potential to widen the institutional performance gaps that impact learning in higher education, but also the potential to create a more equitable learning environment. Generative AI is prone to error and can perpetuate biases and stereotypes. Bias may also arise in faculty assessments of which students are making unauthorized use of AI. Students may over-rely on the tools and neglect to develop skills critical to success in higher education and beyond. Inequitable access to broadband internet and to prior education in prompt writing and related skills may exacerbate existing institutional performance gaps. However, generative AI can also be thought of as an assistive technology that will support the learning of many of our students. Tools such as ChatGPT can provide useful suggestions for succeeding in college or getting started on assignments, offer immediate feedback on writing, and help students develop communication and planning skills.
 

Learn more about using generative AI to promote equity in your classroom.

Mis/disinformation

As the efficacy and availability of generative AI tools advance, both we and our students will face a variety of information-related challenges. Generative AI can be used to automate the generation of online misinformation and propaganda, flooding our information environment with disinformation, distracting from accurate information, and increasing skepticism toward content from credible scholarly and journalistic sources. Source material used for training data, the design of a model’s algorithm, data labeling processes, product design decisions, and policy decisions all contribute to the possible replication of biases and unfair stereotypes (Ferrara, 2). Content generated by AI tools can seem accurate but be entirely made up, a phenomenon known as AI hallucination. Our task as educators is to prepare our students to navigate an information environment characterized by the use of generative AI by inoculating them against disinformation, helping them develop the skill and habit of verifying information, and building a conception of the components of a healthy information environment.


Learn more about inoculating our students against mis- and disinformation in the age of AI.

 

How can I deal with A.I. in my class?

Decision tree 

Instructors will have to decide whether, and to what extent, students will be permitted to use generative AI tools to support their learning. To help you make this decision for your classes, review our “Should I Allow My Students to Use Generative AI Tools?” decision tree.

Make it your friend 

One approach to addressing generative AI in your classes is to encourage students to use the tools to meet course learning goals, for example, by having students experiment with prompt writing or using text generation tools as part of their writing process. 

Learn more about productively using generative AI tools to support student learning in your classes.

Critical examination

AI stands to have a significant impact on how we live and work in coming years. One approach to addressing the existence of generative AI in our classes is to design assignments and activities that take AI as an object of critical inquiry.

Learn more about adopting AI as an object of inquiry.

Creatively Work Around AI

If your goal is to disallow or discourage students from using generative AI tools, it will be necessary to design assessments and class activities that are either difficult to complete with the use of AI tools or that are completed in a context in which it is difficult to make use of these tools.

Learn more about strategies for creatively working around AI.

And don't miss our series on using PI (Pedagogical Intelligence) to manage AI.

Talking to your students about AI 

Whatever approach we decide to take in our classes with respect to the use and study of generative AI tools, it will be critical to talk to our students about AI and learning. It is important that we’re transparent with students about the choices we’ve made and that we speak with rather than at our students about the impact of AI on their learning and lives. 

Learn more about talking with your students about AI.

Syllabus guidance

Once you have decided what authorized and appropriate use of AI tools in your classroom will look like, be sure to clearly indicate your expectations in your syllabus. The CAT has drafted syllabus language to get you started.

Detection tools 

Faculty may disallow the use of generative AI tools or allow their use only in limited ways. How, then, are we to identify when students are using AI inappropriately? Many companies claim to have created tools that can consistently and accurately identify text or images generated or modified by AI, but as yet no tool can reliably differentiate AI-generated from human-generated content. If we choose to use an AI detection tool, we must do so with extreme caution. Temple currently recommends no tool other than Turnitin’s AI detector. However, even Turnitin’s tool raises concerns. For more on this issue, see Evaluating the Effectiveness of Turnitin’s AI Writing Indicator Model.

Learn more about AI detection tools.

What do I do if I think students are using A.I. inappropriately?

There may be instances where you strongly suspect a student has inappropriately used generative AI to complete work for your course. In such cases:

  • Don’t take it personally! Cheating can often feel like a personal attack and a betrayal of all the work you’ve put into your teaching. Remember that a student’s decision to use AI to take shortcuts is probably about them, not about you. 
  • Check your biases. Is your suspicion of your student’s work well-founded? Would you have the same concerns if the work had been handed in by other students? 
  • Beware of falsely accusing students outright. Our ability to accurately identify the use of AI generative tools at present is quite weak.  
  • Ask the student to meet with you. Simply say something like “I have some concerns about your assignment. Please come to see me.” 
  • When you meet with the student, try not to be confrontational (remember that you may not be certain they used AI in an unauthorized manner). Instead, start by asking questions that give them a moment to tell the story of their writing process, such as: How were you feeling about the assignment? What do you think was challenging about it? Why don’t you tell me what your process was for getting it done? If research was involved, ask what sources they used. If they were writing about something they were supposed to read or visit (an art exhibit, for instance), ask pointed questions that get at whether they actually engaged in that activity. 
  • Then state your concerns: “I’m concerned because the writing in this assignment doesn’t seem to match the writing in your other assignments, and the AI detection tool flagged it as AI-written.” Walk through any inconsistencies, odd language, repetition, or hallucinated citations with the student. 
  • Use developmental language. Remember that your student may have used generative AI without realizing it is considered cheating, or there may have been factors that made them feel that they needed to cheat. A conversation with your student can be a learning opportunity for them. 
  • Discuss with colleagues in your department what a reasonable penalty might be for unauthorized use of generative AI. Consider also when it might be necessary to contact the Office of Student Conduct and Community Standards. (Remember, however, that speaking with your student is always the first step before taking further action.) If you conclude that the student cheated, you’ll have to decide whether to allow them to complete the assignment again on their own (perhaps with a penalty) or to offer no chance to right the ship. Consider that we are in a developmental stage with these tools, and it might be good to allow a do-over if the student owns up to it.
  • Self-reflect. Given that students often take shortcuts for reasons related to the course structure, review our blog post on academic integrity and AI in order to take steps to promote academic integrity and consider whether your course is designed to reflect these best practices.

What is the university’s policy regarding generative A.I.?

In July of 2024, the Office of the President issued Guidelines for Generative Artificial Intelligence.

Additionally, Provost Mandel has stated that it will be important for faculty to determine the appropriate use of AI tools in their classrooms. He has provided a blanket policy for Temple University stating that the use of generative AI tools is prohibited for students unless an instructor explicitly grants permission. Students are informed of this policy before the start of the semester, and Temple’s revised Student Code of Conduct now explicitly defines unauthorized use of generative AI tools as academic misconduct. In addition, the provost has advised faculty to add a statement to their syllabi outlining their stance on students’ potential use of such tools and to discuss that decision with their students. These model syllabus statements, which can be adopted or modified as needed, have been developed to assist you in articulating this decision. 

Temple has acquired a university-wide license for Turnitin’s AI Detector tool, designed to detect AI-generated text. This AI-detection tool should be used with great caution, as it is often inaccurate and provides no way to verify its results. Turnitin’s AI detector tool will be available for faculty who wish to use it. Faculty must request access and complete a brief asynchronous training before the tool will be enabled. Please log in at TUhelp, click on the “Request Help” button, and select the “Turnitin AI Detector” option to submit your request. 

As generative AI keeps evolving, there is increasing interest in using AI notetaker tools. In response, in September of 2024, Temple Information Technology Services (ITS) issued guidelines specifically addressing AI Notetakers.

If you have questions about these policies, or if you would like to learn more about generative AI in the teaching environment, please visit the resources available on the AI Faculty Guide website and feel free to reach out to Stephanie Fiore with questions. 

How Can AI Be Used to Take Notes in a Zoom Session?

Temple University is enabling certain functions of Zoom AI Companion. This tool can generate a detailed summary of your meeting, organize your meeting summary by topic, and more. It is important, however, that faculty and staff who wish to use the tool do so responsibly, following university guidelines, which are clarified in the document linked below. Also addressed are alternative AI note-taking tools such as Otter.ai and Reader.ai, which are prohibited at Temple University.