
A Faculty Guide to A.I.

Thanks to the introduction of easily accessible generative artificial intelligence tools, our information environment and even the manner in which we do our work is changing rapidly. 

It can feel overwhelming, and our tendency as faculty might be to hope it dies down and does not impact our classes in any lasting way. However, we know that these tools are already being used in industry, education, health care, and elsewhere. Therefore, we must think about our responsibility as educators to prepare our students for a world of work that leverages these tools’ power. Any new technology also brings promise with it, opening up novel ways to reach students and help them learn more deeply. Finally, we don’t believe it is going away anytime soon. Engaging with these tools so that we understand what they can do, how they might impact teaching and learning, and how we can move forward productively is the only path that makes sense. We've put together this guidance to help you and your students navigate what might feel like an unfamiliar landscape, and we encourage you to dive in. Information is power, and we are putting that power in your hands.

If you need further assistance, please do not hesitate to email cat@temple.edu or schedule a one-on-one consultation with one of our experts.

Last Update: 19 December 2024

What is generative A.I.?

Generative AI (such as ChatGPT, Bard, Bing, Dall-E, etc.) is a type of artificial intelligence capable of creating surprisingly coherent text and images. Text generators are large language models that can write and converse with users by drawing on an enormous corpus of text from a variety of sources—including books, web texts, Wikipedia, articles, internet forums, and more—on which they have been trained. These models are growing progressively larger: for instance, GPT-4, the successor to GPT-3, is reported to be substantially larger than its predecessor, which has 175 billion parameters. Generative AI does not have cognition; that is, it can’t think. Instead, much like the autocomplete function in applications you use every day, it works by finding and replicating the most common patterns of language in its training data. Note, however, that AI technology is constantly evolving and improving, and we cannot be sure what its future capabilities will be. For an expanded explanation of how these technologies work, see the article “A curious person’s guide to artificial intelligence.” We also suggest familiarizing yourself with Temple’s Guidelines for Generative Artificial Intelligence (GenAI) for information on:

  • privacy and confidentiality;
  • copyright and intellectual property;
  • accuracy and appropriateness;
  • equity and opportunity;
  • transparency; and
  • other principles related to these technologies.

Learn more about what generative AI is, how it works and how to access AI tools.
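To make the “autocomplete” comparison above concrete, here is a deliberately tiny sketch in Python. This is our own illustration, not how any of the products named above are actually implemented: real models learn statistical patterns across billions of documents, but the core idea of predicting a likely next word can be shown with simple counts.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": predict the word that most often follows the
# current word in a tiny training corpus.
corpus = (
    "the cat sat on the mat . the cat ate . the dog sat on the rug ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most common continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it follows "the" most often here
```

A real language model works with learned probabilities over a vast vocabulary and long contexts rather than raw bigram counts, but the principle is the same: the output is the statistically likely continuation, not the product of understanding.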

What are the pros/cons of using A.I.?

Generative AI tools have the potential to revolutionize the way we interact with technology, opening new doors to faster, more efficient, and more creative learning processes.  But that accelerated pace can run away with us unless we are aware of the downsides of these tools.

Learn more about possible benefits and pitfalls  of generative AI.

Equity

Generative AI tools have the potential to widen the institutional performance gaps that impact learning in higher education, but also the potential to create a more equitable learning environment. Generative AI is prone to error and can perpetuate biases and stereotypes. Bias may also arise in faculty assessments of which students are making unauthorized use of AI. Students may over-rely on the tools and neglect to develop skills critical to success in higher education and beyond. Inequitable access to broadband internet and to previous education that promoted skills in prompt writing and related skills may exacerbate existing institutional performance gaps. However, generative AI can be thought of as an assistive technology that will support the learning of many of our students. Tools such as ChatGPT can provide useful suggestions for succeeding in college or getting started on assignments, offer immediate feedback on writing and help students develop communication and planning skills.

Learn more about using generative AI to promote equity in your classroom.

Mis/disinformation

As the efficacy and availability of generative AI tools advance, both we and our students will face a variety of information-related challenges. Generative AI can be used to automate the generation of online misinformation and propaganda, flooding our information environment with disinformation, distracting from accurate information and increasing skepticism toward content generated by credible scholarly and journalistic sources. Source material used for training data, the design of a model’s algorithm, data labeling processes, product design decisions and policy decisions all contribute to the possible replication of biases and unfair stereotypes (Ferrara, 2). Content generated by AI tools can seem accurate but be entirely made up, a phenomenon known as AI hallucination. Our task as educators is to prepare our students to navigate an information environment characterized by the use of generative AI by inoculating them against disinformation, helping them develop the skill and habit of verifying information, and building a conception of the components of a healthy information environment.

Learn more about inoculating our students against mis- and disinformation in the age of AI.

Productive Struggle and Deep Learning

In a series of blog posts on the impact of AI on student reading skills, Marc Watkins writes, “Unchecked, the lure of frictionless reading could produce a profoundly atrophied culture of surface-level ideas rather than exploring them in depth.” As noted above, AI tools can function as assistive technologies that support students in meeting course goals. However, if used uncritically, they can reduce productive struggle with course content and skill development. While an AI tool may be able to quickly generate code that performs a task, answer a difficult mathematical problem or summarize a challenging text, oftentimes the learning comes in grappling with the algorithm, working through the problem, or synthesizing the ideas in a text. Allowing a tool to step in and complete the task, or to quickly outline for the student the exact steps to take, may limit the learning that would have come from working through the process on one’s own. There are ways to use generative AI to provoke deep learning and valuable skill building (see What are some approaches to using or working around AI in my class?); however, for this to happen, these tools must be integrated into our courses thoughtfully and in support of course goals.

How do I make decisions about the use of A.I.?

The speed with which generative AI tools have been emerging and improving can leave us all feeling a bit lost and overwhelmed. While your inclination may be to hide under the proverbial covers and carry on as before, it’s important to intentionally approach possible use of AI tools by your students. Approaching AI proactively rather than reactively will save stress when students use the tools in unexpected and possibly undesirable ways, and will allow you to reflect on whether these tools may actually be of service to you and your students in meeting course goals.

If you’re not sure where to begin in deciding whether, and to what extent, students will be permitted to use generative AI tools to support their learning, we offer you this decision tree as a helpful tool. As is outlined in the decision tree, determining how you will address generative AI in your classes will require you to:

  • reflect on your level of familiarity with the tools,
  • take steps to become more familiar with them if you haven’t already,
  • examine the learning goals for your course and determine whether AI can be useful in helping students to reach these goals, and
  • consider your readiness to carefully vet content that students create with the help of, or in response to, generative AI.

How do I promote academic integrity in the age of A.I.?

New methods for cheating may make us feel as if all students are cheating because they can, but the research does not support this view. For example, low-cost online essay mills are widely available to students, but only about 3% of students employ them (Rundle et al. 2023). In our interviews with students on Temple’s campus, there has been a clear mix of attitudes about AI, just as faculty are mixed on their relationship with these tools. As we navigate teaching and learning in the age of AI, it is important not to make assumptions about our students, nor to approach them in an adversarial manner. Instead, we want to take actions that help our students choose academic honesty as they complete the work for our courses.

Start By Designing a Course that Encourages Academic Honesty

Our guide to encouraging academic honesty will help you think proactively about how you can design your course to encourage the intrinsic motivation, self-efficacy, and assessment protocols that encourage students to engage in academically honest behavior. It will also provide information on how to talk to your students about academic integrity, a key strategy in the academic honesty toolkit. In the long run, our best course of action is to move away from trying to “catch” student cheating and instead think creatively about pedagogical practices that may head off cheating at the pass.  

Familiarize Yourself With the Limitations of AI Detectors

You’ve designed your course proactively to encourage academic honesty and spoken to students about academic honesty, but you still believe some of your students are using AI to cheat. What’s next? While it may be tempting to rely on AI detectors to catch incidents of unauthorized use of generative AI, this is not a viable solution, both because the detectors are not reliable and because students will find workarounds to beat them (there are already “how-to” videos posted online that teach students how to do this). A CAT/SSC assessment of Turnitin and four other AI detectors demonstrated that AI detectors are prone both to incorrectly identifying human-created content as AI-created (false positives) and to failing to identify AI-generated content (false negatives). They may also correctly identify that there is AI-generated content in a student's submission but misidentify which content was AI-created. Recent studies of AI detector tools have shown that error rates also increase significantly the more students mix their own work with the output of AI tools, modify that output, or apply prompt optimization strategies (also called prompt engineering) to it. In addition, certain AI detectors may misclassify non-native English speakers at a much higher rate than native speakers of English because of the manner in which they express themselves.
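One way to see why false positives matter is a bit of base-rate arithmetic. The numbers below are purely illustrative assumptions for the sake of the sketch, not findings from the CAT/SSC assessment or any vendor’s published rates:

```python
# Illustrative arithmetic (assumed rates, not measured figures): even a
# seemingly small false-positive rate produces many wrongful flags at scale.
students = 200          # submissions screened in a large course
fp_rate = 0.01          # assume a 1% chance an honest paper is flagged
cheating_rate = 0.05    # assume 5% of submissions actually used AI
detector_recall = 0.80  # assume the detector catches 80% of real cases

false_flags = (1 - cheating_rate) * students * fp_rate   # honest students flagged
true_flags = cheating_rate * students * detector_recall  # real cases caught

# Of all flagged papers, what fraction belong to innocent students?
share_innocent = false_flags / (false_flags + true_flags)
print(round(false_flags, 1), round(share_innocent, 2))  # prints: 1.9 0.19
```

Under these assumed numbers, roughly one in five flagged papers would belong to a student who did nothing wrong, which is why a detector score alone can never be treated as proof.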

Some faculty claim that they can tell when writing is AI generated from its style, but this claim is difficult to prove except in cases where the student produces work at a level that they should not be able to achieve or where samples of the student’s in-class work point to inconsistencies. And, of course, we may not be as good at it as we think. In recent studies (Murray & Tersigni 2024; Fleckenstein et al. 2024), instructors could not reliably identify texts generated by AI and were overconfident in their ability to distinguish between AI and student-generated writing. Students can also train the tool to write in a manner that imitates their own writing. There is some evidence, however, that detection is a skill that can be developed with training over time (Dugan et al., 2023).

Therefore, while AI detectors or our own reading of students’ work can point towards academic dishonesty, we need to exercise caution when accusing a student. The harm of accusing a student of cheating who has not actually engaged in academically dishonest behavior cannot be overstated. It can erode trust in the classroom and create unnecessary stress for students.

Make a Plan to Deal With Inappropriate AI Use

There may be instances where you strongly suspect a student has inappropriately used generative AI to complete work for your course. In such cases:

  • Don’t take it personally! Cheating can often feel like a personal attack and a betrayal of all the work you’ve put into your teaching. Remember that a student’s decision to use AI to take shortcuts is probably about them, not about you. 
  • Check your biases. Is your suspicion of your student’s work well-founded? Would you have the same concerns if the work had been handed in by other students? 
  • Beware of falsely accusing students outright as our ability to accurately identify the use of AI generative tools is weak.   
  • Ask the student to meet with you. When you meet with the student, try not to be confrontational. Instead, start by asking questions that will give them a moment to tell the story of their writing process, such as: How were you feeling about the assignment? What do you think was challenging about it? Why don’t you tell me what your process was for getting it done? If there is research involved, you can ask what research they used. If they were writing on something they were supposed to read or visit (an art exhibit, for instance), ask questions that get at whether they actually engaged in that activity. 
  • State your concerns. Point out, in a matter-of-fact manner, any inconsistencies, odd language, repetition, or hallucinated content, as well as any information you may have gotten from an AI detector. 
  • Use developmental language. Remember that your student may have used generative AI without realizing it is considered cheating, or there may have been factors that made them feel that they needed to cheat. A conversation with your student can be a learning opportunity for them. 
  • Discuss with the colleagues in your department what a reasonable penalty might be for unauthorized use of generative AI. Consider also when it might be necessary to contact The Office of Student Conduct and Community Standards. (Remember, however, that speaking with your student is always the first step before taking further action.) If your conclusion is that the student cheated, you’ll have to decide whether you allow them to complete the assignment again on their own (perhaps with a penalty) or whether you’ll give no options to right the ship. Consider that we are in a developmental stage with these tools and it might be good to give the do-over if the student owns up to it.
  • Self-reflect. Given that students often take shortcuts for reasons related to course structure, review our guide to encouraging academic honesty. Consider whether your course is designed to reflect these best practices.

How do I communicate my expectations surrounding A.I. use?

Talking to your students about AI 

Whatever approach we decide to take in our classes with respect to the use and study of generative AI tools, it will be critical to talk to our students about AI and learning. It is important that we’re transparent with students about the choices we’ve made and that we speak with rather than at our students about the impact of AI on their learning and lives. 

Learn more about talking with your students about AI.

Syllabus guidance

Once you have decided what authorized and appropriate use of AI tools in your classroom will look like, be sure to clearly indicate your expectations in your syllabus. The CAT has drafted syllabus language to get you started.

What are some approaches to A.I. in the classroom?

Make it your friend 

One approach to addressing generative AI in your classes is to encourage students to use the tools to meet course learning goals, for example, by having students experiment with prompt writing or using text generation tools as part of their writing process. 

Learn more about productively using generative AI tools to support student learning in your classes.

Peruse our selection of sample AI assignments.

Critical examination

AI stands to have a significant impact on how we live and work in coming years. One approach to addressing the existence of generative AI in our classes is to design assignments and activities that take AI as an object of critical inquiry.

Learn more about adopting AI as an object of inquiry.

Creatively Work Around AI

If your goal is to disallow or discourage students from using generative AI tools, it will be necessary to design assessments and class activities that are either difficult to complete with the use of AI tools or that are completed in a context in which it is difficult to make use of these tools.

Learn more about strategies for creatively working around AI.

For additional examples of how Temple faculty members have integrated AI into their classes, see our Faculty Adventures in the AI Learning Frontier EDvice Exchange series.

What should I do if I think students are using A.I. inappropriately?

There may be instances where you strongly suspect a student has inappropriately used generative AI to complete work for your course. In such cases:

  • Don’t take it personally! Cheating can often feel like a personal attack and a betrayal of all the work you’ve put into your teaching. Remember that a student’s decision to use AI to take shortcuts is probably about them, not about you. 
  • Check your biases. Is your suspicion of your student’s work well-founded? Would you have the same concerns if the work had been handed in by other students? 
  • Beware of falsely accusing students outright. Our ability to accurately identify the use of AI generative tools at present is quite weak.  
  • Ask the student to meet with you. Simply say something like “I have some concerns about your assignment. Please come to see me.” 
  • Use developmental language. Remember that your student may have used generative AI without realizing it is considered cheating, or there may have been factors that made them feel that they needed to cheat. A conversation with your student may serve a metacognitive function, helping them reflect upon whether their engagement with AI tools actually fosters deeper learning.
  • When you meet with the student, try not to be confrontational (remember that you may not be certain they used AI in an unauthorized manner). Instead, start by asking them questions that will give them a moment to tell the story of their writing process, such as: How were you feeling about the assignment? What do you think was challenging about it? Why don’t you tell me what your process was for getting it done? If there is research involved, you can ask what research they used. If they were writing on something they were supposed to read or visit (an art exhibit, for instance), ask pointed questions that get at whether they actually engaged in that activity. 
  • Then state your concerns, for example: I’m concerned because the writing in this assignment doesn’t seem to match the writing in your other assignments, and the AI detector tool indicated that it is AI written. Point out any inconsistencies, odd language, repetition, or hallucinated citations. 
  • Discuss with the colleagues in your department what a reasonable penalty might be for unauthorized use of generative AI. Consider also when it might be necessary to contact The Office of Student Conduct and Community Standards. (Remember, however, that speaking with your student is always the first step before taking further action.) If your conclusion is that the student cheated, you’ll have to decide whether you allow them to complete the assignment again on their own (perhaps with a penalty) or whether you’ll give no options to right the ship. Consider that we are in a developmental stage with these tools and it might be good to give the do-over if the student owns up to it.
  • Self-reflect. Given that students often take shortcuts for reasons related to course structure, review our blog post on academic integrity and AI, and consider whether your course is designed to reflect these best practices. In addition, consider whether there are ways to more effectively communicate your AI policy so that students understand why using AI may not be in their best interests for your particular course or for the activity in question.

What is the University’s policy regarding generative A.I.?

In July of 2024, the Office of the President issued Guidelines for Generative Artificial Intelligence.

Additionally, Provost Mandel has stated that it will be important for faculty to determine the appropriate use of AI tools in their classrooms. He has provided a blanket policy for Temple University stating that the use of generative AI tools is prohibited for students unless an instructor explicitly grants permission. Students are informed of this policy before the start of the semester, and Temple’s revised Student Code of Conduct now explicitly defines unauthorized use of generative AI tools as academic misconduct. In addition, the provost has advised faculty to add a statement to their syllabi outlining their stance on students’ potential use of such tools and to discuss that decision with their students. These model syllabus statements, which can be adopted or modified as needed, have been developed to assist you in articulating this decision.

Temple has acquired a university-wide license for Turnitin’s AI Detector tool, designed to detect AI-generated text. This AI-detection tool should be used with great caution, as it is often inaccurate and provides no way to verify its results. Turnitin’s AI detector tool will be available for faculty who wish to use it. Faculty must request access and complete a brief asynchronous training before the tool will be enabled. Please log in at TUhelp, click on the “Request Help” button and select the "Turnitin AI Detector" option to submit your request.

As generative AI keeps evolving, there is increasing interest in using AI Notetaker tools. In response, in September of 2024, Temple Information Technology Services (ITS) issued guidelines specifically addressing AI Notetakers.

If you have questions about these policies, or if you would like to learn more about generative AI in the teaching environment, please visit the resources available on the AI Faculty Guide website and feel free to reach out to Stephanie Fiore with questions.

How can I use A.I. in my work?

AI has the potential to assist with everyday administrative tasks. You can save time on routine tasks and focus more on engaging with students and advancing your research. For example, AI can aid you with drafting emails, announcements, and letters of recommendation. It may assist with developing your course materials and aid in ideation for other professional ventures. It can help draft agendas for department meetings or committees, and it could generate a list of frequently asked questions and answers for your course. It can even translate documents or communication materials.

However, there are a few considerations when using AI. We must be careful about privacy and data security, the financial and environmental costs of using AI, and the risk of bias in the system. It’s also important to maintain meaningful personal connections between students and faculty and to be mindful of how AI might limit in-person interactions, which may hamper student learning. Finding the right balance will help ensure that AI enhances rather than overshadows our administrative and educational efforts. As with any AI or predictive-text technology, you will ultimately want to verify all output and reword it to personalize it for the task.

Library Guide: Generative AI and Chatbots

Library Guide: AI Tools for Research

How can I use A.I. in a Zoom session?

Temple University is enabling certain functions of Zoom AI Companion. This tool can generate a detailed summary of your meeting, organize your meeting summary by topic, and more. It is important, however, that faculty and staff who wish to use the tool do so responsibly, following university guidelines, which are clarified in the document linked below. Also addressed are alternative AI note-taking tools such as Otter.ai and Reader.ai, which are prohibited at Temple University.