Engaging with Large Language Models


Course Administration

Large Language Models (LLMs) have significant generative power: they can create dozens or hundreds of requested outputs, refine those outputs to specifications, and repeat the process as often as you like. Educators have been using LLMs to supplement their assessment libraries, creating practice questions and case studies for student evaluation.

Prompting

The design of LLMs makes their output relatively variable, though some common practices encourage more reliable responses. One method is to specify five variables you want in the output (see the example after this list):

  1. Persona – Give it a role, whether a specific profession or a named individual.
  2. Task – What will you be asking it to do?
  3. Steps to Complete – Is this a multi-stage process?  Will it need to produce some language and then evaluate it for a second or third output? 
  4. Context/Constraints – Are there elements that you want to exclude or highlight in importance?
  5. Goal – What is the output supposed to look like?  What kind of audience should the output target?
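
For example, a single prompt that makes all five variables explicit (a hypothetical illustration, not an official template) might read:

    “You are an experienced instructor of introductory statistics (persona). Write five multiple-choice practice questions on sampling bias (task). Draft the questions first, then review each one for ambiguity and revise before presenting the final set (steps to complete). Avoid formulas and jargon beyond what a first-semester student would know (context/constraints). The questions should suit a low-stakes, in-class practice quiz for first-year undergraduates (goal).”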

LLMs also respond well to follow-up questions and clarification. You can keep prompting it to refine the output until you are satisfied with the response. Here is one hypothetical example of “follow-up and refine”:
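
    Initial prompt: “Explain the concept of opportunity cost for first-year students.”
    Follow-up: “That explanation is too long. Condense it to one paragraph.”
    Follow-up: “Now add one everyday example involving a student deciding how to spend an evening.”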

Prompting “Enhancement” Strategies

While the skill of “prompt engineering” is likely overstated in its long-term utility, early experimentation has revealed that certain prompting strategies result in more robust responses.

  1. Tell it to go “step by step” or “show your work,” which encourages the model to lay out its intermediate reasoning (see the example after this list).
  2. Give it stakes: “This is important to my career.”
  3. Have it follow “metacognitive steps,” such as restating the problem, planning an approach, and checking its own answer.
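
A hypothetical prompt combining these strategies might read:

    “This practice problem is important to my course, so please be careful. Solve it step by step, showing your work at each stage, and check your final answer against the original question before presenting it.”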

Building Practice Assessment Questions

An LLM’s particular strength is the nearly unlimited length and number of potential outputs, as well as its willingness to offer different outputs with each query. You can leverage this to create many multiple-choice practice questions in a short amount of time. You can also create self-testing examples, in which a question prompt follows the explanation of a concept.
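
If you want to build question banks at scale rather than one chat at a time, the same prompt can be scripted against an LLM’s API. Below is a minimal Python sketch using the OpenAI Python SDK; the model name, topics, and prompt wording are assumptions to adapt to your own course and whichever model your institution supports.

    # Minimal sketch: batch-generate multiple-choice practice questions.
    # Assumes the OpenAI Python SDK (pip install openai) and an API key in
    # the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    topics = ["photosynthesis", "cellular respiration", "osmosis"]  # your own course topics

    for topic in topics:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # an assumption; use whichever model you have access to
            messages=[
                {"role": "system",
                 "content": "You are an experienced biology instructor writing "
                            "practice questions for first-year undergraduates."},
                {"role": "user",
                 "content": f"Write 10 multiple-choice questions on {topic}. "
                            "Give four options (A-D) per question, mark the "
                            "correct answer, and add a one-sentence explanation."},
            ],
        )
        # Each run may yield different questions; review for accuracy before use.
        print(f"=== {topic} ===")
        print(response.choices[0].message.content)

As with any generated assessment item, review each question for accuracy and alignment with your learning outcomes before releasing it to students.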

Modifying Lectures

Our professions require us to operate regularly at the leading edge of our discipline and scholarly expertise. It can be challenging to translate that understanding to introductory levels for younger learners who may not yet grasp disciplinary jargon or ways of thinking. LLMs can revise the content of lectures through a particular lens or for a particular audience.
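
A hypothetical revision prompt might read:

    “Here is an excerpt from my graduate-level lecture on supply-chain resilience. Rewrite it for first-year undergraduates with no business background: define any jargon on first use, shorten the sentences, and add one relatable example. [paste excerpt]”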


Engaging with Students

In our overview section, we discussed the importance of transparency and clarity about your philosophy around LLMs and how they might be used for student learning. If you are opposed to any use of generative AI tools, be sure to express your reasons to your students and the ways in which you expect them to adhere to the course philosophy. If you do not plan on a complete prohibition, you might consider exploring these tools in concert with students.

This approach is aided by working with your students as partners in the learning experience. Ask them to explain their perceptions of LLMs and the benefits or detriments to their college learning experience. Gauge their use of the tools and under what circumstances they find them appropriate or inappropriate. 

Purdue has recently released a secure version of Microsoft Copilot, an interface built on OpenAI’s GPT-4. You and your students can access it by logging in with your campus-specific Purdue account.

Below we list several examples of active engagement with LLMs and students from university instructors.

1) Illusion of Explanatory Depth

Since LLMs generate text informed by popular usage, their output can lack explanatory depth. That gap invites an expert lens and offers a critical thinking exercise for your students. Consider generating an output that gives a plausible basic explanation of a disciplinary phenomenon but omits the specific context or analytical approaches an expert would identify. In effect, show students how the explanatory depth of a lay explanation can be shallow.
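
A hypothetical starting prompt might read:

    “Explain why the Roman Empire fell, in three paragraphs written for a general audience.”

Students can then compare the generic output against the evidence, periodization, and historiographical debates an expert account would include.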

2) Explore Rhetoric and Argument

Consider asking the LLM to take sides on an issue and make a strong or weak argument for the same concept. Analyze those arguments with students, and identify perspectives that might complicate the original positions. Since an LLM’s output differs each time it is prompted, 50 students in a class can generate 50 related but distinct outputs to evaluate. You can also ask the LLM to envision arguments between historical authors or contemporary theorists, or to generate multiple perspectives from the same author.
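
A hypothetical classroom prompt might read:

    “Make the strongest possible argument for mandatory voting, then the strongest possible argument against it. Keep each under 300 words and aim both at a skeptical general audience.”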

3) Example Topics for Evaluation

Curating our own examples and case studies can be time-consuming. Consider working with students and prompting an LLM to generate historical or local real-world events whose exploration would align with course learning outcomes.
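
A hypothetical prompt might read:

    “Suggest ten historical or local real-world events that could anchor a case study on environmental policy trade-offs, and explain in one sentence how each aligns with an introductory public policy course.”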

4) Critically Examine the LLM Itself

OpenAI, Anthropic, Microsoft, and Google are all highly secretive about their source code and the specific socio-cultural contexts that influence their outputs. As we discussed in our overview, there are ethical and environmental costs in using these tools and societal biases that are reinforced in their output.  Consider asking students to critically analyze the terms of service, privacy policy, and algorithmic processes of each LLM. Or you can ask the LLM to start that work for you.
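
A hypothetical starting prompt might read:

    “Summarize, for a college student, the key points of the following terms of service, especially anything concerning data retention, training on user inputs, and content ownership: [paste terms]”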

5) Getting Started with Writing

There is value in writing as thinking, even if an LLM can replicate the expected output for short written assignments. Even as we want students to learn disciplinary and critical thinking, and rhetoric and persuasion through the craft of writing, an LLM can help with grammar, diction, and clarity, as well as generate starting points for student arguments. One Purdue faculty member, Kate Zipay of the Daniels School of Business, says:

“students cannot use AI for any more than 10% of any deliverable {graded assignment}. They can use it to get started. I tend to think of it like Google or Grammarly; a decent tool for part of the assignment, but not the whole thing. Anything they use that is copy-and-pasted must be red text, and footnoted, and they must include the prompt used in the footnote. As they edit, they can change over the font color, but the footnote remains.”

APA and MLA both have processes for citing generative AI.

Practical Student Use

1) Thesaurus – Since LLMs are built on word connections, they make a very strong, interactive thesaurus for students. Students can not only ask for multiple synonyms, but also steer the model toward synonyms with a certain tone or subject matter (for example, “suggest five synonyms for ‘fragile’ that fit a formal, clinical register”).

2) Audio Transcription – Otter.ai is a meeting transcription tool that works well as a voice recorder and transcriber. The ChatGPT app offers voice transcription powered by OpenAI’s “Whisper” model, which can handle up to 60 minutes of audio and filter out noise and disruptions. Whisper can also translate non-English speech into English with striking accuracy. Both are excellent supplemental tools for English language learners revisiting class lectures and discussions.

3) Additional Group Member – If you have student groups, consider asking the students to think through a concept, document their thoughts and working assumptions, and then ask an LLM to critique their thinking or offer ideas they had not considered, ensuring that the LLM justifies why each idea would be beneficial (see the example prompt below).
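
A hypothetical group prompt might read:

    “Our group is designing a survey on campus food insecurity. Here are our working assumptions: [paste notes]. Critique our reasoning, point out anything we may have overlooked, and suggest two additional ideas, explaining why each would strengthen the project.”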

Purdue Supported Tools

There are many software products that are actively leveraging or claim to leverage generative AI. Purdue does not centrally support any LLM or AI tools. However, when Purdue faculty and staff innovate with AI tools, we hope to share their successes.

Currently, several Purdue faculty have pioneered a natural language processing tool called Course Mirror. This tool was supported by a Purdue Innovation Hub grant and allows for quick and useful summaries of aggregated student comments and feedback. In short, it can help you diagnose hundreds of student concerns, questions, or misconceptions very quickly.

If you have further questions about using Artificial Intelligence creatively in your course assessments or assignments, please contact innovativelearningteam@purdue.edu.
