Application of the Artificial Intelligence (AI) Use Policy

Supplemental Document: AI Use Policy in Relation to Other University Policies and Standards
Responsible Executive: Vice President for Information Technology and Chief Information Officer
Responsible Office: Office of the Vice President for Information Technology
Date Issued: 4/13/2026
Date Last Revised: N/A

Contacts

Clarification of Supplemental Material

Purdue Systems Security | itpolicyanswers@purdue.edu

Statement of Supplemental Material

Purdue encourages the responsible use of generative artificial intelligence (GenAI) and large language model (LLM) tools and technologies, such as OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini, in research, education, and administrative work.

As an education and research institution, Purdue supports responsible, measured experimentation with and use of new technologies. These tools have the potential to increase productivity and allow the university to create new value for the world. However, information entered into tools with which Purdue has not established an appropriate agreement may become the property of the tool provider and may be used to train the services the provider offers to others. Responsible use of AI is essential to maintaining academic integrity, data security, and institutional standards.

GenAI and LLM technologies evolve rapidly. The AI Use Policy (VII.A.5) includes requirements for the responsible use of AI tools. As AI tool capabilities and university use evolve and mature, the policy will also be evaluated and revised as needed.

AI Use Policy in Relation to Other University Policies and Standards

The AI Use Policy (VII.A.5) outlines certain limitations. These limitations relate to other university policies and standards, as listed below.

Limitation: Sensitive and Restricted Data must not be entered into AI tools without prior approval.

This includes testing and training AI tools. Approval must be granted both for the tool itself and for the specific use case. Refer to policy VII.A.5 for information on obtaining approvals.

The Acceptable Use of IT Resources and Information Assets policy (VII.A.4) provides definitions for Sensitive Data and Restricted Data. Purdue must safeguard data and comply with all relevant laws, regulations, and policies. Uploading Sensitive or Restricted Data into unapproved GenAI or LLM tools poses significant security, confidentiality, and institutional integrity risks. These systems may retain, process, and inadvertently expose sensitive information to unauthorized individuals.

The policies and standards below address types of data that are protected in some way and therefore considered sensitive or restricted:


Limitation: AI must not be the sole factor used in making personnel, award, or disciplinary decisions.

Relying on AI-generated content for critical decisions such as hiring, promotion, performance reviews, awards, or investigations could introduce bias and undermine established evaluation standards. Unit heads are responsible for determining the extent to which AI may be used to inform decision-makers in their respective areas.

The policies and standards below address various types of decision-making.

Employment and Performance

Admissions and Tuition

Disputes and Investigations

Benefits and Recognition

Agreements, Partnerships, Contracts

Delegated Authority – Authority and responsibility may not be delegated to an AI system.

Other Related Policies

History and Updates

4/13/2026: New supplemental material published in support of the Artificial Intelligence Use Policy (VII.A.5).