Artificial Intelligence Responsible and Acceptable Use
Purpose
West Texas A&M University supports the thoughtful, ethical, and innovative use of artificial intelligence (AI) to advance teaching, learning, research, and administrative operations. AI technologies are increasingly embedded in instructional tools, research workflows, administrative systems, and student-facing services, and can provide meaningful benefits when used responsibly. This document establishes acceptable use expectations for AI tools at West Texas A&M University. It is intended to enable responsible innovation while ensuring compliance with applicable laws, Texas A&M University System regulations, University policies, and institutional data-protection requirements.
Authority
This guidance is issued under the authority of West Texas A&M University Information Technology and campus leadership to operationalize Texas A&M University System Regulation 29.01.05, Artificial Intelligence, and applicable State of Texas requirements. Compliance with this guidance is required for all AI-related activities conducted in connection with University systems, services, or data. This document operates in conjunction with, and does not replace, formal University or System policies and procedures.
Scope
This guidance applies to all faculty, staff, students, contractors, vendors, affiliates, and third parties who use, develop, procure, manage, configure, or oversee AI or AI-like technologies in connection with:
- University-owned or managed information resources
- University data, regardless of classification
- Instructional, research, administrative, or student-facing activities
This includes AI tools that are institutionally procured, departmentally acquired, grant-funded, or individually accessed.
Regulatory and Policy Framework
The acceptable use of AI at West Texas A&M University is governed by:
- Texas A&M University System Regulation 29.01.05, Artificial Intelligence
- Texas Government Code Section 2054.601, Use of Next Generation Technology
- Texas Government Code Section 2054.623, Automated Decision Systems Inventory Report
- West Texas A&M University Rule 29.01.99.W1, Information Resources
- West Texas A&M University Data Classification Standard
- West Texas A&M University Acceptable Use Standards
This guidance also reflects recognized best practices, including the NIST Artificial Intelligence Risk Management Framework.
Guiding Principles
The use of AI at West Texas A&M University must:
- Support the University’s mission of teaching, research, learning, and public service
- Comply with applicable data privacy, cybersecurity, accessibility, and information security requirements
- Protect confidential, controlled, and proprietary information
- Promote transparency, accountability, fairness, and human oversight
- Preserve academic and professional integrity
AI is intended to augment human decision-making, not replace human judgment or accountability.
Acceptable Uses of AI
AI tools may be used to support University activities such as:
- Teaching and learning support
- Research, analysis, modeling, and discovery
- Administrative workflows and operational efficiency
- Student-facing communications and services
- Accessibility and assistive technologies
All use must align with this guidance and applicable University and System policies.
Data Classification and AI Use Controls
AI tools may be used with public or published University data without restriction, subject to general acceptable use requirements. Data classified as Confidential or Controlled under the University Data Classification Standard may only be used with AI tools that:
- Are covered by an approved University or Texas A&M University System contract, and
- Provide appropriate data-protection safeguards, including restrictions on data retention, training, sharing, and external access.
AI tools that lack an approved contract or required safeguards—including free, personal, or non-University-managed tools—are not approved for use with Confidential or Controlled University information.
Approved AI Tools and Institutional Oversight
Before using AI tools with University data, users are expected to:
- Follow Information Technology guidance regarding approved AI tools and permitted data use, and
- Coordinate with Information Technology when AI use involves non-public data, integration with University systems, or operational decision-making.
Information Technology coordinates AI governance and may engage Information Security, Legal Affairs, Compliance, Procurement, Academic Leadership, or Institutional Review Boards as appropriate.
Prohibited Uses of Artificial Intelligence
To protect University data, individuals, and institutional integrity, the following uses of AI are not permitted:
Unauthorized AI Tools
AI tools that lack an approved University or System contract and the required data-sharing and data-protection controls are not approved for use with Controlled or Confidential University information. This includes any AI tool, whether free, personal, or non-University-managed, that does not provide contractual assurances protecting University data (such as restrictions on data retention, training, sharing, or external access). AI detection tools are subject to the same requirements and may not be used with student records or other Confidential or Controlled information unless explicitly approved and properly contracted.
Non-Public Outputs
AI tools may not be used to generate non-public outputs without authorization, including but not limited to:
- Unpublished or proprietary research
- Legal advice or legal analysis
- Personnel decisions or evaluations
- Grading or academic assessment without approval
- Non-public instructional materials not authorized for AI use
Illegal or Policy-Violating Use
AI tools may not be used for illegal, fraudulent, deceptive, or policy-violating activities.
Contracts and Vendor Agreements
AI tools that require acceptance of vendor terms or agreements must follow University procurement and contracting processes. Individuals may not accept click-through agreements or enter into contracts on behalf of the University without delegated authority. Vendor agreements must include appropriate protections for University data and align with security and privacy requirements.
Human Oversight and Accountability
AI systems must remain subject to meaningful human oversight. Individuals and units remain accountable for outcomes in which AI assists, particularly in academic, employment, legal, or operational contexts.
Risk Management
AI risks—including bias, data quality, privacy, security, reliability, and ethical considerations—must be assessed and managed throughout the AI lifecycle. Risk management practices align with the NIST Artificial Intelligence Risk Management Framework and applicable institutional processes.
Collaboration and Support
West Texas A&M University views responsible AI use as a shared institutional effort. Faculty, staff, and students are encouraged to explore AI thoughtfully, ask questions, and engage collaboratively with Information Technology and campus leadership. When uncertainty exists regarding acceptable use, data handling, or compliance requirements, individuals are expected to consult with Information Technology before using AI tools.
Enforcement and Compliance
Violations of this guidance may result in corrective action under applicable University policies, including Acceptable Use Standards, information resources governance, and applicable student or employee conduct processes.
Related Policies and References
- Texas A&M University System Regulation 29.01.05, Artificial Intelligence
- Texas Government Code Sections 2054.601 and 2054.623
- West Texas A&M University Rule 29.01.99.W1, Information Resources
- West Texas A&M University Data Classification Standard
- West Texas A&M University Acceptable Use Standards
- NIST Artificial Intelligence Risk Management Framework
Artificial Intelligence Acceptable Use FAQ
1. Is AI allowed at West Texas A&M University?
Yes. West Texas A&M University supports the responsible and ethical use of artificial intelligence to enhance teaching, learning, research, and administrative operations. AI tools may be used when they comply with University policies, data-protection requirements, and the acceptable use expectations outlined in the AI policy.
2. Is ChatGPT banned at WT?
No. AI tools such as ChatGPT are not categorically banned. However, free, personal, or non-University-managed AI tools may not be used with Confidential or Controlled University data unless the tool is covered by an approved University or Texas A&M University System contract and provides required data protections. Use of AI tools with public or non-sensitive information is generally permitted.
3. What makes an AI tool “unauthorized”?
An AI tool is considered unauthorized for certain uses if it lacks an approved University or System contract and does not provide required data-protection safeguards, such as restrictions on data retention, training, or sharing. Authorization is based on data risk, not whether the tool is free or paid.
4. What is considered Confidential or Controlled University data?
Examples include, but are not limited to, student education records protected by FERPA, health or medical information, personnel or employment records, proprietary or non-public institutional information, and research data not approved for public release. When in doubt, treat the data as non-public and consult Information Technology.
5. Can I use AI tools with student data?
Only if all of the following are true: the AI tool is covered by an approved University or System contract, the tool is approved for the relevant data classification, and the use complies with FERPA and University policy. Personal or free AI tools may not be used with student records or other Confidential or Controlled information.
6. Can I use AI to help with grading or evaluating students?
AI tools may not be used to generate grading decisions or academic evaluations unless explicitly approved and consistent with University academic policies. Faculty remain responsible for academic judgment and student assessment.
7. Can I use AI to write or analyze unpublished research?
Use of AI tools to generate or analyze unpublished or proprietary research requires caution and, in many cases, coordination with Information Technology. AI tools without approved contracts may not be used with non-public research data.
8. Are AI detection tools allowed?
AI detection tools are subject to the same data-protection requirements as other AI tools. They may not be used with student records or other Confidential or Controlled information unless the tool is properly contracted, approved, and used in accordance with University academic and privacy requirements.
9. Can I sign up for an AI tool using a click-through agreement?
No. Individuals may not accept click-through agreements or enter into contracts on behalf of the University without delegated authority. AI tools that require acceptance of vendor terms must go through University procurement and contracting processes.
10. Who is responsible if AI produces an error or biased output?
Humans are always responsible. AI tools are intended to support, not replace, human judgment. Faculty, staff, and administrators remain accountable for decisions, outcomes, and actions in which AI is used.
11. What if I’m not sure whether my AI use is allowed?
If you are uncertain about data classification, whether an AI tool is approved, or whether your use case is appropriate, you are expected to consult with Information Technology before using the tool. Asking early is encouraged and supported.
12. Does this policy limit innovation or academic freedom?
No. The policy is designed to enable innovation within responsible guardrails. It focuses on protecting people, data, and institutional trust rather than restricting exploration or creativity.
13. What happens if someone violates the AI acceptable use policy?
Violations may result in corrective action under applicable University policies, including Acceptable Use Standards, information resources governance, and applicable student or employee conduct processes.
14. Where can I get help or more information?
Questions, ideas, or concerns about AI use should be directed to Information Technology. IT works collaboratively with faculty, staff, and students to support responsible, secure, and innovative uses of AI.
Quick Summary
- AI is allowed and encouraged when used responsibly.
- Data sensitivity determines what tools may be used.
- Free or personal AI tools may not be used with non-public data.
- Contracts and safeguards matter more than the tool name.
- When in doubt, consult with Information Technology before using AI.