By TIASIA SAUNDERS
Capital News Service
The rapid advancement of artificial intelligence technologies has prompted universities across Maryland to adopt AI policies quickly. An examination of academic integrity policies shows that enforcement may be inconsistent, with instructors given significant discretion in applying these guidelines, according to experts.
While many universities acknowledge that AI detection tools are unreliable, fewer clearly define what constitutes evidence of AI misconduct, interviews with campus officials show.
"Because AI is a new and evolving technology, the larger challenge we have experienced has been when faculty have been unclear or vague in their messages around usage of AI tools, leading to a gray area where students may have needed to make assumptions," Pavan Purswani, interim assistant dean of students at the University of Baltimore, said.
At several Maryland universities, including the University of Maryland, University of Maryland, Baltimore, University of Maryland Eastern Shore, Loyola University Maryland, University of Baltimore and Morgan State University, guidelines caution against relying on AI detection tools as definitive proof of misconduct, according to guidelines and policies reviewed by CNS.
Instead, the universities advise that such tools be used only as indicators and not as the sole basis for disciplinary decisions, emphasizing that instructors should consider additional context and communicate clearly with students about how AI tools are evaluated.
Across the Maryland university policies reviewed, AI-related cases are generally addressed under broader academic integrity frameworks rather than AI-specific standards, with determinations about sufficient evidence often left to faculty judgment.
As a result, the type and threshold of evidence can vary significantly from case to case.
"We found it was really kind of a losing battle to define what constituted AI misconduct, and that what we needed was a much broader reckoning of how we define misconduct to begin with," Katie Grantz, the provost and dean of faculty at St. Mary's College of Maryland, said.
She added that St. Mary's now requires every syllabus to include an AI policy, emphasizing that expectations may vary by instructor and discipline, but that students must be clearly informed of those rules in advance.

The reliance on instructor discretion is reflected across multiple Maryland universities, where policies often grant professors broad authority to define acceptable AI use and determine whether a violation has occurred.
A review of academic integrity policies across Maryland universities shows that in some cases, faculty may resolve concerns informally with students; in other cases, concerns may be escalated through formal misconduct processes, creating a system where similar behavior can result in different outcomes.
Craig Farmer, the assistant director of student conduct at Johns Hopkins University, explained that when students engage in similar behavior, how a case is initially handled can vary widely depending on the instructor. Some faculty may treat a violation as minor and assign a single charge, while others may pursue multiple charges or formal action.
"It's quite possible that if two students do the same thing, one might receive one charge while another receives three," Farmer said, adding that their office works to ensure outcomes are ultimately consistent.
At St. Mary's College of Maryland, Loyola University Maryland and Johns Hopkins University, faculty are generally expected to formally report or initiate misconduct proceedings when violations are identified. In contrast, at Towson University, Bowie State University and Frostburg State University, policies allow instructors greater discretion, enabling them to address concerns directly with students or to decide whether to escalate cases to formal misconduct processes.
All of the schools reviewed have published AI guidelines to provide suggestions on how to navigate using generative AI for schoolwork.
The University of Maryland requires instructors to define how AI can be used in their courses, and students are expected to cite the use of AI tools properly. The university also emphasizes transparency and human oversight when using generative AI tools.
"Our code of academic integrity does not have a rule saying that AI use is prohibited," said James Bond, assistant dean of students and director of student conduct. "Our code speaks to five different types of violations: cheating, facilitation of academic misconduct, fabrication, plagiarism and self-plagiarism."
Inconsistent classroom policies can create uncertainty for students about what is permitted across courses and may lead to different interpretations of similar behavior, said Jessica Stansbury, founding director of the Center for AI Learning and Community-Engaged Innovation at the University of Baltimore.
"This inconsistency creates confusion of expectations for students, and more importantly, a stigma of AI use," she said, adding that conflicting classroom rules can discourage open discussion about how students use the tools.
At some Maryland colleges, such as St. Mary’s College of Maryland and Salisbury University, faculty have discussed creating standardized frameworks to define and evaluate AI use in academic work.
These approaches include developing universal scales to distinguish between acceptable use and misconduct, aiming to reduce ambiguity across courses.
"We're looking at adopting a universal AI scale, like a zero-to-six or red-to-green system, that would be task-specific and allow instructors to choose different levels of use," Grantz said.
These conversations reflect a broader shift in how colleges are approaching AI in education, moving away from rigid prohibitions and toward more adaptive, guidance-based systems. As AI tools become increasingly embedded in everyday academic work, universities are being pushed to rethink not only how misconduct is defined, but how learning itself is assessed.
"We as universities should accept the fact that now AI tools are ubiquitous. They're everywhere. I believe we should be teaching students how to use AI responsibly. We should be finding different ways to integrate AI into lesson plans while also being creative and strategic with how we are challenging our students to think critically as well," Farmer said.
