Walking the Line Between Innovation and Integrity: AI, Higher Education, and the Future of Institutional Effectiveness

Posted by Jayme Kerr on May 12, 2026 3:14:42 PM

Artificial Intelligence is no longer some distant concept hovering on the horizon of software development. It is here, and it is quickly becoming a staple expectation across nearly every industry. From streamlining workflows to enhancing user experiences, AI is actively reshaping what software platforms are expected to offer. In many ways, this evolution is exciting. Institutions are increasingly looking to their technology partners not only to keep pace, but to thoughtfully navigate what comes next.

At SPOL, we recognize that expectation.


As a software platform serving higher education institutions, we understand that staying current with emerging technologies is not optional. Our partners trust us to provide innovative, forward-thinking solutions that improve efficiency, reduce administrative burden, and support stronger decision-making. AI, in many respects, has the potential to contribute meaningfully to that mission.


But higher education has always required a different level of care. The data housed within Institutional Effectiveness (IE) and Institutional Research (IR) systems is not simply operational. It reflects student outcomes, institutional priorities, accreditation efforts, strategic planning, and the decisions that ultimately shape the futures of students, faculty, and institutions alike. This work is deeply important, and with that importance comes a responsibility to move carefully.


That is why caution is not only understandable, but also necessary. Many of our partners are absolutely right to walk the AI line thoughtfully. Questions surrounding privacy, security, ethical implementation, and data integrity are not secondary concerns. They are central. Institutions deserve confidence that the technologies they adopt will protect the integrity of their work rather than compromise it.


We share that perspective, while also recognizing that as a software platform, we must strike a balance between honoring that caution and meeting the evolving expectations of modern technology. Innovation matters. Remaining stagnant is not a strategy. But neither is moving so quickly that we lose sight of what higher education truly requires from us.


So the question is not simply whether AI belongs in higher education software. The real question is where it belongs, how it should be used, and perhaps most importantly, where clear boundaries should remain.

One of the areas where we most frequently receive requests for AI assistance is the assessment process. On paper, this makes sense. Assessment can be time-intensive, repetitive, and often administratively heavy. AI may well offer opportunities to support organization, surface patterns, or reduce certain burdens.


However, assessment is not simply a process to complete. At its core, it is one of the most important spaces in higher education for critical thought, reflection, and intentional decision-making. It is where faculty and institutional leaders ask difficult but necessary questions about whether students are truly learning, whether curriculum decisions are effective, and whether institutional strategies are serving their intended purpose.

Because this process intersects so directly with the students we collectively serve, it is also one of the spaces where we believe caution matters most. When you lose the engagement component of assessment, you risk losing the very critical thinking that helps ensure the decisions being made are the right ones for students.


This does not mean AI has no role in assessment. It very well may. But there is a meaningful difference between AI as an assistant and AI as a replacement. Using AI to reduce administrative friction is very different from allowing it to diminish the human thought, expertise, and collaborative dialogue that make assessment meaningful in the first place.

That distinction is where this conversation becomes so important.

As we continue thinking through what AI could and should mean for higher education, our goal is not to rush toward every new innovation simply because it exists, nor is it to resist progress out of fear. Our responsibility is to think critically about the lines we intend to draw, and the ones we do not.


We must continue asking:
•    Does this enhance human engagement, or diminish it?
•    Does it protect institutional data with the seriousness it deserves?
•    Does it preserve the integrity of educational decision-making?
•    Does it ultimately serve students first?


This blog post is not meant to suggest that we already have every answer or a finalized roadmap in hand. Rather, it is intended to open the door to a larger, ongoing conversation.


At SPOL, we have always believed that the strongest innovations come not from making decisions in isolation, but from putting the voices, thoughts, and opinions of our partners at the forefront of our production cycles. That philosophy has long shaped how we build, refine, and evolve our platform, and it will continue to shape how we approach AI.


As we continue walking this line between innovation and integrity, we invite both our current and prospective users to think alongside us. This summer, through upcoming AI feedback sessions, we look forward to hearing your perspectives, your questions, your concerns, and your ideas.

Because in higher education, and especially in the work we do together, the future of AI should not simply be built for our community. It should be built with it.