AI in higher education and the ‘erosion’ of learning

Prof Nir Eisikovits and Jacob Burley of the University of Massachusetts Boston discuss the ethics of AI in higher education and the role of the technology in ‘cognitive offloading’.
A version of this article was originally published by The Conversation (CC BY-ND 4.0)
The public debate about artificial intelligence in higher education largely revolves around a single concern: cheating. Will students use chatbots to write their essays? Can teachers tell? Should universities ban the technology, or embrace it?
This concern is understandable. But focusing too much on cheating misses a bigger change that is already underway, one that goes beyond student misbehavior and even beyond the classroom.
Universities are embracing AI in many areas of campus life. Some uses are obvious: students use AI tools to summarize and study, teachers use them to create assignments and syllabi, and researchers use them to write code, scan books and compress hours of tedious work into minutes. Other uses are less visible, such as systems that help allocate resources, flag ‘at-risk’ students, improve course planning or support routine administrative decisions.
People can use AI to cheat or to skip assigned work. But the many uses of AI in higher education, and the changes they reflect, raise a deeper question: as machines become more capable of research and learning, what happens to higher education? What purpose does the university serve?
For the past eight years, we have been studying the ethical implications of human interaction with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the stakes of using them rise, and so do their potential consequences.
As these technologies get better at producing knowledge work – designing courses, writing papers, drafting research proposals and summarizing complex texts – they do more than make universities more productive. They risk hollowing out the ecosystem of learning and teaching that these institutions are built on and depend on.
Nonautonomous AI
Consider three types of AI programs and their impact on university life.
AI-powered software is already used across higher education in admissions review, purchasing, academic advising and institutional risk assessment. These are considered ‘non-autonomous’ systems: they perform tasks automatically, but a human stays in the loop and uses them as tools.
These technologies can compromise student privacy and data security. They can also be biased. And they are often not transparent enough to let anyone determine the source of these problems. Who has access to student data? How are ‘risk scores’ generated? How do we prevent such systems from reproducing inequality, or from treating certain students as problems to be managed?
These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities often have compliance offices, institutional review boards and governance mechanisms designed to help address or mitigate these risks, even if they sometimes fall short of these goals.
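To make the opacity worry concrete, consider a toy sketch of how a ‘risk score’ might be computed. Everything in it – the field names, the weights, the proxy input – is a hypothetical assumption made up for illustration, not a description of any real campus system:

```python
# A toy 'risk score'. Every field name and weight here is a hypothetical
# assumption for illustration, not any real advising or retention system.

RISK_WEIGHTS = {
    "gpa": -0.5,               # higher GPA lowers the computed risk
    "credits_completed": -0.1,
    "zip_code_income": -0.3,   # a proxy input that can quietly reproduce inequality
}

def risk_score(student: dict[str, float]) -> float:
    """Return an opaque weighted sum; callers see only the final number."""
    return sum(weight * student.get(field, 0.0)
               for field, weight in RISK_WEIGHTS.items())

# The output carries no explanation: a student may be flagged partly
# because of where they live, and nothing in the number itself says so.
print(risk_score({"gpa": 3.1, "credits_completed": 45, "zip_code_income": 0.4}))
```

The structural point is that an adviser or student sees only the final figure, while the weights and the proxy input stay buried in the code.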
Hybrid AI
Hybrid systems span a variety of tools, including AI-assisted tutoring chatbots, personalized feedback tools and automated writing support. They often rely on generative AI technology, especially large language models. Although human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.
Hybrid systems are increasingly forming part of everyday academic work. Students use them as writing companions, tutors, conversation partners and on-demand commentators. Faculty use them to create rubrics, draft lectures and design syllabi. Researchers use them to summarize papers, comment on drafts, design experiments and generate code.
This is where the familiar debate about cheating belongs. As students and teachers alike come to rely on these technologies for help, it is reasonable to wonder what kinds of learning might be lost along the way. But hybrid systems also raise subtler ethical questions.
One concerns transparency. AI chatbots communicate in natural language, which makes it difficult to tell when you are interacting with a human and when you are interacting with an automated agent. That ambiguity can alienate and unsettle the people on the other end of the exchange. A student reviewing test material should be able to tell whether they are talking to their teaching assistant or to a bot.
A student reading feedback on a term paper needs to know whether it was written by their teacher. Anything less than transparency in such situations risks alienating everyone involved and shifting the focus of the educational exchange from learning itself to second-guessing the tools behind it. Researchers at the University of Pittsburgh have shown that this kind of ambiguity produces feelings of uncertainty, anxiety and distrust in students. These are troubling results.
A second ethical question concerns accountability and intellectual credit. If a teacher uses AI to write an assignment and a student uses AI to write the answer, who is assessing whom, and what exactly is being assessed? If the answer is partially machine-generated, who is responsible when it misleads, discourages or embeds bias? And as AI contributes more to the synthesis of research and writing, universities will need clear norms about authorship and responsibility – not only for students, but also for faculty.
Finally, there is the important question of cognitive offloading. Offloading work to AI can reduce drudgery, and that is not inherently bad. But it can also let users skip the skill-building parts of learning: generating ideas, struggling through confusion, revising awkward drafts and learning to spot one’s own errors.
Autonomous agents
The most significant changes may come with systems that look less like assistants and more like agents. While truly autonomous technology remains aspirational, the dream of the researcher ‘in a box’ – an AI system capable of conducting studies on its own – is increasingly becoming a reality.
Agentic tools are pitched as ‘freeing up time’ for work that draws on distinctively human capacities such as empathy and problem-solving. In teaching, this may mean that faculty remain in charge in principle, while much of the day-to-day work of instruction is handed off to systems optimized for efficiency and scale. In research, the trajectory similarly points toward agents that can run the research cycle itself. In some domains, this already looks like robotic laboratories that operate continuously, automating large parts of experimentation and selecting new experiments based on previous results.
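To picture that closed loop in miniature, here is a purely illustrative sketch of an automated run-measure-choose cycle. The measurement function, the selection rule and every name in it are assumptions invented for this example, not any real ‘self-driving lab’ platform:

```python
import random

# Minimal sketch of a closed experimental loop: run an experiment,
# record the outcome, pick the next experiment from results so far.
# All logic here is a toy assumption for illustration.

def run_experiment(condition: float) -> float:
    """Stand-in for an automated experiment; returns a noisy measurement."""
    true_optimum = 0.7  # hypothetical value the loop is searching for
    return -(condition - true_optimum) ** 2 + random.gauss(0, 0.01)

def pick_next_condition(history: list[tuple[float, float]]) -> float:
    """Choose the next condition by perturbing the best result seen so far."""
    if not history:
        return random.uniform(0, 1)  # no data yet: explore at random
    best_condition, _ = max(history, key=lambda pair: pair[1])
    return min(1.0, max(0.0, best_condition + random.gauss(0, 0.1)))

history: list[tuple[float, float]] = []
for _ in range(20):  # the cycle repeats with no human choosing each step
    condition = pick_next_condition(history)
    history.append((condition, run_experiment(condition)))

print(f"best condition found: {max(history, key=lambda p: p[1])[0]:.2f}")
```

The point is structural: once both measuring and choosing are automated, the research cycle no longer needs a person at every step.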
At first glance, this may sound like a welcome gain in productivity. But universities are not just knowledge factories; they are communities of apprenticeship. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by taking part in that very work. If autonomous agents absorb many of the routine tasks that have historically served as the apprenticeship of academic life, the university may continue to produce courses and publications while quietly eroding the opportunity structures that sustain expertise over time.
The same dynamic applies to undergraduates, though in a different register. When AI systems can provide explanations, drafts, solutions and study plans on demand, the temptation is to outsource the most challenging parts of learning. To an industry pushing AI into universities, that kind of work may look ‘inefficient’, something students would be better off letting a machine handle. But it is precisely that struggle that creates lasting understanding. Cognitive psychology has shown that students grow intellectually by doing the work themselves: writing, revising, failing, trying again, sitting with confusion and reworking weak arguments. That struggle is how people learn how to learn.
Taken together, these developments suggest that the greatest risk automation poses to higher education is not simply machines substituting for particular jobs, but the erosion of the broader ecosystem of practice that has long sustained teaching, research and learning.
An uncomfortable inflection point
So what purpose do universities serve in a world where knowledge work is increasingly automated?
One possible answer treats the university as an engine for producing credentials and knowledge. On that view, the main questions are: are students graduating? Are papers being written and discoveries being made? If autonomous systems can deliver those outputs more efficiently, the institution has every reason to adopt them.
But another answer sees the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, to the mentoring structures in which judgment and responsibility are cultivated, and to educational designs that encourage productive struggle rather than engineering it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, skills and communities are created along the way. On this view, the university is meant to function as an ecosystem that builds human expertise and sound judgment.
In a world where knowledge work itself is increasingly automated, we think universities should ask what higher education owes its students, its alumni and the communities it serves. The answers will determine not only how AI is adopted, but what the modern university becomes.
By Prof Nir Eisikovits and Jacob Burley
Nir Eisikovits is a professor of philosophy and founding director of the Applied Ethics Center at the University of Massachusetts Boston. His research focuses on the ethics of war and the ethics of technology, and he has written numerous books and articles on these topics.
Jacob Burley is a junior researcher at the University of Massachusetts Boston, focusing on the ethics of emerging technologies. His work examines how artificial intelligence is reshaping human decision-making, responsibility and knowledge practices, with particular attention to the normative and epistemic challenges posed by increasingly autonomous systems.

