The use of remote proctoring has surged as colleges shifted to remote instruction during the pandemic. But not everyone is on board. Many students and even some schools have pushed back, especially objecting to automated services that rely on algorithms to watch students via their webcams and look for suspicious patterns of behavior, sending clips of questionable moments to professors for later review.
The problem, critics say, is that the systems often produce false positives, add stress to the test-taking process and invade privacy. Students with darker skin tones can prove especially tricky for the algorithms to detect and track accurately.
This week one large provider of proctoring services, ProctorU, took the unusual step of announcing that it would no longer sell an AI-only proctoring product. Instead, it will focus on its longer-standing approach of having human proctors watch exams via webcam in real time.
“We believe that only a human can best determine whether test-taker behavior is suspicious or violates test rules,” said Scott McFarland, CEO of ProctorU, in a statement issued by the company. “Depending exclusively on AI and outside review can lead to mistakes or incorrect conclusions as well as create other problems.”
Officials for the company also said they found that reviewing suspicious clips sent by the AI system was a burden that professors did not have adequate time for.
A spokesperson for Proctorio, another widely used remote-proctoring service, argued that human proctors are more of an invasion of privacy than algorithmic systems.