AI systems should act as tools that help us make better, more informed, and more efficient decisions. At the same time, AI systems and their recommendations can also negatively affect our ability to think independently, make decisions, and take responsibility for those decisions. It is therefore important to think proactively about the impact on human autonomy and the degree of human oversight when developing and deploying AI systems. Human oversight is thus one of the key requirements for trustworthy artificial intelligence: it helps ensure that AI systems support, rather than undermine, human autonomy and decision-making abilities.
We should ask to what extent humans are able to supervise the functioning of an AI system, whether it is sufficiently clear at all times when a human is interacting with an AI system, and whether the system has been designed (intentionally or inadvertently) to create strong emotional attachment to, or even dependence on, itself. It is equally important to ensure that deploying a given AI system does not lead to over-reliance on its decisions, or even to the loss of one's own judgement and the weakening of important skills, such as those needed to perform a given job or profession. All of this can significantly affect whether such a system will be considered trustworthy.
Depending on the severity of the impacts, and also, for example, on the specific domain of deployment, different mechanisms exist for supervising AI systems. These approaches involve varying degrees and forms of human involvement in the decision cycles of an AI system: from the ability to intervene in every single step of the decision-making process, to more general interventions such as the ability to reverse a decision generated by the AI system, to the option of not using the AI system at all in a specific situation and choosing a non-automated approach instead.
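The spectrum of involvement described above (often labelled human-in-the-loop, human-on-the-loop, and human-in-command) can be illustrated with a minimal sketch. All names below (`OversightMode`, `decide`, `reviewer`) are hypothetical, chosen only for illustration; this is not a standard API or a prescribed implementation.

```python
from enum import Enum, auto
from typing import Callable, Optional


class OversightMode(Enum):
    IN_THE_LOOP = auto()  # a human approves every individual decision
    ON_THE_LOOP = auto()  # the AI decides; a human monitors and can reverse
    IN_COMMAND = auto()   # a human may decline to use the AI system at all


def decide(recommendation: str,
           mode: OversightMode,
           reviewer: Callable[[str], bool],
           use_ai: bool = True) -> Optional[str]:
    """Hypothetical oversight gate: routes an AI recommendation through
    the chosen level of human involvement. `reviewer` stands in for a
    human operator; returning None means no automated decision is made."""
    if mode is OversightMode.IN_COMMAND and not use_ai:
        # The human has opted out: fall back to a non-automated process.
        return None
    if mode is OversightMode.IN_THE_LOOP:
        # The decision takes effect only if the human approves it.
        return recommendation if reviewer(recommendation) else None
    # ON_THE_LOOP (or IN_COMMAND with the AI in use): the decision takes
    # effect immediately; the human can still reverse it afterwards.
    return recommendation
```

For example, `decide("approve", OversightMode.IN_THE_LOOP, reviewer=lambda r: False)` returns `None`, because the human withheld approval.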
In general, important decisions should be made by, or under the full supervision of, a human: AI systems are there for us, not the other way around. Conversely, the less a human can intervene in an AI system's individual decisions, the more extensive the ongoing monitoring and control of its outputs must be. The requirement for human oversight and autonomy thus also significantly affects other requirements for trustworthy AI, such as those related to transparency and explainability, or to the societal impacts of AI systems.