The question of responsibility in artificial intelligence is among the most difficult. It involves not only the complexity of AI systems and of the processes behind them, whose creation draws on many different roles and therefore blurs who is responsible for what (the so-called "many hands" problem), but also the degree of autonomy that AI systems possess. Assigning responsibility for artificial intelligence therefore requires mechanisms for verifying how and why AI systems behave in different situations and what consequences they produce. These mechanisms should be available before an AI system is deployed, throughout its operation, and even after it is retired. Responsibility in artificial intelligence is closely linked to other requirements, such as the principle of fairness, which demands that AI systems do not discriminate and that they protect the rights of all affected persons.
Auditability covers the ability to assess the algorithms, data, and processes that make up an AI system. It does not mean that all information about the system must be public, but that it should be accessible for review by responsible parties such as internal and external auditors, regulators, or courts. Auditability thus contributes to the trustworthiness and transparency of AI systems by enabling independent scrutiny, which is especially important for applications that may significantly affect fundamental rights, life, or safety.
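One way to make auditability concrete is an append-only audit trail that records enough context about each decision for later independent review. The sketch below is a minimal illustration under assumptions of our own, not a prescribed implementation: the `AuditTrail` class, its field names, and the JSON-lines file format are all hypothetical.

```python
import hashlib
import json
import time


class AuditTrail:
    """Minimal append-only log of model decisions for later audit.

    Hypothetical sketch: the field names and the JSON-lines format
    are illustrative assumptions, not a standard.
    """

    def __init__(self, path: str, model_version: str):
        self.path = path
        self.model_version = model_version

    def record(self, features: dict, output, explanation: str | None = None) -> None:
        # Hash the raw input so the entry remains verifiable without
        # storing personal data in the log itself.
        input_hash = hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest()
        entry = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            "input_sha256": input_hash,
            "output": output,
            "explanation": explanation,
        }
        # Append-only: existing entries are never rewritten.
        with open(self.path, "a", encoding="utf-8") as log:
            log.write(json.dumps(entry) + "\n")


trail = AuditTrail("decisions.jsonl", model_version="credit-scoring-1.4")
trail.record({"income": 42000, "age": 37}, output="approved",
             explanation="score 0.81 above threshold 0.75")
```

An auditor, regulator, or court can then replay such a log: each entry ties a specific model version to a verifiable input digest and the resulting decision, without exposing the underlying personal data.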
Minimizing negative impacts and reporting them concerns the ability to identify, evaluate, and report possible adverse effects of AI systems, especially to those who are directly or indirectly affected by them. This aspect also covers the protection and support of whistleblowers who raise legitimate concerns about AI systems, as well as the use of impact-assessment tools that help predict and mitigate the risks these systems pose.
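As a sketch of what such a reporting mechanism might look like in practice, the structure below captures a negative-impact report with a severity level, the affected parties, and an anonymity flag supporting whistleblower protection. The type names, the three-level severity scale, and the triage rule are hypothetical assumptions made for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1        # inconvenience, easily reversible
    MEDIUM = 2     # material harm, reversible with effort
    HIGH = 3       # harm to rights, safety, or livelihood


@dataclass
class ImpactReport:
    """Hypothetical negative-impact report for an AI system."""
    system_id: str
    description: str
    severity: Severity
    affected_parties: list[str] = field(default_factory=list)
    reporter_anonymous: bool = True   # protects whistleblowers by default


def triage(report: ImpactReport) -> str:
    """Toy triage rule: escalate anything above LOW severity."""
    if report.severity is Severity.HIGH:
        return "escalate to regulator and suspend affected functionality"
    if report.severity is Severity.MEDIUM:
        return "open internal investigation within 72 hours"
    return "log and review in the next periodic impact assessment"


report = ImpactReport(
    system_id="cv-screening-2.0",
    description="Candidates from one region are rejected at twice the base rate.",
    severity=Severity.HIGH,
    affected_parties=["job applicants"],
)
print(triage(report))
```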
Where it turns out that an AI system may adversely affect data subjects, mechanisms must be in place to provide adequate redress, or to adjust the system itself. Trade-offs are an integral part of developing, deploying, and using new technologies, including AI systems. It must be recognized that not all requirements, whether technical or non-technical, can be satisfied at once, and that some situations may require concessions in the technical solution in favor of stronger protection for the persons concerned. Such trade-offs should be clearly justified and documented, taking into account the ethical and social risks the decision may entail. If no morally acceptable trade-off can be found, one must be prepared for the last resort: refraining from developing, deploying, or using the AI system in such problematic situations.
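The requirement that trade-offs be justified and documented, and that deployment be abandoned when no acceptable compromise exists, can be sketched as a simple decision record. The `TradeOffRecord` type and the veto rule below are assumptions for illustration only, not a prescribed governance process.

```python
from dataclasses import dataclass


@dataclass
class TradeOffRecord:
    """Hypothetical record documenting one trade-off decision."""
    requirement_relaxed: str      # e.g. "prediction accuracy"
    requirement_protected: str    # e.g. "non-discrimination"
    justification: str
    morally_acceptable: bool      # judged by the accountable owner
    approved_by: str


def may_deploy(records: list[TradeOffRecord]) -> bool:
    """Last-resort rule: refuse deployment if any documented
    trade-off was judged morally unacceptable."""
    return all(r.morally_acceptable for r in records)


records = [
    TradeOffRecord(
        requirement_relaxed="prediction accuracy",
        requirement_protected="non-discrimination",
        justification="Equalized error rates across groups outweigh a "
                      "2-point accuracy loss.",
        morally_acceptable=True,
        approved_by="ethics board",
    ),
]
print("deploy" if may_deploy(records) else "refrain from deployment")
```

The point of such a record is not the code itself but the discipline it encodes: every concession is named, justified, and signed off, and an unacceptable one blocks deployment by construction.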