AI Awards: Requirement #4 is about transparency

The fourth requirement calls for principles of transparency in the development, deployment and use of AI systems.


One of the key requirements for the development, deployment and use of trustworthy artificial intelligence concerns transparency. Current artificial intelligence systems, especially those based on deep learning, are often referred to as black boxes: even their own creators may not be able to determine exactly how the system arrived at a given output. This is problematic when such outputs can have a major impact on our lives. It is therefore of the utmost importance that principles aimed at achieving an adequate level of transparency are followed when developing, deploying and using such systems.

The first layer of transparency is traceability, which concerns how the data and processes that lead to specific decisions of AI systems are documented. Thanks to this documentation, we can more easily trace the basis on which the system reached its conclusions and, in the case of undesirable results (such as discriminatory or harmful decisions), trace back where exactly the system failed and try to prevent such behavior in the future.
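
What traceability can look like in practice is easiest to see on a small example. The sketch below (plain Python, standard library only; the log file name, model identifier and feature names are hypothetical, chosen purely for illustration) records each decision together with a timestamp, the model version and a hash of the exact input, so that an undesirable outcome can later be traced back to the data and model version that produced it.

```python
import hashlib
import json
import time

AUDIT_LOG = "predictions_audit.jsonl"   # hypothetical log location
MODEL_VERSION = "credit-scoring-v1.3"   # hypothetical model identifier


def log_prediction(features: dict, output, model_version: str = MODEL_VERSION) -> None:
    """Append one audit record so a decision can be traced back later."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash of the exact input, so the record can be matched to stored data
        # without copying potentially sensitive values into the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record a single (hypothetical) decision.
log_prediction({"income": 28000, "years_employed": 4}, output="application_rejected")
```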
The second important aspect is explainability. This refers to the ability to clearly explain how the outputs of artificial intelligence systems, or the human decisions based on them, are arrived at. The goal of explainability is to provide an understandable explanation of why the system decided the way it did. Such an explanation should be appropriate to the situation and tailored to the expertise of the affected party, be it a professional user, domain expert, regulator or researcher. However, better explainability can come at the cost of accuracy, so it is essential to communicate to the affected parties the trade-offs that were made for the sake of higher explainability.
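
As a minimal illustration of one form of explainability, the sketch below (Python with scikit-learn; the dataset and model are stand-ins chosen for illustration, not tied to any particular system) uses an inherently interpretable linear classifier, where the contribution of each input feature to a single decision can be read off directly from the learned coefficients. More accurate but opaque models typically require additional explanation techniques, which is exactly the trade-off mentioned above.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Toy data standing in for a real decision-making task.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

# "Explain" one prediction: for a linear model, the contribution of each
# feature is simply its (standardized) value times the learned coefficient.
i = 0
contributions = model.coef_[0] * X[i]
top = np.argsort(np.abs(contributions))[::-1][:5]

print(f"Predicted class: {model.predict(X[i:i+1])[0]}")
for j in top:
    print(f"{data.feature_names[j]:>25}: {contributions[j]:+.2f}")
```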

A key element in achieving higher transparency is therefore appropriate communication. It starts with clearly identifying the situations in which users come into contact with an artificial intelligence system. Creators should not give users the false impression that there is another human being on the other side. Users should also be able to refuse interaction with the machine and request human intervention, especially in cases where there may be fundamental impacts on their lives, health or safety.
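
As a simple illustration, the sketch below (plain Python; the function names and messages are purely illustrative assumptions, not a prescribed interface) shows a chat wrapper that discloses up front that the user is talking to a machine and lets them request a human operator at any point.

```python
def handle_message(user_message: str, first_turn: bool) -> str:
    """Hypothetical wrapper around a chatbot that keeps the interaction transparent."""
    # 1. Disclose up front that the user is talking to a machine.
    if first_turn:
        return ("Hello, you are chatting with an automated assistant. "
                "Type 'human' at any time to be connected to a person.")
    # 2. Always allow the user to opt out of the automated interaction.
    if user_message.strip().lower() == "human":
        return "Understood - forwarding this conversation to a human operator."
    # 3. Otherwise, answer via the underlying (hypothetical) AI model.
    return generate_ai_reply(user_message)


def generate_ai_reply(user_message: str) -> str:
    # Placeholder for the actual model call.
    return f"(automated reply to: {user_message})"


print(handle_message("", first_turn=True))
print(handle_message("What are your opening hours?", first_turn=False))
print(handle_message("human", first_turn=False))
```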
