When we talk about trustworthy artificial intelligence, we cannot leave out the aspects of diversity, non-discrimination, and fairness.
We already know that the data used to train artificial intelligence systems can carry various social and historical biases, which can subsequently lead to prejudiced or discriminatory behaviour. These biases cannot always be avoided entirely, so it is important to address them as early as the system design phase and during the collection and annotation of input data. However, our own attitudes and prejudices also contribute to unfair outputs of AI systems. It is therefore good practice to cultivate diversity of opinions and of cultural and social perspectives, and to maintain a free exchange of views within teams.
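To make the data-stage part of this concrete, the sketch below shows one simple way to screen training data for group disparities: a demographic-parity check that compares positive-label rates across the groups of a protected attribute. The column names, the example data, and the helper function are hypothetical, and a single metric like this is a screening signal, not a verdict of discrimination.

```python
# A minimal sketch of a demographic-parity check on training data,
# assuming a pandas DataFrame with hypothetical columns "gender"
# (protected attribute) and "approved" (binary label).
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           protected: str,
                           label: str) -> float:
    """Difference between the highest and lowest positive-label
    rate across the groups of a protected attribute."""
    rates = df.groupby(protected)[label].mean()
    return float(rates.max() - rates.min())

# Hypothetical example data.
data = pd.DataFrame({
    "gender":   ["f", "f", "f", "m", "m", "m", "m", "f"],
    "approved": [  1,   0,   0,   1,   1,   1,   0,   1],
})

gap = demographic_parity_gap(data, "gender", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # 0.00 would mean equal rates
```

A gap near zero means the groups receive positive labels at similar rates; a large gap is a prompt to revisit how the data were collected and annotated, or whether the label itself encodes a historical bias.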
Artificial intelligence systems exist to serve people, not the other way around. It is therefore important to pay attention to communication with users and other affected persons, regardless of their age, gender, or professional skills. Last but not least, we must not forget people with disabilities, who should not face discrimination when using AI systems.
For artificial intelligence systems to meet the requirements of fairness and justice, multiple groups of stakeholders need to be involved in communication throughout the system's entire life cycle. Consultations with those affected by the system should be regular and systematic, and it is useful to keep asking for feedback even after the system has been put into production. The long-term, continuous involvement of all affected parties can not only bring new perspectives on the added value of AI systems, but also make those systems more responsive to specific needs and limitations.