The human mind, susceptible to cognitive errors, tends to favour information that agrees with our worldview, or answers that match the way a problem was framed. Algorithms, it would seem, are free of such imperfections: unlike humans, they appear objective. There is even a belief that science can save us from political disagreement. We owe it to Robert Boyle, a 17th-century British natural philosopher who believed that an impartial method leading to the discovery of unquestionable truths would be the cure. But is it really so? Algorithms are written by people, and this raises the question of algorithmic prejudice: can we separate algorithms from our worldview? Is it possible to ensure that our prejudices do not permeate the algorithms we create?
Examples of algorithms used in discriminatory ways can easily be multiplied; let us mention some of the most famous. A camera from a well-known company, equipped with blink detection, cannot take pictures of some Asian users because the system does not read their eyes as fully open. Google Translate flattens the gender forms of nouns, so supposedly "gender-neutral" translations are, in practice, male. A face recognition system sees the faces of black users as the faces of gorillas. A security system flags the faces of black citizens as potential security threats twice as often as those of white citizens.
Why does this happen? Algorithms work on the basis of knowledge gathered in databases, and the databases, and the information stored in them, are created by humans. If we have a million photos of bicycles, then, when classifying the next bicycle, the algorithm compares it to that million. Voice recognition systems work similarly, comparing words with millions of words already spoken. If the database is incomplete, the algorithm has blind spots, and those blind spots become problems like the ones above. This raises the most important question: do the people who create algorithmic systems do so in the name of equality, diversity and pluralism, which are among the basic features of democracy, or in the name of their own prejudices?
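To make the idea of a blind spot concrete, here is a minimal Python sketch, with entirely invented numbers, of a nearest-neighbour classifier whose training database covers only part of the real-world variation; the misclassification comes from the data, not from any flaw in the comparison itself.

```python
# A toy 1-nearest-neighbour classifier: it labels a new sample with the
# label of the closest example in its database. All values and labels
# below are invented purely for illustration.

def nearest_label(sample, database):
    """Return the label of the training example closest to `sample`."""
    return min(database, key=lambda example: abs(example[0] - sample))[1]

# One imaginary feature, say a measured "eye openness" score.
# The database was collected from a single population: open eyes
# there always score between 0.7 and 1.0.
database = [
    (0.72, "open"), (0.85, "open"), (0.95, "open"),
    (0.10, "closed"), (0.15, "closed"), (0.20, "closed"),
]

# A fully open eye from an under-represented group might score 0.45.
# Its nearest neighbour in the database is a "closed" example (0.20),
# so the system confidently gets it wrong: the blind spot lies in the
# data, not in the arithmetic.
print(nearest_label(0.45, database))  # -> closed
```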
There is always a person or a group of people who decide what goes into the database from which the algorithm learns. The camera manufacturer thus teaches the algorithm to treat a particular shape of the open eye as the norm, and that is why some Asian users do not fit the standard. Behind each algorithm stand the values, views, culture and the whole context of the life experience of its creators. Algorithmic prejudices may arise unconsciously: culture shapes the thinking of its members so strongly that they do not realize they could think differently, and they treat their own way as the social norm. This, however, is only one side of the coin. Algorithms can just as well be written deliberately to influence human choices and decisions in order to achieve certain benefits. This gives great power, as we could see in the recent presidential elections in the USA or in the Brexit campaign in the UK.
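The same point can be made about calibration. In the hypothetical sketch below, a blink-warning threshold is derived solely from measurements collected from one group, so that group's data silently becomes "the norm", and an open eye that simply measures differently triggers a false alarm.

```python
# Hypothetical calibration step: the blink threshold is computed from
# openness scores gathered only from group A, so group A's data
# silently becomes "the norm". All numbers are invented.

group_a_scores = [0.80, 0.85, 0.90, 0.95, 0.88]

# Anything scoring below 60% of the observed mean is reported as a blink.
threshold = 0.6 * sum(group_a_scores) / len(group_a_scores)  # ~0.53

def blink_warning(score):
    """True means the camera warns: 'did someone blink?'"""
    return score < threshold

# An open eye that simply measures lower on this feature scale, say
# 0.45 for a user from an under-represented group, trips the warning.
print(blink_warning(0.45))  # -> True: a false alarm built in at design time
```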
Algorithms are therefore political, and they will remain political as long as we fail to ask, whenever we use them, who creates and controls them, and for what purpose. These and similar issues concerning the social environment of science are discussed at the Open Scientific Seminars Man – Business – Technologies, to which we cordially invite you.