AI Principles
The AI principles were created because Artificial Intelligence (AI) is a rapidly growing, much-discussed technology with an increasing impact on our daily lives. Self-learning algorithms increasingly make (business) decisions on their own and reach ever further into our private domain. Governments, financial institutions, and credit card companies base decisions on algorithms, and "computer says no" usually means the end of the discussion, with no further explanation. Not surprisingly, friction arises at the interfaces between IT, law, and ethics, and societal debate follows. After all, what becomes of ethics, morality, and ethical conduct in the era of algorithms?
What AI principles are there?
AI principles are ethical principles for the use of AI and algorithms. There are five of them:
AI principle 1: AI is not biased.
Biases can be deeply embedded in people, in data, and in algorithms, usually in that order. The data we store is not as objective as is often thought; it reflects our human values and prejudices, and those biases end up in big data as well. For example, algorithms have reportedly excluded people with a foreign last name or a certain zip code from jobs or loans. Human decisions, however, are not free of bias either, and they are harder to audit, since people can lie or discriminate without knowing it. As Andrew McAfee of MIT puts it: "If you want to eliminate bias, bring in the algorithms."
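One reason algorithms can be audited more easily than people is that their decisions can be measured. A minimal sketch of such a bias audit, using entirely hypothetical decision data grouped by zip code (the group names, data, and 0.8 threshold are illustrative assumptions, not from any real system):

```python
# Sketch of a bias audit on hypothetical decision data: compare the rate
# at which an algorithm approves applicants across groups.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's.
    Values well below 1.0 (a common rule of thumb: under 0.8) are a warning sign."""
    return rates[protected] / rates[reference]

# Hypothetical example: approvals keyed by zip-code group.
data = [("zip_A", True)] * 80 + [("zip_A", False)] * 20 \
     + [("zip_B", True)] * 40 + [("zip_B", False)] * 60

rates = selection_rates(data)
print(rates)                                      # {'zip_A': 0.8, 'zip_B': 0.4}
print(disparate_impact(rates, "zip_B", "zip_A"))  # 0.5
```

The same check is much harder to run on human decision-makers, which is the point of McAfee's quote: an algorithm's bias, once measured, can be corrected.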
AI principle 2: AI is good for people & planet.
AI should benefit as many people as possible. In the 1990s, the three P's (People, Planet and Profit) became popular, giving the planet a prominent place. By 2020, "AI for Good" initiatives were countless: companies such as Google, Microsoft and IBM, as well as the United Nations, the European Commission and national governments, all pledge to use AI only for the good of humanity and a sustainable society.
AI principle 3: AI does not harm citizens.
AI in general, and algorithms in particular, must be not only fair but also reliable, consistent, and correct. People must be enabled to critically check and verify algorithms for these properties.
AI principle 4: AI serves humans.
People should at all times be able to choose for themselves whether and how they want to transfer parts of their decision-making power to an AI system and what goals they want to achieve by doing so. It's about preserving autonomy, or the power to decide.
AI principle 5: Explainable AI.
A small group of professionals is involved in developing complex algorithms that have a major impact on the lives of many people around the world. These algorithms, such as Google's PageRank, are difficult for users to understand. This increases the demand for transparency, accountability, interpretability, and understandability where algorithms and ethics are concerned. Open source can help bring more transparency.
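Open-source sketches of such algorithms are one form that transparency can take. The core idea behind PageRank, for instance, fits in a few lines (a simplified educational sketch via power iteration, not Google's actual implementation; the three-page "web" is a made-up example):

```python
# Simplified sketch of the idea behind PageRank (not Google's production
# system): a page's score depends on the scores of the pages linking to it,
# computed by repeatedly redistributing rank with a damping factor.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing) if outgoing else 0.0
            for target in outgoing:
                new[target] += damping * share  # pass rank along each link
        rank = new
    return rank

# Hypothetical three-page web: A and C both link to B, so B ranks highest.
web = {"A": ["B"], "B": ["C"], "C": ["B"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # B
```

Publishing even a simplified model like this lets outsiders reason about *why* one page outranks another, which is exactly the kind of verifiability the principle asks for.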