Humankind always has its eyes on moving forward. How can we improve and reach new heights? What is the 'next big thing'? In recent years, thanks to large advances in processing power and computing capabilities, the 'next big thing' is Artificial Intelligence (AI), and especially Machine Learning (ML). Both terms may be wildly popular, but it's important to realize that AI and ML come with impactful pitfalls. In this blog I will provide a few points on 'Ethical Artificial Intelligence' to keep in mind.
Defining Artificial Intelligence
AI is a commonly used buzzword that turns heads across all areas (and fields) of society: from corporate to NGO and from startup to government. Everybody wants to be using AI in some way, shape or form. But do we actually know what AI is? Unfortunately, there is no widely accepted definition of Artificial Intelligence. Instead, we use it as an umbrella term that encompasses a variety of computational techniques and associated processes.
The aim of all these techniques and processes is to improve the ability of machines to perform actions that require 'intelligence': pattern recognition, computer vision and language processing. However, the definition of AI will shift as new technologies appear. Formerly innovative techniques will become routine and lose their categorization as AI, and new technologies with expanded capabilities will take their place.
Ups and downs
The difficulty is to remain objective when faced with the promise of AI. We all know that the improvements AI could bring are enormous. AI-based systems already outperform specialists in complex fields, such as medical diagnostics or due diligence for lawyers. AI essentially allows institutions to achieve more while spending less, and opens the door to all kinds of AI-driven services and the benefits that naturally follow.
And that's why corporations in every industry are racing to integrate AI into their products and insights. It is a matter of 'adapt-AI-or-die'. However, the lack of a clear definition of Artificial Intelligence, coupled with the frantic desire to incorporate it within every industry, has brought to light an important challenge. Every 'next big thing' has had to face this challenge at some moment: 'How do we handle the potential downsides if AI is not used in a responsible and ethical manner?'.
‘‘What happens to my personal data?’’
This question has also become a growing concern in the public eye. That is not surprising, as AI systems depend on huge quantities of (personal) data, with questionable impacts on the right to privacy (concerns that helped bring about the GDPR). A commonly cited framework is Fairness, Accountability, and Transparency in Machine Learning (FAT/ML), which tackles the introduction of ethics into AI systems where 'black box' algorithms are concerned. It pushes the community to address the challenges of securing non-discrimination (Fairness), due process (Accountability) and understandability in decision making (Transparency).
Tips and tricks for ethical Artificial Intelligence
Unfortunately, AI ethics is still in a burgeoning phase and therefore requires critical thinking. However, there are a few basic best-practice points to keep in mind when thinking about ethical Artificial Intelligence: not only within your team, but also within your organization as a whole.
1. Hire ‘real’ data scientists
Due to the explosion of demand in the Machine Learning sectors, the term 'Data Scientist' is being attached to a broad framework of responsibilities, expectations, and definitions. Make sure that the people you select are qualified to execute the tasks you will be assigning them, and make sure that your organization puts forward a certain ethical standard when dealing with models and clients.
2. Understand the difference between correlation and causality
ML models make predictions based on correlations and patterns extracted from the data they observe. This, however, makes it difficult to ensure that the model will be robust enough to handle changes in the data. Causal inference, by contrast, explores cause-effect relationships and considers what would have happened under different circumstances, even when faced with a lack of information.
Currently, Machine Learning focuses on prediction (using correlational patterns) and less on causality. It is therefore important to remember that correlation and causality are different, and to limit the conclusions you draw from the results of your Machine Learning model accordingly. An example: postal codes are often a proxy for ethnic background. If your model uses this information to make decisions, it may produce a bias against certain ethnic groups.
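The postal-code example can be made concrete with a small simulation. The sketch below uses entirely made-up synthetic data (the variable names and probabilities are illustrative assumptions, not real figures): even when the sensitive attribute itself is dropped from the model, a correlated proxy variable carries the historical bias through.

```python
import random

random.seed(0)
n = 10_000

# Hypothetical synthetic data: 'group' is a sensitive attribute we
# deliberately exclude from the model, but 'postal_code' is strongly
# correlated with it (90% of the time) -- a proxy variable.
group = [random.randint(0, 1) for _ in range(n)]
postal_code = [g if random.random() < 0.9 else 1 - g for g in group]

# Historical outcomes are biased against group 1 (base rates 0.6 vs 0.2);
# the postal code itself plays no causal role in the outcome.
outcome = [1 if random.random() < (0.2 if g == 1 else 0.6) else 0
           for g in group]

def rate(z):
    """Mean outcome among records with postal code z."""
    rows = [o for o, p in zip(outcome, postal_code) if p == z]
    return sum(rows) / len(rows)

# The bias survives in the proxy even though 'group' was dropped:
print(rate(0), rate(1))  # outcome rates differ sharply across postal codes
```

A model trained on `postal_code` would happily reproduce this disparity, which is why dropping the sensitive column alone is not enough.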
3. Take biases into account
In relation to the point above, data scientists should ensure that the system or model accounts for potential biases from the past. These biases often stem from policy changes, economic fluctuations and people's behavior, and accounting for them keeps the model from learning from inherently biased data and situations.
This is a step forward in creating more robust and generalizable models. For example, suppose everyone who committed a certain crime also attended a certain meeting. As long as no causal link between the crime and the meeting has been established, it might be prudent to exclude this information from the model, or at least treat it with great caution. Otherwise, another individual committing the same crime but attending a different meeting would go undetected, because the model has inherited the bias.
4. Explain, and be transparent
Be clear about possible limitations and pitfalls to make sure that everyone is on the same page about what to expect and what to look out for. This communication is critical when the predictions or outcomes of a Machine Learning algorithm will be used by individuals other than those who created it (which is often the case). It is important to develop a protocol, or at least to discuss the interpretation and use of the predictions in an unbiased manner.
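One lightweight way to put such a protocol in writing is a simple "model card": a structured record of a model's intended use and known limitations that travels with its predictions. The field names, model name and caveats below are all illustrative assumptions, sketched for the kind of model discussed in this blog:

```python
# A minimal "model card" sketch: a structured summary of a model's
# intended use and known limitations, shared alongside its predictions.
# All names and values here are hypothetical, for illustration only.
model_card = {
    "model": "benefit-fraud-risk-scorer",
    "intended_use": "Prioritise cases for human review, never auto-decide.",
    "training_data": "Historical case files; known reporting bias.",
    "limitations": [
        "Postal code may act as a proxy for ethnic background.",
        "Correlational model: scores are not causal explanations.",
    ],
    "interpretation": "A high score means 'review first', not 'guilty'.",
}

# Print the card so everyone using the predictions sees the caveats.
for field, value in model_card.items():
    print(f"{field}: {value}")
```

Even a plain dictionary like this forces the team to write the pitfalls down once, instead of relying on hallway knowledge.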
5. Develop or implement an auditing platform
The above points can seem abstract if there is no concrete framework to follow, which is why developing or using an auditing platform for your system can come in handy. It sets a certain standard by which to certify your models, from a technical, communicative and uniform perspective. For example, Totta data lab and Verdonck, Klooster & Associates (VKA) have developed het algoritmekeurmerk (the algorithm quality mark): an auditing tool to measure the reliability and effectiveness of algorithms.
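What an audit check might look like in its simplest form: the sketch below computes a common fairness metric, the demographic parity difference (the gap in positive-prediction rates between two groups). This is a generic illustration, not how any particular auditing tool works; the predictions and threshold are made up.

```python
# A minimal fairness audit check: demographic parity difference between
# two groups, given model predictions (0/1) and group membership labels.

def demographic_parity_diff(predictions, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    rates = []
    for g in (0, 1):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

# Hypothetical audit run: group 0 gets 3/4 positives, group 1 only 1/4.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
disparity = demographic_parity_diff(preds, groups)
print(disparity)  # 0.5
```

An auditing platform would run checks like this (and many others) automatically, and flag models whose disparity exceeds an agreed threshold.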
6. Educate yourself continuously
Finally, the world of data science is evolving at an ever-increasing pace, and its users have a responsibility to stay up to date with its developments. This includes educating yourself (and each other!) on new discoveries in the field of Machine Learning and Artificial Intelligence, and on developments in explainability and causality. This often adds another layer of transparency between data scientists, their models, and the clients who use those models. Totta data lab has recently developed an open AI version of their fraud model within the social benefit system that gives more insight into the model's decision making.
Keep your (ethical) eyes open
Machine Learning and Artificial Intelligence systems are used in projects that will have a lasting impact, and the above points can steer the result in an ethical direction. Remember: there is no 'one size fits all' answer to developing and ensuring ethical Artificial Intelligence. It requires critical thinking, open-mindedness, and teamwork across all the different fields involved. The above points are only a few considerations. In conclusion: keep these points in mind, but above all, keep your mind open!
 See figure 1.
 Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (1995).
 Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2004).
 Den Daas, S. (2019, May 27). Wat Zoek Jij In Een Data Scientist? [Blog post]. Retrieved from https://www.tottadatalab.nl/2019/05/27/wat-zoek-jij-data-scientist/.