It’s here. The European Union (EU) has officially declared AI an important constituent of its economic growth and has put forth rules and guidelines for building “trustworthy” AI systems. Does this mean the EU is trying to instill “human values” into AI? I don’t think so. The requirements defined for AI ethics are still very much system oriented.
Can We Really Trust AI?
Consider this problem:
An AI-controlled car’s brakes fail and it cannot stop. Six people are crossing the street ahead, and there is one human passenger in the car. Will the car swerve to the side and sacrifice its passenger, or continue along its path and hit the six people? Is there a third option? What would AI choose?
I’ve read many articles and blog posts, but I still haven’t found an answer to this problem.
A common mode of thinking that data scientists follow in machine learning/deep learning is heuristic thinking: solving a problem with a solution that is good enough within a reasonable time frame. The trouble is that the focus becomes too narrow; we get stuck on one aspect of a problem and ignore others, which results in solutions with cognitive biases. If you have read (or listened to) the fictional medical thriller “Cell” by Robin Cook, you’ll notice that the deep learning AI called iDoc uses heuristic learning and makes its own decisions based on data. While that is an extreme fictional story, there are elements we can apply to real life as ground rules for developing AI solutions that serve the greater good of humankind.
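To make “heuristic thinking” concrete, here is a minimal sketch using the classic coin-change problem. The denominations and amount are hypothetical, chosen to show how a greedy heuristic that is fast and usually “good enough” can still miss the best answer because its focus is too narrow, which is the same failure mode described above.

```python
def greedy_change(denominations, amount):
    """Heuristic: always take the largest coin that fits.
    Fast, but its narrow focus on the locally best choice
    can miss the global optimum."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

def optimal_change(denominations, amount):
    """Exhaustive dynamic programming: always finds the fewest coins."""
    best = {0: []}
    for a in range(1, amount + 1):
        candidates = [best[a - d] + [d] for d in denominations if a - d in best]
        if candidates:
            best[a] = min(candidates, key=len)
    return best.get(amount)

# With denominations {1, 3, 4} and amount 6, the heuristic returns
# [4, 1, 1] (3 coins) while the true optimum is [3, 3] (2 coins).
print(greedy_change([1, 3, 4], 6))   # [4, 1, 1]
print(optimal_change([1, 3, 4], 6))  # [3, 3]
```

The heuristic is not wrong in general, but it silently fails on certain inputs, which is exactly why AI solutions built on “good enough” shortcuts need testing across many scenarios.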
AI Ethics According to The EU
Below is a summary of the seven golden rules of ethical AI from the EU guidelines. The European Commission states that these requirements must be met for an AI system to be deemed trustworthy.
- Human Agency and Oversight: While AI can provide a supportive structure that increases human efficiency and effectiveness, humans should always be in the loop and in command during the decision-making process.
- Technical Robustness and Safety: AI needs to be reliable, recoverable, explainable, and reproducible/auditable in its decision making. I have written in prior blogs about the importance of explainability in AI decision making.
- Privacy and Data Governance: Being a data governance practitioner myself, I crave customer discussions that include topics such as data quality, lineage, data integrity, and auditability of data. AI data sets should be no different given that AI lives in the world of big data.
- Transparency: There are a few transparency attributes to consider: the data, the AI model, and the human interaction. Users should know when they are talking to an AI system, and they should understand its limitations.
- Diversity and Fairness: As I mentioned earlier, there should be no unfairness or bias in any AI decision making. This can be addressed in a couple of ways: provide AI with as much (and as diverse) training data as possible before taking it mainstream, knowing well that AI needs to be trained; and perform adequate testing of AI solutions, including unbiased user acceptance testing that runs through multiple known scenarios.
- Societal and Environmental Well-being: AI is no longer a buzzword. It has been mainstream for a while now, and it’s on us (the developers) to ensure that AI is created for a good cause, to serve humankind, and not to misuse its power.
- Accountability: Auditability, explainability, and accountability are the three main pillars of AI that ensure responsible outcomes and appropriate assessment of the accuracy of algorithms.
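The scenario-based bias testing described under Diversity and Fairness can be sketched in a few lines. This is a minimal illustration, not part of the EU guidelines: the groups, outcomes, and the 0.8 threshold (the common “four-fifths rule” from US employment practice) are all illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs from a model under test.
    Returns the approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Ratio of the lowest to highest group selection rate.
    A ratio below the threshold flags potential bias for review."""
    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= threshold

# Hypothetical user-acceptance-test outcomes: (group, model approved?)
outcomes = [("A", True), ("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]
ratio, fair = disparate_impact(outcomes)
print(f"disparate impact ratio = {ratio:.2f}, passes = {fair}")
```

Here group A is approved 75% of the time and group B only 25%, so the ratio is 0.33 and the check fails, which is precisely the kind of signal that should trigger a review before an AI solution goes mainstream.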
Trustworthy AI will Create Growth
With the EU guidelines already underway and President Trump’s Executive Order on Maintaining American Leadership in Artificial Intelligence, AI promises to drive economic growth, embrace human emotions, have an impact on security, and improve our quality of life.
I look forward to being a part of this industrial revolution!