AI explainability, the art of explaining why a particular AI model makes the predictions it makes, is all the buzz these days. This is because, unlike traditional algorithms, we don't really understand what goes on inside AI systems: even though we can peer into these models as they operate, or log their actions, in many cases we cannot explain exactly why they make the decisions they do.
As modern AI matures and we apply it to more mission-critical systems, the need to understand these systems grows. Nobody wants to hand over critical decision making to a poorly understood black box, and that reluctance is very understandable.
If you are building a nuclear reactor, you want software that is provably correct at least 99% of the time. You don't want a system that "maybe" acts correctly. You want a system that is always correct because of the critical nature of your task.
When it comes to AI models, it's almost impossible to prove that these systems are correct because their actions are not very predictable. Even when we tune models to bring down training and test error, we cannot know how these systems will behave when faced with new cases they have never encountered before, cases they are far more likely to meet once deployed in the wild.
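As a minimal illustration of that point (a hypothetical sketch, not something from the original post), the model below scores almost perfectly on both its training and test splits, yet still produces confident predictions for inputs drawn from a region it never saw, and nothing in the train/test numbers tells us how it will behave there:

```python
# Sketch: low training and test error say little about behaviour
# on inputs from a different distribution. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two well-separated classes the model sees during development.
X = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # close to 1.0
print("test accuracy:", model.score(X_test, y_test))     # close to 1.0

# "In the wild": inputs scattered far outside the training regions.
# The model still emits confident class probabilities, and the
# train/test scores above give no hint of how trustworthy they are.
X_wild = rng.normal(0, 5, (5, 2))
print(model.predict_proba(X_wild))
```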
There are two questions we can ask about AI explainability.
1. Why do we need to burden AI systems with such a requirement?
2. If AI systems are meant to augment human decision making, why do we place more responsibility on them than we place on human decision makers?
Starting with No. 1: why do we need to burden AI systems with explainability? Having the right perspective on the true capabilities of AI systems would relieve them of that requirement.
AI, as it is practised now, is more like engineering, but not the kind of engineering required to build bridges, critical structures whose construction has reached the status of a settled science. The kind of engineering AI represents is closer to what we expect from our flat-screen TVs. Nobody dies if your flat-screen TV blacks out; you just return it to the store where you bought it and get a replacement or a refund, at least in developed countries.
Even though our flat-screen TVs are sophisticated pieces of engineering, they function without the burden of criticality. Bridges, on the other hand, have to work all the time; the burden of criticality rests on bridges and nuclear reactors, and rightfully so.
AI should be viewed more like a flat-screen TV: sophisticated but not critical. When we treat AI systems as critical because we want to apply them to domains they are not well suited for, we end up with the problem of explainability. If we apply AI to the appropriate domains, then no one needs to demand a reason for every decision it makes.
The second reason we shouldn't burden AI systems with explainability is that we do not ask people to explain their decision-making processes. Humans, like AI systems, observe data and make decisions, but they do not really know how or why they made those decisions, even though some people pretend they do.
If you ask someone about the process behind a decision, they may come up with a detailed explanation, but it is really a murky, made-up account, no matter how rigorous they try to make their reasoning look.
In reality, human decision making involves so many variables, some the individual is conscious of and many they are not, that there is no way to reveal all the variables and mental processes that lead to a particular decision.
Individuals can come up with a story that summarizes their decision-making process and genuinely believe it, but what they do not appreciate is that even the creation of that story, and its believability to themselves, is determined by a brain they have no real control over; they are merely spokespeople for an automatic brain that dictates both the story and what to believe.
The same goes for AI systems: they deal with so many variables that, for some time to come, it will be impossible to truly explain how they arrived at their decisions, so they will remain black boxes for a while.
If we place AI where it belongs, as something meant to enhance human cognition and aid the discovery of knowledge, then we will not feel so much need to explain how it arrives at its decisions; we will be content to know that it was evaluated on sufficient test data and that its error rate is very low.
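As a rough sketch of what that acceptance bar might look like in practice (the thresholds and function name here are purely hypothetical, not taken from the post), the check reduces to a held-out test set of adequate size and an error rate below an agreed limit:

```python
# Hypothetical acceptance check: enough held-out test data and a
# sufficiently low error rate, rather than a per-decision explanation.
from sklearn.metrics import accuracy_score

MIN_TEST_SAMPLES = 1_000   # assumed "sufficient" test set size
MAX_ERROR_RATE = 0.02      # assumed acceptable error rate

def meets_acceptance_bar(y_true, y_pred) -> bool:
    """Return True if the held-out evaluation satisfies the agreed bar."""
    if len(y_true) < MIN_TEST_SAMPLES:
        return False
    error_rate = 1.0 - accuracy_score(y_true, y_pred)
    return error_rate <= MAX_ERROR_RATE
```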
We will trust the AI system to do what it does best: assist a human being who is skilled in the domain the AI is being applied to, rather than trying to take the human out of the equation and put all the work on the AI system.
So, in the final analysis, we should focus on making AI systems better: getting more data, not gaming test results, and not overhyping their capabilities. Explainability is a philosophical question, and as far as current deep learning systems are concerned, it is not worth so much of our effort.