Explainability of AI Systems Without Responsibility and Accountability is Not Sufficient

by MakC, April 25th, 2023

Too Long; Didn't Read

The explainability of an artificial intelligence system is an essential first step in establishing confidence in the system, and some may argue that it is the only requirement for an AI system to be considered practical and applicable to real life. But without responsibility and accountability, explainability alone is not sufficient.
The explainability of an AI system refers to understanding how the system arrived at its results from the inputs it was given. That includes understanding the mathematical operations the system applied to the data and explaining the real-world significance of a series of such operations. As discussed in "Transparency and Explanation in Deep Reinforcement Learning Neural Networks" by Leibner et al., the explainability of an artificial intelligence system is an essential first step in establishing confidence in the system. Some may argue, though, that it is the only requirement for an AI system to be considered practical and applicable to real life. But is the explainability of an artificial intelligence system, without responsibility and accountability, sufficient for it to be appropriate in our daily lives and make decisions on its own?
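To make the idea concrete, here is a minimal sketch of what an explainable prediction can look like: a toy linear risk scorer whose output is decomposed into per-feature contributions, so every result comes with a statement of how each input pushed the score up or down. The feature names, weights, and bias here are invented for illustration and are not taken from any real system.

```python
# Toy example of an "explainable" prediction: the score is a weighted sum,
# so each feature's contribution (weight * value) can be reported alongside
# the result. Feature names and weights are made up for illustration.
import math

WEIGHTS = {"age": -0.04, "prior_offenses": 0.9, "employed": -0.6}
BIAS = 0.2

def predict_with_explanation(features):
    # Each contribution is weight * value; their sum plus the bias is the
    # raw score that the sigmoid turns into a probability.
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))
    return probability, contributions

prob, why = predict_with_explanation({"age": 30, "prior_offenses": 2, "employed": 1})
print(f"predicted risk: {prob:.2f}")
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
```

Real systems, especially deep neural networks, are far harder to decompose this cleanly, which is exactly why explainability is an active research area.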


When we think of software and AI systems that have inflicted damage on society, two prime examples come to mind: the Therac-25 and the COMPAS algorithm. Therac-25 was a computer-controlled machine used to treat cancer patients with radiation therapy. It was introduced in Canada in 1982 and was one of the early examples of a computerized system being used to treat patients. However, the story ended tragically when it was later revealed that the system had delivered massive radiation overdoses to patients at least six times, often leading to severe injuries and even death.


According to the commission that was later responsible for investigating the incidents, the "primary cause was attributed to general poor software design and development practices rather than single-out specific coding errors. In particular, the software was designed so that it was realistically impossible to test it in a rigorous, automated way."


Meanwhile, COMPAS is a more recent example of a failed artificial intelligence system, one used by the US criminal justice system to predict how likely a defendant was to reoffend and cause harm to society. Even though the model was highly explainable and reported the factors on which each of its decisions was based, it was later found to be heavily biased against Black defendants. Hence, notwithstanding the explainability of the system, the COMPAS algorithm was still able to inflict damage.
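What "biased" means here can itself be stated plainly. One common fairness check is to compare false positive rates across groups: how often people who did not reoffend were still flagged as high risk. The sketch below uses entirely synthetic records, not COMPAS data, and is only meant to show the shape of such an audit.

```python
# Minimal sketch of a per-group false positive rate check: among people who
# did not reoffend, how many were still flagged as high risk? All records
# below are synthetic placeholders, not COMPAS data.
from collections import defaultdict

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

flagged_no_reoffense = defaultdict(int)
no_reoffense = defaultdict(int)

for group, predicted_high, reoffended in records:
    if not reoffended:                 # only people who did not reoffend
        no_reoffense[group] += 1
        if predicted_high:             # ...but who were still flagged high risk
            flagged_no_reoffense[group] += 1

for group in sorted(no_reoffense):
    fpr = flagged_no_reoffense[group] / no_reoffense[group]
    print(f"group {group}: false positive rate = {fpr:.2f}")
```

A large gap between the groups' rates is one concrete signal that an explainable model can still be a harmful one.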


As we have seen in both of the examples above, explainability alone cannot be considered a sufficient safety threshold for real-life applications. Since the system is not a living being, it has no incentive of its own to provide responsible answers. If at least some entity is held responsible and accountable for the answers an AI system provides, then that entity has an incentive to make sure the system does not make decisions that harm society. For example, if a system is unsure about a certain decision, it should say so instead of making a decision and hoping that it turns out to be correct. Without accountability, an AI system may keep taking decisions that have negative consequences for society.
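One way to encode the "say so when unsure" behaviour is a simple abstention rule: if the model's confidence falls below a threshold, the system declines to decide and hands the case to an accountable human reviewer. The sketch below is a hypothetical illustration; the model is a stand-in stub and the threshold is an assumed policy value, not something prescribed by any real system.

```python
# A sketch of the "say so when unsure" idea: if the model's confidence is
# below a threshold, the system abstains and defers to a human reviewer
# instead of guessing. Everything here is a hypothetical stand-in.
from typing import Dict, Optional, Tuple

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value, chosen for illustration

def model_predict(case: Dict) -> Tuple[str, float]:
    # Stand-in for a real model: returns a label and a confidence score.
    return "low_risk", 0.62

def decide(case: Dict) -> Optional[str]:
    label, confidence = model_predict(case)
    if confidence < CONFIDENCE_THRESHOLD:
        # The accountable party (a human reviewer) makes the call instead.
        print(f"abstaining: confidence {confidence:.2f} is below the threshold; deferring to human review")
        return None
    return label

decision = decide({"case_id": 123})
print("automated decision:", decision)
```

The point is not the threshold itself but that some accountable party, not the model, owns the cases the model cannot answer responsibly.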


In the paper Algorithmic Accountability and Public Reason, it is discussed that holding a system accountable can often become a hurdle to an AI system's widespread adoption. Even though this slows the development of the technology and poses a significant challenge for developers, it also protects the system's consumers from unintended consequences. In the majority of cases, the consumer is not aware of the harm that a system they use (not necessarily an AI system) can cause. That is why dedicated regulatory bodies within the government make sure that the user is protected to a certain extent. If the practice of creating such a protected environment is followed in so many other fields, then why not in the field of AI systems?


To conclude, it is safe to say that explainability alone is not sufficient for an artificial intelligence system to be used widely. There must be at least some level of verification, by an accountable party, applied on top of the outputs that the AI system produces.