Driving Machine Learning Solutions to Success Through Model Interpretability

A data science project’s success or failure can rest on a few key factors. Discover how to increase your chances of success.

Despite the improvements the field of data science (DS) has made in the last decade, Gartner has estimated that almost 85 percent of all data science projects fail. Further, only 4 percent of data science projects are considered ‘very successful’. Among the major drivers of data science project failure are poor data quality, a lack of technical skill or business acumen, missing deployment infrastructure and low adoption.

The last of these, model adoption by users, can “make or break” the entire project, yet it is often overlooked in project planning under the assumption that adoption will follow as long as the model helps the business. Unfortunately, the reality on the ground is not that simple. The key reasons for low adoption of data science models are a lack of trust in, and understanding of, the model’s output.

Many machine learning models operate as “black boxes”: they take a series of inputs and produce a series of outputs, whether classifications or regression estimates, but offer no insight into which input factors drove those outputs. Nor do they provide any rationale for how, in a similar future case, an undesired output could be changed to a desired one by adjusting the inputs.
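To make this concrete, here is a minimal sketch (an illustration only, not any specific Dell solution) of how the open-source SHAP library can attribute a single prediction back to its input features. The model, synthetic dataset and feature names are assumptions invented for the example.

```python
# Minimal sketch: local feature attribution for one prediction with SHAP.
# The dataset is synthetic and the feature names are illustrative assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "ticket_age_days": rng.integers(0, 90, 500),
    "prior_escalations": rng.integers(0, 5, 500),
    "response_time_hrs": rng.uniform(1, 48, 500),
})
# Synthetic target: resolution time driven mostly by response time and escalations.
y = 2.0 * X["response_time_hrs"] + 5.0 * X["prior_escalations"] + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes an individual prediction into per-feature contributions,
# answering "which inputs pushed this prediction up or down, and by how much?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # explanation for a single case

for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.2f}")
```

An explanation like this can be surfaced alongside each prediction in the product, so the end user sees not just the score but the inputs that drove it.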

Explanations of which input variables impacted the output, and in what manner, are critical for efforts to influence the underlying metrics being tracked for that product or process. The success of a data science model largely depends on how well it is adopted and used by the consumers of its outputs, as illustrated in the sketch below.
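At a global level, a similarly lightweight check is permutation importance: shuffle each input in turn and measure how much the model’s score degrades, which points to the variables worth influencing. The sketch below uses a built-in scikit-learn dataset purely for illustration; it is an assumed setup, not the method behind any particular Dell model.

```python
# Minimal sketch: ranking which inputs a model relies on via permutation importance.
# Uses scikit-learn's built-in diabetes dataset purely for illustration.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out score:
# the larger the drop, the more the model depends on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for feature, drop in ranked:
    print(f"{feature}: {drop:.3f}")
```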

Frequently, adoption fails to gain traction because end users do not understand why the model generated a given prediction. In most cases, the responsibility for identifying the drivers of a prediction falls on product owners or business analysts, who use their experience and tribal knowledge to make assumptions about the reasons behind it. This approach relies on subjectivity and human bias and may or may not align with the true underlying data patterns the model uses to make its predictions. The problem is particularly acute when the model’s predictions conflict with end users’ tribal knowledge or gut instincts.

Likewise, user trust suffers when the model produces an incorrect output. If end users can see why the model made a particular decision, the ensuing erosion of trust can be mitigated, trust can be restored, and feedback can be gathered to improve the model. Without that restoration of trust, users may gradually fall back to the old way of doing things, and the DS project fails without clear feedback to the developers about why the model was not adopted.

Adding interpretability and explanations for predictions can increase user confidence in a data science solution and drive its adoption by end users. A key learning from our work in increasing and maintaining data science adoption is that explainability and interpretability are significant factors in the success of data science solutions.

Even as machine learning solutions are touted as the next best thing for making better and quicker decisions, the human component of these systems still ultimately determines their success or failure. As advancements in artificial intelligence arrive ever faster, the solutions that incorporate this human component alongside cutting-edge algorithms will rise to the top, while those that ignore it, at their own peril, will be left behind.

Not sure where to start? A successful use case detailing how explainable AI was used in a real-world ML product at Dell can be found in a whitepaper here.

About the Author: Konark Paul

Konark Paul is a Senior Data Scientist in the Services Operations Applied Science organization. He has developed and deployed multiple data science and machine learning solutions for Services, in focus areas including case management, enterprise deployment and customer support. Konark is passionate about solving business problems in collaboration with cross-functional teams using data science and is well versed in balancing business perspectives with the technical rigor of data science. His key strengths are in building and scaling ML and AI proofs of concept into production solutions, with a focus on generating actionable tasks in addition to model predictions using interpretable machine learning concepts. His research and interest areas are Natural Language Processing / Understanding, Explainable AI, Knowledge Distillation, Applied Acoustics, and adversarial attacks and their defenses. He has published work in esteemed journals such as Neurocomputing and Applied Acoustics and at prestigious machine learning conferences such as the IEEE International Joint Conference on Neural Networks (IEEE IJCNN) and INTERSPEECH. Konark is a two-time recipient of the coveted Services President’s Award and has also filed a patent application with the US Patent Office for Dell.