
From Big Data To Big Decisions: How Machine Learning Is Revolutionising Industries

The boom in machine learning (ML) has transformed the tools used across industries, compelling businesses to keep pace in an ever-evolving economy where agility and adaptation are key to survival. The global ML market, valued at approximately US$38.11 billion in 2022, is projected to reach US$771.38 billion by 2032. As SMU Professor of Computer Science Sun Jun puts it, the ubiquity of ML across sectors can be attributed to “their seemingly unlimited capacity in discovering complicated patterns in big data that can effectively solve a variety of problems”.

But the power of ML is fettered by the complexity of the model: as the demands of the task increase, the number of dials that must be twiddled to fine-tune the algorithm explodes. For instance, state-of-the-art models such as the language model behind ChatGPT have 175 billion weights to calibrate, while the weather forecast model Pangu-Weather has 256 million parameters.

To close the chasm between human understanding and the decisions made by sophisticated ML models, a simple way to quantify how difficult these models are to interpret is needed. In his paper, “Which neural network makes more explainable decisions? An approach towards measuring explainability”, Prof Sun, who is also Co-Director of the Centre for Research for Intelligent Software Engineering (RISE), introduces a functional paradigm that organisations can adopt to select the right models for their business.

Machine learning: The good and the bad

In this digital era, the vast amount of data collected from millions of individuals represents a valuable resource for companies to tap into. However, processing these huge datasets and translating them into operationally ready strategies requires technical expertise and a substantial investment of time. According to cognitive psychologist George A. Miller, the average number of objects an individual can hold in working memory (short-term memory) is about seven, a hard ceiling on how much information a human worker can juggle at once.

Overcoming this limitation of the human faculty is where ML models shine: their ability to handle big data, spot subtle patterns, and solve challenging tasks helps companies allocate resources more effectively.

“ML models and techniques are increasingly used to guide all kinds of decisions, including those business- and administration-related ones, such as predictive analytics, pricing strategies, hiring and so on,” says Prof Sun.

SMU Professor of Computer Science Sun Jun; Co-Director, Centre for Research for Intelligent Software Engineering

Commercial implementations of ML models are built around the neural network, an algorithm that mimics the architecture of the human brain. With many “neurons” woven into a vast interlinked structure, these models can quickly accumulate millions of parameters as neurons are added. The recent development of fast self-training algorithms has made cutting-edge models more accessible to businesses and firms, enabling the algorithms to be deployed in many end-user applications without requiring a comprehensive understanding of their internal logic.
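To get a feel for how quickly the numbers grow, consider a back-of-the-envelope count for a fully connected network, where every layer contributes one weight per input-output pair plus one bias per neuron. The layer sizes below are hypothetical, chosen purely for illustration:

```python
# Rough parameter count for a fully connected neural network: each layer
# adds a weight matrix (inputs x outputs) plus one bias per output neuron.
# Layer sizes are hypothetical, chosen only to show the growth.

def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases
    return total

# A modest network with 1,000 inputs, three hidden layers of 2,048
# neurons, and 10 outputs already holds over ten million parameters.
print(count_parameters([1000, 2048, 2048, 2048, 10]))  # 10463242
```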

However, some sensitive, niche applications require the decisions made by these “black box” algorithms to be justified. For example, the General Data Protection Regulation (GDPR) addresses concerns about automated processing of personal data by granting European Union citizens, under Article 22, the right to an explanation of decisions made by automated means. Similarly, if a customer is denied credit, the Equal Credit Opportunity Act (ECOA) in the United States requires creditors to provide an explanation.

Beyond legal implications, Prof Sun also illustrates the necessity of explainability in building trust and assurance between customers and businesses deploying ML algorithms: “If a user sees that the majority of the decisions can actually be explained in a language that he or she can understand, the user would have more confidence in these techniques and systems over time.”

A yardstick for explainability

For an intangible concept like explainability, designing a consistent and universal metric is not easy. On the surface, it seems impossible: explainability is subjective, varying from person to person. Prof Sun dives directly into the practical approach: “Basically, we aim to answer one question. If we are given multiple neural network models to choose from, and we have reasons to demand a certain level of explainability, how do we make the choice?”

Prof Sun and his team chose to measure the explainability of neural networks in terms of a decision tree, another common ML algorithm. In this model, the computer starts at the base of the tree and asks yes-or-no questions as it traverses its way up. The answers collected let the computer trace a path to a specific branch, which then dictates the action to be taken. The more questions required to reach a decision, the taller the tree must grow. Compared with the intrinsic complexity of a neural network, the decision tree comes much closer to how humans evaluate a situation to make a choice.
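Read as code, such a tree is simply a chain of nested yes-or-no questions, with each answer selecting the next branch until a leaf dictates the action. The questions and outcomes in this sketch are invented for illustration:

```python
# A decision tree as nested yes-or-no questions: each answer picks a
# branch until a leaf returns the decision. The questions and outcomes
# below are hypothetical.

def approve_discount(order_total: float, returning_customer: bool) -> str:
    if order_total > 100:          # question 1
        if returning_customer:     # question 2
            return "offer 15% discount"
        return "offer 5% discount"
    return "no discount"
```

The longest chain of questions here is two, so the tree stands two levels tall; a task demanding ten questions would need a far taller, and correspondingly harder to explain, tree.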

By distilling the choices made by a complicated neural network into a decision tree and measuring the height of that tree, one can gauge the explainability of an ML algorithm. For instance, an algorithm deciding whether to bring an umbrella out for the day (“Is the sky cloudy? Did it rain yesterday?”) will have a shorter decision tree than an algorithm qualifying individuals for bank loans (“What is their annual income? What is their credit rating? Do they have an existing loan?”).
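As a minimal sketch of the idea using scikit-learn (an illustrative surrogate-tree set-up, not the paper's exact procedure), one can train a decision tree to mimic a neural network's predictions and read off the tree's depth as a proxy for explainability:

```python
# Approximate a trained neural network with a decision tree that mimics
# its predictions, then use the tree's depth as an explainability proxy.
# The synthetic dataset and model sizes are placeholders.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box" whose decisions we want to characterise.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, y)

# Fit the surrogate on the network's own predictions, not the true labels.
surrogate = DecisionTreeClassifier(random_state=0).fit(X, net.predict(X))

# A shallower surrogate suggests decisions that are easier to explain.
print("surrogate tree depth:", surrogate.get_depth())
print("fidelity to the network:", surrogate.score(X, net.predict(X)))
```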

This novel paradigm for quantifying explainability closes a gap in the human-machine interface, easing the translation of state-of-the-art ML models into operational deployment in firms. “With our approach, we help business owners to choose the right neural network model,” highlights Prof Sun.

In light of these findings, the team is set to further its research into practical concerns surrounding ML models, such as trustworthiness, safety, security, and ethics. Prof Sun hopes to develop practical techniques and tools that can make an ML-empowered world a better place.

Professor Sun Jun teaches CS612 AI Safety: Evaluation and Mitigation in SMU’s Master of IT in Business (MITB) programme. The course systematically addresses the practical aspects of deploying ML models, focusing on safety and security concerns, alongside methodologies for risk assessment and mitigation.