How to sharpen machine learning with smarter management of edge cases

Machine learning (ML) applications are transforming business strategy, popping up in every vertical and niche to convert huge datasets into valuable predictions that guide executives to make better business decisions, seize opportunities, and spot and mitigate risks.

While ML models brim with potential, it’s quality data that allows them to become accurate and effective. Today’s enterprises handle huge floods of data, including unstructured data, all of which needs annotating before ML models can produce dependable predictions.

Data processing is often under-scrutinised, but it’s crucial for accurate, relevant forecasts. If data is mislabeled or annotated incorrectly, all your predictions will be built on misconceptions, rendering them fundamentally untrustworthy. What’s worse, you might not even realise it.

There’s no way to manage this much data with manual annotation alone. Humans can’t keep pace with the scale of big data or the speed of change in streaming datasets, and algorithm creation and deployment is never a one-and-done activity.

Many models need constant validation and retraining, but enterprises can’t afford a large human workforce dedicated to manual data verification. You need automation, but contrary to wild fears about AI takeovers, you can’t remove the human element entirely either. AI can struggle to detect variations that humans spot at a glance.

Eran Shlomo, CEO of Dataloop, has a great deal to say about striking the right balance between machine autonomy and human intervention. To illustrate the limits of what we can expect of machines, Shlomo raises questions about the very nature of consciousness. “The big difference is that we expect sanity from humans, and we cannot expect sanity from AI,” he explains via email.

“But what is sanity?” Shlomo continues. “This is a very hard question, often debated in courts, and if we try to simplify it, it’s what we as society approve as acceptable – approval of which changes with time, nations, cultures and laws. In essence, every AI system should be connected to our public consensus and continuously reflect that consensus in its decisions.”

No ML algorithm, then, can exist truly independently, which is why human-in-the-loop (HITL) approaches combine human intelligence and AI to leverage the strengths of both modes of operation. Even so, enterprises want to make the most of their human evaluators and annotators.

That’s why you use AI to process the flood of data and send only the edge cases, those where the model reports low confidence, to humans for manual validation. This is exactly where the Dataloop platform differs from others in its space – by offering data management tools in the same package as the annotation interface and developer tools for routing workflows.
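To make the routing idea concrete, here is a minimal sketch of a confidence-threshold gate in Python. The 0.85 threshold, the function names, and the tuple format are illustrative assumptions, not Dataloop’s actual API.

    # Minimal sketch of confidence-based routing in a HITL pipeline.
    # The threshold and all names are illustrative, not Dataloop's API.

    CONFIDENCE_THRESHOLD = 0.85  # below this, a prediction counts as an edge case

    def route(item_id, label, confidence, auto_accepted, review_queue):
        """Auto-accept confident predictions; send edge cases to human review."""
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_accepted.append((item_id, label))
        else:
            review_queue.append((item_id, label, confidence))

    auto_accepted, review_queue = [], []
    predictions = [
        ("img_001.jpg", "pedestrian", 0.97),  # confident: no human needed
        ("img_002.jpg", "pedestrian", 0.62),  # edge case: human validates
    ]
    for item_id, label, confidence in predictions:
        route(item_id, label, confidence, auto_accepted, review_queue)

    print(auto_accepted)  # [('img_001.jpg', 'pedestrian')]
    print(review_queue)   # [('img_002.jpg', 'pedestrian', 0.62)]

The design point is simply that humans only ever see the second list, which stays small as the model improves.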

In a nutshell, here is how using HITL for edge case validation improves ML projects.

Getting models right (almost) first time

The adage of “garbage in, garbage out” is just as true today as when it was coined in the 1960s. Accurate predictions need correct data, but edge cases that are wrongly classified affect all subsequent automated data classification.

ML data annotation engines learn from the past, so mislabeled cases only lead to more mislabeled data.

If you only discover labeling mistakes after model training, or worse, after deployment, it’s already too late. You’ll have to start model training from the beginning, wasting both time and money. And if it happens more than once or twice, your data science teams will constantly be on the back foot, second-guessing data quality to avoid having to repeat the model creation process yet again.

Scaling up ML production

Production is where AI models prove their value, and as AI use spreads, businesses must be able to scale up model production to remain competitive. But as Shlomo notes, scaling production is exceedingly difficult, because this is when AI projects move from the theoretical to the practical.

“While algorithms are deterministic and expected to have known results, real world scenarios are not,” asserts Shlomo. “No matter how well we define our algorithms and rules, once our AI system starts to work with the real world, a long tail of edge cases will start exposing the definition holes in the rules – holes that translate into ambiguous interpretations of the data, leading to inconsistent modeling.”

That’s much of the reason why more than 90% of C-suite executives at leading enterprises are investing in AI, yet fewer than 15% have deployed it in widespread production. Part of what makes scaling so difficult is the sheer number of factors each model must consider. HITL enables faster, more efficient scaling because the ML model can begin with a small, specific task and then expand to more use cases and situations.

“This is the main slowness behind AI development, as every AI product has to launch and slowly reveal these issues,” says Shlomo, “leading to algorithm and data ontology changes that will manifest themselves on the next model version.”

With smart use of HITL, the ML data-processing pipeline continually learns to be more accurate, speeding up scaling, while having humans check only edge cases cuts the time spent on manual verification.
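As a rough illustration of that feedback loop, the sketch below folds human-corrected edge cases back into the training set so the next model version learns from them. Every name here is a hypothetical stand-in, not a real library call.

    # Sketch of the HITL feedback loop: low-confidence predictions get a
    # human correction, and the corrected labels flow back into training.
    # All names are hypothetical stand-ins, not a real library API.

    def human_review(item, proposed_label):
        """Stand-in for an annotator confirming or fixing a weak label."""
        return proposed_label  # in practice, a human checker supplies the truth

    def hitl_cycle(predict, unlabeled_batch, training_set, threshold=0.85):
        """Label a batch, routing edge cases through human review first."""
        for item in unlabeled_batch:
            label, confidence = predict(item)      # assumed (label, score) return
            if confidence < threshold:
                label = human_review(item, label)  # edge case: human validates
            training_set.append((item, label))     # corrected data accumulates
        return training_set                        # ready for the next retrain

    # Toy predictor: longer inputs get higher confidence.
    predict = lambda item: ("vehicle", 0.95 if len(item) > 8 else 0.50)
    print(hitl_cycle(predict, ["frame_000123", "x"], []))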

Stopping the avalanche before it begins

A small mistake in edge case annotations can escalate into a cascade of problems, ultimately causing significant damage or loss through faulty decision-making.

ML project managers might find themselves overlooking a prime business opportunity or failing to spot a data breach while it’s still small. Even worse, the rise of “smart city” initiatives and autonomous vehicles means that a slight error could cause a fatal accident.

HITL for edge cases catches these minor mislabeling incidents, preventing a crisis.

This is one of the key advantages of using a single platform for annotation and data management. When everything runs on one system, you can set confidence thresholds that ensure edge cases are automatically and immediately prioritised in your human checkers’ queues.
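One way to picture that prioritisation, assuming nothing about Dataloop’s internals: a min-heap keyed on confidence, so the least certain edge cases always surface first in the checkers’ queue. The threshold and names below are illustrative.

    import heapq

    # Sketch of threshold-based prioritisation: predictions under the
    # threshold enter a min-heap keyed on confidence, so the most
    # uncertain items pop first. Names are illustrative, not Dataloop's API.

    review_heap = []

    def enqueue_edge_case(item_id, label, confidence, threshold=0.85):
        """Queue any prediction below the threshold, most uncertain first."""
        if confidence < threshold:
            heapq.heappush(review_heap, (confidence, item_id, label))

    enqueue_edge_case("frame_0042", "cyclist", 0.41)
    enqueue_edge_case("frame_0043", "cyclist", 0.78)
    enqueue_edge_case("frame_0044", "cyclist", 0.91)  # confident: never queued

    while review_heap:
        confidence, item_id, label = heapq.heappop(review_heap)
        print(f"review {item_id}: '{label}' at {confidence:.0%} confidence")
    # frame_0042 (41%) reaches a human before frame_0043 (78%)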

Improving data and ML transparency

Transparency is becoming more important in the world of AI and ML as use cases expand, the general public becomes more aware of them, and, inevitably, as more mistakes and accidents occur.

In Shlomo’s words, “Since AI contains the biases, beliefs and judgement of its creators, every AI system is expected to establish trust with its users – trust in that the user accepts the AI’s behaviour as normal in the context of their own views and expectations.”

HITL edge case verification adds transparency, as well as giving companies and data science teams the confidence to stand by their models and predictions.

“If we add the fact that AI systems never work 100% and users expect 100% accuracy most of the time from technology products,” adds Shlomo, “every AI product is guaranteed to have a trust crisis with its users, and transparency is the first line of trust establishment.”

Your AI-powered competitive edge depends on effective edge case management

As ML adoption spreads, efficient data management and annotation become more important. A HITL data processing pipeline for edge cases combines artificial and human intelligence in the most effective manner, enabling teams to develop models with confidence, scale up production, increase transparency, and prevent the disasters that can result from small errors in model data.

Sadie Williamson

Sadie Williamson is the founder of Williamson Fintech Consulting. With over a decade in the fintech arena under her belt, she helps fintech firms to develop custom solutions targeting a variety of verticals. Her...