Overcoming Data Scarcity via Few-shot Medical Image Classification Using Meta-Learning

It has often been observed that the English language is essentially a long-tail dataset: a distribution with a small set of dominant data points followed by a long stretch of increasingly rare ones.

The image below shows a typical long-tail graph, with the green section representing the dominant data points and the yellow representing the long tail. 

Image courtesy Hay Kranen / PD.

In the English language’s case, dominant words (the words you’d find in the green section) include “the” and “of”. The long tail (yellow) section contains words used much less frequently, such as “would” and “so”.
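
To make the long tail concrete, here's a minimal Python sketch that ranks word counts in a toy corpus; the corpus and its counts are made up for illustration, not measured English frequencies.

```python
from collections import Counter

# Toy corpus -- made up for illustration, not measured English frequencies.
corpus = (
    "the cat sat on the mat and the dog sat by the door "
    "so the cat would not leave the mat"
).split()

counts = Counter(corpus)

# Rank words by frequency: the head of the distribution comes first,
# the long tail of rare words last.
for rank, (word, count) in enumerate(
        sorted(counts.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank:>2}  {word:<6} {count}")
```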

Data scarcity and machine learning

The same long-tail principle often applies to machine learning (ML) datasets: many contain plenty of good data for a few subtopics but scarce data for many others within the same dataset.

This makes building models for specific edge cases very challenging because there isn’t enough reliable data within that long tail of data points.

But long-tail datasets aren’t the only data scarcity challenge for ML engineers and data scientists.

After all, most ML and deep learning models are data-hungry by nature. They must ingest large amounts of data to accurately train their algorithms and remove bias. That in itself, however, can be an issue for a few reasons:

  • It can be expensive. In most cases, especially with image data, the data must be annotated by an expert before it can be used to train a model, and expert annotation from most providers is costly. (CapeStart’s image and data annotation services are typically more affordable than most providers.)
  • In some cases, it can be impossible to find enough data. Many obscure topics, including those concerning rare diseases, don’t have enough data or radiological images to train models accurately. Other factors contributing to data scarcity around a topic include privacy issues and the time-consuming nature of data annotation.

For these and other reasons, few-shot learning (FSL) and classification using meta-learning have received plenty of attention in the research community as a potential solution to this problem.

What is few-shot learning?

Few-shot learning, also known as low-shot learning (LSL), is an ML method that requires far less training data than usual. Instead of feeding the model unending amounts of data, FSL algorithms can produce accurate results from only a handful of labeled examples per class.
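
In practice, few-shot classification is usually framed as “N-way, K-shot” learning: each training episode gives the model N classes with only K labeled examples per class (the support set) and evaluates it on held-out queries from those same classes. Here's a minimal sketch of episode sampling; the function name and the list-of-(image, label)-pairs dataset format are illustrative assumptions, not a specific library's API.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, queries_per_class=5):
    """Build one N-way, K-shot episode from (image, label) pairs.

    Assumes every sampled class has at least k_shot + queries_per_class images.
    """
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)

    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        images = random.sample(by_class[cls], k_shot + queries_per_class)
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    return support, query  # adapt on support, evaluate on query
```

Meta-training repeats this over many episodes so the model learns a procedure for adapting to new classes rather than memorizing any fixed label set.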

Although the technology is still relatively new, it’s being experimented with in applications such as computer vision, natural language processing (NLP), and audio processing. It has shown great promise in potentially reducing the turnaround time and cost of developing machine learning applications – especially for topics with little training data available.

Various FSL approaches have been tried by researchers, including transfer learning, meta-learning, and hybrid models combining multiple approaches. But it’s meta-learning that has, so far, proven the most valuable – and the most popular.

What is meta-learning?

ML experts Sebastian Thrun and Lorien Pratt said in 1998 that an algorithm is learning “if its performance at the task improves with experience.”

The pair also said an algorithm is learning to learn if “its performance at each task improves with experience and with the number of tasks.”

The latter statement is an amazingly apt description of a meta-learning algorithm, considering it came nearly 25 years ago. Instead of just learning how to do one thing, meta-learning algorithms learn to learn – and like most ML models, the more they learn, the better they get.

Two popular meta-learning solutions are metric learning, such as the Matching Networks model, and model-agnostic meta-learning (MAML). 
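
MAML trains a weight initialization that can be fine-tuned to a new task in just a few gradient steps: an inner loop adapts a copy of the weights on each task's support set, and an outer loop updates the shared initialization based on query-set performance. The sketch below uses the simpler first-order approximation (FOMAML) in PyTorch; the helper's name and the task-tuple format are illustrative assumptions, not a canonical implementation.

```python
import torch

def fomaml_step(model, loss_fn, tasks, inner_lr=0.01, outer_lr=0.001):
    """One first-order MAML meta-update.

    `tasks` is a list of (support_x, support_y, query_x, query_y) tensors.
    Full MAML would also backpropagate through the inner update itself.
    """
    initial = [p.detach().clone() for p in model.parameters()]
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]

    for sx, sy, qx, qy in tasks:
        # Inner loop: one gradient step on the task's support set.
        loss = loss_fn(model(sx), sy)
        grads = torch.autograd.grad(loss, model.parameters())
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p -= inner_lr * g

        # Outer loss: evaluate the adapted weights on the query set.
        query_loss = loss_fn(model(qx), qy)
        for mg, g in zip(meta_grads,
                         torch.autograd.grad(query_loss, model.parameters())):
            mg += g

        # Restore the shared initialization before the next task.
        with torch.no_grad():
            for p, p0 in zip(model.parameters(), initial):
                p.copy_(p0)

    # Move the initialization along the averaged query gradient.
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= outer_lr * mg / len(tasks)
```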

Matching Networks, a popular metric-learning model, has been shown to improve one-shot learning accuracy by seven percent on the Omniglot dataset and by more than five percent on the ImageNet image database, compared with other approaches.
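
The core idea behind Matching Networks, and metric learning generally, is to classify a query image by comparing its embedding with the embeddings of the few labeled support images, rather than learning fixed class weights. Below is a minimal sketch of the attention-based classification step, assuming `embed` is some trained feature-extraction network (not specified here).

```python
import torch
import torch.nn.functional as F

def matching_predict(embed, support_x, support_y, query_x, n_classes):
    """Label queries by cosine-similarity attention over the support set."""
    s = F.normalize(embed(support_x), dim=-1)  # (n_support, d)
    q = F.normalize(embed(query_x), dim=-1)    # (n_query, d)

    # Attention weights: softmax over each query's similarity
    # to every support example.
    attention = F.softmax(q @ s.t(), dim=-1)   # (n_query, n_support)

    # Soft vote: accumulate attention mass per class label.
    one_hot = F.one_hot(support_y, n_classes).float()
    class_probs = attention @ one_hot          # (n_query, n_classes)
    return class_probs.argmax(dim=-1)
```

Because classification reduces to a similarity comparison, the same trained embedding can handle classes never seen during training, which is exactly what one-shot settings demand.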

Few-shot learning in action

Two recent examples of FSL in action include MetaMed and Meta-DermDiagnosis. We’ll explain each in more detail below.

MetaMed (Singh et al., 2021)

MetaMed is designed to tackle long-tailed data distributions, specifically for rare diseases where, as we mentioned earlier, there are often not enough medical images to train traditional ML or deep learning algorithms.

MetaMed was validated on three medical datasets – Pap smear, BreakHis, and ISIC 2018 – and used image augmentation techniques such as CutOut, MixUp, and CutMix to mitigate overfitting. The approach achieved 70 percent accuracy and outperformed transfer learning.
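
MixUp, one of the augmentations mentioned above, blends pairs of training images and their labels to synthesize new examples, which helps when each class has very few images. Here's a minimal sketch; the alpha=0.2 default is a common choice in the MixUp literature, not a value reported in the MetaMed paper.

```python
import numpy as np
import torch

def mixup(x, y_one_hot, alpha=0.2):
    """Blend a batch of images with a shuffled copy of itself (MixUp).

    x: (batch, ...) image tensor; y_one_hot: (batch, n_classes) labels.
    alpha=0.2 is a common default, not a value from the MetaMed paper.
    """
    lam = np.random.beta(alpha, alpha)   # blending coefficient in [0, 1]
    perm = torch.randperm(x.size(0))     # random pairing within the batch
    mixed_x = lam * x + (1 - lam) * x[perm]
    mixed_y = lam * y_one_hot + (1 - lam) * y_one_hot[perm]
    return mixed_x, mixed_y
```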

Meta-DermDiagnosis (Mahajan et al., 2020)

This approach uses meta-learning-based few-shot techniques, including Reptile and Prototypical Networks, to identify rare skin diseases from medical images. As the authors note, dermatology is a branch of medicine with many long-tail data distributions, including several obscure or newly described skin conditions, and annotating dermatological images is often very challenging, even for experts.

The approach performed better than pre-trained models on the ISIC 2018 Skin Lesion dataset and Derm7pt, and improved further when paired with group-equivariant convolutions (G-convolutions), showing consistently better AUC and accuracy scores than the alternatives.
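
Reptile, one of the meta-learners Meta-DermDiagnosis builds on, avoids MAML's higher-order gradients entirely: it fine-tunes a copy of the model on one task with ordinary SGD, then moves the shared initialization a small step toward the adapted weights. A minimal PyTorch sketch follows; the learning rates and function name are illustrative, not the paper's settings.

```python
import copy
import torch

def reptile_step(model, loss_fn, task_batches, inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-update: adapt a clone on a task, then move toward it."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)

    # Inner loop: plain SGD on the task's batches of (images, labels).
    for x, y in task_batches:
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()

    # Meta-update: nudge the shared initialization toward the adapted weights.
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p += meta_lr * (p_adapted - p)
```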

Scale your efficiency and innovation with CapeStart’s end-to-end data annotation, ML, NLP, and software development for the healthcare industry, including ML-aided systematic reviews and clinical evaluation reports. Contact us today to set up a brief discovery call with our experts.

Contact Us.