Publications

Pneumonia prediction

May 20, 2022

This use case shows how the Saimple tool enables healthcare professionals to understand the origin of a clinical decision made by an artificial intelligence model.

Objective

Pneumonia is an infection affecting part of the lungs. This disease is the leading cause of death in children under 5 years of age according to the World Health Organization (WHO). In fact, approximately 1.4 million children die of pneumonia each year.

To detect this disease, doctors first perform a physical examination of patients presenting symptoms, then order a chest X-ray to make a diagnosis. However, in developing countries, the tools needed to diagnose pneumonia are scarce, and diagnoses are therefore often inaccurate.

The objective of this case study is therefore to help doctors reach a faster and more accurate diagnosis. Today, data collection is standardized, but diagnostic applications remain rare in the medical field. The goal is to develop an automated system using artificial intelligence, where Saimple, a tool developed by Numalis, allows health professionals to understand the origin of the model's clinical decision.

Description of the dataset

This case study uses a dataset from a Kaggle competition:

https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia

This dataset contains about 6,000 chest radiographs of children under 5 years old. The images are labeled according to two classes, "NORMAL" and "PNEUMONIA".

It should be noted that on a radiograph, clear (air-filled) regions appear black and opaque (dense) regions appear white.

It is important to remember that pneumonia is detected on a chest X-ray when an abnormal accumulation of fluid in the lungs is visible. To identify it, look for areas of opacity, specifically in the lung parenchyma (the functional tissue of the lung). For example, in the X-rays below, the one on the right shows fluid accumulated in one of the lungs, while the one on the left does not.


Visualization of the dataset

Proportion of classes

Before starting the analysis, it is important to visualize the proportion of images in each class and in each dataset.

The above graphs show that the "PNEUMONIA" class is over-represented in the training and test sets. This seems consistent: in reality, before a chest X-ray is performed, the patient usually already presents symptoms, which increases the probability that the patient is sick. However, to improve the training of the model, we need a balanced training set, i.e., the same number of images for each class.
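The class imbalance described above can be checked with a few lines of Python. The sketch below counts labels held in memory; the toy label list only mimics the roughly 3:1 imbalance of the training set, and reading the labels from the Kaggle folder layout is left out.

```python
from collections import Counter

def class_proportions(labels):
    """Return the share of each class in a list of image labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Toy example mimicking the imbalance of the training set
# (roughly 3 "PNEUMONIA" images for every "NORMAL" one).
labels = ["PNEUMONIA"] * 3000 + ["NORMAL"] * 1000
print(class_proportions(labels))  # {'PNEUMONIA': 0.75, 'NORMAL': 0.25}
```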

Data augmentation

In order to balance the dataset, we will use data augmentation, an effective technique for increasing the number of images in the underrepresented class. To do this, several images from the underrepresented class are selected and transformed to create new images.

The transformation can correspond to:

- Random rotation of the image,

- Vertical or horizontal resizing,

- Cropping,

- Zooming,

- Pixel filling (i.e. filling in the missing areas of a resized image).

However, in reality, the images must meet medical quality criteria:

- Symmetry,

- Penetrance,

- Centering,

- Deep inspiration and clearance of the scapulae.

Given these criteria, image rotation is excluded from the data augmentation: a rotated radiograph would no longer satisfy the symmetry and centering requirements.
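A rotation-free augmentation of the kind described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the pipeline actually used for this case study: it randomly crops the borders of a grayscale image, then fills the missing areas with black pixels to restore the original size.

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_and_fill(image, crop=8):
    """Randomly crop `crop` pixels from the borders of a grayscale
    image, then pad the missing areas with zeros (black) so the
    output keeps the original size."""
    h, w = image.shape
    top = rng.integers(0, crop + 1)
    left = rng.integers(0, crop + 1)
    cropped = image[top:h - (crop - top), left:w - (crop - left)]
    out = np.zeros_like(image)
    out[:cropped.shape[0], :cropped.shape[1]] = cropped
    return out

xray = rng.random((64, 64))  # stand-in for a radiograph
augmented = [crop_and_fill(xray) for _ in range(4)]
print(len(augmented), augmented[0].shape)  # 4 (64, 64)
```

Note that rotation is deliberately absent from the transform, in line with the medical quality criteria listed above.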

The four X-rays below are an example of data augmentation.

Indeed, from a single image, four new ones have been generated by cropping and filling the pixels of the missing areas.


Explanation of the model

For this case study, the prediction model is a convolutional neural network, a type of neural network particularly suited to image data. The model outputs the probability that an image belongs to each class ("NORMAL" or "PNEUMONIA").
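As a minimal illustration of how such a network turns an image into a class probability (the actual model's architecture and weights are not detailed in this study), the sketch below chains one convolution, a ReLU, global average pooling, and a sigmoid, in plain NumPy with arbitrary weights.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_pneumonia(img, kernel, bias):
    """Conv -> ReLU -> global average pooling -> sigmoid.
    Returns a probability for the "PNEUMONIA" class."""
    feat = np.maximum(conv2d(img, kernel), 0.0)   # ReLU
    score = feat.mean() + bias                    # global average pooling
    return 1.0 / (1.0 + np.exp(-score))           # sigmoid

rng = np.random.default_rng(0)
img = rng.random((32, 32))                # stand-in radiograph
kernel = rng.standard_normal((3, 3))      # untrained, arbitrary weights
p = predict_pneumonia(img, kernel, bias=0.0)
print(0.0 <= p <= 1.0)  # True
```

A real classifier stacks many such convolution layers and learns the kernels from the training set; the principle of producing a probability between 0 and 1 is the same.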

Both radiographs were correctly predicted: the radiograph of the healthy patient was classified as "normal" and the radiograph of the patient with pneumonia as "pneumonia". A question then arises: what features of the image allowed the model to predict correctly?

Using Saimple

Saimple is a neural network analysis tool that automatically measures and extracts robustness and explainability elements of models. In this use case, Saimple identifies the areas of the image that were important to the model's prediction, making the decision more understandable for health professionals.

Preprocessing

Before the Saimple tool can be used, the input images of the model must be resized. As can be seen below, the radiographs are resized to a standard input size, and this transformation is easy to visualize.
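The resizing step can be sketched with a simple nearest-neighbour implementation in NumPy. This is an illustrative stand-in; the study does not specify which resizing method or target size was actually used.

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a grayscale image: each output
    pixel copies the closest source pixel."""
    h, w = img.shape
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[np.ix_(rows, cols)]

xray = np.arange(100.0).reshape(10, 10)  # stand-in radiograph
small = resize_nearest(xray, 4, 4)
print(small.shape)  # (4, 4)
```

In practice, libraries such as Pillow or OpenCV offer smoother interpolation (bilinear, bicubic), but the idea of mapping every output pixel back to a source pixel is the same.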

Relevance analysis

Saimple identifies the important pixels that allowed the model to classify the image. A pixel is considered important when its "relevance" value is above average; in the visualization, the stronger the red or blue color of a pixel, the more important it is considered.

As a reminder, the relevance of an image classifier represents the influence of each pixel on the model's output for the "NORMAL" class.

In the example below, the red pixels of the relevance map correspond to the pixels of the input image that contribute positively to the predicted class.

The image above shows the model's input image and the associated relevance. The relevance is strongest on the contour of the lungs, represented by the black areas, which suggests that the model used this contour to compute its prediction. The model thus predicts that this patient's X-ray is "normal" for what may be the right reasons.

On the image representing an X-ray of a patient with pneumonia, the relevance is also concentrated on the contour of the lung. On the right side of the image, however, the relevance is less clear. The model may be trying to detect the black regions (the lungs), but the presence of fluid (and thus pneumonia) whitens these areas and makes the lung disappear from the radiograph. The algorithm then no longer detects the lung and determines that the patient has pneumonia.

The relevance produced by Saimple helps us understand how the network works. Having identified the elements of the image that drove the model's prediction, we must now ask further questions about the model's behavior.
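To give a concrete feel for pixel-importance maps, the sketch below uses occlusion sensitivity: mask a patch of the image and measure how much the model's score drops. This is a deliberately simple, empirical stand-in and is not Saimple's method, which relies on formal analysis of the network; the "model" here is a hypothetical scoring function.

```python
import numpy as np

def occlusion_map(img, score_fn, patch=4):
    """Approximate pixel importance by zeroing out patches of the
    image and recording the drop in the model's score.
    (Occlusion sensitivity -- NOT Saimple's formal analysis.)"""
    base = score_fn(img)
    h, w = img.shape
    heat = np.zeros_like(img)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - score_fn(masked)
    return heat

# Hypothetical "model": the score is the mean brightness of the
# centre region of the image.
def score_fn(img):
    return img[8:24, 8:24].mean()

img = np.ones((32, 32))
heat = occlusion_map(img, score_fn)
# Patches inside the centre region matter; border patches do not.
print(heat[12, 12] > heat[0, 0])  # True
```

Like a relevance map, the resulting heatmap highlights the regions the score depends on, but it is purely empirical, whereas Saimple derives its measures from the network itself.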

Is the model correct?

The model presumably detects that there is a problem with the lungs, but not necessarily that the problem is pneumonia rather than some other pathology: marks obstructing the lung could come from cancer, for example. One solution would be to diversify the dataset by including chest X-rays of patients with various lung diseases; the algorithm might then learn to detect and differentiate them. Saimple could monitor this experiment and help doctors verify that the detections are correct.

Conclusion

To conclude, this Saimple use case shows how artificial intelligence can assist doctors in making medical diagnoses. It highlights the ability of Saimple to visualize the elements the network judged relevant during its analysis. By highlighting these elements, the user's attention is drawn to a particular area of the image, providing additional assistance in their own decision-making. This highlighting can also be used during the design phase of the network, by comparing the analysis results with expert judgment to detect, for example, biases in the learning process.

Indeed, deep learning suffers from the "black box" effect, which prevents validating a trained model by direct analysis of its behavior. Today, models are most often validated only empirically, by measuring their performance on a validation set. Saimple brings elements of validation based on formal methods, by decomposing and highlighting the parts of the input image the network takes into account, which allows the result to be confronted with the rational analysis that designers or users may have of the situation.

Thus, implementing AI to assist doctors in their tasks is important, since this technology saves time and improves efficiency for medical staff, as illustrated here in the fight against pneumonia. Thanks to Saimple, it is possible to develop a robust and explainable detection system for this disease.


We would like to thank Mrs. Marielle NGalo, head of the radiology department at the Avicenne Hospital in Bobigny, France, for her contribution to the development of this use case.

 

If you want to know more about this use-case or if you want a demo of Saimple,
contact us : support@numalis.com

 

Authors

Written by Noëmie RODRIGUEZ & Baptiste AELBRECHT
