CC BY-NC-ND 4.0 · International Journal of Nutrology 2021; 14(02): e55-e60
DOI: 10.1055/s-0041-1734014
Review Article

Smartphone-based photo analysis for the evaluation of anemia, jaundice and COVID-19

Análise de fotos pelo smartphone para avaliação de anemia, icterícia e COVID-19
Obeedu Abubakar¹, Lucas Vinícius Domingues², et al.

1   Department of Medicine, Universidade Federal de São Carlos, São Carlos, SP, Brazil
2   Department of Computing, Universidade Federal de São Carlos, São Carlos, SP, Brazil
3   Department of General Surgery, Santa Casa de São Carlos, São Carlos, SP, Brazil
4   Department of Education, Santa Casa de São Carlos, São Carlos, SP, Brazil


Abstract

Anemia and jaundice are common health conditions that affect millions of children, adults, and the elderly worldwide. Recently, the pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that leads to COVID-19, has generated extreme worldwide concern and has had a huge impact on public health, education, and the economy, reaching all spheres of society. The development of techniques for non-invasive diagnosis and the use of mobile health (mHealth) are gaining more and more ground. The analysis of a simple photograph taken with a smartphone can allow an assessment of a person's health status. Image analysis techniques have advanced greatly in a short time; analyses that were previously done manually can now be done automatically by methods involving artificial intelligence. The use of smartphones, combined with machine learning techniques for image analysis (preprocessing, feature extraction, and classification or regression) capable of providing predictions with high sensitivity and specificity, seems to be a trend. We present in this review some highlights of the evaluation of anemia, jaundice, and COVID-19 by photo analysis, emphasizing the importance of the smartphone, machine learning algorithms, and the applications that are emerging rapidly; soon, they will certainly be a reality. These innovative methods will also encourage the incorporation of mHealth technologies into telemedicine and expand people's access to health services and early diagnosis.



Resumo

Anemia e icterícia são condições comuns e acometem milhões de crianças, adultos e idosos em todo o mundo. Recentemente, a pandemia causada pela síndrome respiratória aguda grave do coronavírus 2 (SARS-CoV-2, na sigla em inglês), o vírus que causa a COVID-19, trouxe extrema preocupação mundial, com impacto na saúde pública, educação e economia, atingindo todas as esferas da sociedade. O desenvolvimento de técnicas para diagnósticos não invasivos e o uso de saúde móvel (mHealth) estão ganhando cada vez mais espaço. A análise de uma simples fotografia pelo smartphone pode permitir a avaliação do estado de saúde de uma pessoa. As técnicas de análise de imagens avançaram muito em pouco tempo. Análises que antes eram feitas manualmente agora podem ser feitas de forma automatizada e com métodos de análise envolvendo inteligência artificial. Parece haver uma tendência na utilização de smartphone combinada com técnicas de aprendizado de máquina para análise de imagens (pré-processamento, extração de características e classificações) capazes de fornecer predições com altas sensibilidade e especificidade. O que está sendo apresentado nesta revisão são alguns destaques da avaliação de anemia, icterícia e COVID-19 por análise de foto, enfatizando a importância do uso do smartphone, dos algoritmos de aprendizado de máquina e dos aplicativos que estão surgindo como aposta e certamente serão uma realidade em breve, incorporando as tecnologias de mHealth na telemedicina e ampliando o acesso das pessoas aos serviços de saúde e ao diagnóstico precoce.



Introduction

The search for non-invasive technologies for diagnosis has expanded research in this area. Point-of-care testing (POCT) is in widespread use and is an important low-cost tool that allows screening in underserved regions.[1] The use of telemedicine and the development of health applications for smartphones are growing, which is allowing mobile health (mHealth) to gain global recognition.[2] [3] The coronavirus disease 2019 (COVID-19) pandemic provides even more reasons to develop techniques that do not require interpersonal contact yet maintain effective health monitoring. Healthcare alternatives such as telemedicine and mHealth applications make it possible to maintain the necessary social distance through remote monitoring.[4]

Image analysis techniques have made significant advances in a short time. Analyses that were previously done manually can now be performed automatically with analytical methods involving artificial intelligence. Machine learning algorithms are increasingly being applied to predict and classify eye diseases automatically from images of the eye.[5] [6] Skin diseases such as melanoma, eczema, and psoriasis can also be classified with high precision from photographs using machine learning.[7] Here, we present some highlights of the evaluation of anemia, jaundice, and COVID-19 by photo analysis, emphasizing the importance of using the smartphone and machine learning algorithms.



Anemia

Anemia is a health condition characterized by a decrease in serum hemoglobin levels. A physical examination allows a subjective assessment of decreased hemoglobin through the observation of the skin, eyelid conjunctiva, oral mucosa, nails, and palms: pallor of the skin, conjunctiva, nail beds, lips, oral mucosa, and palmar creases is a sign of anemia that can be found during the physical examination.[8] [9] However, these observations are non-specific and inaccurate methods of assessing anemia, because the quality of the findings depends on the clinician's training, and they only permit the detection of a marked decrease in hemoglobin. The smartphone has been used for non-invasive assessment of hemoglobin levels: to provide quantitative results, the smartphone's camera can be used to take pictures of these regions for further analysis.[10]

Jain et al.[11] used an artificial neural network to detect anemia from photos of the palpebral conjunctiva of 48 anemic patients and 51 non-anemic patients. When few data samples are available, it is necessary to implement data augmentation strategies to produce new artificial data from the original samples. Neural networks depend on large amounts of data to prevent overfitting (a model that memorizes instead of learning, fails to generalize during training, and shows increased classification errors). Data augmentation techniques include geometric transformations (rotation, flipping, and reflection), mixing of images, random erasing, adversarial training, meta-learning, and test-time augmentation, among others. These techniques increase the size and quality of the dataset, reduce training failures, and improve deep learning.[12] In the study by Jain et al., image augmentation with mirroring, rotation, and translation was therefore applied, because the researchers had very few original pictures (99 images). With this procedure, 3,103 images were generated (1,425 anemia images and 1,678 non-anemia images). A standard camera was used to capture the original images, which were segmented so that the region of interest (ROI) could be selected. The ROI was separated into its red, green, and blue (RGB) components, and the features were generated by calculating the mean intensities of the red and green components. The neural network consisted of three layers of interconnected neurons that allow the network to learn patterns: an input layer (red and green components), a hidden layer, and an output layer. This model achieved precision of 97.00%, sensitivity of 99.21%, and specificity of 95.42% on the created dataset.[11]
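As a rough illustration of the augmentation and feature-extraction steps described above, the sketch below (in Python, with Pillow and NumPy) mirrors, rotates, and translates a conjunctiva photograph and computes the mean red and green intensities of an ROI. The angles, offsets, and function names are illustrative assumptions, not the code used by Jain et al.

```python
# Hedged sketch: mirroring/rotation/translation augmentation plus mean red/green
# feature extraction. Parameters are illustrative, not those of Jain et al.
import numpy as np
from PIL import Image, ImageOps


def augment(img: Image.Image) -> list:
    """Return mirrored, rotated, and translated variants of one conjunctiva photo."""
    variants = [img, ImageOps.mirror(img)]                  # horizontal mirroring
    variants += [img.rotate(angle) for angle in (-10, 10)]  # small rotations
    variants.append(img.rotate(0, translate=(15, 10)))      # pure translation (15 px right, 10 px down)
    return variants


def red_green_features(roi: Image.Image) -> tuple:
    """Mean intensities of the red and green channels of an already-cropped ROI."""
    arr = np.asarray(roi.convert("RGB"), dtype=float)
    return arr[..., 0].mean(), arr[..., 1].mean()           # channel 0 = red, 1 = green
```

The two mean intensities returned by such a step would then serve as the inputs to the three-layer network described above.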

A smartphone application (Selfienemia) was developed to estimate hemoglobin levels under controlled lighting conditions. After the photo is captured and processed, a colorimetric analysis is performed using a mathematical model hosted on a smartphone cloud service. In this prototype, a custom camera external to the app was used for better control of lighting conditions. Sixty-four tongue images and 64 conjunctiva images were obtained. The results from the application were compared with the conventional complete blood count (CBC), which is considered the gold standard test for the diagnosis of anemia. The evaluation of the tongue images yielded 91.89% sensitivity and 85.18% specificity, with R² = 0.69; the analysis of the palpebral conjunctiva yielded 91.89% sensitivity and 70.34% specificity, with R² = 0.57.[13]
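To illustrate how an app's hemoglobin estimates can be checked against the CBC gold standard, a minimal sketch follows; the 11 g/dL anemia cut-off and the small arrays are placeholders, not the Selfienemia data.

```python
# Sketch: compare app-estimated hemoglobin with CBC reference values.
# The cut-off (11 g/dL) and the arrays below are illustrative placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, r2_score

hb_cbc = np.array([9.2, 12.8, 10.5, 13.1, 11.9, 8.7])   # gold-standard CBC (g/dL)
hb_app = np.array([9.8, 12.1, 10.9, 13.4, 11.2, 9.5])   # app predictions (g/dL)

cutoff = 11.0
anemic_true = hb_cbc < cutoff
anemic_pred = hb_app < cutoff

tn, fp, fn, tp = confusion_matrix(anemic_true, anemic_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"R2={r2_score(hb_cbc, hb_app):.2f}")
```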

Kasiviswanathan et al.[14] developed the Chromanalysis app, which predicts hemoglobin levels from a photograph of an individual's eyelid conjunctiva. A total of 1,212 images from 135 participants were obtained; of these participants, 44 were anemic (Hb < 11 g/dL) and 91 were non-anemic (Hb ≥ 11 g/dL). Different smartphones with distinct image resolutions were used to provide more reliable training and to prevent the analysis from depending entirely on the quality of the camera or on specific lighting conditions. The Chromanalysis app is divided into two modules: front end and back end. The front end is the mobile application that interacts directly with the user, captures images and details of the individuals (age, height, weight, etc.), and stores the data in a cloud service. The back end, written in the Python programming language, works behind the scenes of the application: it processes the data, uses the ridge regression algorithm to make the prediction, and subsequently sends the result back to the mobile application (front end). Such cloud computing services are increasingly being used, as they allow data verification and storage as well as an interface between the front-end and back-end components.[14] [15] A total of 1,115 images were used for ridge regression model training, and 87 pictures for testing. By comparing the predicted values with the measured hemoglobin values, a mean absolute error (MAE) of 0.99, a mean squared error (MSE) of 1.66, a root mean squared error (RMSE) of 1.29, and a Pearson correlation coefficient of 0.7222 were observed for training. For testing, the MAE was 1.34, the MSE was 2.97, the RMSE was 1.72, and the Pearson correlation coefficient was 0.705.[14]
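The back-end workflow described above (color features from the conjunctiva regressed onto hemoglobin with ridge regression, then scored with MAE, MSE, RMSE, and Pearson correlation) can be sketched with scikit-learn as follows. The feature construction, the synthetic data, and the regularization strength are assumptions; only the evaluation metrics follow the study's reporting.

```python
# Ridge-regression sketch for hemoglobin prediction from conjunctiva color features.
# The features, data, and alpha are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((200, 3))                                # e.g. mean R, G, B of the conjunctival ROI
hb = 14 - 6 * X[:, 0] + rng.normal(0, 0.5, 200)         # synthetic hemoglobin values (g/dL)

X_train, X_test = X[:170], X[170:]
y_train, y_test = hb[:170], hb[170:]

model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
mse = mean_squared_error(y_test, pred)
r, _ = pearsonr(y_test, pred)
print(f"MAE={mae:.2f}  MSE={mse:.2f}  RMSE={np.sqrt(mse):.2f}  Pearson r={r:.3f}")
```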

Mannino et al.[16] presented an intuitive application capable of evaluating anemia from photographs of fingernails. The patient takes a picture of their fingernails, and the quantitative result for hemoglobin levels is displayed on the smartphone's screen. To develop the application, photos of the fingernails of 337 individuals were obtained and divided into 2 groups, of 237 (discovery/training group) and 100 (test group) individuals, all of whom also underwent CBC to cross-check the analysis. The images were processed on a computer using the MATLAB software (Mathworks, Natick, MA). Irregularities and interferences were filtered out, ROIs were selected, and the color intensity was analyzed. To relate the data obtained from the images to the hemoglobin levels measured by CBC for each patient, a robust multilinear regression algorithm with bisquare weighting was used. The correlation (r) between the CBC hemoglobin result and the one generated by the algorithm for the 100 test individuals was r = 0.82, with sensitivity of 92% and specificity of 76%. The method was accurate when the cut-off point for hemoglobin levels was < 11 g/dL, with a performance, measured by the area under the curve (AUC), of 0.88. When the cut-off was < 12.5 g/dL (the World Health Organization's mean hemoglobin cut-off for anemia), a sensitivity of 97% was reached. Furthermore, as the algorithm for image analysis is independent of the device, this investigation can be performed remotely and the results can be sent to computers or cloud-based services. This makes it a potential method to be implemented in telemedicine and in low-income countries to promote non-invasive screening for anemia.
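Mannino et al. performed the fit in MATLAB; in Python, an analogous robust multilinear regression with bisquare (Tukey biweight) weighting can be sketched with statsmodels, as below. The color features and data are synthetic placeholders, not the authors' measurements.

```python
# Robust multilinear regression sketch with bisquare (Tukey biweight) weighting,
# analogous to the robust fit described by Mannino et al.; data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
color_features = rng.random((100, 2))                           # e.g. nail-bed color intensities
hb = 15 - 5 * color_features[:, 0] + rng.normal(0, 0.4, 100)    # synthetic hemoglobin (g/dL)

X = sm.add_constant(color_features)                             # add intercept term
robust_model = sm.RLM(hb, X, M=sm.robust.norms.TukeyBiweight()).fit()
hb_pred = robust_model.predict(X)

r = np.corrcoef(hb, hb_pred)[0, 1]                              # correlation with reference values
print(f"r = {r:.2f}")
```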



Jaundice

Jaundice is a condition characterized by yellowish pigmentation of the skin, mucous membranes, and sclera, due to an increase in total serum bilirubin (TSB) levels, usually above 3 mg/dL. It can affect adults and newborns and can be caused by physiological processes (e.g., neonatal jaundice) or pathological problems (prehepatic, intrahepatic, or posthepatic causes).[17] In adults, total bilirubin levels between 0.2 and 1.2 mg/dL are considered to be within the normal range. During a physical examination, observation of the sclera may reveal primary signs of jaundice, but the evaluation is subjective and prone to error by the examining clinician.[18]

To create a prototype for use in remote locations with few resources, a non-invasive method was developed to determine serum bilirubin levels from pictures of the sclera: the more intense the yellowish color of the sclera, the higher the bilirubin level. Handmade goggles were created to block ambient light and to highlight the ROI. After total bilirubin was measured in 13 patients with liver diseases and 12 healthy individuals, photographs of the sclera were taken with a webcam and with a smartphone camera. This was followed by segmentation steps (noise removal and ROI selection) and feature extraction (i.e., selection of 50 random pixels from each segmented sclera and analysis of the brightness and intensity of the pixels using the histogram). The extracted data were used to train different regression-based machine learning models to predict the bilirubin level. The random forest algorithm presented the best result, with the lowest absolute error (0.29), mean square error (0.12), and root mean square error (0.35). The other regression algorithms showed higher errors: decision tree (0.74, 1.75, and 1.32 for absolute error, mean square error, and root mean square error, respectively), linear regression (0.63, 0.65, and 0.81), and k-nearest neighbors (k-NN) (0.53, 0.58, and 0.76). The results were promising (sensitivity = 90% and specificity = 82%), revealing that this method can be a non-invasive alternative for determining bilirubin levels and can support the development of smartphone applications based on this technique.[18]
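A minimal sketch of the regression comparison described above is shown below: pixel-based features from the segmented sclera are fed to the four regressors and compared by their errors. The features, synthetic data, and hyperparameters are assumptions, not the study's configuration.

```python
# Sketch: compare regression algorithms for bilirubin prediction from scleral
# pixel features; data and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.random((80, 4))                                  # e.g. brightness/intensity histogram features
bilirubin = 1 + 8 * X[:, 0] + rng.normal(0, 0.3, 80)     # synthetic TSB values (mg/dL)

X_train, X_test = X[:60], X[60:]
y_train, y_test = bilirubin[:60], bilirubin[60:]

models = {
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "decision tree": DecisionTreeRegressor(random_state=0),
    "linear regression": LinearRegression(),
    "k-NN": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    mse = mean_squared_error(y_test, pred)
    print(f"{name}: MAE={mae:.2f} MSE={mse:.2f} RMSE={np.sqrt(mse):.2f}")
```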

Smartphone apps are being developed to evaluate skin and sclera coloration, aiming to make accurate predictions of jaundice. One such application was developed by Padidar et al.[19] to process pictures of the skin taken from the foreheads of 113 newborns, analyzing the RGB components of the images together with a calibration card. When bilirubin levels were lower than 10 mg/dL, test sensitivity was 68% and specificity was 92.3%; when bilirubin levels were lower than 15 mg/dL, sensitivity and specificity reached 82.1% and 100%, respectively.
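A common way to use such a calibration card, sketched below, is to rescale each RGB channel so that the photographed card matches its known reference color before the skin color is analyzed. The reference values and the function are assumptions for illustration, not the method of Padidar et al.

```python
# Sketch: white-balance a skin photo against a calibration card of known color.
# Reference values and inputs are illustrative assumptions.
import numpy as np


def calibrate(photo: np.ndarray, card_region: np.ndarray,
              card_reference=(200.0, 200.0, 200.0)) -> np.ndarray:
    """Scale each RGB channel so the imaged card matches its known reference color.

    photo: HxWx3 uint8 image; card_region: pixels of the card cropped from the photo.
    """
    measured = card_region.reshape(-1, 3).mean(axis=0)   # mean RGB of the card pixels
    gains = np.asarray(card_reference) / measured        # per-channel correction factors
    return np.clip(photo * gains, 0, 255).astype(np.uint8)
```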

Outlaw et al.[20] developed the neoSCB (neonatal scleral-conjunctival bilirubinometer) application to analyze photos of the sclera taken by the front camera with and without flash, so that ambient light can be subtracted. Photo processing and data analysis took into account the scleral ROIs and the median of the RGB values, using the MATLAB R2018a software (Mathworks, Inc., Natick, MA, USA). The pictures may present interferences such as blood capillaries and eyelashes; therefore, the median was used rather than the mean, to decrease the impact of these components on the results. In that study, positive results were achieved with scleral images from 51 neonates: the test was comparable to transcutaneous bilirubinometry and presented good accuracy, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.86, sensitivity of 100%, and specificity of 61% in newborns with TSB > 14.62 mg/dL. In newborns with TSB higher than 11.99 mg/dL, the results were 92% sensitivity, 67% specificity, and an AUC of 0.85.
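The ambient-light subtraction and median-RGB step described for neoSCB can be sketched as below; the ROI mask and image pair are assumed inputs, and this is a NumPy illustration, not the authors' MATLAB code.

```python
# Sketch of neoSCB-style processing: subtract the no-flash frame from the flash frame
# to remove ambient light, then take the median RGB over the scleral ROI (the median,
# unlike the mean, resists outliers from capillaries and eyelashes).
import numpy as np


def scleral_median_rgb(flash_img: np.ndarray, noflash_img: np.ndarray,
                       roi_mask: np.ndarray) -> np.ndarray:
    """flash_img/noflash_img: HxWx3 arrays; roi_mask: HxW boolean mask of the sclera."""
    flash_only = flash_img.astype(float) - noflash_img.astype(float)  # remove ambient light
    flash_only = np.clip(flash_only, 0, None)
    roi_pixels = flash_only[roi_mask]        # N x 3 array of scleral pixels
    return np.median(roi_pixels, axis=0)     # median R, G, B of the sclera
```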



COVID-19

Several strategies are being developed to allow non-invasive diagnosis and assessment focused on the evolution of COVID-19. Orofacial manifestations provide important information regarding the presence and progression of the disease.[21] [22] [23] Among the main observations are ulcerative/aphthous-like lesions, vesiculobullous/macular lesions, oral dryness, atrophy of the surface of the tongue, fissures in the dorsum of the tongue, and parotitis, which can present as the first signs of COVID-19.[22] [23]

In a published case report, photos of the oral mucosa of a patient diagnosed with COVID-19 revealed white plaques at the back of the tongue, small yellowish ulcers, an elevated yellow-white halo (geographic tongue), a nodule on the lower lip, and a slightly erythematous tonsil. The authors reinforce that oral signs such as petechiae and other types of ulceration can appear in patients with COVID-19. There has been discussion about whether the inflammatory process is a first sign of COVID-19, since the salivary glands and the tongue's mucosa serve as reservoirs for the virus,[23] or whether the findings are secondary manifestations of systemic conditions, such as recurrent oral herpes simplex virus (HSV-1) infections and oral candidiasis.[21]

In one study, 109 images of the tongue were used; of these, 13 were from patients diagnosed with COVID-19. The images were obtained with a smartphone and preprocessed (i.e., resizing, conversion to grayscale, and noise removal). They then went through a segmentation process involving graph-based segmentation (GrabCut), which groups and labels similar pixels with a foreground/background algorithm that separates the background from the region of interest. After segmentation, features including geometric properties, color, and texture were extracted and classified. A healthy tongue is indicated by a reddish color and a smooth texture, whereas a whitish coloring or a texture with white spots may indicate the presence of COVID-19. For training, 70% of the data (76 images) were used, and for testing, the remaining 30% (33 images). An accuracy of 60% was achieved by fitting a logistic regression model, considering a whitish tongue as the standard observation for COVID-19.[24]
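A hedged sketch of such a pipeline is shown below: GrabCut foreground segmentation of the tongue with OpenCV, followed by simple color features and a logistic regression classifier with a 70/30 split. The bounding rectangle, the features, and the placeholder data are illustrative assumptions, not the study's implementation.

```python
# Sketch: GrabCut segmentation of the tongue followed by logistic-regression
# classification; rectangle, features, and placeholder data are assumptions.
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def tongue_features(image_bgr: np.ndarray, rect=(50, 50, 300, 300)) -> np.ndarray:
    """Segment the tongue with GrabCut and return the mean BGR color of the foreground."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    foreground = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return image_bgr[foreground].mean(axis=0)    # mean B, G, R of the segmented tongue


# Feature matrix X (one row per tongue image) and labels y (1 = COVID-19) would come
# from the segmentation step above; random placeholders stand in here for the 70/30 split.
X, y = np.random.rand(109, 3), np.random.randint(0, 2, 109)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy = {clf.score(X_test, y_test):.2f}")
```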

[Fig. 1] represents, in a simplified way, the processes involved in image analysis for predictions of anemia, jaundice, and COVID-19.

Fig. 1 A schematic representation of the steps used in smartphone-based photo analysis for evaluating anemia, jaundice, and COVID-19.


Final Considerations

Advances in non-invasive disease diagnosis are notable. There seems to be a trend toward the use of smartphones combined with machine learning techniques for analyzing photos, capable of making predictions with high sensitivity and specificity. We presented in the current review some highlights of photo analysis for the evaluation of anemia, jaundice, and COVID-19, emphasizing the importance of using the smartphone and machine learning algorithms for preprocessing, feature extraction, and prediction. Smartphone apps are emerging and incorporating mHealth technologies into telemedicine quite rapidly. This development is expanding people's access to health services and early diagnosis.



Conflict of Interests

The authors declare that there is no conflict of interests.


Address for correspondence

Dr. Thiago Mazzu-Nascimento, PhD
Department of Medicine, Universidade Federal de São Carlos
São Carlos, SP
Brazil   

Publication History

Received: 02 March 2021

Accepted: 25 May 2021

Article published online:
23 September 2021

© 2021. Associação Brasileira de Nutrologia. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Revinter Publicações Ltda.
Rua do Matoso 170, Rio de Janeiro, RJ, CEP 20270-135, Brazil

