
Exploring Automatic Diagnosis of COVID-19 from Crowdsourced Respiratory Sound Data


Abstract: Audio signals generated by the human body (e.g., sighs, breathing, heart, digestion, vibration sounds) have routinely been used by clinicians as indicators to diagnose disease or assess disease progression. Until recently, such signals were usually collected through manual auscultation at scheduled visits. Research has now started to use digital technology to gather bodily sounds (e.g., from digital stethoscopes) for cardiovascular or respiratory examination, which could then be used for automatic analysis. Some initial work shows promise in detecting diagnostic signals of COVID-19 from voice and coughs. In this paper we describe our data analysis over a large-scale crowdsourced dataset of respiratory sounds collected to aid diagnosis of COVID-19. We use coughs and breathing to understand how discernible COVID-19 sounds are from those in asthma or healthy controls. Our results show that even a simple binary machine learning classifier is able to correctly classify healthy and COVID-19 sounds. We also show how we distinguish users who tested positive for COVID-19 and have a cough from healthy users with a cough, and from users with asthma and a cough. Our models achieve an AUC above 80% across all tasks. These results are preliminary and only scratch the surface of the potential of this type of data and audio-based machine learning. This work opens the door to further investigation of how automatically analysed respiratory patterns could be used as pre-screening signals to aid COVID-19 diagnosis.
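To make the abstract's claim concrete, the sketch below shows one common way such a "simple binary classifier" pipeline can be set up: summarising each recording with MFCC audio features and fitting a logistic regression, then scoring with ROC-AUC. This is an illustration only, not the authors' published pipeline; the feature choice, classifier, and file names (cough_covid_001.wav, cough_healthy_001.wav) are hypothetical placeholders.

    # Illustrative sketch only: not the exact pipeline used in the paper.
    # It demonstrates the kind of binary audio classification the abstract
    # describes (COVID-19 vs. healthy coughs), evaluated with ROC-AUC.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    def extract_features(path):
        """Summarise one recording as the mean/std of its MFCC coefficients."""
        y, sr = librosa.load(path, sr=22050)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical crowdsourced recordings: 1 = tested positive, 0 = healthy control.
    samples = [("cough_covid_001.wav", 1), ("cough_healthy_001.wav", 0)]  # ... more files
    X = np.array([extract_features(p) for p, _ in samples])
    y = np.array([label for _, label in samples])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"ROC-AUC: {auc:.2f}")  # the paper reports AUC above 80% on its tasks

The same scaffold applies to the paper's other tasks (COVID-positive cough vs. healthy cough, COVID-positive cough vs. asthma cough) by changing which labelled groups are included in the sample list.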
