
ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping



Abstract: Feature attribution (FA), or the assignment of class-relevance to different locations in an image, is important for many classification problems but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviours, or disease, require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging as phenotypes are typically heterogeneous, and changes occur against a background of significant natural variation. Here, we present a novel framework for creating class-specific FA maps through image-to-image translation. We propose the use of a VAE-GAN to explicitly disentangle class relevance from background features for improved interpretability properties, which results in meaningful FA maps. We validate our method on 2D and 3D brain image datasets of dementia (ADNI dataset), ageing (UK Biobank), and (simulated) lesion detection. We show that FA maps generated by our method outperform baseline FA methods when validated against ground truth. More significantly, our approach is the first to use latent space sampling to support exploration of phenotype variation. Our code will be available online at this https URL.
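To make the abstract's two ideas concrete, the sketch below shows a heavily simplified version of the approach: an encoder splits an image into a "background" code and a class-relevant "attribute" code; decoding with the attribute code pushed toward a target class yields a counterfactual translation, and the absolute difference from the input serves as the FA map; interpolating the attribute code illustrates the latent-space sampling used to explore phenotype variation. All names, dimensions, and architecture choices here are hypothetical placeholders, not the authors' implementation; the GAN discriminator and training losses of the VAE-GAN are omitted.

```python
# Minimal sketch (hypothetical names and sizes, not the authors' code) of:
# (1) disentangling a background code z from a class-relevant attribute code a,
#     with the FA map taken as |class-translated counterfactual - input|;
# (2) latent-space sampling by interpolating the attribute code.
import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    def __init__(self, z_dim=64, a_dim=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * 16 * 16                     # assumes 1x64x64 inputs
        self.to_z = nn.Linear(feat, z_dim)      # background code
        self.to_a = nn.Linear(feat, a_dim)      # class-relevant code
        self.dec = nn.Sequential(
            nn.Linear(z_dim + a_dim, feat), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.to_z(h), self.to_a(h)

    def decode(self, z, a):
        return self.dec(torch.cat([z, a], dim=1))

def fa_map(model, x, a_target):
    """FA map as |translation of x toward the target class - x|."""
    z, _ = model.encode(x)
    return (model.decode(z, a_target) - x).abs()

def explore_phenotype(model, x, a_from, a_to, steps=5):
    """Latent-space sampling: decode along a path in attribute space."""
    z, _ = model.encode(x)
    return torch.stack([
        model.decode(z, (1 - t) * a_from + t * a_to)
        for t in torch.linspace(0.0, 1.0, steps)
    ])

x = torch.randn(1, 1, 64, 64)                   # toy stand-in for a brain slice
a_healthy, a_disease = torch.zeros(1, 8), torch.ones(1, 8)
model = DisentangledVAE()
print(fa_map(model, x, a_disease).shape)        # torch.Size([1, 1, 64, 64])
print(explore_phenotype(model, x, a_healthy, a_disease).shape)
```

Note the design choice the abstract highlights: the FA map comes from comparing the input with its image-to-image translation rather than from classifier gradients, which is what allows direct validation against ground-truth change maps.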
