Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors

Document pages: 10 pages

Abstract: Adversarial attacks are considered a potentially serious security threat for machine learning systems. Medical image analysis (MedIA) systems have recently been argued to be particularly vulnerable to adversarial attacks due to strong financial incentives. In this paper, we study several previously unexplored factors affecting the adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology. Firstly, we study the effect of varying the degree of adversarial perturbation on the attack performance and its visual perceptibility. Secondly, we study how pre-training on a public dataset (ImageNet) affects the model's vulnerability to attacks. Thirdly, we study the influence of data and model architecture disparity between target and attacker models. Our experiments show that the degree of perturbation significantly affects both the performance and the human perceptibility of attacks. Pre-training may dramatically increase the transfer of adversarial examples; the larger the performance gain achieved by pre-training, the larger the transfer. Finally, disparity in data and/or model architecture between target and attacker models substantially decreases the success of attacks. We believe that these factors should be considered when designing cybersecurity-critical MedIA systems, as well as kept in mind when evaluating their vulnerability to adversarial attacks.
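The "degree of adversarial perturbation" studied in the abstract is commonly controlled by a budget parameter (often written epsilon) in gradient-sign attacks such as FGSM. The paper does not give its attack implementation here, so the following is only a minimal illustrative sketch: a toy logistic-regression "model" stands in for a MedIA classifier, and `fgsm_perturb` is a hypothetical helper showing how a larger epsilon yields a larger (and thus more perceptible) pixel-wise perturbation.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method step: move each pixel by eps in the
    direction of the sign of the loss gradient, then clip to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a medical image classifier (assumed, not from the paper).
rng = np.random.default_rng(0)
w = rng.normal(size=16)        # hypothetical model weights
x = rng.uniform(size=16)       # hypothetical normalized image pixels in [0, 1]
y = 1.0                        # true label

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input is (p - y) * w, where p = sigmoid(w . x).
p = sigmoid(w @ x)
grad = (p - y) * w

for eps in (0.01, 0.05, 0.1):
    x_adv = fgsm_perturb(x, grad, eps)
    # The max per-pixel change is bounded by eps: larger eps means a
    # stronger but more visually perceptible attack.
    print(eps, np.max(np.abs(x_adv - x)))
```

The per-pixel change never exceeds epsilon (clipping can only shrink it), which is why sweeping epsilon, as the paper does, trades off attack strength against human perceptibility.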
