Dense and Sparse Coding Theory and Architectures

Document pages: 15 pages

Abstract: The sparse representation model has been successfully utilized in a number of signal and image processing tasks; however, recent research has highlighted its limitations in certain deep-learning architectures. This paper proposes a novel dense and sparse coding model that considers the problem of recovering a dense vector $\mathbf{x}$ and a sparse vector $\mathbf{u}$ given linear measurements of the form $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$. Our first theoretical result proposes a new, natural geometric condition, based on the minimal angle between the subspaces corresponding to the measurement matrices $\mathbf{A}$ and $\mathbf{B}$, that establishes the uniqueness of solutions to the linear system. The second analysis shows that, under mild assumptions and sufficiently many linear measurements, a convex program recovers the dense and sparse components with high probability. The standard RIPless analysis cannot be applied directly to this setup; our proof is a non-trivial adaptation of techniques from anisotropic compressive sensing theory and is based on an analysis of a matrix derived from the measurement matrices $\mathbf{A}$ and $\mathbf{B}$. We begin by demonstrating the effectiveness of the proposed model on simulated data. Then, to address its use in a dictionary learning setting, we propose a dense and sparse auto-encoder (DenSaE) tailored to it. We demonstrate that (a) DenSaE denoises natural images better than architectures derived from the sparse coding model ($\mathbf{B}\mathbf{u}$), (b) training the biases in the latter amounts to implicitly learning the $\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$ model, and (c) $\mathbf{A}$ and $\mathbf{B}$ capture low- and high-frequency content, respectively.
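As a rough illustration of the recovery problem described above, the following NumPy sketch generates measurements $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$ and recovers both components via a convex program (least squares plus an $\ell_1$ penalty on $\mathbf{u}$ only), solved here with a simple proximal-gradient loop. The dimensions, sparsity level, regularization weight, and solver are illustrative assumptions, not the paper's actual formulation or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 20, 100  # assumed sizes: measurements, dense dim, sparse dim

# Random Gaussian measurement matrices (an assumption; the paper's
# probabilistic analysis concerns matrices of this general flavor).
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, k)) / np.sqrt(m)

x_true = rng.standard_normal(n)                 # dense component x
u_true = np.zeros(k)                            # sparse component u
u_true[rng.choice(k, 5, replace=False)] = 3.0   # 5 nonzero entries
y = A @ x_true + B @ u_true

# Convex objective: 0.5*||y - Ax - Bu||^2 + lam*||u||_1,
# minimized by proximal gradient (soft-thresholding only the u block).
M = np.hstack([A, B])
L = np.linalg.norm(M, 2) ** 2   # Lipschitz constant of the smooth part
lam = 0.01                      # illustrative regularization weight
z = np.zeros(n + k)
for _ in range(2000):
    z -= M.T @ (M @ z - y) / L                  # gradient step
    z[n:] = np.sign(z[n:]) * np.maximum(np.abs(z[n:]) - lam / L, 0.0)

x_hat, u_hat = z[:n], z[n:]
print("x error:", np.linalg.norm(x_hat - x_true))
print("u error:", np.linalg.norm(u_hat - u_true))
```

With many more measurements than degrees of freedom, as here, both errors come out small; the paper's contribution is characterizing when such recovery succeeds (uniqueness via the minimal angle between the ranges of $\mathbf{A}$ and $\mathbf{B}$, and high-probability recovery for the convex program).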
