Whitehill, Jacob Richard
In this project we investigate the viability of collecting annotations for face images while preserving privacy by using synthesized images as surrogates. We compare two approaches: a deep learning model that renders a detailed 3D reconstruction of the face from an input image, and a novel generative adversarial network architecture that extends BEGAN-CS to generate images conditioned on desired facial features. Using these two models, we conduct an experiment with crowdsourced workers to compare the annotation quality of original face images against their synthesized versions. Across 60 workers annotating a total of 180 images (60 of each version), we find that while the original versions yield the best annotation accuracy (84.5%), the 3D (75.9%) and GAN (75.6%) versions show promising results.
Worcester Polytechnic Institute
Major Qualifying Project
All authors have granted WPI a nonexclusive, royalty-free license to distribute copies of the work, subject to other agreements. Copyright is held by the author or authors, with all rights reserved, unless otherwise noted.