TY - GEN
T1 - OGCTL
T2 - 2019 International Conference on Biometrics, ICB 2019
AU - Wu, Yuhang
AU - Kakadiaris, Ioannis A.
N1 - Funding Information:
This material is based upon work supported by the U.S. Department of Homeland Security under Grant Award Number 2017-ST-BTI-0001-0201. This grant is awarded to the Borders, Trade, and Immigration (BTI) Institute: A DHS Center of Excellence led by the University of Houston, and includes support for the project “EDGE” awarded to the University of Houston. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - Concatenation of the deep network representations extracted from different facial patches helps to improve face recognition performance. However, the concatenated facial template increases in size and contains redundant information. Previous solutions aim to reduce the dimensionality of the facial template without considering the occlusion pattern of the facial patches. In this paper, we propose an occlusion-guided compact template learning (OGCTL) approach that uses only the information from visible patches to construct the compact template. The compact face representation is not sensitive to the number of patches used to construct the facial template, and is more suitable for incorporating information from different view angles for image-set-based face recognition. Instead of using occlusion masks in face matching (e.g., DPRFS [38]), the proposed method uses occlusion masks in template construction and achieves significantly better image-set-based face verification performance on a challenging database with a template size that is an order of magnitude smaller than DPRFS.
AB - Concatenation of the deep network representations extracted from different facial patches helps to improve face recognition performance. However, the concatenated facial template increases in size and contains redundant information. Previous solutions aim to reduce the dimensionality of the facial template without considering the occlusion pattern of the facial patches. In this paper, we propose an occlusion-guided compact template learning (OGCTL) approach that uses only the information from visible patches to construct the compact template. The compact face representation is not sensitive to the number of patches used to construct the facial template, and is more suitable for incorporating information from different view angles for image-set-based face recognition. Instead of using occlusion masks in face matching (e.g., DPRFS [38]), the proposed method uses occlusion masks in template construction and achieves significantly better image-set-based face verification performance on a challenging database with a template size that is an order of magnitude smaller than DPRFS.
UR - http://www.scopus.com/inward/record.url?scp=85081048123&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081048123&partnerID=8YFLogxK
U2 - 10.1109/ICB45273.2019.8987272
DO - 10.1109/ICB45273.2019.8987272
M3 - Conference contribution
AN - SCOPUS:85081048123
T3 - 2019 International Conference on Biometrics, ICB 2019
BT - 2019 International Conference on Biometrics, ICB 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 June 2019 through 7 June 2019
ER -