TY - GEN
T1 - Equal Confusion Fairness
T2 - 22nd IEEE International Conference on Data Mining Workshops, ICDMW 2022
AU - Gursoy, Furkan
AU - Kakadiaris, Ioannis A.
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation under Grant CCF-2131504. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - As artificial intelligence plays an increasingly substantial role in decisions affecting humans and society, the accountability of automated decision systems has been receiving increasing attention from researchers and practitioners. Fairness, which is concerned with eliminating unjust treatment and discrimination against individuals or sensitive groups, is a critical aspect of accountability. Yet, for evaluating fairness, the literature offers a plethora of metrics that employ different, often incompatible, perspectives and assumptions. This work focuses on group fairness. Most group fairness metrics seek parity between selected statistics computed from the confusion matrices of different sensitive groups. Generalizing this intuition, this paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness. To further analyze the source of potential unfairness, an appropriate post hoc analysis methodology is also presented. The usefulness of the test, metric, and post hoc analysis is demonstrated via a case study on the controversial case of COMPAS, an automated decision system employed in the US to assist judges with assessing recidivism risks. Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment, such as those based on the system accountability benchmark.
KW - algorithm audit
KW - algorithmic accountability
KW - artificial intelligence
KW - automated decision systems
KW - fairness
UR - http://www.scopus.com/inward/record.url?scp=85148427143&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85148427143&partnerID=8YFLogxK
U2 - 10.1109/ICDMW58026.2022.00027
DO - 10.1109/ICDMW58026.2022.00027
M3 - Conference contribution
AN - SCOPUS:85148427143
T3 - IEEE International Conference on Data Mining Workshops, ICDMW
SP - 137
EP - 146
BT - Proceedings - 22nd IEEE International Conference on Data Mining Workshops, ICDMW 2022
A2 - Candan, K. Selcuk
A2 - Dinh, Thang N.
A2 - Thai, My T.
A2 - Washio, Takashi
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 28 November 2022 through 1 December 2022
ER -