TY - GEN
T1 - NeuroView-RNN
T2 - 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
AU - Barberan, CJ
AU - Alemohammad, Sina
AU - Liu, Naiming
AU - Balestriero, Randall
AU - Baraniuk, Richard
N1 - Funding Information:
This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and a Vannevar Bush Faculty Fellowship (ONR grant N00014-18-1-2047). We would like to thank Yehuda Dar, Hamid Javadi, Vishwanath Saragadam, and Fernando Gama for their comments and suggestions on this article.
Publisher Copyright:
© 2022 ACM.
PY - 2022/6/21
Y1 - 2022/6/21
N2 - Recurrent Neural Networks (RNNs) are important tools for processing sequential data such as time-series or video. Interpretability is defined as the ability to be understood by a person and is different from explainability, which is the ability to be explained in a mathematical formulation. A key interpretability issue with RNNs is that it is not clear how each hidden state per time step contributes to the decision-making process in a quantitative manner. We propose NeuroView-RNN as a family of new RNN architectures that explains how all the time steps are used for the decision-making process. Each member of the family is derived from a standard RNN architecture by concatenation of the hidden steps into a global linear classifier. The global linear classifier has all the hidden states as the input, so the weights of the classifier have a linear mapping to the hidden states. Hence, from the weights, NeuroView-RNN can quantify how important each time step is to a particular decision. As a bonus, NeuroView-RNN also offers higher accuracy in many cases compared to the RNNs and their variants. We showcase the benefits of NeuroView-RNN by evaluating on a multitude of diverse time-series datasets.
KW - Recurrent neural networks
KW - interpretability
KW - time series
UR - http://www.scopus.com/inward/record.url?scp=85133007281&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133007281&partnerID=8YFLogxK
U2 - 10.1145/3531146.3533224
DO - 10.1145/3531146.3533224
M3 - Conference contribution
AN - SCOPUS:85133007281
T3 - ACM International Conference Proceeding Series
SP - 1683
EP - 1697
BT - Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
PB - Association for Computing Machinery
Y2 - 21 June 2022 through 24 June 2022
ER -