Recognition of sleep dependent memory consolidation with multi-modal sensor data

Akane Sano, Rosalind W. Picard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Scopus citations

Abstract

This paper presents the possibility of recognizing sleep-dependent memory consolidation using multi-modal sensor data. We collected visual discrimination task (VDT) performance before and after sleep in laboratory, hospital, and home settings for N=24 participants while recording EEG (electroencephalogram), EDA (electrodermal activity), and ACC (accelerometer) or actigraphy data during sleep. We extracted features from the sleep data and applied machine learning techniques (discriminant analysis, support vector machine, and k-nearest neighbor) to classify whether the participants showed improvement in the memory task. Our results showed 60-70% accuracy in a binary classification of task performance using EDA or EDA+ACC features, which provided an improvement over the more traditional use of sleep stages (the percentages of slow wave sleep (SWS) in the 1st quarter and rapid eye movement (REM) sleep in the 4th quarter of the night) to predict VDT improvement.
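
The classification setup the abstract describes maps naturally onto a standard scikit-learn workflow: per-night sleep features feed three classifiers (discriminant analysis, SVM, k-nearest neighbor) in a binary prediction of VDT improvement. The sketch below is illustrative only and is not the authors' code; the synthetic feature matrix, the feature dimensionality, and the cross-validation choice are assumptions standing in for the EDA/ACC sleep features and evaluation protocol actually used in the paper.

```python
# A minimal sketch (not the authors' pipeline) of the classification setup
# described in the abstract: binary classification of VDT improvement from
# sleep features, comparing discriminant analysis, SVM, and k-NN.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants = 24  # matches the N=24 reported in the abstract

# Placeholder feature matrix: in the paper these would be features extracted
# from EDA and accelerometer/actigraphy recordings during sleep; here the
# values and the choice of 6 features are synthetic assumptions.
X = rng.normal(size=(n_participants, 6))
# Binary label: did the participant improve on the visual discrimination task?
y = rng.integers(0, 2, size=n_participants)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3)),
}

# 4-fold cross-validation is an assumption; with N=24, leave-one-out would
# also be a common choice.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=4)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```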

Original language: English (US)
Title of host publication: 2013 IEEE International Conference on Body Sensor Networks, BSN 2013
State: Published - 2013
Event: 2013 IEEE International Conference on Body Sensor Networks, BSN 2013 - Cambridge, MA, United States
Duration: May 6, 2013 – May 9, 2013

Publication series

Name: 2013 IEEE International Conference on Body Sensor Networks, BSN 2013

Conference

Conference: 2013 IEEE International Conference on Body Sensor Networks, BSN 2013
Country/Territory: United States
City: Cambridge, MA
Period: 5/6/13 – 5/9/13

Keywords

  • actigraphy
  • classification
  • EDA
  • EEG
  • galvanic skin response
  • GSR
  • memory consolidation
  • visual discrimination task (VDT)

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
