Tuesday, October 20, 2020

Evolving NILM to NIAD: Non-Intrusive Activity Detection

Almost all documented practical use cases of load disaggregation rely on the analysis of appliance operational times and their impact on the monthly electricity bill. However, load disaggregation bears promising potential for other use cases. Recognizing user activities without the need to set up a dedicated sensing infrastructure is one such application, given that many household activities involve the use of electrical appliances. Yet state-of-the-art disaggregation algorithms only support the recognition of one appliance at a time.

In collaboration with Andreas Reinhardt from TU Clausthal, we thus take load disaggregation to the next level and investigate to what extent it can be used to monitor user activities that involve multiple appliances operating sequentially or in parallel. To evaluate our Non-Intrusive Activity Detection (NIAD), we synthetically generate load signature data to model nine typical user activities and assess to what extent they can be detected in aggregate electrical consumption data. Our results show that state-of-the-art load disaggregation algorithms are also well-suited to identify user activities, at accuracy levels comparable to (but slightly below) the disaggregation of individual appliances.
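To make the synthesis step more concrete, here is a minimal sketch of the underlying idea (this is not the code used in the paper; the appliances, durations, and power levels are made up for illustration): individual appliance load signatures are superimposed at chosen start offsets to form an activity trace, which can then be embedded into an aggregate household signal for detection experiments.

    import numpy as np

    def compose_activity(signatures, offsets, duration):
        """Superimpose appliance load signatures (power in watts, one sample
        per second) at the given start offsets to form one activity trace."""
        activity = np.zeros(duration)
        for sig, start in zip(signatures, offsets):
            end = min(start + len(sig), duration)
            activity[start:end] += sig[:end - start]
        return activity

    # Hypothetical activity "preparing coffee": grinder, then coffee machine
    grinder = np.full(30, 150.0)            # 30 s at 150 W
    coffee_machine = np.full(120, 1200.0)   # 120 s at 1200 W
    activity = compose_activity([grinder, coffee_machine], [0, 40], 300)

    # Embed the activity into a noisy base load to obtain the aggregate signal
    base_load = np.random.normal(200.0, 20.0, 3600)
    aggregate = base_load.copy()
    aggregate[600:600 + len(activity)] += activity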

Our paper is to appear at the 2nd ACM Workshop on Device-Free Human Sensing (DFHS'20):

Andreas Reinhardt and Christoph Klemenjak. 2020. Device-Free User Activity Detection using Non-Intrusive Load Monitoring: A Case Study. In The 2nd ACM Workshop on Device-Free Human Sensing (DFHS ’20), November 15, 2020, Virtual Event, Japan.

We look forward to pitching the concept of NIAD to the community!

Tuesday, October 13, 2020

Stop! Exploring Bayesian Surprise for Load Disaggregation

In our latest paper, the result of an ongoing collaboration between our lab and SFU's Computational Sustainability Lab, we bring the concept of Bayesian surprise to NILM. When has enough prior training been done? When has a NILM algorithm encountered new, unseen data? We apply the notion of Bayesian surprise to answer these important questions for both supervised and unsupervised algorithms.

"Bayesian surprise quantifies how data affects natural or artificial observers, by measuring differences between posterior and prior beliefs of the observers" - ilab.usc.edu

Bayesian Surprise is measured in "wow"

We compare the performance of several NILM algorithms to establish a suggested threshold on two combined measures of surprise: postdictive surprise and transitional surprise.
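Both measures build on the basic notion of Bayesian surprise: the Kullback-Leibler divergence between an observer's posterior and prior beliefs. The snippet below is a generic illustration of that notion only, not the paper's definitions of postdictive or transitional surprise; the Beta-Bernoulli belief model, the observation counts, and the use of log base 2 are assumptions made for the example.

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import beta

    def bayesian_surprise(prior, posterior):
        """KL divergence D(posterior || prior), computed numerically on (0, 1).
        Log base 2 is used here; the resulting unit is informally a "wow"."""
        def integrand(x):
            p, q = posterior.pdf(x), prior.pdf(x)
            return p * np.log2(p / q) if p > 0 and q > 0 else 0.0
        return quad(integrand, 0.0, 1.0)[0]

    # Made-up example: belief about how often an appliance is switched on
    prior = beta(2, 2)                 # belief before seeing a new chunk of data
    on, total = 45, 50                 # new observations: appliance mostly on
    posterior = beta(2 + on, 2 + (total - on))

    print(f"Bayesian surprise: {bayesian_surprise(prior, posterior):.2f} wow")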

We provide preliminary insights and clear evidence of a point of diminishing returns in model performance with respect to dataset size. This finding has implications for future model development and dataset acquisition, and can aid model flexibility during deployment.

The paper is to appear at the 5th International Workshop on Non-Intrusive Load Monitoring (NILM'20): 

Richard Jones, Christoph Klemenjak, Stephen Makonin, and Ivan V. Bajić. 2020. Stop! Exploring Bayesian Surprise to Better Train NILM. In The 5th International Workshop on Non-Intrusive Load Monitoring (NILM ’20), November 18, 2020, Virtual Event, Japan.

An author's copy can be obtained from Christoph's personal website.

We are looking forward to discussing this novel approach for NILM.