In our latest paper, the result of an ongoing collaboration between our lab and SFU's Computational Sustainability Lab, we bring the concept of Bayesian surprise to NILM. When has enough prior training been done? When has a NILM algorithm encountered new, unseen data? We apply the notion of Bayesian surprise to answer these important questions for both supervised and unsupervised algorithms.
"Bayesian surprise quantifies how data affects natural or artificial observers, by measuring differences between posterior and prior beliefs of the observers" - ilab.usc.edu
Bayesian surprise is measured in "wow"
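As the quote above suggests, Bayesian surprise is commonly formalized as the KL divergence between an observer's posterior and prior beliefs. As a purely illustrative sketch (not the implementation from the paper), here is how that quantity can be computed for a simple 1-D Gaussian belief about an appliance's mean power draw; all numbers are made up for the example:

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) between two 1-D Gaussians, in nats."""
    return (np.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

# Prior belief about an appliance's mean power draw (watts).
mu_prior, sigma_prior = 100.0, 20.0

# Conjugate Bayesian update of the mean after a batch of readings,
# assuming known observation noise (illustrative values).
readings = np.array([150.0, 160.0, 155.0, 148.0])
obs_sigma = 10.0
n = len(readings)
post_var = 1.0 / (1.0 / sigma_prior**2 + n / obs_sigma**2)
mu_post = post_var * (mu_prior / sigma_prior**2 + readings.sum() / obs_sigma**2)
sigma_post = np.sqrt(post_var)

# Bayesian surprise: how far the data moved the belief.
surprise = gaussian_kl(mu_post, sigma_post, mu_prior, sigma_prior)
print(f"Bayesian surprise: {surprise:.2f} nats")
```

Data that strongly contradicts the prior (here, readings far above the expected 100 W) yields a large surprise; data the model already expects yields a surprise near zero.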
We provide preliminary insights and clear evidence of a point of diminishing returns in model performance with respect to dataset size. This has implications for future model development and dataset acquisition, and can aid model flexibility during deployment.
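One way to picture such a diminishing-returns point is a stopping rule: keep feeding training batches to the model only while each new batch is still surprising. The following is a hypothetical sketch built on the same toy Gaussian belief as above; the threshold, synthetic data, and update model are illustrative assumptions, not the procedure from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) between two 1-D Gaussians, in nats."""
    return (np.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

# Initial (vague) belief about an appliance's mean power draw (watts).
mu, sigma = 0.0, 50.0
obs_sigma = 15.0       # assumed known observation noise
threshold = 0.05       # nats; illustrative "no longer surprising" cutoff

stopped_at = None
for batch_idx in range(50):
    # Synthetic readings standing in for a new batch of training data.
    batch = rng.normal(120.0, obs_sigma, size=32)
    n = len(batch)
    post_var = 1.0 / (1.0 / sigma**2 + n / obs_sigma**2)
    mu_new = post_var * (mu / sigma**2 + batch.sum() / obs_sigma**2)
    sigma_new = np.sqrt(post_var)
    # Surprise contributed by this batch alone.
    surprise = gaussian_kl(mu_new, sigma_new, mu, sigma)
    mu, sigma = mu_new, sigma_new
    if surprise < threshold:
        stopped_at = batch_idx + 1
        print(f"Stopping after batch {stopped_at}: "
              f"surprise {surprise:.4f} nats below threshold")
        break
```

Early batches move the belief a lot (high surprise); once the belief has converged, additional batches of the same data barely change it, which is exactly the diminishing-returns behavior described above.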
The paper is to appear at the 5th International Workshop on Non-Intrusive Load Monitoring (NILM'20):
Richard Jones, Christoph Klemenjak, Stephen Makonin, and Ivan V. Bajić. 2020. Stop! Exploring Bayesian Surprise to Better Train NILM. In The 5th International Workshop on Non-Intrusive Load Monitoring (NILM ’20), November 18, 2020, Virtual Event, Japan.
An author's copy can be obtained from Christoph's personal website.
We are looking forward to discussing this novel approach for NILM.