“Continual Learning in Deep Neural Network by using a Kalman Optimizer” is an article closely tied to the IoTCrawler project, published by Honglin Li, Shirin Enshaeifar, Frieder Ganz and Payam Barnaghi of the University of Surrey.
Learning new tasks sequentially, or adapting to new data distributions, without forgetting previously learned knowledge is a core challenge for continual learning models (e.g. machine learning models that process IoT and smart city data). Most conventional deep learning models cannot learn tasks sequentially in a single model without forgetting the earlier ones, a problem known as catastrophic forgetting. We address this issue by using a Kalman Optimiser.
The Kalman Optimiser divides the neural network's parameters into two parts: a long-term and a short-term memory unit. The long-term memory unit retains the previously learned tasks, while the short-term memory unit adapts to the new task.
These units determine which parameters the Kalman update procedure restricts from changing (those in long-term memory). The update procedure thereby adds an adjustment and control mechanism that allows the model to learn new tasks without significantly forgetting the previously learned ones.
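To make the idea concrete, the sketch below shows one way such a gated update could look in PyTorch. It is not the authors' implementation: the `KalmanStyleOptimiser` class, the `process_noise` and `obs_noise` parameters, and the per-weight variance initialisation are illustrative assumptions. Each weight carries a variance estimate; a Kalman-gain-like factor scales its gradient step, so consolidated (low-variance, long-term) weights barely move while uncertain (short-term) weights remain free to adapt to the new task.

```python
import torch

class KalmanStyleOptimiser:
    """Illustrative sketch, not the authors' code: every weight keeps a
    variance estimate, and a Kalman-gain-like factor scales its update,
    so low-variance ('long-term') weights change little while
    high-variance ('short-term') weights adapt to the new task."""

    def __init__(self, params, lr=0.01, process_noise=1e-4, obs_noise=1.0):
        self.params = list(params)
        self.lr = lr
        self.q = process_noise   # uncertainty added per step (assumed value)
        self.r = obs_noise       # gradient 'measurement' noise (assumed value)
        # Start every weight as 'short-term': high variance, free to move.
        self.var = [torch.ones_like(p) for p in self.params]

    @torch.no_grad()
    def step(self):
        for p, v in zip(self.params, self.var):
            if p.grad is None:
                continue
            v += self.q                    # predict: variance grows slightly
            gain = v / (v + self.r)        # Kalman-gain-like factor in [0, 1]
            p -= self.lr * gain * p.grad   # gated update: a small gain
                                           # protects long-term memory
            v *= 1.0 - gain                # variance shrinks as the weight
                                           # consolidates into long-term memory

# Toy usage: weights that keep receiving consistent gradients consolidate
# (their variance shrinks), making them hard for a later task to overwrite.
model = torch.nn.Linear(4, 2)
opt = KalmanStyleOptimiser(model.parameters(), lr=0.05)
x, y = torch.randn(8, 4), torch.randn(8, 2)
for _ in range(10):
    loss = torch.nn.functional.mse_loss(model(x), y)
    model.zero_grad()
    loss.backward()
    opt.step()
```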
Read more about the Kalman optimising procedure in the full article, available in our repository.