skylark-test

The entry point for the code is run.py. The training code is in the function train and the evaluation code is in the function test, both in main.py.

This is my submission for the internship application at SkyLark Labs. The following is a summary of my approach:

  1. I use the memory-augmented neural network (MANN) described in the paper "One-shot Learning with Memory-Augmented Neural Networks".
  2. I use the MNIST dataset for my experiments because it is the simplest dataset. In my formulation I split the dataset into two parts: one part contains the labels 0, 1, 2, 3, 4 and the other contains the labels 5, 6, 7, 8, 9.
  3. My training methodology is as follows (a minimal sketch is given after this list):
    i. I train the MANN on the first task until convergence, using the --labels option to specify which labels are part of the first task. If a pretrained model has to be loaded, the --pretrained_model option is used.
    ii. Next, I train the MANN on the second task until convergence, again using the --labels option to specify which labels are part of the second task. Here I also use the --pretrained_model option to load the model from the previous task, which leads to transfer of learning. Additionally, I use the --no_write option, which trains only the weights of the convolutional neural network while keeping the memory matrix unchanged.
    iii. Repeat steps i and ii until no further learning happens on either task, then stop.
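
To make the procedure concrete, here is a minimal sketch of the two-task MNIST split and the alternating training loop. This is not the repository's code: it assumes a standard torchvision MNIST pipeline, and the train() call shown in the comments is a hypothetical stand-in for the actual training routine in main.py.

```python
# Minimal sketch of the two-task formulation (not the repository's code).
# Assumes a standard torchvision MNIST pipeline.
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())

# Task A: labels 0-4, Task B: labels 5-9 (the split described in point 2).
idx_a = (mnist.targets <= 4).nonzero(as_tuple=True)[0].tolist()
idx_b = (mnist.targets >= 5).nonzero(as_tuple=True)[0].tolist()

task_a_loader = DataLoader(Subset(mnist, idx_a), batch_size=64, shuffle=True)
task_b_loader = DataLoader(Subset(mnist, idx_b), batch_size=64, shuffle=True)

# Hypothetical outer loop for steps i-iii. The train() call below is a
# stand-in: the actual function in main.py may have a different signature,
# and write_to_memory=False here plays the role of the --no_write option.
#
# for _ in range(max_rounds):
#     train(model, task_a_loader, write_to_memory=True)   # step i: full training on task A
#     train(model, task_b_loader, write_to_memory=False)  # step ii: CNN-only training on task B
#     if converged_on_both_tasks(model):                  # step iii: stop when neither task improves
#         break
```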

I believe that the asymmetry between steps i and ii allows us to train on both tasks while also overcoming catastrophic forgetting, thus achieving continual learning.

The following is a schematic of my approach: [Figure: Schematic of my approach]

PS: While I have used an external memory matrix to address catastrophic forgetting, I believe we can look to more modern methods where tasks are defined using prompts; for example, see this recent paper by Facebook AI. Adopting such methods in addition to the augmented memory might be an interesting idea to try.
