As a machine learning engineer, I am passionate about meta-learning, particularly its combination with visual self-supervised learning. I am interested in developing intelligent agents that can rapidly acquire new skills and adapt to novel environments with minimal supervision.
Research
Regularized Meta-Learning for Neural Architecture Search
Rob van Gastel, Joaquin Vanschoren
AutoML-Conf Late-Breaking Workshop 2022
paper / code
We apply regularization techniques to the inner-loop neural architecture search of meta-learning, enabling faster adaptation to new tasks.
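A minimal sketch of the underlying idea: regularizing the inner loop so that adapted solutions stay close to the meta-learned initialization. Note that this toy version regularizes weights on a synthetic regression task, whereas the paper regularizes the inner-loop architecture search itself; regularized_inner_loop and all hyperparameters here are illustrative assumptions.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def regularized_inner_loop(meta_model, support_x, support_y,
                           steps=5, lr=1e-2, reg_strength=0.1):
    """Adapt a copy of meta_model on one task, with an L2 penalty that
    keeps the adapted weights close to the meta-initialization."""
    model = copy.deepcopy(meta_model)
    meta_params = [p.detach().clone() for p in meta_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(model(support_x), support_y)
        # Proximal term: discourage large deviations from the meta-weights.
        for p, p0 in zip(model.parameters(), meta_params):
            loss = loss + reg_strength * (p - p0).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Toy usage: a small regression model adapted to one synthetic task.
meta_model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(16, 8), torch.randn(16, 1)
adapted = regularized_inner_loop(meta_model, x, y)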
Projects
Removing Bias by Post-Training Pretrained Encoders
Testing Franca's "Removal of Absolute Spatial Attributes" (RASA) post-training method for debiasing pretrained ViTs. The method is simple and effective, also works for other pretrained backbones such as DINOv2 and DINOv3, and improves downstream tasks with just one additional hour of training.
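As a hypothetical illustration of the core idea, erasing absolute spatial information from patch embeddings, the sketch below removes the linearly position-predictable component of the features via least squares. This is not Franca's actual RASA procedure, and remove_position_component is an invented helper.

import torch

def remove_position_component(feats, positions):
    """feats: (N, D) patch embeddings; positions: (N, P) patch coordinates.
    Subtract the component of the features that is linearly predictable
    from patch position."""
    # Solve for W minimizing ||positions @ W - feats||^2 (least squares).
    W = torch.linalg.lstsq(positions, feats).solution  # (P, D)
    return feats - positions @ W

# Toy usage on random features from a 14x14 patch grid.
N, D = 14 * 14, 768
feats = torch.randn(N, D)
rows = torch.arange(14).repeat_interleave(14).float()
cols = torch.arange(14).repeat(14).float()
positions = torch.stack([rows, cols, torch.ones(N)], dim=1)  # with bias term
cleaned = remove_position_component(feats, positions)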
Exploring Meta In-Context Learning
Testing in-context learning, the mechanism that lets LLMs adapt their predictions from context, at a smaller scale. I evaluate the ability of meta-trained in-context learners to solve out-of-domain tasks.
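A minimal sketch of meta-training an in-context learner on synthetic linear-regression tasks, in the spirit of this line of work; InContextRegressor, the task distribution, and all sizes are illustrative assumptions rather than the actual experimental setup.

import torch
import torch.nn as nn

class InContextRegressor(nn.Module):
    def __init__(self, dim=8, width=64, layers=2):
        super().__init__()
        self.embed = nn.Linear(dim + 1, width)   # one token per (x, y) pair
        enc_layer = nn.TransformerEncoderLayer(width, nhead=4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(width, 1)

    def forward(self, xs, ys):
        # Mask the final y so the model must infer it from the context pairs.
        ys = torch.cat([ys[:, :-1], torch.zeros_like(ys[:, -1:])], dim=1)
        tokens = self.embed(torch.cat([xs, ys.unsqueeze(-1)], dim=-1))
        return self.head(self.encoder(tokens))[:, -1, 0]  # predict last y

def sample_linear_tasks(batch=32, n_points=10, dim=8):
    """Each task is a random linear function; each prompt holds its samples."""
    w = torch.randn(batch, dim, 1)
    xs = torch.randn(batch, n_points, dim)
    ys = (xs @ w).squeeze(-1)
    return xs, ys

# One meta-training step over a batch of fresh tasks.
model = InContextRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xs, ys = sample_linear_tasks()
loss = nn.functional.mse_loss(model(xs, ys), ys[:, -1])
opt.zero_grad(); loss.backward(); opt.step()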
Finetuning Transferable Vision Transformer Weights
Finetuning the encoder weights of the self-supervised learning method DINOv2 with Low-Rank Adaptation (LoRA) and a simple 1x1 convolution head allows adaptation to the Pascal VOC and ADE20k datasets within a few epochs.
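A sketch of the two pieces under illustrative assumptions: a LoRA wrapper that adds a trainable low-rank update to a frozen linear layer, and a 1x1 convolution head that maps patch features to class logits. LoRALinear, the rank, and the dimensions are placeholders; a real setup would wrap DINOv2's attention projections, e.g. via the peft library.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Pretrained path plus a low-rank residual update.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# A 1x1 convolution maps patch features to per-patch class logits.
num_classes, feat_dim = 21, 768      # e.g., Pascal VOC classes, ViT-B width
seg_head = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

# Toy usage: adapt one projection and decode a 14x14 patch-feature map.
proj = LoRALinear(nn.Linear(feat_dim, feat_dim))
feats = proj(torch.randn(1, 14 * 14, feat_dim))
logits = seg_head(feats.transpose(1, 2).reshape(1, feat_dim, 14, 14))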
Meta-Reinforcement Learning Algorithms
Implementations of Meta-Reinforcement Learning algorithms designed to quickly adapt policies to new, related tasks within a few episodes.
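A generic sketch of what few-episode adaptation can look like: a MAML-style inner loop that collects episodes on a new task and takes REINFORCE gradient steps on a copy of the meta-learned policy. The Gymnasium-style environment API, adapt_policy, and all hyperparameters are assumptions, not any specific algorithm from the repository.

import copy
import torch
import torch.nn as nn

def adapt_policy(meta_policy, env, episodes=3, lr=0.1, gamma=0.99):
    """Few-episode adaptation of a copy of the meta-learned policy."""
    policy = copy.deepcopy(meta_policy)
    opt = torch.optim.SGD(policy.parameters(), lr=lr)
    for _ in range(episodes):
        obs, _ = env.reset()
        log_probs, rewards, done = [], [], False
        while not done:
            dist = torch.distributions.Categorical(
                logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
            action = dist.sample()
            obs, reward, terminated, truncated, _ = env.step(action.item())
            done = terminated or truncated
            log_probs.append(dist.log_prob(action))
            rewards.append(reward)
        # REINFORCE loss with discounted returns.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return policy

# Toy usage (assumes the gymnasium package is installed):
# import gymnasium as gym
# adapted = adapt_policy(nn.Sequential(nn.Linear(4, 32), nn.Tanh(),
#                                      nn.Linear(32, 2)),
#                        gym.make("CartPole-v1"))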