
2023

Meta-Learning with Adaptive Weighted Loss for Imbalanced Cold-Start Recommendation
   M Kim, Y Yang, J Ryu, T Kim

   CIKM 2023

Complementary Domain Adaptation and Generalization for Unsupervised Continual Domain Shift Learning

   W Cho, J Park, T Kim

   ICCV 2023

UOTA: Unsupervised Open-Set Task Adaptation Using a Vision-Language Foundation Model

   Y Min, K Ryoo, B Kim, T Kim

   ICML Workshop 2023 on Efficient Systems for Foundation Models

Uncertainty-Guided Online Test-Time Adaptation via Meta-Learning

   K Chae, T Kim

   ICML Workshop 2023 on Spurious Correlations, Invariance, and Stability

Meta-Learning with a Geometry-Adaptive Preconditioner

   S Kang, D Hwang, M Eo, T Kim, W Rhee

   CVPR 2023

Flexible Model Aggregation for Quantile Regression

   R Fakoor, T Kim, J Mueller, AJ Smola, RJ Tibshirani

   JMLR, vol. 24

2022

Adaptive Interest for Emphatic Reinforcement Learning

   M Klissarov, R Fakoor, J Mueller, K Asadi, T Kim, A Smola

   NeurIPS 2022

Faster Deep Reinforcement Learning with Slower Online Network

   K Asadi, R Fakoor, O Gottesman, T Kim, ML Littman, AJ Smola

   NeurIPS 2022

FAD-X: Fusing Adapters for Cross-lingual Transfer to Low-Resource Languages

   J Lee, S Hwang, T Kim

   AACL 2022

Efficient Task Adaptation by Mixing Discovered Skills

   E Yang, J Rhim, T Kim

   Pre-training Workshop at ICML 2022

Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline

   M Caccia, J Mueller, T Kim, L Charlin, R Fakoor

   arXiv preprint arXiv:2205.14495
