
Research

Advancing AI through representation learning and transfer learning, enabling effective data utilization for real-world applications

Transfer Learning

Transfer learning leverages knowledge gained by a model pre-trained on one task to improve performance on a different but related task. This is particularly useful when data for the target task is scarce.
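
As a minimal sketch of the idea (assuming PyTorch and torchvision; the class count is illustrative), a common recipe is to freeze an ImageNet-pretrained backbone and train only a new classification head on the scarce target data:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained backbone.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the transferred features.
    for p in model.parameters():
        p.requires_grad = False

    # Replace the classification head for the target task (e.g., 10 classes).
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head is trained on the target data.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)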

CLIP

CLIP (Contrastive Language-Image Pre-training) learns to associate images and text by training on a large dataset of image-text pairs. It enables powerful zero-shot learning capabilities for various vision-language tasks.
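
A minimal zero-shot classification sketch using the Hugging Face port of CLIP (an assumption about tooling; the image path and candidate labels are illustrative):

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("photo.jpg")                            # any local image
    labels = ["a photo of a cat", "a photo of a dog"]          # candidate captions

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image              # image-text similarity
    probs = logits.softmax(dim=-1)                             # zero-shot probabilities over labels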

Meta-learning

Meta-learning, or “learning to learn,” focuses on training models to quickly adapt to new tasks with limited data by identifying patterns across multiple tasks.
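
A minimal sketch of one such method, the first-order Reptile algorithm (assuming PyTorch; sample_task is a hypothetical function that yields a batch from one task):

    import copy
    import torch

    def reptile_step(model, loss_fn, sample_task, inner_lr=1e-2, meta_lr=0.1, inner_steps=5):
        # Inner loop: adapt a copy of the model to one sampled task.
        task_model = copy.deepcopy(model)
        opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            x, y = sample_task()                  # hypothetical task sampler
            opt.zero_grad()
            loss_fn(task_model(x), y).backward()
            opt.step()
        # Outer loop: move the meta-initialization toward the adapted weights,
        # so future tasks can be learned in just a few gradient steps.
        with torch.no_grad():
            for p, q in zip(model.parameters(), task_model.parameters()):
                p += meta_lr * (q - p)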

PEFT

Parameter-Efficient Fine-Tuning (PEFT) refers to techniques that adapt large pre-trained models by updating only a small subset of parameters, reducing computational costs and memory usage while retaining performance.
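
One widely used PEFT technique is LoRA, which freezes the pretrained weights and learns a low-rank additive update. A minimal sketch (assuming PyTorch; the rank and dimensions are illustrative):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) B A x.
        def __init__(self, base, r=8, alpha=16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():      # pretrained weights stay frozen
                p.requires_grad = False
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(nn.Linear(768, 768))       # only A and B are updated during fine-tuning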

Continual Learning

Continual learning allows models to learn continuously from new data without forgetting previously acquired knowledge, addressing the issue of catastrophic forgetting.
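
A minimal sketch of one classic approach, an elastic-weight-consolidation-style penalty (assuming PyTorch; fisher and old_params are assumed to be precomputed dictionaries keyed by parameter name):

    def ewc_penalty(model, fisher, old_params, lam=1000.0):
        # Quadratic penalty that keeps parameters close to their values after
        # the previous task, weighted by a diagonal Fisher information estimate,
        # so weights important to old tasks resist being overwritten.
        penalty = 0.0
        for name, p in model.named_parameters():
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
        return lam * penalty

    # Training on the new task:
    # loss = loss_fn(model(x), y) + ewc_penalty(model, fisher, old_params)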

Test-time Adaptation

Test-time adaptation adjusts a pre-trained model during inference, using only the incoming test data and no retraining, so that the model generalizes better to new, unseen environments or tasks.
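
A minimal sketch of one such method, Tent-style entropy minimization, which updates only the affine parameters of normalization layers on each unlabeled test batch (assuming PyTorch):

    import torch
    import torch.nn as nn

    def tent_adapt(model, x, lr=1e-3):
        model.train()                              # normalization layers use current batch statistics
        # Update only the affine parameters of normalization layers.
        params = [p for m in model.modules()
                  if isinstance(m, (nn.BatchNorm2d, nn.LayerNorm))
                  for p in m.parameters()]
        opt = torch.optim.SGD(params, lr=lr)

        probs = model(x).softmax(dim=-1)           # x: unlabeled test batch
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        opt.zero_grad()
        entropy.backward()                         # minimize prediction entropy
        opt.step()
        return model(x).detach()                   # predictions after adaptation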

Others

Additional methods or extensions within transfer learning, such as domain adaptation and few-shot learning, which focus on further improving generalization across diverse scenarios.

Representation Learning

Representation learning focuses on automatically discovering useful features or representations from raw data, often for the purpose of improving downstream tasks such as classification or prediction.

Multi-modal Learning

Multi-modal learning integrates and processes data from multiple modalities (e.g., text, images, audio) to improve understanding and predictions, enabling models to make use of complementary information from different data types.
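
A minimal late-fusion sketch, one of several common fusion strategies (assuming PyTorch; the encoders are any modules producing fixed-size embeddings):

    import torch
    import torch.nn as nn

    class LateFusionClassifier(nn.Module):
        # Encodes each modality separately, then classifies from concatenated embeddings,
        # letting the head exploit complementary cues from both modalities.
        def __init__(self, image_encoder, text_encoder, img_dim, txt_dim, num_classes):
            super().__init__()
            self.image_encoder = image_encoder     # images    -> (batch, img_dim)
            self.text_encoder = text_encoder       # token ids -> (batch, txt_dim)
            self.head = nn.Linear(img_dim + txt_dim, num_classes)

        def forward(self, image, text):
            z = torch.cat([self.image_encoder(image), self.text_encoder(text)], dim=-1)
            return self.head(z)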

Self-supervised Learning

Self-supervised learning allows models to learn from unlabeled data by deriving supervisory signals (pseudo-labels) from the data itself, enabling efficient learning without requiring extensive human-annotated datasets.
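
A minimal sketch of the pseudo-label idea, the classic rotation-prediction pretext task: rotate each unlabeled image and train an encoder to predict which rotation was applied (assuming PyTorch):

    import torch

    def rotation_batch(images):
        # images: (B, C, H, W) unlabeled batch. Pseudo-label k means a k*90-degree rotation.
        rotated, labels = [], []
        for k in range(4):
            rotated.append(torch.rot90(images, k, dims=(2, 3)))
            labels.append(torch.full((images.size(0),), k, dtype=torch.long))
        return torch.cat(rotated), torch.cat(labels)

    # x, y = rotation_batch(unlabeled_images)
    # loss = torch.nn.functional.cross_entropy(model_with_4way_head(x), y)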

Object-centric Learning

Object-centric learning emphasizes learning representations based on objects in the environment, enabling models to focus on key objects and their interactions, which is useful for tasks like scene understanding and manipulation.
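
A highly simplified Slot-Attention-style sketch: a fixed set of slot vectors competes (via a softmax over slots) to explain the visual features, so each slot tends to bind to one object (assuming PyTorch; the deterministic slot initialization is a simplification):

    import torch
    import torch.nn as nn

    class SlotAttention(nn.Module):
        def __init__(self, num_slots=5, dim=64, iters=3):
            super().__init__()
            self.iters, self.scale = iters, dim ** -0.5
            self.slots_init = nn.Parameter(torch.randn(1, num_slots, dim))
            self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
            self.gru = nn.GRUCell(dim, dim)
            self.norm_in, self.norm_slots = nn.LayerNorm(dim), nn.LayerNorm(dim)

        def forward(self, x):                      # x: (B, N, dim) visual features
            B, D = x.size(0), x.size(-1)
            xn = self.norm_in(x)
            k, v = self.to_k(xn), self.to_v(xn)
            slots = self.slots_init.expand(B, -1, -1)
            for _ in range(self.iters):
                q = self.to_q(self.norm_slots(slots))
                attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=1)  # slots compete per location
                attn = attn / attn.sum(dim=-1, keepdim=True)                # weighted mean over inputs
                updates = attn @ v                                          # (B, num_slots, dim)
                slots = self.gru(updates.reshape(-1, D), slots.reshape(-1, D)).view(B, -1, D)
            return slots                           # one representation per candidate object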

Others

Techniques related to representation learning, such as contrastive learning, which learns representations by pulling similar data points together and pushing dissimilar ones apart in the embedding space.
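
A minimal sketch of a SimCLR-style contrastive (NT-Xent) loss over two augmented views of the same batch (assuming PyTorch):

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        # z1, z2: (B, D) embeddings of two augmentations of the same B examples.
        B = z1.size(0)
        z = F.normalize(torch.cat([z1, z2]), dim=-1)                 # (2B, D)
        sim = z @ z.T / temperature                                  # pairwise similarities
        mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float("-inf"))                   # drop self-similarity
        targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)]).to(z.device)
        return F.cross_entropy(sim, targets)                         # positive = the other view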

Time-series Representation

Time-series representation learning aims to capture meaningful patterns and temporal dependencies in sequential data, allowing for better predictions and analyses in applications like forecasting and anomaly detection.
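
A minimal sketch of one common recipe: a small causal 1-D convolutional encoder whose per-step embeddings are trained to predict the next value, so the representation must capture temporal structure (assuming PyTorch; shapes are illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalEncoder(nn.Module):
        # Maps a univariate window (B, 1, T) to per-step embeddings (B, dim, T).
        def __init__(self, dim=64, kernel=3):
            super().__init__()
            self.pad = kernel - 1                  # left-pad so the conv never sees the future
            self.conv = nn.Conv1d(1, dim, kernel)
            self.head = nn.Conv1d(dim, 1, 1)       # next-step prediction head

        def forward(self, x):
            h = torch.relu(self.conv(F.pad(x, (self.pad, 0))))
            return h, self.head(h)

    # Self-supervised objective: the embedding at time t must predict the value at t + 1.
    # h, pred = model(x); loss = ((pred[..., :-1] - x[..., 1:]) ** 2).mean()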

Real-world Applications

Applying the research methods and models to real-world tasks and challenges, translating theoretical insights into practical solutions.

Low-resource Scenarios

Low-resource applications involve adapting machine learning models to work effectively in scenarios with limited data or computational resources, often requiring creative techniques like transfer learning or data augmentation.

Open-set Real-world Scenarios

In open-set scenarios, models must handle unknown or unseen classes during inference, necessitating robust handling of unfamiliar inputs.
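
A minimal sketch of one simple baseline, maximum-softmax-probability thresholding, which rejects low-confidence inputs as unknown (assuming PyTorch; the threshold is illustrative and usually tuned on held-out data):

    import torch

    def predict_open_set(model, x, threshold=0.5):
        # Returns predicted class indices, with -1 marking inputs rejected as unknown.
        with torch.no_grad():
            probs = model(x).softmax(dim=-1)
        conf, pred = probs.max(dim=-1)
        pred[conf < threshold] = -1                # low confidence -> treat as an unseen class
        return pred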

Imbalanced and Long-tailed Distributions

These scenarios involve datasets where certain classes are heavily underrepresented, requiring specialized techniques to avoid biased predictions and to improve generalization on rare classes.
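
A minimal sketch of one standard remedy, cost-sensitive training with inverse-frequency class weights (assuming PyTorch; the class counts are illustrative):

    import torch
    import torch.nn as nn

    class_counts = torch.tensor([5000.0, 500.0, 50.0])     # head, mid, and tail classes
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    loss_fn = nn.CrossEntropyLoss(weight=weights)          # misclassifying rare classes costs more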

Distribution Shift

Distribution shift refers to scenarios where the data distribution at deployment differs from the training distribution, requiring models to adapt in order to maintain performance.

LLM without Hallucination

This direction focuses on reducing hallucination in large language models, ensuring their outputs are factually accurate and grounded in reliable sources.

Others

Additional application challenges such as personalization, robustness, and fairness, which are critical for deploying models effectively in real-world settings.

43-506, Graduate School of Data Science, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, Republic of Korea
