Yeongtak Oh

I am a third-year Ph.D. student at the DSAIL Lab at Seoul National University, Seoul, South Korea, working on computer vision and machine learning.

I received a B.S. in mechanical engineering from Seoul National University, Seoul, South Korea, in 2018, and an M.S. from the same department in 2020. In 2021, I worked as a Military Science and Technology Researcher at the Korea Military Academy AI R&D Center.

My current research topics include generative models, continual learning, and vision-language models.

Email  /  CV  /  Google Scholar  /  Github


News

[2024.11.18] One paper accepted to IJCV!
[2024.07.19] One paper accepted to BMVC 2024!
[2024.07.02] One paper accepted to ECCV 2024!

Conferences

ControlDreamer: Stylized 3D Generation with Multi-View ControlNet
Yeongtak Oh*, Jooyoung Choi*, Yongsung Kim, Minjun Park, Chaehun Shin, and Sungroh Yoon
* Equal Contribution
British Machine Vision Conference (BMVC), 2024
project page / arXiv

ControlDreamer enables high-quality 3D generation with creative geometry and styles via multi-view ControlNet.

Efficient Diffusion-Driven Corruption Editor for Test-Time Adaptation
Yeongtak Oh*, Jonghyun Lee*, Jooyoung Choi, Uiwon Hwang, Dahuin Jung, and Sungroh Yoon
* Equal Contribution
European Conference on Computer Vision (ECCV), 2024
project page / arXiv

We propose Decorruptor to enhance the robustness of the diffusion model and accelerate the diffusion-based image-level updates.

Journals

On Mitigating Stability-Plasticity Dilemma in CLIP-guided Image Morphing via Geodesic Distillation Loss
Yeongtak Oh, Saehyung Lee, Uiwon Hwang*, and Sungroh Yoon*
* Equal corresponding authors
International Journal of Computer Vision (IJCV), IF: 11.6, 2024
project page / arXiv

We enhance a range of CLIP-guided image morphing baselines with our proposed inter- and intra-modality regularization losses, effectively addressing the stability-plasticity (SP) dilemma.

A deep transferable motion-adaptive fault detection method for industrial robots using a residual–convolutional neural network
Yeongtak Oh, Yunhan Kim, Kyumin Na, and Byeng D. Youn
ISA Transactions, IF: 5.9, 2022

We present a deep learning-based motion-adaptive fault detection method for industrial robots using torque ripples.

Preprints

Style-Friendly SNR Sampler for Style-Driven Generation
Jooyoung Choi*, Chaehun Shin*, Yeongtak Oh, Heeseung Kim, and Sungroh Yoon
* Equal Contribution
arXiv, 2024
project page / arXiv

We propose the Style-friendly SNR sampler, which aggressively shifts the signal-to-noise ratio (SNR) distribution toward higher noise levels during fine-tuning to focus on noise levels where stylistic features emerge.

Talks

[2024.11.27] Image Inversion of Diffusion Models
[2023.08.25] Recent Trends of Generative Models in 3D Vision

The template for this page is borrowed from Jon Barron's website.