Hello, I am
Viet Dung Nguyen

PhD Candidate @ RIT

Viet Dung Nguyen is a Ph.D. Candidate at Rochester Institute of Technology. He is working with Dr. Alexander G. Ororbia in the Neural Adaptive Computing Lab (The NAC Lab), Dr. Reynold Bailey in the Computer Graphics & Applied Perception Lab, and Dr. Gabriel Diaz in the Perception For Movement (PerForM) Lab.

Research Interests

(Generative World Model + Active Inference) @ Autonomous Systems

My research interests encompass multimodal deep learning, computer vision, eye tracking, reinforcement learning, embodied robotics, and natural language processing. By integrating these interdisciplinary fields, my dissertation aims to develop an embodied neuro-robotic agent capable of solving real-world tasks and interacting with humans in daily life.

Active Inference

Active inference is a process theory and mathematical framework for modeling perception and action; it is closely related to model-based reinforcement learning and relies on a generative world model. I mainly work on implementing and improving world model architectures to make active inference agents more robust, and on integrating active inference into real-world autonomous systems such as robot arms, drones, and self-driving cars.

Keywords: Generative Model, World Model, Reinforcement Learning, Model-Based, Autonomous Systems, Robotics, Multimodal
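As a rough illustration of the action-selection principle behind active inference, here is a minimal sketch in which an agent scores candidate actions by their expected free energy, combining risk (divergence of predicted outcomes from a prior preference) and ambiguity (expected observation uncertainty). The matrices and preference vector below are made-up toy values, not taken from any of my papers:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between discrete distributions p and q."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def expected_free_energy(qs, B, A, C):
    """Expected free energy of each action: risk + ambiguity.

    qs : current belief over hidden states, shape (S,)
    B  : transition model per action, shape (num_actions, S, S)
    A  : likelihood P(o | s), shape (O, S)
    C  : prior preference over observations, shape (O,)
    """
    H_A = -np.sum(A * np.log(A + 1e-12), axis=0)   # per-state observation entropy
    efe = []
    for Ba in B:
        qs_next = Ba @ qs                 # predicted state belief after the action
        qo_next = A @ qs_next             # predicted observation distribution
        risk = kl(qo_next, C)             # divergence from preferred outcomes
        ambiguity = float(qs_next @ H_A)  # expected observation uncertainty
        efe.append(risk + ambiguity)
    return np.array(efe)

# Toy setup: 3 states, 2 actions; the agent prefers observation 2.
A = np.eye(3)                                  # fully observable for simplicity
B = np.array([np.eye(3),                       # action 0: stay in place
              np.roll(np.eye(3), 1, axis=0)])  # action 1: shift state s -> s+1
C = np.array([0.05, 0.05, 0.90])               # prior preference over observations
qs = np.array([0.0, 1.0, 0.0])                 # belief: currently in state 1

G = expected_free_energy(qs, B, A, C)
best_action = int(np.argmin(G))                # action 1 moves toward state 2
```

The agent picks the action with the lowest expected free energy; in this toy world that is the action driving it toward the preferred observation.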

Computer Vision

Computer vision encompasses many sub-fields, such as image segmentation, recognition, and generation. I mainly focus on the domain adaptation problem: improving models' real-world inference accuracy when real-world training samples are replaced with synthetic ones.

Keywords: Sim2Real, Segmentation, GAN, Diffusion Models

Natural Language Processing

I mainly focus on the construction of robotic agents and autonomous systems that act based on instructions or guidance given in natural language. Besides that, I work on improving hate-speech detection models through the use of large language models.

Keywords: LLM, Hate-Speech Detection, Multimodal, Transformer

News

[Dec 1, 2024] Research Visit at VinUniversity

I'm happy to share that I will be doing a research visit at VinUniversity, Vietnam.

[Nov 13, 2024] Talk: Dynamic Prior Preference Learning for Scalable, Robust Deep Active Inference

I'm happy to share my talk at the 4th Active Inference Symposium on Dynamic Prior Preference Learning for Scalable, Robust Deep Active Inference. In this talk, we analyze the difficulty of scaling the instrumental signal inherent to AIF and discuss a potential resolution by introducing the "contrastive recurrent state prior preference" (CRSPP) model learning framework. This methodology frames AIF agents in terms of progressively constructing and adapting a prior preference at each time step, facilitating the dynamic emission of a useful, dense instrumental signal.

[Oct 1, 2024] Research Visit at University College Dublin

I'm happy to share that I will be doing a research visit at University College Dublin, Ireland.

[Aug 21, 2024] New Research Up on ArXiv

I'm happy to share that our work is now available on arXiv! This research presents a novel active inference (AIF) framework designed to solve sparse-reward, image-based reinforcement learning (RL) tasks. We demonstrate that our agent outperforms state-of-the-art RL and AIF baselines.

Authors: Viet Dung Nguyen, Zhizhuo Yang, Christopher L. Buckley, Alexander Ororbia

[Jun 9, 2024] Machine Learning Intern

Congratulations to Viet Dung Nguyen on starting a new intern position at Petanux GmbH.

[Jun 7, 2024] Best Paper Award

Congratulations to Viet Dung Nguyen and his team on their Best Paper Award at ETRA 2024.

[Jan 17, 2024] Paper Accepted to ETRA 24

Congratulations to Viet Dung Nguyen and his team on their paper accepted to ETRA 2024. Project funded by the US National Science Foundation and Meta Reality Labs.

Authors: Viet Dung Nguyen, Reynold Bailey, Gabriel J. Diaz, Chengyi Ma, Alexander Fix, Alexander Ororbia

[Oct 10, 2023] Seed Grant Accepted

Congratulations to Viet Dung Nguyen on securing a seed grant for multimodal research. Title: 'Deep Multimodal Active Inference for Robotic Arm Control.' Funding: $1,080

[Apr 15, 2023] Physics Olympiad

Congratulations to Viet Dung Nguyen and his team on taking second place at the RIT College Physics Olympiad 2023.

[Mar 26, 2023] Paper Accepted to AAAI 23

Congratulations to Viet Dung Nguyen on a paper accepted to AAAI 23.

Authors: Viet Dung Nguyen, Quan H. Nguyen, Richard G. Freedman

[Jan 26, 2021] Paper Accepted to AAAI 21

Congratulations to Viet Dung Nguyen on a paper accepted to AAAI 21.

Authors: Viet Dung Nguyen, Dung Doan, Todd W. Neller

Contact Me

Let's collaborate!