I am Yunfan Lu, a final-year Ph.D. student at HKUST (GZ), supervised by Prof. Hui Xiong and collaborating closely with Prof. Yusheng Wang at the University of Tokyo. My research centers on event-based vision and computational imaging, with a focus on challenges such as HDR imaging, low-light enhancement, deblurring, frame interpolation, rolling shutter correction, and super-resolution. My long-term goal is to develop imaging systems for robots that emulate the agility of biological vision through an integrated approach spanning optical structures, sensor technology, and advanced imaging algorithms.

I earned my master’s degree from the University of Chinese Academy of Sciences under the supervision of Prof. Jun Xiao, where I received the China National Scholarship (top 0.8%), and I was a visiting student at INRIA, Paris. During my bachelor’s at NUST, I was awarded a Silver Medal at the ACM-ICPC Asia Regional Contest. In industry, I worked on computational photography for mobile devices at DJI under Dr. Menglong Zhu and at HONOR with Dr. Ye Gao. My passion for imaging technology has driven my work across both academia and industry.

As I enter my final Ph.D. year in 2025, I am seeking visiting student opportunities and plan to pursue a postdoctoral position in 2026.

🔥 News

  • 2024.06:  📰 I was interviewed by China Daily about computational imaging with event cameras. Read the article
  • 2024:  🎉🎉 Two papers published at ECCV 2024!
  • 2024:  🎉🎉 Two papers published at CVPR 2024, including one Oral 🎤!
  • 2023:  🎉🎉 One paper published at CVPR 2023!
  • 2022:  🎉🎉 One paper published at ECCV 2022!

🌟 Public Service

  • I have served as a reviewer for TPAMI, IJCV, TMM, CVPR, ICCV, ECCV, NeurIPS, ICLR, and ICASSP.

📝 Publications

ECCV 2024

UniINR: Unifying Spatial-Temporal INR for RS Video Correction, Deblur, and Interpolation with an Event Camera

Yunfan Lu, Guoqiang Liang, Yusheng Wang, Lin Wang, Hui Xiong

The first approach to recover sharp global shutter frames at arbitrary frame rates from rolling shutter blurred frames and paired events.

Key Words: Rolling Shutter Correction, Deblurring, Frame Interpolation

📄 Paper, 💻 Code, 🎥 Video

ECCV 2024

Revisit Event Generation Model: Self-Supervised Learning of Event-to-Video Reconstruction with Implicit Neural Representations

Zipeng Wang, Yunfan Lu, Lin Wang

The first self-supervised approach to reconstruct high-temporal-resolution intensity frames from event data without a training dataset.

Key Words: Implicit Neural Representations, Model-based Event-to-Video Reconstruction

📄 Paper, 💻 Code

CVPR 2024 (Oral)

Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach Oral 🎤

Guoqiang Liang, Kanghao Chen, Hangyu Li, Yunfan Lu, Lin Wang

EvLight is a novel event-guided low-light image enhancement approach leveraging a new large-scale real-world event-image dataset, achieving superior performance through multi-scale fusion and SNR-guided feature selection.

Key Words: Low-light Enhancement, Real-world dataset

📄 Paper, 💻 Code, 💾 Dataset (Baidu Pan, OneDrive)

CVPR W 2024

Event Camera Demosaicing via Swin Transformer and Pixel-focus Loss

Yunfan Lu, Yijie Xu, Wenzong Ma, Weiyu Guo, Hui Xiong

A Swin Transformer-based model is proposed to tackle the MIPI-2024 demosaicing challenge (Link) in the RAW domain of event cameras.

Key Words: Event Camera Demosaicing, RAW domain

📄 Paper, 📄 MIPI-2024 Workshop Report

CVPR 2023

Learning Spatial-Temporal Implicit Neural Representations for Event-Guided Video Super-Resolution

Yunfan Lu, Zipeng Wang, Minjie Liu, Hongjian Wang, Lin Wang

This paper presents a novel framework for video super-resolution (VSR) at arbitrary scales using event cameras, leveraging implicit neural representations and spatial-temporal interpolation to achieve superior performance over prior methods.

Key Words: Video Super-Resolution, Implicit Neural Representations

📄 Paper, 💻 Code, 💾 Dataset (password: cvpr), 🎥 Video

ECCV 2022

Efficient Video Deblurring Guided by Motion Magnitude

Yusheng Wang, Yunfan Lu, Ye Gao, Lin Wang, Zhihang Zhong, Yinqiang Zheng, Atsushi Yamashita

A deblurring framework that uses a motion magnitude prior, obtained via pixel-wise blur-level estimation, to guide an efficient recurrent video deblurring network.

Key Words: Video Deblurring, Motion Magnitude Prior

📄 Paper, 💻 Code

Sensors 2021

Three-Dimensional Model of the Moon with Semantic Information of Craters Based on Chang’e Data

Yunfan Lu, Yifan Hu, Jun Xiao, Lupeng Liu, Long Zhang, Ying Wang

This paper proposes a framework to create a 3D model of the Moon with crater information using Chang’E data.

Key Words: Chang'E Dataset, 3D Model of the Moon, Crater Information

📄 Paper

📑 Papers Under Review

under review

HR-INR: Continuous Space-Time Video Super-Resolution via Event Camera

Yunfan Lu, Zipeng Wang, Yusheng Wang, Hui Xiong

HR-INR is a novel continuous space-time video super-resolution (C-STVSR) framework that uses event cameras and implicit neural representations to capture holistic dependencies and regional nonlinear motions, achieving superior performance across multiple datasets.

Key Words: Continuous Space-time Video Super-resolution, Event Temporal Pyramid Representation

📄 Paper, 💻 Code

under review

Low-Light Video Enhancement with an Event Camera: A Large-Scale Real-World Dataset, Novel Method, and More

Kanghao Chen, Guoqiang Liang, Hangyu Li, Yunfan Lu, Lin Wang

EvLight++ is an event-guided low-light video enhancement method using a large-scale event-video dataset, significantly improving enhancement quality and performance in semantic segmentation and depth estimation.

Key Words: Low-light Video Enhancement, Large-scale Dataset

📄 Paper, 💻 Code

under review

Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames

Yunfan Lu, Guoqiang Liang, Lin Wang

A self-supervised framework for recovering arbitrary frame rate global shutter frames from rolling shutter frames using event camera guidance, leveraging displacement field estimation to correct distortion and achieve video frame interpolation.

Key Words: Rolling Shutter Correction, Event-guided Video Frame Interpolation

📄 Paper

under review

Priors in Deep Image Restoration and Enhancement: A Survey

Yunfan Lu, Yiqi Lin, Hao Wu, Yunhao Luo, Xu Zheng, Hui Xiong, Lin Wang

A comprehensive survey of the role of priors in deep learning-based image restoration and enhancement, offering a theoretical analysis, a taxonomy, a discussion of applications, and future research directions.

Key Words: Survey, Image Restoration and Enhancement, Priors Analysis

📄 Paper, 📝 Page

under review

Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks

Xu Zheng, Yexin Liu, Yunfan Lu, Tongyan Hua, Tianbo Pan, Weiming Zhang, Dacheng Tao, Lin Wang

This paper presents a comprehensive survey of deep learning (DL) techniques for event-based vision, categorizing methods, benchmarking experiments, and discussing future challenges and opportunities in this emerging field.

Key Words: Survey, Event Cameras, Deep Learning, Vision and Robotics

📄 Paper, 📝 Page

under review

Vetaverse: Technologies, Applications, and Visions toward the Intersection of Metaverse, Vehicles, and Transportation Systems

Pengyuan Zhou, Jinjing Zhu, Yiting Wang, Yunfan Lu, Zixiang Wei, Haolin Shi, Yuchen Ding, Yu Gao, Qinglong Huang, Yan Shi, Ahmad Alhilal, Lik-Hang Lee, Tristan Braud, Pan Hui

Vetaverse is a concept that blends the vehicular industry and the Metaverse into a continuum, covering large-scale transportation system management (TS-Metaverse) and personalized immersive infotainment (IV-Metaverse); the paper discusses enabling technologies, challenges, and future research directions.

Key Words: Vehicular-Metaverse, Intelligent Transportation Systems, Immersive infotainment

📄 Paper

🎖 Honors and Awards

  • 2019 Summer School Scholarship from INRIA and Institut Français.
  • 2018 China National Scholarship (top 0.8%).
  • 2017 Silver Medal, The ACM-ICPC Asia Regional Contest 2017, Beijing Site (rank: 34/168).
  • 2017 Bronze Medal, The CCF-CCPC China Contest 2017, Qinhuangdao Site (rank: 51/276).

💻 Work Experience

  • 2023.09 - 2024.12, HKUST (GZ), Teaching Assistant for the courses Data Mining (taught by Prof. Hui Xiong) and Autonomous AI (taught by Prof. Junwei Liang).
  • 2021.04 - 2022.01, HONOR, Beijing, China: Senior Algorithm Engineer, Sensor and Algorithm Solutions Department, led by Dr. Ye Gao.
  • 2020.07 - 2021.04, DJI, Shenzhen, China: Algorithm Engineer, Machine Learning Department, led by Dr. Menglong Zhu.
  • 2019.05 - 2020.02, Tencent, Beijing, China: Research Intern, Multimedia Laboratory, led by Dr. Shan Liu.