Pengyu Zhang's Homepage

Pengyu Zhang (张鹏宇 in Chinese)

I am a postdoctoral fellow working with Prof. Li Cheng at the Vision and Learning Lab, University of Alberta, Canada. Previously, I was a research fellow at the Material Robotics Lab (MRL) at the National University of Singapore (NUS), supervised by Prof. Shuzhi Sam Ge. I received my PhD from the IIAU Lab, Dalian University of Technology (DUT), supervised by Prof. Huchuan Lu. My main research interests are Video Perception and Bio-Inspired Vision, particularly Video Object Tracking and Segmentation, Sign Language Recognition and Translation, and Event-based Vision.
[Google Scholar]/[Email]/[GitHub]/[LinkedIn]
🌟Highlight: Please feel free to contact me if you are interested in the above topics and would like to collaborate!
🎉 News 🎉
  • [Mar 2026] The Vision and Learning Lab@UoA led by Prof. Li Cheng is recruiting graduate students and postdoctoral researchers.
  • [Dec 2025] One paper on video object segmentation is accepted to IEEE TPAMI.
  • [Sep 2025] Call for Participation: We are organizing the 1st Workshop and Competition on Multi-Modal UAV Vision: Robust Tracking and Segmentation in the Wild @ RISEx 2025. Welcome to participate.
  • [Jun 2025] I was awarded the Liaoning Provincial Outstanding Doctoral Dissertation Award 2024 (2024年辽宁省优秀博士学位论文). Keep working!
  • [May 2025] Call for Papers: Our workshop on "Reliable and Interactive World Model" has been accepted to ICCV 2025. The website is available at RIWM. Submissions are welcome.
  • [Mar. 2025] I joined the Vision and Learning Lab@University of Alberta as a postdoctoral fellow, supervised by Prof. Li Cheng.
  • [Feb. 2025] One paper on video object segmentation is accepted to CVPR 2025.
  • [Dec. 2024] One paper on adversarial attack is accepted to ICASSP 2025.
  • [Sep. 2024] One paper on event-assisted blurry image unfolding is accepted to IEEE TNNLS.
  • [Aug. 2024] One paper on event-assisted blurry image unfolding is accepted to IEEE TIP.
  • [Jul. 2024] One paper on event-based sign language recognition and translation is accepted to ECCV 2024.
  • [Jun. 2024] One paper on video object tracking is accepted to IEEE TCSVT.
  • [Sep. 2023] One paper on video object tracking is accepted to Machine Intelligence Research (MIR).
  • [Jul. 2023] One paper on video object tracking is accepted to Journal of Image and Graphics (中国图象图形学报).
  • [Jun. 2023] One survey paper on RGB-T object tracking is accepted to Computational Visual Media (CVM).
  • [Aug. 2022] One paper on video object tracking is accepted to IEEE TPAMI.
  • [Jun. 2022] One paper on RGB-T object tracking is accepted to CVPR 2022.
  • [Aug. 2021] One paper on RGB-T object tracking is accepted to IJCV.
  • [Aug. 2021] One paper on Thermal object tracking is accepted to the 2nd Anti-UAV Workshop & Challenge on ICCV 2021.
  • [Feb. 2021] One paper on RGB-T object tracking is accepted to IEEE TIP.
  • [Aug. 2020] 🏆 Our JMMAC method won 1st place on the public set of the VOT 2020-RGBT Challenge.
  • [Aug. 2019] 🏆 Our JMMAC method won 1st place on the public set of the VOT 2019-RGBT Challenge.
Location
Contact
zpy.iiau@gmail.com (Primary) pz9@ualberta.ca (Work) pyzhang@mail.dlut.edu.cn
Copyright © 2024. All rights reserved. Powered by Domain.com.
