CHEN Chaofeng (陈超锋)

Research Fellow @ S-Lab, NTU

Ph.D @ HKU; B.Eng. @ HUST
chaofenghust [at] gmail.com

Github · Google Scholar · CV · LinkedIn

I am currently a postdoctoral research fellow at S-Lab, Nanyang Technological University, working with Prof. Weisi Lin and Prof. Kelly. I received my Ph.D. degree from the Department of Computer Science at the University of Hong Kong in January 2021, advised by Dr. Kenneth K.Y. Wong. Before that, I received my B.Eng. from Huazhong University of Science and Technology.

My research interests center on Computer Vision and Image Processing. My current topics include low-level vision (image quality assessment, restoration, and enhancement) and multi-modality generative models.

News


  • ⭐ Use pip install pyiqa to try our PyTorch toolbox for Image Quality Assessment (a minimal usage sketch follows this list).
  • ⭐ Find our comprehensive survey on Image Quality Assessment here.
  • 2024-07: Four papers are accepted by ACM MM 2024, including three Oral presentations (3.97%); details coming soon!
  • 2024-07: Five papers are accepted by ECCV 2024; details coming soon!
  • 2024-05: Q-Align is accepted by ICML2024!
  • 2024-02: Two co-authored papers about IQA are accepted by CVPR 2024!
  • 2024-01: Q-Bench is accepted as a spotlight paper (4.96%) by ICLR 2024!
  • 2024-01: TOPIQ is accepted by IEEE Transactions on Image Processing (TIP).
  • 2023-12: One paper about image super-resolution is accepted by AAAI2024.
  • 2023-10: We release Q-Instruct, a multi-modality dataset for low-level visual instruction tuning with large visual language models.
  • 2023-09: We release Q-Bench, a systematic benchmark for multi-modality LLMs (MLLMs) on low-level vision and visual quality assessment.
  • 2023-09: The extension of FAST-VQA (FasterVQA) is accepted by TPAMI.
  • 2023-07: One paper about video quality assessment is accepted by ACM MM 2023.
  • 2023-07: One paper about video quality assessment is accepted by ICCV 2023.
  • 2023-03: One paper about video quality assessment is accepted by ICME 2023.
  • 2023-02: One paper about video quality assessment is accepted by TCSVT 2023.
  • 2022-12: One paper about video prediction is accepted by AAAI 2023.
  • 2022-11: Our research team, the NTU Visual Quality Assessment Group, is created, aiming to build efficient and explainable visual quality assessment approaches.
  • 2022-09: One paper is accepted by NeurIPS 2022.
  • 2022-07: Three papers have been accepted by ECCV2022.
  • 2022-06: Two papers, including QuanTexSR (renamed FeMaSR), are accepted by ACM MM 2022 as Oral presentations (5.9%).
  • 2022-06: One paper, FFRNet, about masked face recognition is accepted by ICIP 2022.
  • 2022-03: We release our work on blind image super-resolution, QuanTexSR, together with the code on GitHub.
  • 2022-02: We release a PyTorch toolbox for IQA as well as a comprehensive survey.
  • 2021-07: One paper about HDR video reconstruction is accepted by ICCV 2021.
  • 2021-03: Our paper PSFR-GAN about face SR has been accepted by CVPR2021.
  • 2020-11: Our paper SPARNet about face SR has been accepted by TIP2020.
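
For quick reference, below is a minimal sketch of how the pyiqa toolbox can be used after pip install pyiqa. The metric names ('musiq', 'lpips') and the image paths are illustrative assumptions only; see the toolbox repository for the full list of supported metrics.

    # Minimal pyiqa usage sketch (assumes pyiqa and torch are installed;
    # metric names and image paths are illustrative, not from this page).
    import torch
    import pyiqa

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # No-reference metric: scores a single image by its perceptual quality.
    nr_metric = pyiqa.create_metric("musiq", device=device)
    nr_score = nr_metric("path/to/test_image.png")  # hypothetical path

    # Full-reference metric: compares a distorted image against its reference.
    fr_metric = pyiqa.create_metric("lpips", device=device)
    fr_score = fr_metric("path/to/distorted.png", "path/to/reference.png")

    print(float(nr_score), float(fr_score))

Note that MUSIQ is a quality score (higher is better), while LPIPS is a perceptual distance (lower is better).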

Experience


Sep 2021 - Present: Postdoctoral research fellow at S-Lab, NTU, working with Prof. Weisi Lin and Prof. Kelly
Mar 2021 - Aug 2021: Research Assistant at GAP Lab, CUHKSZ, worked with Dr. Xiaoguang Han
Nov 2019 - Mar 2021: Research Intern at Alibaba DAMO Academy, worked with Prof. Lei Zhang and Dr. Xiaoming Li
May 2019 - Oct 2019: Research Visitor at VLLab, UC Merced, worked with Prof. Ming-Hsuan Yang
Jun 2018 - Mar 2019: Research Intern at Tencent AI Lab, worked with Prof. Zhifeng Li and Dr. Dihong Gong

Publications


Conference Papers

Journal Papers

Professional Activities

Awards

Teaching