We are very happy to announce the following keynote speakers for the
International Symposium on Computer Vision for Public Security (CVPS’22):
Prof. Weisi Lin
FIEEE, FIET, CEng, Hon. FSIET
Associate Chair (Research)
School of Computer Science and Engineering
Nanyang Technological University
Lin Weisi is an active researcher in intelligent image
processing, perception-based signal modelling and assessment,
video compression, and multimedia communication. He was formerly
the Lab Head of Visual Processing at the Institute for Infocomm
Research (I2R), Singapore. He is a Professor in the School of
Computer Science and Engineering, Nanyang Technological
University, where he also serves as the Associate Chair (Research).
He is a Fellow of the IEEE and the IET, and was named a Highly
Cited Researcher in 2019 and 2020 by Clarivate Analytics. He was
elected a Distinguished Lecturer of both the IEEE Circuits and
Systems Society (2016-17) and the Asia-Pacific Signal and
Information Processing Association (2012-13), and has given
keynote/invited/tutorial/panel talks at 30+ international
conferences. He has been an Associate Editor for IEEE Trans.
Image Process., IEEE Trans. Circuits Syst. Video Technol., IEEE
Trans. Multimedia, IEEE Signal Process. Lett., Quality and User
Experience, and J. Visual Commun. Image Represent. He also
chaired the IEEE MMTC QoE Interest Group (2012-2014), and has
been a Technical Program Chair for IEEE ICME 2013, QoMEX 2014,
PV 2015, PCM 2012, and IEEE VCIP 2017. He believes that good
theory is practical, and has delivered 10+ major systems and
modules for industrial deployment based on the technology he has
developed.
Topic of Keynote: Deep-learnt Features to Facilitate Image
Compression and Computer Vision with an Integrated Framework
Keynote Abstract:
Image (or video) compression and computer vision have long been
two largely separate domains; as a result, a computer vision
task typically can only start after a whole image is decoded.
This talk explores extracting and coding intermediate
deep-learnt visual features (rather than the whole image/video),
which facilitates the integration of signal compression and
computer vision, accurate feature extraction, privacy
preservation, flexible load distribution between edge and cloud,
and green visual computing. It is hoped that the presentation
will trigger more R&D in the related fields, given the
fundamental paradigm shift in the proposed framework.
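As an illustration of the kind of pipeline the abstract describes, the sketch below (a hypothetical stand-in, not Prof. Lin's actual framework) extracts toy "intermediate features" on the edge device, quantizes and transmits those features instead of the whole image, and dequantizes them on the cloud side for a downstream vision task:

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_extract(image, weights):
    """Edge side: compute intermediate features.

    A single random linear layer with ReLU stands in for the early
    layers of a real deep network (purely illustrative weights).
    """
    return np.maximum(image.reshape(-1) @ weights, 0.0)

def code_features(features, step=0.05):
    """Code the features rather than the whole image: uniform
    quantization stands in for a real feature codec here."""
    return np.round(features / step).astype(np.int32)

def cloud_decode(codes, step=0.05):
    """Cloud side: dequantize and feed the features directly to the
    rest of the vision model, without reconstructing the image."""
    return codes.astype(np.float64) * step

image = rng.random((8, 8))                # toy 8x8 "image"
weights = rng.standard_normal((64, 16))   # toy early-layer weights

feats = edge_extract(image, weights)
codes = code_features(feats)              # only these integers are sent
recovered = cloud_decode(codes)

# Uniform quantization bounds each feature's error by half a step.
assert np.max(np.abs(recovered - feats)) <= 0.025
```

The privacy and load-balancing benefits mentioned in the abstract follow from this split: the raw image never leaves the edge, and the depth of the "edge" sub-network can be chosen to balance edge and cloud compute.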
Prof. Nam Ling
IEEE Fellow, IET Fellow
Wilmot J. Nicholson Family Chair Professor
Chair, Department of Computer Science and Engineering
Santa Clara University
Nam Ling received the B.Eng. degree (Electrical Engineering) from the National University of Singapore and the M.S. and Ph.D. degrees (Computer Engineering) from the University of Louisiana, Lafayette, U.S.A. He is currently the Wilmot J. Nicholson Family Chair Professor (Endowed Chair) of Santa Clara University (U.S.A.) (since 2020) and the Chair of its Department of Computer Science & Engineering (since 2010). From 2010 to 2020, he was the Sanfilippo Family Chair Professor (Endowed Chair) of Santa Clara University. From 2002 to 2010, he was an Associate Dean for its School of Engineering (Graduate Studies, Research, and Faculty Development). He is/was also a Distinguished Professor for Xi’an University of Posts & Telecommunications, a Cuiying Chair Professor for Lanzhou University, a Guest Professor for Tianjin University, a Chair Professor and Minjiang Scholar for Fuzhou University, a Guest Professor for Shanghai Jiao Tong University, a Guest Professor for Zhongyuan University of Technology (China), and a Consulting Professor for the National University of Singapore. He has more than 230 publications (including books) in video/image coding and systolic arrays. He also has seven adopted standards contributions and has been granted more than 20 U.S./European/PCT patents. He is an IEEE Fellow for his contributions to video coding algorithms and architectures. He is also an IET Fellow. He was named IEEE Distinguished Lecturer twice and was also an APSIPA Distinguished Lecturer. He received the IEEE ICCE Best Paper Award (First Place) and the IEEE Umedia Best/Excellent Paper Awards (three times). He received six awards from Santa Clara University, four at the University level (Outstanding Achievement, Recent Achievement in Scholarship, President’s Recognition, and Sustained Excellence in Scholarship) and two at the School/College level (Researcher of the Year and Teaching Excellence).
He has served as Keynote Speaker for IEEE APCCAS, VCVP (twice), JCPC, IEEE ICAST, IEEE ICIEA, IET FC & U-Media, IEEE U-Media, Workshop at XUPT (twice), and ICCIT, as well as a Distinguished Speaker for IEEE ICIEA. He is/was General Chair/Co-Chair for IEEE Hot Chips, VCVP (twice), IEEE ICME, IEEE U-Media (five times), and IEEE SiPS. He was an Honorary Co-Chair for IEEE Umedia. He has also served as Technical Program Co-Chair for IEEE ISCAS, APSIPA ASC, IEEE APCCAS, IEEE SiPS (twice), DCV, and IEEE VCIP. He was Technical Committee Chair for IEEE CASCOM TC and IEEE TCMM, and has served as Guest Editor/Associate Editor for IEEE TCAS-I, IEEE J-STSP, Springer JSPS, Springer MSSP, and other journals. He has delivered more than 120 invited colloquia worldwide and has served as Visiting Professor/Consultant/Scientist for many institutions/companies. Recently, he organized and conducted an APSIPA panel on "The Future of Video Coding", which drew a great response.
Topic of Keynote: Visual Coding – From Traditional Approach to
Deep Learning Approach
Keynote Abstract:
In today’s Internet, visual data are everywhere: eighty percent
of all Internet traffic is video data. The immense size and
volume of visual data dictate the need for efficient coding
technology to compress them effectively. From the first video
coding standard in the mid-1980s to the latest VVC/H.266 in
2020, coding efficiency has improved substantially. Traditional
video codecs have been based on a block-based hybrid codec
structure. With the advancement of deep learning technology,
video coding can be assisted by deep learning tools and/or use a
deep learning-based neural network as the backbone. The
improvement in coding efficiency comes with the huge
computational complexity associated with deep approaches and the
need for a more appropriate visual quality metric. Image coding
is similar: from the early JPEG to BPG to VVC intra, coding
efficiency has improved substantially, and deep learning
approaches improve it further, though often at high
computational complexity. In this talk, we first discuss the key
tools in the block-based hybrid codec structure, then discuss
deep learning-based approaches, from autoencoders to the use of
generative adversarial networks (GANs). Finally, we highlight a
couple of our on-going projects based on GANs, including the use
of GANs in image coding and in video coding.
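To make the block-based hybrid structure the abstract mentions concrete, here is a minimal intra-coding sketch (a transform plus uniform quantization only; prediction, in-loop filtering, and entropy coding are omitted, and this is an illustrative reconstruction of the classic JPEG-style pipeline, not material from the talk itself):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, the transform at the heart of
    block-based codecs from JPEG onward (rows index frequency)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def encode_block(block, q=10.0):
    """Transform + uniform quantization of one 8x8 block: the
    intra-coding core of a traditional hybrid codec."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T          # 2-D DCT
    return np.round(coeffs / q).astype(np.int32)

def decode_block(codes, q=10.0):
    """Dequantize and inverse-transform back to pixels."""
    D = dct_matrix(codes.shape[0])
    return D.T @ (codes * q) @ D      # inverse 2-D DCT

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)

codes = encode_block(block)           # integers ready for entropy coding
recon = decode_block(codes)

# Because the DCT is orthonormal, the pixel-domain error is bounded
# by the quantization error: at most q/2 per coefficient.
assert np.max(np.abs(recon - block)) <= 40.0
```

Coarser quantization (larger `q`) trades reconstruction quality for rate, which is exactly the knob that the learned approaches discussed in the talk (autoencoders, GAN-based codecs) aim to optimize jointly with the transform itself.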