About Me

Hey there!

This is Yongquan, but feel free to call me Owen instead; that's my English nickname. I am cool with both ;) A funny story: I owe "Owen" to a little mix-up during English class, where my teacher took a creative leap from "Yong" to "On" and landed on "Owen."

I am currently a 3rd-year PhD student at the School of Computer Science and Engineering (CSE) of the University of New South Wales (UNSW), advised by Prof. Aaron Quigley and Prof. Wen Hu. I am also a member of the HCI BoDi Lab, the UNSW HCC Group, and the UNSW AI Institute.

Previously, I was a research assistant at the Pervasive HCI Group of Tsinghua University, advised by Prof. Chun Yu and Prof. Yuanchun Shi. Before that, I did a research internship at the HCI Lab (HCIL) of the University of Maryland, College Park, advised by Prof. Huaishu Peng. I also participated in the exchange program of the ME310 Global Collaborative Design Course, initiated by Prof. Larry Leifer of Stanford University. I received my Master's degree from the University of Science and Technology of China (USTC) and my Bachelor's degree from Donghua University (DHU).

About My Research

My research domain is Human-Computer Interaction (HCI), and my interests lie at the intersection of Human-Computer Interfaces, Ubiquitous and Mobile Computing, and Human-AI Interaction. Generally, I develop intelligent prototypes that integrate design patterns into engineering artifacts, and I explore novel applications spanning the physical, virtual, and mixed worlds to improve user experience. Specifically, I aim to build Vision-based Multimodal Interfaces (VMIs) that enhance context awareness using hardware (e.g., smartphones, diverse sensors) and software (e.g., deep learning, large language models), in order to improve systems' applicability, interactivity, and accessibility. A question I keep wrestling with in my research: as the primary source of information, how can the visual modality be fused, integrated, or combined with other modalities to enable low-cost, seamless, and user-friendly interaction between humans and machines?

News
July 2024: Our MultiSurf-GPT submission to MobileHCI'24 was accepted!
July 2024: Our MultiEEG-GPT submission to UbiComp/ISWC'24 was accepted!
Jun 2024: I am on the job market and actively looking for opportunities! 🙋‍♂️
Jun 2024: Our full paper Speech2Pro was rejected by IMWUT; I am looking for collaborators to polish it further!
Jun 2024: We submitted MultiSurf-GPT to the MobileHCI 2024 LBW!
Jun 2024: We submitted EmotionalDiffusion to the MobileHCI 2024 LBW!
Jun 2024: I submitted my PhD thesis proposal to the MobileHCI 2024 Doctoral Consortium!
Jun 2024: We submitted MultiEEG-GPT to a UbiComp/ISWC 2024 Workshop!
Jun 2024: We submitted the Ambient2Hap full paper to the VRST 2024 conference!
Oct 2023: I presented our work at the ISMAR 2023 conference!
Oct 2023: I presented GenAIR at the SUI 2023 conference!
Sep 2023: I presented MicroCam at the IMWUT/ISWC 2023 conference!
Apr 2023: We presented RadarFoot at the UIST 2023 conference!
Research & Work Experience
DingOS - Lenovo Inc. (Sep 2021 - Mar 2022)

Visiting Scientist

Beijing, China

Developed an adaptive GUI in the Department of Design

Key Focus: Interface of Operating System, Ecological Interface Collaboration

Pervasive HCI Lab - Tsinghua Uni (Sep 2020 - Aug 2021)

Research Assistant

Beijing, China

Worked on collaborative projects

Key Focus: Ubiquitous/Pervasive Computing, Mobile Computing, Accessibility

HCIL - Uni of Maryland (Sep 2019 - May 2020)

Research Intern

College Park, Maryland, United States

Worked on a wall-climbing robot project

Key Focus: Tangible Interaction, Digital Fabrication

iFLYTEK Co. Ltd. (Jun 2018 - Sep 2018)

Algorithm Engineer Intern

Hefei, Anhui, China

Developed an algorithm for object recognition and classification

Key Focus: Computer Vision, Deep Learning, Algorithm Development

SUGAR EXPO - Stanford Uni (Jun 9 2017 - Jun 23 2017)

Tutor

Stanford, California, United States

Served as a course tutor and worked on projects for the SUGAR EXPO

Key Focus: Global Design Innovation Course

ZTE Co. Ltd. (Sep 2015 - Dec 2015)

Course Intern

Shanghai, China

Developed an Arduino-based smart cane for blind users

Key Focus: Embedded Development, Wireless Communication

Education
University of New South Wales (UNSW) (2021 - expected in 2025)

Doctor of Philosophy, Computer Science & Engineering

Sydney, Australia
University of Science and Technology of China (USTC) (2016 - 2019)

Master of Engineering, Information and Communication Engineering

Hefei, Anhui, China
Donghua University (DHU) (2012 - 2016)

Bachelor of Engineering, Electronic and Information Engineering

Shanghai, China
Wuhu No.1 High School (2009 - 2012)

High School, Science Student

Wuhu, Anhui, China
Skills
Deep Learning & Machine Learning (PyTorch, TensorFlow, Keras)
Signal & Data Processing (MATLAB, Python, Excel)
Scripting Frameworks (Python, Shell, PHP)
Database Management Systems (SQL)
Embedded System Development (Arduino, C, C++)
XR (VR/AR/MR) Development (Unity, C#)
Website Development (JavaScript, TypeScript, HTML)
Android App Development (Android Studio, Java)
Hardware Fabrication (Electronic Circuits, 3D Printing)
UI/UX Design (Adobe Illustrator)
Photography & Cinematography (Adobe Photoshop, Adobe Lightroom)
Photo & Video Post-Processing (iMovie, CapCut, Final Cut Pro X)
Selected Awards/Honors
Shanghai Outstanding Graduates, Shanghai, China '16

1st Place

Mathematical Contest In Modeling, COMAP '15

Meritorious Winner

Annual Ten Students of DHU '15

DHU Highest Honor

East China division, Freescale Cup National College Student Smart Car Competition, MOE of China '15

Winning Prize

MathorCup Global Undergraduate Mathematical Modeling Challenge, CSOOPEM '15

Winning Prize

Shanghai Division, National Undergraduate Electronic Design Contest, MOE of China '15

3rd Place

Excellent Student Model, DHU '14 '15

2 Times, 1st Place

National TI Cup Shanghai Undergraduate Electronic Design Contest, MOE of China '14

3rd Place

Scholarships
CSE School Top-Up Scholarship, '22

UNSW

TFS Full PhD Scholarship, '21

UNSW

Postgraduate Study Scholarship, '16

1st Place, USTC

National Scholarship, DHU China '13 '14 '15

3 times, Top 1%, MOE of China

Enterprise Scholarship '15

DHU

Donghua Scholarship, '13 '14

2 times, Top 5%, DHU

Academic Services
Reviewer for VRST 2024

Academic paper review.

Reviewer for IMWUT 2024

Academic paper review.

Reviewer for MobileHCI 2024

Academic paper review.

Reviewer for HRI 2023

Academic paper review.

Student Volunteer for CHI 2021

Participated in organizing the academic schedule of CHI 2021.

Projects
Peer-Reviewed Work
EmotionalDiffusion (on-going)

This project aims to facilitate content creation in projection scenarios by combining visual (depth dimension) and textual modalities.

#generative AI #context creation #projection

Survey on Radar Sensing in HCI (on-going)

How is radar sensing employed in HCI? How is it different from vision-based sensing methods?

#radar #survey

MultiEEG-GPT (IMWUT' 24)

This project aims to facilitate content creation in projection scenarios by combining visual (depth dimension) and textual modalities.

#generative AI #context creation #projection

MultiSurf-GPT (MobileHCI' 24)

The recent rapid development of generative AI opens up the possibility of applying it to real-world scenarios.

#surface sensing #deep learning

Speech2Pro (on-going)

This project aims to facilitate content creation in projection scenarios by combining visual (depth dimension) and textual modalities.

#generative AI #context creation #projection

Ambient2Hap (on-going)

We propose a framework called Ambient2Hap to combine visual and auditory modalities for haptic rendering in VR environments.

#virtual reality #haptics

MicroCam (IMWUT' 23)

The recent rapid development of generative AI opens up the possibility of applying it to real-world scenarios.

#surface sensing #deep learning

RadarFoot (UIST' 23)

AR Shuttle Door offers the opportunity to retouch the real world, but can it change the real object itself?

#radar #surface sensing

Employing GenAI in Projection (SUI' 23)

OptiTrack is famous for its extremely high motion-tracking accuracy. We use the action sequences captured by its system as ground truth and simulate these sequences with other sensors.

#optitrack #tracking

Motion-Adaptive GUI (IUI' 23)

Mobile phone use is ubiquitous. However, most mobile phone interface designs consider only static scenes, rather than dynamic ones (such as human movement), from the outset.

#computational GUI

GenAIR (ISMAR 2023)

We do many regular activities every day, such as getting up, brushing our teeth, and turning on the computer. The activities themselves are rich in information (reflecting our identities, preferences, etc.), and contextual information can often be inferred from the relations between activities.

#implicit interaction #natural interaction #daily routines

Exploring Design Factors of GenAI in AR (CHI' 22)

Using four images of a hairstyle to reconstruct it as a 3D hair model.

#design factor #augmented reality

SmartRecorder (IUI' 23)

We present a novel tool named SmartRecorder that helps people without video-editing skills create video tutorials for smartphone interaction tasks.

#interactive systems and tools #tutorial creation

FootUI (CHI' 21)

This research aims to recognize the walking surface using a radar sensor embedded in a shoe, enabling ground context-awareness.

#radar #foot interaction

Auth+Track (CHI' 21)

We propose Auth+Track, a novel authentication model that aims to reduce redundant authentication in everyday smartphone usage. Through sparse authentication and continuous tracking of the user's status, Auth+Track eliminates "gap" authentication between fragmented sessions and enables "Authentication Free when User is Around".

#authentication #smartphone

Wallbot (CHI' 21)

We envision a future where tangible interaction can be extended from conventional horizontal surfaces to vertical surfaces; indoor vertical areas such as walls, windows, and ceilings can be used for dynamic and direct physical manipulation.

#tangible interaction #robotics

IEEE International Symposium on Circuits & Systems 2019 (ISCAS' 19)

There is still a gap between existing objective Image Quality Assessment (IQA) methods and subjective evaluation by human beings. Therefore, a method based on Human-Machine Co-Judgment (HMCJ) is proposed to achieve accurate semantic recognition. Furthermore, due to the lack of a recognized semantic dataset, our Surveillance Semantic Distortion (SSD) Dataset was built as the basis for IQA.

#semantic dataset

IEEE Conference on Automatic Face and Gesture Recognition 2018 (FG' 18)

We propose a novel semantic-level full-reference image quality assessment (FR-IQA) method named Semantic Distortion Measurement (SDM) to measure the degree of semantic distortion for video encryption. Then, based on a semantic saliency dataset, we verify that the proposed SDM method outperforms state-of-the-art algorithms.

#semantic encryption #video encoding

Non-Archived Work
Video Hierarchical Encryption Based on Quantum Key Distribution (National Research Project)

The amount of key data generated by even the most advanced quantum key distribution techniques is very limited. In such a situation, we can only extract a very small portion of the most important information for encryption, which we call semantic information.

#semantic encryption #quantum encryption

Touch to See (Stanford EXPO Project)

Very few pictures appear in books for the blind. Our project presented a Braille graphics standard library and designed a new way to translate ordinary 2D pictures into touchable 3D pictures, based on image captioning (an artificial intelligence method) and a touchable ink invented by Samsung in Thailand.

#touchable book #accessibility

An Optimal Dynamic Network Planning Model for Eradication of Ebola (Meritorious Winner Award of MCM/ICM)

We built a realistic, sensible, and useful model that considers not only the spread of the disease, the quantity of medicine needed, feasible delivery systems, delivery locations, and the manufacturing speed of the vaccine or drug, but also other critical factors, to optimize the eradication of Ebola.

#mathematical modeling

A New Remote Interactive Experimental Platform (National Uni Student Innovation Project)

This work implemented the design of FM modulation and demodulation circuits centered on the MC1648 and MC1496, and realized circuit performance testing with the virtual instruments embedded in the NI ELVIS II+.

#RF circuit #experimental platform

Remote Monitoring and Anti-Collision of a Mobile Robot Based on Network Technology (Winning Prize of the Freescale Cup Smart Car Competition)

We collected multiple data parameters from the mobile robot through our web monitoring tool, and we can also change the robot's operating state to meet practical requirements by adjusting these parameters.

#automatic driving

The Design and Implementation of RF Amplifier in the Internet of Things (National Uni Student Innovation Project)

This project focuses on the design and hardware implementation of a low-noise RF amplifier, using a vector network analyzer to test the circuit and improve its performance.

#RF amplifier

Resume
Yongquan Hu's Resume on Print

If you are interested in my profile and have any internship or placement opportunities, please feel free to get in touch with me for my resume. Thanks!

Contact
More Info

It is my great honor to have worked with awesome people from different affiliations, and I am always eager to find potential collaboration opportunities. 😊

If you are interested in working with me, please feel free to contact me! 🧐