Hey there!
This is Yongquan, but feel free to call me Owen instead; that's my English nickname. I am cool with both ; ) A funny story: I owe "Owen" to a little mix-up during English class where my teacher took a creative leap from "Yong" to "On" and landed on "Owen."
Currently, I am a 3rd-year PhD student at the School of Computer Science and Engineering (CSE) of the University of New South Wales (UNSW), advised by Prof. Aaron Quigley and Prof. Wen Hu. I am also a member of the HCI BoDi Lab, the UNSW HCC Group, and the UNSW AI Institute.
Previously, I was a research assistant at the Pervasive HCI Group of Tsinghua University, advised by Prof. Chun Yu and Prof. Yuanchun Shi. Before that, I did a research internship at the HCI Lab (HCIL) of the University of Maryland, College Park, advised by Prof. Huaishu Peng. I also participated in the exchange program of the ME310 Global Collaborative Design Course, initiated by Prof. Larry Leifer of Stanford University. I received my Master's degree from the University of Science and Technology of China (USTC) and my Bachelor's degree from Donghua University (DHU).
About My Research
My research domain is Human-Computer Interaction (HCI), and my interests mainly lie at the intersection of Human-Computer Interfaces, Ubiquitous and Mobile Computing, and Human-AI Interaction. Generally, I develop intelligent prototypes that integrate design patterns into engineering artifacts, and I explore novel applications covering the physical, virtual, and mixed worlds to improve user experience. Specifically, I aim to build Vision-based Multimodal Interfaces (VMIs) that enhance context awareness, using both hardware (e.g., smartphones and diverse sensors) and software (e.g., Deep Learning and Large Language Models), to facilitate systems' applicability, interactivity, and accessibility. In my research, a question I have always wrestled with is: as the primary source of information, how can the visual modality be fused, integrated, or made to collaborate with other modalities to enable low-cost, seamless, and user-friendly interaction between humans and machines?
Developed Adaptive GUI in the Department of Design
Key Focus: Operating System Interfaces, Ecological Interface Collaboration
Worked on collaborative projects
Key Focus: Ubiquitous/Pervasive Computing, Mobile Computing, Accessibility
Worked on wall-climbing robot project
Key Focus: Tangible Interaction, Digital Fabrication
Developed an algorithm for object recognition and classification
Key Focus: Computer Vision, Deep Learning, Algorithm Development
Course tutor and worked on projects for SUGAR EXPO
Key Focus: Global Design Innovation Course
Developed an Arduino-based smart cane for the blind
Key Focus: Embedded Development, Wireless Communication
1st Place
Meritorious Winner
DHU Highest Honor
Winning Prize
Winning Prize
3rd Place
2 times, 1st Place
3rd Place
UNSW
UNSW
1st Place, USTC
3 times, Top 1%, MOE of China
DHU
2 times, Top 5%, DHU
Academic paper review.
Academic paper review.
Academic paper review.
Academic paper review.
Participated in organizing the academic schedule of CHI 2021.
This project aims to facilitate content creation in projection scenarios by combining visual (depth dimension) and textual modalities.
#generative AI #content creation #projection
How is radar sensing employed in HCI? How is it different from vision-based sensing methods?
#radar #survey
The recent rapid development of generative AI opens up the possibility of applying it to real-world scenarios.
#surface sensing #deep learning
We propose a framework called Ambient2Hap to combine visual and auditory modalities for haptic rendering in VR environments.
#virtual reality #haptics
AR Shuttle Door offers the opportunity to retouch the real world, but can it change the real object itself?
#radar #surface sensing
OptiTrack is famous for its extremely high motion-tracking accuracy. We use the action sequences captured by its system as ground truth, and try to reproduce these sequences with other sensors.
#optitrack #tracking
Mobile phone use is ubiquitous. However, most mobile phone interfaces are designed only for static scenarios, without considering dynamic ones (such as when the user is moving).
#computational GUI
We perform many routine activities every day, such as getting up, brushing our teeth, and turning on the computer. These activities are rich in information (reflecting our identities, preferences, etc.), and contextual information can often be inferred from the transitions between activities.
#implicit interaction #natural interaction #daily routines
We use four images of a hairstyle to reconstruct it as a 3D hair model.
#design factor #augmented reality
We present a novel tool named SmartRecorder that helps people without video-editing skills create video tutorials for smartphone interaction tasks.
#interactive systems and tools #tutorial creation
This research aims to recognize the walking surface using a radar sensor embedded in a shoe, enabling ground context-awareness.
#radar #foot interaction
We propose Auth+Track, a novel authentication model that aims to reduce redundant authentication in everyday smartphone usage. By sparse authentication and continuous tracking of the user's status, Auth+Track eliminates the "gap" authentication between fragmented sessions and enables "Authentication Free when User is Around".
#authentication #tracking
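To make the idea concrete, here is a minimal conceptual sketch in Python of the sparse-authentication-plus-continuous-tracking loop. This is my own illustration rather than the actual system: the class, the tracking signal, and the timeout are hypothetical.

```python
# Conceptual sketch of the Auth+Track idea (illustrative only, not the real
# implementation): one explicit authentication, then continuous tracking keeps
# the session alive while the same user stays around.

import time

class AuthPlusTrack:
    """Stays 'authentication free' while tracking confirms the user is around."""

    def __init__(self, track_timeout: float = 2.0):
        self.authenticated = False   # set by an explicit (sparse) authentication
        self.last_seen = 0.0         # last time tracking confirmed the user
        self.track_timeout = track_timeout

    def authenticate(self) -> None:
        """Explicit authentication (e.g., fingerprint), performed sparsely."""
        self.authenticated = True
        self.last_seen = time.time()

    def on_tracking_update(self, user_present: bool) -> None:
        """Continuous tracking signal (e.g., from a camera or other sensors)."""
        if user_present:
            self.last_seen = time.time()
        elif self.authenticated and time.time() - self.last_seen > self.track_timeout:
            # Tracking lost the user: fall back to requiring authentication.
            self.authenticated = False

    def can_unlock(self) -> bool:
        """No re-authentication between fragmented sessions while the user is around."""
        return self.authenticated and (time.time() - self.last_seen) <= self.track_timeout
```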
We envision a future where tangible interaction can be extended from conventional horizontal surfaces to vertical surfaces; indoor vertical areas such as walls, windows, and ceilings can be used for dynamic and direct physical manipulation.
#tangible interaction #vertical surfaces
There is still a gap between existing objective Image Quality Assessment (IQA) methods and the subjective evaluations of human beings. We therefore propose a method based on Human-Machine Co-Judgment (HMCJ) to achieve accurate semantic recognition. Furthermore, due to the lack of a recognized semantic dataset, we build the Surveillance Semantic Distortion (SSD) Dataset as a basis for IQA.
#semantic dataset
We propose a novel semantic-level full-reference image quality assessment (FR-IQA) method named Semantic Distortion Measurement (SDM) to measure the degree of semantic distortion for video encryption. Based on a semantic saliency dataset, we verify that the proposed SDM method outperforms state-of-the-art algorithms.
#semantic encryption #video encoding
The amount of key data generated by even the most advanced quantum key distribution techniques is very limited. In such a situation, we can only extract and encrypt a very small portion of the most important information, which we call semantic information.
#semantic encryption #quantum encryption
Very few pictures appear in books for the blind. Our project presents a standard Braille graphics library and designs a new way to translate ordinary 2D pictures into touchable 3D pictures, based on image captioning (an Artificial Intelligence method) and a touchable ink invented by Samsung in Thailand.
#touchable book #accessibility
We built a realistic, sensible, and useful model that considers not only the spread of the disease, the quantity of medicine needed, feasible delivery systems, delivery locations, and the manufacturing speed of the vaccine or drug, but also other critical factors, in order to optimize the eradication of Ebola.
#mathematical modeling
This work implemented FM modulation and demodulation circuits centered on the MC1648 and MC1496, and tested circuit performance with the virtual instruments embedded in the NI ELVIS II+ platform.
#RF circuit #experimental platform
We collect multiple parameters from the mobile robot through our Web Monitoring Tool, and we can also change the robot's operating state as required by adjusting these parameters.
#automatic driving
This project focuses on the design and hardware implementation of a low-noise RF amplifier, and uses a vector network analyzer to test the circuit and improve its performance.
#RF amplifier
It is my great honor to have worked with awesome people from many different affiliations, and I am always eager to explore potential collaboration opportunities.
If you are interested in working with me, please feel free to contact me!