This is Yongquan Hu. You can also call me Owen, my English nickname. I'm cool with both :-) A fun fact, here is how my English name came about: " Yong --> On --> Owen ".
I am currently a PhD student at the School of Computer Science and Engineering (CSE) of the University of New South Wales (UNSW), advised by Prof. Aaron Quigley. This is our HCI BoDi Lab: the word "BoDi" is an abbreviation for "Border of Digital Interaction", and also refers to our enthusiasm for leveraging machines and various parts of the human body to form multiple interfaces. Our vision is to advance Human-Computer Interaction (HCI) research through novel human interface technologies that bridge the divide between the physical world we live in and the digital world where the power of computing currently resides.
Previously, I was a research assistant at the Pervasive HCI Group of Tsinghua University, advised by Prof. Chun Yu and Prof. Yuanchun Shi. Before that, I did a research internship at the HCI Lab (HCIL) of the University of Maryland, College Park, advised by Prof. Huaishu Peng. I also participated in an exchange through the ME310 Global Collaborative Design Course, initiated by Prof. Larry Leifer of Stanford University. I received my Master's degree from the University of Science and Technology of China (USTC) and my Bachelor's degree from Donghua University (DHU).
Generally, my research interests lie in the application layer: creating novel multimodal interfaces and building interactive systems that support applicability, interactivity, and accessibility. In particular, building on hardware (such as smartphones, smartwatches, and microcontrollers) and software (such as deep learning and machine learning), I primarily study human-object, object-environment, and human-environment interactions that typically happen during routine activities, develop intelligent prototypes that integrate design patterns into engineering artifacts, and explore novel applications of various techniques to improve user experience.
As a member of the design department, I participated in the development and research of novel AI-based GUI interfaces, and helped promote the ecological synergy of Lenovo's multi-terminal smart products.
Key Focus: Interface of Operating System, Ecological Interface Collaboration
As one of the researchers in the lab, my research focuses on ubiquitous computing and mobile computing.
Key Focus: Ubiquitous/Pervasive Computing, Mobile Computing, Accessibility
As a research intern, I joined the development of a wall-climbing intelligent robot as a prototype for vertical tangible interaction.
Key Focus: Tangible Interaction, Digital Fabrication
DHU Highest Honor
2 Times, 1st Place
1st Place, USTC
3 times, Top 1%, MOE of China
2 times, Top 5%, DHU
Academic paper review.
Participated in organizing the academic schedule of CHI 2021.
The recent rapid development of generative AI allows us to see the possibility of applying it to real-world scenarios.
In theory, with sufficient accuracy, radar signals can detect any physical fluctuation. We try to expand the use of radar signals and apply them to new application scenarios.
OptiTrack is famous for its extremely high motion-tracking accuracy. We use the action sequences captured by its system as ground truth, and use other sensors to reproduce these action sequences.
AR Shuttle Door offers the opportunity to retouch the real world, but can it change the real object itself?
Mobile phone use is ubiquitous. However, most mobile phone interfaces are designed only for static scenarios, not for dynamic ones (such as human movement) from the outset.
We perform many regular activities every day, such as getting up, brushing our teeth, and turning on the computer. These activities are rich in information (reflecting our identities, preferences, etc.), and contextual information between activities can often be inferred.
#implicit interaction #natural interaction #daily routines
We present a novel tool named SmartRecorder that helps people without video-editing skills create video tutorials for smartphone interaction tasks.
#interactive systems and tools #tutorial creation
This research aims to recognize the walking surface using a radar sensor embedded in a shoe, enabling ground context-awareness.
#radar #foot interaction
We propose Auth+Track, a novel authentication model that aims to reduce redundant authentication in everyday smartphone usage. By sparse authentication and continuous tracking of the user's status, Auth+Track eliminates the “gap” authentication between fragmented sessions and enables "Authentication Free when User is Around".
#authentication #smartphone
Using four images of a hairstyle to reconstruct it as a 3D hair model.
#hair modeling #3D reconstruction
We envision a future where tangible interaction can be extended from conventional horizontal surfaces to vertical surfaces; indoor vertical areas such as walls, windows, and ceilings can be used for dynamic and direct physical manipulation.
#tangible interaction #vertical surfaces
There is still a gap between existing objective Image Quality Assessment (IQA) methods and subjective human evaluation. Therefore, we propose a method based on Human-Machine Co-Judgment (HMCJ) to achieve accurate semantic recognition. Furthermore, due to the lack of a recognized semantic dataset, we build our Surveillance Semantic Distortion (SSD) Dataset as a basis for IQA.
We propose a novel semantic-level full-reference image quality assessment (FR-IQA) method named Semantic Distortion Measurement (SDM) to measure the degree of semantic distortion for video encryption. Then, based on a semantic saliency dataset, we verify that the proposed SDM method outperforms state-of-the-art algorithms.
#semantic encryption #video encoding
The amount of data generated by the current most advanced quantum key distribution techniques is very limited. In such a situation, we can only extract a very small portion of the most important information for encryption, which we call semantic information.
#semantic encryption #quantum encryption
Very few pictures appear in books for blind readers. Our project therefore presented a Braille graphics standard library and designed a new way to translate an ordinary 2D picture into a touchable 3D picture, based on Image Captioning (an Artificial Intelligence method) and a touchable ink invented by Samsung in Thailand.
#touchable book #accessibility
We have built a realistic, sensible, and useful model that considers not only the spread of the disease, the quantity of medicine needed, feasible delivery systems, delivery locations, and the manufacturing speed of the vaccine or drug, but also other critical factors, to optimize the eradication of Ebola.
This work implemented the design of FM and frequency demodulation circuits centered on the MC1648 and MC1496, and carried out circuit performance tests using the virtual instruments embedded in the NI ELVIS II+.
#RF circuit #experimental platform
We collected multiple parameters from the mobile robot through our Web Monitoring Tool, and can also change the robot's operating state as needed by adjusting these parameters.
This project focuses on the design and hardware implementation of a low-noise RF amplifier, using a vector network analyzer to test the circuit and verify its performance.
It is my great honor to have worked with these awesome people from different affiliations, and I am always looking for potential collaboration opportunities. 😊
If you are interested in working with me, please feel free to contact me! 🧐