Patents by Inventor Chun Yat Frank Li
Chun Yat Frank Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230388784
Abstract: The present disclosure provides systems and methods for wirelessly coupling one or more accessories with a host device. The one or more accessories may be classified by the host device as primary, saved, or unsaved devices. Primary and saved devices may automatically couple to the host device via a short range wireless communications interface. Thus, the host device may simultaneously be coupled to one or more accessories. The host device may output first content to a first accessory, such as a saved accessory. The host device may receive second content and output the second content to a second accessory while simultaneously outputting the first content to the first accessory.
Type: Application
Filed: August 10, 2023
Publication date: November 30, 2023
Applicant: Google LLC
Inventors: Basheer Tome, Sandeep Singh Waraich, Chun Yat Frank Li
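The classification and routing behavior the abstract describes can be illustrated with a minimal Python sketch. All class and method names here (`Accessory`, `HostDevice`, `discover`, `output`) are hypothetical, chosen only to show the idea of auto-coupling primary/saved devices and streaming different content to different accessories at once; they are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Accessory:
    name: str
    category: str  # "primary", "saved", or "unsaved"

class HostDevice:
    def __init__(self):
        self.connected = []  # accessories currently coupled
        self.streams = {}    # accessory name -> content being output

    def discover(self, accessory):
        # Primary and saved devices couple automatically over the
        # short-range wireless interface; unsaved devices do not.
        if accessory.category in ("primary", "saved"):
            self.connected.append(accessory)

    def output(self, content, accessory):
        # Route a content stream to one accessory without interrupting
        # streams already playing on other connected accessories.
        if accessory in self.connected:
            self.streams[accessory.name] = content

host = HostDevice()
earbuds = Accessory("earbuds", "primary")
speaker = Accessory("speaker", "saved")
host.discover(earbuds)
host.discover(speaker)
host.output("podcast", earbuds)    # first content to first accessory
host.output("doorbell", speaker)   # second content plays concurrently
```

The key point is the last two calls: both streams coexist in `streams`, mirroring the abstract's claim of simultaneous output to two accessories.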
-
Publication number: 20210314768
Abstract: The present disclosure provides systems and methods for wirelessly coupling one or more accessories with a host device. The one or more accessories may be classified by the host device as primary, saved, or unsaved devices. Primary and saved devices may automatically couple to the host device via a short range wireless communications interface. Thus, the host device may simultaneously be coupled to one or more accessories. The host device may output first content to a first accessory, such as a saved accessory. The host device may receive second content and output the second content to a second accessory while simultaneously outputting the first content to the first accessory.
Type: Application
Filed: April 1, 2020
Publication date: October 7, 2021
Inventors: Basheer Tome, Sandeep Singh Waraich, Chun Yat Frank Li
-
Patent number: 10599259
Abstract: A method that includes employing several sensors associated with a handheld controller, where each sensor is one of a hover, touch, or force/pressure sensor, and generating, by one or more of the sensors, sensor data associated with the position of a user's hand and fingers in relation to the handheld controller. The method continues with combining the sensor data from several sensors to form aggregate sensor data, sending the aggregate sensor data to a processor, and generating an estimated position of the user's hand and fingers based on the aggregate sensor data.
Type: Grant
Filed: August 24, 2018
Date of Patent: March 24, 2020
Assignee: GOOGLE LLC
Inventors: Debanjan Mukherjee, Basheer Tome, Sarnab Bhattacharya, Bhaskar Vadathavoor, Shiblee Imtiaz Hasan, Chun Yat Frank Li
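The aggregate-then-estimate flow in this abstract can be sketched in a few lines of Python. The abstract does not specify how sensor data is combined, so this sketch assumes a simple confidence-weighted average of per-sensor position estimates; the function names and the weighting scheme are illustrative, not the patented method.

```python
# Each reading is (sensor_type, estimated_position, confidence), where
# sensor_type is one of "hover", "touch", or "force".

def aggregate(readings):
    # Combine data from several sensors into aggregate sensor data,
    # dropping readings with no confidence.
    return [r for r in readings if r[2] > 0.0]

def estimate_position(aggregated):
    # Produce an estimated finger position from the aggregate data
    # as a confidence-weighted average (an assumed fusion rule).
    total = sum(conf for _, _, conf in aggregated)
    if total == 0:
        return None
    return sum(pos * conf for _, pos, conf in aggregated) / total

readings = [
    ("hover", 0.40, 0.2),   # hover sensor: coarse, low confidence
    ("touch", 0.50, 0.5),   # touch sensor: contact location
    ("force", 0.52, 0.3),   # force/pressure sensor
]
estimate = estimate_position(aggregate(readings))
```

A real controller would run this fusion per finger and per frame; the point here is only the pipeline shape the abstract names: sense, aggregate, estimate.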
-
Patent number: 10592048
Abstract: A method for aligning an image on a mobile device disposed within a head-mounted display (HMD) housing includes: detecting a request to align an image on a touchscreen of a mobile device; detecting, on the touchscreen, a first detected location corresponding to a first touchscreen input event; determining a first displacement of the first detected location with respect to a first target location of the first touchscreen input event; and transposing the image on the touchscreen based on the first displacement. A virtual reality system includes: a mobile device having a touchscreen configured to display an image; and an HMD housing having a first contact configured to generate a first input event at a first location on the touchscreen when the mobile device is disposed within the HMD housing.
Type: Grant
Filed: May 3, 2017
Date of Patent: March 17, 2020
Assignee: GOOGLE LLC
Inventors: Chun Yat Frank Li, Hayes S. Raffle, Eric Allan MacIntosh
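The displacement-and-transpose step is simple vector arithmetic, sketched below in Python. The coordinates and function names are hypothetical; the sketch only shows the relationship the abstract states: measure how far the detected touch is from its target, then shift the image by the opposite amount so content lines up with the housing.

```python
def displacement(detected, target):
    # First displacement: detected touch location minus its target location.
    return (detected[0] - target[0], detected[1] - target[1])

def transpose_image(origin, disp):
    # Shift the image origin against the displacement so the on-screen
    # content lines up with the HMD housing's contact points.
    return (origin[0] - disp[0], origin[1] - disp[1])

# Housing contact landed at (105, 198) but was expected at (100, 200):
d = displacement(detected=(105, 198), target=(100, 200))
new_origin = transpose_image((0, 0), d)
```

With the contact 5 px right and 2 px above its target, the image is moved 5 px left and 2 px down to compensate.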
-
Patent number: 10545584
Abstract: A controller configured to control a pointer in a virtual reality environment includes a multi-axis magnetic field sensor, a multi-axis accelerometer, a gyroscope, a touchpad, and a wireless communications circuit. The controller can also include a processor and a memory storing instructions that, when executed by the processor, cause the processor to obtain geomagnetic field data from the multi-axis magnetic field sensor, obtain acceleration data describing a direction and a magnitude of force affecting the controller from the multi-axis accelerometer, and obtain angular velocity data describing a rotational position of the controller from the gyroscope. The processor can communicate movement data to a computing device configured to generate a rendering of the virtual reality environment, the movement data describing an orientation of the controller, wherein the movement data is based on at least one of the geomagnetic field data, the acceleration data, or the angular velocity data.
Type: Grant
Filed: May 17, 2017
Date of Patent: January 28, 2020
Assignee: GOOGLE LLC
Inventors: Basheer Tome, Hayes S. Raffle, Chun Yat Frank Li
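The abstract describes orientation derived from magnetometer, accelerometer, and gyroscope data but does not say how the streams are fused. A common, non-patent-specific approach is a complementary filter: integrate the gyroscope's angular rate, then pull the result slowly toward the magnetometer's absolute heading to cancel drift. The sketch below assumes that approach and uses illustrative numbers.

```python
def complementary_yaw(prev_yaw, gyro_rate, mag_yaw, dt, alpha=0.98):
    # Integrate angular velocity (gyroscope), then correct slow drift
    # toward the magnetometer's absolute heading. alpha close to 1
    # trusts the gyro short-term and the magnetometer long-term.
    integrated = prev_yaw + gyro_rate * dt
    return alpha * integrated + (1.0 - alpha) * mag_yaw

# Controller starts at 10 degrees yaw, rotating at 20 deg/s,
# sampled at 100 Hz for 5 steps:
yaw = 10.0
for _ in range(5):
    yaw = complementary_yaw(yaw, gyro_rate=20.0, mag_yaw=10.0, dt=0.01)
```

The resulting `yaw` tracks the rotation (rising toward ~11 degrees) while the magnetometer term keeps it anchored; the controller would then package such orientation values as the "movement data" sent to the rendering device.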
-
Patent number: 10509487
Abstract: In a system for combining a gyromouse input with a touch surface input in an augmented reality (AR) environment and/or a virtual reality (VR) environment, a virtual display of virtual items and/or features may be adjusted in response to movement of the gyromouse combined with touch inputs, or touch and drag inputs, received on a touch surface of the gyromouse. Use of the gyromouse in the AR/VR environment may allow touch screen capabilities to be accurately projected into a three dimensional virtual space, providing a controller having improved functionality and utility in the AR/VR environment, and enhancing the user's experience.
Type: Grant
Filed: December 15, 2016
Date of Patent: December 17, 2019
Assignee: GOOGLE LLC
Inventors: David Dearman, Chun Yat Frank Li
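One way to picture the combination of gyromouse orientation and touch-drag input is below. This is a hypothetical sketch, not the patented implementation: it assumes the controller's yaw/pitch pick a point on a virtual panel one meter away, and a drag on the touch surface nudges that point. All function names, the panel distance, and the drag scale are invented for illustration.

```python
import math

def ray_point(yaw_deg, pitch_deg, distance=1.0):
    # Point where the controller's pointing ray meets a flat virtual
    # panel `distance` meters ahead (assumed projection model).
    x = distance * math.tan(math.radians(yaw_deg))
    y = distance * math.tan(math.radians(pitch_deg))
    return (x, y)

def apply_drag(point, drag, scale=0.5):
    # A touch-and-drag on the gyromouse surface offsets the projected
    # point, layering 2D touch input on top of 3D orientation input.
    return (point[0] + drag[0] * scale, point[1] + drag[1] * scale)

p = ray_point(0.0, 0.0)         # pointing straight ahead
p = apply_drag(p, (0.2, -0.1))  # drag right and down on the touch surface
```

The orientation input places the cursor coarsely; the touch surface then refines it, which is the "touch screen capabilities projected into 3D space" idea the abstract describes.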
-
Patent number: 10475254
Abstract: Systems, devices, methods, computer program products, and electronic apparatuses for aligning components in virtual reality environments are provided. An example method includes detecting a first input from a handheld controller of a virtual reality system; responsive to detecting the first input, instructing a user to orient the handheld controller in a designated direction; detecting a second input from the handheld controller; and responsive to detecting the second input, storing alignment data representative of an alignment of the handheld controller.
Type: Grant
Filed: January 4, 2019
Date of Patent: November 12, 2019
Assignee: Google LLC
Inventors: David Dearman, Chun Yat Frank Li, Erica Morse
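The two-input calibration flow in the abstract can be sketched as a small state machine. The abstract does not define what the alignment data contains, so this sketch assumes it is a yaw offset mapping the controller's measured heading onto the designated direction; the class, method names, and "straight ahead = 0 degrees" convention are all illustrative.

```python
class AlignmentCalibrator:
    DESIGNATED_YAW = 0.0  # assumed designated direction: "straight ahead"

    def __init__(self):
        self.offset = None  # alignment data, stored on the second input

    def on_first_input(self):
        # First input: prompt the user to orient the controller.
        return "Point the controller straight ahead, then press again."

    def on_second_input(self, measured_yaw):
        # Second input: store alignment data -- the correction that maps
        # the measured heading onto the designated direction.
        self.offset = self.DESIGNATED_YAW - measured_yaw

    def aligned(self, raw_yaw):
        # Apply the stored alignment to subsequent readings.
        return raw_yaw + self.offset

cal = AlignmentCalibrator()
prompt = cal.on_first_input()
cal.on_second_input(measured_yaw=12.5)  # controller was skewed 12.5 degrees
corrected = cal.aligned(12.5)           # reads as straight ahead afterward
```

After calibration, every subsequent orientation reading is corrected by the stored offset, which is the practical effect of "storing alignment data representative of an alignment of the handheld controller."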
-
Publication number: 20190155439
Abstract: A method that includes employing several sensors associated with a handheld controller, where each sensor is one of a hover, touch, or force/pressure sensor, and generating, by one or more of the sensors, sensor data associated with the position of a user's hand and fingers in relation to the handheld controller. The method continues with combining the sensor data from several sensors to form aggregate sensor data, sending the aggregate sensor data to a processor, and generating an estimated position of the user's hand and fingers based on the aggregate sensor data.
Type: Application
Filed: August 24, 2018
Publication date: May 23, 2019
Inventors: Debanjan Mukherjee, Basheer Tome, Sarnab Bhattacharya, Bhaskar Vadathavoor, Shiblee Imtiaz Hasan, Chun Yat Frank Li
-
Publication number: 20190139323
Abstract: Systems, devices, methods, computer program products, and electronic apparatuses for aligning components in virtual reality environments are provided. An example method includes detecting a first input from a handheld controller of a virtual reality system; responsive to detecting the first input, instructing a user to orient the handheld controller in a designated direction; detecting a second input from the handheld controller; and responsive to detecting the second input, storing alignment data representative of an alignment of the handheld controller.
Type: Application
Filed: January 4, 2019
Publication date: May 9, 2019
Inventors: David Dearman, Chun Yat Frank Li, Erica Morse
-
Combining gaze input and touch surface input for user interfaces in augmented and/or virtual reality
Patent number: 10275023
Abstract: In a virtual reality system, an optical tracking device may detect and track a user's eye gaze direction and/or movement, and/or sensors may detect and track a user's head gaze direction and/or movement, relative to virtual user interfaces displayed in a virtual environment. A processor may process the detected gaze direction and/or movement as a user input, and may translate the user input into a corresponding interaction in the virtual environment. Gaze directed swipes on a virtual keyboard displayed in the virtual environment may be detected and tracked, and translated into a corresponding text input, either alone or together with user input(s) received by the controller. The user may also interact with other types of virtual interfaces in the virtual environment using gaze direction and movement to provide an input, either alone or together with a controller input.
Type: Grant
Filed: December 21, 2016
Date of Patent: April 30, 2019
Assignee: GOOGLE LLC
Inventors: Chris McKenzie, Chun Yat Frank Li, Hayes S. Raffle
-
Patent number: 10198874
Abstract: Systems, devices, methods, computer program products, and electronic apparatuses for aligning components in virtual reality environments are provided. An example method includes detecting a first input from a handheld controller of a virtual reality system; responsive to detecting the first input, instructing a user to orient the handheld controller in a designated direction; detecting a second input from the handheld controller; and responsive to detecting the second input, storing alignment data representative of an alignment of the handheld controller.
Type: Grant
Filed: May 12, 2017
Date of Patent: February 5, 2019
Assignee: Google LLC
Inventors: David Dearman, Chun Yat Frank Li, Erica Morse
-
Publication number: 20170336882
Abstract: A controller configured to control a pointer in a virtual reality environment includes a multi-axis magnetic field sensor, a multi-axis accelerometer, a gyroscope, a touchpad, and a wireless communications circuit. The controller can also include a processor and a memory storing instructions that, when executed by the processor, cause the processor to obtain geomagnetic field data from the multi-axis magnetic field sensor, obtain acceleration data describing a direction and a magnitude of force affecting the controller from the multi-axis accelerometer, and obtain angular velocity data describing a rotational position of the controller from the gyroscope. The processor can communicate movement data to a computing device configured to generate a rendering of the virtual reality environment, the movement data describing an orientation of the controller, wherein the movement data is based on at least one of the geomagnetic field data, the acceleration data, or the angular velocity data.
Type: Application
Filed: May 17, 2017
Publication date: November 23, 2017
Inventors: Basheer Tome, Hayes S. Raffle, Chun Yat Frank Li
-
Publication number: 20170336915
Abstract: A method for aligning an image on a mobile device disposed within a head-mounted display (HMD) housing includes: detecting a request to align an image on a touchscreen of a mobile device; detecting, on the touchscreen, a first detected location corresponding to a first touchscreen input event; determining a first displacement of the first detected location with respect to a first target location of the first touchscreen input event; and transposing the image on the touchscreen based on the first displacement. A virtual reality system includes: a mobile device having a touchscreen configured to display an image; and an HMD housing having a first contact configured to generate a first input event at a first location on the touchscreen when the mobile device is disposed within the HMD housing.
Type: Application
Filed: May 3, 2017
Publication date: November 23, 2017
Inventors: Chun Yat Frank Li, Hayes S. Raffle, Eric Allan MacIntosh
-
Publication number: 20170330387
Abstract: Systems, devices, methods, computer program products, and electronic apparatuses for aligning components in virtual reality environments are provided. An example method includes detecting a first input from a handheld controller of a virtual reality system; responsive to detecting the first input, instructing a user to orient the handheld controller in a designated direction; detecting a second input from the handheld controller; and responsive to detecting the second input, storing alignment data representative of an alignment of the handheld controller.
Type: Application
Filed: May 12, 2017
Publication date: November 16, 2017
Inventors: David Dearman, Chun Yat Frank Li, Erica Morse
-
Publication number: 20170329419
Abstract: In a system for combining a gyromouse input with a touch surface input in an augmented reality (AR) environment and/or a virtual reality (VR) environment, a virtual display of virtual items and/or features may be adjusted in response to movement of the gyromouse combined with touch inputs, or touch and drag inputs, received on a touch surface of the gyromouse. Use of the gyromouse in the AR/VR environment may allow touch screen capabilities to be accurately projected into a three dimensional virtual space, providing a controller having improved functionality and utility in the AR/VR environment, and enhancing the user's experience.
Type: Application
Filed: December 15, 2016
Publication date: November 16, 2017
Inventors: David Dearman, Chun Yat Frank Li
-
Combining gaze input and touch surface input for user interfaces in augmented and/or virtual reality
Publication number: 20170322623
Abstract: In a virtual reality system, an optical tracking device may detect and track a user's eye gaze direction and/or movement, and/or sensors may detect and track a user's head gaze direction and/or movement, relative to virtual user interfaces displayed in a virtual environment. A processor may process the detected gaze direction and/or movement as a user input, and may translate the user input into a corresponding interaction in the virtual environment. Gaze directed swipes on a virtual keyboard displayed in the virtual environment may be detected and tracked, and translated into a corresponding text input, either alone or together with user input(s) received by the controller. The user may also interact with other types of virtual interfaces in the virtual environment using gaze direction and movement to provide an input, either alone or together with a controller input.
Type: Application
Filed: December 21, 2016
Publication date: November 9, 2017
Inventors: Chris McKenzie, Chun Yat Frank Li, Hayes S. Raffle
-
Patent number: 9804682
Abstract: Embodiments described herein may provide a configuration of input interfaces used to perform multi-touch operations. An example device may involve: (a) a housing arranged on a head-mountable device, (b) a first input interface arranged on either a superior or an inferior surface of the housing, (c) a second input interface arranged on a surface of the housing that is opposite to the first input interface, and (d) a control system configured to: (1) receive first input data from the first input interface, where the first input data corresponds to a first input action, and in response, cause a camera to perform a first operation in accordance with the first input action, and (2) receive second input data from the second input interface, where the second input data corresponds to a second input action(s) on the second input interface, and in response, cause the camera to perform a second operation.
Type: Grant
Filed: February 1, 2016
Date of Patent: October 31, 2017
Assignee: Google Inc.
Inventors: Chun Yat Frank Li, Hayes Solos Raffle
-
Patent number: 9798517
Abstract: Embodiments may relate to intuitive user-interface features for a head-mountable device (HMD), in the context of a hybrid human and computer-automated response system. An illustrative method may involve an HMD that comprises a touchpad: (a) sending a speech-segment message to a hybrid response system, wherein the speech-segment message is indicative of a speech segment that is detected in audio data captured at the HMD, and wherein the speech segment is associated with a first user-account with the hybrid response system, (b) receiving a response message that includes a response to the speech-segment message and an indication of a next action corresponding to the response to the speech-segment message, (c) displaying a screen interface that includes an indication of the response, and (d) while displaying the response, detecting a singular touch gesture and responsively initiating the at least one next action.
Type: Grant
Filed: January 27, 2017
Date of Patent: October 24, 2017
Assignee: X Development LLC
Inventors: Chun Yat Frank Li, Daniel Rodriguez Magana, Thiago Teixeira, Charles Chen, Anand Agarawala
-
Publication number: 20170139672
Abstract: Embodiments may relate to intuitive user-interface features for a head-mountable device (HMD), in the context of a hybrid human and computer-automated response system. An illustrative method may involve an HMD that comprises a touchpad: (a) sending a speech-segment message to a hybrid response system, wherein the speech-segment message is indicative of a speech segment that is detected in audio data captured at the HMD, and wherein the speech segment is associated with a first user-account with the hybrid response system, (b) receiving a response message that includes a response to the speech-segment message and an indication of a next action corresponding to the response to the speech-segment message, (c) displaying a screen interface that includes an indication of the response, and (d) while displaying the response, detecting a singular touch gesture and responsively initiating the at least one next action.
Type: Application
Filed: January 27, 2017
Publication date: May 18, 2017
Inventors: Chun Yat Frank Li, Daniel Rodriguez Magana, Thiago Teixeira, Charles Chen, Anand Agarawala
-
Publication number: 20170139567
Abstract: Embodiments described herein may help to provide a lock-screen for a computing device. An example method involves: (a) displaying two or more rows of characters and an input region that is moveable over the rows of characters, (b) based on head-movement data, determining movement of the input region with respect to the rows of characters, (c) determining an input sequence, where the sequence includes one character from each of the rows of characters that is selected based at least in part on the one or more movements of the input region with respect to the rows of characters, (d) determining whether or not the input sequence matches a predetermined unlock sequence, and (e) if the input sequence matches the predetermined unlock sequence, then unlocking the computing device.
Type: Application
Filed: July 3, 2013
Publication date: May 18, 2017
Inventor: Chun Yat Frank Li
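The sequence-matching core of this lock screen is easy to sketch. In the Python below, the character rows, the stored unlock sequence, and the function names are all hypothetical; the sketch shows only the mechanism the abstract describes: one character is picked from each row by the position the head-driven input region settles on, and the device unlocks only when the assembled sequence matches the stored one.

```python
ROWS = ["ABCDE", "FGHIJ", "KLMNO"]   # illustrative character rows
UNLOCK_SEQUENCE = "BHM"              # hypothetical stored unlock code

def select_sequence(positions):
    # positions[i] is the index within row i where the input region
    # settled, as driven by head-movement data; one character per row.
    return "".join(row[p] for row, p in zip(ROWS, positions))

def try_unlock(positions):
    # Unlock only if the input sequence matches the predetermined one.
    return select_sequence(positions) == UNLOCK_SEQUENCE

unlocked = try_unlock([1, 2, 2])   # selects B, H, M
rejected = try_unlock([0, 0, 0])   # selects A, F, K -> no match
```

A production implementation would compare hashes rather than plaintext sequences and would smooth the head-movement signal before deciding where the input region has settled; both are out of scope for this sketch.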