Patents by Inventor Liang-Yu (Tom) Chi

Liang-Yu (Tom) Chi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190011982
    Abstract: Methods and systems involving navigation of a graphical interface are disclosed herein. An example system may be configured to: (a) cause a head-mounted display (HMD) to provide a graphical interface, the graphical interface comprising (i) a view port having a view-port orientation and (ii) at least one navigable area having at least one border, the at least one border having a first border orientation; (b) receive input data that indicates movement of the view port towards the at least one border; (c) determine that the view-port orientation is within a predetermined threshold distance from the first border orientation; and (d) based on at least the determination that the view-port orientation is within the predetermined threshold distance from the first border orientation, adjust the at least one border from the first border orientation to a second border orientation.
    Type: Application
    Filed: August 10, 2018
    Publication date: January 10, 2019
    Inventors: Aaron Wheeler, Liang-Yu (Tom) Chi, Sebastian Thrun, Hayes Solos Raffle, Nirmal Patel
  • Patent number: 10067559
    Abstract: Methods and systems involving navigation of a graphical interface are disclosed herein. An example system may be configured to: (a) cause a head-mounted display (HMD) to provide a graphical interface, the graphical interface comprising (i) a view port having a view-port orientation and (ii) at least one navigable area having at least one border, the at least one border having a first border orientation; (b) receive input data that indicates movement of the view port towards the at least one border; (c) determine that the view-port orientation is within a predetermined threshold distance from the first border orientation; and (d) based on at least the determination that the view-port orientation is within the predetermined threshold distance from the first border orientation, adjust the at least one border from the first border orientation to a second border orientation.
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: September 4, 2018
    Assignee: Google LLC
    Inventors: Aaron Wheeler, Liang-Yu (Tom) Chi, Sebastian Thrun, Hayes Solos Raffle, Nirmal Patel
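The border adjustment described in the two entries above reduces to a small piece of logic: when the view-port orientation comes within a threshold of a border's orientation, move that border. A minimal Python sketch, assuming scalar orientations in degrees and an arbitrary shift amount (neither is specified by the abstract):

```python
def maybe_adjust_border(view_port_orientation, border_orientation,
                        threshold, shift):
    """Return a new border orientation when the view-port orientation comes
    within `threshold` of the border; otherwise leave the border unchanged."""
    if abs(view_port_orientation - border_orientation) <= threshold:
        # Shift the border away, effectively extending the navigable area.
        return border_orientation + shift
    return border_orientation
```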
  • Patent number: 9911418
    Abstract: Methods and apparatus related to processing speech input at a wearable computing device are disclosed. Speech input can be received at the wearable computing device. Speech-related text corresponding to the speech input can be generated. A context can be determined based on database(s) and/or a history of accessed documents. An action can be determined based on an evaluation of at least a portion of the speech-related text and the context. The action can be a command or a search request. If the action is a command, then the wearable computing device can generate output for the command. If the action is a search request, then the wearable computing device can: communicate the search request to a search engine, receive search results from the search engine, and generate output based on the search results. The output can be provided using output component(s) of the wearable computing device.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: March 6, 2018
    Assignee: Google LLC
    Inventor: Liang-Yu (Tom) Chi
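The command-versus-search dispatch in this abstract can be sketched as follows. The command set and the context handling here are invented for illustration; the patent leaves both open-ended:

```python
def handle_speech(text, context):
    """Classify speech-related text as a command or a search request and
    return the kind of action plus the output that would be generated."""
    commands = {"take a picture", "start recording"}  # assumed command set
    normalized = text.lower().strip()
    if normalized in commands:
        return "command", f"executing: {normalized}"
    # Anything else becomes a search request enriched with the context.
    query = f"{normalized} {context}".strip()
    return "search", f"searching for: {query}"
```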
  • Patent number: 9900676
    Abstract: Exemplary wearable computing systems may include a head-mounted display that is configured to provide indirect bone-conduction audio. For example, an exemplary head-mounted display may include at least one vibration transducer that is configured to vibrate at least a portion of the head-mounted display based on an audio signal. The vibration transducer is configured such that when the head-mounted display is worn, the vibration transducer vibrates the head-mounted display without directly vibrating a wearer. However, the head-mounted display structure vibrationally couples to a bone structure of the wearer, such that vibrations from the vibration transducer may be indirectly transferred to the wearer's bone structure.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: February 20, 2018
    Assignee: Google LLC
    Inventors: Jianchun Dong, Liang-Yu Tom Chi, Mitchell Heinrich, Leng Ooi
  • Patent number: 9727174
    Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a “display hand” of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
    Type: Grant
    Filed: June 10, 2015
    Date of Patent: August 8, 2017
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
  • Publication number: 20170053648
    Abstract: Methods and apparatus related to processing speech input at a wearable computing device are disclosed. Speech input can be received at the wearable computing device. Speech-related text corresponding to the speech input can be generated. A context can be determined based on database(s) and/or a history of accessed documents. An action can be determined based on an evaluation of at least a portion of the speech-related text and the context. The action can be a command or a search request. If the action is a command, then the wearable computing device can generate output for the command. If the action is a search request, then the wearable computing device can: communicate the search request to a search engine, receive search results from the search engine, and generate output based on the search results. The output can be provided using output component(s) of the wearable computing device.
    Type: Application
    Filed: November 8, 2016
    Publication date: February 23, 2017
    Inventor: Liang-Yu (Tom) Chi
  • Publication number: 20160267176
    Abstract: An example method involves: (i) maintaining an attribute-association database comprising data for a set of data items, wherein the data for a given one of the data items specifies, for each of one or more attributes from a set of attributes, (a) an associative value, and (b) a temporal-decay value; (ii) detecting an event, wherein the event is associated with one or more event attributes; (iii) determining a relevance of a given one of the data items to the detected event based on a selected one or more of (a) the associative values and (b) the temporal-decay values; (iv) selecting at least one data item, wherein the at least one data item is selected from a set of the data items based on the respective relevancies of the data items in the set; and (v) providing an indication of the at least one selected data item.
    Type: Application
    Filed: May 23, 2016
    Publication date: September 15, 2016
    Inventor: Liang-Yu (Tom) Chi
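The relevance computation in steps (i) through (iii) pairs each attribute's associative value with a temporal-decay value. One plausible reading, sketched below with exponential decay (the field names and the decay model are assumptions, not the patent's schema):

```python
import math

def relevance(item, event_attrs, now):
    """Score an item's relevance to an event: for each attribute shared
    with the event, weight the associative value by how much it has
    decayed since the attribute was last reinforced."""
    score = 0.0
    for attr in event_attrs:
        entry = item["attributes"].get(attr)
        if entry is None:
            continue  # the item has no association with this attribute
        age = now - entry["last_seen"]
        score += entry["associative_value"] * math.exp(-entry["decay_rate"] * age)
    return score
```

Items would then be ranked by this score in step (iv), with the top-scoring item indicated in step (v).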
  • Publication number: 20160192048
    Abstract: Exemplary wearable computing systems may include a head-mounted display that is configured to provide indirect bone-conduction audio. For example, an exemplary head-mounted display may include at least one vibration transducer that is configured to vibrate at least a portion of the head-mounted display based on an audio signal. The vibration transducer is configured such that when the head-mounted display is worn, the vibration transducer vibrates the head-mounted display without directly vibrating a wearer. However, the head-mounted display structure vibrationally couples to a bone structure of the wearer, such that vibrations from the vibration transducer may be indirectly transferred to the wearer's bone structure.
    Type: Application
    Filed: March 10, 2016
    Publication date: June 30, 2016
    Inventors: Jianchun Dong, Liang-Yu Tom Chi, Mitchell Heinrich, Leng Ooi
  • Patent number: 9367864
    Abstract: Exemplary embodiments involve real-time commenting in experience-sharing sessions. An exemplary method involves: (a) a server system facilitating an experience sharing session between a sharing device and one or more viewing devices, wherein the server system receives media in real-time from the sharing device and transmits the media to the one or more viewing devices in real-time, wherein the media comprises video; (b) during the experience sharing session, the server system receiving one or more comments from one or more of the viewing devices; (c) the server system filtering the received comments in real-time based on filter criteria; and (d) the server system initiating real-time delivery, to the sharing device, of one or more of the received comments that satisfy the filter criteria.
    Type: Grant
    Filed: March 19, 2015
    Date of Patent: June 14, 2016
    Assignee: Google Inc.
    Inventors: Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Liang-Yu Tom Chi, Bradley James Rhodes
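The server-side filtering of incoming comments against filter criteria, as described in this abstract, could look like the following sketch. The criteria here (a blocked-word list and a per-sender rate limit) are invented examples; the patent does not fix any particular criteria:

```python
def filter_comments(comments, blocked_words, max_per_sender):
    """Return the comments that satisfy the filter criteria, preserving
    arrival order. Each comment is a (sender, text) pair."""
    counts = {}
    delivered = []
    for sender, text in comments:
        if any(word in text.lower() for word in blocked_words):
            continue  # drop comments containing a blocked word
        counts[sender] = counts.get(sender, 0) + 1
        if counts[sender] > max_per_sender:
            continue  # rate-limit chatty viewers
        delivered.append((sender, text))
    return delivered
```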
  • Patent number: 9355110
    Abstract: An example method involves: (i) maintaining an attribute-association database comprising data for a set of data items, wherein the data for a given one of the data items specifies, for each of one or more attributes from a set of attributes, (a) an associative value, and (b) a temporal-decay value; (ii) detecting an event, wherein the event is associated with one or more event attributes; (iii) determining a relevance of a given one of the data items to the detected event based on a selected one or more of (a) the associative values and (b) the temporal-decay values; (iv) selecting at least one data item, wherein the at least one data item is selected from a set of the data items based on the respective relevancies of the data items in the set; and (v) providing an indication of the at least one selected data item.
    Type: Grant
    Filed: March 16, 2012
    Date of Patent: May 31, 2016
    Assignee: Google Inc.
    Inventor: Liang-Yu (Tom) Chi
  • Publication number: 20160011724
    Abstract: Methods and devices for providing a user-interface are disclosed. In one embodiment, the method comprises receiving data corresponding to a first position of a wearable computing device and responsively causing the wearable computing device to provide a user-interface. The user-interface comprises a view region and a menu, where the view region substantially fills a field of view of the wearable computing device and the menu is not fully visible in the view region. The method further comprises receiving data indicating a selection of an item present in the view region and causing an indicator to be displayed in the view region, wherein the indicator changes incrementally over a length of time. When the length of time has passed, the method comprises responsively causing the wearable computing device to select the item.
    Type: Application
    Filed: March 2, 2012
    Publication date: January 14, 2016
    Applicant: Google Inc.
    Inventors: Aaron Joseph Wheeler, Sergey Brin, Thad Eugene Starner, Alejandro Kauffmann, Cliff L. Biffle, Liang-Yu (Tom) Chi, Steve Lee, Sebastian Thrun, Luis Ricardo Prada Gomez
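The dwell-based selection in this entry, an indicator that fills incrementally until the item is selected, is straightforward to model. A minimal sketch, with the dwell interval as an assumed parameter:

```python
def dwell_progress(elapsed, dwell_time):
    """Return the fraction of the dwell interval that has elapsed (for
    drawing the incremental indicator) and whether the item is now
    selected, i.e. the indicator is full."""
    fraction = min(elapsed / dwell_time, 1.0)
    return fraction, fraction >= 1.0
```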
  • Patent number: 9213185
    Abstract: A wearable computing system may include a head-mounted display (HMD) with a display configured to display images viewable at a viewing location. When aligned with an HMD wearer's line of sight, the entire display area of the display may be within the HMD wearer's field of view. The area within which an HMD wearer's eye can move and still view the entire display area is termed an “eye box.” However, if the HMD slips up or down, the display area may become obscured, such that the wearer can no longer see the entire image. By scaling or subsetting an image area within the display area, the effective eye box dimensions may increase. Further, in response to movements of the HMD with respect to the wearer, the image area can be adjusted to reduce effects such as vibration and slippage.
    Type: Grant
    Filed: May 8, 2012
    Date of Patent: December 15, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Adrian Wong, Yong Zhao, Chia-Jean Wang, Anurag Gupta, Liang-Yu (Tom) Chi
  • Publication number: 20150304253
    Abstract: Exemplary embodiments involve real-time commenting in experience-sharing sessions. An exemplary method involves: (a) a server system facilitating an experience sharing session between a sharing device and one or more viewing devices, wherein the server system receives media in real-time from the sharing device and transmits the media to the one or more viewing devices in real-time, wherein the media comprises video; (b) during the experience sharing session, the server system receiving one or more comments from one or more of the viewing devices; (c) the server system filtering the received comments in real-time based on filter criteria; and (d) the server system initiating real-time delivery, to the sharing device, of one or more of the received comments that satisfy the filter criteria.
    Type: Application
    Filed: March 19, 2015
    Publication date: October 22, 2015
    Inventors: Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Liang-Yu Tom Chi, Bradley James Rhodes
  • Publication number: 20150268799
    Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a “display hand” of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
    Type: Application
    Filed: June 10, 2015
    Publication date: September 24, 2015
    Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
  • Patent number: 9069164
    Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a “display hand” of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: June 30, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
  • Patent number: 9015245
    Abstract: Exemplary embodiments involve real-time commenting in experience-sharing sessions. An exemplary method involves: (a) a server system facilitating an experience sharing session between a sharing device and one or more viewing devices, wherein the server system receives media in real-time from the sharing device and transmits the media to the one or more viewing devices in real-time, wherein the media comprises video; (b) during the experience sharing session, the server system receiving one or more comments from one or more of the viewing devices; (c) the server system filtering the received comments in real-time based on filter criteria; and (d) the server system initiating real-time delivery, to the sharing device, of one or more of the received comments that satisfy the filter criteria.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: April 21, 2015
    Assignee: Google Inc.
    Inventors: Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Liang-Yu Tom Chi, Bradley James Rhodes
  • Patent number: 8957916
    Abstract: Methods and systems are disclosed herein that may help to present graphics in a see-through display of a head-mountable display. An exemplary method may involve: (a) receiving image data that is indicative of a real-world field of view associated with a head-mountable display (HMD); (b) analyzing the image data to determine at least one undesirable portion of the real-world field of view, wherein the at least one undesirable portion is undesirable as a background for the display of at least one graphic object in a graphic display of the HMD; (c) determining at least one undesirable area of the graphic display that corresponds to the at least one undesirable portion of the real-world field of view; and (d) causing the at least one graphic object to be displayed in an area of the graphic display such that the graphic object substantially avoids the at least one undesirable area.
    Type: Grant
    Filed: March 23, 2012
    Date of Patent: February 17, 2015
    Assignee: Google Inc.
    Inventors: Elliott Bruce Hedman, Liang-Yu Tom Chi, Aaron Joseph Wheeler
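Step (d) of this abstract, placing a graphic so it avoids undesirable display areas, amounts to a placement search. A brute-force sketch over axis-aligned rectangles (the scanning strategy and the coordinate convention are assumptions, not details from the patent):

```python
def overlaps(r1, r2):
    """True when two (x0, y0, x1, y1) rectangles intersect."""
    ax0, ay0, ax1, ay1 = r1
    bx0, by0, bx1, by1 = r2
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def place_graphic(undesirable_areas, display_size, graphic_size):
    """Scan the display left-to-right, top-to-bottom and return the first
    position where the graphic avoids every undesirable area, or None."""
    dw, dh = display_size
    gw, gh = graphic_size
    for y in range(dh - gh + 1):
        for x in range(dw - gw + 1):
            rect = (x, y, x + gw, y + gh)
            if not any(overlaps(rect, area) for area in undesirable_areas):
                return rect
    return None
```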
  • Patent number: 8947322
    Abstract: Exemplary methods and systems relate to a wearable computing device determining a user-context and dynamically changing the content of a user-interface based on the determined user-context. The device may determine a user-context based on digital context, such as a text document a user is reading or a current website the device is accessing. User-context may also be based on physical context, such as the device's location or the air temperature around a user. Once a user-context is determined, a device may identify content that is related to the user-context and add objects representing this related content to a user-interface.
    Type: Grant
    Filed: March 19, 2012
    Date of Patent: February 3, 2015
    Assignee: Google Inc.
    Inventors: Liang-Yu (Tom) Chi, Robert Allen Ryskamp, Aaron Joseph Wheeler, Luis Ricardo Prada Gomez
  • Patent number: 8942881
    Abstract: Methods and apparatuses for gesture-based controls are disclosed. In one aspect, a method is disclosed that includes maintaining a correlation between a plurality of predetermined gestures, in combination with a plurality of predetermined regions of a vehicle, and a plurality of functions. The method further includes recording three-dimensional images of an interior portion of the vehicle and, based on the three-dimensional images, detecting a given gesture in a given region of the vehicle, where the given gesture corresponds to one of the plurality of predetermined gestures and the given region corresponds to one of the plurality of predetermined regions. The method still further includes selecting, based on the correlation, a function associated with the given gesture in combination with the given region and initiating the function in the vehicle.
    Type: Grant
    Filed: April 2, 2012
    Date of Patent: January 27, 2015
    Assignee: Google Inc.
    Inventors: Nicholas Kenneth Hobbs, Liang-Yu (Tom) Chi
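The core of this abstract is a correlation table keyed by (gesture, region) pairs. A minimal sketch; the gestures, regions, and function names below are invented examples, not from the patent:

```python
# Correlation between predetermined (gesture, region) pairs and functions.
CONTROLS = {
    ("swipe_down", "ceiling"): "close_sunroof",
    ("swipe_up", "ceiling"): "open_sunroof",
    ("tap", "dashboard"): "toggle_radio",
}

def dispatch(gesture, region, controls=CONTROLS):
    """Select the vehicle function for a detected gesture in a given
    region, or None when the combination is not in the correlation."""
    return controls.get((gesture, region))
```

In the full method, `gesture` and `region` would come from analyzing three-dimensional images of the vehicle interior; the returned function name would then be initiated in the vehicle.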
  • Patent number: 8934015
    Abstract: Disclosed are methods and apparatus for experience sharing for emergency situations. A wearable computing device can receive an indication of an emergency situation. In response to the indication, the wearable computing device can initiate an experience sharing session with one or more emergency contacts. During the experience sharing session, the wearable computing device can capture video data, add text to the captured video data, and transmit the captured video data and added text to the one or more emergency contacts.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: January 13, 2015
    Assignee: Google Inc.
    Inventors: Liang-Yu Tom Chi, Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Luis Ricardo Prada Gomez