Patents by Inventor Liang-Yu (Tom) Chi
Liang-Yu (Tom) Chi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190011982
Abstract: Methods and systems involving navigation of a graphical interface are disclosed herein. An example system may be configured to: (a) cause a head-mounted display (HMD) to provide a graphical interface, the graphical interface comprising (i) a view port having a view-port orientation and (ii) at least one navigable area having at least one border, the at least one border having a first border orientation; (b) receive input data that indicates movement of the view port towards the at least one border; (c) determine that the view-port orientation is within a predetermined threshold distance from the first border orientation; and (d) based on at least that determination, adjust the at least one border from the first border orientation to a second border orientation.
Type: Application
Filed: August 10, 2018
Publication date: January 10, 2019
Inventors: Aaron Wheeler, Liang-Yu (Tom) Chi, Sebastian Thrun, Hayes Solos Raffle, Nirmal Patel
-
Patent number: 10067559
Abstract: Methods and systems involving navigation of a graphical interface are disclosed herein. An example system may be configured to: (a) cause a head-mounted display (HMD) to provide a graphical interface, the graphical interface comprising (i) a view port having a view-port orientation and (ii) at least one navigable area having at least one border, the at least one border having a first border orientation; (b) receive input data that indicates movement of the view port towards the at least one border; (c) determine that the view-port orientation is within a predetermined threshold distance from the first border orientation; and (d) based on at least that determination, adjust the at least one border from the first border orientation to a second border orientation.
Type: Grant
Filed: May 27, 2014
Date of Patent: September 4, 2018
Assignee: Google LLC
Inventors: Aaron Wheeler, Liang-Yu (Tom) Chi, Sebastian Thrun, Hayes Solos Raffle, Nirmal Patel
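The claimed threshold test in steps (c) and (d) can be sketched in a few lines. The function name, angular units, and the use of an absolute-difference comparison below are illustrative assumptions, not the patented implementation:

```python
def adjust_border(view_port_orientation: float,
                  border_orientation: float,
                  threshold: float,
                  new_border_orientation: float) -> float:
    """Return the border orientation after one navigation step.

    If the view-port orientation has come within `threshold` degrees of
    the border orientation, the border is adjusted to
    `new_border_orientation`; otherwise it is left unchanged.
    """
    if abs(view_port_orientation - border_orientation) <= threshold:
        return new_border_orientation
    return border_orientation
```

For example, with a border at 90 degrees and a 10-degree threshold, a view port at 85 degrees triggers the adjustment, while a view port at 40 degrees does not.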
-
Patent number: 9911418
Abstract: Methods and apparatus related to processing speech input at a wearable computing device are disclosed. Speech input can be received at the wearable computing device. Speech-related text corresponding to the speech input can be generated. A context can be determined based on one or more databases and/or a history of accessed documents. An action can be determined based on an evaluation of at least a portion of the speech-related text and the context. The action can be a command or a search request. If the action is a command, the wearable computing device can generate output for the command. If the action is a search request, the wearable computing device can communicate the search request to a search engine, receive search results from the search engine, and generate output based on the search results. The output can be provided using one or more output components of the wearable computing device.
Type: Grant
Filed: November 8, 2016
Date of Patent: March 6, 2018
Assignee: Google LLC
Inventor: Liang-Yu (Tom) Chi
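The command-versus-search dispatch described above might look like the following toy sketch. The command-verb list and the first-word heuristic are invented placeholders, since the abstract does not specify how the evaluation of text and context works:

```python
# Hypothetical set of verbs that mark a command rather than a search.
COMMAND_VERBS = {"open", "call", "play", "record"}

def determine_action(speech_text: str, context: dict) -> tuple:
    """Classify speech-related text as a ('command', ...) or ('search', ...) action.

    `context` stands in for the database/document-history context from the
    abstract; this toy heuristic accepts it but only inspects the text.
    """
    words = speech_text.split()
    first_word = words[0].lower() if words else ""
    if first_word in COMMAND_VERBS:
        return ("command", speech_text)
    # Anything not recognized as a command falls back to a search request.
    return ("search", speech_text)
```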
-
Patent number: 9900676
Abstract: Exemplary wearable computing systems may include a head-mounted display that is configured to provide indirect bone-conduction audio. For example, an exemplary head-mounted display may include at least one vibration transducer that is configured to vibrate at least a portion of the head-mounted display based on an audio signal. The vibration transducer is configured such that when the head-mounted display is worn, the vibration transducer vibrates the head-mounted display without directly vibrating a wearer. However, the head-mounted display structure vibrationally couples to a bone structure of the wearer, such that vibrations from the vibration transducer may be indirectly transferred to the wearer's bone structure.
Type: Grant
Filed: March 10, 2016
Date of Patent: February 20, 2018
Assignee: Google LLC
Inventors: Jianchun Dong, Liang-Yu Tom Chi, Mitchell Heinrich, Leng Ooi
-
Patent number: 9727174
Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a "display hand" of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
Type: Grant
Filed: June 10, 2015
Date of Patent: August 8, 2017
Assignee: Google Inc.
Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
-
Publication number: 20170053648
Abstract: Methods and apparatus related to processing speech input at a wearable computing device are disclosed. Speech input can be received at the wearable computing device. Speech-related text corresponding to the speech input can be generated. A context can be determined based on one or more databases and/or a history of accessed documents. An action can be determined based on an evaluation of at least a portion of the speech-related text and the context. The action can be a command or a search request. If the action is a command, the wearable computing device can generate output for the command. If the action is a search request, the wearable computing device can communicate the search request to a search engine, receive search results from the search engine, and generate output based on the search results. The output can be provided using one or more output components of the wearable computing device.
Type: Application
Filed: November 8, 2016
Publication date: February 23, 2017
Inventor: Liang-Yu (Tom) Chi
-
Publication number: 20160267176
Abstract: An example method involves: (i) maintaining an attribute-association database comprising data for a set of data items, wherein the data for a given one of the data items specifies, for each of one or more attributes from a set of attributes, (a) an associative value and (b) a temporal-decay value; (ii) detecting an event, wherein the event is associated with one or more event attributes; (iii) determining a relevance of a given one of the data items to the detected event based on a selected one or more of (a) the associative values and (b) the temporal-decay values; (iv) selecting at least one data item, wherein the at least one data item is selected from a set of the data items based on the respective relevancies of the data items in the set; and (v) providing an indication of the at least one selected data item.
Type: Application
Filed: May 23, 2016
Publication date: September 15, 2016
Inventor: Liang-Yu (Tom) Chi
-
Publication number: 20160192048
Abstract: Exemplary wearable computing systems may include a head-mounted display that is configured to provide indirect bone-conduction audio. For example, an exemplary head-mounted display may include at least one vibration transducer that is configured to vibrate at least a portion of the head-mounted display based on an audio signal. The vibration transducer is configured such that when the head-mounted display is worn, the vibration transducer vibrates the head-mounted display without directly vibrating a wearer. However, the head-mounted display structure vibrationally couples to a bone structure of the wearer, such that vibrations from the vibration transducer may be indirectly transferred to the wearer's bone structure.
Type: Application
Filed: March 10, 2016
Publication date: June 30, 2016
Inventors: Jianchun Dong, Liang-Yu Tom Chi, Mitchell Heinrich, Leng Ooi
-
Patent number: 9367864
Abstract: Exemplary embodiments involve real-time commenting in experience-sharing sessions. An exemplary method involves: (a) a server system facilitating an experience-sharing session between a sharing device and one or more viewing devices, wherein the server system receives media in real time from the sharing device and transmits the media to the one or more viewing devices in real time, wherein the media comprises video; (b) during the experience-sharing session, the server system receiving one or more comments from one or more of the viewing devices; (c) the server system filtering the received comments in real time based on filter criteria; and (d) the server system initiating real-time delivery, to the sharing device, of one or more of the received comments that satisfy the filter criteria.
Type: Grant
Filed: March 19, 2015
Date of Patent: June 14, 2016
Assignee: Google Inc.
Inventors: Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Liang-Yu Tom Chi, Bradley James Rhodes
-
Patent number: 9355110
Abstract: An example method involves: (i) maintaining an attribute-association database comprising data for a set of data items, wherein the data for a given one of the data items specifies, for each of one or more attributes from a set of attributes, (a) an associative value and (b) a temporal-decay value; (ii) detecting an event, wherein the event is associated with one or more event attributes; (iii) determining a relevance of a given one of the data items to the detected event based on a selected one or more of (a) the associative values and (b) the temporal-decay values; (iv) selecting at least one data item, wherein the at least one data item is selected from a set of the data items based on the respective relevancies of the data items in the set; and (v) providing an indication of the at least one selected data item.
Type: Grant
Filed: March 16, 2012
Date of Patent: May 31, 2016
Assignee: Google Inc.
Inventor: Liang-Yu (Tom) Chi
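The associative-plus-temporal-decay scoring of step (iii) could be sketched as below. The exponential decay form and the per-attribute data layout are assumptions for illustration; the abstract only says that relevance is based on associative values and temporal-decay values:

```python
import math

def relevance(item_attrs: dict, event_attrs: set, now: float) -> float:
    """Score one data item against a detected event.

    `item_attrs` maps attribute -> (associative_value, decay_rate, last_seen).
    Each attribute shared with the event contributes its associative value,
    attenuated exponentially by the time elapsed since the attribute was
    last reinforced.
    """
    score = 0.0
    for attr in event_attrs:
        if attr in item_attrs:
            assoc, decay, last_seen = item_attrs[attr]
            score += assoc * math.exp(-decay * (now - last_seen))
    return score
```

Items would then be ranked by this score (step (iv)) and the top results surfaced to the user (step (v)).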
-
Publication number: 20160011724
Abstract: Methods and devices for providing a user-interface are disclosed. In one embodiment, the method comprises receiving data corresponding to a first position of a wearable computing device and responsively causing the wearable computing device to provide a user-interface. The user-interface comprises a view region and a menu, where the view region substantially fills a field of view of the wearable computing device and the menu is not fully visible in the view region. The method further comprises receiving data indicating a selection of an item present in the view region and causing an indicator to be displayed in the view region, wherein the indicator changes incrementally over a length of time. When the length of time has passed, the method comprises responsively causing the wearable computing device to select the item.
Type: Application
Filed: March 2, 2012
Publication date: January 14, 2016
Applicant: Google Inc.
Inventors: Aaron Joseph Wheeler, Sergey Brin, Thad Eugene Starner, Alejandro Kauffmann, Cliff L. Biffle, Liang-Yu (Tom) Chi, Steve Lee, Sebastian Thrun, Luis Ricardo Prada Gomez
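The incrementally changing indicator described above behaves like a dwell timer that fills up and then commits the selection. The class and method names here are hypothetical, a minimal sketch of that behavior:

```python
class DwellSelector:
    """Track an indicator that fills over a dwell time, then selects the item."""

    def __init__(self, dwell_seconds: float):
        self.dwell_seconds = dwell_seconds
        self.elapsed = 0.0

    def tick(self, dt: float) -> float:
        """Advance the timer by dt seconds; return indicator progress in [0, 1]."""
        self.elapsed = min(self.elapsed + dt, self.dwell_seconds)
        return self.elapsed / self.dwell_seconds

    def is_selected(self) -> bool:
        """True once the full dwell time has passed."""
        return self.elapsed >= self.dwell_seconds
```

A UI loop would call `tick()` each frame while the item stays under focus, render the returned progress as the on-screen indicator, and trigger the selection when `is_selected()` becomes true.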
-
Patent number: 9213185
Abstract: A wearable computing system may include a head-mounted display (HMD) with a display configured to display images viewable at a viewing location. When aligned with an HMD wearer's line of sight, the entire display area of the display may be within the HMD wearer's field of view. The area within which an HMD wearer's eye can move and still view the entire display area is termed an "eye box." However, if the HMD slips up or down, the display area may become obscured, such that the wearer can no longer see the entire image. By scaling or subsetting an image area within the display area, the effective eye box dimensions may increase. Further, in response to movements of the HMD with respect to the wearer, the image area can be adjusted to reduce effects such as vibration and slippage.
Type: Grant
Filed: May 8, 2012
Date of Patent: December 15, 2015
Assignee: Google Inc.
Inventors: Thad Eugene Starner, Adrian Wong, Yong Zhao, Chia-Jean Wang, Anurag Gupta, Liang-Yu (Tom) Chi
-
Publication number: 20150304253
Abstract: Exemplary embodiments involve real-time commenting in experience-sharing sessions. An exemplary method involves: (a) a server system facilitating an experience-sharing session between a sharing device and one or more viewing devices, wherein the server system receives media in real time from the sharing device and transmits the media to the one or more viewing devices in real time, wherein the media comprises video; (b) during the experience-sharing session, the server system receiving one or more comments from one or more of the viewing devices; (c) the server system filtering the received comments in real time based on filter criteria; and (d) the server system initiating real-time delivery, to the sharing device, of one or more of the received comments that satisfy the filter criteria.
Type: Application
Filed: March 19, 2015
Publication date: October 22, 2015
Inventors: Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Liang-Yu Tom Chi, Bradley James Rhodes
-
Publication number: 20150268799
Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a "display hand" of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
Type: Application
Filed: June 10, 2015
Publication date: September 24, 2015
Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
-
Patent number: 9069164
Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a "display hand" of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
Type: Grant
Filed: June 26, 2012
Date of Patent: June 30, 2015
Assignee: Google Inc.
Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
-
Patent number: 9015245
Abstract: Exemplary embodiments involve real-time commenting in experience-sharing sessions. An exemplary method involves: (a) a server system facilitating an experience-sharing session between a sharing device and one or more viewing devices, wherein the server system receives media in real time from the sharing device and transmits the media to the one or more viewing devices in real time, wherein the media comprises video; (b) during the experience-sharing session, the server system receiving one or more comments from one or more of the viewing devices; (c) the server system filtering the received comments in real time based on filter criteria; and (d) the server system initiating real-time delivery, to the sharing device, of one or more of the received comments that satisfy the filter criteria.
Type: Grant
Filed: December 8, 2011
Date of Patent: April 21, 2015
Assignee: Google Inc.
Inventors: Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Liang-Yu Tom Chi, Bradley James Rhodes
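The server-side comment-filtering step can be sketched as follows. The particular filter criteria used here (a banned-word set and a length cap) are invented placeholders, since the abstract leaves the criteria unspecified:

```python
def filter_comments(comments, banned_words=frozenset({"spam"}), max_len=140):
    """Return only the comments that satisfy the filter criteria.

    A comment passes if it is at most `max_len` characters and contains
    none of the banned words.
    """
    def passes(comment: str) -> bool:
        words = set(comment.lower().split())
        return len(comment) <= max_len and not (words & banned_words)

    return [c for c in comments if passes(c)]
```

In the described system, the surviving comments would then be delivered to the sharing device in real time as they arrive.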
-
Patent number: 8957916
Abstract: Methods and systems are disclosed herein that may help to present graphics in a see-through display of a head-mountable display. An exemplary method may involve: (a) receiving image data that is indicative of a real-world field of view associated with a head-mountable display (HMD); (b) analyzing the image data to determine at least one undesirable portion of the real-world field of view, wherein the at least one undesirable portion is undesirable as a background for display of at least one graphic object in a graphic display of the HMD; (c) determining at least one undesirable area of the graphic display that corresponds to the at least one undesirable portion of the real-world field of view; and (d) causing the at least one graphic object to be displayed in an area of the graphic display such that the graphic object substantially avoids the at least one undesirable area.
Type: Grant
Filed: March 23, 2012
Date of Patent: February 17, 2015
Assignee: Google Inc.
Inventors: Elliott Bruce Hedman, Liang-Yu Tom Chi, Aaron Joseph Wheeler
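One simplistic way to realize step (d), placing a graphic so that it avoids undesirable display areas, is a coarse grid scan over candidate positions. This strategy and every name below are assumptions for illustration, not the patented method:

```python
def place_graphic(display_w, display_h, obj_w, obj_h, undesirable):
    """Find a top-left (x, y) for an obj_w x obj_h graphic that avoids
    every undesirable rectangle (x, y, w, h) in `undesirable`.

    Scans candidate positions on a 10-pixel grid; returns None when no
    position fits.
    """
    def overlaps(ax, ay, aw, ah, bx, by, bw, bh):
        # Axis-aligned rectangle intersection test.
        return not (ax + aw <= bx or bx + bw <= ax or
                    ay + ah <= by or by + bh <= ay)

    step = 10
    for y in range(0, display_h - obj_h + 1, step):
        for x in range(0, display_w - obj_w + 1, step):
            if not any(overlaps(x, y, obj_w, obj_h, *r) for r in undesirable):
                return (x, y)
    return None
```

For instance, if the left half of a 200x100 display is marked undesirable, a 50x50 graphic lands somewhere in the right half.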
-
Patent number: 8947322
Abstract: Exemplary methods and systems relate to a wearable computing device determining a user-context and dynamically changing the content of a user-interface based on the determined user-context. The device may determine a user-context based on digital context, such as a text document a user is reading or a current website the device is accessing. User-context may also be based on physical context, such as the device's location or the air temperature around a user. Once a user-context is determined, a device may identify content that is related to the user-context and add objects representing this related content to a user-interface.
Type: Grant
Filed: March 19, 2012
Date of Patent: February 3, 2015
Assignee: Google Inc.
Inventors: Liang-Yu (Tom) Chi, Robert Allen Ryskamp, Aaron Joseph Wheeler, Luis Ricardo Prada Gomez
-
Patent number: 8942881
Abstract: Methods and apparatuses for gesture-based controls are disclosed. In one aspect, a method is disclosed that includes maintaining a correlation between a plurality of predetermined gestures, in combination with a plurality of predetermined regions of a vehicle, and a plurality of functions. The method further includes recording three-dimensional images of an interior portion of the vehicle and, based on the three-dimensional images, detecting a given gesture in a given region of the vehicle, where the given gesture corresponds to one of the plurality of predetermined gestures and the given region corresponds to one of the plurality of predetermined regions. The method still further includes selecting, based on the correlation, a function associated with the given gesture in combination with the given region and initiating the function in the vehicle.
Type: Grant
Filed: April 2, 2012
Date of Patent: January 27, 2015
Assignee: Google Inc.
Inventors: Nicholas Kenneth Hobbs, Liang-Yu (Tom) Chi
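The correlation between gesture/region combinations and vehicle functions maps naturally onto a lookup table keyed by (gesture, region) pairs. The example gestures, regions, and function names below are invented for illustration:

```python
# Hypothetical correlation table: (gesture, region) -> vehicle function.
CORRELATION = {
    ("swipe", "steering_wheel"): "next_radio_station",
    ("tap", "dashboard"): "toggle_hazard_lights",
    ("swipe", "armrest"): "adjust_seat",
}

def select_function(gesture: str, region: str):
    """Select the function for a detected gesture in a given vehicle region.

    Returns None when the gesture/region combination has no mapped
    function, i.e. the detection should be ignored.
    """
    return CORRELATION.get((gesture, region))
```

The same gesture can thus trigger different functions depending on where in the vehicle it is performed, which is the point of keying the table on the combination rather than on the gesture alone.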
-
Patent number: 8934015
Abstract: Disclosed are methods and apparatus for experience sharing for emergency situations. A wearable computing device can receive an indication of an emergency situation. In response to the indication, the wearable computing device can initiate an experience sharing session with one or more emergency contacts. During the experience sharing session, the wearable computing device can capture video data, add text to the captured video data, and transmit the captured video data and added text to the one or more emergency contacts.
Type: Grant
Filed: December 8, 2011
Date of Patent: January 13, 2015
Assignee: Google Inc.
Inventors: Liang-Yu Tom Chi, Steven John Lee, Indika Charles Mendis, Max Benjamin Braun, Luis Ricardo Prada Gomez