Patents by Inventor Thad Eugene Starner

Thad Eugene Starner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150286877
    Abstract: Methods and devices for initiating a search are disclosed. In one embodiment, a method is disclosed that includes causing a camera on a wearable computing device to record video data, segmenting the video data into a number of layers and, based on the video data, detecting that a pointing object is in proximity to a first layer. The method further includes initiating a first search on the first layer. In another embodiment, a wearable computing device is disclosed that includes a camera configured to record video data, a processor, and data storage comprising instructions executable by the processor to segment the video data into a number of layers and, based on the video data, detect that a pointing object is in proximity to a first layer. The instructions are further executable by the processor to initiate a first search on the first layer.
    Type: Application
    Filed: May 28, 2015
    Publication date: October 8, 2015
    Inventors: Thad Eugene Starner, Irfan Essa
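
For illustration, a minimal sketch of the layer-based idea in publication 20150286877: partition a frame into coarse layers and trigger a search only when a pointing object comes close to one of them. The depth-band segmentation, the proximity radius, and all function names below are assumptions for the sketch, not the patented implementation.

```python
# Sketch only: partition a (fake) depth frame into coarse "layers" by depth
# band, then check whether a detected fingertip falls near one of them
# before triggering a search on that layer.
import numpy as np

def segment_into_layers(depth_frame: np.ndarray, band_width: float = 0.5):
    """Group pixels into depth bands; each band stands in for one 'layer'."""
    bands = np.floor(depth_frame / band_width).astype(int)
    return [(b, np.argwhere(bands == b)) for b in np.unique(bands)]

def pointing_object_near_layer(fingertip_rc, layer_pixels, radius_px=20):
    """True if the fingertip lies within radius_px of any pixel of the layer."""
    d = np.linalg.norm(layer_pixels - np.asarray(fingertip_rc), axis=1)
    return bool(np.any(d < radius_px))

def maybe_initiate_search(depth_frame, fingertip_rc, search_fn):
    for band, pixels in segment_into_layers(depth_frame):
        if pointing_object_near_layer(fingertip_rc, pixels):
            return search_fn(band)          # first layer the finger approaches
    return None

if __name__ == "__main__":
    frame = np.random.uniform(0.5, 3.0, size=(120, 160))   # fake depth data
    print(maybe_initiate_search(frame, (60, 80), lambda b: f"search layer {b}"))
```
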
  • Publication number: 20150268799
    Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a “display hand” of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
    Type: Application
    Filed: June 10, 2015
    Publication date: September 24, 2015
    Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
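
A minimal sketch of the virtual input device described in publication 20150268799, reduced to its selection logic: a keypad pattern is assumed to be projected onto a tracked rectangle on the display hand, and a fingertip position from some external tracker is hit-tested against the projected cells. The class, cell layout, and realignment step are illustrative assumptions.

```python
# Sketch only: hit-test a fingertip against a keypad pattern projected onto
# the "display hand", and re-anchor the pattern when the hand moves.
from dataclasses import dataclass

@dataclass
class ProjectedKeypad:
    origin: tuple       # top-left corner of the projected pattern (x, y), px
    cell: tuple         # cell size (w, h) in pixels
    labels: list        # rows of key labels, e.g. [["1", "2", "3"], ...]

    def key_at(self, x: float, y: float):
        """Return the key under an (x, y) image point, or None."""
        col = int((x - self.origin[0]) // self.cell[0])
        row = int((y - self.origin[1]) // self.cell[1])
        if 0 <= row < len(self.labels) and 0 <= col < len(self.labels[0]):
            return self.labels[row][col]
        return None

    def realign(self, new_origin):
        """Shift the pattern when the display hand moves."""
        self.origin = new_origin

if __name__ == "__main__":
    pad = ProjectedKeypad(origin=(100, 100), cell=(40, 40),
                          labels=[["1", "2", "3"], ["4", "5", "6"]])
    print(pad.key_at(185, 145))   # -> "6": third column, second row
    pad.realign((120, 110))       # hand moved; keep the pattern on the hand
    print(pad.key_at(185, 145))   # -> "2" after realignment
```
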
  • Publication number: 20150242414
    Abstract: Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving video data recorded by a camera on a wearable computing device, where the video data comprises at least a first frame and a second frame. The method further includes, based on the video data, detecting an area in the first frame that is at least partially bounded by a pointing device and, based on the video data, detecting in the second frame that the area is at least partially occluded by the pointing device. The method still further includes initiating a search on the area.
    Type: Application
    Filed: May 7, 2015
    Publication date: August 27, 2015
    Inventors: Thad Eugene Starner, Irfan Essa, Hayes Solos Raffle, Daniel Aminzade
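
A minimal sketch of the two-frame trigger in publication 20150242414. It assumes the pointing device has already been segmented into a binary mask per frame and approximates the "bounded area" with the box the pointer traces, which is a simplification of the patented method.

```python
# Sketch only: frame 1 gives an area the pointer partially bounds, frame 2
# checks whether the pointer now covers that area, and only then is a
# search initiated on it.
import numpy as np

def bounded_area(pointer_mask: np.ndarray, pad: int = 5):
    """Box around the region the pointer delimits in the first frame."""
    ys, xs = np.nonzero(pointer_mask)
    return (ys.min() - pad, xs.min() - pad, ys.max() + pad, xs.max() + pad)

def area_occluded(pointer_mask: np.ndarray, box, min_cover: float = 0.3):
    """True if the pointer covers at least min_cover of the box in frame two."""
    y0, x0, y1, x1 = box
    region = pointer_mask[max(y0, 0):y1, max(x0, 0):x1]
    return region.mean() >= min_cover

def maybe_search(frame1_pointer, frame2_pointer, search_fn):
    box = bounded_area(frame1_pointer)
    if area_occluded(frame2_pointer, box):
        return search_fn(box)
    return None

if __name__ == "__main__":
    f1 = np.zeros((100, 100), bool); f1[20:25, 20:60] = True   # pointer underlines a region
    f2 = np.zeros((100, 100), bool); f2[15:40, 20:60] = True   # pointer now covers it
    print(maybe_search(f1, f2, lambda box: f"search region {box}"))
```
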
  • Publication number: 20150227795
    Abstract: Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving video data from a camera on a wearable computing device and, based on the video data, detecting a movement that defines an outline of an area in the video data. The method further includes identifying an object that is located in the area and initiating a search on the object. In another embodiment, a server is disclosed that includes an interface configured to receive video data from a camera on a wearable computing device, at least one processor, and data storage comprising instructions executable by the at least one processor to detect, based on the video data, a movement that defines an outline of an area in the video data, identify an object that is located in the area, and initiate a search on the object.
    Type: Application
    Filed: February 20, 2012
    Publication date: August 13, 2015
    Applicant: GOOGLE INC.
    Inventors: Thad Eugene Starner, Irfan Essa
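
A minimal server-side sketch of the flow in publication 20150227795: decide that a traced pointer path closes into an outline, crop the frame to that outline, and hand the crop to a recognizer before searching. The closure test, cropping, and callback names are illustrative assumptions.

```python
# Sketch only: a path "defines an outline" here when its end returns near
# its start; the enclosed object is then cropped out and searched.
import numpy as np

def path_closes(path, tol: float = 15.0) -> bool:
    path = np.asarray(path, dtype=float)
    return len(path) > 3 and np.linalg.norm(path[0] - path[-1]) < tol

def crop_outline(frame: np.ndarray, path):
    p = np.asarray(path, dtype=int)
    (x0, y0), (x1, y1) = p.min(axis=0), p.max(axis=0)
    return frame[y0:y1 + 1, x0:x1 + 1]

def handle_video_gesture(frame, path, identify_fn, search_fn):
    if not path_closes(path):
        return None
    obj = identify_fn(crop_outline(frame, path))   # e.g. an image recognizer
    return search_fn(obj)

if __name__ == "__main__":
    frame = np.zeros((100, 100))
    loop = [(40, 40), (60, 40), (60, 60), (40, 60), (41, 41)]   # roughly closed loop
    print(handle_video_gesture(frame, loop,
                               identify_fn=lambda crop: f"object {crop.shape}",
                               search_fn=lambda obj: f"searching {obj}"))
```
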
  • Publication number: 20150193977
    Abstract: Exemplary methods and systems are disclosed that provide for the detection and recognition of target devices, by a mobile computing device, within a pre-defined local environment. An exemplary method may involve (a) receiving, at a mobile computing device, a local-environment message corresponding to a pre-defined local environment that may comprise (i) physical-layout information of the pre-defined local environment or (ii) an indication of a target device located in the pre-defined local environment, (b) receiving image data that is indicative of a field-of-view associated with the mobile computing device, (c) based at least in part on the physical-layout information in the local-environment message, locating the target device in the field-of-view, and (d) causing the mobile computing device to display a virtual control interface for the target device in a location within the field-of-view that is associated with the location of the target device in the field-of-view.
    Type: Application
    Filed: August 31, 2012
    Publication date: July 9, 2015
    Applicant: GOOGLE INC.
    Inventors: Michael Patrick Johnson, Thad Eugene Starner
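
A minimal sketch of the flow in publication 20150193977, under the assumption that the local-environment message arrives as a small record of device positions in room coordinates and that the device exposes its field of view as a bounding box. The record layout and the display callback are invented for illustration.

```python
# Sketch only: place a virtual control interface over each target device
# whose layout position falls inside the current field of view.
from dataclasses import dataclass

@dataclass
class TargetDevice:
    name: str
    position: tuple          # (x, y) location within the room layout

@dataclass
class LocalEnvironmentMessage:
    room_id: str
    devices: list            # list[TargetDevice]

def devices_in_view(msg: LocalEnvironmentMessage, fov_box):
    """Return devices whose layout position falls inside the field of view."""
    x0, y0, x1, y1 = fov_box
    return [d for d in msg.devices
            if x0 <= d.position[0] <= x1 and y0 <= d.position[1] <= y1]

def render_virtual_controls(msg, fov_box, display):
    for device in devices_in_view(msg, fov_box):
        display(f"control panel for {device.name}", at=device.position)

if __name__ == "__main__":
    msg = LocalEnvironmentMessage("living-room", [
        TargetDevice("thermostat", (2.0, 1.5)),
        TargetDevice("tv", (8.0, 1.0)),
    ])
    render_virtual_controls(msg, fov_box=(0, 0, 5, 3),
                            display=lambda text, at: print(at, text))
```
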
  • Patent number: 9069382
    Abstract: Methods and devices for initiating a search are disclosed. In one embodiment, a method is disclosed that includes receiving video data captured by an image-capture device on a wearable computing device, segmenting the video data into a number of layers and, based on the video data, detecting that a pointing object is in proximity to a first layer. The method further includes initiating a first search on the first layer. In another embodiment, a wearable computing device is disclosed that includes an interface configured to receive video data captured by an image-capture device, a processor, and data storage comprising instructions executable by the processor to segment the video data into a number of layers and, based on the video data, detect that a pointing object is in proximity to a first layer. The instructions are further executable by the processor to initiate a first search on the first layer.
    Type: Grant
    Filed: March 12, 2012
    Date of Patent: June 30, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Irfan Essa
  • Patent number: 9069164
    Abstract: The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a “display hand” of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: June 30, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez
  • Publication number: 20150177981
    Abstract: Described techniques enable a computing system to receive character string input (e.g., words, numbers, mathematical expressions, symbolic strings, etc.) by detecting and interpreting an input movement across a user-interface. A touch-based computing system may, for instance, detect an input movement by tracking the path of a pointing element (e.g., a stylus or finger) as it is dragged across a contact-sensitive input surface (e.g., a touch-sensitive screen or external touch pad). Then, the system may interpret the detected input movement using Hidden Markov Modeling.
    Type: Application
    Filed: September 30, 2012
    Publication date: June 25, 2015
    Applicant: GOOGLE INC.
    Inventors: Thad Eugene Starner, Nirmal Patel, Shumin Zhai
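
Publication 20150177981 names Hidden Markov Modeling as the interpretation step, so a standard Viterbi decoder illustrates the general technique. The tiny two-state model and the quantized "stroke direction" observations below are invented purely for the example; they are not the states or features used on the device.

```python
# Sketch only: Viterbi decoding over a toy HMM that labels samples of an
# input movement as either "in a character" or "between characters".
import numpy as np

states = ["in_character", "between_characters"]
start_p = np.log([0.9, 0.1])
trans_p = np.log([[0.8, 0.2],       # P(next state | in_character)
                  [0.6, 0.4]])      # P(next state | between_characters)
emit_p = np.log([[0.7, 0.3],        # P(observation | in_character)
                 [0.2, 0.8]])       # P(observation | between_characters)

def viterbi(observations):
    """Most likely state sequence for a list of observation indices."""
    T, N = len(observations), len(states)
    score = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    score[0] = start_p + emit_p[:, observations[0]]
    for t in range(1, T):
        for j in range(N):
            cand = score[t - 1] + trans_p[:, j]
            back[t, j] = int(np.argmax(cand))
            score[t, j] = cand[back[t, j]] + emit_p[j, observations[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[i] for i in reversed(path)]

if __name__ == "__main__":
    # 0 = smooth stroke segment, 1 = abrupt direction change (both invented)
    print(viterbi([0, 0, 1, 0, 0, 1]))
```
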
  • Patent number: 9064436
    Abstract: Methods and systems for text input are provided. In one example, a head-mountable device (HMD) having a touch interface may be configured to receive touch inputs from a user to enter text. The touch interface may include input areas corresponding to characters. The HMD may be configured to determine characters and words the user wishes to enter according to different touch inputs, including land inputs, lift inputs, flick inputs, drag inputs, tap inputs, and scratch inputs. In one case, the HMD may determine subsets of characters for each letter in a word the user wishes to enter, and determine the word the user wishes to enter according to the subsets of characters. In another case, the HMD may determine a vector array corresponding to the word the user wishes to enter, and determine the word the user wishes to enter by comparing the vector array against word vector templates.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: June 23, 2015
    Assignee: Google Inc.
    Inventors: Nirmal Patel, Thad Eugene Starner
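
A minimal sketch of the "vector array versus word vector templates" idea from patent 9064436, using a toy one-row key layout, a fixed-length resampling step, and cosine similarity. All three of those choices are illustrative assumptions, not the comparison the HMD actually performs.

```python
# Sketch only: build a direction-vector template per word, resample the
# user's traced vector array to the same length, and pick the closest word.
import numpy as np

KEY_POS = {c: (i * 1.0, 0.0) for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

def word_template(word):
    """Direction vectors between consecutive key centres for a word."""
    pts = np.array([KEY_POS[c] for c in word])
    return np.diff(pts, axis=0)

def resample(vectors, n=8):
    """Resample a vector array to a fixed length so arrays are comparable."""
    idx = np.linspace(0, len(vectors) - 1, n)
    return np.array([vectors[int(round(i))] for i in idx])

def score(input_vectors, word):
    a = resample(np.asarray(input_vectors, float)).ravel()
    b = resample(word_template(word)).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def best_word(input_vectors, vocabulary):
    return max(vocabulary, key=lambda w: score(input_vectors, w))

if __name__ == "__main__":
    traced = word_template("cab") + np.random.normal(0, 0.05, (2, 2))  # noisy trace
    print(best_word(traced, ["cab", "ace", "bad", "fad"]))             # -> "cab"
```
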
  • Publication number: 20150160461
    Abstract: Example methods and devices are disclosed for generating life-logs with point-of-view images. An example method may involve: receiving image-related data based on electromagnetic radiation reflected from a human eye, generating an eye reflection image based on the image-related data, generating a point-of-view image by filtering the eye reflection image, and storing the point-of-view image. The electromagnetic radiation reflected from a human eye can be captured using one or more video or still cameras associated with a suitably-configured computing device, such as a wearable computing device.
    Type: Application
    Filed: February 20, 2015
    Publication date: June 11, 2015
    Inventors: Thad Eugene Starner, Hayes Solos Raffle, Yong Zhao
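
A minimal sketch of the "filter the eye reflection into a point-of-view image" step from publication 20150160461: keep a circular corneal region, un-mirror it, and stretch its contrast. A real pipeline would also unwarp the corneal curvature; the region parameters here are illustrative assumptions.

```python
# Sketch only: mask the corneal region of an eye image, flip the mirrored
# reflection back, and normalize it into a rough point-of-view image.
import numpy as np

def filter_eye_reflection(reflection: np.ndarray, cx: int, cy: int, radius: int):
    """reflection: 2-D grayscale frame captured off the eye."""
    h, w = reflection.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    pov = np.where(mask, reflection.astype(float), np.nan)
    pov = pov[:, ::-1]                      # the reflection is mirrored; flip it back
    lo, hi = np.nanmin(pov), np.nanmax(pov)
    return np.nan_to_num((pov - lo) / (hi - lo + 1e-9))   # contrast-stretched POV image

if __name__ == "__main__":
    eye_frame = np.random.rand(120, 160)    # stand-in for a camera frame
    pov = filter_eye_reflection(eye_frame, cx=80, cy=60, radius=40)
    print(pov.shape, float(pov.max()))
```
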
  • Patent number: 9052804
    Abstract: Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving video data recorded by a camera on a wearable computing device, where the video data comprises at least a first frame and a second frame. The method further includes, based on the video data, detecting an area in the first frame that is at least partially bounded by a pointing device and, based on the video data, detecting in the second frame that the area is at least partially occluded by the pointing device. The method still further includes initiating a search on the area.
    Type: Grant
    Filed: March 12, 2012
    Date of Patent: June 9, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Irfan Essa, Hayes Solos Raffle, Daniel Aminzade
  • Publication number: 20150154940
    Abstract: Example methods and systems for determining correlated movements associated with movements caused by driving a vehicle are provided. In an example, a computer-implemented method includes identifying a threshold number of sets of correlated movements. The method further includes determining that the threshold number of sets of correlated movements is associated with movements caused by driving a vehicle. The method still further includes causing the wearable computing system to select a driving user interface for the wearable computing system.
    Type: Application
    Filed: February 3, 2015
    Publication date: June 4, 2015
    Inventors: Joshua Weaver, Thad Eugene Starner
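
A minimal sketch of the decision flow in publication 20150154940: count time windows in which two movement signals are strongly correlated and, past a threshold count, switch to a driving interface. The window size, correlation cutoff, and the idea of comparing head motion against a reference vibration signal are all assumptions made for the example.

```python
# Sketch only: tally correlated windows between two motion signals and pick
# a user interface once enough of them look vehicle-induced.
import numpy as np

def correlated_windows(signal_a, signal_b, window=50, min_corr=0.8):
    count = 0
    for start in range(0, len(signal_a) - window + 1, window):
        a = signal_a[start:start + window]
        b = signal_b[start:start + window]
        if abs(np.corrcoef(a, b)[0, 1]) >= min_corr:
            count += 1
    return count

def select_interface(signal_a, signal_b, threshold_windows=5):
    if correlated_windows(signal_a, signal_b) >= threshold_windows:
        return "driving_ui"        # simplified, glanceable interface
    return "default_ui"

if __name__ == "__main__":
    t = np.linspace(0, 10, 500)
    road = np.sin(7 * t) + 0.1 * np.random.randn(500)    # vehicle vibration
    head = 0.8 * road + 0.1 * np.random.randn(500)       # head motion follows vehicle
    print(select_interface(head, road))                  # -> "driving_ui"
```
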
  • Publication number: 20150097772
    Abstract: A computing device may receive an eye-tracking signal or gaze signal from an eye-tracking device. The gaze signal may include information indicative of observed movement of an eye. The computing device may make a determination that movement of the eye derived from analyzing the received gaze signal violates a set of rules for eye movement, where the set of rules may be based on an analytical model of eye movement. In response to making the determination, the computing device may provide an indication that the received gaze signal contains unreliable eye-movement information for at least one computer-implemented application that uses measured eye movement as an input.
    Type: Application
    Filed: November 9, 2012
    Publication date: April 9, 2015
    Inventor: Thad Eugene Starner
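
A minimal sketch of the rule-checking idea in publication 20150097772. The two rules below, a peak-velocity cap and a bound on how far the eye can rotate, and their numeric limits are illustrative stand-ins for whatever analytical model the patent actually contemplates.

```python
# Sketch only: flag a gaze signal as unreliable if it violates simple
# physiological rules for eye movement.
import numpy as np

MAX_SACCADE_VELOCITY_DEG_S = 900.0    # rough physiological ceiling (assumed)
MAX_ECCENTRICITY_DEG = 55.0           # rough oculomotor range per axis (assumed)

def gaze_signal_reliable(gaze_deg: np.ndarray, sample_rate_hz: float) -> bool:
    """gaze_deg: (T, 2) horizontal/vertical eye rotation in degrees."""
    if np.any(np.abs(gaze_deg) > MAX_ECCENTRICITY_DEG):
        return False                               # outside the eye's range
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * sample_rate_hz
    return not np.any(velocity > MAX_SACCADE_VELOCITY_DEG_S)

if __name__ == "__main__":
    t = np.linspace(0, 1, 120)
    smooth = np.stack([10 * np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
    jumpy = smooth.copy()
    jumpy[60] += np.array([40.0, 0.0])         # implausible single-sample jump
    print(gaze_signal_reliable(smooth, 120))   # True
    print(gaze_signal_reliable(jumpy, 120))    # False: velocity rule violated
```
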
  • Publication number: 20150098620
    Abstract: Methods and systems are described for determining eye position and/or for determining eye movement based on glints. An exemplary computer-implemented method involves: (a) causing a camera that is attached to a head-mounted display (HMD) to record a video of the eye; (b) while the video of the eye is being recorded, causing a plurality of light sources that are attached to the HMD and generally directed towards the eye to switch on and off according to a predetermined pattern, wherein the predetermined pattern is such that at least two of the light sources are switched on at any given time while the video of the eye is being recorded; (c) analyzing the video of the eye to detect controlled glints that correspond to the plurality of light sources; and (d) determining a measure of eye position based on the controlled glints.
    Type: Application
    Filed: December 16, 2014
    Publication date: April 9, 2015
    Applicant: GOOGLE INC.
    Inventors: Bo Wu, Thad Eugene Starner, Hayes Solos Raffle, Yong Zhao, Edward Allen Keyes
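
A minimal sketch of the controlled-glint idea in publication 20150098620: detected bright spots that blink in step with the known on/off schedule of the light sources are kept, while static stray reflections are rejected. Glint detection itself is abstracted away, and the data layout and exact-match rule are assumptions for the example.

```python
# Sketch only: match each detected spot's visibility pattern against the
# predetermined LED schedule to isolate the controlled glints.
import numpy as np

def controlled_glints(detections_per_frame, led_schedule):
    """
    detections_per_frame: list over frames of {spot_id: (x, y)} dicts.
    led_schedule: {led_id: [on/off per frame]} -- the predetermined pattern.
    Returns {led_id: mean (x, y) position of its matching glint}.
    """
    spot_ids = {s for frame in detections_per_frame for s in frame}
    result = {}
    for led, schedule in led_schedule.items():
        for spot in spot_ids:
            visible = [spot in frame for frame in detections_per_frame]
            if visible == schedule:                 # visibility tracks this LED
                pts = [frame[spot] for frame in detections_per_frame if spot in frame]
                result[led] = tuple(float(v) for v in np.mean(pts, axis=0))
    return result

if __name__ == "__main__":
    frames = [{"a": (10, 12), "x": (50, 50)},        # stray reflection "x" never blinks
              {"a": (11, 12), "b": (30, 8), "x": (50, 50)},
              {"b": (31, 9), "x": (50, 50)}]
    schedule = {"led_1": [True, True, False], "led_2": [False, True, True]}
    print(controlled_glints(frames, schedule))       # a -> led_1, b -> led_2; x rejected
```
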
  • Patent number: 8997013
    Abstract: In one example, a method includes outputting, by a computing device and for display, a graphical user interface comprising a first graphical keyboard comprising a first plurality of keys. The method further includes determining, based at least in part on an input context, to output a second graphical keyboard comprising a second plurality of keys, and outputting, for contemporaneous display with the first graphical keyboard, the second graphical keyboard. A character associated with at least one key from the second plurality of keys may be different than each character associated with each key from the first plurality of keys. The method further includes selecting, based at least in part on a first portion of a continuous gesture, a first key from the first graphical keyboard, and selecting, based at least in part on a second portion of the continuous gesture, a second key from the second graphical keyboard.
    Type: Grant
    Filed: May 31, 2013
    Date of Patent: March 31, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Nirmal Patel, Shumin Zhai
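
A minimal sketch of the two-keyboard gesture idea in patent 8997013: two keyboards are displayed at once, and a single continuous gesture selects keys from whichever keyboard each sampled point falls in. The layouts, hit-testing, and "one key per distinct dwell" rule are illustrative assumptions.

```python
# Sketch only: walk a continuous gesture path and collect keys from either
# of two contemporaneously displayed keyboards.
from dataclasses import dataclass

@dataclass
class Keyboard:
    keys: dict   # label -> (x0, y0, x1, y1) bounding box on screen

    def hit(self, x, y):
        for label, (x0, y0, x1, y1) in self.keys.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return label
        return None

def keys_along_gesture(points, keyboards):
    selected = []
    for x, y in points:
        for kb in keyboards:
            label = kb.hit(x, y)
            if label and (not selected or selected[-1] != label):
                selected.append(label)
                break
    return selected

if __name__ == "__main__":
    letters = Keyboard({"a": (0, 0, 10, 10), "b": (10, 0, 20, 10)})
    symbols = Keyboard({"@": (0, 20, 10, 30), "#": (10, 20, 20, 30)})
    gesture = [(5, 5), (6, 6), (5, 25), (15, 5)]   # one continuous drag
    print(keys_along_gesture(gesture, [letters, symbols]))   # ['a', '@', 'b']
```
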
  • Patent number: 8982471
    Abstract: A wearable computing system may include a head-mounted display (HMD) and an optical system with a display panel configured to generate images. The optical system may include an optical element that is adjustable between at least a first configuration and a second configuration. When the optical element is in the first configuration, the images generated by the display panel are viewable at an internal viewing location. When the optical element is in the second configuration, the images generated by the display panel are projected externally from the HMD. For example, the location, refractive index, reflectance, opacity, and/or polarization of the optical element could be adjusted.
    Type: Grant
    Filed: April 16, 2012
    Date of Patent: March 17, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Chia-Jean Wang
  • Patent number: 8971571
    Abstract: Methods and devices for initiating, updating, and displaying the results of a search of an object-model database are disclosed. In one embodiment, a method is disclosed that includes receiving video data recorded by a camera on a wearable computing device and, based on the video data, detecting a movement corresponding to a selection of an object. The method further includes, before the movement is complete, initiating a search on the object of an object-model database. The method still further includes, during the movement, periodically updating the search and causing the wearable computing device to overlay the object with object-models from the database corresponding to results of the search.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: March 3, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Irfan Essa
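
A minimal sketch of the incremental-search idea in patent 8971571: as the selection gesture grows, re-run a query against a toy object-model "database" and hand the running best matches to an overlay callback. The feature used for ranking (just the selection's aspect ratio) and the database are invented for illustration.

```python
# Sketch only: periodically re-query object models before the selection
# gesture completes, overlaying the current candidates each time.
def aspect_ratio(points):
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = max(xs) - min(xs) or 1, max(ys) - min(ys) or 1
    return w / h

OBJECT_MODELS = {"mug": 0.8, "book": 0.7, "monitor": 1.6, "pen": 6.0}

def query_models(partial_points, top_k=2):
    target = aspect_ratio(partial_points)
    ranked = sorted(OBJECT_MODELS, key=lambda m: abs(OBJECT_MODELS[m] - target))
    return ranked[:top_k]

def track_selection(gesture_points, overlay, update_every=3):
    """Re-query before the gesture finishes and overlay the running results."""
    for i in range(update_every, len(gesture_points) + 1, update_every):
        overlay(query_models(gesture_points[:i]))

if __name__ == "__main__":
    gesture = [(0, 0), (4, 0), (8, 1), (9, 4), (9, 6), (1, 6)]   # roughly 1.5:1 box
    track_selection(gesture, overlay=lambda models: print("overlay:", models))
```
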
  • Patent number: 8963806
    Abstract: A head-mountable device configured to authenticate a wearer is disclosed. The head-mountable device can receive an indication of an eye gesture from at least one proximity sensor in the head-mountable device configured to generate sensor data indicative of light reflected from an eye area. The head-mountable device can capture biometric information indicative of one or more biometric identifiers of a wearer of the head-mountable device responsive to receiving the indication of the eye gesture. The head-mountable device can authenticate the wearer of the head-mountable device based on a comparison of the captured biometric information and a stored biometric profile.
    Type: Grant
    Filed: October 29, 2012
    Date of Patent: February 24, 2015
    Assignee: Google Inc.
    Inventors: Thad Eugene Starner, Michael Patrick Johnson, Antonio Bernardo Monteiro Costa
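
A minimal sketch of the trigger-then-verify flow in patent 8963806. The wink detector over proximity-sensor samples, the biometric "feature vector", and the distance-based match are all illustrative stand-ins for the device's real pipeline.

```python
# Sketch only: capture and compare biometric data only after an eye gesture
# is seen in the proximity-sensor signal.
import numpy as np

def eye_gesture_detected(proximity_samples, drop=0.3):
    """Very rough wink detector: reflected-light level dips, then recovers."""
    s = np.asarray(proximity_samples, float)
    return s.min() < s[0] - drop and s[-1] > s.min() + drop

def authenticate(proximity_samples, capture_biometric, stored_profile,
                 max_distance=0.5):
    if not eye_gesture_detected(proximity_samples):
        return False                      # no unlock gesture, so capture nothing
    sample = np.asarray(capture_biometric(), float)
    return bool(np.linalg.norm(sample - stored_profile) <= max_distance)

if __name__ == "__main__":
    wink = [1.0, 0.9, 0.4, 0.35, 0.8, 1.0]        # sensor level dips during the wink
    profile = np.array([0.2, 0.7, 0.1])
    print(authenticate(wink, lambda: [0.25, 0.68, 0.12], profile))   # True
```
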
  • Patent number: 8958599
    Abstract: A method may involve analyzing eye-image data to determine observed movement of a reflected image over a predetermined period, using head-movement data to determine a first eye-movement component corresponding to head movement during the predetermined period, determining a first expected movement of the reflected image corresponding to the first eye-movement component, determining a second eye-movement component based on a difference between the observed movement and the first expected movement, determining a second expected movement of the reflected image based on the combination of the first eye-movement component and the second eye-movement component, determining a difference between the observed movement and the second expected movement, if the difference is less than a threshold, setting eye-movement data for the predetermined period based on the second eye-movement component; and if the difference is greater than the threshold, adjusting the first eye-movement component and repeating the method with the adjusted first eye-movement component.
    Type: Grant
    Filed: October 8, 2012
    Date of Patent: February 17, 2015
    Assignee: Google Inc.
    Inventor: Thad Eugene Starner
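
A minimal numeric sketch of the iterative loop that patent 8958599's abstract walks through, in one dimension with a toy linear "reflection moves opposite to the eye" model. For simplicity the damped correction is applied to the voluntary component rather than the first component; the gain, damping, and threshold values are assumptions.

```python
# Sketch only: split an observed reflection shift into a head-driven part
# and a voluntary part, iterating until expected and observed movements agree.
REFLECTION_GAIN = -1.0   # toy model: reflected image shifts opposite to eye rotation

def expected_reflection(eye_movement):
    return REFLECTION_GAIN * eye_movement

def decompose(observed, head_movement, threshold=1e-4, damping=0.5, max_iters=50):
    first = -head_movement            # vestibulo-ocular counter-rotation estimate
    second = 0.0
    for _ in range(max_iters):
        residual = observed - expected_reflection(first + second)
        if abs(residual) < threshold:
            return first, second      # expected and observed movements now agree
        second += damping * residual / REFLECTION_GAIN   # adjust and try again
    return first, second

if __name__ == "__main__":
    head = 3.0                                   # degrees of head rotation
    true_voluntary = 2.0                         # degrees of voluntary eye rotation
    observed = expected_reflection(-head + true_voluntary)
    print(decompose(observed, head))             # ~(-3.0, 2.0)
```
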
  • Patent number: 8955973
    Abstract: Exemplary methods and systems help provide for tracking an eye. An exemplary method may involve: causing the projection of a pattern onto an eye, wherein the pattern comprises at least one line, and receiving data regarding deformation of the at least one line of the pattern. The method further includes correlating the data to iris, sclera, and pupil orientation to determine a position of the eye, and causing an item on a display to move in correlation with the eye position.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: February 17, 2015
    Assignee: Google Inc.
    Inventors: Hayes Solos Raffle, Thad Eugene Starner, Josh Weaver, Edward Allen Keyes
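
A minimal sketch of the last chain in patent 8955973's abstract: turn the measured sideways shift of a projected line's bend, which moves with the corneal bulge, into an eye position and nudge a display item in step with it. The linear offset-to-angle map and both calibration constants are assumptions for the example.

```python
# Sketch only: map the deformation of a projected line to an eye position
# and move a cursor on the display accordingly.
def line_deformation_to_gaze(bend_offset_px, gain_deg_per_px=0.4):
    """Map the lateral shift of the deformed line to an eye rotation angle."""
    return gain_deg_per_px * bend_offset_px

def update_cursor(cursor_xy, bend_offset_x_px, bend_offset_y_px,
                  px_per_degree=12.0):
    gaze_x = line_deformation_to_gaze(bend_offset_x_px)
    gaze_y = line_deformation_to_gaze(bend_offset_y_px)
    return (cursor_xy[0] + gaze_x * px_per_degree,
            cursor_xy[1] + gaze_y * px_per_degree)

if __name__ == "__main__":
    cursor = (640, 360)                       # centre of a 1280x720 display
    cursor = update_cursor(cursor, bend_offset_x_px=5, bend_offset_y_px=-2)
    print(cursor)                             # item moves in correlation with the eye
```
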