Patents by Inventor Kexi Liu

Kexi Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150049201
    Abstract: An apparatus for calibrating an augmented reality (AR) device having an optical see-through head mounted display (HMD) obtains eye coordinates in an eye coordinate system corresponding to a location of an eye of a user of the AR device, and obtains object coordinates in a world coordinate system corresponding to a location of a real-world object in the field of view of the AR device, as captured by a scene camera having a scene camera coordinate system. The apparatus calculates screen coordinates in a screen coordinate system corresponding to a display point on the HMD, where the calculating is based on the obtained eye coordinates and the obtained object coordinates. The apparatus calculates calibration data based on the screen coordinates, the object coordinates and a transformation from the target coordinate system to the scene camera coordinate system. The apparatus then derives subsequent screen coordinates for the display of AR in relation to other real-world object points based on the calibration data.
    Type: Application
    Filed: January 9, 2014
    Publication date: February 19, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Kexi LIU, Md Sazzadur RAHMAN, Martin H. RENSCHLER
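The calibration described above maps an eye position and a real-world object position to a display point on the HMD screen. A minimal sketch of that geometric step, assuming a planar screen at a fixed depth `screen_z` in the eye coordinate system (the function name, the planar-screen model, and all values are illustrative assumptions, not the patent's actual formulation):

```python
def project_to_screen(eye, obj, screen_z=0.05):
    # Intersect the line from the eye to the real-world object with the
    # (assumed planar) HMD screen at z = screen_z in eye coordinates.
    ray = [o - e for o, e in zip(obj, eye)]
    if ray[2] == 0:
        raise ValueError("object lies in a plane parallel to the screen")
    t = (screen_z - eye[2]) / ray[2]   # parametric distance along the ray
    return [e + t * r for e, r in zip(eye, ray)]

# Eye at the origin, object 1 m ahead and 0.2 m to the right:
p = project_to_screen([0.0, 0.0, 0.0], [0.2, 0.0, 1.0], screen_z=0.05)
# p[0] == 0.01, p[2] == 0.05
```

Repeating this for known eye/object pairs yields the correspondences from which the calibration data can be fit.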
  • Publication number: 20150049013
    Abstract: An apparatus for calibrating an eye tracking system of a head mounted display displays a moving object in a scene visible through the head mounted display. The object is displayed progressively at a plurality of different points (P) at corresponding different times (T). While the object is at a first point of the plurality of different points, the apparatus determines whether an offset between the point P and an eye-gaze point (E) satisfies a threshold. The eye-gaze point (E) corresponds to a point where a user is determined to be gazing by the eye tracking system. If the threshold is not satisfied, the apparatus performs a calibration of the eye tracking system when the object is at a second point of the plurality of different points. The apparatus then repeats the determining step when the object is at a third point of the plurality of different points.
    Type: Application
    Filed: January 23, 2014
    Publication date: February 19, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Md Sazzadur RAHMAN, Kexi LIU, Martin H. RENSCHLER
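The check-calibrate-recheck cycle in this abstract can be sketched as a small loop. `gaze_of` (reads the tracked eye-gaze point for a displayed point) and `calibrate` (re-fits the tracker) are stand-in callables; the names and the Euclidean threshold are assumptions for illustration:

```python
def needs_calibration(display_point, gaze_point, threshold=0.02):
    # True when the offset between displayed point P and eye-gaze
    # point E exceeds the threshold.
    dx = display_point[0] - gaze_point[0]
    dy = display_point[1] - gaze_point[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold

def run_calibration(points, gaze_of, calibrate, threshold=0.02):
    # Walk the moving object through its points; when the offset at one
    # point fails the threshold, calibrate at the next point and repeat
    # the determination at the point after that.
    i = 0
    while i < len(points):
        p = points[i]
        if needs_calibration(p, gaze_of(p), threshold):
            if i + 1 < len(points):
                calibrate(points[i + 1])
            i += 2   # re-run the determining step at the third point
        else:
            i += 1
```

A simulated tracker with a constant bias would trigger exactly one calibration call, after which subsequent points pass the threshold.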
  • Publication number: 20150049001
    Abstract: A method, an apparatus, and a computer program product construct an augmented view as perceived by a user of an augmented reality (AR) device having an optical see-through head mounted display (HMD) with AR, for display at a remote device. An apparatus obtains scene data corresponding to a real-world scene visible through the optical see-through HMD, and screen data of at least one of a first augmented object displayed on the optical see-through HMD, and a second augmented object displayed on the optical see-through HMD. The apparatus determines to apply at least one of a first offset to the first augmented object relative to an origin of the real-world scene, and a second offset to the second augmented object relative to the origin. The apparatus then generates augmented-view screen data for displaying the augmented view on an HMD remote from the AR device. The augmented-view screen data is based on at least one of the first offset and the second offset.
    Type: Application
    Filed: January 9, 2014
    Publication date: February 19, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Md Sazzadur RAHMAN, Martin H. RENSCHLER, Kexi LIU
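The core bookkeeping here is expressing each augmented object's screen position as an offset relative to the real-world scene origin, so a remote display can reconstruct the augmented view. A minimal sketch under assumed field names (`id`, `x`, `y` are illustrative, not the patent's data model):

```python
def compose_remote_view(scene_origin, augmented_objects):
    # Re-anchor each augmented object as an offset from the scene
    # origin; the resulting screen data lets a remote HMD redraw the
    # same augmented view over its own copy of the scene.
    view = []
    for obj in augmented_objects:
        offset = (obj["x"] - scene_origin[0], obj["y"] - scene_origin[1])
        view.append({"id": obj["id"], "offset": offset})
    return view
```

Because positions are stored relative to the origin rather than absolutely, the remote device can place the objects correctly even if its rendering of the scene differs in scale or position.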
  • Publication number: 20150049113
    Abstract: A method, an apparatus, and a computer program product conduct online visual searches through an augmented reality (AR) device having an optical see-through head mounted display (HMD). An apparatus identifies a portion of an object in a field of view of the HMD based on user interaction with the HMD. The portion includes searchable content, such as a barcode. The user interaction may be an eye gaze or a gesture. A user interaction point in relation to the HMD screen is tracked to locate a region of the object that includes the portion and the portion is detected within the region. The apparatus captures an image of the portion. The identified portion of the object does not encompass the entirety of the object. Accordingly, the size of the image is less than the size of the object in the field of view. The apparatus transmits the image to a visual search engine.
    Type: Application
    Filed: January 9, 2014
    Publication date: February 19, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Md Sazzadur RAHMAN, Kexi LIU, Martin H. RENSCHLER
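The key size saving in this abstract comes from cropping the captured image to the region around the tracked interaction point rather than sending the whole field of view. A sketch, assuming the frame is a 2-D array of pixels and the crop is a fixed half-width square clamped to the frame (all names and sizes are illustrative):

```python
def crop_search_region(frame, interaction_xy, half=40):
    # Keep only the region around the user interaction point (eye gaze
    # or gesture) that contains the searchable content, e.g. a barcode,
    # so the image sent to the visual search engine is smaller than the
    # full object in the field of view.
    x, y = interaction_xy
    h, w = len(frame), len(frame[0])
    top, bottom = max(0, y - half), min(h, y + half)
    left, right = max(0, x - half), min(w, x + half)
    return [row[left:right] for row in frame[top:bottom]]
```

The clamping means a gaze point near the frame edge simply yields a smaller crop instead of an out-of-bounds error.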
  • Publication number: 20150049112
    Abstract: A method, an apparatus, and a computer program product render a graphical user interface (GUI) on an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with a GUI object displayed on the HMD. The GUI object may be an icon on the HMD and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of user interaction and the GUI object satisfies a criterion, and adjusts a parameter of the GUI object when the criterion is not satisfied. The parameter may be one or more of a size of the GUI object, a size of a boundary associated with the GUI object or a location of the GUI object.
    Type: Application
    Filed: January 9, 2014
    Publication date: February 19, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Kexi LIU, Md Sazzadur RAHMAN, Martin H. RENSCHLER, Babak FORUTANPOUR
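One of the adjustable parameters the abstract lists is the boundary associated with the GUI object. A sketch of that case, assuming the object is an axis-aligned rectangle and the criterion is simply whether the interaction point lands inside it (the dict fields and `max_miss` growth amount are assumptions):

```python
def adjust_target(icon, interaction_xy, max_miss=5):
    # If the user's selection attempt (eye gaze or gesture) misses the
    # icon's bounds, grow the hit boundary so the next attempt is more
    # likely to succeed; a hit leaves the icon unchanged.
    x, y = interaction_xy
    inside = (icon["x"] <= x <= icon["x"] + icon["w"]
              and icon["y"] <= y <= icon["y"] + icon["h"])
    if not inside:
        icon["x"] -= max_miss
        icon["y"] -= max_miss
        icon["w"] += 2 * max_miss
        icon["h"] += 2 * max_miss
    return icon
```

The same shape of check could instead move the object or scale its rendered size, the other parameters the abstract mentions.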
  • Publication number: 20150049012
    Abstract: A method, an apparatus, and a computer program product provide feedback to a user of an augmented reality (AR) device having an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with an object displayed on the HMD. The object may be an icon on the HMD and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of user interaction and the object satisfies a criterion, and outputs a sensory indication, e.g., visual display, sound, vibration, when the criterion is satisfied. The apparatus may be configured to output a sensory indication when user interaction is successful, e.g., the icon was selected. Alternatively, the apparatus may be configured to output a sensory indication when the user interaction fails.
    Type: Application
    Filed: January 23, 2014
    Publication date: February 19, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Kexi LIU, Vijay Naicker SUBRAMANIAM, Md Sazzadur RAHMAN, Martin H. RENSCHLER
  • Publication number: 20140198129
    Abstract: An apparatus, a method, and a computer program product are provided. The apparatus detects an eye gaze on a first region in a real world scene, sets a boundary that surrounds the first region, the boundary excluding at least a second region in the real world scene, performs an object recognition procedure on the first region within the boundary, and refrains from performing an object recognition procedure on the at least the second region.
    Type: Application
    Filed: January 13, 2013
    Publication date: July 17, 2014
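The point of bounding recognition to the gazed-at region is that everything outside the boundary is excluded from the (expensive) object recognition procedure. A sketch that computes the boundary around the gaze point and the fraction of the scene skipped, with the half-width and clamping behavior as illustrative assumptions:

```python
def gaze_boundary(scene_w, scene_h, gaze_xy, half=60):
    # Boundary around the first (gazed-at) region, clamped to the
    # scene; recognition runs only inside it, and the returned fraction
    # is how much of the real-world scene is excluded.
    x, y = gaze_xy
    left, top = max(0, x - half), max(0, y - half)
    right, bottom = min(scene_w, x + half), min(scene_h, y + half)
    inside = (right - left) * (bottom - top)
    skipped = 1.0 - inside / (scene_w * scene_h)
    return (left, top, right, bottom), skipped
```

For a 640x480 scene and a 120x120 boundary, about 95% of the pixels are never examined, which is where the compute saving comes from.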
  • Publication number: 20140133665
    Abstract: Methods and apparatuses for representing a sound field in a physical space are provided and described as embodied in a system that includes a sound transducer array along with a touch surface-enabled display table. The array may include a group of transducers (multiple speakers and/or microphones). The array may be configured to perform spatial processing of signals for the group of transducers so that sound rendering (in configurations where the array includes multiple speakers), or sound pick-up (in configurations where the array includes multiple microphones), may have spatial patterns (or sound projection patterns) that are focused in certain directions while reducing disturbances from other directions.
    Type: Application
    Filed: December 21, 2012
    Publication date: May 15, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Pei Xiang, Kexi Liu
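The spatial processing the abstract describes — focusing pick-up (or rendering) in one direction while reducing disturbances from others — is classically done with delay-and-sum beamforming. A minimal sketch using integer-sample steering delays (the function and the delay values are illustrative, not the patent's specific method):

```python
def delay_and_sum(mic_signals, delays_samples):
    # Align each microphone signal by its steering delay (in samples),
    # then average: sound arriving from the steered direction adds
    # coherently, while off-axis disturbances partially cancel.
    n = min(len(s) - d for s, d in zip(mic_signals, delays_samples))
    out = [0.0] * n
    for sig, d in zip(mic_signals, delays_samples):
        for i in range(n):
            out[i] += sig[d + i]
    return [v / len(mic_signals) for v in out]

# Two mics hear the same waveform, the second one sample later;
# steering delays [0, 1] re-align them:
focused = delay_and_sum([[0, 1, 0, -1], [0, 0, 1, 0, -1]], [0, 1])
# focused == [0.0, 1.0, 0.0, -1.0]
```

The same delay pattern applied to speaker feeds steers a rendered beam instead of a pick-up beam.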
  • Publication number: 20140136203
    Abstract: Some implementations provide a method for identifying a speaker. The method determines position and orientation of a second device based on data from a first device that is for capturing the position and orientation of the second device. The second device includes several microphones for capturing sound. The second device has movable position and movable orientation. The method assigns an object as a representation of a known user. The object has a moveable position. The method receives a position of the object. The position of the object corresponds to a position of the known user. The method processes the captured sound to identify a sound originating from the direction of the object. The direction of the object is relative to the position and the orientation of the second device. The method identifies the sound originating from the direction of the object as belonging to the known user.
    Type: Application
    Filed: December 21, 2012
    Publication date: May 15, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Kexi Liu, Pei Xiang
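The attribution step above reduces to comparing two directions relative to the second device: the bearing of the object representing the known user, and the direction-of-arrival the microphone array reports. A sketch in 2-D, with all names, the heading convention, and the tolerance as illustrative assumptions:

```python
import math

def sound_belongs_to(object_xy, device_xy, device_heading_deg,
                     doa_deg, tolerance_deg=10.0):
    # Bearing of the known user's object, expressed relative to the
    # device's own (movable) position and orientation.
    dx = object_xy[0] - device_xy[0]
    dy = object_xy[1] - device_xy[1]
    object_bearing = math.degrees(math.atan2(dy, dx)) - device_heading_deg
    # Wrap the difference into [-180, 180) before comparing.
    diff = (object_bearing - doa_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance_deg
```

A sound whose direction-of-arrival matches the object's bearing within the tolerance is identified as belonging to that known user; others are left unattributed.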
  • Publication number: 20140136981
    Abstract: Methods and apparatuses for providing tangible control of sound are provided and described as embodied in a system that includes a sound transducer array along with a touch surface-enabled display table. The array may include a group of transducers (multiple speakers and/or microphones) configured to perform spatial processing of signals for the group of transducers so that sound rendering (in configurations where the array includes multiple speakers) or sound pick-up (in configurations where the array includes multiple microphones) has spatial patterns (or sound projection patterns) that are focused in certain directions while reducing disturbances from other directions. Users may directly adjust parameters related to sound projection patterns by exercising one or more commands on the touch surface while receiving visual feedback. The commands may be adjusted according to the visual feedback received from the change of the display on the touch surface.
    Type: Application
    Filed: December 21, 2012
    Publication date: May 15, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Pei Xiang, Kexi Liu