Patents by Inventor Seyed Hesameddin Najafi Shoushtari

Seyed Hesameddin Najafi Shoushtari has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11755106
    Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
    Type: Grant
    Filed: September 9, 2022
    Date of Patent: September 12, 2023
    Assignee: Apple Inc.
    Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
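
As a minimal Python sketch of the final steps described in the abstract above, the snippet below reconstructs the optical axis from the cornea and pupil centers, approximates the visual axis with a single per-user angular offset (kappa_deg), and intersects it with a flat display plane. It assumes the cornea and pupil centers have already been recovered from the glint/light-source matches; the per-user offset, the display plane, and all numeric values are illustrative stand-ins for the patent's full 3D eye and HMD models.

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def point_of_gaze(cornea_center, pupil_center, kappa_deg, display_point, display_normal):
        """Optical axis -> approximate visual axis -> intersection with the display plane."""
        optical_axis = normalize(pupil_center - cornea_center)
        # Approximate the visual axis by rotating the optical axis about the vertical
        # axis by a small per-user angle (a simplification of the patent's 3D eye model).
        k = np.radians(kappa_deg)
        rot_y = np.array([[np.cos(k), 0.0, np.sin(k)],
                          [0.0,       1.0, 0.0],
                          [-np.sin(k), 0.0, np.cos(k)]])
        visual_axis = rot_y @ optical_axis
        # Ray/plane intersection: cornea_center + t * visual_axis lies on the display plane.
        t = np.dot(display_point - cornea_center, display_normal) / np.dot(visual_axis, display_normal)
        return cornea_center + t * visual_axis

    if __name__ == "__main__":
        cornea = np.array([0.0, 0.0, 0.0])
        pupil = np.array([0.0, 0.0, 4.2e-3])          # pupil ~4.2 mm in front of the cornea center
        display_point = np.array([0.0, 0.0, 0.05])    # display plane 5 cm away (hypothetical)
        display_normal = np.array([0.0, 0.0, 1.0])
        print(point_of_gaze(cornea, pupil, 5.0, display_point, display_normal))
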
  • Patent number: 11442537
    Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: September 13, 2022
    Assignee: Apple Inc.
    Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
  • Publication number: 20220092334
    Abstract: Feature descriptor matching is reformulated into a graph-matching problem. Keypoints from a query image and a reference image are initially matched and filtered based on the match. For a given keypoint, a feature graph is constructed based on neighboring keypoints surrounding the given keypoint. The feature graph is compared to a corresponding feature graph of a reference image for the matched keypoint. Relocalization data is obtained based on the comparison.
    Type: Application
    Filed: August 24, 2021
    Publication date: March 24, 2022
    Inventors: Chen Huang, Seyed Hesameddin Najafi Shoushtari, Frankie Lu, Shih-Yu Sun, Joshua M. Susskind
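
One simple reading of the graph-matching idea in the abstract above is sketched below in Python: build a crude local feature graph signature from a keypoint's nearest neighboring keypoints in each image, then keep a tentative match only if the query and reference signatures agree. The neighborhood size k, the sorted-distance signature, and the tolerance are assumptions made for illustration, not the published method.

    import numpy as np

    def local_graph_signature(keypoints, center_idx, k=4):
        """Sorted distances to the k nearest neighboring keypoints around one keypoint."""
        pts = np.asarray(keypoints, dtype=float)
        d = np.linalg.norm(pts - pts[center_idx], axis=1)
        neighbors = np.argsort(d)[1:k + 1]        # skip the center keypoint itself
        return np.sort(d[neighbors])

    def graphs_consistent(sig_query, sig_ref, tol=0.2):
        """Keep the match if corresponding edge lengths agree within a relative tolerance."""
        ratio = np.abs(sig_query - sig_ref) / np.maximum(sig_ref, 1e-6)
        return bool(np.all(ratio < tol))

    if __name__ == "__main__":
        # Hypothetical keypoints; the reference set is a slightly shifted copy of the query set.
        query_kps = [(10, 10), (12, 30), (40, 12), (38, 35), (25, 22)]
        ref_kps = [(11, 11), (13, 31), (41, 13), (39, 36), (26, 23)]
        sig_q = local_graph_signature(query_kps, center_idx=4)
        sig_r = local_graph_signature(ref_kps, center_idx=4)
        print("match kept after graph comparison:", graphs_consistent(sig_q, sig_r))
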
  • Publication number: 20200326777
    Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
    Type: Application
    Filed: June 26, 2020
    Publication date: October 15, 2020
    Applicant: Apple Inc.
    Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
  • Patent number: 10698481
    Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: June 30, 2020
    Assignee: Apple Inc.
    Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
  • Patent number: 10379612
    Abstract: An electronic device may have a display and a gaze tracking system. Control circuitry in the electronic device can produce a saliency map in which items of visual interest are identified among content that has been displayed on a display in the electronic device. The saliency map may identify items such as selectable buttons, text, and other items of visual interest. User input such as mouse clicks, voice commands, and other commands may be used by the control circuitry in identifying when a user is gazing on particular items within the displayed content. Information on a user's actual on-screen point of gaze that is inferred using the saliency map information and user input can be compared to measured eye position information from the gaze tracking system to calibrate the gaze tracking system during normal operation of the electronic device.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: August 13, 2019
    Assignee: Apple Inc.
    Inventors: Nicolas P. Bonnier, William Riedel, Jiaying Wu, Martin P. Grunthaner, Seyed Hesameddin Najafi Shoushtari
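
The calibration idea in the entry above can be pictured with the rough Python sketch below: pair the tracker's measured on-screen gaze points with the locations the user was plausibly looking at (for example, a button under a mouse click, as suggested by the saliency map), then fit a correction mapping measured gaze to inferred gaze. The affine least-squares correction used here is an assumption for illustration; the patent does not commit to this particular model.

    import numpy as np

    def fit_gaze_correction(measured, inferred):
        """Fit a 2x3 affine correction mapping measured gaze points to inferred gaze points."""
        m = np.asarray(measured, dtype=float)
        g = np.asarray(inferred, dtype=float)
        m_h = np.hstack([m, np.ones((len(m), 1))])    # homogeneous coordinates
        A, *_ = np.linalg.lstsq(m_h, g, rcond=None)   # least-squares solve of m_h @ A ~= g
        return A.T

    def apply_correction(A, point):
        return A @ np.append(np.asarray(point, dtype=float), 1.0)

    if __name__ == "__main__":
        # Measured gaze vs. click/saliency-inferred gaze, in hypothetical screen pixels.
        measured = [(100, 100), (500, 120), (300, 400), (520, 380)]
        inferred = [(110, 95),  (512, 115), (310, 392), (533, 371)]
        A = fit_gaze_correction(measured, inferred)
        print("corrected gaze estimate:", apply_correction(A, (300, 400)))
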
  • Patent number: 10372968
    Abstract: A method for guiding a robot equipped with a camera to facilitate three-dimensional (3D) reconstruction through sampling based planning includes recognizing and localizing an object in a two-dimensional (2D) image. The method also includes computing 3D depth maps for the localized object. A 3D object map is constructed from the depth maps. A sampling based structure is grown around the 3D object map and a cost is assigned to each edge of the sampling based structure. The sampling based structure may be searched to determine a lowest cost sequence of edges that may, in turn, be used to guide the robot.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: August 6, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Aliakbar Aghamohammadi, Seyed Hesameddin Najafi Shoushtari, Regan Blythe Towal
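
The planning half of the abstract above is sketched below in Python: sample candidate viewpoints around an already reconstructed object map, connect nearby samples into a roadmap, assign each edge a cost, and search for the lowest-cost sequence of edges with Dijkstra's algorithm. The ring-shaped sampling and the travel-distance edge cost are placeholders, not the cost terms or sampling strategy from the patent.

    import heapq
    import numpy as np

    def build_roadmap(object_center, n_samples=40, radius=2.0, connect_dist=1.2, seed=0):
        """Sample viewpoints on a jittered ring around the object map and connect nearby ones."""
        rng = np.random.default_rng(seed)
        base = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
        angles = base + rng.uniform(-0.05, 0.05, n_samples)
        nodes = object_center + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        edges = {i: [] for i in range(n_samples)}
        for i in range(n_samples):
            for j in range(i + 1, n_samples):
                dist = np.linalg.norm(nodes[i] - nodes[j])
                if dist < connect_dist:
                    cost = dist                    # placeholder edge cost: travel distance only
                    edges[i].append((j, cost))
                    edges[j].append((i, cost))
        return nodes, edges

    def lowest_cost_path(edges, start, goal):
        """Dijkstra search for the lowest-cost sequence of edges through the roadmap."""
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in edges[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], goal
        while node in prev:
            path.append(node)
            node = prev[node]
        return [start] + path[::-1] if path or start == goal else []

    if __name__ == "__main__":
        nodes, edges = build_roadmap(object_center=np.array([0.0, 0.0]))
        print("lowest-cost viewpoint sequence:", lowest_cost_path(edges, start=0, goal=5))
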
  • Publication number: 20170213070
    Abstract: A method for guiding a robot equipped with a camera to facilitate three-dimensional (3D) reconstruction through sampling based planning includes recognizing and localizing an object in a two-dimensional (2D) image. The method also includes computing 3D depth maps for the localized object. A 3D object map is constructed from the depth maps. A sampling based structure is grown around the 3D object map and a cost is assigned to each edge of the sampling based structure. The sampling based structure may be searched to determine a lowest cost sequence of edges that may, in turn, be used to guide the robot.
    Type: Application
    Filed: June 24, 2016
    Publication date: July 27, 2017
    Inventors: Aliakbar AGHAMOHAMMADI, Seyed Hesameddin NAJAFI SHOUSHTARI, Regan Blythe TOWAL
  • Patent number: 9626590
    Abstract: Methods, systems, computer-readable media, and apparatuses for fast cost aggregation for dense stereo matching are presented. One example method includes the steps of receiving first and second images of a scene; rectifying the images; computing a cost volume based on the first and second images; subsampling the cost volume to generate a subsampled cost volume; for each pixel, p, in the subsampled cost volume, determining one or more local extrema in the subsampled cost volume for each neighboring pixel, q, within a window centered on the pixel, p; for each pixel, p, performing cost aggregation using the one or more local extrema; performing cross checking to identify matching pixels; and responsive to identifying unmatched pixels, performing gap-filling for the unmatched pixels to generate a disparity map; and generating and storing a depth map from the disparity map.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: April 18, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Seyed Hesameddin Najafi Shoushtari, Murali Ramaswamy Chari
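
For context, the Python sketch below covers a few of the conventional steps named in the abstract above: building a cost volume from a rectified image pair, winner-take-all disparity selection, and left/right cross checking to flag unmatched pixels. The subsampling and local-extrema-based cost aggregation that the patent centers on are deliberately omitted, so nothing here should be read as the claimed aggregation scheme.

    import numpy as np

    def cost_volume(left, right, max_disp):
        """Absolute-difference cost volume: cost[d, y, x] of matching left(x) to right(x - d)."""
        h, w = left.shape
        vol = np.full((max_disp, h, w), 255.0)
        for d in range(max_disp):
            vol[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
        return vol

    def wta_disparity(vol):
        """Winner-take-all: pick the disparity with the lowest cost at each pixel."""
        return np.argmin(vol, axis=0)

    def cross_check(disp_left, disp_right, tol=1):
        """Mark pixels whose left and right disparities agree; the rest would be gap-filled."""
        h, w = disp_left.shape
        ok = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                xr = x - disp_left[y, x]
                if 0 <= xr < w and abs(disp_left[y, x] - disp_right[y, xr]) <= tol:
                    ok[y, x] = True
        return ok

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        right = rng.integers(0, 255, (8, 16)).astype(float)
        left = np.roll(right, 3, axis=1)        # synthetic pair with a uniform 3-pixel shift
        vol_l = cost_volume(left, right, max_disp=6)
        vol_r = cost_volume(right[:, ::-1], left[:, ::-1], max_disp=6)  # right view via mirroring
        d_l = wta_disparity(vol_l)
        d_r = wta_disparity(vol_r)[:, ::-1]
        valid = cross_check(d_l, d_r)
        print("median disparity over cross-checked pixels:", np.median(d_l[valid]))
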
  • Publication number: 20170083787
    Abstract: Methods, systems, computer-readable media, and apparatuses for fast cost aggregation for dense stereo matching are presented. One example method includes the steps of receiving first and second images of a scene; rectifying the images; computing a cost volume based on the first and second images; subsampling the cost volume to generate a subsampled cost volume; for each pixel, p, in the subsampled cost volume, determining one or more local extrema in the subsampled cost volume for each neighboring pixel, q, within a window centered on the pixel, p; for each pixel, p, performing cost aggregation using the one or more local extrema; performing cross checking to identify matching pixels; and responsive to identifying unmatched pixels, performing gap-filling for the unmatched pixels to generate a disparity map; and generating and storing a depth map from the disparity map.
    Type: Application
    Filed: September 18, 2015
    Publication date: March 23, 2017
    Inventors: Seyed Hesameddin Najafi Shoushtari, Murali Ramaswamy Chari
  • Patent number: 9582896
    Abstract: A vision based tracking system in a mobile platform tracks objects using groups of detected lines. The tracking system detects lines in a captured image of the object to be tracked. Groups of lines are formed from the detected lines. The groups of lines may be formed by computing intersection points of the detected lines and using the intersection points to identify connected lines, where the groups of lines are formed using connected lines. A graph of the detected lines may be constructed and intersection points identified. Interesting subgraphs are generated using the connections and the groups of lines are formed from the interesting subgraphs. Once the groups of lines are formed, the groups of lines are used to track the object, e.g., by comparing the groups of lines in a current image of the object to groups of lines in a previous image of the object.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: February 28, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Sheng Yi, Ashwin Swaminathan, Bolan Jiang, Seyed Hesameddin Najafi Shoushtari
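
The grouping step in the abstract above is illustrated by the Python sketch below: given line segments already detected in an image, compute pairwise intersections, treat segments whose intersection lies near both of them as connected, and group connected segments with a union-find pass. The join tolerance and example coordinates are hypothetical, and the subgraph analysis and frame-to-frame comparison are not reproduced.

    import itertools
    import numpy as np

    def line_intersection(seg_a, seg_b):
        """Intersection of the infinite lines through two segments, or None if parallel."""
        (x1, y1), (x2, y2) = seg_a
        (x3, y3), (x4, y4) = seg_b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(denom) < 1e-9:
            return None
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return np.array([x1 + t * (x2 - x1), y1 + t * (y2 - y1)])

    def segment_distance(point, seg):
        """Distance from a point to a finite segment."""
        a, b = np.asarray(seg, dtype=float)
        ab = b - a
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(point - (a + t * ab))

    def group_lines(segments, join_tol=5.0):
        """Union-find over segments that (nearly) meet; returns a group id per segment."""
        parent = list(range(len(segments)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i, j in itertools.combinations(range(len(segments)), 2):
            p = line_intersection(segments[i], segments[j])
            if p is not None and segment_distance(p, segments[i]) < join_tol \
                    and segment_distance(p, segments[j]) < join_tol:
                parent[find(i)] = find(j)
        return [find(i) for i in range(len(segments))]

    if __name__ == "__main__":
        # Two segments forming a corner, plus one unrelated far-away segment (hypothetical pixels).
        segs = [((0, 0), (50, 0)), ((50, 0), (50, 40)), ((200, 200), (260, 210))]
        print("group ids per segment:", group_lines(segs))
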
  • Patent number: 9406137
    Abstract: Disclosed embodiments pertain to apparatus, systems, and methods for robust feature based tracking. In some embodiments, a score may be computed for a camera captured current image comprising a target object. The score may be based on one or more metrics determined from a comparison of features in the current image and a prior image captured by the camera. The comparison may be based on an estimated camera pose for the current image. In some embodiments, one of a point based, an edge based, or a combined point and edge based feature correspondence method may be selected based on a comparison of the score with a point threshold and/or a line threshold, the point and line thresholds being obtained from a model of the target. The camera pose may be refined by establishing feature correspondences using the selected method between the current image and a model image.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: August 2, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Dheeraj Ahuja, Kiyoung Kim, Yanghai Tsin, Seyed Hesameddin Najafi Shoushtari
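
The selection logic summarized in the abstract above can be sketched compactly in Python: compute point- and edge-based scores from the comparison of the current and prior images, then choose a point-based, edge-based, or combined correspondence method by comparing the scores against thresholds associated with the target model. The inlier-ratio scores and the threshold values below are invented placeholders.

    from dataclasses import dataclass

    @dataclass
    class TargetModel:
        point_threshold: float    # assumed to be derived offline from the target model
        line_threshold: float

    def feature_scores(num_point_inliers, num_points, num_edge_inliers, num_edges):
        """Crude per-feature-type scores: inlier ratios from the current/prior image comparison."""
        point_score = num_point_inliers / max(num_points, 1)
        edge_score = num_edge_inliers / max(num_edges, 1)
        return point_score, edge_score

    def select_correspondence_method(point_score, edge_score, model):
        """Pick point-based, edge-based, or combined correspondence from the scores."""
        if point_score >= model.point_threshold and edge_score < model.line_threshold:
            return "point-based"
        if edge_score >= model.line_threshold and point_score < model.point_threshold:
            return "edge-based"
        return "combined point and edge"

    if __name__ == "__main__":
        model = TargetModel(point_threshold=0.4, line_threshold=0.5)   # hypothetical thresholds
        p, e = feature_scores(num_point_inliers=18, num_points=30,
                              num_edge_inliers=6, num_edges=25)
        print(select_correspondence_method(p, e, model))               # -> point-based
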
  • Publication number: 20140369557
    Abstract: Disclosed embodiments pertain to feature based tracking. In some embodiments, a camera pose may be obtained relative to a tracked object in a first image and a predicted camera pose relative to the tracked object may be determined for a second image subsequent to the first image based, in part, on a motion model of the tracked object. An updated SE(3) camera pose may then be obtained based, in part on the predicted camera pose, by estimating a plane induced homography using an equation of a dominant plane of the tracked object, wherein the plane induced homography is used to align a first lower resolution version of the first image and a first lower resolution version of the second image by minimizing the sum of their squared intensity differences. A feature tracker may be initialized with the updated SE(3) camera pose.
    Type: Application
    Filed: April 28, 2014
    Publication date: December 18, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Guy-Richard Kayombya, Seyed Hesameddin Najafi Shoushtari, Dheeraj Ahuja, Yanghai Tsin
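
The plane-induced homography referenced in the abstract above can be written explicitly: under the common convention that a point X in the first camera's frame maps to X2 = R X + t in the second, and that the dominant plane satisfies n·X = d in the first frame, the induced pixel homography is H = K (R + t nᵀ / d) K⁻¹. The Python sketch below computes H and the sum of squared intensity differences that alignment would minimize; the pose update and motion model from the publication are not reproduced, and all numbers are illustrative.

    import numpy as np

    def plane_induced_homography(K, R, t, n, d):
        """H mapping pixels of image 1 to image 2 for 3D points on the plane n.X = d."""
        return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

    def ssd_under_homography(img1, img2, H):
        """Sum of squared intensity differences after warping img1's pixel grid by H."""
        h, w = img1.shape
        total = 0.0
        for y in range(h):
            for x in range(w):
                p = H @ np.array([x, y, 1.0])
                u, v = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
                if 0 <= u < w and 0 <= v < h:
                    total += (float(img1[y, x]) - float(img2[v, u])) ** 2
        return total

    if __name__ == "__main__":
        K = np.array([[100.0, 0.0, 16.0],
                      [0.0, 100.0, 12.0],
                      [0.0, 0.0, 1.0]])             # toy camera intrinsics
        R = np.eye(3)                               # predicted motion: pure sideways translation
        t = np.array([0.02, 0.0, 0.0])
        n = np.array([0.0, 0.0, 1.0])               # fronto-parallel dominant plane
        d = 1.0                                     # plane at unit depth
        H = plane_induced_homography(K, R, t, n, d)
        rng = np.random.default_rng(0)
        img2 = rng.integers(0, 255, (24, 32)).astype(float)
        img1 = np.roll(img2, -2, axis=1)            # synthetic pair that H aligns exactly
        print("SSD under the predicted homography:", ssd_under_homography(img1, img2, H))
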
  • Publication number: 20140368645
    Abstract: Disclosed embodiments pertain to apparatus, systems, and methods for robust feature based tracking. In some embodiments, a score may be computed for a camera captured current image comprising a target object. The score may be based on one or more metrics determined from a comparison of features in the current image and a prior image captured by the camera. The comparison may be based on an estimated camera pose for the current image. In some embodiments, one of a point based, an edge based, or a combined point and edge based feature correspondence method may be selected based on a comparison of the score with a point threshold and/or a line threshold, the point and line thresholds being obtained from a model of the target. The camera pose may be refined by establishing feature correspondences using the selected method between the current image and a model image.
    Type: Application
    Filed: May 15, 2014
    Publication date: December 18, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Dheeraj AHUJA, Kiyoung KIM, Yanghai TSIN, Seyed Hesameddin NAJAFI SHOUSHTARI
  • Publication number: 20140270362
    Abstract: Embodiments include detection or relocalization of an object in a current image from a reference image, such as by using a simple, relatively fast, and invariant edge-orientation-based edge feature extraction, followed by a weak initial matching combined with a strong contextual filtering framework, and then a pose estimation framework based on edge segments. Embodiments include fast edge-based object detection using instant learning with a sufficiently large coverage area for object re-localization. Embodiments provide a good trade-off between computational efficiency of the extraction and matching processes.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: QUALCOMM INCORPORATED
    Inventors: Seyed Hesameddin Najafi Shoushtari, Yanghai Tsin, Murali Ramaswamy Chari, Serafin Diaz Spindola
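
As a loose illustration of the pipeline named in the abstract above, the Python sketch below describes each edge segment by a cheap orientation feature, performs a weak nearest-orientation initial match, and then applies a simple contextual filter that keeps only matches agreeing on a common rotation. The orientation descriptor, the median-rotation filter, and the example coordinates are illustrative assumptions; the publication's extraction, filtering, and edge-segment pose estimation are not reproduced.

    import numpy as np

    def segment_orientation(seg):
        """Undirected orientation of an edge segment, in [0, pi)."""
        (x1, y1), (x2, y2) = seg
        return np.arctan2(y2 - y1, x2 - x1) % np.pi

    def angle_diff(a, b):
        """Smallest signed difference between two undirected orientations."""
        return (a - b + np.pi / 2) % np.pi - np.pi / 2

    def weak_match(query_segs, ref_segs):
        """For each query segment, pick the reference segment with the closest orientation."""
        matches = []
        for qi, q in enumerate(query_segs):
            qo = segment_orientation(q)
            diffs = [abs(angle_diff(segment_orientation(r), qo)) for r in ref_segs]
            matches.append((qi, int(np.argmin(diffs))))
        return matches

    def contextual_filter(matches, query_segs, ref_segs, tol=np.radians(10)):
        """Keep matches whose implied rotation agrees with the median rotation of all matches."""
        rots = [angle_diff(segment_orientation(ref_segs[ri]), segment_orientation(query_segs[qi]))
                for qi, ri in matches]
        median_rot = float(np.median(rots))
        return [m for m, r in zip(matches, rots) if abs(r - median_rot) < tol]

    if __name__ == "__main__":
        # Reference edges, plus a query set that is the reference rotated by ~5 degrees
        # with one spurious extra edge (all coordinates are hypothetical).
        ref = [((0, 0), (10, 0)), ((0, 0), (0, 10)), ((5, 5), (12, 9))]
        c, s = np.cos(np.radians(5)), np.sin(np.radians(5))
        rotate = lambda p: (c * p[0] - s * p[1], s * p[0] + c * p[1])
        query = [tuple(map(rotate, seg)) for seg in ref] + [((0, 0), (7, 7))]
        kept = contextual_filter(weak_match(query, ref), query, ref)
        print("matches kept after contextual filtering:", kept)
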
  • Publication number: 20130057700
    Abstract: A vision based tracking system in a mobile platform tracks objects using groups of detected lines. The tracking system detects lines in a captured image of the object to be tracked. Groups of lines are formed from the detected lines. The groups of lines may be formed by computing intersection points of the detected lines and using the intersection points to identify connected lines, where the groups of lines are formed using connected lines. A graph of the detected lines may be constructed and intersection points identified. Interesting subgraphs are generated using the connections and the groups of lines are formed from the interesting subgraphs. Once the groups of lines are formed, the groups of lines are used to track the object, e.g., by comparing the groups of lines in a current image of the object to groups of lines in a previous image of the object.
    Type: Application
    Filed: March 9, 2012
    Publication date: March 7, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Sheng Yi, Ashwin Swaminathan, Bolan Jiang, Seyed Hesameddin Najafi Shoushtari