Patents by Inventor Seyed Hesameddin Najafi Shoushtari
Seyed Hesameddin Najafi Shoushtari has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250005894
Abstract: Feature descriptor matching is reformulated into a graph-matching problem. Keypoints from a query image and a reference image are initially matched and filtered based on the match. For a given keypoint, a feature graph is constructed based on neighboring keypoints surrounding the given keypoint. The feature graph is compared to a corresponding feature graph of a reference image for the matched keypoint. Relocalization data is obtained based on the comparison.
Type: Application
Filed: September 16, 2024
Publication date: January 2, 2025
Inventors: Chen Huang, Seyed Hesameddin Najafi Shoushtari, Frankie Lu, Shih-Yu Sun, Joshua M. Susskind
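The graph-matching idea in the abstract can be illustrated with a minimal sketch. The neighborhood rule (k nearest spatial neighbors) and the edge-length similarity measure below are assumptions for illustration; the patent does not commit to either:

```python
import math

def build_feature_graph(points, center_idx, k=3):
    """Connect a keypoint to its k nearest spatial neighbors (hypothetical
    construction; the patent does not specify the neighborhood rule)."""
    cx, cy = points[center_idx]
    dists = sorted(
        (math.dist((cx, cy), p), i)
        for i, p in enumerate(points) if i != center_idx
    )
    # The graph is summarized by the distances from the center keypoint
    # to each of its k nearest neighbors.
    return [d for d, _ in dists[:k]]

def graph_similarity(g1, g2):
    """Score two feature graphs by relative edge-length agreement in [0, 1]."""
    if len(g1) != len(g2):
        return 0.0
    diffs = [abs(a - b) / max(a, b, 1e-9) for a, b in zip(g1, g2)]
    return 1.0 - sum(diffs) / len(diffs)

# Query and reference keypoints related by a pure translation: the local
# graphs agree exactly, so this initial match would survive filtering.
query = [(0, 0), (1, 0), (0, 2), (5, 5)]
reference = [(10, 10), (11, 10), (10, 12), (15, 15)]
gq = build_feature_graph(query, 0)
gr = build_feature_graph(reference, 0)
print(round(graph_similarity(gq, gr), 3))  # → 1.0
```

A mismatched keypoint would have a different neighbor layout, lowering the similarity and causing it to be filtered out.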
-
Patent number: 12094184
Abstract: Feature descriptor matching is reformulated into a graph-matching problem. Keypoints from a query image and a reference image are initially matched and filtered based on the match. For a given keypoint, a feature graph is constructed based on neighboring keypoints surrounding the given keypoint. The feature graph is compared to a corresponding feature graph of a reference image for the matched keypoint. Relocalization data is obtained based on the comparison.
Type: Grant
Filed: August 24, 2021
Date of Patent: September 17, 2024
Assignee: Apple Inc.
Inventors: Chen Huang, Seyed Hesameddin Najafi Shoushtari, Frankie Lu, Shih-Yu Sun, Joshua M. Susskind
-
Patent number: 11755106
Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
Type: Grant
Filed: September 9, 2022
Date of Patent: September 12, 2023
Assignee: Apple Inc.
Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
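The final steps of the abstract (optical axis from cornea and pupil centers, then intersection with the display) reduce to simple geometry. This sketch treats the display as a plane at a fixed depth and skips the optical-to-visual-axis correction, both of which the patent handles with full 3D models:

```python
def optical_axis(cornea_center, pupil_center):
    """Unit vector from the cornea center through the pupil center."""
    v = [p - c for p, c in zip(pupil_center, cornea_center)]
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def gaze_on_display(cornea_center, axis, display_z):
    """Intersect the gaze ray with a display plane at z = display_z
    (a stand-in for the patent's 3D HMD model)."""
    t = (display_z - cornea_center[2]) / axis[2]
    return [cornea_center[i] + t * axis[i] for i in range(2)]

cornea = [0.0, 0.0, 0.0]
pupil = [0.0, 0.0, 1.0]    # eye looking straight ahead
axis = optical_axis(cornea, pupil)
print(gaze_on_display(cornea, axis, display_z=5.0))  # → [0.0, 0.0]
```

In the patented pipeline the cornea center itself comes from matching detected glints to known light-source positions, which is the step that makes the tracking "glint-assisted."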
-
Patent number: 11442537
Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
Type: Grant
Filed: June 26, 2020
Date of Patent: September 13, 2022
Assignee: Apple Inc.
Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
-
Publication number: 20220092334
Abstract: Feature descriptor matching is reformulated into a graph-matching problem. Keypoints from a query image and a reference image are initially matched and filtered based on the match. For a given keypoint, a feature graph is constructed based on neighboring keypoints surrounding the given keypoint. The feature graph is compared to a corresponding feature graph of a reference image for the matched keypoint. Relocalization data is obtained based on the comparison.
Type: Application
Filed: August 24, 2021
Publication date: March 24, 2022
Inventors: Chen Huang, Seyed Hesameddin Najafi Shoushtari, Frankie Lu, Shih-Yu Sun, Joshua M. Susskind
-
Publication number: 20200326777
Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
Type: Application
Filed: June 26, 2020
Publication date: October 15, 2020
Applicant: Apple Inc.
Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
-
Patent number: 10698481
Abstract: Methods and apparatus for glint-assisted gaze tracking in a VR/AR head-mounted display (HMD). Images of a user's eyes captured by gaze tracking cameras may be analyzed to detect glints (reflections on the cornea of light sources that illuminate the user's eyes) and the pupil. The glints are matched to particular ones of the light sources. The glint-light source matches are used to determine the cornea center of the eye, and the pupil center is determined. The optical axis of the eye is reconstructed from the cornea center and the pupil center, and the visual axis is then reconstructed from the optical axis and a 3D model of the user's eye. The point of gaze on the display is then determined based on the visual axis and a 3D model of the HMD.
Type: Grant
Filed: September 26, 2018
Date of Patent: June 30, 2020
Assignee: Apple Inc.
Inventors: Seyed Hesameddin Najafi Shoushtari, Kuen-Han Lin, Yanghai Tsin
-
Patent number: 10379612
Abstract: An electronic device may have a display and a gaze tracking system. Control circuitry in the electronic device can produce a saliency map in which items of visual interest are identified among content that has been displayed on a display in the electronic device. The saliency map may identify items such as selectable buttons, text, and other items of visual interest. User input such as mouse clicks, voice commands, and other commands may be used by the control circuitry in identifying when a user is gazing on particular items within the displayed content. Information on a user's actual on-screen point of gaze that is inferred using the saliency map information and user input can be compared to measured eye position information from the gaze tracking system to calibrate the gaze tracking system during normal operation of the electronic device.
Type: Grant
Filed: November 29, 2017
Date of Patent: August 13, 2019
Assignee: Apple Inc.
Inventors: Nicolas P. Bonnier, William Riedel, Jiaying Wu, Martin P. Grunthaner, Seyed Hesameddin Najafi Shoushtari
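The core of the calibration idea is comparing where the user is inferred to be looking (a salient item they just clicked) against what the tracker measured. A minimal sketch, assuming the correction is a constant pixel offset (real calibrations typically fit a richer model):

```python
def estimate_offset(inferred_points, measured_points):
    """Average difference between saliency-inferred and measured gaze
    points, usable as a runtime calibration correction (illustrative)."""
    n = len(inferred_points)
    dx = sum(i[0] - m[0] for i, m in zip(inferred_points, measured_points)) / n
    dy = sum(i[1] - m[1] for i, m in zip(inferred_points, measured_points)) / n
    return dx, dy

# User clicked three salient buttons; the tracker read gaze ~2 px left
# of each inferred fixation point.
inferred = [(100, 50), (200, 80), (300, 120)]
measured = [(98, 50), (198, 80), (298, 120)]
print(estimate_offset(inferred, measured))  # → (2.0, 0.0)
```

Because the samples come from ordinary interaction (clicks, voice commands), the tracker can be recalibrated continuously without an explicit calibration session, which is the point the abstract emphasizes.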
-
Patent number: 10372968
Abstract: A method for guiding a robot equipped with a camera to facilitate three-dimensional (3D) reconstruction through sampling based planning includes recognizing and localizing an object in a two-dimensional (2D) image. The method also includes computing 3D depth maps for the localized object. A 3D object map is constructed from the depth maps. A sampling based structure is grown around the 3D object map and a cost is assigned to each edge of the sampling based structure. The sampling based structure may be searched to determine a lowest cost sequence of edges that may, in turn be used to guide the robot.
Type: Grant
Filed: June 24, 2016
Date of Patent: August 6, 2019
Assignee: QUALCOMM Incorporated
Inventors: Aliakbar Aghamohammadi, Seyed Hesameddin Najafi Shoushtari, Regan Blythe Towal
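Once the sampling-based structure is grown and each edge carries a cost, finding the "lowest cost sequence of edges" is a shortest-path search. A sketch using Dijkstra's algorithm over a toy roadmap (the node names and costs are made up; the patent does not specify the search algorithm):

```python
import heapq

def lowest_cost_path(edges, start, goal):
    """Dijkstra search over a sampling-based roadmap whose edge costs
    would encode reconstruction effort or benefit (illustrative)."""
    graph = {}
    for a, b, cost in edges:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    return None

# Two routes from "s" to "g": via "a" (total cost 2) or direct (cost 5).
edges = [("s", "a", 1.0), ("a", "g", 1.0), ("s", "g", 5.0)]
print(lowest_cost_path(edges, "s", "g"))  # → (2.0, ['s', 'a', 'g'])
```

The resulting edge sequence is what would be handed to the robot controller as a viewpoint path for further 3D reconstruction.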
-
Publication number: 20170213070
Abstract: A method for guiding a robot equipped with a camera to facilitate three-dimensional (3D) reconstruction through sampling based planning includes recognizing and localizing an object in a two-dimensional (2D) image. The method also includes computing 3D depth maps for the localized object. A 3D object map is constructed from the depth maps. A sampling based structure is grown around the 3D object map and a cost is assigned to each edge of the sampling based structure. The sampling based structure may be searched to determine a lowest cost sequence of edges that may, in turn be used to guide the robot.
Type: Application
Filed: June 24, 2016
Publication date: July 27, 2017
Inventors: Aliakbar Aghamohammadi, Seyed Hesameddin Najafi Shoushtari, Regan Blythe Towal
-
Patent number: 9626590
Abstract: Methods, systems, computer-readable media, and apparatuses for fast cost aggregation for dense stereo matching are presented. One example method includes the steps of receiving first and second images of a scene; rectifying the images; computing a cost volume based on the first and second images; subsampling the cost volume to generate a subsampled cost volume; for each pixel, p, in the subsampled cost volume, determining one or more local extrema in the subsampled cost volume for each neighboring pixel, q, within a window centered on the pixel, p; for each pixel, p, performing cost aggregation using the one or more local extrema; performing cross checking to identify matching pixels; responsive to identifying unmatched pixels, performing gap-filling for the unmatched pixels to generate a disparity map; and generating and storing a depth map from the disparity map.
Type: Grant
Filed: September 18, 2015
Date of Patent: April 18, 2017
Assignee: QUALCOMM Incorporated
Inventors: Seyed Hesameddin Najafi Shoushtari, Murali Ramaswamy Chari
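The first steps of the claimed pipeline (cost volume, then a per-pixel disparity decision) can be sketched on 1-D rows standing in for rectified image rows. This sketch uses simple winner-take-all in place of the patented aggregation over local extrema, which is the part the patent actually claims:

```python
def cost_volume(left, right, max_disp):
    """Per-pixel matching cost |L(x) - R(x - d)| for each disparity d
    (1-D rows stand in for rectified images)."""
    w = len(left)
    return [[abs(left[x] - right[x - d]) if x - d >= 0 else float("inf")
             for d in range(max_disp + 1)] for x in range(w)]

def winner_take_all(volume):
    """Pick the lowest-cost disparity per pixel. The full pipeline
    aggregates costs over local extrema before this step and then
    cross-checks and gap-fills the result."""
    return [min(range(len(costs)), key=costs.__getitem__) for costs in volume]

left = [10, 20, 30, 40, 50]
right = [20, 30, 40, 50, 60]   # same ramp shifted by one pixel
# The leftmost pixel has no valid match at d=1, so it falls back to d=0;
# cross checking would flag it and gap-filling would repair it.
print(winner_take_all(cost_volume(left, right, max_disp=2)))  # → [0, 1, 1, 1, 1]
```

Subsampling the cost volume and restricting aggregation to local extrema is what makes the patented method "fast" relative to aggregating over every cost sample.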
-
Publication number: 20170083787
Abstract: Methods, systems, computer-readable media, and apparatuses for fast cost aggregation for dense stereo matching are presented. One example method includes the steps of receiving first and second images of a scene; rectifying the images; computing a cost volume based on the first and second images; subsampling the cost volume to generate a subsampled cost volume; for each pixel, p, in the subsampled cost volume, determining one or more local extrema in the subsampled cost volume for each neighboring pixel, q, within a window centered on the pixel, p; for each pixel, p, performing cost aggregation using the one or more local extrema; performing cross checking to identify matching pixels; responsive to identifying unmatched pixels, performing gap-filling for the unmatched pixels to generate a disparity map; and generating and storing a depth map from the disparity map.
Type: Application
Filed: September 18, 2015
Publication date: March 23, 2017
Inventors: Seyed Hesameddin Najafi Shoushtari, Murali Ramaswamy Chari
-
Patent number: 9582896
Abstract: A vision based tracking system in a mobile platform tracks objects using groups of detected lines. The tracking system detects lines in a captured image of the object to be tracked. Groups of lines are formed from the detected lines. The groups of lines may be formed by computing intersection points of the detected lines and using the intersection points to identify connected lines, where the groups of lines are formed using connected lines. A graph of the detected lines may be constructed and intersection points identified. Interesting subgraphs are generated using the connections and the groups of lines are formed from the interesting subgraphs. Once the groups of lines are formed, the groups of lines are used to track the object, e.g., by comparing the groups of lines in a current image of the object to groups of lines in a previous image of the object.
Type: Grant
Filed: March 9, 2012
Date of Patent: February 28, 2017
Assignee: QUALCOMM Incorporated
Inventors: Sheng Yi, Ashwin Swaminathan, Bolan Jiang, Seyed Hesameddin Najafi Shoushtari
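The intersection-then-group step can be sketched with infinite lines in implicit form. The grouping below is deliberately crude (a line joins the first group it intersects, without transitive merging) and stands in for the patent's subgraph construction:

```python
def intersection(l1, l2):
    """Intersection point of two lines given as (a, b, c) with
    a*x + b*y + c = 0; None if the lines are parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)

def group_connected(lines):
    """Group lines that intersect. A full implementation would build the
    connectivity graph and merge groups transitively; this sketch only
    attaches each line to the first group it touches."""
    groups = []
    for i, li in enumerate(lines):
        placed = False
        for g in groups:
            if any(intersection(li, lines[j]) for j in g):
                g.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return groups

# Two parallel horizontals (y=0, y=5) plus one vertical (x=0) that
# crosses both; the vertical joins the first group it touches.
lines = [(0, 1, 0), (0, 1, -5), (1, 0, 0)]
print(group_connected(lines))  # → [[0, 2], [1]]
```

Tracking then amounts to matching these line groups between consecutive frames, which is more stable than matching individual lines.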
-
Patent number: 9406137
Abstract: Disclosed embodiments pertain to apparatus, systems, and methods for robust feature based tracking. In some embodiments, a score may be computed for a camera captured current image comprising a target object. The score may be based on one or more metrics determined from a comparison of features in the current image and a prior image captured by the camera. The comparison may be based on an estimated camera pose for the current image. In some embodiments, one of a point based, an edge based, or a combined point and edge based feature correspondence method may be selected based on a comparison of the score with a point threshold and/or a line threshold, the point and line thresholds being obtained from a model of the target. The camera pose may be refined by establishing feature correspondences using the selected method between the current image and a model image.
Type: Grant
Filed: May 15, 2014
Date of Patent: August 2, 2016
Assignee: QUALCOMM Incorporated
Inventors: Dheeraj Ahuja, Kiyoung Kim, Yanghai Tsin, Seyed Hesameddin Najafi Shoushtari
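The selection logic described in the abstract is a threshold comparison. The exact decision rule below is a guess for illustration; the patent only says the choice is based on comparing the score against point and/or line thresholds taken from the target model:

```python
def select_method(score, point_threshold, line_threshold):
    """Choose a feature-correspondence strategy from an image score.
    Hypothetical rule: rich point texture -> point features; moderate ->
    combined points and edges; weak texture -> edges only."""
    if score >= point_threshold:
        return "point"
    if score >= line_threshold:
        return "combined"
    return "edge"

print(select_method(0.9, 0.8, 0.4))  # → point
print(select_method(0.6, 0.8, 0.4))  # → combined
print(select_method(0.2, 0.8, 0.4))  # → edge
```

Switching correspondence methods per frame is what makes the tracker robust: point features fail on textureless objects, while edge features fail on cluttered ones, and the score picks whichever regime the current frame is in.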
-
Publication number: 20140369557
Abstract: Disclosed embodiments pertain to feature based tracking. In some embodiments, a camera pose may be obtained relative to a tracked object in a first image and a predicted camera pose relative to the tracked object may be determined for a second image subsequent to the first image based, in part, on a motion model of the tracked object. An updated SE(3) camera pose may then be obtained based, in part on the predicted camera pose, by estimating a plane induced homography using an equation of a dominant plane of the tracked object, wherein the plane induced homography is used to align a first lower resolution version of the first image and a first lower resolution version of the second image by minimizing the sum of their squared intensity differences. A feature tracker may be initialized with the updated SE(3) camera pose.
Type: Application
Filed: April 28, 2014
Publication date: December 18, 2014
Applicant: QUALCOMM Incorporated
Inventors: Guy-Richard Kayombya, Seyed Hesameddin Najafi Shoushtari, Dheeraj Ahuja, Yanghai Tsin
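The alignment objective in the abstract (minimize the sum of squared intensity differences between low-resolution images) can be shown on a toy 1-D case. A pure pixel shift is the simplest member of the homography family; the described method optimizes the full plane-induced warp rather than brute-forcing a translation:

```python
def ssd(a, b):
    """Sum of squared intensity differences between two equal-size patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_shift(prev, curr, max_shift=2):
    """Brute-force the 1-D shift minimizing per-pixel SSD over the
    overlapping region (illustrative stand-in for homography alignment)."""
    w = len(prev)
    best = None
    for s in range(-max_shift, max_shift + 1):
        overlap = [(prev[i], curr[i + s]) for i in range(w) if 0 <= i + s < w]
        cost = ssd([p for p, _ in overlap],
                   [c for _, c in overlap]) / len(overlap)
        if best is None or cost < best[1]:
            best = (s, cost)
    return best[0]

prev_row = [0, 0, 9, 0, 0]
curr_row = [0, 0, 0, 9, 0]   # scene moved one pixel to the right
print(best_shift(prev_row, curr_row))  # → 1
```

Running the alignment on lower-resolution images, as the abstract specifies, keeps this dense intensity comparison cheap while still producing a pose good enough to initialize the feature tracker.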
-
Publication number: 20140368645
Abstract: Disclosed embodiments pertain to apparatus, systems, and methods for robust feature based tracking. In some embodiments, a score may be computed for a camera captured current image comprising a target object. The score may be based on one or more metrics determined from a comparison of features in the current image and a prior image captured by the camera. The comparison may be based on an estimated camera pose for the current image. In some embodiments, one of a point based, an edge based, or a combined point and edge based feature correspondence method may be selected based on a comparison of the score with a point threshold and/or a line threshold, the point and line thresholds being obtained from a model of the target. The camera pose may be refined by establishing feature correspondences using the selected method between the current image and a model image.
Type: Application
Filed: May 15, 2014
Publication date: December 18, 2014
Applicant: QUALCOMM Incorporated
Inventors: Dheeraj Ahuja, Kiyoung Kim, Yanghai Tsin, Seyed Hesameddin Najafi Shoushtari
-
Publication number: 20140270362
Abstract: Embodiments include detection or relocalization of an object in a current image from a reference image, such as using a simple and relatively fast and invariant edge orientation based edge feature extraction, then a weak initial matching combined with a strong contextual filtering framework, and then a pose estimation framework based on edge segments. Embodiments include fast edge-based object detection using instant learning with a sufficiently large coverage area for object re-localization. Embodiments provide a good trade-off between computational efficiency of the extraction and matching processes.
Type: Application
Filed: March 15, 2013
Publication date: September 18, 2014
Applicant: QUALCOMM Incorporated
Inventors: Seyed Hesameddin Najafi Shoushtari, Yanghai Tsin, Murali Ramaswamy Chari, Serafin Diaz Spindola
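An "edge orientation based" feature, the kind of simple and fast descriptor the abstract alludes to, can be sketched as a histogram of quantized gradient orientations. The binning scheme and bin count here are illustrative, not taken from the publication:

```python
import math

def orientation_histogram(gradients, bins=8):
    """Quantize edge gradient directions into an orientation histogram.
    Angles are folded into [0, pi) so opposite gradient directions map
    to the same edge orientation."""
    hist = [0] * bins
    for gx, gy in gradients:
        angle = math.atan2(gy, gx) % math.pi   # orientation, not direction
        hist[min(int(angle / math.pi * bins), bins - 1)] += 1
    return hist

# Gradients sampled along a mostly horizontal edge (gradient points
# vertically), plus one sample from a vertical edge.
grads = [(0.0, 1.0), (0.1, 1.0), (0.0, -1.0), (1.0, 0.0)]
print(orientation_histogram(grads))  # → [1, 0, 0, 1, 2, 0, 0, 0]
```

Descriptors like this are cheap enough to extract and match at frame rate, which fits the abstract's emphasis on the trade-off between extraction and matching cost.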
-
Publication number: 20130057700
Abstract: A vision based tracking system in a mobile platform tracks objects using groups of detected lines. The tracking system detects lines in a captured image of the object to be tracked. Groups of lines are formed from the detected lines. The groups of lines may be formed by computing intersection points of the detected lines and using the intersection points to identify connected lines, where the groups of lines are formed using connected lines. A graph of the detected lines may be constructed and intersection points identified. Interesting subgraphs are generated using the connections and the groups of lines are formed from the interesting subgraphs. Once the groups of lines are formed, the groups of lines are used to track the object, e.g., by comparing the groups of lines in a current image of the object to groups of lines in a previous image of the object.
Type: Application
Filed: March 9, 2012
Publication date: March 7, 2013
Applicant: QUALCOMM Incorporated
Inventors: Sheng Yi, Ashwin Swaminathan, Bolan Jiang, Seyed Hesameddin Najafi Shoushtari