Patents by Inventor Yimu Wang
Yimu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12054173
Abstract: Provided are methods for semantic annotation of sensor data using unreliable map annotation inputs, which can include training a machine learning model to accept inputs including images representing sensor data for a geographic area and unreliable semantic annotations for the geographic area. The machine learning model can be trained against validated semantic annotations for the geographic area, such that subsequent to training, additional images representing sensor data and additional unreliable semantic annotations can be passed through the trained model to provide predicted semantic annotations for the additional images. Systems and computer program products are also provided.
Type: Grant
Filed: July 26, 2021
Date of Patent: August 6, 2024
Assignee: Motional AD LLC
Inventors: Paul Schmitt, Yimu Wang
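
The abstract above describes the training setup only in general terms. Below is a minimal PyTorch sketch of one such setup, assuming a small convolutional network, a one-channel unreliable-annotation mask, and cross-entropy against validated per-pixel labels; the layer sizes, names, and loss choice are illustrative assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class AnnotationRefiner(nn.Module):
    """Consumes an image plus an unreliable annotation mask; predicts per-pixel labels."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # 3 image channels + 1 channel of unreliable annotations.
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=3, padding=1),
        )

    def forward(self, image, unreliable_mask):
        return self.net(torch.cat([image, unreliable_mask], dim=1))

model = AnnotationRefiner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for sensor images, unreliable masks, and validated labels.
image = torch.rand(2, 3, 64, 64)
unreliable = torch.rand(2, 1, 64, 64)
validated = torch.randint(0, 4, (2, 64, 64))

optimizer.zero_grad()
loss = loss_fn(model(image, unreliable), validated)
loss.backward()
optimizer.step()
```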
-
Patent number: 12031829
Abstract: Among other things, techniques are described for identifying sensor data from a sensor of a first vehicle that includes information related to a pose of at least two other vehicles on a road. The technique further includes determining a geometry of a portion of the road based at least in part on the information about the pose of the at least two other vehicles. The technique further includes comparing the geometry of the portion of the road with map data to identify a match between the portion of the road and a portion of the map data. The technique further includes determining a pose of the first vehicle relative to the map data based at least in part on the match.
Type: Grant
Filed: December 3, 2020
Date of Patent: July 9, 2024
Assignee: Motional AD LLC
Inventors: Yimu Wang, Ning Xu, Ajay Charan, Yih-Jye Hsu
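
As a rough illustration of the matching step, the sketch below treats the observed vehicle positions as samples of the road geometry and slides them along a map centerline to find the best-fitting offset. The brute-force search, the translation-only alignment, and all names are simplifying assumptions; the patent does not disclose its matching algorithm at this level.

```python
import numpy as np

def best_map_offset(observed_xy: np.ndarray, centerline: np.ndarray) -> int:
    """Slide the observed geometry along the map centerline; return the index
    where the mean-centered shapes agree best (sum of squared differences)."""
    n = len(observed_xy)
    obs = observed_xy - observed_xy.mean(axis=0)
    best_idx, best_cost = 0, np.inf
    for i in range(len(centerline) - n + 1):
        window = centerline[i:i + n]
        cost = np.sum((obs - (window - window.mean(axis=0))) ** 2)
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

# Toy data: a curved centerline and five noisy "other vehicle" positions drawn from it.
t = np.linspace(0.0, 10.0, 200)
centerline = np.stack([t, 0.1 * t**2], axis=1)
rng = np.random.default_rng(0)
observed = centerline[120:125] + rng.normal(0.0, 0.02, (5, 2))
print(best_map_offset(observed, centerline))  # ~120
```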
-
Publication number: 20230421908
Abstract: An eye tracking system comprising: a plurality of light sources that are arranged to illuminate a user's eye when the eye tracking system is in use; and a controller configured to: receive a first image of a surface, acquired while the surface is illuminated by a first set of the plurality of light sources; receive a second image of the surface, acquired while the surface is illuminated by a second set of the plurality of light sources, wherein the second set of light sources is different from the first set of light sources; process the first image and the second image to determine an illumination contribution of one or more of the light sources; and determine light-source control signaling for one or more of the light sources based on the determined illumination contribution of the one or more of the light sources.
Type: Application
Filed: June 21, 2023
Publication date: December 28, 2023
Inventors: Pravin Kumar Rana, Yimu Wang, Daniel Tornéus, Gilfredo Remon Salazar, Pontus Christian Walck
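
A small numpy sketch of one way to read "illumination contribution": if illumination adds approximately linearly, differencing the two images isolates the sources lit in one set but not the other, and a toy control signal can be derived from the result. The additive-light assumption and the control rule are illustrative, not the patent's.

```python
import numpy as np

def contribution_of_unique_sources(image_a: np.ndarray, image_b: np.ndarray) -> np.ndarray:
    """Per-pixel contribution of the sources lit in image A but not in image B,
    assuming illumination adds approximately linearly."""
    return np.clip(image_a.astype(float) - image_b.astype(float), 0.0, None)

def drive_scale(contribution: np.ndarray, target_mean: float = 80.0) -> float:
    """Toy control signal: scale a source's drive level toward a target brightness."""
    mean = float(contribution.mean())
    return target_mean / mean if mean > 0 else 1.0

# Toy images: set A lights sources {1, 2}; set B lights source {2} only.
rng = np.random.default_rng(0)
img_a = rng.uniform(60.0, 120.0, (8, 8))
img_b = img_a - rng.uniform(20.0, 40.0, (8, 8))  # remove source 1's share
delta = contribution_of_unique_sources(img_a, img_b)
print(round(drive_scale(delta), 2))  # scale factor ~= target / mean contribution
```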
-
Patent number: 11823413
Abstract: An eye tracking system configured to: receive a plurality of right eye images of a right eye of a user; receive a plurality of left eye images of a left eye of the user, each left eye image corresponding to a right eye image in the plurality of right eye images; detect a pupil and determine an associated pupil signal for each of the plurality of right eye images and each of the plurality of left eye images; calculate a right eye pupil variation of the pupil signals for the plurality of right eye images and a left eye pupil variation of the pupil signals for the plurality of left eye images; and determine a right eye weighting and a left eye weighting based on the right eye pupil variation and the left eye pupil variation.
Type: Grant
Filed: January 25, 2023
Date of Patent: November 21, 2023
Assignee: Tobii AB
Inventors: Mikael Rosell, Simon Johansson, Pravin Kumar Rana, Yimu Wang, Gilfredo Remon Salazar
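
The weighting step lends itself to a compact sketch: weight each eye inversely to the variation of its pupil signal, so the noisier eye contributes less to a combined gaze estimate. Inverse-variance weighting is a plausible reading of the abstract, not a confirmed detail.

```python
import numpy as np

def eye_weightings(right_pupil: np.ndarray, left_pupil: np.ndarray, eps: float = 1e-9):
    """Normalized weights, inversely proportional to each eye's pupil-signal variance."""
    inv_r = 1.0 / (np.var(right_pupil) + eps)
    inv_l = 1.0 / (np.var(left_pupil) + eps)
    total = inv_r + inv_l
    return inv_r / total, inv_l / total

# A steady right-eye pupil signal and a jittery left-eye one.
right = np.array([3.1, 3.0, 3.1, 3.0, 3.1])
left = np.array([3.4, 2.1, 3.9, 1.8, 3.3])
w_r, w_l = eye_weightings(right, left)
print(f"right={w_r:.3f} left={w_l:.3f}")  # the steadier right eye dominates
```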
-
Patent number: 11802967
Abstract: A sensor data fusion system for a vehicle with multiple sensors includes a first sensor, a second sensor, and a controller circuit. The first sensor is configured to output a first frame of data and a subsequent frame of data indicative of objects present in a first field of view. The first frame is characterized by a first time stamp, and the subsequent frame by a subsequent time stamp different from the first time stamp. The second sensor is configured to output a second frame of data indicative of objects present in a second field of view that overlaps the first field of view. The second frame is characterized by a second time stamp temporally located between the first time stamp and the subsequent time stamp. The controller circuit is configured to synthesize an interpolated frame from the first frame and the subsequent frame. The interpolated frame is characterized by an interpolated time stamp that corresponds to the second time stamp.
Type: Grant
Filed: August 15, 2022
Date of Patent: October 31, 2023
Assignee: Motional AD LLC
Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
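
The interpolation step can be illustrated directly. Assuming object states that blend linearly (e.g. positions), synthesizing the interpolated frame reduces to a weighted average at the second sensor's time stamp; the per-object state layout below is an assumption for illustration.

```python
import numpy as np

def synthesize_interpolated_frame(frame_a, t_a, frame_b, t_b, t_target):
    """Linearly interpolate object state between two frames of the first sensor
    so it can be fused with the second sensor's frame at t_target."""
    alpha = (t_target - t_a) / (t_b - t_a)
    return (1.0 - alpha) * frame_a + alpha * frame_b

# Two lidar frames 100 ms apart; a camera frame lands at 40 ms between them.
frame_0 = np.array([[10.0, 2.0], [25.0, -1.0]])  # object positions at t = 0.00 s
frame_1 = np.array([[11.0, 2.1], [24.0, -1.2]])  # object positions at t = 0.10 s
print(synthesize_interpolated_frame(frame_0, 0.00, frame_1, 0.10, 0.04))
```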
-
Patent number: 11740360
Abstract: Among other things, techniques are described for identifying, in a light detection and ranging (LiDAR) scan line, a first LiDAR data point and a plurality of LiDAR data points within a vicinity of the first LiDAR data point. The techniques may further include identifying, based on a comparison of the first LiDAR data point to at least one LiDAR data point of the plurality of LiDAR data points, a coefficient of the first LiDAR data point, wherein the coefficient is related to image smoothness. The techniques may further include identifying, based on a comparison of the coefficient to a threshold, whether to include the first LiDAR data point in an updated LiDAR scan line, and then identifying, based on the updated LiDAR scan line, a location of an autonomous vehicle.
Type: Grant
Filed: November 2, 2020
Date of Patent: August 29, 2023
Assignee: Motional AD LLC
Inventors: Ajay Charan, Yimu Wang
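
The abstract does not publish the smoothness formula, so the sketch below uses the widely known LOAM-style curvature coefficient (net offset from the neighbors on the scan line, normalized by the point's range) as a stand-in, keeping a point only when its coefficient clears a threshold. The formula, the threshold direction, and all names are assumptions.

```python
import numpy as np

def smoothness_coefficient(scan_line: np.ndarray, i: int, k: int = 5) -> float:
    """LOAM-style curvature: the net offset of point i from its 2k scan-line
    neighbors, normalized by neighborhood size and the point's range."""
    p = scan_line[i]
    neighbors = np.vstack([scan_line[i - k:i], scan_line[i + 1:i + k + 1]])
    return np.linalg.norm((neighbors - p).sum(axis=0)) / (
        len(neighbors) * np.linalg.norm(p))

def updated_scan_line(scan_line: np.ndarray, threshold: float = 0.005, k: int = 5) -> np.ndarray:
    """Keep only interior points whose coefficient falls below the threshold."""
    return np.array([scan_line[i] for i in range(k, len(scan_line) - k)
                     if smoothness_coefficient(scan_line, i, k) < threshold])

# A straight scan line with one outlier spike at index 25.
line = np.stack([np.linspace(5.0, 6.0, 50), np.zeros(50), np.zeros(50)], axis=1)
line[25] += (0.5, 0.0, 0.0)
print(len(updated_scan_line(line)))  # the spike and its contaminated neighborhood drop out
```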
-
Patent number: 11715237
Abstract: Provided are methods for deep learning-based camera calibration, which can include receiving first and second images captured by a camera, processing the first image using a first neural network to determine a depth of the first image, processing the first image and the second image using a second neural network to determine a transformation between a pose of the camera for the first image and a pose of the camera for the second image, generating a projection image based on the depth of the first image, the transformation of the pose of the camera, and intrinsic parameters of the camera, comparing the second image and the projection image to determine a reprojection error, and adjusting at least one of the intrinsic parameters of the camera based on the reprojection error. Systems and computer program products are also provided.
Type: Grant
Filed: April 4, 2022
Date of Patent: August 1, 2023
Assignee: Motional AD LLC
Inventors: Yimu Wang, Wanzhi Zhang
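
The geometric core of the method, generating the projection image, can be sketched independently of the two networks. The sketch below backprojects pixels using a depth map, applies the relative pose, and reprojects with the intrinsics; in the full method the depth and pose would come from the first and second neural networks, and the photometric difference at the projected coordinates would drive gradient updates to K. Shapes and names are illustrative assumptions.

```python
import torch

def reproject(depth, K, T):
    """Backproject image-1 pixels with depth, move them by relative pose T,
    and project into image 2 with intrinsics K. Returns (H, W, 2) pixel coords."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
    cam = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)   # 3D points, frame 1
    cam_h = torch.cat([cam, torch.ones(1, cam.shape[1])], dim=0)
    cam2 = (T @ cam_h)[:3]                                   # 3D points, frame 2
    pix2 = K @ cam2
    return (pix2[:2] / pix2[2]).T.reshape(H, W, 2)

# Sanity check: flat depth and identity pose map every pixel to itself.
K = torch.tensor([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
coords = reproject(torch.full((64, 64), 2.0), K, torch.eye(4))
```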
-
Patent number: 11681366
Abstract: Images of an eye are captured by a camera. For each of the images, gaze data is obtained and a position of a pupil center is estimated in the image. The gaze data indicates a gaze point and/or gaze direction of the eye when the image was captured. A mapping is calibrated using the obtained gaze data and the estimated positions of the pupil center. The mapping maps positions of the pupil center in images captured by the camera to gaze points at a surface, or to gaze directions. A further image of the eye is captured by the camera. A position of the pupil center is estimated in the further image. Gaze tracking is performed using the calibrated mapping and the estimated position of the pupil center in the further image. These steps may, for example, be performed at an HMD (head-mounted display).
Type: Grant
Filed: January 13, 2022
Date of Patent: June 20, 2023
Assignee: Tobii AB
Inventors: Tiesheng Wang, Gilfredo Remon Salazar, Yimu Wang, Pravin Kumar Rana, Johannes Kron, Mark Ryan, Torbjörn Sundberg
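
The calibration step maps pupil-center positions to gaze points, but the abstract does not fix the form of the mapping. The sketch below assumes a simple affine map fit by least squares; the functional form and all names are illustrative assumptions.

```python
import numpy as np

def calibrate_mapping(pupil_xy: np.ndarray, gaze_xy: np.ndarray) -> np.ndarray:
    """Least-squares fit of an affine map from pupil-center positions to gaze
    points on a surface. Returns a 3x2 matrix M with [x, y, 1] @ M = gaze."""
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    M, *_ = np.linalg.lstsq(A, gaze_xy, rcond=None)
    return M

def track_gaze(M: np.ndarray, pupil_xy: np.ndarray) -> np.ndarray:
    """Apply the calibrated mapping to a new pupil-center estimate."""
    return np.append(pupil_xy, 1.0) @ M

# Calibration: four pupil positions and the gaze points recorded with them.
pupils = np.array([[10.0, 12.0], [30.0, 12.0], [10.0, 28.0], [30.0, 28.0]])
gazes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
M = calibrate_mapping(pupils, gazes)
print(track_gaze(M, np.array([20.0, 20.0])))  # ~[0.5, 0.5]
```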
-
Patent number: 11593962
Abstract: An eye tracking system configured to: receive a plurality of right eye images of a right eye of a user; receive a plurality of left eye images of a left eye of the user, each left eye image corresponding to a right eye image in the plurality of right eye images; detect a pupil and determine an associated pupil signal for each of the plurality of right eye images and each of the plurality of left eye images; calculate a right eye pupil variation of the pupil signals for the plurality of right eye images and a left eye pupil variation of the pupil signals for the plurality of left eye images; and determine a right eye weighting and a left eye weighting based on the right eye pupil variation and the left eye pupil variation.
Type: Grant
Filed: December 29, 2020
Date of Patent: February 28, 2023
Assignee: Tobii AB
Inventors: Mikael Rosell, Simon Johansson, Pravin Kumar Rana, Yimu Wang, Gilfredo Remon Salazar
-
Publication number: 20230027369
Abstract: Provided are methods for semantic annotation of sensor data using unreliable map annotation inputs, which can include training a machine learning model to accept inputs including images representing sensor data for a geographic area and unreliable semantic annotations for the geographic area. The machine learning model can be trained against validated semantic annotations for the geographic area, such that subsequent to training, additional images representing sensor data and additional unreliable semantic annotations can be passed through the trained model to provide predicted semantic annotations for the additional images. Systems and computer program products are also provided.
Type: Application
Filed: July 26, 2021
Publication date: January 26, 2023
Inventors: Paul Schmitt, Yimu Wang
-
Patent number: 11556006
Abstract: Disclosed is a method for detecting a shadow in an image of an eye region of a user wearing a head-mounted device (HMD). The method comprises obtaining, from a camera of the HMD, an image of the eye region of the user wearing the HMD and determining an area of interest in the image, the area of interest comprising a plurality of subareas. The method further comprises determining a first brightness level for a first subarea of the plurality of subareas and a second brightness level for a second subarea of the plurality of subareas. The method further comprises comparing the first brightness level with the second brightness level and, based on the comparing, selectively generating a signal indicating a shadow.
Type: Grant
Filed: October 28, 2019
Date of Patent: January 17, 2023
Assignee: Tobii AB
Inventors: Yimu Wang, Ylva Björk, Joakim Zachrisson, Pravin Kumar Rana
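
A compact sketch of the comparison logic, assuming a fixed grid of subareas and a ratio test between the darkest and brightest subarea; the grid size and threshold are illustrative choices, not values from the patent.

```python
import numpy as np

def shadow_detected(eye_region: np.ndarray, grid=(4, 4), ratio: float = 0.6) -> bool:
    """Split the area of interest into subareas, compare brightness levels, and
    signal a shadow when one subarea is much darker than another."""
    H, W = eye_region.shape
    h, w = H // grid[0], W // grid[1]
    means = np.array([eye_region[r*h:(r+1)*h, c*w:(c+1)*w].mean()
                      for r in range(grid[0]) for c in range(grid[1])])
    return means.min() < ratio * means.max()

# Uniformly lit image vs. one with a dark band (e.g. an HMD lens shadow).
lit = np.full((64, 64), 150.0)
shadowed = lit.copy()
shadowed[:, :16] = 40.0
print(shadow_detected(lit), shadow_detected(shadowed))  # False True
```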
-
Publication number: 20220390957
Abstract: A sensor data fusion system for a vehicle with multiple sensors includes a first sensor, a second sensor, and a controller circuit. The first sensor is configured to output a first frame of data and a subsequent frame of data indicative of objects present in a first field of view. The first frame is characterized by a first time stamp, and the subsequent frame by a subsequent time stamp different from the first time stamp. The second sensor is configured to output a second frame of data indicative of objects present in a second field of view that overlaps the first field of view. The second frame is characterized by a second time stamp temporally located between the first time stamp and the subsequent time stamp. The controller circuit is configured to synthesize an interpolated frame from the first frame and the subsequent frame. The interpolated frame is characterized by an interpolated time stamp that corresponds to the second time stamp.
Type: Application
Filed: August 15, 2022
Publication date: December 8, 2022
Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
-
Publication number: 20220375129
Abstract: Provided are methods for deep learning-based camera calibration, which can include receiving first and second images captured by a camera, processing the first image using a first neural network to determine a depth of the first image, processing the first image and the second image using a second neural network to determine a transformation between a pose of the camera for the first image and a pose of the camera for the second image, generating a projection image based on the depth of the first image, the transformation of the pose of the camera, and intrinsic parameters of the camera, comparing the second image and the projection image to determine a reprojection error, and adjusting at least one of the intrinsic parameters of the camera based on the reprojection error. Systems and computer program products are also provided.
Type: Application
Filed: April 4, 2022
Publication date: November 24, 2022
Inventors: Yimu Wang, Wanzhi Zhang
-
Publication number: 20220326382
Abstract: Methods, apparatus, and systems for adaptive point cloud filtering for an autonomous vehicle are disclosed. At least one processor receives multiple LiDAR points from a LiDAR system. The multiple LiDAR points represent at least one object in an environment traveled by the vehicle. The at least one processor determines a Euclidean distance of each LiDAR point. The at least one processor compares the Euclidean distance of each LiDAR point with a respective sampled Euclidean distance from a standard normal distribution of Euclidean distances. Responsive to the Euclidean distance of a LiDAR point being less than the respective sampled Euclidean distance, the at least one processor removes the LiDAR point from the multiple LiDAR points to generate a point cloud. The at least one processor operates the vehicle based on the point cloud.
Type: Application
Filed: April 9, 2021
Publication date: October 13, 2022
Inventors: Yimu Wang, Ning Xu
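
The abstract's "standard normal distribution of Euclidean distances" is ambiguous; the sketch below reads it as a normal distribution fitted to the cloud's own distance statistics, drawing one sampled distance per point and dropping points that fall below their sample. That reading, and every name here, is an assumption for illustration only.

```python
import numpy as np

def adaptive_point_cloud_filter(points: np.ndarray, rng=None) -> np.ndarray:
    """Drop each point whose Euclidean distance falls below a per-point threshold
    drawn from a normal distribution over the cloud's own distances."""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(points, axis=1)
    sampled = rng.normal(d.mean(), d.std(), size=len(d))
    return points[d >= sampled]

cloud = np.random.default_rng(1).uniform(-50.0, 50.0, (1000, 3))
print(len(adaptive_point_cloud_filter(cloud)))  # roughly half the points survive
```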
-
Patent number: 11435752
Abstract: A sensor data fusion system for a vehicle with multiple sensors includes a first sensor, a second sensor, and a controller circuit. The first sensor is configured to output a first frame of data and a subsequent frame of data indicative of objects present in a first field of view. The first frame is characterized by a first time stamp, and the subsequent frame by a subsequent time stamp different from the first time stamp. The second sensor is configured to output a second frame of data indicative of objects present in a second field of view that overlaps the first field of view. The second frame is characterized by a second time stamp temporally located between the first time stamp and the subsequent time stamp. The controller circuit is configured to synthesize an interpolated frame from the first frame and the subsequent frame. The interpolated frame is characterized by an interpolated time stamp that corresponds to the second time stamp.
Type: Grant
Filed: March 26, 2018
Date of Patent: September 6, 2022
Assignee: Motional AD LLC
Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
-
Publication number: 20220207768
Abstract: An eye tracking system configured to: receive a plurality of right eye images of a right eye of a user; receive a plurality of left eye images of a left eye of the user, each left eye image corresponding to a right eye image in the plurality of right eye images; detect a pupil and determine an associated pupil signal for each of the plurality of right eye images and each of the plurality of left eye images; calculate a right eye pupil variation of the pupil signals for the plurality of right eye images and a left eye pupil variation of the pupil signals for the plurality of left eye images; and determine a right eye weighting and a left eye weighting based on the right eye pupil variation and the left eye pupil variation.
Type: Application
Filed: December 29, 2020
Publication date: June 30, 2022
Applicant: Tobii AB
Inventors: Mikael Rosell, Simon Johansson, Pravin Kumar Rana, Yimu Wang, Gilfredo Remon Salazar
-
Publication number: 20220178700
Abstract: Among other things, techniques are described for identifying sensor data from a sensor of a first vehicle that includes information related to a pose of at least two other vehicles on a road. The technique further includes determining a geometry of a portion of the road based at least in part on the information about the pose of the at least two other vehicles. The technique further includes comparing the geometry of the portion of the road with map data to identify a match between the portion of the road and a portion of the map data. The technique further includes determining a pose of the first vehicle relative to the map data based at least in part on the match.
Type: Application
Filed: December 3, 2020
Publication date: June 9, 2022
Inventors: Yimu Wang, Ning Xu, Ajay Charan, Yih-Jye Hsu
-
Publication number: 20220137219
Abstract: Among other things, techniques are described for identifying, in a light detection and ranging (LiDAR) scan line, a first LiDAR data point and a plurality of LiDAR data points within a vicinity of the first LiDAR data point. The techniques may further include identifying, based on a comparison of the first LiDAR data point to at least one LiDAR data point of the plurality of LiDAR data points, a coefficient of the first LiDAR data point, wherein the coefficient is related to image smoothness. The techniques may further include identifying, based on a comparison of the coefficient to a threshold, whether to include the first LiDAR data point in an updated LiDAR scan line, and then identifying, based on the updated LiDAR scan line, a location of an autonomous vehicle.
Type: Application
Filed: November 2, 2020
Publication date: May 5, 2022
Inventors: Ajay Charan, Yimu Wang
-
Publication number: 20220137704
Abstract: Images of an eye are captured by a camera. For each of the images, gaze data is obtained and a position of a pupil center is estimated in the image. The gaze data indicates a gaze point and/or gaze direction of the eye when the image was captured. A mapping is calibrated using the obtained gaze data and the estimated positions of the pupil center. The mapping maps positions of the pupil center in images captured by the camera to gaze points at a surface, or to gaze directions. A further image of the eye is captured by the camera. A position of the pupil center is estimated in the further image. Gaze tracking is performed using the calibrated mapping and the estimated position of the pupil center in the further image. These steps may, for example, be performed at an HMD (head-mounted display).
Type: Application
Filed: January 13, 2022
Publication date: May 5, 2022
Applicant: Tobii AB
Inventors: Tiesheng Wang, Gilfredo Remon Salazar, Yimu Wang, Pravin Kumar Rana, Johannes Kron, Mark Ryan, Torbjörn Sundberg
-
Patent number: 11295477
Abstract: Provided are methods for deep learning-based camera calibration, which can include receiving first and second images captured by a camera, processing the first image using a first neural network to determine a depth of the first image, processing the first image and the second image using a second neural network to determine a transformation between a pose of the camera for the first image and a pose of the camera for the second image, generating a projection image based on the depth of the first image, the transformation of the pose of the camera, and intrinsic parameters of the camera, comparing the second image and the projection image to determine a reprojection error, and adjusting at least one of the intrinsic parameters of the camera based on the reprojection error. Systems and computer program products are also provided.
Type: Grant
Filed: May 19, 2021
Date of Patent: April 5, 2022
Assignee: Motional AD LLC
Inventors: Yimu Wang, Wanzhi Zhang