Patents by Inventor Yimu Wang

Yimu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12054173
    Abstract: Provided are methods for semantic annotation of sensor data using unreliable map annotation inputs, which can include training a machine learning model to accept inputs including images representing sensor data for a geographic area and unreliable semantic annotations for the geographic area. The machine learning model can be trained against validated semantic annotations for the geographic area, such that subsequent to training, additional images representing sensor data and additional unreliable semantic annotations can be passed through the trained model to provide predicted semantic annotations for the additional images. Systems and computer program products are also provided.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: August 6, 2024
    Assignee: Motional AD LLC
    Inventors: Paul Schmitt, Yimu Wang
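The core idea above, learning to turn unreliable annotations into validated ones, can be sketched very loosely in plain Python. This toy stand-in ignores the image input and the neural network entirely and just learns, per unreliable class label, which validated label it most often corresponds to; `learn_label_correction` and `predict` are illustrative names, not anything from the patent:

```python
from collections import Counter, defaultdict

def learn_label_correction(unreliable, validated):
    """Learn, for each unreliable class label, the validated label it
    most often corresponds to in the training data (a toy stand-in for
    the trained model)."""
    votes = defaultdict(Counter)
    for u, v in zip(unreliable, validated):
        votes[u][v] += 1
    return {u: c.most_common(1)[0][0] for u, c in votes.items()}

def predict(correction, unreliable):
    # Fall back to the unreliable label when the class was never seen
    # during training.
    return [correction.get(u, u) for u in unreliable]
```

A systematically mislabeled class (say, unreliable label 4 that is always validated as 7) is corrected after training, while unseen labels pass through unchanged.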
  • Patent number: 12031829
    Abstract: Among other things, techniques are described for identifying sensor data from a sensor of a first vehicle that includes information related to a pose of at least two other vehicles on a road. The technique further includes determining a geometry of a portion of the road based at least in part on the information about the pose of the at least two other vehicles. The technique further includes comparing the geometry of the portion of the road with map data to identify a match between the portion of the road and a portion of the map data. The technique further includes determining a pose of the first vehicle relative to the map data based at least in part on the match.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: July 9, 2024
    Assignee: Motional AD LLC
    Inventors: Yimu Wang, Ning Xu, Ajay Charan, Yih-Jye Hsu
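Once the observed road geometry is matched to the map, recovering the first vehicle's pose reduces to solving for the transform between the two coordinate frames. A minimal sketch, assuming a translation-only fit (aligned headings) and averaging over the matched vehicles; the function name is illustrative:

```python
def estimate_ego_pose(relative_obs, map_match):
    """Given other-vehicle positions observed relative to the ego vehicle
    and the matched absolute map positions of those same vehicles,
    recover the ego translation (headings assumed aligned)."""
    dx = [mx - rx for (rx, _), (mx, _) in zip(relative_obs, map_match)]
    dy = [my - ry for (_, ry), (_, my) in zip(relative_obs, map_match)]
    return (sum(dx) / len(dx), sum(dy) / len(dy))
```

A full implementation would also solve for rotation (e.g. a 2-D rigid registration), but the averaging step shows why two or more observed vehicles make the match overdetermined and therefore robust.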
  • Publication number: 20230421908
    Abstract: An eye tracking system comprising: a plurality of light sources that are arranged to illuminate a user's eye when the eye tracking system is in use; and a controller configured to: receive a first-image of a surface, acquired while the surface is illuminated by a first set of the plurality of light sources; receive a second-image of the surface, acquired while the surface is illuminated by a second set of the plurality of light sources, wherein the second set of light sources is different from the first set of light sources; process the first-image and the second-image to determine an illumination contribution of one or more of the light sources; and determine light-source-control-signaling for one or more of the light sources based on the determined illumination contribution of the one or more of the light sources.
    Type: Application
    Filed: June 21, 2023
    Publication date: December 28, 2023
    Inventors: Pravin Kumar Rana, Yimu Wang, Daniel Tornéus, Gilfredo Remon Salazar, Pontus Christian Walck
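The two-image comparison above can be sketched as a per-pixel difference: an image taken with a source on, minus an image taken with it off, isolates that source's contribution. This is a simplified stand-in (it assumes a linear sensor response and a static scene between exposures), and the dimming threshold is an invented tuning parameter, not anything from the patent:

```python
def illumination_contribution(img_with_source, img_without_source):
    """Per-pixel difference between an image lit with a given source on
    and one with it off isolates that source's contribution (assumes a
    linear sensor and a static scene between the two exposures)."""
    return [max(a - b, 0) for a, b in zip(img_with_source, img_without_source)]

def light_source_control(contribution, saturation_level=200):
    # Illustrative control policy: dim a source whose isolated
    # contribution saturates part of the image.
    return "dim" if max(contribution) > saturation_level else "keep"
```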
  • Patent number: 11823413
    Abstract: An eye tracking system configured to: receive a plurality of right eye images of a right eye of a user; receive a plurality of left eye images of a left eye of a user, each left eye image corresponding to a right eye image in the plurality of right eye images; detect a pupil and determine an associated pupil signal, for each of the plurality of right eye images and each of the plurality of left eye images; calculate a right eye pupil variation of the pupil signals for the plurality of right eye images and a left eye pupil variation of the pupil signals for the plurality of left eye images; and determine a right eye weighting and a left eye weighting based on the right eye pupil variation and the left eye pupil variation.
    Type: Grant
    Filed: January 25, 2023
    Date of Patent: November 21, 2023
    Assignee: Tobii AB
    Inventors: Mikael Rosell, Simon Johansson, Pravin Kumar Rana, Yimu Wang, Gilfredo Remon Salazar
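The per-eye weighting described above can be sketched with inverse-variance weighting: the eye whose pupil signal varies less (i.e. is tracked more stably) gets the larger weight. The abstract only requires the weights to be based on the variations; inverse variance is one natural, assumed choice:

```python
def pupil_variation(signals):
    # Population variance of a sequence of pupil signals.
    mean = sum(signals) / len(signals)
    return sum((s - mean) ** 2 for s in signals) / len(signals)

def eye_weights(right_signals, left_signals, eps=1e-9):
    """Weight each eye inversely to its pupil-signal variation so the
    steadier eye dominates a combined gaze estimate. eps guards against
    division by zero for a perfectly stable signal."""
    wr = 1.0 / (pupil_variation(right_signals) + eps)
    wl = 1.0 / (pupil_variation(left_signals) + eps)
    return wr / (wr + wl), wl / (wr + wl)
```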
  • Patent number: 11802967
    Abstract: A sensor data fusion system for a vehicle with multiple sensors includes a first-sensor, a second-sensor, and a controller-circuit. The first-sensor is configured to output a first-frame of data and a subsequent-frame of data indicative of objects present in a first-field-of-view. The first-frame is characterized by a first-time-stamp, the subsequent-frame of data characterized by a subsequent-time-stamp different from the first-time-stamp. The second-sensor is configured to output a second-frame of data indicative of objects present in a second-field-of-view that overlaps the first-field-of-view. The second-frame is characterized by a second-time-stamp temporally located between the first-time-stamp and the subsequent-time-stamp. The controller-circuit is configured to synthesize an interpolated-frame from the first-frame and the subsequent-frame. The interpolated-frame is characterized by an interpolated-time-stamp that corresponds to the second-time-stamp.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: October 31, 2023
    Assignee: Motional AD LLC
    Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
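The synthesis step above is, at its simplest, a linear interpolation of object state between the first sensor's two frames, evaluated at the second sensor's timestamp. A minimal sketch assuming roughly linear motion over the inter-frame interval and frames represented as flat lists of per-object scalars:

```python
def interpolate_frame(frame_a, t_a, frame_b, t_b, t_target):
    """Linearly interpolate per-object state between two frames from the
    first sensor, synthesizing a frame whose timestamp matches the
    second sensor's frame (t_a <= t_target <= t_b)."""
    alpha = (t_target - t_a) / (t_b - t_a)
    return [a + alpha * (b - a) for a, b in zip(frame_a, frame_b)]
```

With the interpolated frame time-aligned to the second sensor's frame, the two can be fused without the temporal offset biasing object positions.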
  • Patent number: 11740360
    Abstract: Among other things, techniques are described for identifying, in a light detection and ranging (LiDAR) scan line, a first LiDAR data point and a plurality of LiDAR data points within a vicinity of the first LiDAR data point. The techniques may further include identifying, based on a comparison of the first LiDAR data point to at least one LiDAR data point of the plurality of LiDAR data points, a coefficient of the first LiDAR data point, wherein the coefficient is related to image smoothness. The techniques may further include identifying, based on a comparison of the coefficient to a threshold, whether to include the first LiDAR data point in an updated LiDAR scan line, and then identifying, based on the updated LiDAR scan line, a location of the autonomous vehicle.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: August 29, 2023
    Assignee: Motional AD LLC
    Inventors: Ajay Charan, Yimu Wang
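The smoothness coefficient above can be sketched with a LOAM-style metric (an assumed formulation, not confirmed by the abstract): sum the range differences between a point and its neighbours on the scan line, normalise by the point's range, and drop points whose coefficient exceeds a threshold. The window size `k` and threshold are illustrative parameters:

```python
def smoothness_coefficient(ranges, i, k=2):
    """LOAM-style smoothness for point i of a scan line: the summed range
    difference to its k neighbours on each side, normalised by range."""
    neighbours = ranges[max(0, i - k):i] + ranges[i + 1:i + 1 + k]
    return abs(sum(r - ranges[i] for r in neighbours)) / (len(neighbours) * abs(ranges[i]))

def filter_scan_line(ranges, threshold=0.1, k=2):
    # Keep only points smooth enough (locally planar) to be stable
    # features for localisation.
    return [r for i, r in enumerate(ranges)
            if smoothness_coefficient(ranges, i, k) <= threshold]
```

Applied to a flat wall with a single spurious spike, the filter removes the spike and the points whose neighbourhoods it contaminates, leaving the clean planar returns.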
  • Patent number: 11715237
    Abstract: Provided are methods for deep learning-based camera calibration, which can include receiving first and second images captured by a camera, processing the first image using a first neural network to determine a depth of the first image, processing the first image and the second image using a second neural network to determine a transformation between a pose of the camera for the first image and a pose of the camera for the second image, generating a projection image based on the depth of the first image, the transformation of the pose of the camera, and intrinsic parameters of the camera, comparing the second image and the projection image to determine a reprojection error, and adjusting at least one of the intrinsic parameters of the camera based on the reprojection error. Systems and computer program products are also provided.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: August 1, 2023
    Assignee: Motional AD LLC
    Inventors: Yimu Wang, Wanzhi Zhang
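The final step above, adjusting intrinsics to reduce reprojection error, can be sketched without the neural networks: a tiny gradient descent that fits a single intrinsic (focal length of a 1-D pinhole model) to observed image coordinates. This is a stand-in under heavy assumptions (known depths, no pose transform, one parameter); the function names are illustrative:

```python
def project(x, z, f):
    # 1-D pinhole model: image coordinate of a point at lateral offset x
    # and depth z, for focal length f.
    return f * x / z

def calibrate_focal(points, observed, f=1.0, lr=0.1, steps=200):
    """Adjust the focal length by gradient descent on the mean squared
    reprojection error between projected and observed coordinates."""
    for _ in range(steps):
        grad = sum(2 * (project(x, z, f) - u) * (x / z)
                   for (x, z), u in zip(points, observed))
        f -= lr * grad / len(points)
    return f
```

In the patented pipeline the depth and pose come from the two neural networks rather than being given, but the adjustment loop plays the same role: the reprojection residual drives the intrinsic parameters toward consistency with the images.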
  • Patent number: 11681366
    Abstract: Images of an eye are captured by a camera. For each of the images, gaze data is obtained and a position of a pupil center is estimated in the image. The gaze data indicates a gaze point and/or gaze direction of the eye when the image was captured. A mapping is calibrated using the obtained gaze data and the estimated positions of the pupil center. The mapping maps positions of the pupil center in images captured by the camera to gaze points at a surface, or to gaze directions. A further image of the eye is captured by the camera. A position of the pupil center is estimated in the further image. Gaze tracking is performed using the calibrated mapping and the estimated position of the pupil center in the further image. These steps may for example be performed at an HMD.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: June 20, 2023
    Assignee: Tobii AB
    Inventors: Tiesheng Wang, Gilfredo Remon Salazar, Yimu Wang, Pravin Kumar Rana, Johannes Kron, Mark Ryan, Torbjörn Sundberg
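The calibration step above can be sketched in one dimension as a least-squares fit of a linear mapping from pupil-center position to gaze point; the real mapping would be 2-D and possibly higher-order, so this is a minimal stand-in with illustrative names:

```python
def fit_linear_mapping(pupil_xs, gaze_xs):
    """Least-squares fit of gaze = a * pupil + b from calibration samples
    (a 1-D stand-in for the pupil-center-to-gaze mapping)."""
    n = len(pupil_xs)
    mp = sum(pupil_xs) / n
    mg = sum(gaze_xs) / n
    var = sum((p - mp) ** 2 for p in pupil_xs)
    cov = sum((p - mp) * (g - mg) for p, g in zip(pupil_xs, gaze_xs))
    a = cov / var
    return a, mg - a * mp

def track(mapping, pupil_x):
    # Apply the calibrated mapping to a pupil center from a new image.
    a, b = mapping
    return a * pupil_x + b
```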
  • Patent number: 11593962
    Abstract: An eye tracking system configured to: receive a plurality of right-eye-images of a right eye of a user; receive a plurality of left-eye-images of a left eye of a user, each left-eye-image corresponding to a right-eye-image in the plurality of right-eye-images; detect a pupil and determine an associated pupil-signal, for each of the plurality of right-eye-images and each of the plurality of left-eye-images; calculate a right-eye-pupil-variation of the pupil-signals for the plurality of right-eye-images and a left-eye-pupil-variation of the pupil-signals for the plurality of left-eye-images; and determine a right-eye-weighting and a left-eye-weighting based on the right-eye-pupil-variation and the left-eye-pupil-variation.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: February 28, 2023
    Assignee: Tobii AB
    Inventors: Mikael Rosell, Simon Johansson, Pravin Kumar Rana, Yimu Wang, Gilfredo Remon Salazar
  • Publication number: 20230027369
    Abstract: Provided are methods for semantic annotation of sensor data using unreliable map annotation inputs, which can include training a machine learning model to accept inputs including images representing sensor data for a geographic area and unreliable semantic annotations for the geographic area. The machine learning model can be trained against validated semantic annotations for the geographic area, such that subsequent to training, additional images representing sensor data and additional unreliable semantic annotations can be passed through the trained model to provide predicted semantic annotations for the additional images. Systems and computer program products are also provided.
    Type: Application
    Filed: July 26, 2021
    Publication date: January 26, 2023
    Inventors: Paul Schmitt, Yimu Wang
  • Patent number: 11556006
    Abstract: Disclosed is a method for detecting a shadow in an image of an eye region of a user wearing a Head Mounted Device, HMD. The method comprises obtaining, from a camera of the HMD, an image of the eye region of the user wearing the HMD and determining an area of interest in the image, the area of interest comprising a plurality of subareas. The method further comprises determining a first brightness level for a first subarea of the plurality of subareas and determining a second brightness level for a second subarea of the plurality of subareas. The method further comprises comparing the first brightness level with the second brightness level, and, based on the comparing, selectively generating a signal indicating a shadow.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: January 17, 2023
    Assignee: Tobii AB
    Inventors: Yimu Wang, Ylva Björk, Joakim Zachrisson, Pravin Kumar Rana
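The brightness comparison above can be sketched directly: compute a mean brightness per subarea and signal a shadow when one subarea is much darker than the other. The ratio threshold is an assumed tuning parameter, not a value from the patent:

```python
def mean_brightness(subarea):
    # Mean pixel intensity of a subarea of the area of interest.
    return sum(subarea) / len(subarea)

def detect_shadow(subarea_a, subarea_b, ratio=0.5):
    """Signal a shadow when one subarea is markedly darker than the
    other (the ratio threshold is an illustrative assumption)."""
    ba, bb = mean_brightness(subarea_a), mean_brightness(subarea_b)
    return min(ba, bb) < ratio * max(ba, bb)
```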
  • Publication number: 20220390957
    Abstract: A sensor data fusion system for a vehicle with multiple sensors includes a first-sensor, a second-sensor, and a controller-circuit. The first-sensor is configured to output a first-frame of data and a subsequent-frame of data indicative of objects present in a first-field-of-view. The first-frame is characterized by a first-time-stamp, the subsequent-frame of data characterized by a subsequent-time-stamp different from the first-time-stamp. The second-sensor is configured to output a second-frame of data indicative of objects present in a second-field-of-view that overlaps the first-field-of-view. The second-frame is characterized by a second-time-stamp temporally located between the first-time-stamp and the subsequent-time-stamp. The controller-circuit is configured to synthesize an interpolated-frame from the first-frame and the subsequent-frame. The interpolated-frame is characterized by an interpolated-time-stamp that corresponds to the second-time-stamp.
    Type: Application
    Filed: August 15, 2022
    Publication date: December 8, 2022
    Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
  • Publication number: 20220375129
    Abstract: Provided are methods for deep learning-based camera calibration, which can include receiving first and second images captured by a camera, processing the first image using a first neural network to determine a depth of the first image, processing the first image and the second image using a second neural network to determine a transformation between a pose of the camera for the first image and a pose of the camera for the second image, generating a projection image based on the depth of the first image, the transformation of the pose of the camera, and intrinsic parameters of the camera, comparing the second image and the projection image to determine a reprojection error, and adjusting at least one of the intrinsic parameters of the camera based on the reprojection error. Systems and computer program products are also provided.
    Type: Application
    Filed: April 4, 2022
    Publication date: November 24, 2022
    Inventors: Yimu Wang, Wanzhi Zhang
  • Publication number: 20220326382
    Abstract: Methods, apparatus, and systems for adaptive point cloud filtering for an autonomous vehicle are disclosed. At least one processor receives multiple LiDAR points from a LiDAR system. The multiple LiDAR points represent at least one object in an environment traveled by the vehicle. The at least one processor determines a Euclidean distance of each LiDAR point. The at least one processor compares the Euclidean distance of each LiDAR point with a respective sampled Euclidean distance from a standard normal distribution of Euclidean distances. Responsive to the Euclidean distance of a LiDAR point being less than the respective sampled Euclidean distance, the at least one processor removes the LiDAR point from the multiple LiDAR points to generate a point cloud. The at least one processor operates the vehicle based on the point cloud.
    Type: Application
    Filed: April 9, 2021
    Publication date: October 13, 2022
    Inventors: Yimu Wang, Ning Xu
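The filtering rule above, dropping a point when its Euclidean distance falls below a sampled distance, can be sketched with the sampler injected as a callable (e.g. `lambda: random.gauss(mu, sigma)` for a normal model of the scene's distances; `mu` and `sigma` are assumed inputs). The effect is to probabilistically thin dense near-range returns while keeping far ones:

```python
import math

def adaptive_filter(points, sample_distance):
    """Drop a LiDAR point when its Euclidean distance is below a
    distance drawn from sample_distance(); pass a callable such as
    lambda: random.gauss(mu, sigma) for stochastic thinning."""
    kept = []
    for x, y, z in points:
        d = math.sqrt(x * x + y * y + z * z)
        if d >= sample_distance():
            kept.append((x, y, z))
    return kept
```

Injecting the sampler keeps the filter deterministic under test and lets the distance model be swapped without touching the loop.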
  • Patent number: 11435752
    Abstract: A sensor data fusion system for a vehicle with multiple sensors includes a first-sensor, a second-sensor, and a controller-circuit. The first-sensor is configured to output a first-frame of data and a subsequent-frame of data indicative of objects present in a first-field-of-view. The first-frame is characterized by a first-time-stamp, the subsequent-frame of data characterized by a subsequent-time-stamp different from the first-time-stamp. The second-sensor is configured to output a second-frame of data indicative of objects present in a second-field-of-view that overlaps the first-field-of-view. The second-frame is characterized by a second-time-stamp temporally located between the first-time-stamp and the subsequent-time-stamp. The controller-circuit is configured to synthesize an interpolated-frame from the first-frame and the subsequent-frame. The interpolated-frame is characterized by an interpolated-time-stamp that corresponds to the second-time-stamp.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: September 6, 2022
    Assignee: Motional AD LLC
    Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
  • Publication number: 20220207768
    Abstract: An eye tracking system configured to: receive a plurality of right-eye-images of a right eye of a user; receive a plurality of left-eye-images of a left eye of a user, each left-eye-image corresponding to a right-eye-image in the plurality of right-eye-images; detect a pupil and determine an associated pupil-signal, for each of the plurality of right-eye-images and each of the plurality of left-eye-images; calculate a right-eye-pupil-variation of the pupil-signals for the plurality of right-eye-images and a left-eye-pupil-variation of the pupil-signals for the plurality of left-eye-images; and determine a right-eye-weighting and a left-eye-weighting based on the right-eye-pupil-variation and the left-eye-pupil-variation.
    Type: Application
    Filed: December 29, 2020
    Publication date: June 30, 2022
    Applicant: Tobii AB
    Inventors: Mikael Rosell, Simon Johansson, Pravin Kumar Rana, Yimu Wang, Gilfredo Remon Salazar
  • Publication number: 20220178700
    Abstract: Among other things, techniques are described for identifying sensor data from a sensor of a first vehicle that includes information related to a pose of at least two other vehicles on a road. The technique further includes determining a geometry of a portion of the road based at least in part on the information about the pose of the at least two other vehicles. The technique further includes comparing the geometry of the portion of the road with map data to identify a match between the portion of the road and a portion of the map data. The technique further includes determining a pose of the first vehicle relative to the map data based at least in part on the match.
    Type: Application
    Filed: December 3, 2020
    Publication date: June 9, 2022
    Inventors: Yimu Wang, Ning Xu, Ajay Charan, Yih-Jye Hsu
  • Publication number: 20220137219
    Abstract: Among other things, techniques are described for identifying, in a light detection and ranging (LiDAR) scan line, a first LiDAR data point and a plurality of LiDAR data points within a vicinity of the first LiDAR data point. The techniques may further include identifying, based on a comparison of the first LiDAR data point to at least one LiDAR data point of the plurality of LiDAR data points, a coefficient of the first LiDAR data point, wherein the coefficient is related to image smoothness. The techniques may further include identifying, based on a comparison of the coefficient to a threshold, whether to include the first LiDAR data point in an updated LiDAR scan line, and then identifying, based on the updated LiDAR scan line, a location of the autonomous vehicle.
    Type: Application
    Filed: November 2, 2020
    Publication date: May 5, 2022
    Inventors: Ajay Charan, Yimu Wang
  • Publication number: 20220137704
    Abstract: Images of an eye are captured by a camera. For each of the images, gaze data is obtained and a position of a pupil center is estimated in the image. The gaze data indicates a gaze point and/or gaze direction of the eye when the image was captured. A mapping is calibrated using the obtained gaze data and the estimated positions of the pupil center. The mapping maps positions of the pupil center in images captured by the camera to gaze points at a surface, or to gaze directions. A further image of the eye is captured by the camera. A position of the pupil center is estimated in the further image. Gaze tracking is performed using the calibrated mapping and the estimated position of the pupil center in the further image. These steps may for example be performed at an HMD.
    Type: Application
    Filed: January 13, 2022
    Publication date: May 5, 2022
    Applicant: Tobii AB
    Inventors: Tiesheng Wang, Gilfredo Remon Salazar, Yimu Wang, Pravin Kumar Rana, Johannes Kron, Mark Ryan, Torbjörn Sundberg
  • Patent number: 11295477
    Abstract: Provided are methods for deep learning-based camera calibration, which can include receiving first and second images captured by a camera, processing the first image using a first neural network to determine a depth of the first image, processing the first image and the second image using a second neural network to determine a transformation between a pose of the camera for the first image and a pose of the camera for the second image, generating a projection image based on the depth of the first image, the transformation of the pose of the camera, and intrinsic parameters of the camera, comparing the second image and the projection image to determine a reprojection error, and adjusting at least one of the intrinsic parameters of the camera based on the reprojection error. Systems and computer program products are also provided.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: April 5, 2022
    Assignee: Motional AD LLC
    Inventors: Yimu Wang, Wanzhi Zhang