Patents by Inventor Yimu Wang
Yimu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11259010
Abstract: Images of an eye are captured at respective time instances by a camera of a head-mounted device. For each time instance, a position of a center of corneal curvature is estimated using an image captured at that time instance, a position of a pupil center is estimated using an image captured at that time instance, and a line is determined through the estimated corneal curvature center position and the estimated pupil center position. A first estimated position of a center of the eye is computed based on the lines determined for time instances in a first time period. A second estimated position of the center of the eye is computed based on the lines determined for time instances in a second time period. Relocation of the head-mounted device relative to a user's head is detected based on the first and second estimated positions of the eye center.
Type: Grant
Filed: October 31, 2019
Date of Patent: February 22, 2022
Assignee: Tobii AB
Inventors: Tiesheng Wang, Pravin Kumar Rana, Yimu Wang, Mark Ryan
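The eye-center estimate described above reduces to finding the point closest, in a least-squares sense, to a set of 3D lines (each through a corneal-curvature center with direction toward the pupil center). A minimal sketch of that geometric step, not taken from the patent itself (function name and solver choice are illustrative):

```python
import numpy as np

def closest_point_to_lines(points, directions):
    """Least-squares point minimizing squared distance to a set of 3D lines.

    Line i passes through points[i] with direction directions[i].
    Solves  sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) a_i,
    where (I - d d^T) projects onto the plane normal to d.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)          # unit direction
        M = np.eye(3) - np.outer(d, d)     # projector perpendicular to the line
        A += M
        b += M @ np.asarray(a, dtype=float)
    return np.linalg.solve(A, b)

# Two lines, one along the x-axis and one along the y-axis, meet at the origin:
p = closest_point_to_lines([[1, 0, 0], [0, 1, 0]],
                           [[1, 0, 0], [0, 1, 0]])
```

Running the same estimator over lines from two different time windows and comparing the two resulting points is what would reveal headset slippage.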
-
Patent number: 11249547
Abstract: Images of an eye are captured by a camera. For each of the images, gaze data is obtained and a position of a pupil center is estimated in the image. The gaze data indicates a gaze point and/or gaze direction of the eye when the image was captured. A mapping is calibrated using the obtained gaze data and the estimated positions of the pupil center. The mapping maps positions of the pupil center in images captured by the camera to gaze points at a surface, or to gaze directions. A further image of the eye is captured by the camera. A position of the pupil center is estimated in the further image. Gaze tracking is performed using the calibrated mapping and the estimated position of the pupil center in the further image. These steps may, for example, be performed at an HMD.
Type: Grant
Filed: October 31, 2019
Date of Patent: February 15, 2022
Assignee: Tobii AB
Inventors: Tiesheng Wang, Gilfredo Remon Salazar, Yimu Wang, Pravin Kumar Rana, Johannes Kron, Mark Ryan, Torbjörn Sundberg
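The abstract does not specify the form of the mapping; one common, simple choice for illustration is an affine map from pupil-center image coordinates to gaze points, fitted by least squares over the calibration samples. A hedged sketch (the affine model and function names are assumptions, not the patent's method):

```python
import numpy as np

def calibrate_mapping(pupil_xy, gaze_uv):
    """Fit an affine map [u, v] = [x, y, 1] @ C by least squares.

    pupil_xy: (N, 2) pupil-center positions in image coordinates.
    gaze_uv:  (N, 2) corresponding gaze points on a surface.
    Returns the (3, 2) coefficient matrix C.
    """
    pupil_xy = np.asarray(pupil_xy, dtype=float)
    X = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])  # homogeneous coords
    C, *_ = np.linalg.lstsq(X, np.asarray(gaze_uv, dtype=float), rcond=None)
    return C

def apply_mapping(C, pupil_xy):
    """Map one pupil-center position to a gaze point."""
    x, y = pupil_xy
    return np.array([x, y, 1.0]) @ C
```

After calibration, tracking a new frame is just `apply_mapping(C, estimated_pupil_center)`; real systems typically use richer (e.g. polynomial) models and outlier rejection.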
-
Patent number: 11243607
Abstract: A method of identifying scleral reflections in an eye tracking system is disclosed. A glint is identified in an image from an image sensor, wherein the glint is a representation in the image of a reflection of light from a cornea of the eye of the user or from a sclera of the eye of the user. A first pixel intensity of the glint is determined, a second pixel intensity of neighbor pixels of the glint is determined, and an absolute value of the difference between the first pixel intensity of the glint and the second pixel intensity of the neighbor pixels of the glint is determined. The glint is identified as a representation of a reflection from the sclera of the eye of the user on condition that the determined absolute value of the difference is below a predetermined threshold value.
Type: Grant
Filed: May 23, 2019
Date of Patent: February 8, 2022
Assignee: Tobii AB
Inventors: Pravin Kumar Rana, Yimu Wang
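The classification rule is a single intensity comparison: a corneal glint is a bright point against a dark iris/pupil, while a scleral glint sits on already-bright sclera, so the glint-to-neighborhood contrast is small. A minimal sketch of that test (the neighborhood definition and threshold are illustrative assumptions):

```python
import numpy as np

def is_scleral_glint(image, glint_px, threshold=30.0, ring=2):
    """Classify a glint as scleral if |I_glint - mean(I_neighbors)| < threshold.

    image:    2-D grayscale array.
    glint_px: (row, col) of the glint pixel.
    ring:     half-width of the square neighborhood around the glint.
    """
    r, c = glint_px
    glint_intensity = float(image[r, c])
    patch = image[max(r - ring, 0):r + ring + 1,
                  max(c - ring, 0):c + ring + 1].astype(float)
    # Mean over neighbor pixels, excluding the glint pixel itself.
    neighbor_mean = (patch.sum() - glint_intensity) / (patch.size - 1)
    return abs(glint_intensity - neighbor_mean) < threshold

# A glint on bright sclera (small contrast) vs. on a dark iris (large contrast):
sclera = np.full((5, 5), 200.0); sclera[2, 2] = 210.0
iris = np.full((5, 5), 50.0); iris[2, 2] = 250.0
```

Identifying and discarding scleral glints matters because corneal glints are the ones used for gaze geometry.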
-
Patent number: 11232594
Abstract: The invention is related to a method for calibrating an eye tracking device within a head-mounted display (HMD), comprising the steps of: acquiring, with the HMD via an image sensor, at least one optical target image from an optical target, wherein the optical target contains image points in a pattern; indexing image points within the optical target image, wherein the image points are indexed by selecting a rigid region of the optical target image, assigning indices to image points within the rigid region, fitting a polynomial approximation function to at least one column and one row of the image points of the region, predicting the location of at least one image point using the fitted polynomial approximation function, and assigning the predicted image point an index; inputting indexed image points into an optimization algorithm that calculates a hardware calibration of the HMD; and writing hardware calibration values calculated from the optimization algorithm to the HMD unit.
Type: Grant
Filed: October 23, 2020
Date of Patent: January 25, 2022
Assignee: Tobii AB
Inventors: Pravin Kumar Rana, Macarena Garcia Romero, Yimu Wang, Rickard Lundahl
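The prediction step above amounts to fitting a low-degree polynomial to the already-indexed point coordinates along one row or column of the target, then evaluating it at the next index to predict where the unindexed point should appear. A minimal sketch of that one step (degree and function name are assumptions):

```python
import numpy as np

def predict_missing_point(known_indices, known_coords, missing_index, degree=2):
    """Fit a polynomial coord = f(index) along one row/column of the optical
    target and predict the coordinate of a point at an unindexed position."""
    coeffs = np.polyfit(known_indices, known_coords, degree)
    return np.polyval(coeffs, missing_index)

# Points along a (distorted) row whose coordinate grows quadratically:
idx = np.arange(5.0)
coords = idx ** 2            # 0, 1, 4, 9, 16
```

A point detected near the predicted location can then be assigned the predicted index before all indexed points are fed to the optimizer.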
-
Patent number: 10852531
Abstract: A method for determining eye openness with an eye tracking device is disclosed. The method may include determining, for pixels of an image sensor of an eye tracking device, during a first time period when an eye of a user is open, a first sum of intensity of the pixels. The method may also include determining, during a second time period when the eye of the user is closed, a second sum of intensity of the pixels. The method may further include determining, during a third time period, a third sum of intensity of the pixels. The method may additionally include determining, upon the third sum exceeding a fourth sum of the first sum plus a threshold amount, that the eye of the user is closed, where the threshold amount is equal to a product of a threshold fraction and a difference between the first sum and the second sum.
Type: Grant
Filed: June 24, 2019
Date of Patent: December 1, 2020
Assignee: Tobii AB
Inventors: Mark Ryan, Torbjörn Sundberg, Pravin Rana, Yimu Wang
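The decision rule is a one-line comparison: given reference pixel-intensity sums captured with the eye open and closed, a new sum indicates a closed eye when it exceeds the open-eye reference by more than a fraction of the open/closed gap. A minimal sketch (the absolute value on the gap and the default fraction are illustrative assumptions; the abstract only says "a difference"):

```python
def eye_is_closed(current_sum, open_sum, closed_sum, threshold_fraction=0.5):
    """Return True when current_sum exceeds open_sum by more than
    threshold_fraction times the gap between the open and closed references."""
    threshold = threshold_fraction * abs(open_sum - closed_sum)
    return current_sum > open_sum + threshold

# With open_sum=1000 and closed_sum=2000, the threshold is 500, so sums
# above 1500 are classified as a closed eye.
```

Calibrating the two reference sums per user makes the fixed fraction robust to differences in skin and eyelid reflectance.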
-
Patent number: 10848718
Abstract: A vehicle perception sensor adjustment system includes a perception-sensor, a digital-map, and a controller-circuit. The perception-sensor is configured to detect an object proximate to a host-vehicle. The perception-sensor is characterized as having a field-of-view that is adjustable. The digital-map indicates a contour of a roadway traveled by the host-vehicle. The controller-circuit is in communication with the perception-sensor and the digital-map. The controller-circuit determines the field-of-view of the perception-sensor in accordance with the contour of the roadway indicated by the digital-map, and outputs a control-signal to the perception-sensor that adjusts the field-of-view of the perception-sensor.
Type: Grant
Filed: March 14, 2018
Date of Patent: November 24, 2020
Assignee: Aptiv Technologies Limited
Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
-
Publication number: 20200192473
Abstract: Images of an eye are captured by a camera. For each of the images, gaze data is obtained and a position of a pupil center is estimated in the image. The gaze data indicates a gaze point and/or gaze direction of the eye when the image was captured. A mapping is calibrated using the obtained gaze data and the estimated positions of the pupil center. The mapping maps positions of the pupil center in images captured by the camera to gaze points at a surface, or to gaze directions. A further image of the eye is captured by the camera. A position of the pupil center is estimated in the further image. Gaze tracking is performed using the calibrated mapping and the estimated position of the pupil center in the further image. These steps may, for example, be performed at an HMD.
Type: Application
Filed: October 31, 2019
Publication date: June 18, 2020
Applicant: Tobii AB
Inventors: Tiesheng Wang, Gilfredo Remon Salazar, Yimu Wang, Pravin Kumar Rana, Johannes Kron, Mark Ryan, Torbjörn Sundberg
-
Publication number: 20200195915
Abstract: Images of an eye are captured at respective time instances by a camera of a head-mounted device. For each time instance, a position of a center of corneal curvature is estimated using an image captured at that time instance, a position of a pupil center is estimated using an image captured at that time instance, and a line is determined through the estimated corneal curvature center position and the estimated pupil center position. A first estimated position of a center of the eye is computed based on the lines determined for time instances in a first time period. A second estimated position of the center of the eye is computed based on the lines determined for time instances in a second time period. Relocation of the head-mounted device relative to a user's head is detected based on the first and second estimated positions of the eye center.
Type: Application
Filed: October 31, 2019
Publication date: June 18, 2020
Applicant: Tobii AB
Inventors: Tiesheng Wang, Pravin Kumar Rana, Yimu Wang, Mark Ryan
-
Publication number: 20200133003
Abstract: Disclosed is a method for detecting a shadow in an image of an eye region of a user wearing a Head Mounted Device, HMD. The method comprises obtaining, from a camera of the HMD, an image of the eye region of the user wearing an HMD and determining an area of interest in the image, the area of interest comprising a plurality of subareas. The method further comprises determining a first brightness level for a first subarea of the plurality of subareas and determining a second brightness level for a second subarea of the plurality of subareas. The method further comprises comparing the first brightness level with the second brightness level, and, based on the comparing, selectively generating a signal indicating a shadow.Type: Application
Filed: October 28, 2019
Publication date: April 30, 2020
Applicant: Tobii AB
Inventors: Yimu Wang, Ylva Björk, Joakim Zachrisson, Pravin Kumar Rana
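The core of the method is comparing mean brightness across subareas of the region of interest and signaling a shadow when one subarea is markedly darker than another. A minimal sketch under stated assumptions (the 2x2 grid and ratio rule are illustrative; the publication only requires comparing two subarea brightness levels):

```python
import numpy as np

def detect_shadow(roi, grid=(2, 2), ratio=0.6):
    """Signal a shadow when the darkest subarea's mean brightness falls
    below `ratio` times the brightest subarea's mean brightness.

    roi: 2-D grayscale array covering the area of interest.
    """
    bands = np.array_split(roi.astype(float), grid[0], axis=0)
    means = [block.mean()
             for band in bands
             for block in np.array_split(band, grid[1], axis=1)]
    return min(means) < ratio * max(means)

# Uniformly lit eye region vs. one with a shadowed quadrant:
lit = np.full((4, 4), 100.0)
shadowed = lit.copy(); shadowed[0:2, 0:2] = 10.0
```

In an HMD, such a signal could prompt re-illumination or flag frames whose gaze estimates may be unreliable.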
-
Publication number: 20200026068
Abstract: A method for determining eye openness with an eye tracking device is disclosed. The method may include determining, for pixels of an image sensor of an eye tracking device, during a first time period when an eye of a user is open, a first sum of intensity of the pixels. The method may also include determining, during a second time period when the eye of the user is closed, a second sum of intensity of the pixels. The method may further include determining, during a third time period, a third sum of intensity of the pixels. The method may additionally include determining, upon the third sum exceeding a fourth sum of the first sum plus a threshold amount, that the eye of the user is closed, where the threshold amount is equal to a product of a threshold fraction and a difference between the first sum and the second sum.
Type: Application
Filed: June 24, 2019
Publication date: January 23, 2020
Applicant: Tobii AB
Inventors: Mark Ryan, Torbjörn Sundberg, Pravin Rana, Yimu Wang
-
Publication number: 20190384276
Abstract: A system includes a vehicle and a drone. The vehicle includes a sensor system, a navigation system, a map, and a receiver. The sensor system provides vehicle location data to the navigation system that locates the vehicle on a roadway represented by the map. The sensor system includes one or more of a computer vision system, a radar system, and a LIDAR system. The drone includes a transmitter and at least one position-tracking device configured to determine drone location data. Drone use is initiated from the vehicle in accordance with a determination by the processor that the vehicle location data provided by the sensor system is insufficient for the navigation system to navigate the vehicle. The transmitter is configured to transmit the drone location data to the receiver.
Type: Application
Filed: June 13, 2018
Publication date: December 19, 2019
Inventor: Yimu Wang
-
Publication number: 20190369720
Abstract: A method of identifying scleral reflections in an eye tracking system and a corresponding eye tracking system are disclosed. An image of an eye of a user from an image sensor is received, the image resulting from the image sensor detecting light from one or more illuminators reflected from the eye of the user. A glint is identified in the image, wherein the glint is a representation in the image of a reflection of light from a cornea of the eye of the user or from a sclera of the eye of the user. A first pixel intensity of the glint is determined, a second pixel intensity of neighbor pixels of the glint is determined, and an absolute value of the difference between the first pixel intensity of the glint and the second pixel intensity of the neighbor pixels of the glint is determined. The glint is identified as a representation of a reflection from the sclera of the eye of the user on condition that the determined absolute value of the difference is below a predetermined threshold value.
Type: Application
Filed: May 23, 2019
Publication date: December 5, 2019
Applicant: Tobii AB
Inventors: Pravin Kumar Rana, Yimu Wang
-
Publication number: 20190294176
Abstract: A sensor data fusion system for a vehicle with multiple sensors includes a first-sensor, a second-sensor, and a controller-circuit. The first-sensor is configured to output a first-frame of data and a subsequent-frame of data indicative of objects present in a first-field-of-view. The first-frame is characterized by a first-time-stamp, and the subsequent-frame of data is characterized by a subsequent-time-stamp different from the first-time-stamp. The second-sensor is configured to output a second-frame of data indicative of objects present in a second-field-of-view that overlaps the first-field-of-view. The second-frame is characterized by a second-time-stamp temporally located between the first-time-stamp and the subsequent-time-stamp. The controller-circuit is configured to synthesize an interpolated-frame from the first-frame and the subsequent-frame. The interpolated-frame is characterized by an interpolated-time-stamp that corresponds to the second-time-stamp.
Type: Application
Filed: March 26, 2018
Publication date: September 26, 2019
Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
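Synthesizing the interpolated-frame can be as simple as linearly interpolating each tracked object's state between the first sensor's two frames at the second sensor's timestamp, so both sensors' data refer to the same instant. A hedged sketch (linear interpolation over per-object state tuples is an assumption; the publication does not fix the interpolation scheme):

```python
def interpolate_frame(first_frame, first_ts, subsequent_frame, subsequent_ts, target_ts):
    """Linearly interpolate per-object state between two frames of one sensor
    so it can be fused with another sensor's frame captured at target_ts.

    Frames map object id -> tuple of state values (e.g. x, y, velocity).
    Only objects present in both frames are interpolated.
    """
    alpha = (target_ts - first_ts) / (subsequent_ts - first_ts)
    return {
        obj_id: tuple((1.0 - alpha) * a + alpha * b
                      for a, b in zip(first_frame[obj_id], subsequent_frame[obj_id]))
        for obj_id in first_frame.keys() & subsequent_frame.keys()
    }

# An object moving from (0, 0) at t=0 to (10, 2) at t=1, interpolated to t=0.5:
f0 = {"car-1": (0.0, 0.0)}
f1 = {"car-1": (10.0, 2.0)}
```

Aligning timestamps this way avoids the association errors that arise when fusing detections taken tens of milliseconds apart on a fast-moving scene.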
-
Publication number: 20190281260
Abstract: A vehicle perception sensor adjustment system includes a perception-sensor, a digital-map, and a controller-circuit. The perception-sensor is configured to detect an object proximate to a host-vehicle. The perception-sensor is characterized as having a field-of-view that is adjustable. The digital-map indicates a contour of a roadway traveled by the host-vehicle. The controller-circuit is in communication with the perception-sensor and the digital-map. The controller-circuit determines the field-of-view of the perception-sensor in accordance with the contour of the roadway indicated by the digital-map, and outputs a control-signal to the perception-sensor that adjusts the field-of-view of the perception-sensor.
Type: Application
Filed: March 14, 2018
Publication date: September 12, 2019
Inventors: Guchan Ozbilgin, Wenda Xu, Jarrod M. Snider, Yimu Wang, Yifan Yang, Junqing Wei
-
Patent number: 10394019
Abstract: A method for determining eye openness with an eye tracking device is disclosed. The method may include determining, for pixels of an image sensor of an eye tracking device, during a first time period when an eye of a user is open, a first sum of intensity of the pixels. The method may also include determining, during a second time period when the eye of the user is closed, a second sum of intensity of the pixels. The method may further include determining, during a third time period, a third sum of intensity of the pixels. The method may additionally include determining, upon the third sum exceeding a fourth sum of the first sum plus a threshold amount, that the eye of the user is closed, where the threshold amount is equal to a product of a threshold fraction and a difference between the first sum and the second sum.
Type: Grant
Filed: February 9, 2018
Date of Patent: August 27, 2019
Assignee: Tobii AB
Inventors: Mark Ryan, Torbjörn Sundberg, Pravin Rana, Yimu Wang
-
Patent number: 10324529
Abstract: An image of an eye of a user from an image sensor can be received, the image resulting from the image sensor detecting light from one or more illuminators reflected from the eye of the user. A glint is identified in the image as a representation in the image of a reflection of light from a cornea of the eye of the user or from a sclera of the eye of the user. A first pixel intensity of the glint is determined, a second pixel intensity of neighbor pixels of the glint is determined, and an absolute value of the difference between the first pixel intensity and the second pixel intensity is determined. The glint is identified as a representation of a reflection from the sclera of the eye of the user on condition that the determined absolute value of the difference is below a predetermined threshold value.
Type: Grant
Filed: May 31, 2018
Date of Patent: June 18, 2019
Assignee: Tobii AB
Inventors: Pravin Kumar Rana, Yimu Wang
-
Patent number: 10310087
Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
Type: Grant
Filed: May 31, 2017
Date of Patent: June 4, 2019
Assignee: Uber Technologies, Inc.
Inventors: Ankit Laddha, J. Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez
-
Patent number: 10288879
Abstract: An image of an eye of a user can be received from an image sensor, the image resulting from the image sensor detecting light from one or more illuminators reflected from the eye of the user and reflected from optic arrangements located between the one or more illuminators and the eye of the user, wherein the optic arrangements comprise features having a known pattern. Glints are identified in the image, wherein a glint is a representation in the image of a reflection of light from a cornea of the eye of the user and/or from a feature of the optic arrangements. Relative positions in the image of the identified glints are determined, and on condition that the relative positions conform with the known pattern, at least one glint of the identified glints is identified as a representation in the image of a reflection from a feature of the optic arrangements.
Type: Grant
Filed: May 31, 2018
Date of Patent: May 14, 2019
Assignee: Tobii AB
Inventors: Pravin Kumar Rana, Yimu Wang
-
Publication number: 20190025433
Abstract: A tracking system for at least partial automated operation of a host vehicle is configured to detect and monitor a moving object that may be at least momentarily, and at least partially, obstructed by an obstruction. The tracking system includes an object device and a controller. The object device is configured to detect the object with respect to the obstruction by monitoring for the object and the obstruction at a prescribed frequency, and output a plurality of object signals at the prescribed frequency. The controller is configured to receive and process the plurality of object signals to recognize the object, determine a reference point of the object, and utilize the reference point to determine a true speed of the object as the object is increasingly or decreasingly obstructed by the obstruction.
Type: Application
Filed: July 19, 2017
Publication date: January 24, 2019
Inventors: Yifan Yang, Yimu Wang, Guchan Ozbilgin, Wenda Xu
-
Publication number: 20180348374
Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Ankit Laddha, James Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez