Patents by Inventor Davide Onofrio

Davide Onofrio has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230267701
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Application
    Filed: May 1, 2023
    Publication date: August 24, 2023
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
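The abstract above describes a pipeline: a machine learning model produces a per-pixel segmentation mask of lane markings, the mask is analyzed per marking type, and a curve is fitted to each type to yield lane boundaries. A minimal sketch of the mask-to-boundary step, assuming the mask is already computed; the class ids and the choice of a polynomial fit (via `np.polyfit`) are illustrative assumptions, not the patented method:

```python
import numpy as np

def lane_boundaries_from_mask(mask, marking_classes, degree=2):
    """For each lane-marking class present in the segmentation mask,
    fit a polynomial x = f(y) through that class's pixels."""
    boundaries = {}
    for cls in marking_classes:
        ys, xs = np.nonzero(mask == cls)   # pixels labelled with this marking type
        if len(xs) <= degree:              # too few points to fit a curve
            continue
        boundaries[cls] = np.polyfit(ys, xs, degree)  # fitted curve coefficients
    return boundaries

# Toy mask: class-1 marking pixels along a vertical line at x = 5
mask = np.zeros((10, 10), dtype=int)
mask[:, 5] = 1
coeffs = lane_boundaries_from_mask(mask, marking_classes=[1])
```

In a real system the fitted coefficients, not the raw mask, would be what is "sent to a component of the vehicle", since a few curve parameters per boundary are far cheaper to transmit and consume than per-pixel labels.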
  • Patent number: 11676364
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: June 13, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Publication number: 20210224556
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Application
    Filed: April 5, 2021
    Publication date: July 22, 2021
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Patent number: 10997433
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: May 4, 2021
    Assignee: NVIDIA Corporation
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Patent number: 10437347
    Abstract: The technology disclosed relates to tracking motion of a wearable sensor system using a combination of RGB (red, green, and blue) and IR (infrared) pixels of one or more cameras. In particular, it relates to capturing gross features and feature values of a real world space using RGB pixels and capturing fine features and feature values of the real world space using IR pixels. It also relates to enabling multi-user collaboration and interaction in an immersive virtual environment. It also relates to capturing different sceneries of a shared real world space from the perspective of multiple users. It further relates to sharing content between wearable sensor systems. It further relates to capturing images and video streams from the perspective of a first user of a wearable sensor system and sending an augmented version of the captured images and video stream to a second user of the wearable sensor system.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: October 8, 2019
    Assignee: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Matias Perez, Davide Onofrio
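The abstract above splits scene capture between sensor types: coarse ("gross") features from RGB pixels and fine detail from IR pixels. A hypothetical sketch of that split, assuming separate RGB and IR arrays; block averaging for the coarse pass and an IR gradient magnitude for the fine pass are stand-ins chosen for illustration, not the claimed technique:

```python
import numpy as np

def capture_features(rgb, ir, block=4):
    """Gross features: block-averaged RGB intensities (coarse scene structure).
    Fine features: IR intensity gradients (fine detail)."""
    h, w, _ = rgb.shape
    # Gross pass: average each (block x block) patch of the RGB image
    gross = rgb[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block, 3).mean(axis=(1, 3))
    # Fine pass: gradient magnitude of the IR image
    gy, gx = np.gradient(ir.astype(float))
    fine = np.hypot(gx, gy)
    return gross, fine

rgb = np.ones((8, 8, 3))
ir = np.zeros((8, 8)); ir[:, 4:] = 1.0   # a sharp vertical edge visible in IR
gross, fine = capture_features(rgb, ir)
```

The design point the abstract hints at is complementarity: RGB gives broad scene layout cheaply, while IR preserves edges and fine structure that survive poor visible-light conditions.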
  • Publication number: 20190266418
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Application
    Filed: February 26, 2019
    Publication date: August 29, 2019
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Publication number: 20190056791
    Abstract: The technology disclosed relates to tracking motion of a wearable sensor system using a combination of RGB (red, green, and blue) and IR (infrared) pixels of one or more cameras. In particular, it relates to capturing gross features and feature values of a real world space using RGB pixels and capturing fine features and feature values of the real world space using IR pixels. It also relates to enabling multi-user collaboration and interaction in an immersive virtual environment. It also relates to capturing different sceneries of a shared real world space from the perspective of multiple users. It further relates to sharing content between wearable sensor systems. It further relates to capturing images and video streams from the perspective of a first user of a wearable sensor system and sending an augmented version of the captured images and video stream to a second user of the wearable sensor system.
    Type: Application
    Filed: June 22, 2018
    Publication date: February 21, 2019
    Applicant: Leap Motion, Inc.
    Inventors: David S. HOLZ, Matias PEREZ, Davide ONOFRIO
  • Patent number: 10007350
    Abstract: A technology for tracking motion of a wearable sensor system uses a combination of RGB (red, green, and blue) and IR (infrared) pixels of one or more cameras. In particular, it relates to capturing gross features and feature values of a real world space using RGB pixels and capturing fine features and feature values of the real world space using IR pixels. It also relates to enabling multi-user collaboration and interaction in an immersive virtual environment. It also relates to capturing different sceneries of a shared real world space from the perspective of multiple users. It further relates to sharing content between wearable sensor systems. It further relates to capturing images and video streams from the perspective of a first user of a wearable sensor system and sending an augmented version of the captured images and video stream to a second user of the wearable sensor system.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: June 26, 2018
    Assignee: LEAP MOTION, INC.
    Inventors: David S. Holz, Matias Perez, Davide Onofrio
  • Patent number: 8027532
    Abstract: A method for recognition between a first and second object represented by at least one first image and at least one second image, includes defining rectangular assemblies of random pixels; filtering the first image with first n filters obtained from the assemblies of pixels to obtain n first filtered matrices; classifying the n first filtered matrices by providing a first center and a first radius within a space of N dimensions; filtering the second image with the first n filters to obtain n second filtered matrices; classifying the n second filtered matrices by providing a second center within the space of N dimensions; and comparing the first center and first radius with the second center.
    Type: Grant
    Filed: March 17, 2006
    Date of Patent: September 27, 2011
    Assignee: Kee Square S.r.l.
    Inventors: Marco Marcon, Davide Onofrio
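The abstract above describes recognition via n random rectangular filters: each image becomes a point in an N-dimensional response space, a reference object is summarized by a centre and a radius, and a probe matches if its point falls within that radius. A toy sketch of the centre/radius test; the `descriptor`, `enroll`, and `matches` helpers are hypothetical names, and the single inner-product response per filter is a crude stand-in for the full filtering described in the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_filters(n, size=5):
    """n rectangular assemblies of random pixels, used as filters."""
    return [rng.standard_normal((size, size)) for _ in range(n)]

def descriptor(image, filters):
    """One scalar response per filter, giving a point in N-dimensional space."""
    return np.array([(image * f).mean() for f in filters])

def enroll(images, filters):
    """Centre = mean descriptor of the enrollment images;
    radius = largest distance from an enrollment descriptor to that centre."""
    pts = np.stack([descriptor(im, filters) for im in images])
    centre = pts.mean(axis=0)
    radius = np.linalg.norm(pts - centre, axis=1).max()
    return centre, radius

def matches(image, centre, radius, filters):
    """Recognition succeeds if the probe lies within the enrolled radius."""
    return np.linalg.norm(descriptor(image, filters) - centre) <= radius

filters = random_filters(8)
face = rng.random((5, 5))
noisy_views = [face + 0.01 * rng.standard_normal((5, 5)) for _ in range(3)]
centre, radius = enroll([face] + noisy_views, filters)
```

The appeal of this scheme is that the expensive part (filtering and enrollment) happens once per reference object, while each recognition attempt reduces to a single distance comparison in N dimensions.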
  • Publication number: 20090136095
    Abstract: A recognition method for recognition between a first and second object represented by at least one first image and at least one second image, includes: defining a plurality of rectangular assemblies of random pixels; filtering the first image with first n filters obtained from the assemblies of pixels to obtain n first filtered matrices; classifying the n first filtered matrices by providing a first centre and a first radius within a space of N dimensions; filtering the second image with the first n filters to obtain n second filtered matrices; classifying the n second filtered matrices by providing a second centre within the space of N dimensions; and comparing the first centre and first radius with the second centre; whereby recognition between the first object and the second object occurs if at least one second centre lies at a distance from the first centre which is less than or equal to at least one first radius.
    Type: Application
    Filed: March 17, 2006
    Publication date: May 28, 2009
    Applicant: Celin Technology Innovation S.R.L.
    Inventors: Marco Marcon, Davide Onofrio