Patents by Inventor John Zedlewski

John Zedlewski has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250111216
    Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
    Type: Application
    Filed: December 13, 2024
    Publication date: April 3, 2025
    Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
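The abstract above describes encoding virtual sensor data into the same format as physical sensor data so that DNNs trained on real-world data can run unchanged in simulation. A minimal sketch of that encoding idea follows; the `SensorFrame` fields and function name are hypothetical illustrations, not an API from the patent.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SensorFrame:
    """Hypothetical container mirroring a physical camera's output format."""
    pixels: np.ndarray   # H x W x 3, uint8, RGB
    timestamp_us: int    # microseconds


def encode_virtual_frame(render: np.ndarray, sim_time_s: float) -> SensorFrame:
    """Convert a simulator render (float RGB in [0, 1]) into the format a
    real camera produces, so a DNN trained on physical data can consume it."""
    pixels = np.clip(render * 255.0, 0, 255).astype(np.uint8)
    return SensorFrame(pixels=pixels, timestamp_us=int(sim_time_s * 1e6))


frame = encode_virtual_frame(np.random.rand(4, 4, 3), sim_time_s=1.5)
```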
  • Patent number: 12266148
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Grant
    Filed: May 1, 2023
    Date of Patent: April 1, 2025
    Assignee: NVIDIA Corporation
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
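The abstract above outlines a pipeline: a segmentation mask labels lane-marking pixels by type, then curve fitting over each type's pixels yields lane boundaries. A sketch of the curve-fitting stage, assuming integer class IDs in the mask (0 = background) and a polynomial fit x = f(y); the details are illustrative, not the patented method.

```python
import numpy as np


def fit_lane_boundaries(mask: np.ndarray, num_classes: int, degree: int = 2):
    """Fit one polynomial x = f(y) per lane-marking class in a segmentation
    mask and return the coefficients keyed by class ID."""
    boundaries = {}
    for cls in range(1, num_classes + 1):
        ys, xs = np.nonzero(mask == cls)   # pixels labeled with this type
        if len(xs) <= degree:
            continue                        # too few points to fit a curve
        boundaries[cls] = np.polyfit(ys, xs, degree)
    return boundaries


# Usage: a synthetic mask containing one diagonal lane marking (class 1).
mask = np.zeros((10, 10), dtype=int)
for y in range(10):
    mask[y, y] = 1
curves = fit_lane_boundaries(mask, num_classes=1)
```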
  • Patent number: 12182694
    Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: December 31, 2024
    Assignee: NVIDIA Corporation
    Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
  • Publication number: 20230267701
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Application
    Filed: May 1, 2023
    Publication date: August 24, 2023
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Patent number: 11676364
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: June 13, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Publication number: 20230004801
    Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
    Type: Application
    Filed: August 30, 2022
    Publication date: January 5, 2023
    Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
  • Patent number: 11436484
    Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: September 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
  • Publication number: 20210224556
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Application
    Filed: April 5, 2021
    Publication date: July 22, 2021
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Patent number: 10997433
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: May 4, 2021
    Assignee: NVIDIA Corporation
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Patent number: 10984286
    Abstract: A style transfer neural network may be used to generate stylized synthetic images, where real images provide the style (e.g., seasons, weather, lighting) for transfer to synthetic images. The stylized synthetic images may then be used to train a recognition neural network. In turn, the trained neural network may be used to predict semantic labels for the real images, providing recognition data for the real images. Finally, the real training dataset (real images and predicted recognition data) and the synthetic training dataset are used by the style transfer neural network to generate stylized synthetic images. The training of the neural network, prediction of recognition data for the real images, and stylizing of the synthetic images may be repeated for a number of iterations. The stylization operation more closely aligns a covariate of the synthetic images to the covariate of the real images, improving accuracy of the recognition neural network.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: April 20, 2021
    Assignee: NVIDIA Corporation
    Inventors: Aysegul Dundar, Ming-Yu Liu, Ting-Chun Wang, John Zedlewski, Jan Kautz
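The abstract above describes an alternating loop: stylize synthetic images using real images, train a recognition network on the stylized data, predict labels for the real images, and repeat. The skeleton below sketches that control flow only; `style_transfer`, `train`, and `predict` are hypothetical callables standing in for the neural networks.

```python
def iterative_style_training(real_images, synthetic, synthetic_labels,
                             style_transfer, train, predict, iterations=3):
    """Alternate stylization, training, and pseudo-labeling for a fixed
    number of iterations, as outlined in the abstract."""
    real_labels = None
    model = None
    for _ in range(iterations):
        # 1. Stylize synthetic images using the real images as style source
        #    (after the first pass, predicted real labels are also available).
        stylized = style_transfer(synthetic, real_images, real_labels)
        # 2. Train the recognition network on the stylized synthetic dataset.
        model = train(stylized, synthetic_labels)
        # 3. Predict semantic labels for the real images for the next pass.
        real_labels = predict(model, real_images)
    return model


# Usage with trivial stand-ins, just to exercise the loop:
model = iterative_style_training(
    real_images=[1, 2],
    synthetic=[3, 4, 5],
    synthetic_labels=[0, 1, 1],
    style_transfer=lambda syn, real, labels: syn,
    train=lambda data, labels: {"trained_on": len(data)},
    predict=lambda m, imgs: [0] * len(imgs),
    iterations=2,
)
```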
  • Patent number: 10896753
    Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: January 19, 2021
    Assignee: Enlitic, Inc.
    Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
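The abstract above describes scoring a chest CT scan slice by slice and producing a report with an assessment score. A minimal sketch of that aggregation step; the report fields, the 0.5 cutoff, and the per-slice classifier are assumptions for illustration, not the trained model from the patent.

```python
from dataclasses import dataclass


@dataclass
class LungScreeningReport:
    nodule_found: bool
    assessment_score: float  # e.g. a likelihood in [0, 1]


def assess_ct_scan(slices, classify_slice) -> LungScreeningReport:
    """Run a (hypothetical) computer-vision classifier over each
    cross-sectional image and report the highest nodule score found."""
    scores = [classify_slice(s) for s in slices]
    best = max(scores, default=0.0)
    return LungScreeningReport(nodule_found=best > 0.5, assessment_score=best)


# Usage: slices are stand-in values and the classifier is the identity.
report = assess_ct_scan([0.1, 0.9, 0.3], classify_slice=lambda s: s)
```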
  • Publication number: 20200111561
    Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
    Type: Application
    Filed: December 10, 2019
    Publication date: April 9, 2020
    Applicant: Enlitic, Inc.
    Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
  • Publication number: 20200082269
    Abstract: One embodiment of a method includes performing one or more activation functions in a neural network using weights that have been quantized from floating point values to values that are represented using fewer bits than the floating point values. The method further includes performing a first quantization of the weights from the floating point values to the values that are represented using fewer bits than the floating point values after the floating point values are updated using a first number of forward-backward passes of the neural network using training data. The method further includes performing a second quantization of the weights from the floating point values to the values that are represented using fewer bits than the floating point values after the floating point values are updated using a second number of forward-backward passes of the neural network following the first quantization of the weights.
    Type: Application
    Filed: April 2, 2019
    Publication date: March 12, 2020
    Inventors: Shuang Gao, Hao Wu, John Zedlewski
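The abstract above describes training with full-precision weight updates punctuated by repeated quantization to a lower-bit representation. A sketch of that scheme using symmetric uniform quantization; `grad_fn`, the learning rate, and the bit width are illustrative assumptions.

```python
import numpy as np


def quantize(weights: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniformly quantize float weights to a (2**bits - 1)-level grid and
    return their dequantized float values."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return weights.copy()
    return np.round(weights / scale) * scale


def train_with_periodic_quantization(weights, grad_fn, steps_per_quant,
                                     rounds, lr=0.1):
    """Run a number of forward-backward passes on full-precision weights,
    re-quantize, and repeat, as outlined in the abstract. grad_fn is a
    hypothetical stand-in for backpropagation."""
    for _ in range(rounds):
        for _ in range(steps_per_quant):
            weights = weights - lr * grad_fn(weights)  # full-precision update
        weights = quantize(weights)                    # periodic re-quantization
    return weights


# Usage: minimize the toy loss ||w||^2, whose gradient is 2w.
final = train_with_periodic_quantization(
    np.array([1.0, -2.0]), grad_fn=lambda w: 2 * w,
    steps_per_quant=5, rounds=3)
```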
  • Patent number: 10553311
    Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: February 4, 2020
    Assignee: Enlitic, Inc.
    Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
  • Publication number: 20190303759
    Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
    Type: Application
    Filed: March 27, 2019
    Publication date: October 3, 2019
    Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
  • Publication number: 20190266418
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Application
    Filed: February 26, 2019
    Publication date: August 29, 2019
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Publication number: 20190244060
    Abstract: A style transfer neural network may be used to generate stylized synthetic images, where real images provide the style (e.g., seasons, weather, lighting) for transfer to synthetic images. The stylized synthetic images may then be used to train a recognition neural network. In turn, the trained neural network may be used to predict semantic labels for the real images, providing recognition data for the real images. Finally, the real training dataset (real images and predicted recognition data) and the synthetic training dataset are used by the style transfer neural network to generate stylized synthetic images. The training of the neural network, prediction of recognition data for the real images, and stylizing of the synthetic images may be repeated for a number of iterations. The stylization operation more closely aligns a covariate of the synthetic images to the covariate of the real images, improving accuracy of the recognition neural network.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 8, 2019
    Inventors: Aysegul Dundar, Ming-Yu Liu, Ting-Chun Wang, John Zedlewski, Jan Kautz
  • Publication number: 20180338741
    Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
    Type: Application
    Filed: August 30, 2017
    Publication date: November 29, 2018
    Applicant: Enlitic, Inc.
    Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
  • Patent number: 8498885
    Abstract: Embodiments of methods and systems for predicting provider negotiated rates are disclosed. One method includes obtaining claims data and provider data, grouping the claims data into priceable units, computing prices for each of the priceable units based on the claims data and the provider data, and estimating provider negotiated rates based on the priceable units and the computed prices for the priceable units.
    Type: Grant
    Filed: July 27, 2011
    Date of Patent: July 30, 2013
    Assignee: Castlight Health, Inc.
    Inventors: Matthew Vanderzee, Anshul Amar, Jim Griswold, John Zedlewski, Naveen Saxena, Naomi Allen
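The abstract above describes grouping claims into priceable units, computing prices for each unit, and estimating negotiated rates from them. A sketch of one plausible reading, where a priceable unit is a (provider, procedure) pair and the estimated rate is the median allowed amount; both the grouping key and the median are assumptions for illustration.

```python
from collections import defaultdict
from statistics import median


def estimate_negotiated_rates(claims):
    """Group claims into priceable units keyed by (provider, procedure)
    and estimate each unit's negotiated rate as the median allowed amount."""
    units = defaultdict(list)
    for claim in claims:
        key = (claim["provider"], claim["procedure"])
        units[key].append(claim["allowed_amount"])
    return {key: median(amounts) for key, amounts in units.items()}


# Usage with a few synthetic claims for one provider/procedure pair:
rates = estimate_negotiated_rates([
    {"provider": "A", "procedure": "99213", "allowed_amount": 100},
    {"provider": "A", "procedure": "99213", "allowed_amount": 120},
    {"provider": "A", "procedure": "99213", "allowed_amount": 110},
])
```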
  • Patent number: 8296767
    Abstract: Management of contexts that execute on a computer system is described. More specifically, context scheduling in a virtual machine environment is described. Times at which a context transitions from a scheduled state to a descheduled state and times at which the context transitions from a descheduled state to a scheduled state are recorded for each context. Skew is detected using the recorded times. The amount of skew can be quantified, and a corrective action is triggered if the amount of skew fails to satisfy a threshold value.
    Type: Grant
    Filed: February 16, 2007
    Date of Patent: October 23, 2012
    Assignee: VMware, Inc.
    Inventors: Carl Waldspurger, John Zedlewski, Andrei Dorofeev
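The abstract above describes recording each context's schedule/deschedule transition times, quantifying skew from those records, and triggering a corrective action when the skew exceeds a threshold. A sketch of that detection step; the log format (per-context lists of (scheduled, descheduled) time pairs) and the skew definition used here are assumptions for illustration.

```python
def detect_skew(schedule_log, threshold):
    """From recorded (scheduled_at, descheduled_at) intervals per context,
    compute each context's total scheduled time and flag the run when the
    spread between the most- and least-scheduled contexts exceeds the
    threshold, signaling that a corrective action should be triggered."""
    totals = {}
    for ctx, intervals in schedule_log.items():
        totals[ctx] = sum(end - start for start, end in intervals)
    skew = max(totals.values()) - min(totals.values())
    return skew, skew > threshold


# Usage: context "a" ran 10 time units, context "b" only 4.
skew, triggered = detect_skew({"a": [(0, 10)], "b": [(0, 4)]}, threshold=5)
```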