Patents by Inventor John Zedlewski
John Zedlewski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250111216
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Type: Application
Filed: December 13, 2024
Publication date: April 3, 2025
Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
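The "encode virtual sensor data to the physical format" step from the abstract above can be illustrated with a minimal, hypothetical sketch. The concrete formats are assumptions for illustration only: here the simulator is taken to emit float RGB values in [0, 1], while the real camera is taken to log 8-bit BGR tuples, so simulated frames are re-encoded before the DNN under test consumes them.

```python
# Hypothetical sketch: re-encode a simulated float-RGB frame into the
# real camera's uint8 BGR layout so a DNN trained on physical sensor
# data can consume virtual sensor data unchanged.

def encode_virtual_frame(frame_rgb01):
    """Convert a simulated float RGB frame to a uint8 BGR layout."""
    def to_u8(v):
        # Clamp to [0, 1] and scale to the 0-255 byte range.
        return max(0, min(255, round(v * 255)))

    encoded = []
    for row in frame_rgb01:
        # Swap channel order from RGB to BGR while quantizing.
        encoded.append([(to_u8(b), to_u8(g), to_u8(r)) for (r, g, b) in row])
    return encoded

frame = [[(0.0, 0.5, 1.0), (1.0, 0.0, 0.0)]]
print(encode_virtual_frame(frame))  # [[(255, 128, 0), (0, 0, 255)]]
```

The real system would map whatever raw encoding the physical sensors produce (Bayer patterns, LiDAR packets, etc.); only the "match the physical format before inference" idea is taken from the abstract.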
-
Patent number: 12266148
Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
Type: Grant
Filed: May 1, 2023
Date of Patent: April 1, 2025
Assignee: NVIDIA Corporation
Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
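The pipeline the abstract outlines — group segmentation-mask pixels by lane-marking type, then curve-fit each group — can be sketched as follows. This is a hedged illustration, not the patented method: the marking-type labels and the straight-line least-squares fit (the patent says "curve fitting" generally; real lanes would use a higher-order curve) are assumptions.

```python
# Illustrative sketch: bucket mask pixels by marking type, then fit a
# line col = slope*row + intercept to each bucket by least squares.
from collections import defaultdict

def fit_lane_boundaries(mask_pixels):
    """mask_pixels: iterable of (row, col, marking_type) triples.
    Returns {marking_type: (slope, intercept)}."""
    by_type = defaultdict(list)
    for r, c, t in mask_pixels:
        by_type[t].append((r, c))

    boundaries = {}
    for t, pts in by_type.items():
        # Closed-form simple linear regression over (row, col) pairs.
        n = len(pts)
        sum_r = sum(r for r, _ in pts)
        sum_c = sum(c for _, c in pts)
        sum_rr = sum(r * r for r, _ in pts)
        sum_rc = sum(r * c for r, c in pts)
        denom = n * sum_rr - sum_r * sum_r
        slope = (n * sum_rc - sum_r * sum_c) / denom if denom else 0.0
        intercept = (sum_c - slope * sum_r) / n
        boundaries[t] = (slope, intercept)
    return boundaries

# Two pixels of a "solid" marking lying along col = 2*row + 1:
print(fit_lane_boundaries([(0, 1, "solid"), (1, 3, "solid")]))
# {'solid': (2.0, 1.0)}
```

The fitted parameters per marking type stand in for the "lane boundaries" that the abstract says are sent to a downstream vehicle component.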
-
Patent number: 12182694
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Type: Grant
Filed: August 30, 2022
Date of Patent: December 31, 2024
Assignee: NVIDIA Corporation
Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
-
Publication number: 20230267701
Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
Type: Application
Filed: May 1, 2023
Publication date: August 24, 2023
Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
-
Patent number: 11676364
Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
Type: Grant
Filed: April 5, 2021
Date of Patent: June 13, 2023
Assignee: NVIDIA Corporation
Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
-
Publication number: 20230004801
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Type: Application
Filed: August 30, 2022
Publication date: January 5, 2023
Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
-
Patent number: 11436484
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Type: Grant
Filed: March 27, 2019
Date of Patent: September 6, 2022
Assignee: NVIDIA Corporation
Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
-
Publication number: 20210224556
Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
Type: Application
Filed: April 5, 2021
Publication date: July 22, 2021
Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
-
Patent number: 10997433
Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
Type: Grant
Filed: February 26, 2019
Date of Patent: May 4, 2021
Assignee: NVIDIA Corporation
Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
-
Patent number: 10984286
Abstract: A style transfer neural network may be used to generate stylized synthetic images, where real images provide the style (e.g., seasons, weather, lighting) for transfer to synthetic images. The stylized synthetic images may then be used to train a recognition neural network. In turn, the trained neural network may be used to predict semantic labels for the real images, providing recognition data for the real images. Finally, the real training dataset (real images and predicted recognition data) and the synthetic training dataset are used by the style transfer neural network to generate stylized synthetic images. The training of the neural network, prediction of recognition data for the real images, and stylizing of the synthetic images may be repeated for a number of iterations. The stylization operation more closely aligns a covariate of the synthetic images to the covariate of the real images, improving accuracy of the recognition neural network.
Type: Grant
Filed: February 1, 2019
Date of Patent: April 20, 2021
Assignee: NVIDIA Corporation
Inventors: Aysegul Dundar, Ming-Yu Liu, Ting-Chun Wang, John Zedlewski, Jan Kautz
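The iterative loop in the abstract above — stylize synthetic images, train a recognition network on them, pseudo-label the real images, and feed those labels back into the next stylization round — can be sketched structurally. Every helper below is a trivial stand-in (the real system uses a style transfer network and a recognition network), so only the loop's data flow reflects the abstract.

```python
# Structural sketch of the iterative training scheme; all helpers are
# deliberately trivial stand-ins so the loop's data flow is visible.

def stylize(synthetic_dataset, style_source, real_labels):
    # Stand-in for the style transfer network: tag each synthetic image.
    return [("stylized", image) for image in synthetic_dataset]

def train_recognition(training_data):
    # Stand-in for training the recognition network on stylized data.
    return {"trained_on": len(training_data)}

def predict_labels(model, images):
    # Stand-in for pseudo-labeling the real images with the current model.
    return ["pseudo_label"] * len(images)

def iterative_style_transfer_training(real_images, synthetic_dataset, iterations=3):
    real_labels = None  # no recognition data exists for real images at first
    model = None
    for _ in range(iterations):
        stylized = stylize(synthetic_dataset, real_images, real_labels)
        model = train_recognition(stylized)
        real_labels = predict_labels(model, real_images)  # feeds next round
    return model, real_labels

model, labels = iterative_style_transfer_training(["r1", "r2"], ["s1", "s2", "s3"])
print(model, labels)  # {'trained_on': 3} ['pseudo_label', 'pseudo_label']
```

Each pass gives the stylizer more recognition data for the real images, which is how the abstract describes the covariates of the two datasets being drawn together over iterations.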
-
Patent number: 10896753
Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
Type: Grant
Filed: December 10, 2019
Date of Patent: January 19, 2021
Assignee: Enlitic, Inc.
Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
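The report-generation flow the abstract describes — score each cross-sectional image with a trained model, then assemble an assessment score into a report — can be sketched as below. The model is stubbed, and the aggregation rule (maximum per-slice score) and the detection threshold are illustrative assumptions, not the patented scoring.

```python
# Hypothetical sketch: a trained CV model (stubbed) scores each CT slice;
# the report carries the maximum per-slice assessment score.

def generate_lung_screening_report(ct_slices, model):
    """ct_slices: list of per-slice inputs; model: callable returning a score."""
    scores = [model(ct_slice) for ct_slice in ct_slices]
    assessment = max(scores)
    return {
        "assessment_score": assessment,
        "nodule_detected": assessment >= 0.5,  # illustrative threshold
        "num_slices": len(ct_slices),
    }

# Stand-in for the trained computer vision model:
stub_model = lambda ct_slice: ct_slice["suspicion"]

report = generate_lung_screening_report(
    [{"suspicion": 0.2}, {"suspicion": 0.7}], stub_model)
print(report)
# {'assessment_score': 0.7, 'nodule_detected': True, 'num_slices': 2}
```

In the described system the resulting report is what gets rendered on the user's display device.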
-
Publication number: 20200111561
Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
Type: Application
Filed: December 10, 2019
Publication date: April 9, 2020
Applicant: Enlitic, Inc.
Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
-
Publication number: 20200082269
Abstract: One embodiment of a method includes performing one or more activation functions in a neural network using weights that have been quantized from floating point values to values that are represented using fewer bits than the floating point values. The method further includes performing a first quantization of the weights from the floating point values to the values that are represented using fewer bits than the floating point values after the floating point values are updated using a first number of forward-backward passes of the neural network using training data. The method further includes performing a second quantization of the weights from the floating point values to the values that are represented using fewer bits than the floating point values after the floating point values are updated using a second number of forward-backward passes of the neural network following the first quantization of the weights.
Type: Application
Filed: April 2, 2019
Publication date: March 12, 2020
Inventors: Shuang Gao, Hao Wu, John Zedlewski
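The quantization step the abstract describes — mapping float weights to values representable in fewer bits, then re-quantizing after further forward-backward passes — can be illustrated with a minimal sketch. The symmetric uniform scheme below is an assumption for illustration; the publication's actual quantizer may differ.

```python
# Minimal sketch of symmetric uniform quantization: map float weights
# onto an n-bit signed integer grid, keeping a scale for dequantization.

def quantize(weights, bits=8):
    """Return (integer codes, scale); weight ~= code * scale."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0  # 1.0 if all zero
    codes = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return codes, scale

codes, scale = quantize([0.5, -1.0, 0.25], bits=4)
print(codes)  # [4, -7, 2]
# Training would proceed in float, with quantize() re-applied after each
# scheduled number of forward-backward passes, per the abstract.
```

Dequantizing (`code * scale`) recovers an approximation of each weight, which is what the activation functions would consume in the quantized network.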
-
Patent number: 10553311
Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
Type: Grant
Filed: August 30, 2017
Date of Patent: February 4, 2020
Assignee: Enlitic, Inc.
Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
-
Publication number: 20190303759
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Type: Application
Filed: March 27, 2019
Publication date: October 3, 2019
Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
-
Publication number: 20190266418
Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
Type: Application
Filed: February 26, 2019
Publication date: August 29, 2019
Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
-
Publication number: 20190244060
Abstract: A style transfer neural network may be used to generate stylized synthetic images, where real images provide the style (e.g., seasons, weather, lighting) for transfer to synthetic images. The stylized synthetic images may then be used to train a recognition neural network. In turn, the trained neural network may be used to predict semantic labels for the real images, providing recognition data for the real images. Finally, the real training dataset (real images and predicted recognition data) and the synthetic training dataset are used by the style transfer neural network to generate stylized synthetic images. The training of the neural network, prediction of recognition data for the real images, and stylizing of the synthetic images may be repeated for a number of iterations. The stylization operation more closely aligns a covariate of the synthetic images to the covariate of the real images, improving accuracy of the recognition neural network.
Type: Application
Filed: February 1, 2019
Publication date: August 8, 2019
Inventors: Aysegul Dundar, Ming-Yu Liu, Ting-Chun Wang, John Zedlewski, Jan Kautz
-
Publication number: 20180338741
Abstract: A lung screening assessment system is operable to receive a chest computed tomography (CT) scan that includes a plurality of cross sectional images. Nodule classification data of the chest CT scan is generated by utilizing a computer vision model that is trained on a plurality of training chest CT scans to identify a nodule in the plurality of cross sectional images and determine an assessment score. A lung screening report that includes the assessment score of the nodule classification data is generated for display on a display device associated with a user of the lung screening assessment system.
Type: Application
Filed: August 30, 2017
Publication date: November 29, 2018
Applicant: Enlitic, Inc.
Inventors: Kevin Lyman, Devon Bernard, Li Yao, Ben Covington, Diogo Almeida, Brian Basham, Jeremy Howard, Anthony Upton, John Zedlewski
-
Patent number: 8498885
Abstract: Embodiments of methods and systems for predicting provider negotiated rates are disclosed. One method includes obtaining claims data and provider data, grouping the claims data into priceable units, computing prices for each of the priceable units based on the claims data and the provider data, and estimating provider negotiated rates based on the priceable units and the computed prices for the priceable units.
Type: Grant
Filed: July 27, 2011
Date of Patent: July 30, 2013
Assignee: Castlight Health Inc.
Inventors: Matthew Vanderzee, Anshul Amar, Jim Griswold, John Zedlewski, Naveen Saxena, Naomi Allen
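The steps named in the abstract — group claims into priceable units, compute a price per unit, and estimate negotiated rates from those prices — can be sketched as follows. The field names, the (provider, procedure) unit definition, and the use of a simple average as the estimate are all illustrative assumptions, not the patented method.

```python
# Hedged sketch: group claims into (provider, procedure) priceable units
# and estimate each unit's negotiated rate as the mean allowed amount.
from collections import defaultdict

def estimate_negotiated_rates(claims):
    """claims: list of dicts with 'provider', 'procedure', 'allowed_amount'."""
    units = defaultdict(list)  # illustrative priceable unit key
    for claim in claims:
        units[(claim["provider"], claim["procedure"])].append(claim["allowed_amount"])
    # Average the observed amounts within each unit as the rate estimate.
    return {unit: sum(amts) / len(amts) for unit, amts in units.items()}

claims = [
    {"provider": "A", "procedure": "MRI", "allowed_amount": 400.0},
    {"provider": "A", "procedure": "MRI", "allowed_amount": 600.0},
]
print(estimate_negotiated_rates(claims))  # {('A', 'MRI'): 500.0}
```

A production system would also fold in the provider data the abstract mentions (specialty, region, contract tier) when defining units and computing prices.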
-
Patent number: 8296767
Abstract: Management of contexts that execute on a computer system is described. More specifically, context scheduling in a virtual machine environment is described. Times at which a context transitions from a scheduled state to a descheduled state and times at which the context transitions from a descheduled state to a scheduled state are recorded for each context. Skew is detected using the recorded times. The amount of skew can be quantified, and a corrective action is triggered if the amount of skew fails to satisfy a threshold value.
Type: Grant
Filed: February 16, 2007
Date of Patent: October 23, 2012
Assignee: VMware, Inc.
Inventors: Carl Waldspurger, John Zedlewski, Andrei Dorofeev
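The mechanism the abstract describes — record each context's scheduled/descheduled transition times, quantify skew from them, and trigger a corrective action against a threshold — can be sketched as below. Quantifying skew as the gap between the most- and least-scheduled contexts is an assumption for illustration; the patent covers the general recorded-transition approach.

```python
# Illustrative sketch: reconstruct per-context scheduled time from
# recorded state transitions, then quantify skew across contexts.

def scheduled_time(transitions, now):
    """transitions: time-ordered (timestamp, 'sched'|'desched') events."""
    total, started = 0.0, None
    for t, kind in transitions:
        if kind == "sched":
            started = t                 # context entered the scheduled state
        elif started is not None:
            total += t - started        # close out a scheduled interval
            started = None
    if started is not None:             # context is still scheduled
        total += now - started
    return total

def detect_skew(contexts, now, threshold):
    """Returns (skew, corrective_action_needed) across all contexts."""
    times = [scheduled_time(tr, now) for tr in contexts.values()]
    skew = max(times) - min(times)
    return skew, skew > threshold

contexts = {
    "vcpu0": [(0, "sched"), (10, "desched")],
    "vcpu1": [(0, "sched"), (4, "desched")],
}
print(detect_skew(contexts, now=10, threshold=5))  # (6.0, True)
```

When the second element is true, the scheduler would take the corrective action the abstract mentions, such as preferentially scheduling the lagging context.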