Patents by Inventor David Wehr
David Wehr has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11954914
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a “2.5D” optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data: two or more range images may be compared to generate depth flow information, and messages may be passed (e.g., using a belief propagation algorithm) to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of the environment around an autonomous machine.
Type: Grant
Filed: August 2, 2021
Date of Patent: April 9, 2024
Assignee: NVIDIA Corporation
Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
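The 2.5D-to-3D conversion the abstract ends with can be illustrated with a small sketch. This is not the patented implementation; the range-image geometry (column-to-azimuth and row-to-elevation mapping) and the sensor parameters below are assumptions for illustration only:

```python
import math

# Hypothetical LiDAR range-image geometry (not from the patent): columns span
# 360 degrees of azimuth, rows span a 30-degree elevation fan starting at +15.
AZ_PER_COL = math.radians(360.0 / 1024)   # azimuth step per column
EL_PER_ROW = math.radians(30.0 / 64)      # elevation step per row
EL_TOP = math.radians(15.0)               # elevation of row 0

def pixel_to_3d(row, col, depth):
    """Back-project a range-image pixel (row, col, depth) to a 3D point."""
    az = col * AZ_PER_COL
    el = EL_TOP - row * EL_PER_ROW
    x = depth * math.cos(el) * math.cos(az)
    y = depth * math.cos(el) * math.sin(az)
    z = depth * math.sin(el)
    return (x, y, z)

def flow_2_5d_to_3d(row, col, depth, du, dv, ddepth):
    """Lift a 2.5D motion vector (du, dv, ddepth) into a 3D scene-flow vector."""
    p0 = pixel_to_3d(row, col, depth)
    p1 = pixel_to_3d(row + dv, col + du, depth + ddepth)
    return tuple(b - a for a, b in zip(p0, p1))
```

With this geometry, a motion vector that changes only depth moves the point purely along the pixel's viewing ray, while pixel-coordinate flow (du, dv) moves it across rays.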
-
Publication number: 20240094178
Abstract: In various embodiments, both very high speed and very high sensitivity hydrogen detection are achieved by controlling the water vapor concentration over the catalyst used to convert hydrogen in a sample gas (e.g., ambient air) to water vapor, so as to provide a substantially stable water vapor level at a target mixing ratio. Without such control, the naturally occurring water vapor in the sample gas would typically vary over time within a wide range (e.g., due to changing atmospheric conditions). By controlling the level of water vapor over the catalyst to be substantially equal to a target mixing ratio that is neither so low as to impair response time nor so high as to impair sensitivity, both very high speed and very high sensitivity can be provided.
Type: Application
Filed: December 1, 2023
Publication date: March 21, 2024
Inventors: David D. Nelson, Jr., Scott C. Herndon, Joanne H. Shorter, Joseph R. Roscioli, Elizabeth M. Lunny, Richard A. Wehr
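The control principle in the abstract (hold the water vapor level over the catalyst at a target mixing ratio despite varying ambient vapor) can be sketched as a toy proportional control loop. This is an assumption for illustration, not the patented apparatus; the `gain` value, ppm units, and additive mixing model are all hypothetical:

```python
def control_step(ambient_ppm, added_ppm, target_ppm, gain=0.5):
    """One proportional-control step on the added water vapor.

    ambient_ppm: water vapor already present in the sample gas
    added_ppm:   water vapor currently being added by the controller
    target_ppm:  desired total mixing ratio over the catalyst
    """
    measured = ambient_ppm + added_ppm
    added_ppm += gain * (target_ppm - measured)
    return max(0.0, added_ppm)  # the controller can only add vapor, not remove it
```

Iterating this step drives the total (ambient + added) toward the target; if ambient vapor alone exceeds the target, the added amount clamps at zero.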
-
Patent number: 11915493
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: August 25, 2022
Date of Patent: February 27, 2024
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
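One of the post-processing steps the abstract mentions (turning per-pixel class segmentation into 2D bounding boxes with class labels) can be sketched in plain Python. This is an illustrative assumption, not NVIDIA's implementation; the toy grid below stands in for a DNN's segmentation output over a top-down view:

```python
def boxes_from_mask(mask, background=0):
    """Return {class_id: (row_min, col_min, row_max, col_max)} from a 2D
    class-segmentation mask given as a list of lists of class ids."""
    boxes = {}
    for r, row in enumerate(mask):
        for c, cls in enumerate(row):
            if cls == background:
                continue
            # Grow this class's box to include the current pixel.
            r0, c0, r1, c1 = boxes.get(cls, (r, c, r, c))
            boxes[cls] = (min(r0, r), min(c0, c), max(r1, r), max(c1, c))
    return boxes
```

A real pipeline would separate multiple instances of the same class (e.g., via connected components or the regressed instance geometry the abstract mentions); this sketch collapses each class into one box for brevity.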
-
Publication number: 20240029447
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: October 6, 2023
Publication date: January 25, 2024
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20230054759
Abstract: In various examples, an obstacle detector is capable of tracking the velocity state of detected objects or obstacles using LiDAR data. For example, using LiDAR data alone, an iterative closest point (ICP) algorithm may be used to determine the current state of detected objects for a given frame, and a Kalman filter may be used to maintain a tracked state of the objects detected over time. The obstacle detector may be configured to estimate a velocity for each detected object, compare the estimated velocity to the tracked states of previously detected objects, determine that a detected object corresponds to a particular previously detected object, and update that object's tracked state with the estimated velocity.
Type: Application
Filed: August 23, 2021
Publication date: February 23, 2023
Inventors: Richard Zachary Robinson, Jens Christian Bo Joergensen, David Wehr, Joachim Pehserl
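The tracked-state update the abstract describes can be illustrated with a minimal one-dimensional Kalman filter, where each measurement plays the role of a per-frame velocity estimate (e.g., from an ICP alignment step). This sketch is an assumption for illustration, not the patented implementation; the noise parameters are made up:

```python
class VelocityTrack:
    """Maintain a tracked 1D velocity state from noisy per-frame estimates."""

    def __init__(self, v0=0.0, var0=1.0, process_var=0.01, meas_var=0.25):
        self.v = v0           # tracked velocity estimate
        self.p = var0         # variance of the estimate
        self.q = process_var  # process noise added each frame (hypothetical)
        self.r = meas_var     # measurement noise of the per-frame estimate

    def update(self, v_measured):
        # Predict: constant-velocity model, so uncertainty just grows by q.
        self.p += self.q
        # Update: blend prediction and measurement via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.v += k * (v_measured - self.v)
        self.p *= (1.0 - k)
        return self.v
```

Feeding the filter noisy measurements scattered around a true velocity pulls the tracked state toward that value while smoothing out frame-to-frame jitter; a full tracker would run one such state per object (and per axis) plus the data association the abstract describes.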
-
Patent number: 11587315
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for augmented reality measuring of equipment. An example apparatus disclosed herein includes an image comparator to compare camera data with reference information of a reference vehicle part to identify a vehicle part included in the camera data, and an inspection image analyzer to, in response to the image comparator identifying the vehicle part, measure the vehicle part by causing an interface generator to generate an overlay representation of the reference vehicle part on the camera data displayed on a user interface, and determining, based on one or more user inputs to adjust the overlay representation, a measurement corresponding to the vehicle part.
Type: Grant
Filed: May 21, 2020
Date of Patent: February 21, 2023
Assignee: Deere & Company
Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
-
Patent number: 11580628
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for augmented reality vehicle condition inspection. An example apparatus disclosed herein includes a location analyzer to determine whether a camera is at an inspection location and directed towards a first vehicle in an inspection profile, the inspection location corresponding to a location of the camera relative to the first vehicle; an interface generator to generate an indication on a display that the camera is at the inspection location, the indication associated with an inspection image being captured; and an image analyzer to compare the inspection image captured at the inspection location with a reference image taken of a reference vehicle of the same type as the first vehicle, and determine a vehicle part condition or a vehicle condition based on the comparison of the inspection image and the reference image.
Type: Grant
Filed: May 21, 2020
Date of Patent: February 14, 2023
Assignee: Deere & Company
Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
-
Publication number: 20230033470
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a “2.5D” optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data: two or more range images may be compared to generate depth flow information, and messages may be passed (e.g., using a belief propagation algorithm) to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of the environment around an autonomous machine.
Type: Application
Filed: August 2, 2021
Publication date: February 2, 2023
Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
-
Publication number: 20220415059
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 25, 2022
Publication date: December 29, 2022
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 11532168
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 11334467
Abstract: A computer-implemented method, system, and computer program product for representing source code in vector space. The source code is parsed into an abstract syntax tree, which is then traversed to produce a sequence of tokens. Token embeddings may then be constructed for a subset of the sequence of tokens and input into an encoder artificial neural network (“encoder”). A decoder artificial neural network (“decoder”) is initialized with the final internal cell state of the encoder and run for the same number of steps as the encoder. After the decoder has been run and trained to learn the input token embeddings, the final internal cell state of the encoder is used as the code representation vector, which may be used to detect errors in the source code.
Type: Grant
Filed: May 3, 2019
Date of Patent: May 17, 2022
Assignee: International Business Machines Corporation
Inventors: David Wehr, Eleanor Pence, Halley Fede, Isabella Yamin, Alexander Sobran, Bo Zhang
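The first steps of the pipeline (parse the source into an abstract syntax tree, then traverse it to produce a token sequence) can be sketched with Python's standard `ast` module. This is an illustrative assumption, not IBM's implementation; it uses AST node-type names as the tokens, where a real system might use a richer vocabulary:

```python
import ast

def source_to_tokens(source):
    """Parse Python source into an AST and traverse it into a token sequence.

    ast.walk yields nodes breadth-first starting from the Module root, so the
    token order reflects the tree's structure.
    """
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

def tokens_to_ids(tokens):
    """Map tokens to integer ids, the usual precursor to embedding lookup."""
    vocab = {}
    return [vocab.setdefault(tok, len(vocab)) for tok in tokens]
```

In the abstract's scheme, these integer ids would index into learned token embeddings that are fed to the encoder; the encoder's final internal cell state then serves as the code vector.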
-
Patent number: 11238306
Abstract: A method, system, and computer program product for obtaining vector representations of code snippets that capture semantic similarity. First and second training sets of code snippets are collected: snippets in the first set implement the same function (representing semantic similarity), while snippets in the second set implement different functions (representing semantic dissimilarity). Vector representations of a first and a second code snippet, drawn from either training set, are generated using a machine learning model. A loss value is then computed with a loss function that grows with the distance between the two vectors when the snippets come from the first (similar) set, and shrinks with that distance when they come from the second (dissimilar) set. The machine learning model is trained to capture semantic similarity in the code snippets by minimizing this loss value.
Type: Grant
Filed: September 27, 2018
Date of Patent: February 1, 2022
Assignee: International Business Machines Corporation
Inventors: Bo Zhang, Alexander Sobran, David Wehr, Halley Fede, Eleanor Pence, Joseph Hughes, John H. Walczyk, III, Guilherme Ferreira
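The loss the abstract describes behaves like a contrastive pair loss: it grows with vector distance for similar snippets and shrinks with distance, up to a margin, for dissimilar ones. The sketch below is an assumption for illustration, not the patented loss function; the `margin` value and squared-distance form are conventional choices, not taken from the patent:

```python
import math

def pair_loss(vec_a, vec_b, similar, margin=1.0):
    """Contrastive loss for one pair of snippet vectors.

    similar=True  -> loss grows with distance (pull similar snippets together)
    similar=False -> loss shrinks with distance, zero beyond the margin
                     (push dissimilar snippets at least `margin` apart)
    """
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(vec_a, vec_b)))
    if similar:
        return dist ** 2
    return max(0.0, margin - dist) ** 2
```

Minimizing this over many pairs drives the model toward embeddings where distance reflects semantic similarity, which is the training objective the abstract describes.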
-
Publication number: 20210342608
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210342609
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210150230
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: June 29, 2020
Publication date: May 20, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20200401803
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for augmented reality measuring of equipment. An example apparatus disclosed herein includes an image comparator to compare camera data with reference information of a reference vehicle part to identify a vehicle part included in the camera data, and an inspection image analyzer to, in response to the image comparator identifying the vehicle part, measure the vehicle part by causing an interface generator to generate an overlay representation of the reference vehicle part on the camera data displayed on a user interface, and determining, based on one or more user inputs to adjust the overlay representation, a measurement corresponding to the vehicle part.
Type: Application
Filed: May 21, 2020
Publication date: December 24, 2020
Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
-
Publication number: 20200402219
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for augmented reality vehicle condition inspection. An example apparatus disclosed herein includes a location analyzer to determine whether a camera is at an inspection location and directed towards a first vehicle in an inspection profile, the inspection location corresponding to a location of the camera relative to the first vehicle; an interface generator to generate an indication on a display that the camera is at the inspection location, the indication associated with an inspection image being captured; and an image analyzer to compare the inspection image captured at the inspection location with a reference image taken of a reference vehicle of the same type as the first vehicle, and determine a vehicle part condition or a vehicle condition based on the comparison of the inspection image and the reference image.
Type: Application
Filed: May 21, 2020
Publication date: December 24, 2020
Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
-
Publication number: 20200349052
Abstract: A computer-implemented method, system, and computer program product for representing source code in vector space. The source code is parsed into an abstract syntax tree, which is then traversed to produce a sequence of tokens. Token embeddings may then be constructed for a subset of the sequence of tokens and input into an encoder artificial neural network (“encoder”). A decoder artificial neural network (“decoder”) is initialized with the final internal cell state of the encoder and run for the same number of steps as the encoder. After the decoder has been run and trained to learn the input token embeddings, the final internal cell state of the encoder is used as the code representation vector, which may be used to detect errors in the source code.
Type: Application
Filed: May 3, 2019
Publication date: November 5, 2020
Inventors: David Wehr, Eleanor Pence, Halley Fede, Isabella Yamin, Alexander Sobran, Bo Zhang
-
Publication number: 20200104631
Abstract: A method, system, and computer program product for obtaining vector representations of code snippets that capture semantic similarity. First and second training sets of code snippets are collected: snippets in the first set implement the same function (representing semantic similarity), while snippets in the second set implement different functions (representing semantic dissimilarity). Vector representations of a first and a second code snippet, drawn from either training set, are generated using a machine learning model. A loss value is then computed with a loss function that grows with the distance between the two vectors when the snippets come from the first (similar) set, and shrinks with that distance when they come from the second (dissimilar) set. The machine learning model is trained to capture semantic similarity in the code snippets by minimizing this loss value.
Type: Application
Filed: September 27, 2018
Publication date: April 2, 2020
Inventors: Bo Zhang, Alexander Sobran, David Wehr, Halley Fede, Eleanor Pence, Joseph Hughes, John H. Walczyk, III, Guilherme Ferreira
-
Publication number: 20070234821
Abstract: A magnetic flow metering device and method are disclosed for the measurement of corrosive flow streams. The device uses a unibody construction in which the flow conduit is made entirely of an insulative, non-conducting material, without resorting to a metallic outer housing. The portions of the electrodes in contact with the flow stream are made of a conductive polymer resistant to the corrosive media. The electrodes also feature shields molded into the electrode assembly to reduce background electrical noise. The invention also uses an electrical configuration that actively drives the electrode shield circuit (electrodes as well as cabling) to provide a more accurate measurement of the electromotive force.
Type: Application
Filed: May 31, 2007
Publication date: October 11, 2007
Inventors: David Wehrs, Robert Chinnock