Patents by Inventor David Wehr

David Wehr has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954914
    Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a “2.5D” optical flow space (e.g., x, y, and depth flow). For example, two or more LiDAR range images may be compared to generate 2.5D representations of depth flow information between frames of LiDAR data, and messages may be passed—e.g., using a belief propagation algorithm—to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, which may be converted back to 3D space to produce a 3D scene flow representation of the environment around an autonomous machine.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: April 9, 2024
    Assignee: NVIDIA Corporation
    Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
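
The abstract above ends by converting 2.5D motion vectors back into 3D scene flow. The following is a minimal NumPy sketch of that lifting step only, assuming a spherical range-image projection; the helper names, field-of-view defaults, and the (dx, dy, ddepth) flow layout are illustrative assumptions, and the belief-propagation refinement is omitted.

```python
import numpy as np

def pixel_rays(h, w, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Unit viewing ray per range-image pixel, assuming a spherical
    projection model (a common LiDAR convention; the patent does not
    fix the exact sensor model)."""
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    az = (1.0 - 2.0 * (np.arange(w) + 0.5) / w) * np.pi           # azimuth per column
    el = fov_up - (np.arange(h) + 0.5) / h * (fov_up - fov_down)  # elevation per row
    az, el = np.meshgrid(az, el)                                  # both (h, w)
    return np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)                        # (h, w, 3)

def lift_2p5d_flow(depth0, flow):
    """Convert per-pixel 2.5D motion vectors (dx, dy, ddepth) defined on a
    range image into 3D scene flow vectors (end point minus start point)."""
    h, w = depth0.shape
    rays = pixel_rays(h, w)
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    r1 = np.clip((rows + flow[..., 1]).round().astype(int), 0, h - 1)
    c1 = np.clip((cols + flow[..., 0]).round().astype(int), 0, w - 1)
    p0 = depth0[..., None] * rays                            # 3D points in frame 0
    p1 = (depth0 + flow[..., 2])[..., None] * rays[r1, c1]   # displaced points
    return p1 - p0                                           # (h, w, 3) scene flow
```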
  • Patent number: 11915493
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
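
Several entries below share this multi-view abstract, so a single hedged PyTorch sketch of the chained two-stage idea is given here: class segmentation in the perspective view, splatting of features into a top-down (bird's-eye-view) grid, then segmentation plus instance-geometry regression in that view. Channel widths, class counts, and the `splat_to_bev` helper are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class PerspectiveStage(nn.Module):
    """First stage: per-pixel class segmentation in the perspective view
    (e.g., a LiDAR range image with x, y, z, intensity, depth channels)."""
    def __init__(self, c_in=5, c_feat=32, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(conv_block(c_in, c_feat),
                                      conv_block(c_feat, c_feat))
        self.seg_head = nn.Conv2d(c_feat, n_classes, 1)

    def forward(self, x):
        f = self.backbone(x)
        return f, self.seg_head(f)      # features for stage 2, class logits

def splat_to_bev(feats, px_to_cell, bev_hw):
    """Scatter perspective-view features into a top-down grid via a
    precomputed pixel -> BEV-cell index map (int64; a hypothetical helper,
    since the patent only says the views are processed sequentially).
    Max-reduce against zeros is safe here because features are post-ReLU."""
    n, c, h, w = feats.shape
    bev = feats.new_zeros(n, c, bev_hw[0] * bev_hw[1])
    idx = px_to_cell.reshape(1, 1, h * w).expand(n, c, -1)
    bev.scatter_reduce_(2, idx, feats.reshape(n, c, h * w), reduce="amax")
    return bev.reshape(n, c, *bev_hw)

class TopDownStage(nn.Module):
    """Second stage: class segmentation plus instance-geometry regression
    (e.g., center offsets and box extents) in the top-down view."""
    def __init__(self, c_feat=32, n_classes=4, n_geom=4):
        super().__init__()
        self.backbone = nn.Sequential(conv_block(c_feat, c_feat),
                                      conv_block(c_feat, c_feat))
        self.seg_head = nn.Conv2d(c_feat, n_classes, 1)
        self.geom_head = nn.Conv2d(c_feat, n_geom, 1)

    def forward(self, bev):
        f = self.backbone(bev)
        return self.seg_head(f), self.geom_head(f)
```

The per-cell class logits and geometry regressions would then be post-processed (e.g., thresholding and clustering) into 2D/3D bounding boxes and class labels, per the abstract.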
  • Publication number: 20240029447
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 6, 2023
    Publication date: January 25, 2024
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20230054759
    Abstract: In various examples, an obstacle detector is capable of tracking a velocity state of detected objects or obstacles using LiDAR data. For example, using LiDAR data alone, an iterative closest point (ICP) algorithm may be used to determine a current state of detected objects for a current frame, and a Kalman filter may be used to maintain a tracked state of the one or more objects detected over time. The obstacle detector may be configured to estimate a velocity for one or more detected objects, compare the estimated velocity to one or more previous tracked states for previously detected objects, determine that a detected object corresponds to a particular previously detected object, and update the tracked state for that previously detected object with the estimated velocity.
    Type: Application
    Filed: August 23, 2021
    Publication date: February 23, 2023
    Inventors: Richard Zachary Robinson, Jens Christian Bo Joergensen, David Wehr, Joachim Pehserl
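
A minimal sketch of the tracking half of this abstract is below: a constant-velocity Kalman filter maintains a tracked state per object, with the per-frame position measurement assumed to come from an ICP alignment of the object's LiDAR points (the ICP step itself and the track-association logic are omitted; all noise parameters are illustrative).

```python
import numpy as np

class KalmanTrack:
    """Constant-velocity Kalman filter over an object's 2D centroid,
    state x = [px, py, vx, vy]."""
    def __init__(self, xy, dt=0.1, q=1.0, r=0.5):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0              # state covariance
        self.F = np.eye(4)                     # transition: p += v * dt
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                  # we observe position only
        self.Q = np.eye(4) * q                 # process noise
        self.R = np.eye(2) * r                 # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, xy_measured):
        z = np.asarray(xy_measured, dtype=float)
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

    @property
    def velocity(self):
        return self.x[2:]
```

Matching a fresh detection to an existing track, as the abstract describes, could then be as simple as gating on the distance between the track's predicted position and velocity and the new estimate.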
  • Patent number: 11587315
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality measuring of equipment. An example apparatus disclosed herein includes an image comparator to compare camera data with reference information of a reference vehicle part to identify a vehicle part included in the camera data, and an inspection image analyzer to, in response to the image comparator identifying the vehicle part, measure the vehicle part by causing an interface generator to generate an overlay representation of the reference vehicle part on the camera data displayed on a user interface, and determining, based on one or more user inputs to adjust the overlay representation, a measurement corresponding to the vehicle part.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: February 21, 2023
    Assignee: Deere & Company
    Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
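
As a toy illustration of the measurement idea in this abstract: if the user scales a reference-part overlay until it matches the part in the camera image, the part's dimension follows from the reference dimension and that scale factor. The data model and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ReferencePart:
    """Known dimensions of a reference vehicle part (hypothetical schema)."""
    name: str
    length_mm: float

def measure_from_overlay(ref: ReferencePart, user_scale: float) -> float:
    """If the overlay had to be stretched by `user_scale` to match the
    part seen in the camera image, the corresponding dimension of the
    real part is the reference dimension times that factor."""
    return ref.length_mm * user_scale

# e.g., a 500 mm reference blade overlay stretched 1.12x to match:
print(measure_from_overlay(ReferencePart("blade", 500.0), 1.12))  # 560.0
```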
  • Patent number: 11580628
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality vehicle condition inspection. An example apparatus disclosed herein includes a location analyzer to determine whether a camera is at an inspection location and directed towards a first vehicle in an inspection profile, the inspection location corresponding to a location of the camera relative to the first vehicle, an interface generator to generate an indication on a display that the camera is at the inspection location, the indication associated with an inspection image being captured, and an image analyzer to compare the inspection image captured at the inspection location with a reference image taken of a reference vehicle of a same type as the first vehicle, and determine a vehicle part condition or a vehicle condition based on the comparison of the inspection image and the reference image.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: February 14, 2023
    Assignee: Deere & Company
    Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
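
A deliberately simple stand-in for the image comparison this abstract describes is sketched below: compare an inspection image against a reference image of the same vehicle type inside a region of interest and flag large deviations. The pixel-difference metric and threshold are assumptions; the patent does not prescribe a specific comparison.

```python
import numpy as np

def part_condition(inspect_img, reference_img, roi, threshold=0.15):
    """Score the difference between aligned grayscale (uint8) inspection
    and reference images inside roi = (row0, row1, col0, col1); a score
    above `threshold` flags a possible part-condition issue."""
    r0, r1, c0, c1 = roi
    a = inspect_img[r0:r1, c0:c1].astype(float) / 255.0
    b = reference_img[r0:r1, c0:c1].astype(float) / 255.0
    score = float(np.mean(np.abs(a - b)))   # 0 = identical
    return score, score > threshold
```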
  • Publication number: 20230033470
    Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a “2.5D” optical flow space (e.g., x, y, and depth flow). For example, two or more LiDAR range images may be compared to generate 2.5D representations of depth flow information between frames of LiDAR data, and messages may be passed—e.g., using a belief propagation algorithm—to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, which may be converted back to 3D space to produce a 3D scene flow representation of the environment around an autonomous machine.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 2, 2023
    Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
  • Publication number: 20220415059
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 25, 2022
    Publication date: December 29, 2022
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11532168
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11334467
    Abstract: A computer-implemented method, system and computer program product for representing source code in vector space. The source code is parsed into an abstract syntax tree, which is then traversed to produce a sequence of tokens. Token embeddings may then be constructed for a subset of the sequence of tokens and inputted into an encoder artificial neural network (“encoder”) for encoding. A decoder artificial neural network (“decoder”) is initialized with the final internal cell state of the encoder and run for the same number of steps as the encoder performed. After the decoder has been run and trained to reproduce the inputted token embeddings, the final internal cell state of the encoder is used as the code representation vector, which may be used to detect errors in the source code.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: David Wehr, Eleanor Pence, Halley Fede, Isabella Yamin, Alexander Sobran, Bo Zhang
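
A compact PyTorch sketch of the pipeline this abstract describes is below: tokenize via the abstract syntax tree, encode with an LSTM, initialize a decoder with the encoder's final cell state, run it for the same number of steps, and keep that cell state as the code vector. Vocabulary handling and hyperparameters are illustrative assumptions.

```python
import ast
import torch
import torch.nn as nn

def ast_tokens(source: str):
    """Parse source into an abstract syntax tree and traverse it
    (ast.walk order) to produce a sequence of node-type tokens."""
    return [type(node).__name__ for node in ast.walk(ast.parse(source))]

class CodeAutoencoder(nn.Module):
    """Seq2seq autoencoder: the encoder LSTM's final internal cell state
    initializes the decoder and doubles as the code representation vector."""
    def __init__(self, vocab_size, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids):                  # (batch, seq) int64
        x = self.embed(token_ids)
        _, (h, c) = self.encoder(x)                # final hidden/cell states
        y, _ = self.decoder(x, (h, c))             # same number of steps
        return self.out(y), c.squeeze(0)           # (logits, code vector)
```

Training would minimize cross-entropy between the logits and the input token ids, after which the returned cell state serves as the representation used downstream (e.g., for error detection).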
  • Patent number: 11238306
    Abstract: A method, system and computer program product for obtaining vector representations of code snippets that capture semantic similarity. A first and a second training set of code snippets are collected, where the code snippets in the first training set implement the same function (representing semantic similarity) and the code snippets in the second training set implement different functions (representing semantic dissimilarity). A vector representation of a first and a second code snippet from either training set is generated using a machine learning model. A loss value is then computed with a loss function that is proportional to the distance between the two vectors when the snippets come from the first training set, and inversely related to that distance when they come from the second. The machine learning model is trained to capture semantic similarity in code snippets by minimizing this loss value.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: February 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bo Zhang, Alexander Sobran, David Wehr, Halley Fede, Eleanor Pence, Joseph Hughes, John H. Walczyk, III, Guilherme Ferreira
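
The loss described in this abstract can be written as a contrastive-style pair loss: proportional to the distance between the two snippet vectors for semantically similar pairs, and decreasing in that distance for dissimilar pairs. The margin hinge below is a common formulation and an assumption here, not taken from the patent text.

```python
import torch

def pair_loss(v1, v2, similar: bool, margin: float = 1.0):
    """Contrastive pair loss over two code-snippet vectors of shape
    (batch, dim): pull similar pairs together, push dissimilar pairs
    apart up to `margin`."""
    d = torch.norm(v1 - v2, dim=-1)                        # Euclidean distance
    if similar:
        return (d ** 2).mean()                             # grows with distance
    return (torch.clamp(margin - d, min=0) ** 2).mean()    # shrinks with distance
```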
  • Publication number: 20210342608
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210342609
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210150230
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: June 29, 2020
    Publication date: May 20, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20200401803
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality measuring of equipment. An example apparatus disclosed herein includes an image comparator to compare camera data with reference information of a reference vehicle part to identify a vehicle part included in the camera data, and an inspection image analyzer to, in response to the image comparator identifying the vehicle part, measure the vehicle part by causing an interface generator to generate an overlay representation of the reference vehicle part on the camera data displayed on a user interface, and determining, based on one or more user inputs to adjust the overlay representation, a measurement corresponding to the vehicle part.
    Type: Application
    Filed: May 21, 2020
    Publication date: December 24, 2020
    Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
  • Publication number: 20200402219
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality vehicle condition inspection. An example apparatus disclosed herein includes a location analyzer to determine whether a camera is at an inspection location and directed towards a first vehicle in an inspection profile, the inspection location corresponding to a location of the camera relative to the first vehicle, an interface generator to generate an indication on a display that the camera is at the inspection location, the indication associated with an inspection image being captured, and an image analyzer to compare the inspection image captured at the inspection location with a reference image taken of a reference vehicle of a same type as the first vehicle, and determine a vehicle part condition or a vehicle condition based on the comparison of the inspection image and the reference image.
    Type: Application
    Filed: May 21, 2020
    Publication date: December 24, 2020
    Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
  • Publication number: 20200349052
    Abstract: A computer-implemented method, system and computer program product for representing source code in vector space. The source code is parsed into an abstract syntax tree, which is then traversed to produce a sequence of tokens. Token embeddings may then be constructed for a subset of the sequence of tokens and inputted into an encoder artificial neural network (“encoder”) for encoding. A decoder artificial neural network (“decoder”) is initialized with the final internal cell state of the encoder and run for the same number of steps as the encoder performed. After the decoder has been run and trained to reproduce the inputted token embeddings, the final internal cell state of the encoder is used as the code representation vector, which may be used to detect errors in the source code.
    Type: Application
    Filed: May 3, 2019
    Publication date: November 5, 2020
    Inventors: David Wehr, Eleanor Pence, Halley Fede, Isabella Yamin, Alexander Sobran, Bo Zhang
  • Publication number: 20200104631
    Abstract: A method, system and computer program product for obtaining vector representations of code snippets that capture semantic similarity. A first and a second training set of code snippets are collected, where the code snippets in the first training set implement the same function (representing semantic similarity) and the code snippets in the second training set implement different functions (representing semantic dissimilarity). A vector representation of a first and a second code snippet from either training set is generated using a machine learning model. A loss value is then computed with a loss function that is proportional to the distance between the two vectors when the snippets come from the first training set, and inversely related to that distance when they come from the second. The machine learning model is trained to capture semantic similarity in code snippets by minimizing this loss value.
    Type: Application
    Filed: September 27, 2018
    Publication date: April 2, 2020
    Inventors: Bo Zhang, Alexander Sobran, David Wehr, Halley Fede, Eleanor Pence, Joseph Hughes, John H. Walczyk, III, Guilherme Ferreira