Patents by Inventor David Wehr
David Wehr has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240410981
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 21, 2024
Publication date: December 12, 2024
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
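The multi-view pipeline this abstract describes can be pictured as two chained stages: a first network segments classes in a perspective (range-image) view, its features are reprojected into a top-down grid, and a second network segments classes and regresses instance geometry there. The sketch below is a minimal, hypothetical PyTorch illustration of that chaining; the layer sizes, the `reproject_to_top_down` helper, and the output heads are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

def reproject_to_top_down(perspective_feats: torch.Tensor) -> torch.Tensor:
    # Hypothetical placeholder: a real system would scatter per-pixel features
    # into a bird's-eye-view grid using LiDAR geometry. Here we just resample.
    return torch.nn.functional.interpolate(perspective_feats, size=(128, 128))

class StageOne(nn.Module):
    """Perspective-view class segmentation (first stage)."""
    def __init__(self, in_ch=5, num_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.seg_head(feats)   # features + per-pixel class logits

class StageTwo(nn.Module):
    """Top-down class segmentation plus instance-geometry regression (second stage)."""
    def __init__(self, in_ch=32, num_classes=4, geom_ch=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)
        self.geom_head = nn.Conv2d(64, geom_ch, 1)  # e.g., per-cell box center/size/orientation

    def forward(self, bev_feats):
        feats = self.backbone(bev_feats)
        return self.seg_head(feats), self.geom_head(feats)

class MultiViewPerception(nn.Module):
    """Chains the two stages: perspective view first, then top-down view."""
    def __init__(self):
        super().__init__()
        self.stage1 = StageOne()
        self.stage2 = StageTwo()

    def forward(self, range_image):
        persp_feats, persp_seg = self.stage1(range_image)
        bev_feats = reproject_to_top_down(persp_feats)
        bev_seg, bev_geom = self.stage2(bev_feats)
        return persp_seg, bev_seg, bev_geom

# Example: one 5-channel LiDAR range image of size 64x1024.
model = MultiViewPerception()
persp_seg, bev_seg, bev_geom = model(torch.randn(1, 5, 64, 1024))
```

In a full system, the top-down segmentation and geometry maps would then be decoded into 2D/3D bounding boxes and class labels for downstream planning, as the abstract notes.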
-
Patent number: 12164059
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: July 15, 2021
Date of Patent: December 10, 2024
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 12080078
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: August 25, 2022
Date of Patent: September 3, 2024
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 12072443
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: July 15, 2021
Date of Patent: August 27, 2024
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20240281988
Abstract: In various examples, perception of landmark shapes may be used for localization in autonomous systems and applications. In some embodiments, a deep neural network (DNN) is used to generate (e.g., per-point) classifications of measured 3D points (e.g., classified LiDAR points), and a representation of the shape of one or more detected landmarks is regressed from the classifications. For each of one or more classes, the classification data may be thresholded to generate a binary mask and/or dilated to generate a densified representation, and the resulting (e.g., dilated, binary) mask may be clustered into connected components that are iteratively fitted to a shape (e.g., a polynomial or Bezier spline for lane lines, a circle for top-down representations of poles or traffic lights), weighted, and merged. As such, the resulting connected components and their fitted shapes may be used to represent detected landmarks and used for localization, navigation, and/or other uses.
Type: Application
Filed: February 17, 2023
Publication date: August 22, 2024
Inventors: Joshua Edward Abbott, Amir Akbarzadeh, Joachim Pehserl, Samuel Ogden, David Wehr, Ke Chen
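As a rough illustration of the post-processing this abstract describes, the snippet below thresholds per-point class scores rasterized into a top-down grid, dilates the binary mask, finds connected components, and fits a simple shape to each component (a polynomial for lane-line-like classes, a circle for pole-like classes). The thresholds, grid size, and shape choices are assumptions for illustration, not the patented method.

```python
import numpy as np
from scipy import ndimage

def fit_landmarks(score_map: np.ndarray, threshold: float = 0.5,
                  pole_like: bool = False, dilate_iters: int = 2):
    """Threshold -> dilate -> connected components -> per-component shape fit."""
    mask = score_map > threshold                                   # binary mask from class scores
    mask = ndimage.binary_dilation(mask, iterations=dilate_iters)  # densify sparse LiDAR hits
    labels, num = ndimage.label(mask)                              # connected components
    shapes = []
    for comp_id in range(1, num + 1):
        ys, xs = np.nonzero(labels == comp_id)
        if pole_like:
            # Crude circle fit: centroid plus mean radius of the component's cells.
            cx, cy = xs.mean(), ys.mean()
            r = np.hypot(xs - cx, ys - cy).mean()
            shapes.append(("circle", (cx, cy, r)))
        else:
            # Lane-line-like: fit a 2nd-order polynomial y = f(x) to the cells.
            coeffs = np.polyfit(xs, ys, deg=2)
            shapes.append(("poly", coeffs))
    return shapes

# Example: a synthetic 256x256 score map containing one diagonal "lane line".
score = np.zeros((256, 256))
for i in range(200):
    score[i, i // 2 + 20] = 0.9
print(fit_landmarks(score))
```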
-
Publication number: 20240280372
Abstract: In various examples, one or more DNNs may be used to detect landmarks (e.g., lane lines) and regress a representation of their shape. A DNN may be used to jointly generate classifications of measured 3D points using one output head (e.g., a classification head) and regress a representation of one or more fitted shapes (e.g., polylines, circles) using a second output head (e.g., a regression head). In some embodiments, multiple DNNs (e.g., a chain of multiple DNNs or multiple stages of a DNN) are used to sequentially generate classifications of measured 3D points and a regressed representation of the shape of one or more detected landmarks. As such, classified landmarks and corresponding fitted shapes may be decoded and used for localization, navigation, and/or other uses.
Type: Application
Filed: February 17, 2023
Publication date: August 22, 2024
Inventors: Joshua Edward Abbott, Amir Akbarzadeh, Joachim Pehserl, Samuel Ogden, David Wehr, Ke Chen
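A compact way to picture the joint classification-and-regression output described here is a shared backbone feeding two heads, one producing per-cell class logits and one producing per-cell shape parameters. The module below is a hypothetical PyTorch sketch under assumed channel counts and an assumed fixed number of shape parameters; it is not the patented network.

```python
import torch
import torch.nn as nn

class LandmarkNet(nn.Module):
    """Shared backbone with a classification head and a shape-regression head."""
    def __init__(self, in_ch=4, num_classes=3, shape_params=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Head 1: per-cell class logits (e.g., lane line, pole, background).
        self.cls_head = nn.Conv2d(64, num_classes, 1)
        # Head 2: per-cell shape parameters (e.g., polyline or circle coefficients).
        self.reg_head = nn.Conv2d(64, shape_params, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.reg_head(feats)

model = LandmarkNet()
cls_logits, shape_params = model(torch.randn(1, 4, 128, 128))
```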
-
Publication number: 20240273919
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: April 26, 2024
Publication date: August 15, 2024
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20240265712
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space through simplifying the 3D LiDAR data to "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data, and two or more range images may be compared to generate depth flow information, and messages may be passed—e.g., using a belief propagation algorithm—to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of an environment around an autonomous machine.
Type: Application
Filed: March 8, 2024
Publication date: August 8, 2024
Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
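The final step of the pipeline this abstract describes, converting 2.5D motion vectors back into 3D scene flow, can be sketched as follows: take each range-image pixel, its depth, and its estimated (dx, dy, depth-flow); unproject both the original and the displaced pixel to 3D; and subtract. The spherical projection model, field-of-view values, and image size below are assumptions for illustration only.

```python
import numpy as np

H, W = 64, 1024                      # assumed LiDAR range-image size
FOV_UP, FOV_DOWN = 3.0, -25.0        # assumed vertical field of view in degrees

def unproject(u, v, r):
    """Map a range-image pixel (u = column, v = row) and range r to a 3D point."""
    azimuth = (0.5 - u / W) * 2.0 * np.pi                   # wraps around horizontally
    elev_span = np.radians(FOV_UP) - np.radians(FOV_DOWN)
    elevation = np.radians(FOV_UP) - (v / H) * elev_span
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

def flow_2p5d_to_3d(u, v, r, du, dv, dr):
    """Convert 2.5D motion (pixel shift plus depth flow) to a 3D scene-flow vector."""
    p0 = unproject(u, v, r)
    p1 = unproject(u + du, v + dv, r + dr)
    return p1 - p0

# Example: one pixel moving two columns sideways and 0.4 m closer between frames.
print(flow_2p5d_to_3d(u=512.0, v=32.0, r=20.0, du=2.0, dv=0.0, dr=-0.4))
```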
-
Patent number: 11954914
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space through simplifying the 3D LiDAR data to "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data, and two or more range images may be compared to generate depth flow information, and messages may be passed—e.g., using a belief propagation algorithm—to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of an environment around an autonomous machine.
Type: Grant
Filed: August 2, 2021
Date of Patent: April 9, 2024
Assignee: NVIDIA Corporation
Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
-
Patent number: 11915493
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: August 25, 2022
Date of Patent: February 27, 2024
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20240029447
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: October 6, 2023
Publication date: January 25, 2024
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20230054759
Abstract: In various examples, an obstacle detector is capable of tracking a velocity state of detected objects or obstacles using LiDAR data. For example, using LiDAR data alone, an iterative closest point (ICP) algorithm may be used to determine a current state of detected objects for a current frame, and a Kalman filter may be used to maintain a tracked state of the one or more objects detected over time. The obstacle detector may be configured to estimate velocity for one or more detected objects, compare the estimated velocity to one or more previous tracked states for previously detected objects, determine that a detected object corresponds to a certain previously detected object, and update the tracked state for the previously detected object with the estimated velocity.
Type: Application
Filed: August 23, 2021
Publication date: February 23, 2023
Inventors: Richard Zachary Robinson, Jens Christian Bo Joergensen, David Wehr, Joachim Pehserl
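To make the tracking loop in this abstract concrete, here is a minimal constant-velocity Kalman filter that could maintain a tracked (position, velocity) state and fold in a new position estimate, such as one produced by ICP alignment of consecutive LiDAR point clouds. The state layout, noise values, and 2D simplification are assumptions for illustration, not the patented tracker.

```python
import numpy as np

class ConstantVelocityKF:
    """Tracks [x, y, vx, vy]; measurements are ICP-estimated positions [x, y]."""
    def __init__(self, x0, y0):
        self.x = np.array([x0, y0, 0.0, 0.0])          # state vector
        self.P = np.eye(4) * 1.0                        # state covariance
        self.Q = np.eye(4) * 0.01                       # process noise (assumed)
        self.R = np.eye(2) * 0.1                        # measurement noise (assumed)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # observe position only

    def predict(self, dt):
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, meas_xy):
        z = np.asarray(meas_xy, dtype=float)
        y = z - self.H @ self.x                         # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    @property
    def velocity(self):
        return self.x[2:]

# Example: an object moving roughly 1 m per 0.1 s along x, observed via ICP positions.
track = ConstantVelocityKF(0.0, 0.0)
for pos in [(1.0, 0.0), (2.0, 0.1), (3.1, 0.1)]:
    track.predict(dt=0.1)
    track.update(pos)
print("estimated velocity:", track.velocity)
```

Associating a new detection with an existing track would then amount to comparing the ICP-derived estimate against each track's predicted state, as the abstract outlines.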
-
Patent number: 11587315
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality measuring of equipment. An example apparatus disclosed herein includes an image comparator to compare camera data with reference information of a reference vehicle part to identify a vehicle part included in the camera data, and an inspection image analyzer to, in response to the image comparator identifying the vehicle part, measure the vehicle part by causing an interface generator to generate an overlay representation of the reference vehicle part on the camera data displayed on a user interface, and determining, based on one or more user inputs to adjust the overlay representation, a measurement corresponding to the vehicle part.
Type: Grant
Filed: May 21, 2020
Date of Patent: February 21, 2023
Assignee: Deere & Company
Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
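The measurement step in this abstract boils down to: start from a reference part whose real-world size is known, let the user adjust its overlay until it lines up with the part seen in the camera image, and read the measurement off that adjustment. The helper below is a hypothetical illustration of that idea; the field names and the linear-scaling assumption are mine, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class ReferencePart:
    name: str
    length_mm: float          # known real-world dimension of the reference part
    overlay_length_px: float  # how long the overlay renders before any user adjustment

def measured_length_mm(ref: ReferencePart, adjusted_overlay_px: float) -> float:
    """Measurement implied by how far the user stretched the overlay to match the part."""
    scale = adjusted_overlay_px / ref.overlay_length_px
    return ref.length_mm * scale

# Example: the user stretches a hypothetical bucket-edge overlay from 480 px to 552 px
# before it lines up with the part in the camera view (a 1.15x adjustment).
bucket = ReferencePart(name="bucket_edge", length_mm=2100.0, overlay_length_px=480.0)
print(measured_length_mm(bucket, adjusted_overlay_px=552.0))   # -> 2415.0 mm
```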
-
Patent number: 11580628
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality vehicle condition inspection. An example apparatus disclosed herein includes a location analyzer to determine whether a camera is at an inspection location and directed towards a first vehicle in an inspection profile, the inspection location corresponding to a location of the camera relative to the first vehicle, an interface generator to generate an indication on a display that the camera is at the inspection location, the indication associated with an inspection image being captured, and an image analyzer to compare the inspection image captured at the inspection location with a reference image taken of a reference vehicle of a same type as the first vehicle, and determine a vehicle part condition or a vehicle condition based on the comparison of the inspection image and the reference image.
Type: Grant
Filed: May 21, 2020
Date of Patent: February 14, 2023
Assignee: Deere & Company
Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
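The comparison described here (an inspection image versus a reference image of the same vehicle type, captured from the same relative camera location) can be pictured as a per-region image difference against a threshold. The snippet below is a deliberately crude, hypothetical stand-in using a mean absolute pixel difference; the region coordinates and threshold are assumptions, and a production system would use a far more robust comparison.

```python
import numpy as np

def part_condition(inspection: np.ndarray, reference: np.ndarray,
                   region: tuple, threshold: float = 12.0) -> str:
    """Compare a part's region in the inspection image against the reference image."""
    y0, y1, x0, x1 = region
    insp = inspection[y0:y1, x0:x1].astype(float)
    ref = reference[y0:y1, x0:x1].astype(float)
    diff = np.abs(insp - ref).mean()          # mean absolute pixel difference
    return "needs attention" if diff > threshold else "ok"

# Example with synthetic grayscale images: damage simulated as a bright patch.
reference = np.full((240, 320), 100, dtype=np.uint8)
inspection = reference.copy()
inspection[60:90, 120:160] = 220               # simulated dent/scratch highlight
print(part_condition(inspection, reference, region=(50, 100, 110, 170)))
```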
-
Publication number: 20230033470
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space through simplifying the 3D LiDAR data to "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data, and two or more range images may be compared to generate depth flow information, and messages may be passed—e.g., using a belief propagation algorithm—to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of an environment around an autonomous machine.
Type: Application
Filed: August 2, 2021
Publication date: February 2, 2023
Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
-
Publication number: 20220415059
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 25, 2022
Publication date: December 29, 2022
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 11532168
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 11334467
Abstract: A computer-implemented method, system and computer program product for representing source code in vector space. The source code is parsed into an abstract syntax tree, which is then traversed to produce a sequence of tokens. Token embeddings may then be constructed for a subset of the sequence of tokens, which are inputted into an encoder artificial neural network ("encoder") for encoding the token embeddings. A decoder artificial neural network ("decoder") is initialized with a final internal cell state of the encoder. The decoder is run for the same number of steps as the encoding performed by the encoder. After running the decoder and completing the training of the decoder to learn the inputted token embeddings, the final internal cell state of the encoder is used as the code representation vector, which may be used to detect errors in the source code.
Type: Grant
Filed: May 3, 2019
Date of Patent: May 17, 2022
Assignee: International Business Machines Corporation
Inventors: David Wehr, Eleanor Pence, Halley Fede, Isabella Yamin, Alexander Sobran, Bo Zhang
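A rough picture of the encoding path in this abstract: parse the source into an abstract syntax tree, linearize it into a token sequence, embed the tokens, run a recurrent encoder, and keep the final cell state as the code vector (in the patent, a decoder trained to reconstruct the sequence provides the training signal). The vocabulary handling, dimensions, and node-type tokenization below are illustrative assumptions, not the patented method.

```python
import ast
import torch
import torch.nn as nn

def ast_tokens(source: str):
    """Linearize the AST into a flat sequence of node-type names (via ast.walk)."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

class CodeEncoder(nn.Module):
    def __init__(self, vocab, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.vocab = {tok: i for i, tok in enumerate(vocab)}
        self.embed = nn.Embedding(len(vocab), embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):
        ids = torch.tensor([[self.vocab[t] for t in tokens if t in self.vocab]])
        _, (h_n, c_n) = self.lstm(self.embed(ids))
        return c_n[-1, 0]          # final cell state used as the code representation vector

source = "def add(a, b):\n    return a + b\n"
tokens = ast_tokens(source)
encoder = CodeEncoder(vocab=sorted(set(tokens)))
code_vector = encoder(tokens)
print(code_vector.shape)           # torch.Size([64])
```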
-
Patent number: 11238306
Abstract: A method, system and computer program product for obtaining vector representations of code snippets capturing semantic similarity. A first and second training set of code snippets are collected, where the first training set of code snippets implements the same function representing semantic similarity and the second training set of code snippets implements a different function representing semantic dissimilarity. Vector representations of a first and a second code snippet from either the first or second training set of code snippets are generated using a machine learning model. A loss value is generated utilizing a loss function that is proportional or inversely related to the distance between the first and second vector representations, depending on whether the first and second code snippets were received from the first or the second training set of code snippets, respectively. The machine learning model is trained to capture the semantic similarity in the code snippets by minimizing the loss value.
Type: Grant
Filed: September 27, 2018
Date of Patent: February 1, 2022
Assignee: International Business Machines Corporation
Inventors: Bo Zhang, Alexander Sobran, David Wehr, Halley Fede, Eleanor Pence, Joseph Hughes, John H. Walczyk, III, Guilherme Ferreira
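The training objective in this abstract can be sketched as a pairwise loss: when the two snippets implement the same function, the loss grows with the distance between their vectors; when they implement different functions, the loss shrinks as that distance grows. The PyTorch sketch below uses a plain Euclidean distance and a simple inverse term for the dissimilar case; the epsilon and the exact functional form are assumptions (a margin-based hinge is a common alternative design choice).

```python
import torch

def pairwise_code_loss(vec_a: torch.Tensor, vec_b: torch.Tensor,
                       same_function: bool, eps: float = 1e-3) -> torch.Tensor:
    """Loss proportional to distance for similar pairs, inverse to it for dissimilar pairs."""
    dist = torch.norm(vec_a - vec_b, p=2)
    if same_function:
        return dist               # pull semantically equivalent snippets together
    return 1.0 / (dist + eps)     # push semantically different snippets apart

# Example with stand-in embeddings for two snippets that implement the same function.
a = torch.randn(64, requires_grad=True)
b = torch.randn(64, requires_grad=True)
loss = pairwise_code_loss(a, b, same_function=True)
loss.backward()                   # gradients would flow into the embedding model during training
```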
-
Publication number: 20210342608
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister