Patents by Inventor Joachim Pehserl

Joachim Pehserl has filed for patents covering the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220277193
    Abstract: An annotation pipeline may be used to produce 2D and/or 3D ground truth data for deep neural networks, such as autonomous or semi-autonomous vehicle perception networks. Initially, sensor data may be captured with different types of sensors and synchronized to align frames of sensor data that represent a similar world state. The aligned frames may be sampled and packaged into a sequence of annotation scenes to be annotated. An annotation project may be decomposed into modular tasks and encoded into a labeling tool, which assigns tasks to labelers and arranges the order of inputs using a wizard that steps through the tasks. During the tasks, each type of sensor data in an annotation scene may be simultaneously presented, and information may be projected across sensor modalities to provide useful contextual information. After all annotation tasks have been completed, the resulting ground truth data may be exported in any suitable format.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Tilman Wekel, Joachim Pehserl, Jacob Meyer, Jake Guza, Anton Mitrokhin, Richard Whitcomb, Marco Scoffier, David Nister, Grant Monroe
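The core idea in this abstract — synchronizing frames from different sensors so that aligned frames represent a similar world state, then sampling them into annotation scenes — can be illustrated with a toy sketch. All names, tolerances, and data shapes below are hypothetical illustrations, not taken from the patent:

```python
# Toy sketch: align frames from multiple sensor streams by timestamp,
# then subsample the aligned frames into annotation "scenes".

def align_frames(streams, tolerance=0.05):
    """Match each reference frame to the nearest frame from every other
    stream, keeping only matches within `tolerance` seconds.

    `streams` maps a sensor name to a sorted list of frame timestamps;
    the first stream listed is treated as the reference clock.
    """
    names = list(streams)
    ref = names[0]
    scenes = []
    for t in streams[ref]:
        scene = {ref: t}
        for other in names[1:]:
            nearest = min(streams[other], key=lambda s: abs(s - t))
            if abs(nearest - t) <= tolerance:
                scene[other] = nearest
        if len(scene) == len(names):  # keep only fully aligned world states
            scenes.append(scene)
    return scenes

def sample_scenes(scenes, every_n=2):
    """Subsample aligned scenes to reduce annotation volume."""
    return scenes[::every_n]
```

A real pipeline would align on hardware trigger signals rather than nearest timestamps, but the keep-only-fully-aligned-frames filtering step is the same shape.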
  • Publication number: 20210342609
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
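The staging described here — a first stage segmenting in a perspective view, then a second stage operating on a top-down projection — can be sketched with stub stages in place of the actual DNNs. Everything below (function names, the grid model, the toy "classifier") is a hypothetical illustration of the data flow only:

```python
# Toy sketch of a two-stage, multi-view pipeline: stage one labels pixels in
# the perspective view; the labeled pixels are projected onto a top-down grid,
# where stage two groups occupied cells into per-cell class decisions.

def stage1_perspective_segmentation(depth_image, labels):
    """Stand-in per-pixel classifier: emit (row, col, depth, class) tuples."""
    return [(r, c, d, labels[r][c])
            for r, row in enumerate(depth_image)
            for c, d in enumerate(row)
            if labels[r][c] is not None]

def project_to_top_down(points, cell_size=1.0):
    """Collapse perspective detections onto a top-down (col, depth) grid."""
    grid = {}
    for r, c, d, cls in points:
        cell = (int(c // cell_size), int(d // cell_size))
        grid.setdefault(cell, []).append(cls)
    return grid

def stage2_instances(grid):
    """Stand-in instance head: majority class per occupied top-down cell."""
    return [{"cell": cell, "class": max(set(classes), key=classes.count)}
            for cell, classes in sorted(grid.items())]
```

The point of the sketch is the chaining: each stage consumes the previous stage's output in a different view of the same 3D scene.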
  • Publication number: 20210342608
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210156960
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: March 31, 2020
    Publication date: May 27, 2021
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
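The preprocessing chain named in this abstract — accumulate RADAR detections, ego-motion-compensate them, and orthographically project them — can be shown with a minimal sketch. The flat 2D detections, offset model, and cell size are hypothetical simplifications:

```python
# Toy sketch: accumulate RADAR detections across frames, shift each frame
# into the current ego frame, and orthographically project everything into
# one top-down occupancy grid that a detection network could consume.

def ego_compensate(detections, ego_offset):
    """Shift one frame's (x, y) detections by the ego displacement since then."""
    dx, dy = ego_offset
    return [(x - dx, y - dy) for x, y in detections]

def accumulate_bev(frames, ego_offsets, cell_size=0.5):
    """Project compensated detections from all frames into one BEV grid."""
    grid = {}
    for detections, offset in zip(frames, ego_offsets):
        for x, y in ego_compensate(detections, offset):
            cell = (int(x // cell_size), int(y // cell_size))
            grid[cell] = grid.get(cell, 0) + 1
    return grid
```

Ego-motion compensation is what lets detections of a stationary obstacle from several time steps pile up in the same grid cell.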
  • Publication number: 20210156963
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: March 31, 2020
    Publication date: May 27, 2021
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
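The filtering rule in this abstract — drop LiDAR labels containing fewer than some threshold number of RADAR detections — is easy to sketch. The box format, field names, and threshold below are hypothetical:

```python
# Toy sketch: propagate LiDAR-derived box labels onto RADAR data, keeping a
# label only if at least `min_hits` RADAR returns fall inside its box.

def points_in_box(points, box):
    """Return the (x, y) points inside an axis-aligned (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = box
    return [(x, y) for x, y in points if x0 <= x <= x1 and y0 <= y <= y1]

def propagate_labels(lidar_labels, radar_points, min_hits=2):
    """Filter LiDAR labels by RADAR support, annotating the hit count."""
    kept = []
    for label in lidar_labels:
        hits = points_in_box(radar_points, label["box"])
        if len(hits) >= min_hits:
            kept.append({**label, "radar_hits": len(hits)})
    return kept
```

Filtering out labels with too little RADAR support keeps the ground truth from demanding detections the RADAR sensor never actually saw.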
  • Publication number: 20210150230
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: June 29, 2020
    Publication date: May 20, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210063578
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
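The last step of this abstract — fusing 2D outputs over a LiDAR range image and projecting them into a 3D point cloud — can be sketched with a simple spherical projection. The projection model and angle tables are assumptions for illustration, not the patent's method:

```python
import math

# Toy sketch: lift a 2D segmentation mask over a LiDAR range image into 3D
# points, assuming known per-column azimuth and per-row elevation angles.

def lift_mask_to_points(range_image, mask, azimuths, elevations):
    """Return (x, y, z) for every masked pixel with a valid (nonzero) range."""
    points = []
    for r, row in enumerate(range_image):
        for c, dist in enumerate(row):
            if mask[r][c] and dist > 0:
                az, el = azimuths[c], elevations[r]
                x = dist * math.cos(el) * math.cos(az)
                y = dist * math.cos(el) * math.sin(az)
                z = dist * math.sin(el)
                points.append((x, y, z))
    return points
```

Because the range image is just a 2D array, any 2D mask the network produces maps directly onto the 3D cloud through this kind of per-pixel lift.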
  • Publication number: 20210026355
    Abstract: A deep neural network(s) (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
    Type: Application
    Filed: July 24, 2020
    Publication date: January 28, 2021
    Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
  • Patent number: 10147235
    Abstract: A system and method are disclosed for use in a virtual reality environment including a head mounted display device and a processing unit. In examples, the processing unit adjusts an amount by which left and right displayed images overlap each other at a given distance, such as the focal distance, from the head mounted display device.
    Type: Grant
    Filed: December 10, 2015
    Date of Patent: December 4, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cynthia S. Bell, Drew Steedly, Yarn Chee Poon, Joachim Pehserl
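The adjustment this patent describes — changing how much the left and right displayed images overlap at a given distance such as the focal distance — follows from simple stereo geometry. The pinhole model, the default IPD, and the focal length in pixels below are illustrative assumptions:

```python
# Toy sketch: how far (in pixels) to shift the left and right display
# images so that content at the focal distance overlaps, using a pinhole
# model with an assumed eye separation (IPD) and display focal length.

def overlap_shift_px(focal_distance_m, ipd_m=0.063, focal_px=800.0):
    """Disparity of a point at `focal_distance_m`; shifting each image by
    half of it, in opposite directions, makes that point overlap."""
    disparity = focal_px * ipd_m / focal_distance_m
    return disparity / 2.0
```

The shift shrinks as the focal distance grows, matching the intuition that distant content already nearly overlaps in the two eyes' views.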
  • Publication number: 20170171538
    Abstract: A system and method are disclosed for use in a virtual reality environment including a head mounted display device and a processing unit. In examples, the processing unit adjusts an amount by which left and right displayed images overlap each other at a given distance, such as the focal distance, from the head mounted display device.
    Type: Application
    Filed: December 10, 2015
    Publication date: June 15, 2017
    Inventors: Cynthia S. Bell, Drew Steedly, Yarn Chee Poon, Joachim Pehserl
  • Patent number: 8428088
    Abstract: Systems and methods are described herein that cause data from asynchronous data sources to be provided with a timestamp that corresponds to a common time base. A trigger board can be used to control synchronized data sources, and can generate timestamps when data is collected by the synchronized data sources. Unsynchronized data sources can generate data independent of the trigger board. System timestamps are generated each time data from the synchronized data source and the unsynchronized data source is received. Values of the system timestamp can be modified, and can be replaced by timestamps that correspond to the time base used by the trigger board.
    Type: Grant
    Filed: May 31, 2011
    Date of Patent: April 23, 2013
    Assignee: Microsoft Corporation
    Inventors: Michael Kroepfl, Gerhard Neuhold, Stefan Bernögger, Martin Josef Ponticelli, Joachim Pehserl, Gur Kimchi, John Charles Curlander
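The re-basing step in this abstract — replacing system timestamps on unsynchronized data with timestamps on the trigger board's time base — can be sketched with a simple offset estimate. Using a mean offset (rather than, say, a drift-aware linear fit) is a simplification for illustration:

```python
# Toy sketch: re-base system-clock timestamps onto the trigger board's time
# base, using the average offset observed on data that carries both clocks.

def estimate_offset(pairs):
    """`pairs` are (system_time, trigger_time) seen on synchronized data."""
    return sum(t - s for s, t in pairs) / len(pairs)

def to_trigger_base(system_times, pairs):
    """Replace system timestamps with trigger-base timestamps."""
    offset = estimate_offset(pairs)
    return [s + offset for s in system_times]
```

Data from synchronized sources supplies the (system, trigger) pairs; unsynchronized sources only ever see the system clock, so their timestamps are the ones that get rewritten.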
  • Patent number: 8244431
Abstract: A system described herein includes a receiver component that receives first velocity data that is indicative of a velocity of a vehicle over a period of time, wherein the first velocity data corresponds to a first sensor. The receiver component also receives second velocity data that is indicative of the velocity of the vehicle over the period of time, wherein the second velocity data corresponds to a second sensor. The system also includes a modifier component that determines a difference between the first velocity data and the second velocity data and outputs at least one final velocity value for the vehicle based at least in part upon the determined difference.
    Type: Grant
    Filed: February 13, 2009
    Date of Patent: August 14, 2012
    Assignee: Microsoft Corporation
    Inventors: Michael Kroepfl, Joachim Pehserl, Joachim Bauer, Stephen L. Lawler, Gur Kimchi, John Curlander
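The fusion rule this abstract outlines — compute the difference between two sensors' velocity data and base the final velocity on it — can be sketched with a toy policy. The tolerance and the agree-then-average strategy are hypothetical; the patent does not prescribe this particular rule:

```python
# Toy sketch: fuse two per-timestep velocity readings. Where the sensors
# agree within a tolerance, average them; otherwise fall back to the
# sensor designated as more trustworthy.

def fuse_velocities(v1, v2, tolerance=0.5, prefer_first=True):
    """Return one fused velocity per timestep from two aligned streams."""
    fused = []
    for a, b in zip(v1, v2):
        if abs(a - b) <= tolerance:
            fused.append((a + b) / 2.0)
        else:
            fused.append(a if prefer_first else b)
    return fused
```

A GPS-derived velocity and a wheel-odometry velocity are the kind of sensor pair this shape of fusion is typically applied to.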
  • Publication number: 20110228091
    Abstract: Systems and methods are described herein that cause data from asynchronous data sources to be provided with a timestamp that corresponds to a common time base. A trigger board can be used to control synchronized data sources, and can generate timestamps when data is collected by the synchronized data sources. Unsynchronized data sources can generate data independent of the trigger board. System timestamps are generated each time data from the synchronized data source and the unsynchronized data source is received. Values of the system timestamp can be modified, and can be replaced by timestamps that correspond to the time base used by the trigger board.
    Type: Application
    Filed: May 31, 2011
    Publication date: September 22, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Michael Kroepfl, Gerhard Neuhold, Stefan Bernögger, Martin Josef Ponticelli, Joachim Pehserl, Gur Kimchi, John Charles Curlander
  • Patent number: 7974314
    Abstract: Systems and methods are described herein that cause data from asynchronous data sources to be provided with a timestamp that corresponds to a common time base. A trigger board can be used to control synchronized data sources, and can generate timestamps when data is collected by the synchronized data sources. Unsynchronized data sources can generate data independent of the trigger board. System timestamps are generated each time data from the synchronized data source and the unsynchronized data source is received. Values of the system timestamp can be modified, and can be replaced by timestamps that correspond to the time base used by the trigger board.
    Type: Grant
    Filed: January 16, 2009
    Date of Patent: July 5, 2011
    Assignee: Microsoft Corporation
    Inventors: Michael Kroepfl, Gerhard Neuhold, Stefan Bernögger, Martin Josef Ponticelli, Joachim Pehserl, Gur Kimchi, John Charles Curlander
  • Publication number: 20100211317
Abstract: A system described herein includes a receiver component that receives first velocity data that is indicative of a velocity of a vehicle over a period of time, wherein the first velocity data corresponds to a first sensor. The receiver component also receives second velocity data that is indicative of the velocity of the vehicle over the period of time, wherein the second velocity data corresponds to a second sensor. The system also includes a modifier component that determines a difference between the first velocity data and the second velocity data and outputs at least one final velocity value for the vehicle based at least in part upon the determined difference.
    Type: Application
    Filed: February 13, 2009
    Publication date: August 19, 2010
    Applicant: Microsoft Corporation
    Inventors: Michael Kroepfl, Joachim Pehserl, Joachim Bauer, Stephen L. Lawler, Gur Kimchi, John Curlander
  • Publication number: 20100183034
    Abstract: Systems and methods are described herein that cause data from asynchronous data sources to be provided with a timestamp that corresponds to a common time base. A trigger board can be used to control synchronized data sources, and can generate timestamps when data is collected by the synchronized data sources. Unsynchronized data sources can generate data independent of the trigger board. System timestamps are generated each time data from the synchronized data source and the unsynchronized data source is received. Values of the system timestamp can be modified, and can be replaced by timestamps that correspond to the time base used by the trigger board.
    Type: Application
    Filed: January 16, 2009
    Publication date: July 22, 2010
    Applicant: Microsoft Corporation
Inventors: Michael Kroepfl, Gerhard Neuhold, Stefan Bernögger, Martin Josef Ponticelli, Joachim Pehserl, Gur Kimchi, John Charles Curlander