Patents by Inventor Robert Stephen DiPietro

Robert Stephen DiPietro has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220101112
    Abstract: Apparatuses, systems, and techniques to use one or more neural networks to generate data labels. In at least one embodiment, one or more neural networks are trained based, at least in part, on one or more labels, pseudo-labels, training data, and modified training data. (See the pseudo-label training sketch after this list.)
    Type: Application
    Filed: September 25, 2020
    Publication date: March 31, 2022
    Inventors: Abel Karl Brown, Robert Stephen DiPietro, Benedikt Dietmar Schifferer
  • Publication number: 20210142491
    Abstract: Navigation instructions are determined using visual data or other sensory information. Individual frames can be extracted from video data, captured from passes through an environment, to generate a sequence of image frames. The frames are processed using a feature extractor to generate frame-specific feature vectors. Image triplets are generated, each including a representative image frame (or corresponding feature vector), a similar image frame adjacent in the sequence, and a disparate image frame separated by a number of frames in the sequence. An embedding network is trained using these triplets. Image data for a current position and a target destination can then be provided as input to the trained embedding model, which outputs a navigation vector indicating a direction and distance over which a vehicle is to be navigated in the physical environment. (See the triplet-embedding training sketch after this list.)
    Type: Application
    Filed: January 19, 2021
    Publication date: May 13, 2021
    Inventors: Abel Karl Brown, Robert Stephen DiPietro
  • Patent number: 10902616
    Abstract: Navigation instructions are determined using visual data or other sensory information. Individual frames can be extracted from video data, captured from passes through an environment, to generate a sequence of image frames. The frames are processed using a feature extractor to generate frame-specific feature vectors. Image triplets are generated, each including a representative image frame (or corresponding feature vector), a similar image frame adjacent in the sequence, and a disparate image frame separated by a number of frames in the sequence. An embedding network is trained using these triplets. Image data for a current position and a target destination can then be provided as input to the trained embedding model, which outputs a navigation vector indicating a direction and distance over which a vehicle is to be navigated in the physical environment.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: January 26, 2021
    Assignee: Nvidia Corporation
    Inventors: Abel Karl Brown, Robert Stephen DiPietro
  • Publication number: 20200051252
    Abstract: Navigation instructions are determined using visual data or other sensory information. Individual frames can be extracted from video data, captured from passes through an environment, to generate a sequence of image frames. The frames are processed using a feature extractor to generate frame-specific feature vectors. Image triplets are generated, each including a representative image frame (or corresponding feature vector), a similar image frame adjacent in the sequence, and a disparate image frame separated by a number of frames in the sequence. An embedding network is trained using these triplets. Image data for a current position and a target destination can then be provided as input to the trained embedding model, which outputs a navigation vector indicating a direction and distance over which a vehicle is to be navigated in the physical environment.
    Type: Application
    Filed: December 11, 2018
    Publication date: February 13, 2020
    Inventors: Abel Karl Brown, Robert Stephen DiPietro
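
Illustrative sketches

The first entry above (publication 20220101112) describes training one or more neural networks using labels, pseudo-labels, training data, and modified training data. The following is a minimal, hypothetical PyTorch sketch of that general pseudo-labeling pattern, not the claimed implementation: the toy data, network, confidence threshold, and noise-based "modification" step are all illustrative assumptions.

```python
# Hypothetical sketch of pseudo-label training, loosely following the idea in
# publication 20220101112. Everything here (data, model, threshold, noise
# "modification") is an illustrative assumption, not the claimed method.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins: 100 labeled and 400 unlabeled 16-dimensional examples, 3 classes.
x_labeled = torch.randn(100, 16)
y_labeled = torch.randint(0, 3, (100,))
x_unlabeled = torch.randn(400, 16)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train(x, y, steps=200):
    # Standard supervised cross-entropy training loop.
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

# 1) Train on the labeled set only.
train(x_labeled, y_labeled)

# 2) Generate pseudo-labels for the unlabeled data, keeping confident predictions.
with torch.no_grad():
    probs = F.softmax(model(x_unlabeled), dim=1)
    conf, pseudo = probs.max(dim=1)
keep = conf > 0.8  # illustrative confidence threshold

# 3) "Modify" the pseudo-labeled inputs; simple noise augmentation stands in
#    for whatever modification the application actually describes.
x_pseudo = x_unlabeled[keep] + 0.05 * torch.randn_like(x_unlabeled[keep])
y_pseudo = pseudo[keep]

# 4) Continue training on labeled plus pseudo-labeled data.
train(torch.cat([x_labeled, x_pseudo]), torch.cat([y_labeled, y_pseudo]))
```

The confidence threshold (0.8 here) is a design choice: a higher threshold keeps fewer but more reliable pseudo-labels, while a lower one adds more data at the cost of noisier labels.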
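
The three navigation entries (patent 10902616 and publications 20210142491 and 20200051252) share one idea: an embedding network is trained on image triplets drawn from a frame sequence, with an anchor frame, a similar frame adjacent in the sequence, and a disparate frame many positions away. The sketch below is a hypothetical PyTorch illustration of that triplet training under stated assumptions; the feature dimensions, sampling offsets, and the use of an embedding difference as a stand-in navigation signal are assumptions for illustration, whereas the patent describes a model that outputs a direction and distance directly.

```python
# Hypothetical sketch of triplet-based embedding training, loosely following
# patent 10902616. Frame features, network sizes, offsets, and the final
# "navigation vector" computed as an embedding difference are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for frame-specific feature vectors extracted from a video
# sequence (e.g., the output of a pretrained CNN feature extractor).
num_frames, feat_dim, embed_dim = 200, 128, 32
features = torch.randn(num_frames, feat_dim)

embedder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))
opt = torch.optim.Adam(embedder.parameters(), lr=1e-3)
triplet_loss = nn.TripletMarginLoss(margin=1.0)

def sample_triplets(batch_size=64, near=1, far=50):
    # Anchor: a random frame; positive: the adjacent frame; negative: a frame
    # at least `far` positions away in the (circular, for simplicity) sequence.
    anchor = torch.randint(0, num_frames - near, (batch_size,))
    positive = anchor + near
    negative = (anchor + torch.randint(far, num_frames - far, (batch_size,))) % num_frames
    return features[anchor], features[positive], features[negative]

for _ in range(500):
    a, p, n = sample_triplets()
    opt.zero_grad()
    loss = triplet_loss(embedder(a), embedder(p), embedder(n))
    loss.backward()
    opt.step()

# At inference time, embed the current frame and the target frame. Treating
# the embedding difference as a navigation signal is an illustrative
# assumption; the patent describes outputting a direction and distance.
with torch.no_grad():
    current, target = features[0], features[150]
    nav_vector = embedder(target) - embedder(current)
print(nav_vector.shape)  # torch.Size([32])
```

Using an adjacent frame as the positive and a far-away frame as the negative encourages embeddings whose distances reflect how far apart two views lie along the traversed path, which is what makes the learned embedding useful as a navigation signal.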