Patents by Inventor Nadine Behrmann

Nadine Behrmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11921817
    Abstract: A computer-implemented unsupervised learning method of training a video feature extractor, which extracts a feature representation from a video sequence. The method uses training data representing multiple training video sequences. From a training video sequence, three subsequences are selected: a current subsequence, a preceding subsequence that comes before it, and a succeeding subsequence that comes after it. The video feature extractor is applied to the current subsequence to extract a current feature representation. A training signal is derived from the joint predictability of the preceding and succeeding subsequences given the current feature representation, and the parameters of the video feature extractor are updated based on this training signal.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: March 5, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Mehdi Noroozi, Nadine Behrmann
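The joint-predictability training signal described in this abstract can be illustrated with an InfoNCE-style contrastive loss, a common choice for such objectives. The sketch below is a minimal NumPy illustration, not the patented method: the batch pairing, the assumption that preceding and succeeding subsequences are already embedded into one joint target vector, and the temperature value are all assumptions for demonstration.

```python
import numpy as np

def info_nce(current_feats, joint_targets, temperature=0.1):
    """Contrastive loss: each current-subsequence embedding should predict
    its own (preceding, succeeding) joint embedding better than the joint
    embeddings of other clips in the batch.

    current_feats: (B, D) embeddings of current subsequences
    joint_targets: (B, D) joint embeddings of (preceding, succeeding) pairs
    """
    # L2-normalize so similarities are cosine similarities
    c = current_feats / np.linalg.norm(current_feats, axis=1, keepdims=True)
    t = joint_targets / np.linalg.norm(joint_targets, axis=1, keepdims=True)
    logits = c @ t.T / temperature                    # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive pair for clip i sits on the diagonal (i, i)
    return -np.mean(np.diag(log_probs))
```

In this formulation the loss is low when each current representation singles out its own past-and-future context, which is one way to realize a "joint predictability" training signal.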
  • Publication number: 20230360399
    Abstract: A method for transforming a frame sequence of video frames into a scene sequence of scenes. In the method: features are extracted from each video frame and transformed into a feature representation in a first working space; a feature interaction of each feature representation with all the other feature representations is ascertained, characterizing a frame prediction; the class of each already-ascertained scene is transformed into a scene representation in a second working space; a scene interaction of each scene representation with all the other scene representations is ascertained; a scene-feature interaction of each scene interaction with each feature interaction is ascertained; and from the scene-feature interactions, at least the class of the next scene in the scene sequence that is most plausible in view of the frame sequence and the already-ascertained scenes is ascertained.
    Type: Application
    Filed: April 27, 2023
    Publication date: November 9, 2023
    Inventors: Nadine Behrmann, Mehdi Noroozi, S. Alireza Golestaneh
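The chain of interactions in this abstract (frame-to-frame, scene-to-scene, then scene-to-frame) maps naturally onto attention operations. The sketch below is a hypothetical NumPy illustration of that reading, not the claimed method: the weight matrices, one-hot scene encoding, and single-head attention are assumptions chosen to keep the example small.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    # plain scaled dot-product attention
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    return softmax(scores, axis=-1) @ values

def predict_next_scene(frame_feats, scene_classes, W_f, W_s, W_out, n_classes):
    """frame_feats: (T, D) per-frame features; scene_classes: class ids of
    the already-ascertained scenes. W_f, W_s, W_out are illustrative
    projection matrices (assumptions, not from the patent)."""
    F = frame_feats @ W_f                       # first working space
    frame_inter = attend(F, F, F)               # feature interactions
    S = np.eye(n_classes)[scene_classes] @ W_s  # second working space
    scene_inter = attend(S, S, S)               # scene interactions
    # scene-feature interactions: scenes query the frame interactions
    fused = attend(scene_inter, frame_inter, frame_inter)
    logits = fused[-1] @ W_out                  # head on the latest scene
    return int(np.argmax(logits))               # most plausible next class
```

Under this reading, the most plausible next scene class falls out of a cross-attention step between scene states and frame states.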
  • Publication number: 20230036743
    Abstract: A method for coding a predefined time sequence of video images in a machine-evaluable representation made up of stationary features and nonstationary features. In the method: at least one function parameterized by trainable parameters is provided, which maps sequences of video images onto representations; from the sequence of video images, N adjoining, non-overlapping short extracts and one long extract containing all N short extracts are selected; using the parameterized function, a representation of the long extract and multiple representations of the short extracts are ascertained; these representations are assessed with a cost function; the parameters of the function are optimized with the goal that the cost function's assessment of representations ascertained in the future is expected to improve; and, using the function parameterized by the finally optimized parameters, the predefined time sequence of video images is mapped onto the sought representation.
    Type: Application
    Filed: July 7, 2022
    Publication date: February 2, 2023
    Inventors: Mehdi Noroozi, Mohsen Fayyaz, Nadine Behrmann
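The extract selection in this abstract (N adjoining, non-overlapping short extracts plus one long extract covering them) and a stationary/nonstationary split can be sketched as follows. This is a toy NumPy illustration under stated assumptions: the encoder, the convention that the first `d_stat` dimensions of a representation are the stationary features, and the squared-error cost are all hypothetical, not the patented cost function.

```python
import numpy as np

def select_extracts(frames, n_short):
    """Split a (T, ...) video into n_short adjoining, non-overlapping
    short extracts and one long extract that spans all of them."""
    T = frames.shape[0]
    step = T // n_short
    shorts = [frames[i * step:(i + 1) * step] for i in range(n_short)]
    long_extract = frames[:n_short * step]
    return shorts, long_extract

def decomposition_loss(encode, shorts, long_extract, d_stat):
    """Toy cost: the stationary part (first d_stat dims, an assumed
    convention) of each short extract's representation should agree with
    the long extract's stationary part; the remaining dims are left free
    to carry the nonstationary content."""
    z_long = encode(long_extract)
    loss = 0.0
    for s in shorts:
        z_s = encode(s)
        loss += np.sum((z_s[:d_stat] - z_long[:d_stat]) ** 2)
    return loss / len(shorts)
```

Minimizing such a cost pushes the shared, time-scale-invariant content into the stationary dimensions, which is one way to motivate the stationary/nonstationary decomposition the abstract describes.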
  • Publication number: 20230025169
    Abstract: A method for training an encoder that maps data samples of measurement data onto machine-evaluable representations. In the method, a set of training samples is provided, and a relation is defined, in the context of a specified application, concerning the degree to which two samples are similar to one another. A function parameterized with trainable parameters is provided that maps samples onto representations. A similarity measure is provided that assigns to samples a similarity of their representations and/or of processing products of these representations. From the set of training samples, at least one query sample is drawn. For this query sample, the following are ascertained: a set, ordered in a ranked order, of positive samples from the set that are similar to the query sample, and a set of negative samples from the set that are no longer similar to the query sample. At least the parameters are optimized.
    Type: Application
    Filed: July 13, 2022
    Publication date: January 26, 2023
    Inventors: David Hoffmann, Mehdi Noroozi, Nadine Behrmann
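The ranked positives described in this abstract suggest a rank-aware contrastive objective, in which each positive is contrasted only against items of strictly worse rank. The sketch below is one hypothetical NumPy realization of that idea, not the claimed training procedure: the cosine similarity measure, the temperature, and the per-rank denominator construction are illustrative assumptions.

```python
import numpy as np

def ranked_contrastive_loss(q, positives, negatives, temperature=0.1):
    """q: (D,) query embedding; positives: (P, D) ordered best-first;
    negatives: (N, D). The positive at rank i competes against every
    item of strictly worse rank: later positives plus all negatives."""
    def sim(a, b):
        # cosine similarity scaled by temperature; b may be (K, D)
        return (a @ b.T) / (np.linalg.norm(a) * np.linalg.norm(b, axis=-1)) / temperature

    loss = 0.0
    for i in range(len(positives)):
        pos = np.exp(sim(q, positives[i:i + 1]))[0]
        if i + 1 < len(positives):
            worse = np.vstack([positives[i + 1:], negatives])
        else:
            worse = negatives
        denom = pos + np.exp(sim(q, worse)).sum()
        loss += -np.log(pos / denom)
    return loss / len(positives)
```

This construction rewards the encoder for reproducing the given ranking: a higher-ranked positive must score above everything ranked below it, not merely above the negatives.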
  • Publication number: 20220129699
    Abstract: A computer-implemented unsupervised learning method of training a video feature extractor, which extracts a feature representation from a video sequence. The method uses training data representing multiple training video sequences. From a training video sequence, three subsequences are selected: a current subsequence, a preceding subsequence that comes before it, and a succeeding subsequence that comes after it. The video feature extractor is applied to the current subsequence to extract a current feature representation. A training signal is derived from the joint predictability of the preceding and succeeding subsequences given the current feature representation, and the parameters of the video feature extractor are updated based on this training signal.
    Type: Application
    Filed: September 28, 2021
    Publication date: April 28, 2022
    Inventors: Mehdi Noroozi, Nadine Behrmann