Patents by Inventor Anna Khoreva
Anna Khoreva has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240135515
Abstract: A computer-implemented method of processing digital image data. The method includes: determining, by an encoder configured to map a first digital image to an extended latent space associated with a generator of a generative adversarial network (GAN) system, a noise prediction associated with the first digital image; and determining, by the generator of the GAN system, at least one further digital image based on the noise prediction associated with the first digital image and a plurality of latent variables associated with the extended latent space.
Type: Application
Filed: October 12, 2023
Publication date: April 25, 2024
Inventors: Yumeng Li, Anna Khoreva, Dan Zhang
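The two-step inversion described above can be sketched with toy stand-ins. The dimensions, the linear encoder/generator pair, and the residual-style noise prediction below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 4 generator layers, each fed its own latent vector
# from the extended latent space W+ (an assumption for illustration).
n_layers, latent_dim, img_dim = 4, 8, 16

def encoder(image):
    """Toy encoder: maps an image to W+ latents plus a noise prediction."""
    w_plus = np.stack([np.tanh(image[:latent_dim] + i) for i in range(n_layers)])
    noise = image - image.mean()   # residual detail the latents do not capture
    return w_plus, noise

def generator(w_plus, noise):
    """Toy generator: combines per-layer latents, then injects the noise."""
    base = np.zeros(img_dim)
    for w in w_plus:
        base[:latent_dim] += w / n_layers
    return base + noise

first_image = rng.normal(size=img_dim)
w_plus, noise_pred = encoder(first_image)
further_image = generator(w_plus, noise_pred)
print(further_image.shape)
```

The noise prediction lets the generator reproduce per-pixel detail that the latent variables alone cannot encode.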
-
Publication number: 20240135699
Abstract: A computer-implemented method for training an encoder. The encoder is configured for determining a latent representation of an image. Training the encoder includes: determining a latent representation and a noise image by providing a training image to the encoder, wherein the encoder is configured for determining a latent representation and a noise image for a provided image; masking out parts of the noise image, thereby determining a masked noise image; determining a predicted image by providing the latent representation and the masked noise image to a generator of a generative adversarial network; training the encoder by adapting parameters of the encoder based on a loss value, wherein the loss value characterizes a difference between the predicted image and the training image.
Type: Application
Filed: October 11, 2023
Publication date: April 25, 2024
Inventors: Yumeng Li, Anna Khoreva, Dan Zhang
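The masked-noise training step might be sketched as follows; the toy encoder/generator pair and the mask ratio are assumptions (a real system would use deep networks and update the encoder's parameters from this loss):

```python
import numpy as np

rng = np.random.default_rng(1)
training_image = rng.normal(size=(8, 8))

def encoder(x):
    latent = x.mean(axis=1)         # toy latent representation (one value per row)
    noise = x - latent[:, None]     # toy noise image: what the latent misses
    return latent, noise

def generator(latent, noise):
    return latent[:, None] + noise  # toy generator inverting the encoder

latent, noise = encoder(training_image)
mask = rng.random(noise.shape) > 0.5          # mask out roughly half the noise
masked_noise = np.where(mask, noise, 0.0)
predicted = generator(latent, masked_noise)

# Loss value: difference between the predicted and the training image; the
# encoder would be trained by adapting its parameters to reduce this.
loss = float(np.mean((predicted - training_image) ** 2))
print(loss > 0.0)
```

Masking forces the latent representation to carry image content rather than letting the noise image memorize it.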
-
Patent number: 11961275
Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes a likelihood of the first input image to occur. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output, wherein the output is determined by providing the second input image to the normalizing flow and providing an output of the normalizing flow as output; determining a second density value based on the output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient.
Type: Grant
Filed: August 16, 2021
Date of Patent: April 16, 2024
Assignee: ROBERT BOSCH GMBH
Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
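The density-and-update loop can be sketched with a single invertible linear layer standing in for the convolutional layer. Note this toy uses a finite-difference ordinary gradient as a stand-in for the patent's natural gradient (a natural gradient would additionally precondition with the inverse Fisher information), and all sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
W = np.eye(d) + 0.1 * rng.normal(size=(d, d))  # invertible toy "conv" weights
x = rng.normal(size=d)                         # second input image, flattened

def log_density(W, x):
    """Change of variables: log p(x) = log N(Wx; 0, I) + log|det W|."""
    z = W @ x
    log_pz = -0.5 * (z @ z + d * np.log(2 * np.pi))
    return log_pz + np.log(abs(np.linalg.det(W)))

# Finite-difference gradient of the density value w.r.t. the weights.
eps = 1e-6
grad = np.zeros_like(W)
for i in range(d):
    for j in range(d):
        Wp = W.copy()
        Wp[i, j] += eps
        grad[i, j] = (log_density(Wp, x) - log_density(W, x)) / eps

W_new = W + 1e-2 * grad  # adapt the weights (ascent on the log-likelihood)
print(log_density(W_new, x) > log_density(W, x))
```

A small step along the gradient increases the predicted density of the input, which is the training signal the abstract describes.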
-
Publication number: 20230386046
Abstract: A device and computer-implemented method for determining pixels of a synthetic image. The method comprises providing a generator that is configured to determine an output from a first input comprising a label map and a first latent code, wherein the label map comprises a mapping of at least one class to at least one of the pixels, wherein the method comprises providing the label map and a latent code, wherein the latent code comprises input data points in a latent space, providing a first direction for moving input data points in the latent space, determining the first latent code depending on at least one input data point of the latent code that is moved in the first direction, determining the synthetic image depending on an output of the generator for the first input.
Type: Application
Filed: May 4, 2023
Publication date: November 30, 2023
Inventors: Edgar Schoenfeld, Anna Khoreva, Julio Borges
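Moving a latent code along a direction before generation can be sketched as below; the toy generator and all dimensions are illustrative assumptions, not the patented model:

```python
import numpy as np

rng = np.random.default_rng(3)
latent = rng.normal(size=(4, 2))        # input data points in a toy latent space
direction = np.array([1.0, 0.0])        # a provided direction for moving points

def generator(label_map, latent_code):
    """Toy generator: per-pixel class offset plus global latent content."""
    return label_map[..., None] + latent_code.mean(axis=0)

label_map = np.array([[0.0, 1.0], [1.0, 1.0]])  # one class per pixel
moved = latent + 2.0 * direction                # first latent code: moved points
synthetic = generator(label_map, moved)
baseline = generator(label_map, latent)
print(synthetic.shape)
```

Because the label map is held fixed, the shift changes only the appearance controlled by the latent code, not the semantic layout.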
-
Publication number: 20230386004
Abstract: A device and method for evaluating a control of a generator for determining pixels of a synthetic image. The generator determining pixels of the synthetic image from a first input comprising a label map and a first latent code. The method includes providing the label map and latent code which includes input data points in a latent space; providing the control including a set of directions for moving the latent code in the latent space, determining the first latent code depending on at least one input data point of the latent code that is moved in a first direction which is selected from the set of directions, determining a distance between at least one pair of synthetic images generated by the generator for different first inputs which comprise the label map and vary by the first direction that is selected for determining the first latent code from the latent code.
Type: Application
Filed: May 5, 2023
Publication date: November 30, 2023
Inventors: Edgar Schoenfeld, Anna Khoreva, Julio Borges
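Scoring a set of directions by the image distance each one induces might look like this toy sketch (the generator and the distance metric are assumptions; the patent does not specify them in the abstract):

```python
import numpy as np

rng = np.random.default_rng(4)
latent = rng.normal(size=(4, 2))
directions = [np.array([1.0, 0.0]), np.array([0.0, 0.5])]  # the control's set

def generator(label_map, latent_code):
    return label_map[..., None] + latent_code.mean(axis=0)  # toy generator

label_map = np.ones((2, 2))
distances = []
for d in directions:
    img_a = generator(label_map, latent)        # first input: unmoved latent
    img_b = generator(label_map, latent + d)    # same label map, moved latent
    distances.append(float(np.abs(img_b - img_a).mean()))

# A larger distance means the direction changes the image more; directions
# scoring near zero would make the control ineffective.
print(distances)
```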
-
Patent number: 11804034
Abstract: A computer-implemented method of training a machine learnable function, such as an image classifier or image feature extractor. When applying such machine learnable functions in autonomous driving and similar application areas, generalizability may be important. To improve generalizability, the machine learnable function is rewarded for responding predictably at a layer of the machine learnable function to a set of differences between input observations. This is done by means of a regularization objective included in the objective function used to train the machine learnable function. The regularization objective rewards a mutual statistical dependence between representations of input observations at the given layer, given a difference label indicating a difference between the input observations.
Type: Grant
Filed: April 16, 2021
Date of Patent: October 31, 2023
Assignee: ROBERT BOSCH GMBH
Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
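A minimal sketch of rewarding statistical dependence between layer representations of paired inputs; the correlation measure and the toy layer are illustrative assumptions (the patent does not commit to a specific dependence measure in the abstract):

```python
import numpy as np

rng = np.random.default_rng(5)

def layer(x):
    return np.tanh(x)          # toy intermediate layer of the learnable function

x = rng.normal(size=100)       # a batch of input observations
x_diff = x + 0.3               # paired observations differing by a known change
rep_a, rep_b = layer(x), layer(x_diff)

# Toy stand-in for the regularizer: reward statistical dependence between the
# paired representations, here via their Pearson correlation.
regularizer = float(np.corrcoef(rep_a, rep_b)[0, 1])
print(regularizer > 0.9)       # highly dependent -> predictable response
```

Maximizing such a term alongside the task loss pushes the layer to respond consistently to the labeled difference.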
-
Patent number: 11775818
Abstract: A training system for training a generator neural network arranged to transform measured sensor data into generated sensor data. The generator network is arranged to receive as input sensor data and a transformation goal selected from a plurality of transformation goals and is arranged to transform the sensor data according to the transformation goal.
Type: Grant
Filed: May 5, 2020
Date of Patent: October 3, 2023
Assignee: ROBERT BOSCH GMBH
Inventors: Anna Khoreva, Dan Zhang
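The goal-conditioned interface can be sketched as below; the goal names and the additive transformation are placeholders (the abstract does not name the actual transformation goals):

```python
import numpy as np

# Hypothetical transformation goals, chosen only for illustration.
GOALS = {"brighten": 0.5, "darken": -0.5}

def generator(sensor_data, goal):
    """Toy conditional generator: transforms the data according to the goal."""
    return sensor_data + GOALS[goal]

measured = np.zeros(4)              # stand-in for measured sensor data
generated = generator(measured, "brighten")
print(generated.tolist())
```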
-
Publication number: 20230267653
Abstract: A method for generating images from a semantic map, which assigns to each pixel of the images a semantic meaning of an object to which this pixel belongs. The semantic map is provided as a map tensor comprising channels, each of which indicates all the pixels of the images to be generated to which the semantic map assigns a specific semantic meaning; a set of variable pixels of the images to be generated is provided, which are to vary from one image to the next; using values taken from a random distribution, a noise tensor with channels is generated, those values of the noise tensor which relate to the set of variable pixels being reused for each image to be generated; the channels of the map tensor are merged with the channels of the noise tensor to yield an input tensor, which is mapped by a trained generator onto an image.
Type: Application
Filed: August 20, 2021
Publication date: August 24, 2023
Inventors: Anna Khoreva, Edgar Schoenfeld, Vadim Sushko
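Building the merged input tensor might look like this toy sketch. The abstract's exact noise-reuse semantics are ambiguous, so this sketch assumes noise is redrawn at the variable pixels and shared elsewhere; sizes and the one-hot encoding are also assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
H, W, n_classes, n_noise = 4, 4, 3, 2

# One-hot map tensor: one channel per semantic class.
labels = rng.integers(0, n_classes, size=(H, W))
map_tensor = np.eye(n_classes)[labels].transpose(2, 0, 1)

variable = np.zeros((H, W), dtype=bool)
variable[:, :2] = True                        # pixels meant to vary between images

shared_noise = rng.normal(size=(n_noise, H, W))

def make_input():
    noise = shared_noise.copy()
    # Assumption: redraw noise at the variable pixels, reuse it elsewhere.
    noise[:, variable] = rng.normal(size=(n_noise, int(variable.sum())))
    return np.concatenate([map_tensor, noise], axis=0)   # merge the channels

inp_a, inp_b = make_input(), make_input()
print(inp_a.shape)   # channels = n_classes + n_noise
```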
-
Publication number: 20230177809
Abstract: The invention relates to a method (100) for training a generator (1) for images (3) from a semantic map (2, 5a) that assigns each pixel of the image (3) a semantic meaning (4) of an object to which that pixel belongs, wherein: a mixed image (6) is generated (140) from at least one image (3) generated by the generator (1) and at least one determined actual training image (5), in which mixed image a first genuine subset (6a) of pixels is occupied by relevant corresponding pixel values of the image (3) generated by the generator (1) and the remaining genuine subset (6b) of pixels is occupied by relevant corresponding pixel values of the actual training image (5); and the images (3) generated by the generator (1), the at least one actual training image (5), and at least one mixed image (6), which belong to the same semantic training map (5a), are supplied (150) to a discriminator (7), which is configured to distinguish images (3) generated by the generator (1) from actual images (5) of the scenery predefined by …
Type: Application
Filed: August 20, 2021
Publication date: June 8, 2023
Inventors: Anna Khoreva, Edgar Schoenfeld, Vadim Sushko, Dan Zhang
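The mixed-image construction is straightforward to sketch: one subset of pixels comes from the generated image, the remaining subset from the real training image (the random mask below is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
generated = rng.normal(size=(4, 4))   # image from the generator
real = rng.normal(size=(4, 4))        # actual training image, same semantic map

# First subset of pixels taken from the generated image, the rest from the
# real one; the discriminator is then also shown this mixed image.
subset = rng.random((4, 4)) > 0.5
mixed = np.where(subset, generated, real)

consistent = np.allclose(mixed[subset], generated[subset]) and \
             np.allclose(mixed[~subset], real[~subset])
print(consistent)
```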
-
Publication number: 20230134062
Abstract: A method for training a generator for images from a semantic map that assigns to each pixel of the image a semantic meaning of an object to which this pixel belongs. The images generated by the generator and the at least one real training image that belong to the same semantic training map are supplied to a discriminator. The discriminator ascertains a semantic segmentation of the image assigned to it, the segmentation assigning a semantic meaning to each pixel of this image. From the semantic segmentation ascertained by the discriminator, it is evaluated whether the image supplied to the discriminator is a generated image or a real training image.
Type: Application
Filed: August 20, 2021
Publication date: May 4, 2023
Inventors: Anna Khoreva, Edgar Schoenfeld, Vadim Sushko, Dan Zhang
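A segmenting discriminator can be sketched as a per-pixel classifier over the semantic classes plus one extra "generated" class; this N+1-class encoding is an assumption for illustration:

```python
import numpy as np

n_classes = 3  # semantic classes; channel n_classes marks "generated" pixels

def discriminator(logits):
    """Per-pixel argmax: a semantic segmentation of the supplied image."""
    return logits.argmax(axis=0)

H, W = 2, 2
logits_real = np.zeros((n_classes + 1, H, W)); logits_real[1] = 5.0
logits_fake = np.zeros((n_classes + 1, H, W)); logits_fake[n_classes] = 5.0

seg_real = discriminator(logits_real)   # pixels labeled with a semantic class
seg_fake = discriminator(logits_fake)   # pixels labeled as generated
print(bool((seg_fake == n_classes).all()))
```

An image whose pixels are segmented into the extra class is judged generated; pixels falling into semantic classes are judged real.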
-
Publication number: 20230091396
Abstract: A computer-implemented method for training a first machine learning system which is configured to generate an output characterizing a label map of an image. The method includes: providing first and second inputs, the first input characterizing a binary vector characterizing respective presences or absences of classes from a plurality of classes, and the second input characterizing a randomly drawn value; determining, by a first generator, an output based on the first and second inputs, the output characterizing a first label map, wherein the first label map characterizes probabilities for the classes from the plurality of classes; determining a representation of the first label map using a global pooling operation; training the first machine learning system based on a loss function, wherein the loss function characterizes an F1 loss, wherein the F1 loss characterizes a difference between the first input and the representation of the first label map.
Type: Application
Filed: September 13, 2022
Publication date: March 23, 2023
Inventors: Anna Khoreva, Edgar Schoenfeld
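The pooled-presence F1 loss can be sketched directly; the max-pooling choice and the soft F1 formula below are common conventions assumed for illustration:

```python
import numpy as np

n_classes = 4
present = np.array([1.0, 0.0, 1.0, 0.0])   # binary vector: requested classes

# First label map: per-class probabilities over a 2x2 image (toy values).
label_map = np.zeros((n_classes, 2, 2))
label_map[0] = 0.9
label_map[2] = 0.8

pooled = label_map.max(axis=(1, 2))        # global pooling -> class presences

# Soft F1-style loss between the requested and the pooled presences.
tp = (present * pooled).sum()
fp = ((1 - present) * pooled).sum()
fn = (present * (1 - pooled)).sum()
f1_loss = 1.0 - 2 * tp / (2 * tp + fp + fn)
print(round(float(f1_loss), 3))
```

The loss is small here because the label map indeed contains the two requested classes and none of the others.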
-
Publication number: 20230031755
Abstract: A generative adversarial network. The generative adversarial network includes: a generator configured for generating an image and a corresponding label map; a discriminator configured for determining a classification of a provided image and a provided label map, wherein the classification characterizes whether the provided image and the provided label map have been generated by the generator or not and determining the classification comprises the steps of: determining a first feature map of the provided image; masking the first feature map according to the provided label map thereby determining a masked feature map; globally pooling the masked feature map thereby determining a feature representation of the provided image masked by the provided label map; determining a classification of the image based on the feature representation.
Type: Application
Filed: July 19, 2022
Publication date: February 2, 2023
Inventors: Anna Khoreva, Vadim Sushko, Dan Zhang
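The mask-then-pool step inside the discriminator can be sketched as below; the feature sizes, the average-pooling choice, and the linear head are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
C, H, W = 5, 4, 4
feature_map = rng.normal(size=(C, H, W))   # first feature map of the image

class_mask = np.zeros((H, W), dtype=bool)
class_mask[:2] = True                       # pixels of one class in the label map

masked = np.where(class_mask, feature_map, 0.0)        # mask the feature map
pooled = masked.sum(axis=(1, 2)) / class_mask.sum()    # global average pooling

# A toy linear head would then classify this per-class feature representation
# as generated or not.
score = float(pooled @ rng.normal(size=C))
print(pooled.shape)
```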
-
Publication number: 20220262106
Abstract: A computer-implemented method for training a generative adversarial network. A generator of the generative adversarial network is configured to generate at least one image based on at least one input value. Training the generative adversarial network includes maximizing a loss function that characterizes a difference between a first image determined by the generator for at least one first input value and a third image determined by the generator for at least one second input value.
Type: Application
Filed: January 31, 2022
Publication date: August 18, 2022
Inventors: Anna Khoreva, Vadim Sushko
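The maximized difference term can be sketched in a few lines; the toy generator and the mean-absolute-difference metric are assumptions for illustration:

```python
import numpy as np

def generator(z):
    return np.outer(z, z)    # toy generator mapping an input value to an image

z1 = np.array([1.0, 0.0])    # first input value
z2 = np.array([0.0, 1.0])    # second input value
img1, img2 = generator(z1), generator(z2)

# Diversity objective: maximize image difference relative to input difference,
# discouraging the generator from collapsing distinct inputs to one output.
diversity = float(np.abs(img1 - img2).mean() / np.abs(z1 - z2).mean())
print(diversity)
```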
-
Publication number: 20220253702
Abstract: A computer-implemented method for training a machine learning system, which includes a generator configured to generate at least one image. The method includes: generating, by the generator, a first image based on at least one randomly drawn value; determining, by a discriminator of the machine learning system, a first output characterizing two classifications of the first image and determining, by the discriminator, a second output characterizing two classifications of a provided second image; training the discriminator such that the content value and layout value in the first output characterize a classification into a first class and such that the content value and layout value in the second output characterize a classification into a second class; and training the generator such that the content value and layout value in the first output characterize a classification into the second class.
Type: Application
Filed: January 25, 2022
Publication date: August 11, 2022
Inventors: Anna Khoreva, Vadim Sushko
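The two-output (content value and layout value) discriminator interface might be sketched like this; the specific statistics below are placeholders, chosen only to show one global head and one spatial head:

```python
import numpy as np

def discriminator(image):
    """Toy two-head discriminator: a global (content) statistic and a
    spatial (layout) statistic, each later read as a per-class score."""
    content = float(image.mean())
    layout = float(np.abs(np.diff(image, axis=0)).mean())
    return content, layout

real_like = np.ones((4, 4))
fake_like = np.zeros((4, 4))
print(discriminator(real_like), discriminator(fake_like))
```

Training would push both values toward the "real" class for real images and the "fake" class for generated ones, while the generator is trained to flip the classification of its own images.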
-
Publication number: 20220101074
Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow is configured to determine a first output signal characterizing a likelihood or a log-likelihood of an input signal. The normalizing flow includes at least one first layer which includes trainable parameters. A layer input to the first layer is based on the input signal and the first output signal is based on a layer output of the first layer. The training includes: determining at least one training input signal; determining a training output signal for each training input signal using the normalizing flow; determining a first loss value which is based on a likelihood or a log-likelihood of the at least one determined training output signal with respect to a predefined probability distribution; determining an approximation of a gradient of the trainable parameters; updating the trainable parameters of the first layer based on the approximation of the gradient.
Type: Application
Filed: September 20, 2021
Publication date: March 31, 2022
Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Patrick Forre, Priyank Jaini
-
Publication number: 20220076044
Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes a likelihood of the first input image to occur. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output, wherein the output is determined by providing the second input image to the normalizing flow and providing an output of the normalizing flow as output; determining a second density value based on the output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient.
Type: Application
Filed: August 16, 2021
Publication date: March 10, 2022
Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
-
Publication number: 20220076119
Abstract: A device and a method of training a generative neural network. The method includes: generating an edge image using an edge detection applied to a digital image, the edge image comprising a plurality of edge pixels determined as representing edges of one or more digital objects in the digital image; selecting edge-pixels from the plurality of edge pixels; providing a segmentation image using the digital image, the segmentation image comprising a plurality of first pixels, the positions of the first pixels corresponding to the positions of the selected edge-pixels; selecting one or more second pixels for each first pixel in the segmentation image; generating a distorted segmentation image using a two-dimensional distortion applied to the segmentation image; and training the generative neural network using the distorted segmentation image as input image to estimate the digital image.
Type: Application
Filed: September 1, 2021
Publication date: March 10, 2022
Inventors: Anna Khoreva, Prateek Katiyar
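The edge-detect, subsample, and distort pipeline can be sketched on a toy image; the gradient-based edge detector, the every-second-pixel selection, and the one-pixel shift used as the 2-D distortion are all illustrative assumptions:

```python
import numpy as np

digital_image = np.zeros((6, 6))
digital_image[:, 3:] = 1.0                 # toy image with one vertical edge

# Simple edge detection: mark pixels where the horizontal gradient is nonzero.
edges = np.zeros_like(digital_image, dtype=bool)
edges[:, 1:] = np.abs(np.diff(digital_image, axis=1)) > 0

ys, xs = np.nonzero(edges)
segmentation = np.zeros_like(digital_image)
segmentation[ys[::2], xs[::2]] = 1.0       # keep every second edge pixel

# Toy 2-D distortion: shift the segmentation image by one pixel.
distorted = np.roll(segmentation, shift=1, axis=1)
print(int(edges.sum()), int(segmentation.sum()), int(distorted.sum()))
```

The generative network would then be trained to reconstruct the original digital image from this distorted segmentation input.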
-
Publication number: 20210357750
Abstract: A system and method are provided for classifying objects in spatial data using a machine learned model, as well as a system and method for training the machine learned model. The machine learned model may comprise a content sensitive classifier, a location sensitive classifier and at least one outlier detector. Both classifiers may jointly distinguish between objects in spatial data being in-distribution or marginal-out-of-distribution. The outlier detection part may be trained on inlier examples from the training data, while the presence of actual outliers in the input data of the machine learnable model may be mimicked in the feature space of the machine learnable model during training. The combination of these parts may provide a more robust classification of objects in spatial data with respect to outliers, without having to increase the size of the training data.
Type: Application
Filed: April 19, 2021
Publication date: November 18, 2021
Inventors: Chaithanya Kumar Mummadi, Anna Khoreva, Kaspar Sakmann, Kilian Rambach, Piyapat Saranrittichai, Volker Fischer
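Mimicking outliers in feature space without new training data might be sketched like this; the constant-offset perturbation and the distance-based detector are simple stand-ins for the idea, not the patented construction:

```python
import numpy as np

rng = np.random.default_rng(9)
inlier_feats = rng.normal(size=(200, 2))     # feature-space inlier examples

# Mimic outliers in feature space during training by pushing inlier features
# far outside the inlier cloud.
mimicked = inlier_feats[:20] + 10.0

center = inlier_feats.mean(axis=0)
radius = float(np.linalg.norm(inlier_feats - center, axis=1).max())

def is_outlier(f):
    return float(np.linalg.norm(f - center)) > radius   # toy distance detector

flags = [is_outlier(f) for f in mimicked]
print(all(flags))
```

The detector never sees real outliers, yet the mimicked ones give it a decision boundary to train against.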
-
Publication number: 20210350182
Abstract: A computer-implemented method of training a machine learnable function, such as an image classifier or image feature extractor. When applying such machine learnable functions in autonomous driving and similar application areas, generalizability may be important. To improve generalizability, the machine learnable function is rewarded for responding predictably at a layer of the machine learnable function to a set of differences between input observations. This is done by means of a regularization objective included in the objective function used to train the machine learnable function. The regularization objective rewards a mutual statistical dependence between representations of input observations at the given layer, given a difference label indicating a difference between the input observations.
Type: Application
Filed: April 16, 2021
Publication date: November 11, 2021
Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
-
Publication number: 20210287093
Abstract: A method for training a neural network. The neural network comprises a first layer which includes a plurality of filters to provide a first layer output comprising a plurality of feature maps. Training of the classifier includes: receiving, by a preceding layer, a first layer input in the first layer, wherein the first layer input is based on the input signal; determining the first layer output based on the first layer input and a plurality of parameters of the first layer; determining a first layer loss value based on the first layer output, wherein the first layer loss value characterizes a degree of dependency between the feature maps, the first layer loss value being obtained in an unsupervised fashion; and training the neural network. The training includes an adaption of the parameters of the first layer, the adaption being based on the first layer loss value.
Type: Application
Filed: February 19, 2021
Publication date: September 16, 2021
Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Max Welling, Priyank Jaini
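The unsupervised per-layer loss can be sketched by measuring dependency between feature maps directly; the linear toy layer and the correlation-based dependency measure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)
layer_input = rng.normal(size=(100, 4))    # first-layer input batch
weights = rng.normal(size=(4, 3))          # parameters of the first layer
feature_maps = layer_input @ weights       # three toy "feature maps"

# Unsupervised first-layer loss: degree of dependency between feature maps,
# here the mean absolute off-diagonal correlation.
corr = np.corrcoef(feature_maps.T)
off_diag = corr[~np.eye(3, dtype=bool)]
layer_loss = float(np.abs(off_diag).mean())
print(0.0 <= layer_loss <= 1.0)
```

Adapting the layer's parameters to reduce this value pushes the filters toward statistically independent feature maps, with no labels required.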