Patents by Inventor Shabab BAZRAFKAN

Shabab BAZRAFKAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11797864
    Abstract: Systems and methods for training a conditional generator model are described. The method receives a sample and determines a discriminator loss for it, based on the discriminator's ability to tell whether the sample was generated by the conditional generator model or is a ground-truth sample. The method then determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: October 24, 2023
    Inventors: Shabab Bazrafkan, Peter Corcoran
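The aggregate update in the abstract above can be sketched numerically. This is a minimal numpy illustration, not the patented method: the non-saturating adversarial form, the MSE secondary loss, and the weighting factor are all assumptions for the example.

```python
import numpy as np

def discriminator_loss(d_score_generated: float) -> float:
    # Generator-side adversarial loss: -log D(G(z)), where d_score_generated
    # is the discriminator's probability that the sample is ground truth
    # rather than generated.
    return -np.log(d_score_generated + 1e-12)

def secondary_loss(generated: np.ndarray, target: np.ndarray) -> float:
    # Hypothetical secondary loss: mean squared error against the target.
    return float(np.mean((generated - target) ** 2))

def aggregate_loss(generated, target, d_score, weight=1.0):
    # Update signal for the conditional generator: a weighted sum of the
    # adversarial (discriminator) loss and the secondary loss.
    return discriminator_loss(d_score) + weight * secondary_loss(generated, target)
```

A perfect generated sample (fooling the discriminator and matching the target) drives the aggregate toward zero; either loss alone rising pushes the update.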
  • Publication number: 20230334688
    Abstract: Computing the height of a building is performed by inputting a pair of two-dimensional (2-D) aerial images of a city along with their metadata. Using the metadata, a three-dimensional (3-D) vector from each image toward the location of the camera when that image was taken is determined. Pairs of corresponding image points are then computed; in each pair, an image point in one image identifies the same physical point on the building as its counterpart in the second image. Next, the images are superimposed and, for each pair of image points, the intersection of the 3-D vector of the first image originating at the first image point with the 3-D vector of the second image originating at the second image point is determined. Each intersection is a 3-D position, and the height is determined from the median of these 3-D positions.
    Type: Application
    Filed: February 8, 2023
    Publication date: October 19, 2023
    Inventors: Christian POGLITSCH, Thomas HOLZMANN, Stefan HABENSCHUSS, Christian PIRCHHEIM, Shabab BAZRAFKAN
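The ray-intersection-and-median step described above can be sketched as follows, using the standard closest-approach construction for two 3-D rays (rays from real image pairs rarely meet exactly); the function names and the zero ground baseline are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    # Approximate intersection of two 3-D rays as the midpoint of their
    # segment of closest approach.
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # parallel rays: no usable intersection
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

def building_height(point_pairs, ground_z=0.0):
    # point_pairs: iterable of (p1, d1, p2, d2) tuples, one tuple per pair
    # of corresponding image points; the height is the median z of the
    # intersections, relative to an assumed ground elevation.
    zs = []
    for p1, d1, p2, d2 in point_pairs:
        hit = ray_intersection(np.asarray(p1, float), np.asarray(d1, float),
                               np.asarray(p2, float), np.asarray(d2, float))
        if hit is not None:
            zs.append(hit[2])
    return float(np.median(zs)) - ground_z
```

Taking the median rather than the mean makes the estimate robust to mismatched point pairs, which produce outlier intersections.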
  • Patent number: 11776148
    Abstract: Computing the height of a building is performed by inputting a pair of two-dimensional (2-D) aerial images of a city along with their metadata. Using the metadata, a three-dimensional (3-D) vector from each image toward the location of the camera when that image was taken is determined. Pairs of corresponding image points are then computed; in each pair, an image point in one image identifies the same physical point on the building as its counterpart in the second image. Next, the images are superimposed and, for each pair of image points, the intersection of the 3-D vector of the first image originating at the first image point with the 3-D vector of the second image originating at the second image point is determined. Each intersection is a 3-D position, and the height is determined from the median of these 3-D positions.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: October 3, 2023
    Assignee: Blackshark.ai GmbH
    Inventors: Christian Poglitsch, Thomas Holzmann, Stefan Habenschuss, Christian Pirchheim, Shabab Bazrafkan
  • Patent number: 11769278
    Abstract: Vectorization of an image begins by receiving a two-dimensional rasterized image and returning a descriptor for each pixel in the image. Corner detection returns coordinates for all corners in the image. The descriptors are filtered using the corner positions to produce corner descriptors for the corner positions. A score matrix is extracted using the corner descriptors in order to produce a permutation matrix that indicates the connections between all of the corner positions. The corner coordinates and the permutation matrix are used to perform vector extraction to produce a machine-readable vector file that represents the two-dimensional image. Optionally, the corner descriptors may be refined before score extraction and the corner coordinates may be refined before vector extraction. A three-dimensional or N-dimensional image may also be input.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: September 26, 2023
    Assignee: Blackshark.ai GmbH
    Inventors: Stefano Zorzi, Shabab Bazrafkan, Friedrich Fraundorfer, Stefan Habenschuss
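The last stages of the vectorization pipeline above can be sketched in a few lines, assuming the assignment step is a simple row-argmax over the score matrix (the learned permutation step in the patent would be more involved); all names here are illustrative.

```python
import numpy as np

def score_to_permutation(score):
    # Greedy stand-in for the learned assignment step: each corner
    # connects to its highest-scoring successor.
    perm = np.zeros_like(score, dtype=int)
    perm[np.arange(len(score)), score.argmax(axis=1)] = 1
    return perm

def extract_vectors(corners, permutation):
    # corners: (N, 2) array of corner coordinates.
    # permutation: (N, N) 0/1 matrix; permutation[i, j] == 1 means corner i
    # connects to corner j. Emit each undirected edge once as a segment.
    segments = []
    n = len(corners)
    for i in range(n):
        for j in range(i + 1, n):
            if permutation[i, j] or permutation[j, i]:
                segments.append((tuple(corners[i]), tuple(corners[j])))
    return segments
```

Writing the segments out (e.g. as SVG paths) would then yield the machine-readable vector file the abstract describes.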
  • Patent number: 11710306
    Abstract: Two-dimensional objects are displayed on a user interface; user input selects an area and a machine learning model for execution, and the results are displayed as an overlay over the objects. User input may select a second model for execution, whose result is displayed as a second overlay. A first overlay from a model can also be displayed over a set of objects together with a ground truth for those objects as a second overlay. User input selects the ground-truth overlay as a reference and triggers a comparison of the first overlay with it; the visual data from the comparison is displayed on the user interface. More generally, M inference overlays can be compared with N reference overlays and the visual data from the comparison displayed on the interface.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: July 25, 2023
    Assignee: Blackshark.ai GmbH
    Inventors: Shabab Bazrafkan, Martin Presenhuber, Stefan Habenschuss
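One plausible form of the M-by-N overlay comparison described above is a matrix of intersection-over-union scores between overlay masks; the abstract does not name a metric, so IoU and the function names here are assumptions for illustration.

```python
import numpy as np

def iou(mask_a, mask_b):
    # Intersection-over-union of two boolean overlay masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 1.0

def compare_overlays(inferences, references):
    # Pairwise comparison of M inference overlays against N reference
    # (ground-truth) overlays, as an M x N matrix of IoU scores.
    return np.array([[iou(inf, ref) for ref in references] for inf in inferences])
```

The resulting matrix could then be rendered on the interface, e.g. as a heat map, as the "visual data from the comparison."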
  • Publication number: 20230146018
    Abstract: Vectorization of an image begins by receiving a two-dimensional rasterized image and returning a descriptor for each pixel in the image. Corner detection returns coordinates for all corners in the image. The descriptors are filtered using the corner positions to produce corner descriptors for the corner positions. A score matrix is extracted using the corner descriptors in order to produce a permutation matrix that indicates the connections between all of the corner positions. The corner coordinates and the permutation matrix are used to perform vector extraction to produce a machine-readable vector file that represents the two-dimensional image. Optionally, the corner descriptors may be refined before score extraction and the corner coordinates may be refined before vector extraction. A three-dimensional or N-dimensional image may also be input.
    Type: Application
    Filed: November 7, 2022
    Publication date: May 11, 2023
    Inventors: Stefano ZORZI, Shabab BAZRAFKAN, Friedrich FRAUNDORFER, Stefan HABENSCHUSS
  • Patent number: 10915817
    Abstract: Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: February 9, 2021
    Assignee: FotoNation Limited
    Inventors: Shabab Bazrafkan, Joe Lemley
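The two-loss update described above can be sketched as follows, with MSE standing in for the first (generative-network) loss and cross-entropy for the second (target-network) loss; the blending weight `alpha` and all function names are assumptions for illustration.

```python
import numpy as np

def first_loss(variant, other_sample):
    # Generative-network loss: distance between the produced variant and
    # another real sample of the same class (MSE as a stand-in).
    return float(np.mean((variant - other_sample) ** 2))

def second_loss(predictions, targets):
    # Target-network loss: cross-entropy of class predictions.
    eps = 1e-12
    return float(-np.mean(targets * np.log(predictions + eps)))

def update_losses(variant, other_sample, predictions, targets, alpha=0.5):
    # The target network is updated with the second loss alone; the
    # generative network is updated with a blend of both losses, so it
    # learns augmentations that also help the target network.
    l1 = first_loss(variant, other_sample)
    l2 = second_loss(predictions, targets)
    return l2, alpha * l1 + l2
```

Feeding the second loss back into the generative network is what ties the augmentation strategy to the target network's performance.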
  • Patent number: 10657351
    Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources, and to combine the component images corresponding to the succession by selecting the median value at corresponding pixel locations of the component images as the pixel value for the combined image.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: May 19, 2020
    Assignee: FotoNation Limited
    Inventor: Shabab Bazrafkan
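The per-pixel median combination in the abstract above is straightforward to sketch with numpy; the function name is illustrative.

```python
import numpy as np

def combine_ir_images(component_images):
    # component_images: sequence of equally sized images, each captured
    # under a different combination of the IR sources. The per-pixel
    # median rejects specular highlights and shadows that appear in only
    # a minority of the exposures.
    return np.median(np.stack(component_images), axis=0)
```

With three exposures, any artifact present in only one of them is discarded at every pixel, since the median picks the middle value.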
  • Patent number: 10546231
    Abstract: Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: January 28, 2020
    Assignee: FotoNation Limited
    Inventors: Shabab Bazrafkan, Joe Lemley
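The merge step described above can be sketched with labels as plain strings, assuming each label encodes a layer's structure plus its distance from the input; the edge-list representation and function name are assumptions for illustration.

```python
from collections import defaultdict

def merge_graphs(component_graphs):
    # Each component graph is a list of (label, next_label) edges. Nodes
    # carrying the same structural label across components are merged
    # into a single node of the combined graph, as are their edges.
    merged = defaultdict(set)
    for edges in component_graphs:
        for src, dst in edges:
            merged[src].add(dst)
    return {src: sorted(dsts) for src, dsts in merged.items()}
```

Repeating the same merge with labels computed from the opposite end (distance from the output) would give the contraction step the abstract describes.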
  • Publication number: 20190385019
    Abstract: Systems and methods for training a conditional generator model are described. The method receives a sample and determines a discriminator loss for it, based on the discriminator's ability to tell whether the sample was generated by the conditional generator model or is a ground-truth sample. The method then determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
    Type: Application
    Filed: November 16, 2018
    Publication date: December 19, 2019
    Applicant: FotoNation Limited
    Inventors: Shabab Bazrafkan, Peter Corcoran
  • Publication number: 20190272409
    Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources, and to combine the component images corresponding to the succession by selecting the median value at corresponding pixel locations of the component images as the pixel value for the combined image.
    Type: Application
    Filed: May 20, 2019
    Publication date: September 5, 2019
    Applicant: FotoNation Limited
    Inventor: Shabab BAZRAFKAN
  • Patent number: 10303916
    Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources, and to combine the component images corresponding to the succession by selecting the median value at corresponding pixel locations of the component images as the pixel value for the combined image.
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: May 28, 2019
    Assignee: FotoNation Limited
    Inventor: Shabab Bazrafkan
  • Publication number: 20180211164
    Abstract: Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
    Type: Application
    Filed: January 23, 2017
    Publication date: July 26, 2018
    Inventors: Shabab BAZRAFKAN, Joe LEMLEY
  • Publication number: 20180211155
    Abstract: Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
    Type: Application
    Filed: January 23, 2017
    Publication date: July 26, 2018
    Inventors: Shabab BAZRAFKAN, Joe LEMLEY
  • Publication number: 20170032170
    Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources, and to combine the component images corresponding to the succession by selecting the median value at corresponding pixel locations of the component images as the pixel value for the combined image.
    Type: Application
    Filed: July 26, 2016
    Publication date: February 2, 2017
    Inventor: Shabab BAZRAFKAN