Patents by Inventor Shabab BAZRAFKAN
Shabab BAZRAFKAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11797864
Abstract: Systems and methods for training a conditional generator model are described. Methods receive a sample and determine a discriminator loss for the received sample. The discriminator loss is based on an ability to determine whether the sample is generated by the conditional generator model or is a ground truth sample. The method determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
Type: Grant
Filed: November 16, 2018
Date of Patent: October 24, 2023
Inventors: Shabab Bazrafkan, Peter Corcoran
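The aggregate-loss update this abstract describes can be sketched as follows. The function names, the log form of the adversarial term, and the choice of an L1 secondary term are illustrative assumptions on my part, not taken from the patent:

```python
import numpy as np

def discriminator_loss(d_score_on_generated):
    # Generator-side adversarial term: push the discriminator's
    # score on the generated sample toward "real" (score -> 1).
    return -np.log(d_score_on_generated + 1e-8)

def secondary_loss(generated, ground_truth):
    # Illustrative secondary term: pixel-wise L1 distance to the ground truth.
    return float(np.mean(np.abs(generated - ground_truth)))

def generator_objective(d_score, generated, ground_truth, weight=1.0):
    # Per the abstract, the generator is updated on the AGGREGATE of both terms.
    return discriminator_loss(d_score) + weight * secondary_loss(generated, ground_truth)
```

Combining the two terms lets the conditional generator learn both to fool the discriminator and to stay close to the ground-truth target.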
-
Publication number: 20230334688
Abstract: Computing the height of a building begins by inputting a pair of two-dimensional (2-D) aerial images of a city along with their metadata. Using the metadata, a three-dimensional (3-D) vector from each image toward the location of the camera when that image was taken is determined. A plurality of pairs of corresponding image points is computed from the images; in each pair, the image point of one image identifies the same physical point on the building as the image point of the second image. Next, the images are superimposed and, for each pair of image points, the intersection of the 3-D vector of the first image originating at the first image point with the 3-D vector of the second image originating at the second image point is determined. Each intersection is a 3-D position, and the height is determined from the median of these 3-D positions.
Type: Application
Filed: February 8, 2023
Publication date: October 19, 2023
Inventors: Christian POGLITSCH, Thomas HOLZMANN, Stefan HABENSCHUSS, Christian PIRCHHEIM, Shabab BAZRAFKAN
-
Patent number: 11776148
Abstract: Computing the height of a building begins by inputting a pair of two-dimensional (2-D) aerial images of a city along with their metadata. Using the metadata, a three-dimensional (3-D) vector from each image toward the location of the camera when that image was taken is determined. A plurality of pairs of corresponding image points is computed from the images; in each pair, the image point of one image identifies the same physical point on the building as the image point of the second image. Next, the images are superimposed and, for each pair of image points, the intersection of the 3-D vector of the first image originating at the first image point with the 3-D vector of the second image originating at the second image point is determined. Each intersection is a 3-D position, and the height is determined from the median of these 3-D positions.
Type: Grant
Filed: February 8, 2023
Date of Patent: October 3, 2023
Assignee: Blackshark.ai GmbH
Inventors: Christian Poglitsch, Thomas Holzmann, Stefan Habenschuss, Christian Pirchheim, Shabab Bazrafkan
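As a rough sketch of the geometry in this abstract: each pair of corresponding image points defines two 3-D rays, their (near-)intersection gives one 3-D position, and the height comes from the median of those positions. The ray-midpoint formulation below is a standard closest-point-between-two-lines construction chosen by me, not text from the patent:

```python
import numpy as np

def ray_midpoint(p1, d1, p2, d2):
    # Midpoint of the shortest segment between two (possibly skew) 3-D rays,
    # used as the "intersection" of the two viewing vectors.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    b, d, e = d1 @ d2, d1 @ w0, d2 @ w0
    denom = 1.0 - b * b                  # assumes the rays are not parallel
    s = (b * e - d) / denom              # parameter along the first ray
    t = (e - b * d) / denom              # parameter along the second ray
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

def building_height(point_pairs, ground_z=0.0):
    # One 3-D position per corresponding point pair; the height is the
    # median of their z coordinates, per the abstract.
    zs = [ray_midpoint(*pair)[2] for pair in point_pairs]
    return float(np.median(zs)) - ground_z
```

The median makes the estimate robust to occasional mismatched point pairs, which would skew a mean.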
-
Patent number: 11769278
Abstract: Vectorization of an image begins by receiving a two-dimensional rasterized image and returning a descriptor for each pixel in the image. Corner detection returns coordinates for all corners in the image. The descriptors are filtered using the corner positions to produce corner descriptors for the corner positions. A score matrix is extracted using the corner descriptors in order to produce a permutation matrix that indicates the connections between all of the corner positions. The corner coordinates and the permutation matrix are used to perform vector extraction to produce a machine-readable vector file that represents the two-dimensional image. Optionally, the corner descriptors may be refined before score extraction and the corner coordinates may be refined before vector extraction. A three-dimensional or N-dimensional image may also be input.
Type: Grant
Filed: November 7, 2022
Date of Patent: September 26, 2023
Assignee: Blackshark.ai GmbH
Inventors: Stefano Zorzi, Shabab Bazrafkan, Friedrich Fraundorfer, Stefan Habenschuss
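A minimal sketch of the score-matrix-to-connections step, assuming dot-product similarity between corner descriptors and a greedy row-wise assignment as a stand-in for the patent's learned permutation-matrix step (both assumptions are mine):

```python
import numpy as np

def connection_matrix(corner_descriptors):
    # Score matrix: similarity between every pair of corner descriptors.
    scores = corner_descriptors @ corner_descriptors.T
    np.fill_diagonal(scores, -np.inf)        # a corner never connects to itself
    # Greedy argmax per row in place of the learned permutation step.
    perm = np.zeros(scores.shape, dtype=bool)
    perm[np.arange(len(scores)), np.argmax(scores, axis=1)] = True
    return perm

def extract_edges(corners, perm):
    # One (x1, y1, x2, y2) segment per predicted connection, forming
    # the content of the machine-readable vector file.
    return [(*corners[i], *corners[j]) for i, j in zip(*np.nonzero(perm))]
```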
-
Patent number: 11710306
Abstract: Two-dimensional objects are displayed upon a user interface; user input selects an area and selects a machine learning model for execution. The results are displayed as an overlay over the objects in the user interface. User input selects a second model for execution; the result of this execution is displayed as a second overlay over the objects. A first overlay from a model is displayed over a set of objects in a user interface and a ground truth corresponding to the objects is displayed as a second overlay on the user interface. User input selects the ground truth overlay as a reference and causes a comparison of the first overlay with the ground truth overlay; the visual data from the comparison is displayed on the user interface. A comparison of M inference overlays with N reference overlays is performed and visual data from the comparison is displayed on the interface.
Type: Grant
Filed: June 24, 2022
Date of Patent: July 25, 2023
Assignee: Blackshark.ai GmbH
Inventors: Shabab Bazrafkan, Martin Presenhuber, Stefan Habenschuss
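The M-inference-by-N-reference comparison could, for instance, produce an intersection-over-union score table; the metric and function names here are my assumption, as the patent does not specify a particular comparison:

```python
import numpy as np

def overlay_iou(inference_mask, reference_mask):
    # Pixel-wise intersection-over-union between two boolean overlays.
    inter = np.logical_and(inference_mask, reference_mask).sum()
    union = np.logical_or(inference_mask, reference_mask).sum()
    return inter / union if union else 1.0

def compare_overlays(inferences, references):
    # M x N score table: every inference overlay against every reference
    # overlay, ready to be rendered as visual data on the interface.
    return [[overlay_iou(i, r) for r in references] for i in inferences]
```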
-
Publication number: 20230146018
Abstract: Vectorization of an image begins by receiving a two-dimensional rasterized image and returning a descriptor for each pixel in the image. Corner detection returns coordinates for all corners in the image. The descriptors are filtered using the corner positions to produce corner descriptors for the corner positions. A score matrix is extracted using the corner descriptors in order to produce a permutation matrix that indicates the connections between all of the corner positions. The corner coordinates and the permutation matrix are used to perform vector extraction to produce a machine-readable vector file that represents the two-dimensional image. Optionally, the corner descriptors may be refined before score extraction and the corner coordinates may be refined before vector extraction. A three-dimensional or N-dimensional image may also be input.
Type: Application
Filed: November 7, 2022
Publication date: May 11, 2023
Inventors: Stefano ZORZI, Shabab BAZRAFKAN, Friedrich FRAUNDORFER, Stefan HABENSCHUSS
-
Patent number: 10915817
Abstract: Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
Type: Grant
Filed: January 23, 2017
Date of Patent: February 9, 2021
Assignee: FotoNation Limited
Inventors: Shabab Bazrafkan, Joe Lemley
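One way to picture the generator being updated with both losses is a toy one-parameter generator trained with a numeric gradient of the combined loss. Every function form here is an illustrative stand-in of my own; the patent does not prescribe these losses or this optimizer:

```python
import numpy as np

def generator(sample, params):
    # Toy "generative network": a single scaling parameter.
    return sample * params

def combined_loss(params, sample, class_peer, target_label):
    variant = generator(sample, params)
    first = np.mean((variant - class_peer) ** 2)     # variant vs. another sample of the class
    second = (variant.sum() - target_label) ** 2     # stand-in for the target network's loss
    return float(first + second)

def update_generator(params, sample, peer, label, lr=0.01, eps=1e-5):
    # The generator parameters move along the gradient of BOTH losses,
    # so it learns variants that also help the target network.
    grad = (combined_loss(params + eps, sample, peer, label)
            - combined_loss(params - eps, sample, peer, label)) / (2 * eps)
    return params - lr * grad
```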
-
Patent number: 10657351
Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources. The processor is further arranged to combine component images corresponding to the succession of images by selecting a median value for corresponding pixel locations of the component images as a pixel value for the combined image.
Type: Grant
Filed: May 20, 2019
Date of Patent: May 19, 2020
Assignee: FotoNation Limited
Inventor: Shabab Bazrafkan
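The per-pixel median combination reads, in outline, like this; a minimal sketch assuming equally sized grayscale frames:

```python
import numpy as np

def combine_ir_frames(frames):
    # frames: images of the same scene, each lit by a different
    # combination of the IR sources. Taking the per-pixel median
    # suppresses artefacts (e.g. specular glints) that appear in
    # only some of the frames.
    return np.median(np.stack(frames), axis=0)
```

Because the median ignores outliers, a reflection present in only one of three frames does not survive into the combined image.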
-
Patent number: 10546231
Abstract: Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
Type: Grant
Filed: January 23, 2017
Date of Patent: January 28, 2020
Assignee: FotoNation Limited
Inventors: Shabab Bazrafkan, Joe Lemley
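A highly simplified sketch of one merge pass: nodes are keyed by a structural label (here a (layer-type, depth) pair), so merging graphs collapses nodes that share a label while pooling their edges. This label encoding is my simplification of the patent's description, not its actual scheme:

```python
def merge_graphs(component_graphs):
    # Each component graph is a list of directed edges between labelled
    # nodes, where a label is e.g. ("conv", depth_from_input). Nodes with
    # the same label merge into one; duplicate edges collapse too.
    nodes, edges = set(), set()
    for graph in component_graphs:
        for u, v in graph:
            nodes.update((u, v))
            edges.add((u, v))
    return sorted(nodes), sorted(edges)
```

Running this once per labelling direction (input-relative, then output-relative) mirrors the merge-then-contract sequence in the abstract.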
-
Publication number: 20190385019
Abstract: Systems and methods for training a conditional generator model are described. Methods receive a sample and determine a discriminator loss for the received sample. The discriminator loss is based on an ability to determine whether the sample is generated by the conditional generator model or is a ground truth sample. The method determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
Type: Application
Filed: November 16, 2018
Publication date: December 19, 2019
Applicant: FotoNation Limited
Inventors: Shabab Bazrafkan, Peter Corcoran
-
Publication number: 20190272409
Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources. The processor is further arranged to combine component images corresponding to the succession of images by selecting a median value for corresponding pixel locations of the component images as a pixel value for the combined image.
Type: Application
Filed: May 20, 2019
Publication date: September 5, 2019
Applicant: FotoNation Limited
Inventor: Shabab BAZRAFKAN
-
Patent number: 10303916
Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources. The processor is further arranged to combine component images corresponding to the succession of images by selecting a median value for corresponding pixel locations of the component images as a pixel value for the combined image.
Type: Grant
Filed: July 26, 2016
Date of Patent: May 28, 2019
Assignee: FotoNation Limited
Inventor: Shabab Bazrafkan
-
Publication number: 20180211164
Abstract: Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
Type: Application
Filed: January 23, 2017
Publication date: July 26, 2018
Inventors: Shabab BAZRAFKAN, Joe LEMLEY
-
Publication number: 20180211155
Abstract: Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
Type: Application
Filed: January 23, 2017
Publication date: July 26, 2018
Inventors: Shabab BAZRAFKAN, Joe LEMLEY
-
Publication number: 20170032170
Abstract: An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources. The processor is further arranged to combine component images corresponding to the succession of images by selecting a median value for corresponding pixel locations of the component images as a pixel value for the combined image.
Type: Application
Filed: July 26, 2016
Publication date: February 2, 2017
Inventor: Shabab BAZRAFKAN