Patents by Inventor Stefan Habenschuss

Stefan Habenschuss has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230334688
    Abstract: Computing the height of a building is performed by inputting a pair of two-dimensional (2-D) aerial images of a city along with their metadata. Using the metadata, a three-dimensional (3-D) vector from each image toward the location of the camera when that image was taken is determined. A plurality of pairs of corresponding image points from the images are computed; in each pair, an image point of the first image identifies the same physical point on the building as the corresponding image point of the second image. Next, the images are superimposed and, for each pair of image points, the intersection of the 3-D vector of the first image originating at the first image point with the 3-D vector of the second image originating at the second image point is determined. Each intersection is a 3-D position, and the height is determined from the median of these 3-D positions.
    Type: Application
    Filed: February 8, 2023
    Publication date: October 19, 2023
    Inventors: Christian POGLITSCH, Thomas HOLZMANN, Stefan HABENSCHUSS, Christian PIRCHHEIM, Shabab BAZRAFKAN
  • Patent number: 11776148
    Abstract: Computing the height of a building is performed by inputting a pair of two-dimensional (2-D) aerial images of a city along with their metadata. Using the metadata, a three-dimensional (3-D) vector from each image toward the location of the camera when that image was taken is determined. A plurality of pairs of corresponding image points from the images are computed; in each pair, an image point of the first image identifies the same physical point on the building as the corresponding image point of the second image. Next, the images are superimposed and, for each pair of image points, the intersection of the 3-D vector of the first image originating at the first image point with the 3-D vector of the second image originating at the second image point is determined. Each intersection is a 3-D position, and the height is determined from the median of these 3-D positions.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: October 3, 2023
    Assignee: Blackshark.ai GmbH
    Inventors: Christian Poglitsch, Thomas Holzmann, Stefan Habenschuss, Christian Pirchheim, Shabab Bazrafkan
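The two entries above describe computing a building's height by intersecting, for each matched point pair, the 3-D vector from one image's point toward its camera with the corresponding vector from the other image, then taking the median of the resulting 3-D positions. A minimal sketch of that geometric core follows; it assumes the camera-directed vectors and matched points are already expressed in a common world frame, and the closest-point-of-approach formulation and function names are illustrative, not taken from the patent.

```python
import numpy as np

def ray_midpoint(p1, d1, p2, d2):
    """Closest point of approach between two 3-D rays.

    p1, p2: ray origins (the matched image points placed in world space).
    d1, d2: ray directions (toward the camera position for each image).
    Returns the midpoint of the shortest segment joining the two rays,
    used here as the approximate intersection.
    """
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # near-parallel rays: no stable intersection
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

def building_height(point_pairs, ground_z=0.0):
    """Median height over all matched point pairs.

    point_pairs: iterable of (p1, d1, p2, d2) tuples, one per correspondence.
    """
    zs = []
    for p1, d1, p2, d2 in point_pairs:
        m = ray_midpoint(np.asarray(p1, float), np.asarray(d1, float),
                         np.asarray(p2, float), np.asarray(d2, float))
        if m is not None:
            zs.append(m[2] - ground_z)
    return float(np.median(zs)) if zs else None
```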
  • Patent number: 11769278
    Abstract: Vectorization of an image begins by receiving a two-dimensional rasterized image and returning a descriptor for each pixel in the image. Corner detection returns coordinates for all corners in the image. The descriptors are filtered using the corner positions to produce corner descriptors for the corner positions. A score matrix is extracted using the corner descriptors in order to produce a permutation matrix that indicates the connections between all of the corner positions. The corner coordinates and the permutation matrix are used to perform vector extraction to produce a machine-readable vector file that represents the two-dimensional image. Optionally, the corner descriptors may be refined before score extraction and the corner coordinates may be refined before vector extraction. A three-dimensional or N-dimensional image may also be input.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: September 26, 2023
    Assignee: Blackshark.ai GmbH
    Inventors: Stefano Zorzi, Shabab Bazrafkan, Friedrich Fraundorfer, Stefan Habenschuss
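The vectorization abstract above ends with producing a permutation matrix that encodes corner-to-corner connections and extracting a vector file from it. The sketch below illustrates only those last two stages; the Hungarian assignment used to binarize the score matrix and the ring-walking extraction are plausible choices rather than details specified by the patent, and the descriptor and corner-detection steps are assumed to exist upstream.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def scores_to_permutation(score_matrix):
    """Turn an NxN corner-to-corner score matrix into a permutation matrix.

    Hungarian assignment is one possible choice: it picks, for every corner,
    a single successor corner so that the total score is maximized.
    """
    rows, cols = linear_sum_assignment(score_matrix, maximize=True)
    perm = np.zeros_like(score_matrix, dtype=int)
    perm[rows, cols] = 1
    return perm

def extract_polygons(corners, perm):
    """Follow successor links in the permutation matrix to form closed polygons.

    corners: (N, 2) array of corner coordinates.
    perm:    (N, N) permutation matrix; perm[i, j] = 1 links corner i to corner j.
    """
    successor = perm.argmax(axis=1)
    visited, polygons = set(), []
    for start in range(len(corners)):
        if start in visited:
            continue
        ring, i = [], start
        while i not in visited:
            visited.add(i)
            ring.append(tuple(corners[i]))
            i = successor[i]
        if i == start and len(ring) >= 3:   # closed ring -> keep as a polygon
            polygons.append(ring)
    return polygons
```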
  • Patent number: 11710306
    Abstract: Two-dimensional objects are displayed upon a user interface; user input selects an area and selects a machine learning model for execution. The results are displayed as an overlay over the objects in the user interface. User input selects a second model for execution; the result of this execution is displayed as a second overlay over the objects. A first overlay from a model is displayed over a set of objects in a user interface and a ground truth corresponding to the objects is displayed as a second overlay on the user interface. User input selects the ground truth overlay as a reference and causes a comparison of the first overlay with the ground truth overlay; the visual data from the comparison is displayed on the user interface. A comparison of M inference overlays with N reference overlays is performed and visual data from the comparison is displayed on the interface.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: July 25, 2023
    Assignee: Blackshark.ai GmbH
    Inventors: Shabab Bazrafkan, Martin Presenhuber, Stefan Habenschuss
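The patent above describes comparing inference overlays with reference (ground truth) overlays and displaying visual data from the comparison. One way such a comparison could be computed for binary masks is sketched below; the green/red/blue coloring and the IoU score are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def compare_overlays(inference, reference):
    """Compare one binary inference overlay against one reference overlay.

    inference, reference: boolean (H, W) masks.
    Returns an RGB agreement image plus an intersection-over-union score.
    """
    tp = inference & reference      # predicted and present in the ground truth
    fp = inference & ~reference     # predicted but absent from the ground truth
    fn = ~inference & reference     # present in the ground truth but missed

    visual = np.zeros(inference.shape + (3,), dtype=np.uint8)
    visual[tp] = (0, 255, 0)        # green: agreement
    visual[fp] = (255, 0, 0)        # red:   false positive
    visual[fn] = (0, 0, 255)        # blue:  false negative

    union = (inference | reference).sum()
    iou = tp.sum() / union if union else 1.0
    return visual, iou

def compare_many(inferences, references):
    """Pairwise IoU scores for M inference overlays against N reference overlays."""
    return [[compare_overlays(inf, ref)[1] for ref in references]
            for inf in inferences]
```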
  • Publication number: 20230146018
    Abstract: Vectorization of an image begins by receiving a two-dimensional rasterized image and returning a descriptor for each pixel in the image. Corner detection returns coordinates for all corners in the image. The descriptors are filtered using the corner positions to produce corner descriptors for the corner positions. A score matrix is extracted using the corner descriptors in order to produce a permutation matrix that indicates the connections between all of the corner positions. The corner coordinates and the permutation matrix are used to perform vector extraction to produce a machine-readable vector file that represents the two-dimensional image. Optionally, the corner descriptors may be refined before score extraction and the corner coordinates may be refined before vector extraction. A three-dimensional or N-dimensional image may also be input.
    Type: Application
    Filed: November 7, 2022
    Publication date: May 11, 2023
    Inventors: Stefano ZORZI, Shabab BAZRAFKAN, Friedrich FRAUNDORFER, Stefan HABENSCHUSS
  • Patent number: 11049044
    Abstract: An interactive learning cycle includes an operator, a computer and a pool of images. The operator produces a sparsely-labeled data set. A back-end system produces live feedback: a densely-labeled training set which is displayed on the computer. Immediate feedback is displayed in color on the operator computer in less than about five seconds. A labeling tool displays a user interface, and for every labeling project a region is defined that is downloaded as an image data batch. The operator annotates on a per-image basis in the region and uses several UI tools to mark features in the image and group them into a predefined label class. The back-end system includes processes that run in parallel and feed back into each other, each executing a model. A local model is used independently of the global model. The global model accepts sparsely-labeled images from numerous operator computers.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: June 29, 2021
    Assignee: Blackshark.ai GmbH
    Inventors: Stefan Habenschuss, Arno Hollosi, Pavel Kuksa, Martin Presenhuber
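The last entry describes a feedback loop in which an operator's sparse labels drive both a fast local model, which returns dense feedback within seconds, and a shared global model fed by many operator computers. The structural sketch below captures that cycle; every class and method name is hypothetical, since the patent does not describe an API.

```python
# Structural sketch only: class and method names are hypothetical,
# not taken from the patent.
class LocalModel:
    """Fast model owned by one operator computer; gives near-immediate feedback."""
    def update(self, image, sparse_labels): ...
    def predict_dense(self, image): ...      # densely-labeled overlay for display

class GlobalModel:
    """Shared model that aggregates sparse labels from many operator computers."""
    def enqueue(self, image, sparse_labels): ...
    def latest_weights(self): ...

def labeling_cycle(image_batch, annotator, local, global_model):
    """One pass of the interactive learning cycle over a downloaded image batch."""
    for image in image_batch:
        sparse = annotator.annotate(image)      # operator marks a few features
        local.update(image, sparse)             # quick local refinement
        dense = local.predict_dense(image)      # dense labels for live feedback
        annotator.display(dense)                # shown on the operator computer
        global_model.enqueue(image, sparse)     # also feeds the shared global model
```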