Patents by Inventor Darko Zikic

Darko Zikic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11302065
    Abstract: Examples disclosed herein may involve (i) obtaining 2D image data and 3D sensor data that is representative of an area, (ii) identifying a first set of pixels associated with ephemeral objects detected in the area and a second set of pixels associated with non-ephemeral objects detected in the area, (iii) identifying a first set of ephemeral 3D data points associated with the detected ephemeral objects and a second set of non-ephemeral 3D data points associated with the detected non-ephemeral objects, (iv) mapping the first and second sets of 3D data points to a grid of voxels associated with the area, (v) making a determination that one or more voxels in the grid each contain a threshold extent of ephemeral data points, and (vi) based at least in part on the determination, filtering the 3D sensor data to remove the 3D data points contained within the one or more voxels.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: April 12, 2022
    Assignee: Woven Planet North America, Inc.
    Inventors: Wilhelm Richert, Darko Zikic, Clemens Marschner
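
The abstract above boils down to a per-voxel vote over labeled 3D returns. Below is a minimal sketch of only that voting and filtering step, using NumPy; the voxel size, the fraction-based reading of "a threshold extent of ephemeral data points", and the function name filter_ephemeral_points are illustrative assumptions, not the patented implementation.

    # Illustrative sketch of voxel-based filtering of ephemeral 3D points.
    import numpy as np

    def filter_ephemeral_points(points, is_ephemeral, voxel_size=0.5, threshold=0.3):
        """Remove all 3D points that fall in voxels whose share of
        ephemeral points meets or exceeds `threshold`.

        points       : (N, 3) float array of 3D sensor points
        is_ephemeral : (N,) bool array, True where the point is associated
                       with pixels labeled as an ephemeral object
        """
        # Map every point to an integer voxel index and group identical voxels.
        voxel_idx = np.floor(points / voxel_size).astype(np.int64)
        _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)

        # Per-voxel counts of total and ephemeral points.
        total = np.bincount(inverse)
        ephem = np.bincount(inverse, weights=is_ephemeral.astype(float))

        # Voxels dominated by ephemeral points are dropped entirely.
        drop_voxel = (ephem / total) >= threshold
        keep = ~drop_voxel[inverse]
        return points[keep]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 10, size=(1000, 3))
        labels = rng.random(1000) < 0.2
        print(filter_ephemeral_points(pts, labels).shape)
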
  • Patent number: 11295521
    Abstract: The present disclosure relates to a method of generating a ground map. For instance, the present disclosure provides a method of determining relevant point cloud data for generating a surface topology over real-world geographical areas using sensor data, which may involve (i) receiving a first dataset from a first sensor, (ii) receiving a second dataset from a second sensor, (iii) classifying the first dataset by identifying characteristics of the first dataset that correspond to a ground surface, (iv) filtering the second dataset based on the classified first dataset, and (v) generating a ground map from the filtered second dataset.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: April 5, 2022
    Assignee: Woven Planet North America, Inc.
    Inventors: Marco Caccin, Clemens Marschner, Nikolai Morin, Darko Zikic
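
The ground-map method above pairs a classified first dataset with a filtered second dataset. The sketch below shows one way the five listed steps could fit together, assuming both datasets are point clouds in a shared frame; the per-cell minimum-height ground classifier and the median-height output map are simple stand-ins, not the classification or map generation claimed in the patent.

    # A minimal sketch of the two-sensor ground-map idea, under the
    # assumptions stated above.
    import numpy as np

    def ground_map(first, second, cell=1.0, tol=0.2):
        # (i)-(iii) Classify the first dataset: points near the lowest point
        # of their XY cell are treated as ground.
        cells = np.floor(first[:, :2] / cell).astype(np.int64)
        keys, inv = np.unique(cells, axis=0, return_inverse=True)
        cell_min = np.full(len(keys), np.inf)
        np.minimum.at(cell_min, inv, first[:, 2])
        ground_height = {tuple(k): h for k, h in zip(keys, cell_min)}

        # (iv) Filter the second dataset: keep only points close to the
        # ground height estimated from the first dataset.
        kept = []
        for p in second:
            key = tuple(np.floor(p[:2] / cell).astype(np.int64))
            h = ground_height.get(key)
            if h is not None and abs(p[2] - h) <= tol:
                kept.append(p)
        filtered = np.asarray(kept)

        # (v) Generate the ground map: per-cell median height of the
        # filtered second dataset.
        gmap = {}
        if len(filtered):
            c2 = np.floor(filtered[:, :2] / cell).astype(np.int64)
            for key in np.unique(c2, axis=0):
                mask = np.all(c2 == key, axis=1)
                gmap[tuple(key)] = float(np.median(filtered[mask, 2]))
        return gmap
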
  • Patent number: 11288522
    Abstract: The present invention relates to a method of generating an overhead view image of an area. More particularly, the present invention relates to a method of generating a contextual multi-image based overhead view image of an area using ground map data and field of view image data. Various embodiments of the present technology can include methods, systems, non-transitory computer readable media, and computer programs configured to determine a ground map of the geographical area; receive a plurality of images of the geographical area; process the plurality of images to select a subset of images to generate the overhead view of the geographical area; divide the ground map into a plurality of sampling points of the geographical area; and determine a color of a plurality of patches of the overhead view image from the subset of images, each patch representing a sampling point of the geographical area.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: March 29, 2022
    Assignee: Woven Planet North America, Inc.
    Inventors: Clemens Marschner, Wilhelm Richert, Thomas Schiwietz, Darko Zikic
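
The steps in the abstract above amount to projecting ground sampling points into selected camera images and reading back a color per patch. The sketch below shows that projection-and-lookup loop under simple assumptions: a pinhole camera model per image, a "closest camera wins" selection rule, and hypothetical helper names project and color_overhead; none of these are taken from the patent.

    # Hedged sketch of coloring overhead-view patches from ground-level images.
    import numpy as np

    def project(point, K, R, t):
        """Project a 3D world point into pixels; return None if behind the camera."""
        p_cam = R @ point + t
        if p_cam[2] <= 0:
            return None
        uv = K @ (p_cam / p_cam[2])
        return uv[:2]

    def color_overhead(sample_points, cameras, images):
        """sample_points: (M, 3) ground samples; cameras: list of (K, R, t);
        images: list of HxWx3 uint8 arrays. Returns (M, 3) patch colors."""
        colors = np.zeros((len(sample_points), 3), dtype=np.uint8)
        for i, X in enumerate(sample_points):
            best = None  # (distance to camera, color)
            for (K, R, t), img in zip(cameras, images):
                uv = project(X, K, R, t)
                if uv is None:
                    continue
                u, v = int(round(uv[0])), int(round(uv[1]))
                h, w = img.shape[:2]
                if 0 <= u < w and 0 <= v < h:
                    dist = np.linalg.norm(R @ X + t)
                    if best is None or dist < best[0]:
                        best = (dist, img[v, u])
            if best is not None:
                colors[i] = best[1]
        return colors
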
  • Patent number: 11244500
    Abstract: Various embodiments of the present technology can include methods, systems and non-transitory computer readable media and computer programs configured to determine one or more images of a generated overhead view of a geographical area. The generated overhead view is generated from aggregated pixel values determined from correlated pixel values in images of the geographical area. The disclosed approaches identify semantic map features as being present in the images of the overhead perspective of the geographic area. The disclosed approaches extract the semantic map features of the geographical area from the images of the substantially overhead perspective of the geographical area. The disclosed approaches translate the extracted semantic map features to a semantic map layer of a geometric map associated with the geographical area.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 8, 2022
    Assignee: Woven Planet North America, Inc.
    Inventors: Clemens Marschner, Wilhelm Richert, Thomas Schiwietz, Darko Zikic
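
The semantic-map-layer step above is, at its core, a raster-to-world coordinate translation applied to labeled overhead pixels. Below is a small sketch of only that translation, assuming a per-pixel semantic mask already exists (produced by any segmentation model, which is not shown) and that the overhead raster has a known origin and resolution; those parameters and the function name are illustrative assumptions.

    # Translate labeled overhead-raster pixels into world coordinates for a
    # semantic map layer (sketch only).
    import numpy as np

    def pixels_to_map_layer(semantic_mask, label, origin_xy, resolution):
        """semantic_mask: HxW int array of class ids for the overhead raster.
        Returns (K, 2) world XY coordinates of pixels with the given label."""
        rows, cols = np.nonzero(semantic_mask == label)
        x = origin_xy[0] + (cols + 0.5) * resolution
        y = origin_xy[1] - (rows + 0.5) * resolution  # raster rows grow downward
        return np.stack([x, y], axis=1)

    if __name__ == "__main__":
        mask = np.zeros((4, 4), dtype=int)
        mask[1, 2] = 7  # pretend class 7 is "lane marking"
        print(pixels_to_map_layer(mask, 7, origin_xy=(100.0, 200.0), resolution=0.1))
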
  • Publication number: 20210304491
    Abstract: The present disclosure relates to a method of generating a ground map. For instance, the present disclosure provides a method of determining relevant point cloud data for generating a surface topology over real-world geographical areas using sensor data, which may involve (i) receiving a first dataset from a first sensor, (ii) receiving a second dataset from a second sensor, (iii) classifying the first dataset by identifying characteristics of the first dataset that correspond to a ground surface, (iv) filtering the second dataset based on the classified first dataset, and (v) generating a ground map from the filtered second dataset.
    Type: Application
    Filed: March 25, 2020
    Publication date: September 30, 2021
    Inventors: Marco Caccin, Clemens Marschner, Nikolai Morin, Darko Zikic
  • Publication number: 20210201050
    Abstract: The present invention relates to a method of generating an overhead view image of an area. More particularly, the present invention relates to a method of generating a contextual multi-image based overhead view image of an area using ground map data and field of view image data. Various embodiments of the present technology can include methods, systems, non-transitory computer readable media, and computer programs configured to determine a ground map of the geographical area; receive a plurality of images of the geographical area; process the plurality of images to select a subset of images to generate the overhead view of the geographical area; divide the ground map into a plurality of sampling points of the geographical area; and determine a color of a plurality of patches of the overhead view image from the subset of images, each patch representing a sampling point of the geographical area.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Applicant: Lyft, Inc.
    Inventors: Clemens Marschner, Wilhelm Richert, Thomas Schiwietz, Darko Zikic
  • Publication number: 20210201569
    Abstract: The present invention relates to a method of generating an overhead view image of an area. More particularly, the present invention relates to a method of generating a contextual multi-image based overhead view image of an area using ground map data and field of view image data.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Applicant: Lyft, Inc.
    Inventors: Clemens Marschner, Wilhelm Richert, Thomas Schiwietz, Darko Zikic
  • Publication number: 20210183139
    Abstract: Examples disclosed herein may involve (i) obtaining 2D image data and 3D sensor data that is representative of an area, (ii) identifying a first set of pixels associated with ephemeral objects detected in the area and a second set of pixels associated with non-ephemeral objects detected in the area, (iii) identifying a first set of ephemeral 3D data points associated with the detected ephemeral objects and a second set of non-ephemeral 3D data points associated with the detected non-ephemeral objects, (iv) mapping the first and second sets of 3D data points to a grid of voxels associated with the area, (v) making a determination that one or more voxels in the grid each contain a threshold extent of ephemeral data points, and (vi) based at least in part on the determination, filtering the 3D sensor data to remove the 3D data points contained within the one or more voxels.
    Type: Application
    Filed: December 17, 2019
    Publication date: June 17, 2021
    Inventors: Wilhelm Richert, Darko Zikic, Clemens Marschner
  • Patent number: 10776423
    Abstract: Video processing for motor task analysis is described. In various examples, a video of a person carrying out a motor task, such as placing the forefinger on the nose, is input to a trained machine learning system to classify the motor task into one of a plurality of classes. In an example, motion descriptors such as optical flow are computed from pairs of frames of the video and the motion descriptors are input to the machine learning system. The motor task analysis may be used to assess or evaluate neurological conditions such as multiple sclerosis and/or Parkinson's.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: September 15, 2020
    Assignee: Novartis AG
    Inventors: Peter Kontschieder, Jonas Dorn, Darko Zikic, Antonio Criminisi, Frank Kurt Dahlke
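
The motor-task pipeline above rests on two ingredients: optical-flow motion descriptors computed from pairs of frames, and a trained classifier over features derived from them. The sketch below wires those ingredients together with OpenCV's Farneback dense flow and a generic pre-trained classifier; the pooling of flow into a fixed-length feature and the classifier interface are assumptions, not the learned discriminative features described in the patent.

    # Sketch: optical-flow descriptors pooled into a feature vector and scored
    # by a pre-trained classifier.
    import cv2
    import numpy as np

    def flow_features(frames):
        """frames: list of grayscale uint8 images. Returns one feature vector."""
        stats = []
        for prev, curr in zip(frames[:-1], frames[1:]):
            flow = cv2.calcOpticalFlowFarneback(
                prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Simple per-pair summary: mean/max magnitude plus an 8-bin,
            # magnitude-weighted angle histogram.
            hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
            stats.append(np.concatenate([[mag.mean(), mag.max()], hist]))
        return np.mean(stats, axis=0)

    def classify_motor_task(frames, trained_model):
        """trained_model: any scikit-learn style classifier with .predict()."""
        return trained_model.predict(flow_features(frames)[None, :])[0]
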
  • Patent number: 10083233
    Abstract: Video processing for motor task analysis is described. In various examples, a video of at least part of a person or animal carrying out a motor task, such as placing the forefinger on the nose, is input to a trained machine learning system to classify the motor task into one of a plurality of classes. In an example, motion descriptors such as optical flow are computed from pairs of frames of the video and the motion descriptors are input to the machine learning system. For example, during training the machine learning system identifies time-dependent and/or location-dependent acceleration or velocity features which discriminate between the classes of the motor task. In examples, the trained machine learning system computes, from the motion descriptors, the location dependent acceleration or velocity features which it has learned as being good discriminators. In various examples, a feature is computed using sub-volumes of the video.
    Type: Grant
    Filed: November 9, 2014
    Date of Patent: September 25, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Peter Kontschieder, Jonas Dorn, Darko Zikic, Antonio Criminisi
  • Publication number: 20170293805
    Abstract: Video processing for motor task analysis is described. In various examples, a video of a person carrying out a motor task, such as placing the forefinger on the nose, is input to a trained machine learning system to classify the motor task into one of a plurality of classes. In an example, motion descriptors such as optical flow are computed from pairs of frames of the video and the motion descriptors are input to the machine learning system. The motor task analysis may be used to assess or evaluate neurological conditions such as multiple sclerosis and/or Parkinson's.
    Type: Application
    Filed: March 9, 2015
    Publication date: October 12, 2017
    Inventors: Peter Kontschieder, Jonas Dorn, Darko Zikic, Antonio Criminisi, Frank Kurt Dahlke
  • Publication number: 20160071284
    Abstract: Video processing for motor task analysis is described. In various examples, a video of at least part of a person or animal carrying out a motor task, such as placing the forefinger on the nose, is input to a trained machine learning system to classify the motor task into one of a plurality of classes. In an example, motion descriptors such as optical flow are computed from pairs of frames of the video and the motion descriptors are input to the machine learning system. For example, during training the machine learning system identifies time-dependent and/or location-dependent acceleration or velocity features which discriminate between the classes of the motor task. In examples, the trained machine learning system computes, from the motion descriptors, the location dependent acceleration or velocity features which it has learned as being good discriminators. In various examples, a feature is computed using sub-volumes of the video.
    Type: Application
    Filed: November 9, 2014
    Publication date: March 10, 2016
    Inventors: Peter Kontschieder, Jonas Dorn, Darko Zikic, Antonio Criminisi
  • Patent number: 8494243
    Abstract: A method for performing deformable non-rigid registration of 2D and 3D images of a vascular structure for assistance in surgical intervention includes acquiring 3D image data. An abdominal aorta is segmented from the 3D image data using graph-cut based segmentation to produce a segmentation mask. Centerlines are generated from the segmentation mask using a sequential topological thinning process. 3D graphs are generated from the centerlines. 2D image data is acquired. The 2D image data is segmented to produce a distance map. An energy function is defined based on the 3D graphs and the distance map. The energy function is minimized to perform non-rigid registration between the 3D image data and the 2D image data. The registration may be optimized.
    Type: Grant
    Filed: May 28, 2010
    Date of Patent: July 23, 2013
    Assignee: Siemens Aktiengesellschaft
    Inventors: Hari Sundar, Ali Kamen, Rui Liao, Darko Zikic, Martin Groher, Yunhao Tan
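
The registration method above culminates in minimizing an energy defined over the projected 3D centerline graph and the 2D distance map. The fragment below sketches only that final minimization, with the 3D-to-2D projection already applied, a simple neighbor-smoothness regularizer, and SciPy's L-BFGS-B as the optimizer; the graph construction, segmentation, and the patent's actual energy terms are omitted, and all names here are illustrative.

    # Sketch of the distance-map-driven energy minimization step only.
    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.optimize import minimize

    def register(centerline_2d, distance_map, smoothness=1.0):
        """centerline_2d: (N, 2) projected centerline points as (row, col).
        distance_map: 2D array giving distance to the segmented 2D vessel."""
        n = len(centerline_2d)

        def energy(disp_flat):
            disp = disp_flat.reshape(n, 2)
            moved = centerline_2d + disp
            # Data term: sample the distance map at the displaced positions.
            data = map_coordinates(distance_map, moved.T, order=1, mode='nearest').sum()
            # Regularization: neighboring centerline points move similarly.
            reg = np.sum((disp[1:] - disp[:-1]) ** 2)
            return data + smoothness * reg

        res = minimize(energy, np.zeros(2 * n), method='L-BFGS-B')
        return centerline_2d + res.x.reshape(n, 2)
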
  • Publication number: 20110026794
    Abstract: A method for performing deformable non-rigid registration of 2D and 3D images of a vascular structure for assistance in surgical intervention includes acquiring 3D image data. An abdominal aorta is segmented from the 3D image data using graph-cut based segmentation to produce a segmentation mask. Centerlines are generated from the segmentation mask using a sequential topological thinning process. 3D graphs are generated from the centerlines. 2D image data is acquired. The 2D image data is segmented to produce a distance map. An energy function is defined based on the 3D graphs and the distance map. The energy function is minimized to perform non-rigid registration between the 3D image data and the 2D image data. The registration may be optimized.
    Type: Application
    Filed: May 28, 2010
    Publication date: February 3, 2011
    Applicant: Siemens Corporation
    Inventors: Hari Sundar, Ali Kamen, Rui Liao, Darko Zikic, Martin Groher, Yunhao Tan