Patents by Inventor Konstantinos Daniilidis

Konstantinos Daniilidis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230136306
    Abstract: Provided is a method for predicting a location of a fast-moving object. The method includes receiving event information from an event camera, the event information corresponding to an event detected by the event camera, generating a Binary Event History Image (BEHI) based on the event information, providing the BEHI as an input to an event-based neural network, obtaining, as an output of the event-based neural network, a first predicted location of the fast-moving object, a normal distribution indicating prediction uncertainty of the predicted location, and a predicted time-to-collision (TTC). The method further includes estimating a second predicted location of the fast-moving object based on the first predicted location, the normal distribution, and the predicted TTC output by the event-based neural network, and actuating a mechanical catching device to be at the second predicted location.
    Type: Application
    Filed: November 1, 2022
    Publication date: May 4, 2023
    Applicants: SAMSUNG ELECTRONICS CO., LTD., The Trustees of the University of Pennsylvania
    Inventors: Ziyun WANG, Fernando Cladera OJEDA, Anthony Robert BISULCO, Dae Won LEE, Camillo J. TAYLOR, Konstantinos DANIILIDIS, Ani HSIEH, Ibrahim Volkan ISLER
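The Binary Event History Image (BEHI) described in this abstract can be sketched in a few lines. This is a minimal illustration only, assuming events arrive as (x, y, timestamp, polarity) tuples; the function name and the decision to discard polarity and timing entirely are assumptions, not taken from the patent text.

```python
import numpy as np

def binary_event_history_image(events, height, width):
    """Collapse an event stream into a Binary Event History Image (BEHI):
    a single-channel binary map that is 1 wherever at least one event
    fired during the history window, regardless of polarity or time."""
    behi = np.zeros((height, width), dtype=np.uint8)
    for x, y, t, polarity in events:
        behi[y, x] = 1
    return behi

# Toy stream: three events, two of them at the same pixel.
events = [(2, 3, 0.001, 1), (2, 3, 0.002, -1), (5, 1, 0.003, 1)]
img = binary_event_history_image(events, height=8, width=8)
```

Such a representation compresses an arbitrary number of events into a fixed-size input that a conventional convolutional network can consume, which is what makes the single-image prediction of location, uncertainty, and time-to-collision tractable.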
  • Patent number: 11288818
    Abstract: A method for prediction of an indication of motion using input from an event-based camera includes receiving events captured by an event-based camera, wherein each of the events represents a location of a change in pixel intensity, a polarity of the change, and a time. The method further includes discretizing the events into time discretized event volumes, each of which contain events that occur within a specified time range. The method further includes providing the time discretized event volumes as input to an encoder-decoder neural network trained to predict an indication of motion using a loss function that measures quality of image deblurring; generating, using the neural network, a prediction of the indication of motion. The method further includes using the prediction of the indication of motion in a machine vision application.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: March 29, 2022
    Assignee: THE TRUSTEES OF THE UNIVERSITY OF PENNSYLVANIA
    Inventors: Konstantinos Daniilidis, Alex Zihao Zhu
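The time-discretized event volumes in the abstract can be sketched as follows. This is a minimal hard-binning version, assuming (x, y, t, polarity) event tuples; the function name is hypothetical, and related work in this area often uses interpolated (soft) binning across neighboring time bins rather than the hard assignment shown here.

```python
import numpy as np

def discretize_events(events, num_bins, height, width):
    """Bin events (x, y, t, polarity) into a (num_bins, H, W) volume.
    Each bin covers an equal slice of the overall time range; signed
    polarities are summed per pixel within each bin."""
    volume = np.zeros((num_bins, height, width), dtype=np.float32)
    ts = np.array([e[2] for e in events], dtype=np.float64)
    t0, t1 = ts.min(), ts.max()
    span = max(t1 - t0, 1e-9)  # guard against a zero-length time range
    for x, y, t, p in events:
        b = min(int((t - t0) / span * num_bins), num_bins - 1)
        volume[b, y, x] += p
    return volume
```

The resulting volume preserves coarse temporal ordering (unlike a single accumulated frame), which is what lets an encoder-decoder network infer motion from it.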
  • Patent number: 11187536
    Abstract: A method for simultaneous localization and mapping (SLAM) includes receiving, by at least one processor, a set of sensor measurements from a movement sensor of a mobile robot and a set of images captured by a camera on the mobile robot as the mobile robot traverses an environment. The method includes, for each image of at least a subset of the set of images, extracting, by the at least one processor, a plurality of detected objects from the image. The method includes estimating, by the at least one processor, a trajectory of the mobile robot and a respective semantic label and position of each detected object within the environment using the sensor measurements and an expectation maximization (EM) algorithm.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: November 30, 2021
    Assignee: THE TRUSTEES OF THE UNIVERSITY OF PENNSYLVANIA
    Inventors: Konstantinos Daniilidis, George J. Pappas, Sean Laurence Bowman, Nikolay Asenov Atanasov
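One flavor of the EM idea in this abstract can be illustrated with a toy landmark estimator. This is a sketch under stated assumptions, not the patented method: detections and landmarks are 2-D points, the measurement model is an isotropic Gaussian, and all names are hypothetical. The E-step computes soft association weights of each object detection to each candidate landmark; the M-step re-estimates each landmark as the weighted mean of the detections.

```python
import numpy as np

def em_landmark_update(detections, landmarks, sigma=1.0, iters=10):
    """Toy EM loop: softly associate detections to landmarks (E-step),
    then move each landmark to the weighted mean of its detections
    (M-step). Trajectory estimation is omitted for brevity."""
    landmarks = np.array(landmarks, dtype=float)
    detections = np.array(detections, dtype=float)
    for _ in range(iters):
        # E-step: responsibility of landmark j for detection i,
        # from squared distance under a Gaussian measurement model.
        d2 = ((detections[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted mean of detections per landmark.
        landmarks = (w[:, :, None] * detections[:, None, :]).sum(0) \
            / w.sum(0)[:, None]
    return landmarks
```

The appeal of the soft (EM) formulation is that no hard, irrevocable decision is made about which detection corresponds to which landmark; ambiguous associations simply contribute fractionally to several landmarks.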
  • Patent number: 11138742
    Abstract: A method for implementing a soft data association modeled with probabilities is provided. The association probabilities are computed in an intertwined expectation maximization (EM) scheme with an optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, longer tracks can be enabled by computing the affine deformation with respect to the initial point and using the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration that is different for every feature and sized inversely proportional to the length of the optical flow. The results can be seen in egomotion and very fast vehicle sequences.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: October 5, 2021
    Assignee: THE TRUSTEES OF THE UNIVERSITY OF PENNSYLVANIA
    Inventors: Konstantinos Daniilidis, Alex Zihao Zhu, Nikolay Asenov Atanasov
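Two pieces of this abstract lend themselves to a short sketch: the soft association probabilities and the flow-scaled temporal integration window. Both functions below are illustrative assumptions, not the patented implementation: events and features are 2-D points, the association model is an isotropic Gaussian on spatial distance, and the window is the time needed to travel a fixed pixel distance at the current flow speed.

```python
import numpy as np

def association_probabilities(events_xy, feature_xy, sigma=2.0):
    """Soft association (E-step): each event receives a probability of
    belonging to each tracked feature, from a Gaussian on spatial
    distance, normalized so each event's probabilities sum to 1."""
    d2 = ((np.asarray(events_xy, float)[:, None, :] -
           np.asarray(feature_xy, float)[None, :, :]) ** 2).sum(-1)
    p = np.exp(-d2 / (2 * sigma ** 2))
    return p / p.sum(axis=1, keepdims=True)

def integration_window(flow_vec, pixels=3.0):
    """Temporal window sized inversely proportional to flow magnitude:
    the faster a feature moves, the shorter its integration time
    (here, the time to traverse a fixed pixel distance)."""
    speed = np.linalg.norm(flow_vec)
    return pixels / max(speed, 1e-9)
```

Sizing the window inversely to the flow length keeps the motion blur within each feature's event window roughly constant, whether the scene motion is slow egomotion or a fast-moving vehicle.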
  • Publication number: 20200265590
    Abstract: A method for prediction of an indication of motion using input from an event-based camera includes receiving events captured by an event-based camera, wherein each of the events represents a location of a change in pixel intensity, a polarity of the change, and a time. The method further includes discretizing the events into time discretized event volumes, each of which contain events that occur within a specified time range. The method further includes providing the time discretized event volumes as input to an encoder-decoder neural network trained to predict an indication of motion using a loss function that measures quality of image deblurring; generating, using the neural network, a prediction of the indication of motion. The method further includes using the prediction of the indication of motion in a machine vision application.
    Type: Application
    Filed: February 19, 2020
    Publication date: August 20, 2020
    Inventors: Konstantinos Daniilidis, Alex Zihao Zhu
  • Publication number: 20200005469
    Abstract: Methods, systems and computer readable media for implementing a soft data association modeled with probabilities. The association probabilities are computed in an intertwined EM scheme with the optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, longer tracks can be enabled by computing the affine deformation with respect to the initial point and using the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow. The results can be seen in egomotion and very fast vehicle sequences.
    Type: Application
    Filed: February 14, 2018
    Publication date: January 2, 2020
    Inventors: Konstantinos Daniilidis, Alex Zihao Zhu, Nikolay Asenov Atanasov
  • Publication number: 20190219401
    Abstract: A method for simultaneous localization and mapping (SLAM) includes receiving, by at least one processor, a set of sensor measurements from a movement sensor of a mobile robot and a set of images captured by a camera on the mobile robot as the mobile robot traverses an environment. The method includes, for each image of at least a subset of the set of images, extracting, by the at least one processor, a plurality of detected objects from the image. The method includes estimating, by the at least one processor, a trajectory of the mobile robot and a respective semantic label and position of each detected object within the environment using the sensor measurements and an expectation maximization (EM) algorithm.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 18, 2019
    Inventors: Konstantinos Daniilidis, George J. Pappas, Sean Laurence Bowman, Nikolay Asenov Atanasov