Patents by Inventor Ondrej MIKSIK

Ondrej MIKSIK is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11822620
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to optimizing the accuracy of local feature detection in a variety of physical environments. Homographic adaptation for facilitating personalization of local feature models to specific target environments is formulated in a bilevel optimization framework instead of relying on conventional randomization techniques. Models for extraction of local image features can be adapted according to homography transformations that are determined to be most relevant or optimal for a user's target environment.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: November 21, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Vibhav Vineet, Ondrej Miksik, Vishnu Sai Rao Suresh Lokhande
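The abstract above contrasts optimized homography selection with the conventional randomized scheme for homographic adaptation. The sketch below illustrates only the underlying mechanics, not the patented bilevel optimization: all function names (`fit_homography`, `sample_homography`, `warp_points`, `homographic_adaptation`) and the corner-perturbation sampler are illustrative assumptions.

```python
import numpy as np

def fit_homography(src, dst):
    # Direct linear transform: solve A h = 0 for the 3x3 homography mapping src -> dst.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def sample_homography(rng, max_shift=0.1):
    # Randomly perturb the unit square's corners -- the conventional randomized
    # scheme; the patent instead selects transforms by bilevel optimization.
    src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    dst = src + rng.uniform(-max_shift, max_shift, src.shape)
    return fit_homography(src, dst)

def warp_points(H, pts):
    # Apply a homography to 2D points via homogeneous coordinates.
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def homographic_adaptation(score_fn, pts, homographies):
    # Average a detector's response at each point over warped views; points that
    # stay salient under many homographies make stable local features.
    scores = sum(score_fn(warp_points(H, pts)) for H in homographies)
    return scores / len(homographies)
```

In this toy form, swapping `sample_homography` for a transform chosen to match a specific target environment is where the personalization described in the abstract would enter.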
  • Publication number: 20230154032
    Abstract: In various embodiments there is a method for camera localization within a scene. An image of a scene captured by the camera is input to a machine learning model, which has been trained for the particular scene to detect a plurality of 3D scene landmarks. The 3D scene landmarks are pre-specified in a pre-built map of the scene. The machine learning model outputs a plurality of predictions, each prediction comprising: either a 2D location in the image which is predicted to depict one of the 3D scene landmarks, or a 3D bearing vector, being a vector originating at the camera and pointing towards a predicted 3D location of one of the 3D scene landmarks. Using the predictions, an estimate of a position and orientation of the camera in the pre-built map of the scene is computed.
    Type: Application
    Filed: February 3, 2022
    Publication date: May 18, 2023
    Inventors: Sudipta Narayan SINHA, Ondrej MIKSIK, Joseph Michael DEGOL, Tien DO
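Once the model has matched 2D image locations to pre-specified 3D scene landmarks, the final step in the abstract is computing camera position and orientation from those correspondences. A minimal stand-in for that step is a direct-linear-transform pose solver; the function name `pose_from_landmarks` and the unrefined DLT formulation are assumptions (a production localizer would use a robust PnP solver over the predicted landmarks and bearing vectors).

```python
import numpy as np

def pose_from_landmarks(pts3d, pts2d):
    # DLT: estimate the 3x4 projection matrix P from >= 6 correspondences
    # between known 3D landmarks and their detected 2D image locations.
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # The camera centre C is the right null vector of P (P @ C_homogeneous = 0).
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return P, C[:3] / C[3]
```

Given noiseless correspondences this recovers the camera centre exactly; with the noisy predictions of a real network one would wrap it in RANSAC and refine by reprojection-error minimization.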
  • Publication number: 20220261594
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to optimizing the accuracy of local feature detection in a variety of physical environments. Homographic adaptation for facilitating personalization of local feature models to specific target environments is formulated in a bilevel optimization framework instead of relying on conventional randomization techniques. Models for extraction of local image features can be adapted according to homography transformations that are determined to be most relevant or optimal for a user's target environment.
    Type: Application
    Filed: February 18, 2021
    Publication date: August 18, 2022
    Inventors: Vibhav VINEET, Ondrej MIKSIK, Vishnu Sai Rao Suresh LOKHANDE
  • Patent number: 11378977
    Abstract: A robotic system is controlled. Audiovisual data representing an environment in which at least part of the robotic system is located is received via at least one camera and at least one microphone. The audiovisual data comprises a visual data component representing a visible part of the environment and an audio data component representing an audible part of the environment. A location of a sound source that emits sound that is represented in the audio data component of the audiovisual data is identified based on the audio data component of the audiovisual data. The sound source is outside the visible part of the environment and is not represented in the visual data component of the audiovisual data. Operation of a controllable element located in the environment is controlled based on the identified location of the sound source.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: July 5, 2022
    Assignee: Emotech Ltd
    Inventors: Ondrej Miksik, Pawel Swietojanski, Srikanth Reddy Bethi, Raymond W. M. Ng
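The abstract describes locating an out-of-view sound source from the audio component alone. A classical building block for this is time-difference-of-arrival (TDOA) estimation between two microphones; the sketch below assumes this simple cross-correlation approach and a far-field model, and the names `tdoa_delay` and `bearing_from_delay` are illustrative, not taken from the patent.

```python
import numpy as np

def tdoa_delay(sig_a, sig_b):
    # Estimate how many samples sig_b lags behind sig_a via cross-correlation.
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

def bearing_from_delay(delay_samples, fs, mic_spacing, c=343.0):
    # Far-field model: sin(theta) = c * tau / d, with theta measured from the
    # broadside direction of the two-microphone array.
    s = np.clip(c * delay_samples / fs / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

The recovered bearing could then drive the controllable element mentioned in the abstract, e.g. turning a robot head toward a speaker standing outside the camera's field of view.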
  • Patent number: 11328182
    Abstract: A three-dimensional (3D) map inconsistency detection machine includes an input transformation layer connected to a neural network. The input transformation layer is configured to 1) receive a test 3D map including 3D map data modeling a physical entity, 2) transform the 3D map data into a set of 2D images collectively corresponding to volumes of view frustums of a plurality of virtual camera views of the physical entity modeled by the test 3D map, and 3) output the set of 2D images to the neural network. The neural network is configured to output an inconsistency value indicating a degree to which the test 3D map includes inconsistencies based on analysis of the set of 2D images collectively corresponding to the volumes of the view frustums of the plurality of virtual camera views.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: May 10, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lukas Gruber, Christoph Vogel, Ondrej Miksik, Marc Andre Leon Pollefeys
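The key idea in the abstract is the input transformation layer: converting 3D map data into a stack of 2D images from virtual camera views before any learned analysis. A minimal sketch of that transformation, assuming the map is a point set and each view renders a small depth image (the names `render_depth` and `view_stack`, the pinhole model, and the resolution are all illustrative choices, not the patent's frustum-volume formulation):

```python
import numpy as np

def render_depth(points, cam_pos, cam_R, res=16, f=1.0):
    # Project the 3D map points seen from one virtual camera into a small 2D
    # depth image; the nearest point wins at each pixel.
    p = (points - cam_pos) @ cam_R.T           # world -> camera frame
    p = p[p[:, 2] > 1e-6]                      # keep points in front of the camera
    uv = f * p[:, :2] / p[:, 2:3]              # pinhole projection
    ij = ((uv + 1.0) * 0.5 * (res - 1)).round().astype(int)
    depth = np.full((res, res), np.inf)
    for (i, j), z in zip(ij, p[:, 2]):
        if 0 <= i < res and 0 <= j < res:
            depth[j, i] = min(depth[j, i], z)
    return depth

def view_stack(points, cam_poses, res=16):
    # Collect depth images from several virtual views; in the patented system a
    # trained neural network maps such a stack to an inconsistency score.
    return np.stack([render_depth(points, c, R, res) for c, R in cam_poses])
```

Rendering the same map region from multiple virtual viewpoints is what lets a downstream 2D network spot artifacts, such as duplicated or misaligned surfaces, that are hard to detect in the raw 3D representation.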
  • Publication number: 20210383172
    Abstract: A three-dimensional (3D) map inconsistency detection machine includes an input transformation layer connected to a neural network. The input transformation layer is configured to 1) receive a test 3D map including 3D map data modeling a physical entity, 2) transform the 3D map data into a set of 2D images collectively corresponding to volumes of view frustums of a plurality of virtual camera views of the physical entity modeled by the test 3D map, and 3) output the set of 2D images to the neural network. The neural network is configured to output an inconsistency value indicating a degree to which the test 3D map includes inconsistencies based on analysis of the set of 2D images collectively corresponding to the volumes of the view frustums of the plurality of virtual camera views.
    Type: Application
    Filed: June 9, 2020
    Publication date: December 9, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Lukas GRUBER, Christoph VOGEL, Ondrej MIKSIK, Marc Andre Leon POLLEFEYS
  • Publication number: 20200019184
    Abstract: A robotic system is controlled. Audiovisual data representing an environment in which at least part of the robotic system is located is received via at least one camera and at least one microphone. The audiovisual data comprises a visual data component representing a visible part of the environment and an audio data component representing an audible part of the environment. A location of a sound source that emits sound that is represented in the audio data component of the audiovisual data is identified based on the audio data component of the audiovisual data. The sound source is outside the visible part of the environment and is not represented in the visual data component of the audiovisual data. Operation of a controllable element located in the environment is controlled based on the identified location of the sound source.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 16, 2020
    Inventors: Ondrej MIKSIK, Pawel SWIETOJANSKI, Srikanth REDDY, Raymond W.M. NG