Patents by Inventor Tomasz Malisiewicz

Tomasz Malisiewicz has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11657286
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where required.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: May 23, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
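The abstract above describes a greedy search that grows a network's capacity only where it yields the largest gain, rather than uniformly deepening the architecture. As a loose, hypothetical illustration of that kind of greedy capacity allocation (not the patented method), the sketch below repeatedly widens whichever of three random-feature blocks most reduces validation error on a toy regression task; the task, the blocks, and all names here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # toy inputs
y = np.tanh(X @ rng.normal(size=10))      # toy regression target

def val_error(widths):
    """Validation MSE of a least-squares readout over random-feature
    blocks, one block per entry in widths (0 = block absent)."""
    feats = [X]
    block_rng = np.random.default_rng(1)
    for w in widths:
        if w > 0:
            proj = block_rng.normal(size=(X.shape[1], w))
            feats.append(np.tanh(X @ proj))
    F = np.hstack(feats)
    tr, va = slice(0, 150), slice(150, 200)
    coef, *_ = np.linalg.lstsq(F[tr], y[tr], rcond=None)
    return float(np.mean((F[va] @ coef - y[va]) ** 2))

widths = [0, 0, 0]                        # three candidate blocks, no capacity yet
for _ in range(5):                        # five greedy growth steps
    trials = [val_error(widths[:i] + [widths[i] + 8] + widths[i + 1:])
              for i in range(len(widths))]
    widths[int(np.argmin(trials))] += 8   # grow where the measured gain is largest
```

The greedy loop only ever commits the single widening that scored best on held-out data, which mirrors the abstract's idea of adding capacity at measured bottlenecks instead of everywhere.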
  • Patent number: 11593654
    Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: February 28, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
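The training procedure in the abstract above is a supervised loop: estimate a homography, compare it to the ground-truth homography from the training triplet, and modify the network based on the comparison. A minimal numpy sketch of that loop, where a single linear layer stands in for the neural network and an 8-vector stands in for a homography parameterization (both stand-ins, and the random data, are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 128))        # stand-in for flattened image/patch pairs
H_true = rng.normal(size=(32, 8))     # ground-truth homography parameters per pair
W = np.zeros((128, 8))                # a single linear layer as the "network"

loss_before = np.mean((X @ W - H_true) ** 2)
for _ in range(200):
    H_est = X @ W                                # generate an estimated homography
    grad = X.T @ (H_est - H_true) / len(X)       # compare estimate to ground truth
    W -= 0.05 * grad                             # modify the network accordingly
loss_after = np.mean((X @ W - H_true) ** 2)
```

Each pass performs the abstract's three steps (estimate, compare, modify); after a few hundred gradient steps the squared error against the ground-truth homographies drops well below its starting value.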
  • Publication number: 20210365785
    Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
    Type: Application
    Filed: June 7, 2021
    Publication date: November 25, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
  • Patent number: 11062209
    Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 13, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
  • Publication number: 20210182636
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where required.
    Type: Application
    Filed: February 23, 2021
    Publication date: June 17, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Patent number: 10963758
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where required.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: March 30, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Patent number: 10657376
    Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: May 19, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Malisiewicz, Andrew Rabinovich
  • Publication number: 20200097819
    Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
    Type: Application
    Filed: September 30, 2019
    Publication date: March 26, 2020
    Applicant: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
  • Patent number: 10489708
    Abstract: A method for generating inputs for a neural network based on an image includes receiving the image, identifying a position within the image, and identifying a subset of the image at the position. The subset of the image is defined by a first set of corners. The method also includes perturbing at least one of the first set of corners to form a second set of corners. The second set of corners defines a modified subset of the image. The method further includes determining a homography based on a comparison between the subset of the image and the modified subset of the image, generating a transformed image by applying the homography to the image, and identifying a subset of the transformed image at the position.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: November 26, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
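The input-generation method above is concrete enough to sketch: take a patch defined by a first set of corners, perturb those corners to get a second set, and determine the homography relating the two. A minimal numpy version, using the standard direct linear transform (DLT) to solve for the homography (the patch position, perturbation range, and function names are assumptions for illustration):

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: solve for the 3x3 homography H
    mapping each src point to its corresponding dst point."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector of the 8x9 system
    return H / H[2, 2]                # normalize so H[2, 2] == 1

def apply_h(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:]

rng = np.random.default_rng(0)
# first set of corners: a 128x128 subset of the image at position (32, 32)
corners = np.array([[32, 32], [160, 32], [160, 160], [32, 160]], float)
# perturb each corner by up to +/-16 px to form the second set of corners
perturbed = corners + rng.uniform(-16, 16, size=(4, 2))
H = homography_from_points(corners, perturbed)
```

Warping the full image with `H` (e.g. via an inverse per-pixel lookup) and re-extracting the subset at the same position then yields the transformed-image patch of the training triplet; `H` itself is the ground-truth label.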
  • Publication number: 20190286951
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where required.
    Type: Application
    Filed: March 27, 2019
    Publication date: September 19, 2019
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Patent number: 10255529
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where required.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: April 9, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Publication number: 20180268220
    Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
    Type: Application
    Filed: March 16, 2018
    Publication date: September 20, 2018
    Inventors: Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Malisiewicz, Andrew Rabinovich
  • Publication number: 20180137642
    Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolution neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 17, 2018
    Inventors: Tomasz Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
  • Publication number: 20170337470
    Abstract: A method for generating inputs for a neural network based on an image includes receiving the image, identifying a position within the image, and identifying a subset of the image at the position. The subset of the image is defined by a first set of corners. The method also includes perturbing at least one of the first set of corners to form a second set of corners. The second set of corners defines a modified subset of the image. The method further includes determining a homography based on a comparison between the subset of the image and the modified subset of the image, generating a transformed image by applying the homography to the image, and identifying a subset of the transformed image at the position.
    Type: Application
    Filed: May 19, 2017
    Publication date: November 23, 2017
    Applicant: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
  • Publication number: 20170262737
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity are added only where required.
    Type: Application
    Filed: March 13, 2017
    Publication date: September 14, 2017
    Applicant: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel DeTone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz