Patents by Inventor Trebor Lee CONNELL

Trebor Lee CONNELL has filed for patents covering the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10909423
    Abstract: Data representing a scene is received. The scene includes labeled elements such as walls, a floor, a ceiling, and objects placed at various locations in the scene. The original received scene may be modified in different ways to create new scenes that are based on the original scene. These modifications include adding clutter to the scene, moving one or more elements of the scene, swapping one or more elements of the scene with different labeled elements, changing the size, color, or materials associated with one or more of the elements of the scene, and changing the lighting used in the scene. Each new scene may be used to generate labeled training data for a classifier by placing a virtual sensor (e.g., a camera) in the new scene, and generating sensor output data for the virtual sensor based on its placement in the new scene.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: February 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Trebor Lee Connell, Emanuel Shalev, Michael John Ebstyne, Don Dongwoo Kim
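    The scene-modification pipeline described in the abstract above can be sketched in a few lines of code. This is a minimal illustration only: the `Scene` and `Element` classes, the specific modification choices, and all parameter ranges are assumptions for the sketch, not details taken from the patent.

    ```python
    import random
    from dataclasses import dataclass, replace

    # Hypothetical minimal scene model (illustrative, not from the patent).
    @dataclass(frozen=True)
    class Element:
        label: str                # e.g. "wall", "floor", "chair"
        position: tuple           # (x, y, z) location in the scene
        size: float = 1.0
        color: str = "gray"

    @dataclass
    class Scene:
        elements: list
        lighting: str = "daylight"

    def make_variants(scene, n, rng=None):
        """Generate n modified copies of a labeled scene by randomly
        moving, resizing, or recoloring elements and changing lighting."""
        rng = rng or random.Random(0)
        variants = []
        for _ in range(n):
            new_elements = []
            for el in scene.elements:
                choice = rng.choice(["move", "resize", "recolor", "keep"])
                if choice == "move":
                    x, y, z = el.position
                    dx, dy, dz = (rng.uniform(-1, 1) for _ in range(3))
                    el = replace(el, position=(x + dx, y + dy, z + dz))
                elif choice == "resize":
                    el = replace(el, size=el.size * rng.uniform(0.5, 1.5))
                elif choice == "recolor":
                    el = replace(el, color=rng.choice(["red", "green", "blue"]))
                new_elements.append(el)
            lighting = rng.choice(["daylight", "indoor", "dusk"])
            variants.append(Scene(new_elements, lighting))
        return variants
    ```

    Because each element keeps its label through every modification, each variant scene remains fully labeled; rendering a virtual sensor view of a variant would then yield sensor output paired with ground-truth labels, which is the training-data idea the abstract describes.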
  • Patent number: 10713836
    Abstract: Examples are disclosed that relate to computing devices and methods for simulating light passing through one or more lenses. In one example, a method comprises obtaining a point spread function of the one or more lenses, obtaining a first input raster image comprising a plurality of pixels, and ray tracing the first input raster image using the point spread function to generate a first output image. Based on ray tracing the first input raster image, a lookup table is generated by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image. A second input raster image is obtained, and the lookup table is used to generate a second output image from the second input raster image.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Trebor Lee Connell
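    The lookup-table idea in the abstract above can be approximated as follows, under simplifying assumptions: the table of per-offset pixel weights is taken directly from a normalized point spread function rather than derived by ray tracing a first image, and applying it to a new image is a plain weighted accumulation. Both function names are hypothetical.

    ```python
    import numpy as np

    def build_lut(psf):
        """Normalize a point spread function into a table of per-offset
        weights (a simplified stand-in for the patent's lookup table)."""
        return psf / psf.sum()

    def apply_lut(image, lut):
        """For each output pixel, accumulate the weighted contributions of
        nearby input pixels given by the lookup table."""
        kh, kw = lut.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
        out = np.zeros(image.shape, dtype=float)
        for dy in range(kh):
            for dx in range(kw):
                # Shifted view of the input, weighted by this offset's entry.
                out += lut[dy, dx] * padded[dy:dy + image.shape[0],
                                            dx:dx + image.shape[1]]
        return out
    ```

    The efficiency point the abstract makes carries over even in this sketch: the expensive step (building the table) happens once, and any number of subsequent images can be processed with only cheap table lookups and additions.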
  • Publication number: 20190392628
    Abstract: Examples are disclosed that relate to computing devices and methods for simulating light passing through one or more lenses. In one example, a method comprises obtaining a point spread function of the one or more lenses, obtaining a first input raster image comprising a plurality of pixels, and ray tracing the first input raster image using the point spread function to generate a first output image. Based on ray tracing the first input raster image, a lookup table is generated by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image. A second input raster image is obtained, and the lookup table is used to generate a second output image from the second input raster image.
    Type: Application
    Filed: June 25, 2018
    Publication date: December 26, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventor: Trebor Lee Connell
  • Publication number: 20190377980
    Abstract: Data representing a scene is received. The scene includes labeled elements such as walls, a floor, a ceiling, and objects placed at various locations in the scene. The original received scene may be modified in different ways to create new scenes that are based on the original scene. These modifications include adding clutter to the scene, moving one or more elements of the scene, swapping one or more elements of the scene with different labeled elements, changing the size, color, or materials associated with one or more of the elements of the scene, and changing the lighting used in the scene. Each new scene may be used to generate labeled training data for a classifier by placing a virtual sensor (e.g., a camera) in the new scene, and generating sensor output data for the virtual sensor based on its placement in the new scene.
    Type: Application
    Filed: June 7, 2018
    Publication date: December 12, 2019
    Inventors: Trebor Lee Connell, Emanuel Shalev, Michael John Ebstyne, Don Dongwoo Kim