Patents by Inventor Ali Osman ULUSOY

Ali Osman ULUSOY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
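Short, hedged code sketches illustrating the core techniques described in the abstracts (but not the claimed implementations) appear after the listing.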

  • Patent number: 11010961
    Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: May 18, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Ali Osman Ulusoy, Hendrik Mark Langerak, Michelle Brook
  • Patent number: 10846923
    Abstract: A method for spatial mapping of an environment comprises receiving a plurality of depth images of an environment via an imaging device, each depth image associated with a local coordinate system. For each local coordinate system, each associated depth image is fused to generate a local volume. Each local volume is then fused into a global volume having a global coordinate system, and then a surface mesh is extracted for the global volume. One or more regions of inconsistency within the global volume are determined and localized to one or more erroneous local volumes. The one or more erroneous local volumes are unfused from the global volume, and then non-erroneous local volumes are re-fused into a corrected global volume. By using a two-step fusion process, regions of inconsistency, such as mirror reflections, may be corrected without requiring reconstruction of the entire global volume.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: November 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yuri Pekelny, Ali Osman Ulusoy, Salah Eddine Nouri
  • Patent number: 10825217
    Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ali Osman Ulusoy, Yuri Pekelny, Szymon Piotr Stachniak
  • Publication number: 20200226820
    Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
    Type: Application
    Filed: March 13, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr STACHNIAK, Ali Osman ULUSOY, Hendrik Mark LANGERAK, Michelle BROOK
  • Publication number: 20200211243
    Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
    Type: Application
    Filed: January 2, 2019
    Publication date: July 2, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ali Osman ULUSOY, Yuri PEKELNY, Szymon Piotr STACHNIAK
  • Publication number: 20190362544
    Abstract: A method for spatial mapping of an environment comprises receiving a plurality of depth images of an environment via an imaging device, each depth image associated with a local coordinate system. For each local coordinate system, each associated depth image is fused to generate a local volume. Each local volume is then fused into a global volume having a global coordinate system, and then a surface mesh is extracted for the global volume. One or more regions of inconsistency within the global volume are determined and localized to one or more erroneous local volumes. The one or more erroneous local volumes are unfused from the global volume, and then non-erroneous local volumes are re-fused into a corrected global volume. By using a two-step fusion process, regions of inconsistency, such as mirror reflections, may be corrected without requiring reconstruction of the entire global volume.
    Type: Application
    Filed: May 24, 2018
    Publication date: November 28, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yuri PEKELNY, Ali Osman ULUSOY, Salah Eddine NOURI
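
Illustrative code sketches (illustration only, not from the patent filings)

The scene-understanding flow summarized in patent 11010961 and publication 20200226820 — capture scene data, run a machine learning model that outputs object boundary data and labels, fit a virtual object to the geometric characteristics of each identified physical object, and refresh the results as the scene data updates — can be pictured with a minimal Python sketch. Everything here is an assumption made for illustration: the stub detector, the axis-aligned-box fitting rule, and names such as run_detector and fit_virtual_object are hypothetical and are not the patented implementation.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        point_indices: np.ndarray      # scene points falling inside the predicted boundary

    @dataclass
    class VirtualObject:
        label: str
        center: np.ndarray
        half_extents: np.ndarray       # simple box fit to the object's geometric extent

    def run_detector(points: np.ndarray) -> list[Detection]:
        """Stand-in for the ML model that outputs object boundary data and labels."""
        # Placeholder: treat the whole cloud as one "object"; a real model would segment it.
        return [Detection(label="object", point_indices=np.arange(len(points)))]

    def fit_virtual_object(points: np.ndarray, det: Detection) -> VirtualObject:
        """Fit a box to the geometric characteristics of one identified object."""
        obj_pts = points[det.point_indices]
        lo, hi = obj_pts.min(axis=0), obj_pts.max(axis=0)
        return VirtualObject(label=det.label, center=(lo + hi) / 2, half_extents=(hi - lo) / 2)

    def update_scene(points: np.ndarray, tracked: dict) -> dict:
        """One pass over new scene data: detect, fit, and refresh tracked virtual objects."""
        for det in run_detector(points):
            tracked[det.label] = fit_virtual_object(points, det)   # naive re-fit as "tracking"
        return tracked

    tracked = {}
    frame = np.random.rand(1000, 3)      # placeholder for a reconstructed point cloud
    tracked = update_scene(frame, tracked)
    print(tracked["object"].center)

Keying tracked objects by label is only to keep the example short; any association or identity scheme could stand in for the tracking step described in the abstract.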
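
The two-step fusion described in patent 10846923 and publication 20190362544 hinges on being able to take an erroneous local volume back out of the global volume without rebuilding everything. Below is a minimal sketch of that fuse/unfuse idea over TSDF grids, assuming fusion is a weighted running average (which is invertible given the stored weights); the grid sizes, the LocalVolume/GlobalVolume names, the inconsistency test (omitted), and the omitted registration step are all assumptions, not the patented method.

    import numpy as np

    class LocalVolume:
        """TSDF + weight grid accumulated from depth images sharing one local coordinate system."""
        def __init__(self, shape=(64, 64, 64)):
            self.tsdf = np.zeros(shape, dtype=np.float32)
            self.weight = np.zeros(shape, dtype=np.float32)

        def fuse_depth(self, tsdf_obs, weight_obs=1.0):
            """Fold one depth observation (already converted to TSDF form) into the local grid."""
            new_w = self.weight + weight_obs
            self.tsdf = (self.tsdf * self.weight + tsdf_obs * weight_obs) / np.maximum(new_w, 1e-6)
            self.weight = new_w

    class GlobalVolume:
        """Global TSDF grid that local volumes can be fused into and unfused from."""
        def __init__(self, shape=(64, 64, 64)):
            self.tsdf = np.zeros(shape, dtype=np.float32)
            self.weight = np.zeros(shape, dtype=np.float32)

        def fuse(self, local):
            # Weighted running average; assumes the local grid has already been
            # resampled into the global coordinate system (registration omitted).
            new_w = self.weight + local.weight
            self.tsdf = (self.tsdf * self.weight + local.tsdf * local.weight) / np.maximum(new_w, 1e-6)
            self.weight = new_w

        def unfuse(self, local):
            # Remove a previously fused local volume by inverting the weighted average,
            # so the rest of the global volume does not have to be rebuilt.
            new_w = np.maximum(self.weight - local.weight, 0.0)
            self.tsdf = np.where(
                new_w > 1e-6,
                (self.tsdf * self.weight - local.tsdf * local.weight) / np.maximum(new_w, 1e-6),
                0.0,
            )
            self.weight = new_w

    # Example: fuse two local volumes, then undo the one found to be inconsistent
    # (e.g. a mirror reflection) without rebuilding the whole global volume.
    good, bad = LocalVolume(), LocalVolume()
    good.fuse_depth(np.full(good.tsdf.shape, 0.5, dtype=np.float32))
    bad.fuse_depth(np.full(bad.tsdf.shape, -1.0, dtype=np.float32))
    world = GlobalVolume()
    world.fuse(good)
    world.fuse(bad)
    world.unfuse(bad)                      # corrected global volume, good data preserved
    print(np.allclose(world.tsdf, 0.5))    # True

Because the running average keeps per-voxel weights, removing one contributor is exact. The sketch inverts the average directly; the abstract describes the equivalent route of unfusing the erroneous local volume and re-fusing the non-erroneous ones into a corrected global volume.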
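
Patent 10825217 and publication 20200211243 describe generating a two-dimensional bounding shape by projecting a three-dimensional bounding volume onto an optical sensor's imaging surface. A minimal sketch of that projection step, assuming a pinhole camera model, an axis-aligned box as the volume, and an axis-aligned rectangle as the 2D shape (all illustration-only choices, not the claimed method), could look like this:

    import numpy as np

    def box_corners(center, half_extents):
        """Eight corners of an axis-aligned 3D bounding box, in camera coordinates."""
        signs = np.array([[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
        return center + signs * half_extents

    def project_points(points_cam, fx, fy, cx, cy):
        """Pinhole projection of camera-space points (Z forward) to pixel coordinates."""
        z = np.maximum(points_cam[:, 2], 1e-6)          # guard against division by zero
        u = fx * points_cam[:, 0] / z + cx
        v = fy * points_cam[:, 1] / z + cy
        return np.stack([u, v], axis=1)

    def bounding_rect(pixels):
        """Axis-aligned 2D rectangle (u_min, v_min, u_max, v_max) enclosing the projections."""
        return (*pixels.min(axis=0), *pixels.max(axis=0))

    # Example: a 0.5 m cube, 2 m in front of a 640x480 camera.
    corners = box_corners(center=np.array([0.0, 0.0, 2.0]), half_extents=np.array([0.25, 0.25, 0.25]))
    rect = bounding_rect(project_points(corners, fx=500, fy=500, cx=320, cy=240))
    print(rect)  # 2D bounding shape that a user could then adjust via input events

The resulting rectangle is what the abstract's display and user-input steps would operate on; modifying it in response to user input is left out of the sketch.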