Patents by Inventor Ali Osman ULUSOY
Ali Osman ULUSOY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20240193815
Abstract: A method may receive first image data from a first camera and second image data from a second camera. The method may determine a calibration target position based on an extrinsic calibration, the first image data, the second image data, and calibration target information. The method may measure a first reprojection error of the first camera based on the extrinsic calibration, the calibration target position, and the first image data, and may measure a second reprojection error of the second camera based on the extrinsic calibration, the calibration target position, and the second image data. Upon determining that either the first reprojection error or the second reprojection error is greater than a threshold reprojection error, the method may provide an indication that the first camera and the second camera are out of calibration.
Type: Application
Filed: December 8, 2023
Publication date: June 13, 2024
Inventor: Ali Osman Ulusoy
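
To make the test concrete, here is a minimal sketch of the reprojection-error check described in the abstract. It assumes zero-skew pinhole intrinsics and that the calibration target's pose and its detected 2-D corner locations have already been estimated elsewhere (e.g., by a PnP solver); the function names and the 1-pixel default threshold are illustrative, not from the patent.

```python
import numpy as np

def project_points(points_world, R, t, K):
    """Project Nx3 world points through a zero-skew pinhole camera (R, t, K)."""
    cam = points_world @ R.T + t           # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]          # perspective divide (assumes depth > 0)
    return uv @ K[:2, :2].T + K[:2, 2]     # apply focal lengths and principal point

def reprojection_rms(points_world, detections_px, R, t, K):
    """RMS pixel distance between projected target points and detected corners."""
    diff = project_points(points_world, R, t, K) - detections_px
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

def check_calibration(target_points, det1, det2, cam1, cam2, threshold_px=1.0):
    """Per the abstract: flag the pair as out of calibration if either
    camera's reprojection error exceeds the threshold."""
    err1 = reprojection_rms(target_points, det1, *cam1)  # cam1 = (R1, t1, K1)
    err2 = reprojection_rms(target_points, det2, *cam2)
    return err1, err2, (err1 > threshold_px or err2 > threshold_px)
```

Using an RMS statistic here is one reasonable choice; a per-point maximum would implement a stricter version of the same threshold test.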

Patent number: 11010961
Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
Type: Grant
Filed: March 13, 2019
Date of Patent: May 18, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Ali Osman Ulusoy, Hendrik Mark Langerak, Michelle Brook
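
As a rough illustration of the fit-and-track loop in this abstract, the sketch below fits an axis-aligned box (the "virtual object") to the 3-D points the ML model assigned to each detected object, then re-associates detections with previously tracked objects by label and center distance across scene updates. The detection format, the nearest-center matching heuristic, and the 0.25 m radius are all assumptions made for illustration; the patent does not specify them.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VirtualObject:
    label: str
    center: np.ndarray    # fitted box center
    extents: np.ndarray   # fitted box size per axis

def fit_virtual_object(label, points):
    """Fit an axis-aligned box ('virtual object') to one object's 3-D points."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return VirtualObject(label, (lo + hi) / 2.0, hi - lo)

def update_scene(scene_points, detections, tracked, match_radius=0.25):
    """detections: [(label, point_indices), ...] from the ML model.
    Re-associate each detection with the nearest tracked object of the
    same label within match_radius; otherwise start tracking a new one."""
    updated = []
    for label, idx in detections:
        obj = fit_virtual_object(label, scene_points[idx])
        candidates = [t for t in tracked if t.label == label
                      and np.linalg.norm(t.center - obj.center) < match_radius]
        if candidates:
            prev = min(candidates,
                       key=lambda t: np.linalg.norm(t.center - obj.center))
            prev.center, prev.extents = obj.center, obj.extents  # keep identity
            updated.append(prev)
        else:
            updated.append(obj)
    return updated
```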

Patent number: 10846923
Abstract: A method for spatial mapping of an environment comprises receiving a plurality of depth images of an environment via an imaging device, each depth image associated with a local coordinate system. For each local coordinate system, each associated depth image is fused to generate a local volume. Each local volume is then fused into a global volume having a global coordinate system, and then a surface mesh is extracted for the global volume. One or more regions of inconsistency within the global volume are determined and localized to one or more erroneous local volumes. The one or more erroneous local volumes are unfused from the global volume, and then non-erroneous local volumes are re-fused into a corrected global volume. By using a two-step fusion process, regions of inconsistency, such as mirror reflections, may be corrected without requiring reconstruction of the entire global volume.
Type: Grant
Filed: May 24, 2018
Date of Patent: November 24, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuri Pekelny, Ali Osman Ulusoy, Salah Eddine Nouri
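
The abstract's unfuse/re-fuse correction is cheap when fusion is a per-voxel weighted average, because each local volume's contribution can then be subtracted back out exactly. Below is a toy sketch under that assumption (a TSDF-style running average); the patent's actual volume representation may differ.

```python
import numpy as np

class FusionVolume:
    """Toy volume keeping a per-voxel running weighted average (TSDF-style)."""

    def __init__(self, shape):
        self.values = np.zeros(shape, dtype=np.float32)
        self.weights = np.zeros(shape, dtype=np.float32)

    def fuse(self, values, weights):
        """Add one local volume's contribution into the global average."""
        total = self.weights + weights
        nz = total > 0
        self.values[nz] = (self.values[nz] * self.weights[nz]
                           + values[nz] * weights[nz]) / total[nz]
        self.weights = total

    def unfuse(self, values, weights):
        """Subtract a previously fused contribution; voxels the erroneous
        local volume never touched are left exactly as they were."""
        remaining = self.weights - weights
        nz = remaining > 0
        self.values[nz] = (self.values[nz] * self.weights[nz]
                           - values[nz] * weights[nz]) / remaining[nz]
        self.values[~nz] = 0.0
        self.weights = np.maximum(remaining, 0.0)

# Sketch of the correction flow from the abstract: fuse every local volume,
# localize a region of inconsistency to local volume j, unfuse volume j,
# then re-fuse any non-erroneous volumes as needed and re-extract the mesh,
# without reconstructing the entire global volume.
```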

Patent number: 10825217
Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
Type: Grant
Filed: January 2, 2019
Date of Patent: November 3, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ali Osman Ulusoy, Yuri Pekelny, Szymon Piotr Stachniak
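
One plausible reading of the projection step is sketched below: project the eight corners of the 3-D bounding volume through the sensor's pinhole model and take the axis-aligned extent of the projected corners as the 2-D bounding shape. The axis-aligned box parameterization and zero-skew intrinsics are assumptions; the patent's bounding volume and bounding shape need not be axis-aligned rectangles.

```python
import numpy as np

def box_corners(center, extents):
    """Eight corners of an axis-aligned 3-D bounding volume."""
    signs = np.array([[x, y, z] for x in (-1, 1)
                                for y in (-1, 1)
                                for z in (-1, 1)], dtype=np.float64)
    return center + 0.5 * signs * extents

def project_box_to_2d(center, extents, R, t, K):
    """Project the bounding volume into the sensor image and take the
    axis-aligned extent of the projected corners as the 2-D bounding shape."""
    cam = box_corners(center, extents) @ R.T + t   # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]                  # assumes all corners in front
    pix = uv @ K[:2, :2].T + K[:2, 2]              # zero-skew pinhole intrinsics
    return pix.min(axis=0), pix.max(axis=0)        # (u_min, v_min), (u_max, v_max)
```

The returned rectangle is then what the user would see overlaid on the image and could drag to adjust, per the final step of the abstract.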

Publication number: 20200226820
Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
Type: Application
Filed: March 13, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr STACHNIAK, Ali Osman ULUSOY, Hendrik Mark LANGERAK, Michelle BROOK

Publication number: 20200211243
Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
Type: Application
Filed: January 2, 2019
Publication date: July 2, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ali Osman ULUSOY, Yuri PEKELNY, Szymon Piotr STACHNIAK

Publication number: 20190362544
Abstract: A method for spatial mapping of an environment comprises receiving a plurality of depth images of an environment via an imaging device, each depth image associated with a local coordinate system. For each local coordinate system, each associated depth image is fused to generate a local volume. Each local volume is then fused into a global volume having a global coordinate system, and then a surface mesh is extracted for the global volume. One or more regions of inconsistency within the global volume are determined and localized to one or more erroneous local volumes. The one or more erroneous local volumes are unfused from the global volume, and then non-erroneous local volumes are re-fused into a corrected global volume. By using a two-step fusion process, regions of inconsistency, such as mirror reflections, may be corrected without requiring reconstruction of the entire global volume.
Type: Application
Filed: May 24, 2018
Publication date: November 28, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Yuri PEKELNY, Ali Osman ULUSOY, Salah Eddine NOURI