Patents by Inventor Kyle L. Simek

Kyle L. Simek has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928779
    Abstract: Various implementations disclosed herein generate a mesh representing the surfaces in a physical environment. The mesh is generated using multi-resolution voxels based on detected depth information, e.g., from a depth camera. The techniques may use multiple hash tables to store the multi-resolution voxel data. For example, the hash tables may store each voxel's 3D position and a truncated signed distance field (TSDF) value corresponding to each voxel's distance to a nearest surface. Each of the multiple hash tables may include data corresponding to a different level of resolution, and those resolutions may depend upon distance/noise or other factors. For example, voxels close to a depth camera may have a finer resolution and smaller size compared to voxels that are further from the depth camera. Techniques disclosed herein may involve using a meshing algorithm that combines multi-resolution voxel information stored in multiple hash tables to generate a single mesh. (An illustrative code sketch of this multi-resolution voxel hashing appears after this listing.)
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: March 12, 2024
    Assignee: Apple Inc.
    Inventors: Maxime Meilland, Andrew Predoehl, Kyle L. Simek, Ming Chuang, Pedro A. Pinies Rodriguez
  • Publication number: 20240077985
    Abstract: An electronic device may include one or more sensors that capture sensor data for a physical environment around the electronic device. The sensor data may be used to determine a scene understanding data set for an extended reality environment including the electronic device. The scene understanding data set may include spatial information as well as information regarding physical objects and virtual objects in the extended reality environment. When providing scene understanding data to one or more applications running on the electronic device, spatial and/or temporal restrictions may be applied to the scene understanding data set. Scene understanding data associated with locations within a boundary and with times after a cutoff time may be provided to an application. (An illustrative sketch of this spatial/temporal gating appears after this listing.)
    Type: Application
    Filed: June 21, 2023
    Publication date: March 7, 2024
    Inventors: Divya T. Ramakrishnan, Brandon J. Van Ryswyk, Reinhard Klapfer, Antti P. Saarinen, Kyle L. Simek, Aitor Aldoma Buchaca, Tobias Böttger-Brill, Robert Maier, Ming Chuang
  • Publication number: 20230215081
    Abstract: Various implementations disclosed herein include devices, systems, and methods that adjust operating modes for generating three-dimensional (3D) representations of a physical environment. For example, a process may include acquiring sensor data with one or more sensors in a physical environment and operating the device according to a first operating mode and a second operating mode during different periods of time. In the first operating mode (e.g., discovery mode), the device generates a 3D representation of the physical environment based on the sensor data and monitors one or more conditions to switch to the second operating mode. In the second operating mode (e.g., monitoring mode), the device monitors the one or more conditions to switch to the first operating mode and generates the 3D representation differently than in the first operating mode. (An illustrative sketch of this mode-switching behavior appears after this listing.)
    Type: Application
    Filed: January 5, 2023
    Publication date: July 6, 2023
    Inventors: Johan V. Hedberg, Corentin Cheron, Mukul Sati, Kyle L. Simek
  • Publication number: 20220237872
    Abstract: Various implementations disclosed herein generate a mesh representing the surfaces in a physical environment. The mesh is generated using multi-resolution voxels based on detected depth information, e.g., from a depth camera. The techniques may use multiple hash tables to store the multi-resolution voxel data. For example, the hash tables may store each voxel's 3D position and a truncated signed distance field (TSDF) value corresponding to each voxel's distance to a nearest surface. Each of the multiple hash tables may include data corresponding to a different level of resolution, and those resolutions may depend upon distance/noise or other factors. For example, voxels close to a depth camera may have a finer resolution and smaller size compared to voxels that are further from the depth camera. Techniques disclosed herein may involve using a meshing algorithm that combines multi-resolution voxel information stored in multiple hash tables to generate a single mesh.
    Type: Application
    Filed: April 11, 2022
    Publication date: July 28, 2022
    Inventors: Maxime Meilland, Andrew Predoehl, Kyle L. Simek, Ming Chuang, Pedro A. Pinies Rodriguez
  • Patent number: 11328481
    Abstract: Various implementations disclosed herein generate a mesh representing the surfaces in a physical environment. The mesh is generated using multi-resolution voxels based on detected depth information, e.g., from a depth camera. The techniques may use multiple hash tables to store the multi-resolution voxel data. For example, the hash tables may store each voxel's 3D position and a truncated signed distance field (TSDF) value corresponding to each voxel's distance to a nearest surface. Each of the multiple hash tables may include data corresponding to a different level of resolution, and those resolutions may depend upon distance/noise or other factors. For example, voxels close to a depth camera may have a finer resolution and smaller size compared to voxels that are further from the depth camera. Techniques disclosed herein may involve using a meshing algorithm that combines multi-resolution voxel information stored in multiple hash tables to generate a single mesh.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: May 10, 2022
    Assignee: Apple Inc.
    Inventors: Maxime Meilland, Andrew Predoehl, Kyle L. Simek, Ming Chuang, Pedro A. Pinies Rodriguez
  • Publication number: 20210225074
    Abstract: Various implementations disclosed herein generate a mesh representing the surfaces in a physical environment. The mesh is generated using multi-resolution voxels based on detected depth information, e.g., from a depth camera. The techniques may use multiple hash tables to store the multi-resolution voxel data. For example, the hash tables may store each voxel's 3D position and a truncated signed distance field (TSDF) value corresponding to each voxel's distance to a nearest surface. Each of the multiple hash tables may include data corresponding to a different level of resolution, and those resolutions may depend upon distance/noise or other factors. For example, voxels close to a depth camera may have a finer resolution and smaller size compared to voxels that are further from the depth camera. Techniques disclosed herein may involve using a meshing algorithm that combines multi-resolution voxel information stored in multiple hash tables to generate a single mesh.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 22, 2021
    Inventors: Maxime Meilland, Andrew Predoehl, Kyle L. Simek, Ming Chuang, Pedro A. Pinies Rodriguez
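
The multi-resolution voxel abstracts above (patent 11928779, publication 20220237872, patent 11328481, and publication 20210225074) describe storing TSDF values in multiple hash tables, one per resolution level, with the level chosen by each voxel's distance from the depth camera. The sketch below is a minimal illustration of that idea only: the voxel sizes, distance thresholds, truncation band, and the camera-distance approximation of the signed distance are assumptions made for illustration and are not taken from the patent documents.

```python
# Hypothetical sketch: multiple hash tables (Python dicts), one per resolution
# level, keyed by integer voxel coordinates and storing a truncated signed
# distance field (TSDF) value. All numeric constants are assumed.

import numpy as np

VOXEL_SIZES = [0.01, 0.04, 0.16]   # finest to coarsest voxel size, in meters (assumed)
LEVEL_DISTANCES = [1.0, 3.0]       # camera-distance bounds for the finer levels (assumed)
TRUNCATION = 0.05                  # TSDF truncation band, in meters (assumed)

# One hash table per resolution level: {(ix, iy, iz): tsdf_value}
voxel_tables = [dict() for _ in VOXEL_SIZES]

def level_for_distance(distance_to_camera):
    """Pick a resolution level: nearer points get finer, smaller voxels."""
    for level, limit in enumerate(LEVEL_DISTANCES):
        if distance_to_camera < limit:
            return level
    return len(VOXEL_SIZES) - 1

def integrate_point(point, camera_origin):
    """Store a truncated signed distance for the voxel containing `point`."""
    distance = np.linalg.norm(point - camera_origin)
    level = level_for_distance(distance)
    size = VOXEL_SIZES[level]
    key = tuple(np.floor(point / size).astype(int))
    voxel_center = (np.array(key) + 0.5) * size
    # Approximate signed distance to the observed surface as the difference of
    # camera distances (positive in front of the surface), clamped to the band.
    sdf = float(np.clip(distance - np.linalg.norm(voxel_center - camera_origin),
                        -TRUNCATION, TRUNCATION))
    voxel_tables[level][key] = sdf

# Example: integrate one depth sample observed roughly 0.8 m from the camera,
# which lands in the finest-resolution table.
integrate_point(np.array([0.1, 0.2, 0.8]), camera_origin=np.zeros(3))
```

The example integrates a single depth sample; a real pipeline would fuse many samples per voxel (for instance with weighted averaging) and then run a meshing step over the combined tables, which the abstracts mention but this sketch omits.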
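Publication 20240077985 describes applying spatial and/or temporal restrictions to a scene understanding data set before providing it to applications. The snippet below sketches that gating step under assumed data structures: the SceneEntry fields, the axis-aligned boundary, and the cutoff-time semantics are illustrative choices, not details from the publication.

```python
# Hypothetical sketch: filter scene understanding entries by a spatial boundary
# and a cutoff time before handing them to an application.

from dataclasses import dataclass

@dataclass
class SceneEntry:
    position: tuple   # (x, y, z) location of the detected object or surface (assumed field)
    timestamp: float  # time the entry was observed, in seconds (assumed field)
    label: str        # e.g. "table", "virtual_window" (assumed field)

def within_boundary(position, boundary_min, boundary_max):
    """Axis-aligned box test; a real boundary could be any spatial region."""
    return all(lo <= p <= hi for p, lo, hi in zip(position, boundary_min, boundary_max))

def restrict_scene_data(entries, boundary_min, boundary_max, cutoff_time):
    """Return only entries inside the boundary and observed after the cutoff."""
    return [e for e in entries
            if within_boundary(e.position, boundary_min, boundary_max)
            and e.timestamp > cutoff_time]

# Example: an app only receives entries inside a 4 m x 3 m x 4 m region
# that were observed after t = 100 s.
entries = [
    SceneEntry((1.0, 0.5, 1.2), 120.0, "table"),
    SceneEntry((9.0, 0.5, 1.2), 120.0, "chair"),   # outside the boundary
    SceneEntry((1.5, 0.5, 1.0), 50.0, "lamp"),     # before the cutoff
]
visible = restrict_scene_data(entries, (0, 0, 0), (4, 3, 4), cutoff_time=100.0)
print([e.label for e in visible])   # ['table']
```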
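Publication 20230215081 describes alternating between a discovery mode and a monitoring mode, with each mode watching for conditions that trigger a switch to the other and generating the 3D representation differently. The controller below is a hedged sketch of that mode-switching behavior; the coverage and scene-change conditions, their thresholds, and the dense/sparse update distinction are assumptions made for illustration.

```python
# Hypothetical sketch: a two-mode controller that rebuilds the 3D representation
# densely in "discovery" mode and more sparsely in "monitoring" mode, switching
# modes when assumed conditions are met.

from enum import Enum, auto

class Mode(Enum):
    DISCOVERY = auto()
    MONITORING = auto()

class ReconstructionController:
    def __init__(self, change_threshold=0.2, coverage_target=0.95):
        self.mode = Mode.DISCOVERY
        self.change_threshold = change_threshold  # scene-change ratio that re-triggers discovery (assumed)
        self.coverage_target = coverage_target    # mapped fraction that allows monitoring (assumed)

    def step(self, sensor_frame, coverage, scene_change):
        """Process one frame, updating the 3D representation and the mode."""
        if self.mode is Mode.DISCOVERY:
            self.update_representation(sensor_frame, dense=True)
            if coverage >= self.coverage_target:
                self.mode = Mode.MONITORING
        else:  # Mode.MONITORING
            self.update_representation(sensor_frame, dense=False)
            if scene_change >= self.change_threshold:
                self.mode = Mode.DISCOVERY

    def update_representation(self, sensor_frame, dense):
        # Placeholder: a dense update might fuse every frame at full resolution,
        # while a sparse update might fuse fewer frames or coarser voxels.
        pass

# Example: coverage grows until the controller switches to monitoring, then a
# large scene change switches it back to discovery.
ctrl = ReconstructionController()
ctrl.step(sensor_frame=None, coverage=0.5, scene_change=0.0)
ctrl.step(sensor_frame=None, coverage=0.97, scene_change=0.0)
assert ctrl.mode is Mode.MONITORING
ctrl.step(sensor_frame=None, coverage=0.97, scene_change=0.3)
assert ctrl.mode is Mode.DISCOVERY
```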