Patents by Inventor Muzaffer Kal

Muzaffer Kal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240129846
    Abstract: In one embodiment, a method includes accessing a map of a building floor plan with locations of access points within the floor plan, the access points being capable of performing wireless communications with wireless devices; determining a pose of a wireless device within the map using images captured by one or more cameras of the wireless device; selecting a preferred access point based on the pose of the wireless device, the floor plan, and the locations of the access points within the floor plan; and configuring wireless communication settings of the wireless device to communicate with the preferred access point based on the pose of the wireless device and the location of the preferred access point within the floor plan.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 18, 2024
    Inventors: Armin Alaghi, Muzaffer Kal, Richard Andrew Newcombe
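Purely as an illustration of the kind of selection step this abstract describes (all names and the data layout are hypothetical, not from the patent), the sketch below picks the access point nearest the device's estimated position. A real implementation would also use the floor plan itself, e.g. walls between the device and an access point, which this straight-line-distance version ignores.

```python
import math

def select_access_point(device_position, access_points):
    """Return the access point closest to the device's estimated
    (x, y) position on the floor-plan map."""
    def distance(ap):
        ax, ay = ap["location"]
        return math.hypot(device_position[0] - ax, device_position[1] - ay)
    return min(access_points, key=distance)

aps = [
    {"id": "ap-1", "location": (0.0, 0.0)},
    {"id": "ap-2", "location": (10.0, 5.0)},
]
preferred = select_access_point((9.0, 4.0), aps)
# preferred is ap-2, the nearer of the two
```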
  • Publication number: 20230237692
    Abstract: A method includes accessing map data of an area of a real environment, the map data comprising three-dimensional feature descriptors describing features visible in the real environment. A plurality of map packages is generated based on the map data, wherein each of the map packages (1) corresponds to a two-dimensional sub-area within the area of the real environment, and (2) comprises a subset of the three-dimensional feature descriptors describing features visible in that sub-area. A first sequence of the map packages is broadcast through one or more base stations, wherein the first sequence is based on the two-dimensional sub-area of each of the map packages, and each of the map packages is configured to be received and used by an artificial-reality device to determine a pose of the artificial-reality device in the associated sub-area based on the associated subset of the three-dimensional feature descriptors.
    Type: Application
    Filed: January 26, 2022
    Publication date: July 27, 2023
    Inventors: Armin Alaghi, Muzaffer Kal, Vincent Lee, Richard Andrew Newcombe
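As a rough, hypothetical sketch of the tiling idea in this abstract (the function name, tile size, and tuple layout are invented here, not taken from the patent), feature descriptors can be bucketed by which 2-D tile their x/y position falls into, so each package carries only the descriptors visible in its sub-area:

```python
from collections import defaultdict

def build_map_packages(features, tile_size):
    """Group 3-D feature descriptors into packages keyed by the 2-D
    tile (sub-area) that their x/y position falls into.

    features: iterable of (x, y, z, descriptor) tuples.
    """
    packages = defaultdict(list)
    for x, y, z, desc in features:
        tile = (int(x // tile_size), int(y // tile_size))
        packages[tile].append((x, y, z, desc))
    return dict(packages)

features = [
    (1.0, 2.0, 0.5, "d1"),
    (1.5, 2.5, 0.7, "d2"),
    (12.0, 3.0, 1.1, "d3"),
]
pkgs = build_map_packages(features, tile_size=10.0)
# tiles (0, 0) and (1, 0) each hold only their local descriptors
```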
  • Publication number: 20220122285
    Abstract: In one embodiment, a computing system accesses a set of 3D locations associated with features in an environment previously captured by a camera from a previous camera pose. The computing system determines a predicted camera pose using the previous camera pose and motion measurements generated using a motion sensor associated with the camera. The computing system projects the set of 3D locations toward the predicted camera pose and onto a 2D image plane associated with the camera. The computing system generates, based on the projected set of 3D locations on the 2D image plane, an activation map specifying a subset of the pixel sensors of the camera that are to be activated. The computing system instructs, using the activation map, the camera to activate the subset of pixel sensors to capture a new image of the environment. The computing system reads pixel values of the new image.
    Type: Application
    Filed: October 4, 2021
    Publication date: April 21, 2022
    Inventors: Amr Suleiman, Anastasios Mourikis, Armin Alaghi, Andrew Samuel Berkovich, Shlomo Alkalay, Muzaffer Kal, Vincent Lee, Richard Andrew Newcombe
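A minimal pure-Python sketch of the projection-then-activation idea (hypothetical names; a simple pinhole camera model; the 3-D points are assumed already transformed into the predicted camera frame, so the IMU-based pose-prediction step from the abstract is omitted):

```python
def make_activation_map(points_3d, focal, cx, cy, width, height, radius=2):
    """Project 3-D feature locations (given in the predicted camera
    frame) onto the image plane and mark a small pixel window around
    each projection for activation."""
    active = set()
    for x, y, z in points_3d:
        if z <= 0:  # behind the camera
            continue
        u = int(focal * x / z + cx)
        v = int(focal * y / z + cy)
        if 0 <= u < width and 0 <= v < height:
            for du in range(-radius, radius + 1):
                for dv in range(-radius, radius + 1):
                    if 0 <= u + du < width and 0 <= v + dv < height:
                        active.add((u + du, v + dv))
    return active

amap = make_activation_map([(0.0, 0.0, 5.0)],
                           focal=100.0, cx=32.0, cy=32.0,
                           width=64, height=64)
# only a 5x5 window around pixel (32, 32) is marked active
```

Reading out just these pixels, rather than the full frame, is what makes this style of sparse capture attractive for power-constrained tracking.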
  • Patent number: 11182647
    Abstract: In one embodiment, a method for tracking includes capturing a first frame of the environment using a first camera; identifying, in the first frame, a first patch that corresponds to a first feature; accessing a first local memory of the first camera that stores reference patches identified in one or more previous frames captured by the first camera; and determining that none of the reference patches stored in the first local memory corresponds to the first feature. The method further includes receiving, from a second camera through a data link connecting the second camera with the first camera, a reference patch corresponding to the first feature. The reference patch was identified in a previous frame captured by the second camera and stored in a local memory of the second camera. The method may then determine correspondence data between the first patch and the reference patch, and track the first feature in the environment based on the determined correspondence data.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: November 23, 2021
    Assignee: Facebook Technologies, LLC.
    Inventors: Muzaffer Kal, Armin Alaghi, Vincent Lee, Richard Andrew Newcombe, Amr Suleiman, Muhammad Huzaifa
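The miss-then-fetch flow this abstract describes could be sketched as follows (entirely hypothetical class and method names; the data link between the cameras is modeled here as a direct method call):

```python
class CameraTracker:
    """Per-camera reference-patch lookup with a cross-camera fallback.

    Each camera keeps a small local memory of reference patches keyed
    by feature id; on a miss it asks a linked peer camera and caches
    the result locally.
    """

    def __init__(self, name):
        self.name = name
        self.local_memory = {}   # feature_id -> reference patch
        self.peer = None         # linked camera, if any

    def get_reference_patch(self, feature_id):
        patch = self.local_memory.get(feature_id)
        if patch is None and self.peer is not None:
            # stand-in for a transfer over the inter-camera data link
            patch = self.peer.local_memory.get(feature_id)
            if patch is not None:
                self.local_memory[feature_id] = patch  # cache locally
        return patch

cam1, cam2 = CameraTracker("cam1"), CameraTracker("cam2")
cam1.peer, cam2.peer = cam2, cam1
cam2.local_memory["feat-7"] = "patch-bytes"
patch = cam1.get_reference_patch("feat-7")
# cam1 misses locally, fetches the patch from cam2, and caches it
```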
  • Publication number: 20210117722
    Abstract: In one embodiment, a method for tracking includes capturing a first frame of the environment using a first camera; identifying, in the first frame, a first patch that corresponds to a first feature; accessing a first local memory of the first camera that stores reference patches identified in one or more previous frames captured by the first camera; and determining that none of the reference patches stored in the first local memory corresponds to the first feature. The method further includes receiving, from a second camera through a data link connecting the second camera with the first camera, a reference patch corresponding to the first feature. The reference patch was identified in a previous frame captured by the second camera and stored in a local memory of the second camera. The method may then determine correspondence data between the first patch and the reference patch, and track the first feature in the environment based on the determined correspondence data.
    Type: Application
    Filed: October 16, 2019
    Publication date: April 22, 2021
    Inventors: Muzaffer Kal, Armin Alaghi, Vincent Lee, Richard Andrew Newcombe, Amr Suleiman, Muhammad Huzaifa
  • Publication number: 20170336220
    Abstract: A system and method for visual inertial navigation are described. In some embodiments, a device comprises an inertial measurement unit (IMU) sensor, a camera, a radio-based sensor, and a processor. The IMU sensor generates IMU data of the device. The camera generates a plurality of video frames. The radio-based sensor generates radio-based sensor data based on an absolute reference frame relative to the device. The processor is configured to synchronize the plurality of video frames with the IMU data, compute a first estimated spatial state of the device based on the synchronized plurality of video frames with the IMU data, compute a second estimated spatial state of the device based on the radio-based sensor data, and determine a spatial state of the device based on a combination of the first and second estimated spatial states of the device.
    Type: Application
    Filed: May 20, 2016
    Publication date: November 23, 2017
    Inventors: Christopher Broaddus, Muzaffer Kal, Wenyi Zhao, Ali M. Tatari, Dan Bostan, Saud Akram
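As a toy, one-dimensional illustration of combining the two estimated spatial states (a real system would run a filter over full 6-DoF states; the inverse-variance weighting and every name below are assumptions for the sketch, not the patent's method):

```python
def fuse_estimates(vio_state, vio_var, radio_state, radio_var):
    """Combine a visual-inertial position estimate with a radio-based
    one using inverse-variance weighting: the lower-variance source
    pulls the fused state toward itself."""
    w_vio = 1.0 / vio_var
    w_radio = 1.0 / radio_var
    return (w_vio * vio_state + w_radio * radio_state) / (w_vio + w_radio)

fused = fuse_estimates(vio_state=10.0, vio_var=1.0,
                       radio_state=12.0, radio_var=3.0)
# fused lies between the two estimates, closer to the
# lower-variance visual-inertial one
```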
  • Patent number: D820319
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: June 12, 2018
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Christopher Broaddus, Muzaffer Kal, Wenyi Zhao, Ali M. Tatari, Saud Akram, Pip Tompkin, Denny Liao, Madison Smith, Samuel McClellan