Patents by Inventor Richard Andrew Newcombe

Richard Andrew Newcombe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240129846
    Abstract: In one embodiment, a method includes accessing a map of a building floor plan with locations of access points within the floor plan, the access points being capable of performing wireless communications with wireless devices. The method further includes determining a pose of a wireless device within the map using images captured by one or more cameras of the wireless device; selecting a preferred access point based on the pose of the wireless device, the floor plan, and the locations of the access points within the floor plan; and configuring wireless communication settings of the wireless device to communicate with the preferred access point based on the pose of the wireless device and the location of the preferred access point within the floor plan.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 18, 2024
    Inventors: Armin Alaghi, Muzaffer Kal, Richard Andrew Newcombe
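    The claimed selection step ties access-point choice to a camera-derived device pose. A minimal sketch of one plausible reading, with hypothetical names and straight-line distance standing in for the floor-plan-aware scoring the abstract implies:

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class AccessPoint:
        name: str
        x: float  # position on the floor plan, in meters
        y: float

    def select_preferred_ap(device_x, device_y, access_points):
        # Pick the access point closest to the device's camera-derived position.
        # A real system would also account for walls and signal propagation from
        # the floor plan; straight-line distance stands in for that here.
        return min(access_points,
                   key=lambda ap: math.hypot(ap.x - device_x, ap.y - device_y))

    aps = [AccessPoint("ap-lobby", 0.0, 0.0), AccessPoint("ap-lab", 12.0, 4.0)]
    print(select_preferred_ap(10.5, 3.0, aps).name)  # -> ap-lab
    ```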
  • Publication number: 20240119609
    Abstract: A distributed imaging system for augmented reality devices is disclosed. The system includes a computing module in communication with a plurality of spatially distributed sensing devices. The computing module is configured to process input images from the sensing devices based on performing a local feature matching computation to generate corresponding first output images. The computing module is further configured to process the input images based on performing an optical flow correspondence computation to generate corresponding second output images. The computing module is further configured to computationally combine the first and second output images to generate third output images.
    Type: Application
    Filed: October 10, 2023
    Publication date: April 11, 2024
    Inventors: Michael Goesele, Richard Andrew Newcombe, Yujia Chen, Florian Eddy Robert Ilg, Daniel Andersen, Chao Li, Simon Gareth Green
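    As a rough illustration of the final combination step (the abstract does not specify how the two results are merged; the weighted blend and array shapes below are assumptions):

    ```python
    import numpy as np

    def combine_outputs(feature_match_img, optical_flow_img, alpha=0.5):
        # The abstract only says the two results are "computationally combined";
        # a weighted blend is one plausible instance of that step.
        return alpha * feature_match_img + (1.0 - alpha) * optical_flow_img

    first = np.random.rand(480, 640)   # stand-in for a feature-matching result
    second = np.random.rand(480, 640)  # stand-in for an optical-flow result
    third = combine_outputs(first, second)
    print(third.shape)  # (480, 640)
    ```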
  • Patent number: 11816886
    Abstract: A system may include a wearable apparatus dimensioned to be worn by a user about an axial region of the user’s body such that, when the wearable apparatus is worn by the user, the user’s field of view into a local environment is substantially free of a view of the wearable apparatus. The system may also include a machine-perception subsystem that is coupled to the wearable apparatus and that gathers information about the local environment by observing the local environment. Additionally, the system may include an experience-analysis subsystem that infers, based on the information about the local environment and information about the user, contextual information about an experience of the user in the local environment. Furthermore, the system may include a non-visual communication subsystem that outputs the contextual information about the experience of the user. Various other apparatuses, systems, and methods are also disclosed.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: November 14, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Richard Andrew Newcombe, Renzo De Nardi
  • Patent number: 11804010
    Abstract: In one embodiment, a computing system instructs, at a first time, a camera having a plurality of pixel sensors to use the plurality of pixel sensors to capture a first image of an environment comprising an object. The computing system predicts, using at least the first image, a projection of the object appearing in a virtual image plane associated with a predicted camera pose at a second time. The computing system determines, based on the predicted projection of the object, a first region of pixels and a second region of pixels. The computing system generates pixel-activation instructions for the first region of pixels and the second region of pixels. The computing system instructs the camera to capture a second image of the environment at the second time according to the pixel-activation instructions.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: October 31, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
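    A compact sketch of the two-region pixel-activation idea; the sensor dimensions, margin, and region split below are invented, as the filing leaves those details to the implementation:

    ```python
    import numpy as np

    H, W = 480, 640  # assumed sensor resolution

    def regions_from_projection(bbox, margin=20):
        # bbox = (x0, y0, x1, y1): predicted projection of the object on the
        # virtual image plane. The first region covers the object; the second
        # adds a surrounding margin.
        x0, y0, x1, y1 = bbox
        first = (x0, y0, x1, y1)
        second = (max(0, x0 - margin), max(0, y0 - margin),
                  min(W, x1 + margin), min(H, y1 + margin))
        return first, second

    def activation_mask(first, second):
        # Translate the two regions into per-pixel activation instructions.
        mask = np.zeros((H, W), dtype=bool)
        for x0, y0, x1, y1 in (second, first):
            mask[y0:y1, x0:x1] = True
        return mask

    mask = activation_mask(*regions_from_projection((200, 150, 320, 260)))
    print(mask.sum(), "of", H * W, "pixel sensors active")
    ```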
  • Publication number: 20230260200
    Abstract: In one embodiment, a method includes determining a viewing direction of a scene and rendering an image of the scene for that viewing direction. For each pixel of the image, the rendering casts a view ray into the scene and, for a particular sampling point along the view ray, determines a pixel radiance associated with a surface light field (SLF) and opacity. This determination comprises identifying multiple voxels within a threshold distance of the particular sampling point, each of the voxels being associated with a respective local plane; computing, for each of the voxels, a pixel radiance associated with SLF and opacity based on the locations of the particular sampling point and of the local plane associated with that voxel; and determining the pixel radiance associated with SLF and opacity for the particular sampling point by interpolating the pixel radiances associated with the multiple voxels.
    Type: Application
    Filed: January 27, 2023
    Publication date: August 17, 2023
    Inventors: Samir Aroudj, Michael Goesele, Richard Andrew Newcombe, Tanner Schmidt, Florian Eddy Robert Ilg, Steven John Lovegrove
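    The per-sample interpolation can be sketched as follows; the inverse-distance weights and the dictionary voxel layout are assumptions for illustration, not the filing's method:

    ```python
    import numpy as np

    def radiance_at_sample(p, voxels, threshold=0.1):
        # Interpolate SLF radiance and opacity at sampling point p on a view ray.
        # Each voxel carries a local plane plus radiance/opacity; inverse-distance
        # weighting is one simple realization of the interpolation described.
        weights, values = [], []
        for v in voxels:
            d = np.linalg.norm(p - v["center"])
            if d > threshold:          # only voxels within the threshold distance
                continue
            # The sample's distance to the voxel's local plane feeds the
            # per-voxel estimate; fold it into the weight here.
            plane_d = abs(np.dot(p - v["plane_origin"], v["plane_normal"]))
            weights.append(1.0 / (1e-6 + d + plane_d))
            values.append(np.array([*v["radiance"], v["opacity"]]))
        if not weights:
            return np.zeros(4)         # nothing nearby: transparent black
        w = np.array(weights) / sum(weights)
        return (w[:, None] * np.array(values)).sum(axis=0)  # (r, g, b, opacity)

    vox = {"center": np.zeros(3), "plane_origin": np.zeros(3),
           "plane_normal": np.array([0.0, 0.0, 1.0]),
           "radiance": (0.8, 0.2, 0.1), "opacity": 0.9}
    print(radiance_at_sample(np.array([0.0, 0.0, 0.05]), [vox]))
    ```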
  • Publication number: 20230237692
    Abstract: A method includes accessing map data of an area of a real environment, the map data comprising three-dimensional feature descriptors describing features visible in the real environment. A plurality of map packages are generated based on the map data, wherein each of the map packages (1) corresponds to a two-dimensional sub-area within the area of the real environment, and (2) comprises a subset of the three-dimensional feature descriptors describing features visible in the sub-area. A first sequence of the plurality of map packages is broadcast through one or more base stations, wherein the first sequence is based on the two-dimensional sub-area of each of the map packages, and wherein each of the map packages is configured to be received and used by an artificial-reality device to determine a pose of the artificial-reality device in the associated sub-area based on the associated subset of the three-dimensional feature descriptors.
    Type: Application
    Filed: January 26, 2022
    Publication date: July 27, 2023
    Inventors: Armin Alaghi, Muzaffer Kal, Vincent Lee, Richard Andrew Newcombe
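    One simple way such per-sub-area packages might be built, using square grid tiles as the two-dimensional sub-areas (tile size and data layout are invented for illustration):

    ```python
    def build_map_packages(descriptors, tile_size=50.0):
        # descriptors: iterable of (x, y, z, descriptor_bytes). Square grid
        # tiles stand in for the two-dimensional sub-areas; each package holds
        # only the descriptors visible in its tile.
        packages = {}
        for x, y, z, desc in descriptors:
            tile = (int(x // tile_size), int(y // tile_size))
            packages.setdefault(tile, []).append((x, y, z, desc))
        return packages  # tile -> map package to broadcast in sequence

    pkgs = build_map_packages([(3.0, 4.0, 1.2, b"\x01"),
                               (120.0, 60.0, 0.5, b"\x02")])
    print(sorted(pkgs))  # [(0, 0), (2, 1)]
    ```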
  • Publication number: 20230130770
    Abstract: An Artificial Reality (AR) device captures an image of a physical environment surrounding a user wearing the AR device. A three-dimensional map corresponding to the physical environment is accessed. Then, a pose of the AR device relative to the three-dimensional map is determined based on first features of physical objects captured in the image and second features of object representations in the three-dimensional map. The device determines, using an eye tracker, a gaze of an eye of the user. Based on the gaze and the pose, a region of interest is computed in the three-dimensional map. Representations of physical devices in the region of interest of the three-dimensional map are identified. The device determines an intent of the user to interact with a physical device. A command is issued to the physical device based on the determined intent.
    Type: Application
    Filed: October 20, 2022
    Publication date: April 27, 2023
    Inventors: Daniel Miller, Renzo De Nardi, Richard Andrew Newcombe
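    A minimal sketch of computing the region of interest from the pose and gaze, assuming a simple gaze-cone test (the cone threshold and the device registry are hypothetical):

    ```python
    import numpy as np

    def region_of_interest(eye_origin, gaze_dir, device_positions, cone_deg=10.0):
        # Return the devices whose mapped positions fall inside a cone around
        # the gaze ray; eye_origin and gaze_dir come from the device pose and
        # eye tracker in the abstract's terms.
        gaze = gaze_dir / np.linalg.norm(gaze_dir)
        hits = []
        for name, pos in device_positions.items():
            to_dev = pos - eye_origin
            cos_angle = np.dot(to_dev, gaze) / np.linalg.norm(to_dev)
            if cos_angle > np.cos(np.radians(cone_deg)):
                hits.append(name)
        return hits

    devices = {"lamp": np.array([1.0, 0.0, 2.0]), "tv": np.array([-2.0, 0.5, 1.0])}
    print(region_of_interest(np.zeros(3), np.array([0.45, 0.0, 0.9]), devices))
    # -> ['lamp']
    ```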
  • Publication number: 20230119703
    Abstract: In one embodiment, a computing system instructs, at a first time, a camera having a plurality of pixel sensors to use the plurality of pixel sensors to capture a first image of an environment comprising an object. The computing system predicts, using at least the first image, a projection of the object appearing in a virtual image plane associated with a predicted camera pose at a second time. The computing system determines, based on the predicted projection of the object, a first region of pixels and a second region of pixels. The computing system generates pixel-activation instructions for the first region of pixels and the second region of pixels. The computing system instructs the camera to capture a second image of the environment at the second time according to the pixel-activation instructions.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 20, 2023
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
  • Patent number: 11625862
    Abstract: In one embodiment, a method includes accessing a digital image captured by a camera that is connected to a machine-detectable object, detecting a reflection of the machine-detectable object in the digital image, computing, in response to the detection, a plane that is coincident with a reflective surface associated with the reflection, determining a boundary of the reflective surface in the plane based on at least one of a plurality of cues, and storing information associated with the reflective surface, where the information includes a pose of the reflective surface and the boundary of the reflective surface in a 3D model of a physical environment, and where the information associated with the reflective surface and the 3D model are configured to be used to render a reconstruction of the physical environment.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: April 11, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael Goesele, Julian Straub, Thomas John Whelan, Richard Andrew Newcombe, Steven John Lovegrove
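    The plane computation rests on a standard geometric fact: a mirror plane perpendicularly bisects the segment between a point and its reflection. A small sketch with invented coordinates:

    ```python
    import numpy as np

    def mirror_plane(p_real, p_virtual):
        # p_real: 3D point on the machine-detectable object; p_virtual: the 3D
        # position of its reflection as seen "behind" the mirror. The mirror
        # plane perpendicularly bisects the segment between the two points.
        normal = p_real - p_virtual
        normal /= np.linalg.norm(normal)
        midpoint = (p_real + p_virtual) / 2.0
        d = -np.dot(normal, midpoint)   # plane equation: normal . x + d = 0
        return normal, d

    n, d = mirror_plane(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 3.0]))
    print(n, d)  # normal along -z, plane at z = 2
    ```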
  • Patent number: 11619814
    Abstract: A head-mounted display system may include a wearable frame securable to a user's head. The head-mounted display system may also include a mapping subsystem that maps a local environment of the user when the wearable frame is secured to the user's head. Additionally, the head-mounted display system may include a varifocal display apparatus mounted to the wearable frame and configured to direct computer-generated images toward the user's eyes at a variable focal length. Various other apparatuses, systems, and methods are also disclosed.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: April 4, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Richard Andrew Newcombe, Renzo De Nardi
  • Patent number: 11562534
    Abstract: In one embodiment, a method includes instructing, at a first time, a camera having a plurality of pixel sensors to capture a first image of an environment comprising an object to determine a first object pose; determining, based on the first object pose, a predicted object pose of the object at a second time; generating pixel-activation instructions based on a buffer region around a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with a predicted camera pose, where the size of the buffer region may be dependent on predicted dynamics for the object; instructing, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions; and determining, based on the second image, a second object pose of the object.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: January 24, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
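    A sketch of the dynamics-dependent buffer region, under the assumption that predicted image-space speed scales the margin (the constants are invented; clamping to sensor bounds is omitted for brevity):

    ```python
    def buffer_region(bbox, speed_px_per_frame, base_margin=8, k=2.0):
        # Grow the activation region around the predicted projection. Scaling
        # the margin with predicted image-space speed is one simple reading of
        # "dependent on predicted dynamics".
        x0, y0, x1, y1 = bbox
        m = int(base_margin + k * speed_px_per_frame)
        return (x0 - m, y0 - m, x1 + m, y1 + m)

    print(buffer_region((100, 80, 180, 160), speed_px_per_frame=12))
    # -> (68, 48, 212, 192)
    ```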
  • Publication number: 20220139034
    Abstract: In one embodiment, a method includes instructing, at a first time, a camera having a plurality of pixel sensors to capture a first image of an environment comprising an object to determine a first object pose; determining, based on the first object pose, a predicted object pose of the object at a second time; generating pixel-activation instructions based on a buffer region around a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with a predicted camera pose, where the size of the buffer region may be dependent on predicted dynamics for the object; instructing, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions; and determining, based on the second image, a second object pose of the object.
    Type: Application
    Filed: December 3, 2021
    Publication date: May 5, 2022
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
  • Publication number: 20220122285
    Abstract: In one embodiment, a computing system accesses a set of 3D locations associated with features in an environment previously captured by a camera from a previous camera pose. The computing system determines a predicted camera pose using the previous camera pose and motion measurements generated using a motion sensor associated with the camera. The computing system projects the set of 3D locations toward the predicted camera pose and onto a 2D image plane associated with the camera. The computing system generates, based on the projected set of 3D locations on the 2D image plane, an activation map specifying a subset of the pixel sensors of the camera that are to be activated. The computing system instructs, using the activation map, the camera to activate the subset of pixel sensors to capture a new image of the environment. The computing system reads pixel values of the new image.
    Type: Application
    Filed: October 4, 2021
    Publication date: April 21, 2022
    Inventors: Amr Suleiman, Anastasios Mourikis, Armin Alaghi, Andrew Samuel Berkovich, Shlomo Alkalay, Muzaffer Kal, Vincent Lee, Richard Andrew Newcombe
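    A rough sketch of building such an activation map by projecting previously mapped 3D feature locations through the predicted pose; the intrinsics, pose convention, and window radius are all assumed names and values:

    ```python
    import numpy as np

    def activation_map(points_3d, K, R, t, shape=(480, 640), radius=6):
        # Project mapped 3D feature locations through the predicted camera pose
        # (R, t world-to-camera, K intrinsics) and mark a small window of pixel
        # sensors around each projection.
        h, w = shape
        mask = np.zeros(shape, dtype=bool)
        cam = R @ points_3d.T + t[:, None]      # 3 x N, camera frame
        pix = K @ cam[:, cam[2] > 0]            # keep points in front of camera
        u = (pix[0] / pix[2]).astype(int)
        v = (pix[1] / pix[2]).astype(int)
        for x, y in zip(u, v):
            if 0 <= x < w and 0 <= y < h:
                mask[max(0, y - radius):y + radius,
                     max(0, x - radius):x + radius] = True
        return mask  # True = pixel sensor to activate for the next capture

    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.1, 3.0]])
    print(activation_map(pts, K, np.eye(3), np.zeros(3)).sum())  # 288 pixels
    ```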
  • Publication number: 20220083631
    Abstract: A method includes a computing system associated with an AR device querying, based on a location of the AR device, a registry associated with a distributed map network for a first gateway address associated with a first gateway that provides access to a 3D street map in a physical region. The system downloads the 3D street map by connecting to the first gateway using the first gateway address. The system predicts that the AR device will enter a building in the physical region and queries the registry for a second gateway address associated with a second gateway. The system requests, using the second gateway address, access to the second gateway by providing user authentication information. The system downloads a 3D interior map associated with the building through the second gateway and localizes the AR device within the building using the 3D interior map after the AR device enters the building.
    Type: Application
    Filed: December 30, 2020
    Publication date: March 17, 2022
    Inventors: Hao Chen, Richard Andrew Newcombe
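    The two-gateway flow might look roughly like this; the Registry and Gateway classes and all their method names are invented stand-ins, not an API from the filing:

    ```python
    class Gateway:
        def __init__(self, name, maps):
            self.name, self._maps = name, maps
        def authenticate(self, credentials):
            assert credentials == "token"   # stand-in for user authentication
        def download(self, map_name):
            return self._maps[map_name]

    class Registry:
        # Hypothetical registry of the distributed map network; lookup keys
        # (region or building identifiers) are invented.
        def __init__(self, gateways):
            self._gateways = gateways
        def lookup(self, key):
            return self._gateways[key]

    registry = Registry({
        "downtown": Gateway("street-gw", {"3d-street-map": "<street mesh>"}),
        "bldg-42": Gateway("indoor-gw", {"3d-interior-map": "<interior mesh>"}),
    })

    street_map = registry.lookup("downtown").download("3d-street-map")
    indoor_gw = registry.lookup("bldg-42")   # queried on predicted building entry
    indoor_gw.authenticate("token")
    interior_map = indoor_gw.download("3d-interior-map")
    print(street_map, interior_map)
    ```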
  • Patent number: 11244504
    Abstract: In one embodiment, a computing system accesses a plurality of images captured by one or more cameras from a plurality of camera poses. The computing system generates, using the plurality of images, a plurality of semantic segmentations comprising semantic information of one or more objects captured in the plurality of images. The computing system accesses a three-dimensional (3D) model of the one or more objects. The computing system determines, using the plurality of camera poses, a corresponding plurality of virtual camera poses relative to the 3D model of the one or more objects. The computing system generates a semantic 3D model by projecting the semantic information of the plurality of semantic segmentations towards the 3D model using the plurality of virtual camera poses.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: February 8, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Yu Fan Chen, Richard Andrew Newcombe, Lingni Ma
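    A sketch of one projection pass, painting labels from a single segmentation onto model vertices; the intrinsics K, pose (R, t), and label conventions are assumptions:

    ```python
    import numpy as np

    def paint_semantics(vertices, labels_img, K, R, t):
        # Project the model's vertices into one semantic segmentation captured
        # from the corresponding virtual camera pose and copy the class label
        # onto every vertex that lands inside the image. Repeating over all
        # poses and fusing the votes yields the semantic 3D model.
        h, w = labels_img.shape
        labels = np.full(len(vertices), -1)   # -1 = unseen from this pose
        cam = R @ vertices.T + t[:, None]     # 3 x N, camera frame
        front = cam[2] > 1e-6                 # keep vertices with positive depth
        pix = K @ cam[:, front]
        u = (pix[0] / pix[2]).astype(int)
        v = (pix[1] / pix[2]).astype(int)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        labels[np.flatnonzero(front)[ok]] = labels_img[v[ok], u[ok]]
        return labels

    labels_img = np.zeros((480, 640), dtype=int); labels_img[:, 320:] = 7
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    verts = np.array([[0.5, 0.0, 2.0], [-0.5, 0.0, 2.0]])
    print(paint_semantics(verts, labels_img, K, np.eye(3), np.zeros(3)))  # [7 0]
    ```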
  • Patent number: 11222468
    Abstract: In one embodiment, a method includes instructing, at a first time, a camera with multiple pixel sensors to capture a first image of an environment comprising an object to determine a first object pose of the object. Based on the first object pose, the method determines a predicted object pose of the object at a second time. The method determines a predicted camera pose of the camera at the second time. The method generates pixel-activation instructions based on a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with the predicted camera pose. The method instructs, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions. The method determines, based on the second image, a second object pose of the object.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: January 11, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
  • Patent number: 11217011
    Abstract: In one embodiment, a method includes accessing a digital map of a real-world region, where the digital map includes one or more three-dimensional meshes corresponding to one or more three-dimensional objects within the real-world region, receiving, from a second computing device, an object query including an identifier for an anchor in the digital map, positional information relative to the anchor, and information associated with a directional vector, determining a position within the digital map based on the identifier for the anchor and the positional information relative to the anchor, determining a three-dimensional mesh in the digital map that intersects with a projection of the directional vector from the determined position within the digital map, identifying metadata associated with the three-dimensional mesh, and sending the metadata to the second computing device.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: January 4, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Mingfei Yan, Yajie Yan, Richard Andrew Newcombe, Yuheng Ren
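    A minimal sketch of resolving such a query, modeling each mesh as an axis-aligned box with a standard slab test (the actual system intersects the directional-vector projection with full three-dimensional meshes):

    ```python
    import numpy as np

    def query_map(anchors, meshes, anchor_id, offset, direction):
        # Position = anchor + positional offset; then return the metadata of the
        # first mesh, modeled here as an axis-aligned box (lo, hi, metadata),
        # hit by a ray cast along the directional vector.
        origin = anchors[anchor_id] + offset
        d = direction / np.linalg.norm(direction)
        best_t, best_meta = np.inf, None
        for lo, hi, metadata in meshes:
            with np.errstate(divide="ignore", invalid="ignore"):
                t1, t2 = (lo - origin) / d, (hi - origin) / d
            tnear = np.minimum(t1, t2).max()    # standard slab test
            tfar = np.maximum(t1, t2).min()
            if tnear <= tfar and tfar >= 0 and tnear < best_t:
                best_t, best_meta = tnear, metadata
        return best_meta

    anchors = {"door-7": np.array([5.0, 0.0, 0.0])}
    meshes = [(np.array([9.0, -1.0, -1.0]), np.array([11.0, 1.0, 1.0]),
               {"name": "kiosk"})]
    print(query_map(anchors, meshes, "door-7", np.zeros(3),
                    np.array([1.0, 0.0, 0.0])))  # {'name': 'kiosk'}
    ```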
  • Patent number: 11182647
    Abstract: In one embodiment, a method for tracking a first feature in an environment includes capturing a first frame of the environment using a first camera; identifying, in the first frame, a first patch that corresponds to the first feature; accessing a first local memory of the first camera that stores reference patches identified in one or more previous frames captured by the first camera; and determining that none of the reference patches stored in the first local memory corresponds to the first feature. The method further includes receiving, from a second camera through a data link connecting the second camera with the first camera, a reference patch corresponding to the first feature. The reference patch is identified in a previous frame captured by the second camera and stored in a local memory of the second camera. The method then determines correspondence data between the first patch and the reference patch and tracks the first feature in the environment based on the determined correspondence data.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: November 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Muzaffer Kal, Armin Alaghi, Vincent Lee, Richard Andrew Newcombe, Amr Suleiman, Muhammad Huzaifa
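    A toy sketch of the local-memory miss path, with the inter-camera data link reduced to a plain Python reference; all names are illustrative:

    ```python
    class CameraNode:
        # Per-camera local memory of reference patches, with the data link to
        # the peer camera reduced to an object reference.
        def __init__(self, name):
            self.name = name
            self.local_memory = {}    # feature_id -> reference patch
            self.peer = None

        def lookup(self, feature_id):
            patch = self.local_memory.get(feature_id)
            if patch is None and self.peer is not None:
                # Local miss: pull the reference patch over the data link.
                patch = self.peer.local_memory.get(feature_id)
                if patch is not None:
                    self.local_memory[feature_id] = patch   # cache locally
            return patch

    left, right = CameraNode("left"), CameraNode("right")
    left.peer, right.peer = right, left
    right.local_memory["feat-17"] = "<8x8 patch>"
    print(left.lookup("feat-17"))   # served over the link from the second camera
    ```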
  • Patent number: 11132834
    Abstract: The disclosed computer-implemented method may include receiving, from a first device in an environment, real-time data associated with the environment and generating map data for the environment based on the real-time data received from the first device. The method may include creating, by merging the map data of the first device with aggregate map data associated with at least one other device, a joint anchor graph that is free of identifiable information, and hosting the joint anchor graph for a shared artificial reality session between the first device and the at least one other device. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: September 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Yuheng Ren, Yajie Yan
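    A loose sketch of the merge step, using opaque hashes as one way to keep the joint graph free of identifiable anchor names (the filing does not specify this mechanism):

    ```python
    import hashlib

    def merge_anchor_graphs(graph_a, graph_b):
        # Merge two per-device anchor graphs (anchor_id -> pose) into a joint
        # graph keyed by opaque hashes, so the result carries no identifiable
        # anchor names.
        joint = {}
        for graph in (graph_a, graph_b):
            for anchor_id, pose in graph.items():
                opaque = hashlib.sha256(anchor_id.encode()).hexdigest()[:12]
                joint[opaque] = pose
        return joint

    joint = merge_anchor_graphs({"alice-desk": (0, 0, 0)}, {"bob-door": (4, 0, 1)})
    print(len(joint))  # 2 anchors, identifiable names stripped
    ```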
  • Patent number: 11102467
    Abstract: A depth camera assembly (DCA) captures data describing depth information in a local area. The DCA includes an array detector, a controller, and an illumination source. The array detector includes a detector that is overlaid with a lens array. The detector includes a plurality of pixels that are divided into a plurality of different pixel groups. The lens array includes a plurality of lens stacks, and each lens stack overlays a different pixel group. The array detector captures one or more composite images of the local area illuminated with the light from the illumination source. The controller determines depth information for objects in the local area using the one or more composite images.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: August 24, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Nicholas Daniel Trail, Renzo De Nardi, Richard Andrew Newcombe