Patents by Inventor Steven John Lovegrove

Steven John Lovegrove has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11804010
    Abstract: In one embodiment, a computing system instructs, at a first time, a camera having a plurality of pixel sensors to use the plurality of pixel sensors to capture a first image of an environment comprising an object. The computing system predicts, using at least the first image, a projection of the object appearing in a virtual image plane associated with a predicted camera pose at a second time. The computing system determines, based on the predicted projection of the object, a first region of pixels and a second region of pixels. The computing system generates pixel-activation instructions for the first region of pixels and the second region of pixels. The computing system instructs the camera to capture a second image of the environment at the second time according to the pixel-activation instructions.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: October 31, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
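The region scheme in the abstract above can be illustrated with a toy activation mask: pixels inside the predicted object projection are sampled densely, everything else sparsely. The bounding-box representation, the stride-based background sampling, and all names are invented for this sketch; the patent does not specify these details.

```python
import numpy as np

def make_activation_mask(height, width, bbox, sparse_stride=4):
    """bbox = (row0, col0, row1, col1) of the predicted object projection."""
    mask = np.zeros((height, width), dtype=bool)
    # Second region: sparse sampling of the background.
    mask[::sparse_stride, ::sparse_stride] = True
    # First region: dense sampling where the object is predicted to appear.
    r0, c0, r1, c1 = bbox
    mask[r0:r1, c0:c1] = True
    return mask

mask = make_activation_mask(8, 8, (2, 2, 5, 5), sparse_stride=4)
```

The mask would then drive the pixel-activation instructions for the second capture, so only a subset of the sensor is read out.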
  • Patent number: 11762080
    Abstract: A method includes receiving a first wireless signal detected by a first device in an environment, the first wireless signal including a first distortion pattern caused by an object moving in the environment, receiving a second wireless signal detected by a second device in the environment, the second wireless signal including a second distortion pattern caused by the object moving in the environment, determining, by comparing the first distortion pattern to the second distortion pattern, that the first distortion pattern and the second distortion pattern correspond to a same movement event associated with the object moving in the environment, determining a timing offset between the first device and the second device based on information associated with the first distortion pattern and the second distortion pattern, and determining, based on the timing offset, temporal correspondences between data generated by the first device and data generated by the second device.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: September 19, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Minh Phuoc Vo, Kiran Kumar Somasundaram, Steven John Lovegrove
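A minimal sketch of the timing-offset step described above: if both devices recorded a distortion pattern from the same movement event, the offset can be estimated by cross-correlating the two patterns. The signal model and function names are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def estimate_offset(pattern_a, pattern_b):
    """Offset in samples; positive means pattern_b saw the event later."""
    corr = np.correlate(pattern_b - pattern_b.mean(),
                        pattern_a - pattern_a.mean(), mode="full")
    return int(np.argmax(corr)) - (len(pattern_a) - 1)

# Synthetic example: both devices observe the same event, device B 7 samples late.
rng = np.random.default_rng(0)
event = rng.standard_normal(50)
a = np.concatenate([np.zeros(10), event, np.zeros(10)])
b = np.concatenate([np.zeros(17), event, np.zeros(3)])
offset = estimate_offset(a, b)
```

With the offset known, timestamps from the two devices can be mapped onto a common timeline to establish temporal correspondences.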
  • Publication number: 20230260200
    Abstract: In one embodiment, a method includes determining a viewing direction of a scene and rendering an image of the scene for the viewing direction, wherein the rendering comprises: for each pixel of the image, casting a view ray into the scene, and for a particular sampling point along the view ray, determining a pixel radiance associated with surface light field (SLF) and opacity, which comprises identifying multiple voxels within a threshold distance to the particular sampling point, wherein each of the voxels is associated with a respective local plane, for each of the voxels, computing a pixel radiance associated with SLF and opacity based on locations of the particular sampling point and the local plane associated with that voxel, and determining the pixel radiance associated with SLF and opacity for the particular sampling point based on interpolating the pixel radiances associated with SLF and opacity associated with the multiple voxels.
    Type: Application
    Filed: January 27, 2023
    Publication date: August 17, 2023
    Inventors: Samir Aroudj, Michael Goesele, Richard Andrew Newcombe, Tanner Schmidt, Florian Eddy Robert Ilg, Steven John Lovegrove
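The interpolation step in the abstract above can be sketched as a distance-weighted blend over nearby voxels. Here each voxel carries a precomputed (radiance, opacity) pair standing in for its local SLF/plane model; the inverse-distance weighting and all names are illustrative assumptions.

```python
import numpy as np

def blend_at_point(point, voxel_centers, voxel_values, threshold):
    """Blend per-voxel (radiance, opacity) values at a ray sampling point."""
    d = np.linalg.norm(voxel_centers - point, axis=1)
    near = d < threshold                 # voxels within the threshold distance
    if not near.any():
        return None
    w = 1.0 / (d[near] + 1e-8)           # inverse-distance weights
    w /= w.sum()
    return w @ voxel_values[near]        # interpolated (radiance, opacity)

centers = np.array([[0., 0., 0.], [1., 0., 0.], [5., 5., 5.]])
values = np.array([[1.0, 0.2], [0.0, 0.8], [9.9, 9.9]])
out = blend_at_point(np.array([0.5, 0., 0.]), centers, values, threshold=2.0)
```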
  • Patent number: 11676293
    Abstract: A method for depth sensing from an image of a projected pattern is performed at an electronic device with one or more processors and memory. The method includes receiving an image of a projection of an illumination pattern; for a portion of the image, selecting a candidate image of a plurality of candidate images by comparing the portion of the image with a plurality of candidate images; and determining a depth for the portion of the image based on depth information associated with the selected candidate image. Related electronic devices and computer-readable storage media are also disclosed.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: June 13, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Steven Hegarty, Zijian Wang, Steven John Lovegrove, Yongjun Kim, Rajesh Lachhmandas Chhabria
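The matching step above can be sketched as a nearest-candidate lookup: compare an image patch against a bank of candidate patches, each tagged with known depth, and return the depth of the best match. The sum-of-absolute-differences cost is an assumed stand-in; the patent does not name a metric.

```python
import numpy as np

def depth_for_patch(patch, candidates, candidate_depths):
    """Pick the candidate most similar to the patch; return its depth."""
    costs = [np.abs(patch - c).sum() for c in candidates]
    return candidate_depths[int(np.argmin(costs))]

bank = [np.zeros((4, 4)), np.ones((4, 4))]   # candidate pattern patches
depths = [0.5, 2.0]                          # depth tied to each candidate
obs = np.full((4, 4), 0.9)                   # observed patch, closer to bank[1]
d = depth_for_patch(obs, bank, depths)
```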
  • Publication number: 20230169686
    Abstract: In one embodiment, a method includes accessing a calibration model for a camera rig. The method includes accessing multiple observations of an environment captured by the camera rig from multiple poses in the environment. The method includes generating an environmental model including geometry of the environment based on at least the observations, the poses, and the calibration model. The method includes determining, for one or more of the poses, one or more predicted observations of the environment based on the environmental model and the poses. The method includes comparing the predicted observations to the observations corresponding to the poses from which the predicted observations were determined. The method includes revising the calibration model based on the comparison. The method includes revising the environmental model based on at least a set of observations of the environment and the revised calibration model.
    Type: Application
    Filed: October 31, 2022
    Publication date: June 1, 2023
    Inventors: Steven John Lovegrove, Yuheng Ren
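The alternating loop in the abstract above can be reduced to a 1-D toy: the "calibration" is a single gain s applied by the sensor, the "environmental model" is a landmark position L, and observations from known poses p are s·(L − p). Each iteration rebuilds L from the current calibration, predicts observations, compares, and revises the calibration from the residual. Entirely illustrative; the real system calibrates a multi-camera rig.

```python
import numpy as np

def refine(poses, obs, s=1.0, lr=0.01, iters=2000):
    for _ in range(iters):
        L = np.mean(obs / s + poses)              # environmental model given s
        pred = s * (L - poses)                    # predicted observations
        resid = pred - obs                        # compare predicted vs actual
        s -= lr * np.mean(resid * (L - poses))    # revise the calibration
    return s, L

poses = np.array([0.0, 4.0])
obs = 2.0 * (10.0 - poses)        # ground truth: gain 2, landmark at 10
s_hat, L_hat = refine(poses, obs)
```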
  • Publication number: 20230119703
    Abstract: In one embodiment, a computing system instructs, at a first time, a camera having a plurality of pixel sensors to use the plurality of pixel sensors to capture a first image of an environment comprising an object. The computing system predicts, using at least the first image, a projection of the object appearing in a virtual image plane associated with a predicted camera pose at a second time. The computing system determines, based on the predicted projection of the object, a first region of pixels and a second region of pixels. The computing system generates pixel-activation instructions for the first region of pixels and the second region of pixels. The computing system instructs the camera to capture a second image of the environment at the second time according to the pixel-activation instructions.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 20, 2023
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
  • Patent number: 11625862
    Abstract: In one embodiment, a method includes accessing a digital image captured by a camera that is connected to a machine-detectable object, detecting a reflection of the machine-detectable object in the digital image, computing, in response to the detection, a plane that is coincident with a reflective surface associated with the reflection, determining a boundary of the reflective surface in the plane based on at least one of a plurality of cues, and storing information associated with the reflective surface, where the information includes a pose of the reflective surface and the boundary of the reflective surface in a 3D model of a physical environment, and where the information associated with the reflective surface and the 3D model are configured to be used to render a reconstruction of the physical environment.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: April 11, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael Goesele, Julian Straub, Thomas John Whelan, Richard Andrew Newcombe, Steven John Lovegrove
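The plane computation above has a clean geometric core: if a marker at point x appears reflected at apparent point x', the mirror plane is the perpendicular bisector of the segment from x to x'. A minimal sketch of that relation, with invented names:

```python
import numpy as np

def mirror_plane(x, x_reflected):
    """Plane n·p + d = 0 that reflects x onto x_reflected."""
    n = x_reflected - x
    n = n / np.linalg.norm(n)            # plane normal
    midpoint = 0.5 * (x + x_reflected)   # a point on the plane
    d = -n @ midpoint
    return n, d

# A marker at z=+1 seen reflected at z=-1 implies a mirror in the z=0 plane.
n, d = mirror_plane(np.array([0., 0., 1.]), np.array([0., 0., -1.]))
```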
  • Patent number: 11587254
    Abstract: Raycast-based calibration techniques are described for determining calibration parameters associated with components of a head mounted display (HMD) of an augmented reality (AR) system having one or more off-axis reflective combiners. In an example, a system comprises an image capture device and a processor executing a calibration engine. The calibration engine is configured to determine correspondences between target points and camera pixels based on images of the target acquired through an optical system, the optical system including optical surfaces and an optical combiner. Each optical surface is defined by a difference of optical index on opposing sides of the surface. At least one calibration parameter for the optical system is determined by mapping rays from each camera pixel to each target point via raytracing through the optical system, the raytracing being based on the index differences, shapes, and positions of the optical surfaces relative to the one or more cameras.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: February 21, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Huixuan Tang, Hauke Malte Strasdat, Qi Guo, Steven John Lovegrove
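The per-surface step of such a raytrace is refraction driven by the index difference across the interface. The sketch below uses the standard vector form of Snell's law; it is a generic building block, not the patent's actual calibration engine.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (facing the ray)."""
    r = n1 / n2
    cos_i = -n @ d
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n

# At normal incidence the direction is unchanged regardless of the index step.
out = refract(np.array([0., 0., -1.]), np.array([0., 0., 1.]), 1.0, 1.5)
```

Chaining this step across each optical surface, and comparing the traced rays with observed target points, yields the residuals the calibration minimizes.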
  • Patent number: 11587210
    Abstract: In one embodiment, a method includes a computer system accessing a curvilinear image captured using a camera lens, generating multiple rectilinear images from the curvilinear image based at least in part on one or more calibration parameters associated with the camera lens, identifying semantic information in one or more of the rectilinear images by processing each of the multiple rectilinear images using a machine-learning model configured to identify semantic information in rectilinear images, and identifying semantic information in the curvilinear image based on the identified semantic information in the one or more rectilinear images.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: February 21, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Yu Fan Chen, Kiran Kumar Somasundaram, Steven John Lovegrove, Yujun Shen
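The rectilinear-from-curvilinear generation above is, at its core, a remap: for each pixel of a virtual rectilinear view, find where its ray lands in the distorted source image. The sketch below assumes an equidistant ("r = f·θ") fisheye model and invented focal lengths; the patent's calibration parameters would replace these.

```python
import numpy as np

def rectilinear_to_fisheye(u, v, f_rect, f_fish, cx, cy):
    """Map a rectilinear-view pixel (u, v) to equidistant-fisheye coordinates."""
    x, y = (u - cx) / f_rect, (v - cy) / f_rect   # ray direction (x, y, 1)
    theta = np.arctan(np.hypot(x, y))             # angle from the optical axis
    phi = np.arctan2(y, x)
    r = f_fish * theta                            # equidistant projection
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# The image centre maps to itself; off-axis points are pulled toward it.
uf, vf = rectilinear_to_fisheye(64.0, 64.0, f_rect=100.0, f_fish=100.0, cx=64.0, cy=64.0)
```

Sampling the fisheye image at these coordinates produces a rectilinear patch that a standard machine-learning model can consume.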
  • Patent number: 11562534
    Abstract: In one embodiment, a method includes instructing, at a first time, a camera having a plurality of pixel sensors to capture a first image of an environment comprising an object to determine a first object pose; determining, based on the first object pose, a predicted object pose of the object at a second time; generating pixel-activation instructions based on a buffer region around a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with a predicted camera pose, where the size of the buffer region may be dependent on predicted dynamics for the object; instructing, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions; and determining, based on the second image, a second object pose of the object.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: January 24, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
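The buffer region above can be sketched as a dilation of the projected object mask, with a margin that grows with the object's predicted speed so fast movers get a wider capture region. The linear speed-to-pixels rule is an invented placeholder for the "predicted dynamics" dependence.

```python
import numpy as np

def buffered_mask(mask, speed, px_per_speed=2.0):
    """Dilate a boolean mask by a speed-dependent pixel margin."""
    margin = int(np.ceil(speed * px_per_speed))
    out = mask.copy()
    for _ in range(margin):                      # simple 4-neighbour dilation
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

m = np.zeros((7, 7), dtype=bool)
m[3, 3] = True                                   # projected object footprint
wide = buffered_mask(m, speed=1.0)               # margin of 2 pixels
```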
  • Patent number: 11488324
    Abstract: In one embodiment, a method includes accessing a calibration model for a camera rig. The method includes accessing multiple observations of an environment captured by the camera rig from multiple poses in the environment. The method includes generating an environmental model including geometry of the environment based on at least the observations, the poses, and the calibration model. The method includes determining, for one or more of the poses, one or more predicted observations of the environment based on the environmental model and the poses. The method includes comparing the predicted observations to the observations corresponding to the poses from which the predicted observations were determined. The method includes revising the calibration model based on the comparison. The method includes revising the environmental model based on at least a set of observations of the environment and the revised calibration model.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: November 1, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Steven John Lovegrove, Yuheng Ren
  • Publication number: 20220239844
    Abstract: In one embodiment, a method includes initializing latent codes respectively associated with times associated with frames in a training video of a scene captured by a camera. For each of the frames, a system (1) generates rendered pixel values for a set of pixels in the frame by querying NeRF using the latent code associated with the frame, a camera viewpoint associated with the frame, and ray directions associated with the set of pixels, and (2) updates the latent code associated with the frame and the NeRF based on comparisons between the rendered pixel values and original pixel values for the set of pixels. Once trained, the system renders output frames for an output video of the scene, wherein each output frame is rendered by querying the updated NeRF using one of the updated latent codes corresponding to a desired time associated with the output frame.
    Type: Application
    Filed: January 7, 2022
    Publication date: July 28, 2022
    Inventors: Zhaoyang Lv, Miroslava Slavcheva, Tianye Li, Michael Zollhoefer, Simon Gareth Green, Tanner Schmidt, Michael Goesele, Steven John Lovegrove, Christoph Lassner, Changil Kim
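The joint update described above — per-frame latent codes and a shared model trained together from pixel residuals — can be mimicked with a deliberately tiny stand-in. Here the "NeRF" is just a linear model w · [latent, ray]; the two frames, targets, and learning rate are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(3) * 0.1                       # shared "NeRF" weights
latents = [rng.standard_normal(2) * 0.1 for _ in range(2)]  # one code per frame

def render(latent, ray):
    return w @ np.concatenate([latent, [ray]])

frames = [(0.0, 1.0), (1.0, -1.0)]                     # (ray direction, pixel value)
lr = 0.05
for _ in range(5000):
    for f, (ray, target) in enumerate(frames):
        resid = render(latents[f], ray) - target       # rendered vs original pixel
        feat = np.concatenate([latents[f], [ray]])
        w = w - lr * resid * feat                      # update the shared model
        latents[f] = latents[f] - lr * resid * w[:2]   # update this frame's code
```

After training, querying with an interpolated latent code would correspond to rendering the scene at an in-between time, which is the point of conditioning the field on per-frame codes.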
  • Publication number: 20220164971
    Abstract: A method for depth sensing from an image of a projected pattern is performed at an electronic device with one or more processors and memory. The method includes receiving an image of a projection of an illumination pattern; for a portion of the image, selecting a candidate image of a plurality of candidate images by comparing the portion of the image with a plurality of candidate images; and determining a depth for the portion of the image based on depth information associated with the selected candidate image. Related electronic devices and computer-readable storage media are also disclosed.
    Type: Application
    Filed: December 15, 2020
    Publication date: May 26, 2022
    Inventors: James Steven Hegarty, Zijian Wang, Steven John Lovegrove, Yongjun Kim, Rajesh Lachhmandas Chhabria
  • Publication number: 20220139034
    Abstract: In one embodiment, a method includes instructing, at a first time, a camera having a plurality of pixel sensors to capture a first image of an environment comprising an object to determine a first object pose; determining, based on the first object pose, a predicted object pose of the object at a second time; generating pixel-activation instructions based on a buffer region around a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with a predicted camera pose, where the size of the buffer region may be dependent on predicted dynamics for the object; instructing, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions; and determining, based on the second image, a second object pose of the object.
    Type: Application
    Filed: December 3, 2021
    Publication date: May 5, 2022
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
  • Publication number: 20220082679
    Abstract: A method includes receiving a first wireless signal detected by a first device in an environment, the first wireless signal including a first distortion pattern caused by an object moving in the environment, receiving a second wireless signal detected by a second device in the environment, the second wireless signal including a second distortion pattern caused by the object moving in the environment, determining, by comparing the first distortion pattern to the second distortion pattern, that the first distortion pattern and the second distortion pattern correspond to a same movement event associated with the object moving in the environment, determining a timing offset between the first device and the second device based on information associated with the first distortion pattern and the second distortion pattern, and determining, based on the timing offset, temporal correspondences between data generated by the first device and data generated by the second device.
    Type: Application
    Filed: December 2, 2020
    Publication date: March 17, 2022
    Inventors: Minh Phuoc Vo, Kiran Kumar Somasundaram, Steven John Lovegrove
  • Patent number: 11222468
    Abstract: In one embodiment, a method includes instructing, at a first time, a camera with multiple pixel sensors to capture a first image of an environment comprising an object to determine a first object pose of the object. Based on the first object pose, the method determines a predicted object pose of the object at a second time. The method determines a predicted camera pose of the camera at the second time. The method generates pixel-activation instructions based on a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with the predicted camera pose. The method instructs, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions. The method determines, based on the second image, a second object pose of the object.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: January 11, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
  • Patent number: 11042749
    Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment and determining, from the real-time data, current object data for the environment. The current object data may include both state data and relationship data for objects in the environment. The method may also include determining object deltas between the current object data and prior object data from an event graph. The prior object data may include prior state data and prior relationship data for the objects. The method may include detecting an unknown state for one of the objects, inferring a state for the object based on the event graph, and updating the event graph based on the object deltas and the inferred state. The method may further include sending updated event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: June 22, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
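The delta computation above can be sketched with plain dictionaries: compare current object state against the prior event-graph snapshot, inferring any unknown state from the last known value before diffing. The dict layout and state encoding are assumptions for illustration.

```python
def object_deltas(current, prior):
    """Return per-object changes vs the prior snapshot; None means unknown."""
    deltas = {}
    for obj, state in current.items():
        if state is None:                # unknown state: infer from the graph
            state = prior.get(obj)
        if prior.get(obj) != state:
            deltas[obj] = state
    return deltas

prior = {"door": "closed", "light": "off"}
current = {"door": "open", "light": None}    # light state was not observed
d = object_deltas(current, prior)
```

Only the door produces a delta: the light's unknown state is inferred from the graph and matches the prior, so no update is needed.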
  • Publication number: 20210183102
    Abstract: Raycast-based calibration techniques are described for determining calibration parameters associated with components of a head mounted display (HMD) of an augmented reality (AR) system having one or more off-axis reflective combiners. In an example, a system comprises an image capture device and a processor executing a calibration engine. The calibration engine is configured to determine correspondences between target points and camera pixels based on images of the target acquired through an optical system, the optical system including optical surfaces and an optical combiner. Each optical surface is defined by a difference of optical index on opposing sides of the surface. At least one calibration parameter for the optical system is determined by mapping rays from each camera pixel to each target point via raytracing through the optical system, the raytracing being based on the index differences, shapes, and positions of the optical surfaces relative to the one or more cameras.
    Type: Application
    Filed: June 17, 2020
    Publication date: June 17, 2021
    Inventors: Huixuan Tang, Hauke Malte Strasdat, Qi Guo, Steven John Lovegrove
  • Patent number: 11004222
    Abstract: A depth measurement assembly (DMA) includes a structured light emitter, an augmented camera, and a controller. The structured light emitter projects structured light into a local area under instructions from the controller. The augmented camera generates image data of an object illuminated with the structured light pattern projected by the structured light emitter in accordance with camera instructions generated by the controller. The augmented camera includes a high speed computation tracking sensor that comprises a plurality of augmented photodetectors. Each augmented photodetector converts light to data and stores the data in its own memory unit. The controller receives the image data and determines depth information of the object in the local area based in part on the image data. The depth measurement unit can be incorporated into a head-mounted display (HMD).
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: May 11, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Xinqiao Liu, Richard Andrew Newcombe, Steven John Lovegrove, Renzo De Nardi
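Depth recovery in structured-light assemblies like the one above is commonly a triangulation between the emitter and the camera: depth = focal length × baseline / disparity. This is the generic relation, not a claim about this specific assembly's computation.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth in metres from pixel disparity."""
    return focal_px * baseline_m / disparity_px

# 600 px focal length, 5 cm emitter-camera baseline, 10 px observed disparity.
z = depth_from_disparity(600.0, 0.05, 10.0)
```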
  • Patent number: 10930077
    Abstract: The disclosed computer-implemented method may include determining a local position and a local orientation of a local device in an environment and receiving, by the local device and from a mapping system, object data for objects within the environment. The object data may include position data and orientation data for the objects and relationship data between the objects. The method may also include deriving, based on the object data received from the mapping system, and the local position and orientation of the local device, a contextual rendering of the objects that provides contextual data that modifies a user's view of the environment. The method may include displaying, using the local device, the contextual rendering of at least one of the plurality of objects to modify the user's view of the environment. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: February 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
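A first step in deriving such a contextual rendering is transforming an object's mapped world position into the local device frame using the device's position and orientation. The 2-D rotation-plus-translation below is a deliberately reduced sketch of that transform; the names and dimensionality are illustrative.

```python
import numpy as np

def world_to_local(p_world, device_pos, device_yaw):
    """Express a world-frame point in the device's local frame."""
    c, s = np.cos(device_yaw), np.sin(device_yaw)
    R = np.array([[c, -s], [s, c]])              # device orientation in world
    return R.T @ (np.asarray(p_world) - device_pos)

# A point 1 m east of a device facing north (yaw 90°) sits to its right.
local = world_to_local([1.0, 0.0], np.array([0.0, 0.0]), np.pi / 2)
```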