Patents by Inventor Steven John Lovegrove
Steven John Lovegrove has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12243273
Abstract: In one embodiment, a method includes initializing latent codes respectively associated with times associated with frames in a training video of a scene captured by a camera. For each of the frames, a system (1) generates rendered pixel values for a set of pixels in the frame by querying a NeRF (neural radiance field) using the latent code associated with the frame, a camera viewpoint associated with the frame, and ray directions associated with the set of pixels, and (2) updates the latent code associated with the frame and the NeRF based on comparisons between the rendered pixel values and original pixel values for the set of pixels. Once trained, the system renders output frames for an output video of the scene, wherein each output frame is rendered by querying the updated NeRF using one of the updated latent codes corresponding to a desired time associated with the output frame.
Type: Grant
Filed: January 7, 2022
Date of Patent: March 4, 2025
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Zhaoyang Lv, Miroslava Slavcheva, Tianye Li, Michael Zollhoefer, Simon Gareth Green, Tanner Schmidt, Michael Goesele, Steven John Lovegrove, Christoph Lassner, Changil Kim
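The train-then-render loop in this abstract can be sketched with a toy stand-in: here the "NeRF" is a tiny linear model, and the dimensions, learning rate, and synthetic video are all illustrative assumptions, not the patented implementation. The key structure matches the abstract: per-frame latent codes and the shared model are jointly updated from comparisons between rendered and original pixel values.

```python
import numpy as np

rng = np.random.default_rng(0)
D_Z, D_RAY, FRAMES, RAYS = 4, 3, 6, 16            # toy sizes (assumed)

rays = rng.normal(size=(FRAMES, RAYS, D_RAY))     # ray directions per frame
W_true = rng.normal(size=D_Z + D_RAY)             # hidden "scene"
z_true = rng.normal(size=(FRAMES, D_Z))           # hidden per-frame codes
target = np.concatenate(
    [np.repeat(z_true[:, None, :], RAYS, axis=1), rays], axis=-1) @ W_true

W = 0.01 * rng.normal(size=D_Z + D_RAY)           # learnable "NeRF" weights
z = 0.01 * rng.normal(size=(FRAMES, D_Z))         # learnable latent codes

def mse():
    feats = np.concatenate([np.repeat(z[:, None, :], RAYS, axis=1), rays], axis=-1)
    return float(np.mean((feats @ W - target) ** 2))

loss_before = mse()
lr = 0.05
for _ in range(300):
    for f in range(FRAMES):
        # (1) render pixel values for this frame using its latent code
        feats = np.concatenate([np.repeat(z[f][None, :], RAYS, axis=0), rays[f]], axis=-1)
        err = feats @ W - target[f]               # rendered vs. original pixels
        # (2) update both the shared model and this frame's latent code
        W -= lr * (err @ feats) / RAYS
        z[f] -= lr * err.sum() * W[:D_Z] / RAYS
loss_after = mse()
```

After training, rendering a frame for a desired time amounts to evaluating the model with that time's updated latent code.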
-
Patent number: 11804010
Abstract: In one embodiment, a computing system instructs, at a first time, a camera having a plurality of pixel sensors to use the plurality of pixel sensors to capture a first image of an environment comprising an object. The computing system predicts, using at least the first image, a projection of the object appearing in a virtual image plane associated with a predicted camera pose at a second time. The computing system determines, based on the predicted projection of the object, a first region of pixels and a second region of pixels. The computing system generates pixel-activation instructions for the first region of pixels and the second region of pixels. The computing system instructs the camera to capture a second image of the environment at the second time according to the pixel-activation instructions.
Type: Grant
Filed: December 21, 2022
Date of Patent: October 31, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
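The two pixel regions described in this abstract can be sketched as a per-pixel activation mask: one region for the predicted projection and a second, surrounding region that absorbs prediction error. The function name, bounding-box encoding, and guard-band width are assumptions for illustration.

```python
import numpy as np

def pixel_activation_mask(h, w, bbox, guard_px):
    """Build per-pixel capture instructions from a predicted projection.

    bbox = (x0, y0, x1, y1) is the object's predicted projection on the
    virtual image plane. Region 1 covers the projection itself; region 2 is
    a guard band around it; all remaining pixel sensors stay off.
    """
    x0, y0, x1, y1 = bbox
    mask = np.zeros((h, w), dtype=np.uint8)                  # 0 = sensor off
    mask[max(y0 - guard_px, 0):min(y1 + guard_px, h),
         max(x0 - guard_px, 0):min(x1 + guard_px, w)] = 2    # region 2: guard band
    mask[y0:y1, x0:x1] = 1                                   # region 1: object
    return mask

m = pixel_activation_mask(10, 10, (3, 3, 6, 6), 1)
```

Capturing only these regions lets the camera skip pixel sensors that are unlikely to see the object at the second time.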
-
Patent number: 11762080
Abstract: A method includes receiving a first wireless signal detected by a first device in an environment, the first wireless signal including a first distortion pattern caused by an object moving in the environment, receiving a second wireless signal detected by a second device in the environment, the second wireless signal including a second distortion pattern caused by the object moving in the environment, determining, by comparing the first distortion pattern to the second distortion pattern, that the first distortion pattern and the second distortion pattern correspond to a same movement event associated with the object moving in the environment, determining a timing offset between the first device and the second device based on information associated with the first distortion pattern and the second distortion pattern, and determining, based on the timing offset, temporal correspondences between data generated by the first device and data generated by the second device.
Type: Grant
Filed: December 2, 2020
Date of Patent: September 19, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Minh Phuoc Vo, Kiran Kumar Somasundaram, Steven John Lovegrove
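Once distortion patterns on the two devices have been matched to the same movement events, the timing-offset step reduces to comparing their timestamps. A minimal sketch, assuming the events are already matched and using a simple mean-difference estimator (the function names are hypothetical):

```python
def estimate_offset(events_a, events_b):
    """Clock offset between two devices, from timestamps of distortion
    events already matched as the same physical movement."""
    assert len(events_a) == len(events_b) > 0
    return sum(b - a for a, b in zip(events_a, events_b)) / len(events_a)

def to_device_a_time(t_b, offset):
    """Map a device-B timestamp onto device A's clock, establishing
    temporal correspondences between the two data streams."""
    return t_b - offset

# Device B's clock runs 0.25 s ahead of device A's.
a = [1.00, 2.50, 4.10]
b = [1.25, 2.75, 4.35]
offset = estimate_offset(a, b)
```

With the offset known, any sample from one device can be placed on the other device's timeline.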
-
Publication number: 20230260200
Abstract: In one embodiment, a method includes determining a viewing direction of a scene and rendering an image of the scene for the viewing direction, wherein the rendering comprises: for each pixel of the image, casting a view ray into the scene, and for a particular sampling point along the view ray, determining a pixel radiance associated with surface light field (SLF) and opacity, which comprises identifying multiple voxels within a threshold distance to the particular sampling point, wherein each of the voxels is associated with a respective local plane, for each of the voxels computing a pixel radiance associated with SLF and opacity based on locations of the particular sampling point and the local plane associated with that voxel, and determining the pixel radiance associated with SLF and opacity for the particular sampling point based on interpolating the pixel radiances associated with SLF and opacity associated with the multiple voxels.
Type: Application
Filed: January 27, 2023
Publication date: August 17, 2023
Inventors: Samir Aroudj, Michael Goesele, Richard Andrew Newcombe, Tanner Schmidt, Florian Eddy Robert Ilg, Steven John Lovegrove
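The per-sampling-point interpolation in this abstract can be sketched as follows. Each voxel carries a local plane plus radiance/opacity values; nearby voxels are weighted by the sampling point's distance to their planes. The inverse-distance weighting and the flat per-voxel radiance are assumptions standing in for the full SLF model.

```python
import numpy as np

def sample_point_radiance(point, voxels, threshold):
    """Interpolate radiance and opacity at a ray sampling point from voxels
    within a threshold distance, weighting each voxel by the point's
    distance to that voxel's local plane (inverse-distance, assumed)."""
    acc = np.zeros(2)                 # accumulated (radiance, opacity)
    total_w = 0.0
    for center, normal, radiance, opacity in voxels:
        if np.linalg.norm(point - center) > threshold:
            continue                  # voxel too far from the sampling point
        d = abs(np.dot(normal, point - center))   # distance to local plane
        w = 1.0 / (d + 1e-9)
        acc += w * np.array([radiance, opacity])
        total_w += w
    return acc / total_w if total_w > 0 else None

# Two voxels straddling the sampling point, local planes through their centers.
voxels = [
    (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 1.0, 0.2),
    (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), 3.0, 0.6),
]
radiance, opacity = sample_point_radiance(np.array([0.0, 0.0, 0.5]), voxels, 2.0)
```

A point midway between the two planes gets equal weights, so the result is the average of the two voxels' values.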
-
Patent number: 11676293
Abstract: A method for depth sensing from an image of a projected pattern is performed at an electronic device with one or more processors and memory. The method includes receiving an image of a projection of an illumination pattern; for a portion of the image, selecting a candidate image of a plurality of candidate images by comparing the portion of the image with a plurality of candidate images; and determining a depth for the portion of the image based on depth information associated with the selected candidate image. Related electronic devices and computer-readable storage media are also disclosed.
Type: Grant
Filed: December 15, 2020
Date of Patent: June 13, 2023
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: James Steven Hegarty, Zijian Wang, Steven John Lovegrove, Yongjun Kim, Rajesh Lachhmandas Chhabria
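The candidate-selection step can be sketched as a nearest-match lookup: compare the observed patch against candidate patches, each annotated with the depth at which the pattern would appear that way. Sum-of-squared-differences is an assumed similarity metric, and the 1-D patches are illustrative only.

```python
def select_depth(patch, candidates):
    """Pick the candidate image most similar to the observed patch and
    return the depth associated with it (SSD metric, an assumption)."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_patch, best_depth = min(candidates, key=lambda c: ssd(patch, c[0]))
    return best_depth

# Candidate patches simulate how the projected dot shifts with depth.
candidates = [
    ([0, 9, 0, 0], 0.5),   # pattern as it would appear at 0.5 m
    ([0, 0, 9, 0], 1.0),   # ... at 1.0 m
    ([0, 0, 0, 9], 2.0),   # ... at 2.0 m
]
depth = select_depth([0, 1, 8, 0], candidates)
```

The noisy observed patch is closest to the 1.0 m candidate, so that candidate's depth is assigned to the image portion.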
-
Publication number: 20230169686
Abstract: In one embodiment, a method includes accessing a calibration model for a camera rig. The method includes accessing multiple observations of an environment captured by the camera rig from multiple poses in the environment. The method includes generating an environmental model including geometry of the environment based on at least the observations, the poses, and the calibration model. The method includes determining, for one or more of the poses, one or more predicted observations of the environment based on the environmental model and the poses. The method includes comparing the predicted observations to the observations corresponding to the poses from which the predicted observations were determined. The method includes revising the calibration model based on the comparison. The method includes revising the environmental model based on at least a set of observations of the environment and the revised calibration model.
Type: Application
Filed: October 31, 2022
Publication date: June 1, 2023
Inventors: Steven John Lovegrove, Yuheng Ren
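The alternating refinement described in this abstract can be sketched in one dimension: the "calibration model" is a single scale factor s, the "environmental model" a single landmark position L, and an observation from pose p is s * (L - p). All values and the least-squares update rule are illustrative assumptions.

```python
# Synthetic rig: true calibration scale and landmark position to recover.
poses = [0.0, 1.0, 2.0, 3.0]
s_true, L_true = 1.6, 5.0
obs = [s_true * (L_true - p) for p in poses]

s, L = 1.0, 0.0                          # initial calibration / environment
for _ in range(200):
    # 1) rebuild the environmental model from observations + current calibration
    L = sum(o / s + p for o, p in zip(obs, poses)) / len(poses)
    # 2) predict observations, compare to the real ones, revise the calibration
    predicted = [s * (L - p) for p in poses]
    residuals = [o - q for o, q in zip(obs, predicted)]
    s += (sum(r * (L - p) for r, p in zip(residuals, poses))
          / sum((L - p) ** 2 for p in poses))   # least-squares revision of s
```

With multiple poses the scale and the geometry are separable, and the alternation converges to the true values.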
-
Publication number: 20230119703
Abstract: In one embodiment, a computing system instructs, at a first time, a camera having a plurality of pixel sensors to use the plurality of pixel sensors to capture a first image of an environment comprising an object. The computing system predicts, using at least the first image, a projection of the object appearing in a virtual image plane associated with a predicted camera pose at a second time. The computing system determines, based on the predicted projection of the object, a first region of pixels and a second region of pixels. The computing system generates pixel-activation instructions for the first region of pixels and the second region of pixels. The computing system instructs the camera to capture a second image of the environment at the second time according to the pixel-activation instructions.
Type: Application
Filed: December 21, 2022
Publication date: April 20, 2023
Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
-
Patent number: 11625862
Abstract: In one embodiment, a method includes accessing a digital image captured by a camera that is connected to a machine-detectable object, detecting a reflection of the machine-detectable object in the digital image, computing, in response to the detection, a plane that is coincident with a reflective surface associated with the reflection, determining a boundary of the reflective surface in the plane based on at least one of a plurality of cues, and storing information associated with the reflective surface, where the information includes a pose of the reflective surface and the boundary of the reflective surface in a 3D model of a physical environment, and where the information associated with the reflective surface and the 3D model are configured to be used to render a reconstruction of the physical environment.
Type: Grant
Filed: October 16, 2020
Date of Patent: April 11, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Michael Goesele, Julian Straub, Thomas John Whelan, Richard Andrew Newcombe, Steven John Lovegrove
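The plane-computation step rests on a simple geometric fact: a detected reflection places a virtual copy of the machine-detectable object directly across the mirror, so the mirror plane is the perpendicular bisector of the segment between the object and its virtual position. A minimal sketch (function names and positions are assumptions):

```python
import numpy as np

def plane_from_reflection(marker_pos, reflected_pos):
    """Recover the mirror plane from a marker and its detected reflection.

    The plane normal points along the marker-to-virtual-position segment
    and the plane passes through its midpoint. Returns (n, d) for n·x = d.
    """
    n = reflected_pos - marker_pos
    n = n / np.linalg.norm(n)
    midpoint = (marker_pos + reflected_pos) / 2.0
    return n, float(n @ midpoint)

def reflect(x, n, d):
    """Reflect a point across plane n·x = d (used to verify the estimate)."""
    return x - 2.0 * (n @ x - d) * n

marker = np.array([0.0, 0.0, 1.0])
virtual = np.array([0.0, 0.0, 5.0])   # reflection appears "behind" the mirror
n, d = plane_from_reflection(marker, virtual)
mirror_image = reflect(marker, n, d)
```

Reflecting the marker across the recovered plane reproduces the virtual position, confirming the plane is coincident with the reflective surface.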
-
Patent number: 11587254
Abstract: Raycast-based calibration techniques are described for determining calibration parameters associated with components of a head mounted display (HMD) of an augmented reality (AR) system having one or more off-axis reflective combiners. In an example, a system comprises an image capture device and a processor executing a calibration engine. The calibration engine is configured to determine correspondences between target points and camera pixels based on images of the target acquired through an optical system, the optical system including optical surfaces and an optical combiner. Each optical surface is defined by a difference of optical index on opposing sides of the surface. At least one calibration parameter for the optical system is determined by mapping rays from each camera pixel to each target point via raytracing through the optical system, the raytracing being based on the index differences, shapes, and positions of the optical surfaces relative to the one or more cameras.
Type: Grant
Filed: June 17, 2020
Date of Patent: February 21, 2023
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Huixuan Tang, Hauke Malte Strasdat, Qi Guo, Steven John Lovegrove
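The raytracing step that this calibration relies on bends each ray at every surface according to the index difference across it. The core primitive is Snell's law in vector form; a minimal sketch (the helper name and test directions are assumptions):

```python
import numpy as np

def refract(d, n, eta):
    """Refract direction d at a surface with normal n (Snell's law, vector
    form). eta is the ratio of refractive indices across the surface.
    Returns None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -float(n @ d)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                       # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

d_in = np.array([0.6, 0.0, -0.8])        # ray hitting a z-up surface
same = refract(d_in, np.array([0.0, 0.0, 1.0]), 1.0)        # no index change
bent = refract(d_in, np.array([0.0, 0.0, 1.0]), 2.0 / 3.0)  # air into n=1.5
```

Chaining one `refract` call per optical surface maps a camera-pixel ray to a target point; the calibration parameters are then adjusted until those mapped rays agree with the observed correspondences.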
-
Patent number: 11587210
Abstract: In one embodiment, a method includes a computer system accessing a curvilinear image captured using a camera lens, generating multiple rectilinear images from the curvilinear image based at least in part on one or more calibration parameters associated with the camera lens, identifying semantic information in one or more of the rectilinear images by processing each of the multiple rectilinear images using a machine-learning model configured to identify semantic information in rectilinear images, and identifying semantic information in the curvilinear image based on the identified semantic information in the one or more rectilinear images.
Type: Grant
Filed: March 11, 2020
Date of Patent: February 21, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Yu Fan Chen, Kiran Kumar Somasundaram, Steven John Lovegrove, Yujun Shen
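Generating a rectilinear image from a curvilinear one amounts to a per-pixel remapping driven by the lens calibration. A minimal sketch of that mapping, assuming an equidistant fisheye model (r = f·θ) as a stand-in for the patent's calibration parameters:

```python
import math

def rectilinear_to_fisheye(x, y, f_rect, f_fish):
    """Map a rectilinear pixel (relative to the principal point) to the
    corresponding pixel in the curvilinear image, under an equidistant
    fisheye model (an assumed lens model)."""
    r_rect = math.hypot(x, y)
    theta = math.atan2(r_rect, f_rect)    # angle of the pixel's ray off-axis
    r_fish = f_fish * theta               # equidistant projection: r = f * theta
    if r_rect == 0.0:
        return 0.0, 0.0                   # optical axis maps to the center
    return r_fish * x / r_rect, r_fish * y / r_rect

fx, fy = rectilinear_to_fisheye(100.0, 0.0, 100.0, 80.0)   # 45-degree ray
cx, cy = rectilinear_to_fisheye(0.0, 0.0, 100.0, 80.0)     # image center
```

Sampling the curvilinear image through this map produces the rectilinear views fed to the machine-learning model; the identified semantics are carried back through the same correspondence.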
-
Patent number: 11562534
Abstract: In one embodiment, a method includes instructing, at a first time, a camera having a plurality of pixel sensors to capture a first image of an environment comprising an object to determine a first object pose; determining, based on the first object pose, a predicted object pose of the object at a second time; generating pixel-activation instructions based on a buffer region around a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with a predicted camera pose, where the size of the buffer region may be dependent on predicted dynamics for the object; instructing, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions; and determining, based on the second image, a second object pose of the object.
Type: Grant
Filed: December 3, 2021
Date of Patent: January 24, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
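The dynamics-dependent buffer sizing can be sketched with a simple linear rule: the guard band around the projected 3D model grows with how far the object's predicted motion could carry it before the second capture. The function name and the linear formula are assumptions, not the patented sizing rule.

```python
def buffer_px(speed_px_per_s, capture_latency_s, uncertainty_px=2):
    """Guard-band width (in pixels) around the projected object model,
    scaled by the object's predicted dynamics (assumed linear rule)."""
    return uncertainty_px + int(round(speed_px_per_s * capture_latency_s))

slow = buffer_px(10.0, 0.033)     # slow object: small guard band
fast = buffer_px(600.0, 0.033)    # fast object: much larger guard band
```

A fast-moving object thus activates a wider ring of pixel sensors around its predicted projection, while a nearly static one activates barely more than the projection itself.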
-
Patent number: 11488324
Abstract: In one embodiment, a method includes accessing a calibration model for a camera rig. The method includes accessing multiple observations of an environment captured by the camera rig from multiple poses in the environment. The method includes generating an environmental model including geometry of the environment based on at least the observations, the poses, and the calibration model. The method includes determining, for one or more of the poses, one or more predicted observations of the environment based on the environmental model and the poses. The method includes comparing the predicted observations to the observations corresponding to the poses from which the predicted observations were determined. The method includes revising the calibration model based on the comparison. The method includes revising the environmental model based on at least a set of observations of the environment and the revised calibration model.
Type: Grant
Filed: July 22, 2019
Date of Patent: November 1, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Steven John Lovegrove, Yuheng Ren
-
Publication number: 20220239844
Abstract: In one embodiment, a method includes initializing latent codes respectively associated with times associated with frames in a training video of a scene captured by a camera. For each of the frames, a system (1) generates rendered pixel values for a set of pixels in the frame by querying a NeRF (neural radiance field) using the latent code associated with the frame, a camera viewpoint associated with the frame, and ray directions associated with the set of pixels, and (2) updates the latent code associated with the frame and the NeRF based on comparisons between the rendered pixel values and original pixel values for the set of pixels. Once trained, the system renders output frames for an output video of the scene, wherein each output frame is rendered by querying the updated NeRF using one of the updated latent codes corresponding to a desired time associated with the output frame.
Type: Application
Filed: January 7, 2022
Publication date: July 28, 2022
Inventors: Zhaoyang Lv, Miroslava Slavcheva, Tianye Li, Michael Zollhoefer, Simon Gareth Green, Tanner Schmidt, Michael Goesele, Steven John Lovegrove, Christoph Lassner, Changil Kim
-
Publication number: 20220164971
Abstract: A method for depth sensing from an image of a projected pattern is performed at an electronic device with one or more processors and memory. The method includes receiving an image of a projection of an illumination pattern; for a portion of the image, selecting a candidate image of a plurality of candidate images by comparing the portion of the image with a plurality of candidate images; and determining a depth for the portion of the image based on depth information associated with the selected candidate image. Related electronic devices and computer-readable storage media are also disclosed.
Type: Application
Filed: December 15, 2020
Publication date: May 26, 2022
Inventors: James Steven Hegarty, Zijian Wang, Steven John Lovegrove, Yongjun Kim, Rajesh Lachhmandas Chhabria
-
Publication number: 20220139034
Abstract: In one embodiment, a method includes instructing, at a first time, a camera having a plurality of pixel sensors to capture a first image of an environment comprising an object to determine a first object pose; determining, based on the first object pose, a predicted object pose of the object at a second time; generating pixel-activation instructions based on a buffer region around a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with a predicted camera pose, where the size of the buffer region may be dependent on predicted dynamics for the object; instructing, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions; and determining, based on the second image, a second object pose of the object.
Type: Application
Filed: December 3, 2021
Publication date: May 5, 2022
Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
-
Publication number: 20220082679
Abstract: A method includes receiving a first wireless signal detected by a first device in an environment, the first wireless signal including a first distortion pattern caused by an object moving in the environment, receiving a second wireless signal detected by a second device in the environment, the second wireless signal including a second distortion pattern caused by the object moving in the environment, determining, by comparing the first distortion pattern to the second distortion pattern, that the first distortion pattern and the second distortion pattern correspond to a same movement event associated with the object moving in the environment, determining a timing offset between the first device and the second device based on information associated with the first distortion pattern and the second distortion pattern, and determining, based on the timing offset, temporal correspondences between data generated by the first device and data generated by the second device.
Type: Application
Filed: December 2, 2020
Publication date: March 17, 2022
Inventors: Minh Phuoc Vo, Kiran Kumar Somasundaram, Steven John Lovegrove
-
Patent number: 11222468
Abstract: In one embodiment, a method includes instructing, at a first time, a camera with multiple pixel sensors to capture a first image of an environment comprising an object to determine a first object pose of the object. Based on the first object pose, the method determines a predicted object pose of the object at a second time. The method determines a predicted camera pose of the camera at the second time. The method generates pixel-activation instructions based on a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with the predicted camera pose. The method instructs, at the second time, the camera to use a subset of the plurality of pixel sensors to capture a second image of the environment according to the pixel-activation instructions. The method determines, based on the second image, a second object pose of the object.
Type: Grant
Filed: November 2, 2020
Date of Patent: January 11, 2022
Assignee: Facebook Technologies, LLC
Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
-
Patent number: 11042749
Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment and determining, from the real-time data, current object data for the environment. The current object data may include both state data and relationship data for objects in the environment. The method may also include determining object deltas between the current object data and prior object data from an event graph. The prior object data may include prior state data and prior relationship data for the objects. The method may include detecting an unknown state for one of the objects, inferring a state for the object based on the event graph, and updating the event graph based on the object deltas and the inferred state. The method may further include sending updated event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: March 18, 2020
Date of Patent: June 22, 2021
Assignee: Facebook Technologies, LLC
Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
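The delta computation and unknown-state inference described here can be sketched with dictionaries standing in for the event graph's state and relationship data. The key names and the carry-forward inference rule are illustrative assumptions.

```python
def object_deltas(current, prior):
    """Deltas between current and prior object data: every key whose value
    changed, mapped to its (prior, current) pair."""
    return {key: (prior.get(key), value)
            for key, value in current.items() if prior.get(key) != value}

def resolve_unknown(current, prior):
    """Infer unknown states by carrying forward the last state recorded in
    the event graph (one simple inference rule, an assumption)."""
    return {key: (prior.get(key) if value == "unknown" else value)
            for key, value in current.items()}

prior = {"door.state": "closed", "keys.on": "table"}       # event graph snapshot
current = {"door.state": "open", "keys.on": "unknown"}     # latest sensor data
resolved = resolve_unknown(current, prior)
deltas = object_deltas(resolved, prior)
```

The event graph would then be updated with `deltas` (only the door changed) and the updated graph data pushed back out to the devices.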
-
Publication number: 20210183102
Abstract: Raycast-based calibration techniques are described for determining calibration parameters associated with components of a head mounted display (HMD) of an augmented reality (AR) system having one or more off-axis reflective combiners. In an example, a system comprises an image capture device and a processor executing a calibration engine. The calibration engine is configured to determine correspondences between target points and camera pixels based on images of the target acquired through an optical system, the optical system including optical surfaces and an optical combiner. Each optical surface is defined by a difference of optical index on opposing sides of the surface. At least one calibration parameter for the optical system is determined by mapping rays from each camera pixel to each target point via raytracing through the optical system, the raytracing being based on the index differences, shapes, and positions of the optical surfaces relative to the one or more cameras.
Type: Application
Filed: June 17, 2020
Publication date: June 17, 2021
Inventors: Huixuan Tang, Hauke Malte Strasdat, Qi Guo, Steven John Lovegrove
-
Patent number: 11004222
Abstract: A depth measurement assembly (DMA) includes a structured light emitter, an augmented camera, and a controller. The structured light emitter projects structured light into a local area under instructions from the controller. The augmented camera generates image data of an object illuminated with the structured light pattern projected by the structured light emitter in accordance with camera instructions generated by the controller. The augmented camera includes a high-speed computation tracking sensor that comprises a plurality of augmented photodetectors. Each augmented photodetector converts light to data and stores the data in its own memory unit. The controller receives the image data and determines depth information of the object in the local area based in part on the image data. The DMA can be incorporated into a head-mounted display (HMD).
Type: Grant
Filed: May 4, 2020
Date of Patent: May 11, 2021
Assignee: Facebook Technologies, LLC
Inventors: Xinqiao Liu, Richard Andrew Newcombe, Steven John Lovegrove, Renzo De Nardi
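The controller's depth computation in a structured-light setup like this typically reduces to triangulation: the shift (disparity) between where a pattern feature is emitted and where the camera observes it determines range. A minimal pinhole-model sketch; the function name and the focal length, baseline, and disparity values are illustrative assumptions.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from the disparity between the emitted structured-
    light feature and where the camera observes it (pinhole model)."""
    return focal_px * baseline_m / disparity_px

# 500 px focal length, 5 cm emitter-camera baseline, 25 px observed shift.
z = depth_from_disparity(25.0, 500.0, 0.05)
```

Larger disparities correspond to nearer objects; the per-photodetector memory in the abstract lets each pixel record its observation for this computation at high speed.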