Patents by Inventor Yuheng Ren

Yuheng Ren has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11132834
    Abstract: The disclosed computer-implemented method may include receiving, from a first device in an environment, real-time data associated with the environment and generating map data for the environment based on the real-time data received from the first device. The method may include creating, by merging the map data of the first device with aggregate map data associated with at least one other device, a joint anchor graph that is free of identifiable information, and hosting the joint anchor graph for a shared artificial reality session between the first device and the at least one other device. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: September 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Yuheng Ren, Yajie Yan
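A minimal sketch of the merging step this abstract describes: per-device anchor graphs are combined into one joint graph whose nodes keep only geometric data, with opaque identifiers replacing anything that could be traced back to a device. The dictionary-based graph layout and field names here are illustrative assumptions, not the patent's actual data structures.

```python
import uuid

def merge_anchor_graphs(graphs):
    """Merge per-device anchor graphs into one joint anchor graph whose
    nodes carry only geometric data (no device or user identifiers)."""
    joint = {}
    remap = {}  # (device_id, local_anchor_id) -> opaque joint id
    for device_id, graph in graphs.items():
        for local_id, anchor in graph.items():
            # Assign an opaque identifier so joint-graph nodes cannot be
            # traced back to the contributing device.
            joint_id = remap.setdefault((device_id, local_id), uuid.uuid4().hex)
            joint[joint_id] = {
                "pose": anchor["pose"],  # keep geometry only
                "edges": [],             # filled in the second pass
            }
    # Second pass: remap intra-device edges onto the opaque identifiers.
    for device_id, graph in graphs.items():
        for local_id, anchor in graph.items():
            src = remap[(device_id, local_id)]
            for nbr in anchor["edges"]:
                joint[src]["edges"].append(remap[(device_id, nbr)])
    return joint
```

The two-pass structure ensures every anchor has its opaque identifier before any edge is rewritten.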
  • Patent number: 11132841
    Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of a client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: September 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
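The client-side flow in this abstract can be sketched as a small function that consults a relocalization service, fetches an asset by anchor identifier, and positions it relative to the anchor. The `relocalize` and `fetch_asset` callables, the `AnchorPoint` class, and the `offset` field are hypothetical stand-ins for the services and data the patent describes.

```python
from dataclasses import dataclass

@dataclass
class AnchorPoint:
    anchor_id: str
    mapped_position: tuple  # anchor position in the shared map frame

def place_asset(env_info, relocalize, fetch_asset):
    """Client flow: relocalize against the shared map, obtain a digital
    asset for the returned anchor, and compute its render position."""
    anchor, device_position = relocalize(env_info)   # anchor + device pose
    asset = fetch_asset(anchor.anchor_id)            # asset-management lookup
    ax, ay, az = anchor.mapped_position
    ox, oy, oz = asset["offset"]                     # offset from the anchor
    return asset["name"], (ax + ox, ay + oy, az + oz)
```

Passing the services in as callables keeps the sketch testable without any network layer.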
  • Patent number: 11042749
    Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment and determining, from the real-time data, current object data for the environment. The current object data may include both state data and relationship data for objects in the environment. The method may also include determining object deltas between the current object data and prior object data from an event graph. The prior object data may include prior state data and prior relationship data for the objects. The method may include detecting an unknown state for one of the objects, inferring a state for the object based on the event graph, and updating the event graph based on the object deltas and the inferred state. The method may further include sending updated event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: June 22, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
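Two of the steps above, computing object deltas against prior event-graph data and inferring an unknown state, can be sketched as below. Representing object state as flat dictionaries and inferring via the last known state in the history is a simplifying assumption; the patent's event graph is richer.

```python
def object_deltas(current, prior):
    """Return, per object, only the fields that changed since the prior
    object data recorded in the event graph."""
    deltas = {}
    for obj_id, cur in current.items():
        prev = prior.get(obj_id, {})
        changed = {k: v for k, v in cur.items() if prev.get(k) != v}
        if changed:
            deltas[obj_id] = changed
    return deltas

def infer_unknown_states(current, event_graph):
    """Fill states reported as None using the most recent known state in
    the event graph (a simple last-known-state heuristic)."""
    inferred = dict(current)
    for obj_id, state in current.items():
        if state.get("state") is None:
            history = event_graph.get(obj_id, [])
            if history:
                inferred[obj_id] = {**state, "state": history[-1]["state"]}
    return inferred
```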
  • Patent number: 10930077
    Abstract: The disclosed computer-implemented method may include determining a local position and a local orientation of a local device in an environment and receiving, by the local device and from a mapping system, object data for objects within the environment. The object data may include position data and orientation data for the objects and relationship data between the objects. The method may also include deriving, based on the object data received from the mapping system, and the local position and orientation of the local device, a contextual rendering of the objects that provides contextual data that modifies a user's view of the environment. The method may include displaying, using the local device, the contextual rendering of at least one of the plurality of objects to modify the user's view of the environment. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: February 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
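Deriving a contextual rendering from the device's position and orientation requires expressing each object's mapped position in the device's local frame. A minimal 2D version of that transform (position plus heading only, no full 3D pose) looks like:

```python
import math

def to_local_frame(obj_position, device_position, device_yaw):
    """Express an object's world-frame 2D position in the local frame of a
    device at `device_position` with heading `device_yaw` (radians)."""
    dx = obj_position[0] - device_position[0]
    dy = obj_position[1] - device_position[1]
    # Rotate the offset by -yaw to undo the device's heading.
    c, s = math.cos(-device_yaw), math.sin(-device_yaw)
    return (c * dx - s * dy, s * dx + c * dy)
```

A renderer would apply this transform to every object pose received from the mapping system before drawing overlays.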
  • Publication number: 20210042994
    Abstract: The disclosed computer-implemented method may include receiving, from a first device in an environment, real-time data associated with the environment and generating map data for the environment based on the real-time data received from the first device. The method may include creating, by merging the map data of the first device with aggregate map data associated with at least one other device, a joint anchor graph that is free of identifiable information, and hosting the joint anchor graph for a shared artificial reality session between the first device and the at least one other device. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: August 9, 2019
    Publication date: February 11, 2021
    Inventors: Richard Andrew Newcombe, Yuheng Ren, Yajie Yan
  • Publication number: 20210027492
    Abstract: In one embodiment, a method includes accessing a calibration model for a camera rig. The method includes accessing multiple observations of an environment captured by the camera rig from multiple poses in the environment. The method includes generating an environmental model including geometry of the environment based on at least the observations, the poses, and the calibration model. The method includes determining, for one or more of the poses, one or more predicted observations of the environment based on the environmental model and the poses. The method includes comparing the predicted observations to the observations corresponding to the poses from which the predicted observations were determined. The method includes revising the calibration model based on the comparison. The method includes revising the environmental model based on at least a set of observations of the environment and the revised calibration model.
    Type: Application
    Filed: July 22, 2019
    Publication date: January 28, 2021
    Inventors: Steven John Lovegrove, Yuheng Ren
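The predict-compare-revise loop in this abstract can be illustrated with a deliberately tiny calibration model: a single focal length in a 1D pinhole camera, revised in closed form from reprojection residuals. The real method calibrates a full camera rig jointly with the environmental model; this is only the shape of one revision step.

```python
def refine_focal(f, points, observed_u):
    """One calibration-revision step for a toy single-parameter pinhole
    model: predict pixel coordinate u = f * X / Z for each known point,
    compare predictions with observations, and solve in closed form for
    the focal length minimizing squared reprojection error."""
    ratios = [x / z for x, z in points]
    predicted = [f * r for r in ratios]
    residuals = [obs - pred for obs, pred in zip(observed_u, predicted)]
    # Least-squares solution for the single focal parameter.
    f_new = (sum(obs * r for obs, r in zip(observed_u, ratios))
             / sum(r * r for r in ratios))
    return f_new, residuals
```

In the full method this revision alternates with re-estimating the environmental model, since the geometry used for prediction itself depends on the calibration.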
  • Patent number: 10901215
    Abstract: The disclosed computer-implemented method may include identifying, within a real-world environment, a position of a user relative to a safety boundary. The position of the user is identified by a head-mounted display system comprising a display device. The display device is configured to at least partially obscure visibility of the real-world environment to the user. The method may further include selecting, based on the position of the user, at least a portion of a model of the real-world environment, rendering the portion of the model of the real-world environment, and displaying the rendered portion of the model of the real-world environment via the display device as a notification of the position of the user relative to the safety boundary. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: January 26, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Newcombe, Simon Gareth Green, Steven John Lovegrove, Renzo De Nardi, Yuheng Ren, Thomas John Whelan
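Identifying the user's position relative to a safety boundary reduces, in the simplest case, to a point-to-polygon distance test that gates when the rendered real-world model is shown. The polygonal boundary representation and the 0.5 m threshold are illustrative assumptions.

```python
import math

def distance_to_boundary(position, boundary):
    """Shortest 2D distance from the user's position to a polygonal
    safety boundary given as a list of vertices (closed implicitly)."""
    px, py = position
    best = float("inf")
    for i in range(len(boundary)):
        ax, ay = boundary[i]
        bx, by = boundary[(i + 1) % len(boundary)]
        abx, aby = bx - ax, by - ay
        # Project the point onto the segment, clamped to its endpoints.
        t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
        t = max(0.0, min(1.0, t))
        cx, cy = ax + t * abx, ay + t * aby
        best = min(best, math.hypot(px - cx, py - cy))
    return best

def should_show_model(position, boundary, threshold=0.5):
    """Display the rendered real-world model when the user nears the
    boundary (threshold in the same units as the boundary, e.g. meters)."""
    return distance_to_boundary(position, boundary) < threshold
```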
  • Patent number: 10846913
    Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined, which are the closest three images in the plurality of images to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image location, first and second best pixel values are calculated using the first and second set of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: November 24, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
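Two pieces of this pipeline are easy to sketch: finding the three capture positions closest to the desired view, and blending the two best candidate pixel values. The distance-based nearest-view search and the linear blend weight are simplifying assumptions; the patent's candidate transformations are not modeled here.

```python
import math

def three_nearest(camera_positions, target):
    """Indices of the three capture positions closest to the desired
    image location in the light field."""
    order = sorted(range(len(camera_positions)),
                   key=lambda i: math.dist(camera_positions[i], target))
    return order[:3]

def blend_pixels(value_a, value_b, weight_a):
    """Blend two candidate pixel values (e.g. the best values from the
    two candidate-transformation sets) into one interpolated pixel."""
    return tuple(weight_a * a + (1 - weight_a) * b
                 for a, b in zip(value_a, value_b))
```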
  • Patent number: 10818029
    Abstract: The present disclosure relates to systems and processes for capturing an unstructured light field in a plurality of images. In particular embodiments, a plurality of keypoints are identified on a first keyframe in a plurality of captured images. A first convex hull is computed from all keypoints in the first keyframe and merged with previous convex hulls corresponding to previous keyframes to form a convex hull union. Each keypoint is tracked from the first keyframe to a second image. The second image is adjusted to compensate for camera rotation during capture, and a second convex hull is computed from all keypoints in the second image. If the overlapping region between the second convex hull and the convex hull union is equal to, or less than, a predetermined size, the second image is designated as a new keyframe, and the convex hull union is augmented with the second convex hull.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: October 27, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
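The keyframe test above can be sketched end to end: compute the convex hull of the keypoints (Andrew's monotone chain), estimate the overlap between the new hull and the existing hull union, and designate a keyframe when the overlap is small. The grid-sampling overlap estimate and the 0.5 overlap threshold are illustrative simplifications of the patent's "predetermined size" criterion.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside(hull, p):
    """Point-in-convex-polygon test for a CCW-ordered hull."""
    n = len(hull)
    for i in range(n):
        a, b = hull[i], hull[(i + 1) % n]
        if (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) < 0:
            return False
    return True

def overlap_fraction(new_hull, union_hulls, samples=40):
    """Coarse overlap estimate: fraction of grid samples inside the new
    hull that also fall inside any hull of the existing union."""
    xs = [p[0] for p in new_hull]
    ys = [p[1] for p in new_hull]
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    inside_new = covered = 0
    for i in range(samples):
        for j in range(samples):
            p = (xmin + (xmax - xmin) * (i + 0.5) / samples,
                 ymin + (ymax - ymin) * (j + 0.5) / samples)
            if inside(new_hull, p):
                inside_new += 1
                if any(inside(h, p) for h in union_hulls):
                    covered += 1
    return covered / inside_new if inside_new else 1.0

def is_new_keyframe(new_hull, union_hulls, max_overlap=0.5):
    """Designate a keyframe when the new hull adds enough unseen area."""
    return overlap_fraction(new_hull, union_hulls) <= max_overlap
```

An exact polygon-intersection area would replace the sampling estimate in a production implementation.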
  • Publication number: 20200334902
    Abstract: In one embodiment, a method includes accessing a digital map of a real-world region, where the digital map includes one or more three-dimensional meshes corresponding to one or more three-dimensional objects within the real-world region, receiving, from a second computing device, an object query including an identifier for an anchor in the digital map, positional information relative to the anchor, and information associated with a directional vector, determining a position within the digital map based on the identifier for the anchor and the positional information relative to the anchor, determining a three-dimensional mesh in the digital map that intersects with a projection of the directional vector from the determined position within the digital map, identifying metadata associated with the three-dimensional mesh, and sending the metadata to the second computing device.
    Type: Application
    Filed: April 19, 2019
    Publication date: October 22, 2020
    Inventors: Mingfei Yan, Yajie Yan, Richard Andrew Newcombe, Yuheng Ren
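The core of this query, projecting a directional vector from a position in the digital map and finding the mesh it intersects, is a raycast. A minimal sketch using Möller-Trumbore ray/triangle intersection over triangle-list meshes (an assumed mesh layout, not the patent's):

```python
def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Möller-Trumbore ray/triangle test; returns hit distance t or None."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    e1 = (bx - ax, by - ay, bz - az)
    e2 = (cx - ax, cy - ay, cz - az)
    def cross(u, v):
        return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
    def dot(u, v):
        return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to triangle plane
        return None
    s = (origin[0]-ax, origin[1]-ay, origin[2]-az)
    u = dot(s, p) / det
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(direction, q) / det
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) / det
    return t if t > eps else None

def query_map(origin, direction, meshes):
    """Return the metadata of the nearest mesh hit by the directional
    vector projected from the determined position."""
    best_t, best_meta = float("inf"), None
    for mesh in meshes:
        for tri in mesh["triangles"]:
            t = ray_hits_triangle(origin, direction, tri)
            if t is not None and t < best_t:
                best_t, best_meta = t, mesh["metadata"]
    return best_meta
```

A real map service would use an acceleration structure (BVH or octree) rather than testing every triangle.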
  • Patent number: 10726560
    Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a 3D projection of an object in a virtual reality or augmented reality environment comprises obtaining a sequence of images along a camera translation using a single lens camera. Each image contains a portion of overlapping subject matter, including the object. The object is segmented from the sequence of images using a trained segmenting neural network to form a sequence of segmented object images, to which an art-style transfer is applied using a trained transfer neural network. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are mapped to a rotation range for display in the virtual reality or augmented reality environment.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: July 28, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
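The final step of this pipeline, mapping segmented image indices to a rotation range for display, can be sketched as a pair of linear mappings. The 0-360 degree default range is an illustrative assumption.

```python
def index_to_rotation(index, num_images, rotation_range=(0.0, 360.0)):
    """Map a segmented image index to the viewing angle at which it is
    shown, so rotating through the range pages through the sequence."""
    lo, hi = rotation_range
    return lo + (hi - lo) * index / max(num_images - 1, 1)

def rotation_to_index(angle, num_images, rotation_range=(0.0, 360.0)):
    """Inverse mapping: pick the image index nearest a viewing angle."""
    lo, hi = rotation_range
    frac = (angle - lo) / (hi - lo)
    return max(0, min(num_images - 1, round(frac * (num_images - 1))))
```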
  • Patent number: 10719939
    Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a three-dimensional (3D) projection of an object is provided. A sequence of images along a camera translation may be obtained using a single lens camera. Each image contains at least a portion of overlapping subject matter, which includes the object. The object is semantically segmented from the sequence of images using a trained neural network to form a sequence of segmented object images, which are then refined using fine-grained segmentation. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are then mapped to a rotation range for display in the virtual reality or augmented reality environment.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: July 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
  • Publication number: 20200218898
    Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment and determining, from the real-time data, current object data for the environment. The current object data may include both state data and relationship data for objects in the environment. The method may also include determining object deltas between the current object data and prior object data from an event graph. The prior object data may include prior state data and prior relationship data for the objects. The method may include detecting an unknown state for one of the objects, inferring a state for the object based on the event graph, and updating the event graph based on the object deltas and the inferred state. The method may further include sending updated event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: March 18, 2020
    Publication date: July 9, 2020
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
  • Publication number: 20200175764
    Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of a client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 4, 2020
    Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
  • Patent number: 10650574
    Abstract: Various embodiments of the present disclosure relate generally to systems and processes for generating stereo pairs for virtual reality. According to particular embodiments, a method comprises obtaining a monocular sequence of images using a single-lens camera during a capture mode. The sequence of images is captured along a camera translation. Each image in the sequence of images contains at least a portion of overlapping subject matter, which includes an object. The method further comprises generating stereo pairs, for one or more points along the camera translation, for virtual reality using the sequence of images. Generating the stereo pairs may include: selecting frames for each stereo pair based on a spatial baseline; interpolating virtual images in between captured images in the sequence of images; correcting selected frames by rotating the images; and rendering the selected frames by assigning each image in the selected frames to left and right eyes.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: May 12, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
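The first sub-step, selecting frames for each stereo pair based on a spatial baseline, amounts to finding the capture position closest to a target eye separation from a given frame. The target baseline value would come from the desired interpupillary distance; everything else here is an illustrative simplification.

```python
import math

def select_stereo_pair(camera_positions, frame_index, target_baseline):
    """Pair the given frame with the frame whose camera position is
    closest to `target_baseline` away, forming a left/right-eye pair."""
    base = camera_positions[frame_index]
    best, best_err = None, float("inf")
    for i, pos in enumerate(camera_positions):
        if i == frame_index:
            continue
        err = abs(math.dist(base, pos) - target_baseline)
        if err < best_err:
            best, best_err = i, err
    return (frame_index, best)
```

When no captured frame sits near the target baseline, the method's interpolation step supplies a virtual image instead.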
  • Patent number: 10635905
    Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment. The method may also include determining, from the real-time data, current mapping and object data. The current mapping data may include coordinate data for the environment and the current object data may include both state data and relationship data for objects in the environment. The method may also include determining mapping deltas between the current mapping data and baseline map data and determining object deltas between the current object data and an event graph. The event graph may include prior state data and prior relationship data for objects. The method may also include updating the baseline map data and the event graph based on the deltas and sending updated baseline map data and event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: April 28, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
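The mapping-delta half of this method can be sketched as a diff between current coordinate data and the baseline map, followed by folding the deltas back into the baseline before broadcasting. Flat point-id-to-coordinate dictionaries and the tolerance value are illustrative assumptions.

```python
def mapping_deltas(current_map, baseline_map, tol=0.01):
    """Coordinates that moved (beyond `tol`) or newly appeared relative
    to the baseline map."""
    deltas = {}
    for key, coord in current_map.items():
        base = baseline_map.get(key)
        if base is None or any(abs(c - b) > tol for c, b in zip(coord, base)):
            deltas[key] = coord
    return deltas

def apply_deltas(baseline_map, deltas):
    """Fold deltas into the baseline map before sending updates to devices."""
    updated = dict(baseline_map)
    updated.update(deltas)
    return updated
```

Sending only deltas, rather than the full map, keeps the per-device update payload proportional to what actually changed.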
  • Publication number: 20200089953
    Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment. The method may also include determining, from the real-time data, current mapping and object data. The current mapping data may include coordinate data for the environment and the current object data may include both state data and relationship data for objects in the environment. The method may also include determining mapping deltas between the current mapping data and baseline map data and determining object deltas between the current object data and an event graph. The event graph may include prior state data and prior relationship data for objects. The method may also include updating the baseline map data and the event graph based on the deltas and sending updated baseline map data and event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: September 14, 2018
    Publication date: March 19, 2020
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
  • Patent number: 10586378
    Abstract: The present disclosure describes systems and processes for image sequence stabilization. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a camera rotation value and a focal length value are calculated from two randomly sampled keypoints on the first image and two corresponding keypoints on the second image. An optimal camera rotation and focal length pair corresponding to an optimal transformation for producing an image warp for image sequence stabilization is determined. The image warp for image sequence stabilization is constructed using the optimal camera rotation and focal length pair.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: March 10, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
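The two-correspondence estimation above can be illustrated with a simpler transformation family: a 2D rotation+scale+translation warp, which two point correspondences determine exactly, scored by inlier count over all tracked keypoints. The patent's actual model is camera rotation plus focal length; this similarity warp is a stand-in chosen because its two-point closed form fits in a few lines of complex arithmetic.

```python
def warp_from_two_points(p, q):
    """Estimate a rotation+scale+translation warp z -> m*z + t from two
    keypoint correspondences p -> q, using complex arithmetic."""
    p1, p2 = complex(*p[0]), complex(*p[1])
    q1, q2 = complex(*q[0]), complex(*q[1])
    m = (q2 - q1) / (p2 - p1)   # encodes rotation angle and scale jointly
    t = q1 - m * p1             # translation fixing the first point
    return m, t

def inlier_count(m, t, src, dst, tol=1.0):
    """Score a candidate warp by how many tracked keypoints it explains
    to within `tol` pixels (the RANSAC-style selection criterion)."""
    count = 0
    for (sx, sy), (dx, dy) in zip(src, dst):
        w = m * complex(sx, sy) + t
        if abs(w - complex(dx, dy)) < tol:
            count += 1
    return count
```

Repeating this over many random two-point samples and keeping the highest-scoring warp yields the "optimal transformation" used to build the stabilizing image warp.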
  • Publication number: 20200027263
    Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined, which are the closest three images in the plurality of images to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image location, first and second best pixel values are calculated using the first and second set of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel.
    Type: Application
    Filed: September 27, 2019
    Publication date: January 23, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
  • Patent number: 10540773
    Abstract: Various embodiments of the present invention relate generally to systems and processes for interpolating images of an object. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a plurality of transformations are computed using two randomly sampled keypoint correspondences, each of which includes a keypoint on the first image and a corresponding keypoint on the second image. An optimal subset of transformations is determined from the plurality of transformations based on predetermined criteria, and the transformation parameters corresponding to the optimal subset of transformations are calculated and stored for on-the-fly interpolation.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: January 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Yuheng Ren
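The generate-candidates-then-keep-an-optimal-subset structure of this method can be sketched with a deliberately simple transformation family (pure translations, one candidate per correspondence), ranked by how many correspondences each explains. The patent's transformations and selection criteria are richer; only the subset-selection shape is shown.

```python
import math

def candidate_translations(src, dst):
    """One candidate translation per keypoint correspondence (a simple
    stand-in for the patent's sampled transformations)."""
    return [(dx - sx, dy - sy) for (sx, sy), (dx, dy) in zip(src, dst)]

def optimal_subset(candidates, src, dst, keep=2, tol=1.0):
    """Rank candidates by how many correspondences they explain within
    `tol`, keeping the best `keep` whose parameters are then stored for
    on-the-fly interpolation."""
    def score(t):
        tx, ty = t
        return sum(1 for (sx, sy), (dx, dy) in zip(src, dst)
                   if math.hypot(sx + tx - dx, sy + ty - dy) < tol)
    return sorted(candidates, key=score, reverse=True)[:keep]
```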