Patents by Inventor Christopher A. Peri

Christopher A. Peri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240129448
    Abstract: A method includes obtaining a 2D image captured using an imaging sensor. The 2D image is associated with an imaging sensor pose. The method also includes providing the 2D image, the imaging sensor pose, and one or more additional imaging sensor poses to at least one machine learning model that is trained to generate a texture map and a depth map for the imaging sensor pose and each additional imaging sensor pose. The method further includes generating a stereo image pair based on the texture maps and the depth maps. The stereo image pair represents a 2.5D view of the 2D image. The 2.5D view includes a pair of images each including multiple collections of pixels and, for each collection of pixels, a common depth associated with the pixels in the collection of pixels. In addition, the method includes initiating display of the stereo image pair on an XR device.
    Type: Application
    Filed: July 17, 2023
    Publication date: April 18, 2024
    Inventors: Yingen Xiong, Christopher A. Peri
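The core of the abstract above — forming a stereo pair from a texture map and a depth map — can be sketched with a simple disparity shift, where each pixel moves horizontally in proportion to its inverse depth. This is a minimal illustration, not the patented method; the `baseline` parameter and the nearest-pixel splatting are assumptions for the sketch.

```python
import numpy as np

def make_stereo_pair(texture, depth, baseline=4.0):
    """Shift each pixel horizontally by a disparity inversely
    proportional to its depth to form a left/right image pair.
    `baseline` (disparity in pixels at unit depth) is illustrative."""
    h, w = depth.shape
    left = np.zeros_like(texture)
    right = np.zeros_like(texture)
    for y in range(h):
        for x in range(w):
            d = int(round(baseline / max(depth[y, x], 1e-3)))
            xl, xr = min(x + d, w - 1), max(x - d, 0)
            left[y, xl] = texture[y, x]   # left eye sees pixel shifted right
            right[y, xr] = texture[y, x]  # right eye sees pixel shifted left
    return left, right

# A distant, flat scene produces near-zero disparity: the two views match.
tex = np.arange(16.0).reshape(4, 4)
l, r = make_stereo_pair(tex, np.full((4, 4), 100.0))
```

In the 2.5D representation described above, pixels sharing a "collection" would all use one common depth, so the per-pixel loop would collapse to one shift per collection.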
  • Patent number: 11960345
    Abstract: A method includes obtaining a request for one of multiple operational modes from an application installed on an extended reality (XR) device or an XR runtime/renderer of the XR device. The method also includes selecting a first mode of the operational modes, based at least partly on a real-time system performance of the XR device. The method also includes publishing the selected first mode to the XR runtime/renderer or the application. The method also includes performing a task related to at least one of image rendering or computer vision calculations for the application, using an algorithm associated with the selected first mode.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Moiz Kaizar Sonasath, Lu Luo
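Selecting an operational mode from real-time system performance, as described above, reduces to a policy function over runtime metrics. The thresholds and mode names below are illustrative assumptions; the patent does not specify them.

```python
def select_mode(fps, thermal_headroom):
    """Pick an operational mode from real-time performance stats.
    Thresholds and mode names are hypothetical, for illustration only."""
    if fps >= 60 and thermal_headroom > 0.5:
        return "high_quality"   # full-resolution rendering path
    if fps >= 30:
        return "balanced"       # reduced-cost algorithms
    return "power_saving"       # minimal rendering / CV workload

# The selected mode would then be published to the XR runtime/renderer,
# which switches to the algorithm associated with that mode.
mode = select_mode(72, 0.8)
```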
  • Patent number: 11961184
    Abstract: A system and method are provided for 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions. The method also includes generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. The method further includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
  • Patent number: 11900845
    Abstract: A system and method for display distortion calibration are configured to capture distortion with image patterns and calibrate distortion with ray tracing for an optical pipeline with lenses. The system includes an image sensor and a processor to perform the method for display distortion calibration. The method includes generating an image pattern to encode display image pixels by encoding display distortion associated with a plurality of image patterns. The method also includes determining a distortion of the image pattern resulting from a lens on a head-mounted display (HMD) and decoding the distorted image patterns to obtain distortion of pixels on a display. A lookup table is created of angular distortion of all the pixels on the display. The method further includes providing a compensation factor for the distortion by creating distortion correction based on the lookup table of angular distortion.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: February 13, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
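The lookup-table approach above — store, for every display pixel, where the lens distortion says it should sample from, then resample through that table — can be sketched as follows. The simple radial model and its `k` coefficient are assumptions for illustration; the patent derives its table from measured image patterns and ray tracing, not from an analytic model.

```python
import numpy as np

def build_lut(h, w, k=0.1):
    """Hypothetical radial-distortion model: for each display pixel,
    compute the source coordinate it should sample from."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dy, dx = ys - cy, xs - cx
    r2 = (dx ** 2 + dy ** 2) / max(cx, cy) ** 2   # normalized radius^2
    scale = 1.0 + k * r2                          # simple radial term
    src_y = np.clip(cy + dy * scale, 0, h - 1)
    src_x = np.clip(cx + dx * scale, 0, w - 1)
    return src_y, src_x

def correct(image, lut):
    """Nearest-neighbour resample through the lookup table,
    pre-compensating the display image for lens distortion."""
    src_y, src_x = lut
    return image[src_y.round().astype(int), src_x.round().astype(int)]

img = np.arange(25).reshape(5, 5)
out = correct(img, build_lut(5, 5))
```

The optical center is a fixed point of any radial model, so the center pixel is unchanged while edge pixels shift outward.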
  • Publication number: 20240046583
    Abstract: A method includes obtaining images of a scene and corresponding position data of a device that captures the images. The method also includes determining position data and direction data associated with camera rays passing through keyframes of the images. The method further includes using a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP to create sparse feature vectors. The method also includes storing the sparse feature vectors in at least one data structure. The method further includes receiving a request to render the scene on an augmented reality (AR) device associated with a viewing direction. In addition, the method includes rendering the scene associated with the viewing direction using the sparse feature vectors in the at least one data structure.
    Type: Application
    Filed: July 17, 2023
    Publication date: February 8, 2024
    Inventors: Yingen Xiong, Christopher A. Peri
  • Publication number: 20230410414
    Abstract: A method of video transformation for a video see-through (VST) augmented reality (AR) device includes obtaining video frames from multiple cameras associated with the VST AR device, where each video frame is associated with position data. The method also includes generating camera viewpoint depth maps associated with the video frames based on the video frames and the position data. The method further includes performing depth re-projection to transform the video frames from camera viewpoints to rendering viewpoints using the camera viewpoint depth maps. The method also includes performing hole filling of one or more holes created in one or more occlusion areas of at least one of the transformed video frames during the depth re-projection to generate at least one hole-filled video frame. In addition, the method includes displaying the transformed video frames including the at least one hole-filled video frame on multiple displays associated with the VST AR device.
    Type: Application
    Filed: November 4, 2022
    Publication date: December 21, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
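Depth re-projection exposes occluded regions as holes; the hole-filling step above can be illustrated by iteratively filling each hole pixel from the mean of its already-valid 4-neighbours. This is a minimal stand-in sketch, not the patented algorithm.

```python
import numpy as np

def fill_holes(frame, hole_mask):
    """Fill occlusion holes (mask == True) with the mean of their
    valid 4-neighbours, iterating inward until every hole is filled."""
    frame = frame.astype(float).copy()
    mask = hole_mask.copy()
    h, w = frame.shape
    while mask.any():
        for y, x in zip(*np.where(mask)):
            vals = [frame[ny, nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]]
            if vals:                       # fill once a valid neighbour exists
                frame[y, x] = sum(vals) / len(vals)
                mask[y, x] = False
    return frame

# A single hole surrounded by value 1.0 is filled with 1.0.
f = np.ones((3, 3)); f[1, 1] = 0.0
m = np.zeros((3, 3), bool); m[1, 1] = True
filled = fill_holes(f, m)
```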
  • Patent number: 11741671
    Abstract: Generating a 3D scene reconstruction using depth fusion can include creating a high-resolution sparse depth map by mapping sensor depths from a low-resolution depth map to points corresponding to pixels of a high-resolution color image of a scene. The high-resolution sparse depth map can have the same resolution as the high-resolution color image. A fused sparse depth map can be produced by combining the high-resolution sparse depth map with sparse depths reconstructed from the high-resolution color image. The high-resolution dense depth map can be generated based on fused sparse depths of the fused sparse depth map.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: August 29, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yingen Xiong, Christopher A. Peri
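The first two steps above — lifting low-resolution sensor depths onto the high-resolution pixel grid, then fusing them with depths reconstructed from the color image — can be sketched as below. The NaN-for-empty convention and the "prefer sensor depth" fusion rule are assumptions for this sketch.

```python
import numpy as np

def lift_sparse_depth(low_depth, scale):
    """Map each low-resolution depth sample to its corresponding pixel
    in a high-resolution grid; all other pixels stay empty (NaN)."""
    h, w = low_depth.shape
    high = np.full((h * scale, w * scale), np.nan)
    high[::scale, ::scale] = low_depth
    return high

def fuse(sensor_sparse, recon_sparse):
    """Combine sensor depths with depths reconstructed from the color
    image, preferring sensor values where both exist."""
    return np.where(np.isnan(sensor_sparse), recon_sparse, sensor_sparse)

low = np.array([[1.0, 2.0], [3.0, 4.0]])
high = lift_sparse_depth(low, 2)            # 4x4 grid, mostly NaN
fused = fuse(high, np.zeros((4, 4)))        # reconstructed depths fill gaps
```

A densification step (not shown) would then interpolate the fused sparse depths into the final high-resolution dense depth map.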
  • Patent number: 11741676
    Abstract: A method includes obtaining scene data, wherein the scene data includes image data of a scene and depth data of the scene, and the depth data includes depth measurement values of points of a point cloud. The method further includes defining a first detection area, wherein the first detection area includes a spatially defined subset of the scene data, defining a plane model based on points of the point cloud within the first detection area, and defining a plane based on the plane model. The method includes determining at least one value of a usable size of the plane based on points of the point cloud, comparing at least one value of a characteristic size of a digital object to the at least one value of the usable size of the plane, and generating a display including the digital object positioned upon the plane based on the plane model.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: August 29, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Yingen Xiong, Haihui Guan
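The plane-model step above can be illustrated with a least-squares fit of z = ax + by + c to the point-cloud samples inside the detection area. This is a sketch under simplifying assumptions; production systems typically use a robust estimator such as RANSAC, and the patent does not specify the fitting method.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through Nx3 points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Points sampled exactly from z = 2x + 3y + 1 recover those coefficients.
pts = np.array([[0, 0, 1], [1, 0, 3], [0, 1, 4], [1, 1, 6], [2, 1, 8.0]])
a, b, c = fit_plane(pts)
```

The usable size of the fitted plane would then be measured from the inlier points' extent and compared against the digital object's characteristic size before placement.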
  • Publication number: 20230245373
    Abstract: A method includes receiving, from a camera, one or more frames of image data of a scene comprising a background and one or more three-dimensional objects, wherein each frame comprises a raster of pixels of image data; detecting layer information of the scene, wherein the layer information is associated with a depth-based distribution of the pixels in the one or more frames; and determining a multi-layer model for the scene, the multi-layer model comprising a plurality of discrete layers comprising first and second discrete layers, wherein each discrete layer is associated with a unique depth value relative to the camera. The method further includes mapping the pixels to the layers of the plurality of discrete layers; rendering the pixels as a first image of the scene as viewed from a first perspective; and rendering the pixels as a second image of the scene as viewed from a second perspective.
    Type: Application
    Filed: June 8, 2022
    Publication date: August 3, 2023
    Inventors: Yingen Xiong, Christopher Peri
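Mapping pixels to discrete depth layers, as in the multi-layer model above, amounts to quantizing the depth map into a small number of bins, each with one representative depth. The uniform binning and the layer count below are illustrative assumptions; the abstract only requires that each layer carry a unique depth value relative to the camera.

```python
import numpy as np

def assign_layers(depth, n_layers):
    """Quantize a depth map into n_layers discrete layers; return a
    per-pixel layer label and one representative depth per layer."""
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    labels = np.clip(np.digitize(depth, edges) - 1, 0, n_layers - 1)
    layer_depths = [(edges[i] + edges[i + 1]) / 2 for i in range(n_layers)]
    return labels, layer_depths

depth = np.array([[0.0, 1.0], [9.0, 10.0]])
labels, layer_depths = assign_layers(depth, 2)
```

Rendering the two perspectives then shifts each whole layer by its single representative depth, instead of reprojecting every pixel independently.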
  • Publication number: 20230245322
    Abstract: In one embodiment, a method includes identifying, in each image of a stereoscopic pair of images of a scene at a particular time, every pixel as either a static pixel corresponding to a portion of a scene that does not have local motion at that time or a dynamic pixel corresponding to a portion of a scene that has local motion at that time. For each static pixel, the method includes comparing each of a plurality of depth calculations for the pixel, and when the depth calculations differ by at least a threshold amount, then re-labeling that pixel as a dynamic pixel. For each dynamic pixel, the method includes comparing a geometric 3D calculation for the pixel with a temporal 3D calculation for that pixel, and when the geometric 3D calculation and the temporal 3D calculation are within a threshold amount, then re-labeling the pixel as a static pixel.
    Type: Application
    Filed: July 28, 2022
    Publication date: August 3, 2023
    Inventors: Yingen Xiong, Christopher Peri
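The first re-labeling rule above — a static pixel becomes dynamic when two independent depth calculations for it disagree by at least a threshold — can be sketched directly. The threshold value is an assumption; the patent leaves it unspecified.

```python
import numpy as np

def relabel_static(depth_a, depth_b, static_mask, threshold=0.5):
    """Keep a pixel labeled static only if two independent depth
    calculations agree to within `threshold` (value illustrative);
    otherwise re-label it as dynamic."""
    disagree = np.abs(depth_a - depth_b) >= threshold
    return static_mask & ~disagree

depth_a = np.array([1.0, 2.0])
depth_b = np.array([1.1, 3.0])          # second pixel's depths disagree
still_static = relabel_static(depth_a, depth_b, np.array([True, True]))
```

The symmetric rule for dynamic pixels compares a geometric 3D calculation against a temporal one and re-labels the pixel static when they agree within a threshold.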
  • Publication number: 20230245396
    Abstract: A method includes receiving depth data of a real-world scene from a depth sensor, receiving image data of the scene from an image sensor, receiving movement data of the depth and image sensors from an IMU, and determining an initial 6DOF pose of an apparatus based on the depth data, image data, and/or movement data. The method also includes passing the 6DOF pose to a back end to obtain an optimized pose and generating, based on the optimized pose, image data, and depth data, a three-dimensional reconstruction of the scene. The reconstruction includes a dense depth map, a dense surface mesh, and/or one or more semantically segmented objects. The method further includes passing the reconstruction to a front end and rendering, at the front end, an XR frame. The XR frame includes a three-dimensional XR object projected on one or more surfaces of the scene.
    Type: Application
    Filed: July 6, 2022
    Publication date: August 3, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
  • Patent number: 11715182
    Abstract: In one embodiment, a method includes accessing a plurality of image frames captured by one or more cameras, classifying one or more first objects detected in one or more first image frames of the plurality of image frames as undesirable, applying a pixel filtering to the one or more first image frames to replace one or more first pixel sets associated with the one or more first objects with pixels from one or more second image frames of the plurality of image frames to generate a final image frame, providing the final image frame for display.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: August 1, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Euisuk Chung, Yingen Xiong, Lu Luo
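The pixel-filtering step above — replacing the pixels of an undesirable object with co-located pixels from another frame — is a masked select. This sketch assumes a precomputed boolean object mask and a reference frame in which the object is absent.

```python
import numpy as np

def remove_objects(frame, object_mask, reference_frame):
    """Replace pixels flagged as belonging to an undesirable object
    (mask == True) with the co-located pixels from a reference frame."""
    return np.where(object_mask, reference_frame, frame)

frame = np.array([[1, 2], [3, 4]])
mask = np.array([[True, False], [False, False]])   # object occupies one pixel
clean = remove_objects(frame, mask, np.zeros((2, 2), int))
```

In practice the classification step would produce the mask (e.g. from an object detector), and the reference pixels would come from temporally adjacent frames after alignment.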
  • Patent number: 11704877
    Abstract: A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: July 18, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Yingen Xiong, Lu Luo
  • Publication number: 20230215108
    Abstract: A system and method for adaptive volume-based scene reconstruction for XR platform applications are provided. The system includes an image sensor and a processor to perform the method for 3D scene reconstruction. The method includes determining a processor computation load. The method also includes, based on the determined computation load, adjusting one or more parameters for the 3D scene reconstruction to compensate for the determined load. The method further includes rendering a reconstructed 3D scene.
    Type: Application
    Filed: April 6, 2022
    Publication date: July 6, 2023
    Inventors: Christopher A. Peri, Divi Schmidt, Yingen Xiong, Lu Luo
  • Publication number: 20230215075
    Abstract: A method for deferred rendering on an extended reality (XR) device includes establishing a transport session for content on the XR device with a server. The method also includes performing a loop configuration for the content based on the transport session between the XR device and the server. The method further includes providing pose information based on parameters of the loop configuration to the server. The method also includes receiving pre-rendered content based on the pose information from the server. In addition, the method includes processing and displaying the pre-rendered content on the XR device.
    Type: Application
    Filed: October 20, 2022
    Publication date: July 6, 2023
    Inventors: Christopher A. Peri, Eric Ho Ching Yip
  • Patent number: 11688073
    Abstract: A method includes accessing image data and depth data corresponding to image frames to be displayed on an extended reality (XR) display device, and determining sets of feature points corresponding to the image frames based on a multi-layer sampling of the image data and the depth data. The method further includes generating a set of sparse feature points based on an integration of the sets of feature points. The set of sparse feature points are determined based on relative changes in depth data with respect to the sets of feature points. The method further includes generating a set of sparse depth points based on the set of sparse feature points and the depth data and sending the set of sparse depth points to the XR display device for reconstruction of a dense depth map corresponding to the image frames utilizing the set of sparse depth points.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: June 27, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Yingen Xiong
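Sparsifying depth around places where it changes sharply, as the multi-layer sampling above does, can be illustrated by keeping only depth samples with a large local gradient. The gradient criterion and threshold are assumptions for this sketch, not the patented sampling scheme.

```python
import numpy as np

def sparse_depth_points(depth, grad_threshold=1.0):
    """Keep depth samples only where depth changes sharply between
    neighbours (threshold illustrative), yielding a sparse point set
    that can guide dense depth-map reconstruction on the device."""
    gy, gx = np.gradient(depth)
    keep = np.hypot(gy, gx) >= grad_threshold
    ys, xs = np.where(keep)
    return list(zip(ys.tolist(), xs.tolist(), depth[keep].tolist()))

# A depth step between columns produces points only along the step.
depth = np.zeros((3, 3)); depth[:, 2] = 5.0
pts = sparse_depth_points(depth)
```

Sending only these sparse points to the XR display device, as described above, trades bandwidth for an on-device densification step.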
  • Patent number: 11670063
    Abstract: An electronic device that reprojects two-dimensional (2D) images to three-dimensional (3D) images includes a memory configured to store instructions, and a processor configured to execute the instructions to: propagate an intensity for at least one pixel of an image based on a depth guide of neighboring pixels of the at least one pixel, wherein the at least one pixel is considered a hole during 2D to 3D image reprojection; propagate a color for the at least one pixel based on an intensity guide of the neighboring pixels of the at least one pixel; and compute at least one weight for the at least one pixel based on the intensity and color propagation.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: June 6, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yingen Xiong, Christopher A. Peri
  • Publication number: 20230137199
    Abstract: A system and method for display distortion calibration are configured to capture distortion with image patterns and calibrate distortion with ray tracing for an optical pipeline with lenses. The system includes an image sensor and a processor to perform the method for display distortion calibration. The method includes generating an image pattern to encode display image pixels by encoding display distortion associated with a plurality of image patterns. The method also includes determining a distortion of the image pattern resulting from a lens on a head-mounted display (HMD) and decoding the distorted image patterns to obtain distortion of pixels on a display. A lookup table is created of angular distortion of all the pixels on the display. The method further includes providing a compensation factor for the distortion by creating distortion correction based on the lookup table of angular distortion.
    Type: Application
    Filed: March 16, 2022
    Publication date: May 4, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
  • Publication number: 20230140170
    Abstract: A method includes obtaining first and second image data of a real-world scene, performing feature extraction to obtain first and second feature maps, and performing pose tracking based on at least one of the first image data, second image data, and pose data to obtain a 6DOF pose of an apparatus. The method also includes generating, based on the 6DOF pose, first feature map, and second feature map, a disparity map between the first and second image data and generating an initial depth map based on the disparity map. The method further includes generating a dense depth map based on the initial depth map and a camera model and generating, based on the dense depth map, a three-dimensional reconstruction of at least part of the scene. In addition, the method includes rendering an AR or XR display that includes one or more virtual objects positioned to contact one or more surfaces of the reconstruction.
    Type: Application
    Filed: July 6, 2022
    Publication date: May 4, 2023
    Inventors: Yingen Xiong, Christopher A. Peri