Patents by Inventor Christopher A. Peri
Christopher A. Peri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12154219
Abstract: A method of video transformation for a video see-through (VST) augmented reality (AR) device includes obtaining video frames from multiple cameras associated with the VST AR device, where each video frame is associated with position data. The method also includes generating camera viewpoint depth maps associated with the video frames based on the video frames and the position data. The method further includes performing depth re-projection to transform the video frames from camera viewpoints to rendering viewpoints using the camera viewpoint depth maps. The method also includes performing hole filling of one or more holes created in one or more occlusion areas of at least one of the transformed video frames during the depth re-projection to generate at least one hole-filled video frame. In addition, the method includes displaying the transformed video frames including the at least one hole-filled video frame on multiple displays associated with the VST AR device.
Type: Grant
Filed: November 4, 2022
Date of Patent: November 26, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yingen Xiong, Christopher A. Peri
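As an illustration of the depth re-projection step, here is a minimal numpy sketch that back-projects pixels through pinhole intrinsics, applies a rigid camera-to-render transform, and re-projects. The names `K_cam`, `K_rend`, and `T_cam_to_rend` are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def reproject_depth(depth, K_cam, K_rend, T_cam_to_rend):
    """Warp camera-view pixels to the rendering viewpoint via their depths."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)  # homogeneous pixels
    rays = np.linalg.inv(K_cam) @ pix                       # back-projected rays
    pts = np.vstack([rays * depth.reshape(1, -1), np.ones((1, h * w))])
    pts_r = (T_cam_to_rend @ pts)[:3]                       # points in render frame
    proj = K_rend @ pts_r
    return (proj[:2] / proj[2]).reshape(2, h, w)            # target pixel coords
```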
-
Publication number: 20240346779
Abstract: A method includes determining that an inter-pupillary distance (IPD) between display lenses of a video see-through (VST) extended reality (XR) device has been adjusted with respect to a default IPD. The method also includes obtaining an image captured using a see-through camera of the VST XR device. The see-through camera is configured to capture images of a three-dimensional (3D) scene. The method further includes transforming the image to match a viewpoint of a corresponding one of the display lenses according to a change in IPD with respect to the default IPD in order to generate a transformed image. The method also includes correcting distortions in the transformed image based on one or more lens distortion coefficients corresponding to the change in IPD in order to generate a corrected image. In addition, the method includes initiating presentation of the corrected image on a display panel of the VST XR device.
Type: Application
Filed: April 9, 2024
Publication date: October 17, 2024
Inventors: Yingen Xiong, Christopher A. Peri
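The viewpoint transform can be illustrated with the usual stereo relation: an eye displaced by half the IPD change sees a pixel at depth z shift horizontally by roughly fx·Δ/(2z). A hedged sketch with illustrative names:

```python
import numpy as np

def ipd_shift_px(fx, depth_m, d_ipd_m, eye_sign):
    """Horizontal pixel shift for one eye after an IPD change of d_ipd_m.

    eye_sign is +1 for one eye and -1 for the other; each eye moves by half
    the IPD change, and the shift falls off inversely with scene depth.
    """
    return eye_sign * fx * (d_ipd_m / 2.0) / np.maximum(depth_m, 1e-6)
```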
-
Patent number: 12062145
Abstract: A method includes receiving depth data of a real-world scene from a depth sensor, receiving image data of the scene from an image sensor, receiving movement data of the depth and image sensors from an IMU, and determining an initial 6DOF pose of an apparatus based on the depth data, image data, and/or movement data. The method also includes passing the 6DOF pose to a back end to obtain an optimized pose and generating, based on the optimized pose, image data, and depth data, a three-dimensional reconstruction of the scene. The reconstruction includes a dense depth map, a dense surface mesh, and/or one or more semantically segmented objects. The method further includes passing the reconstruction to a front end and rendering, at the front end, an XR frame. The XR frame includes a three-dimensional XR object projected on one or more surfaces of the scene.
Type: Grant
Filed: July 6, 2022
Date of Patent: August 13, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yingen Xiong, Christopher A. Peri
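A minimal sketch of the front-end/back-end split described above, assuming pinhole intrinsics `K`; the optimizer is a stand-in, not the actual back-end solver:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    R: np.ndarray   # 3x3 rotation, world <- sensor
    t: np.ndarray   # 3-vector translation

def backend_optimize(initial: Pose) -> Pose:
    return initial  # stand-in for the back end's pose optimization

def reconstruct_points(pose: Pose, depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift a depth map into world-space points (a stand-in for the dense map)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    pts_cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    return (pose.R @ pts_cam + pose.t[:, None]).T  # N x 3 world points
```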
-
Publication number: 20240233098
Abstract: A method includes obtaining an image captured by a see-through camera of a video see-through (VST) augmented reality (AR) device. The method also includes identifying a first distortion created by at least one see-through camera lens and identifying a second distortion created by at least one display lens of the VST AR device. The method further includes determining a combined distortion based on the first distortion and the second distortion. The method also includes pre-warping the image to offset the combined distortion such that the image is not distorted when the pre-warped image is viewed by a user through the at least one display lens of the VST AR device. In addition, the method includes presenting the pre-warped image to the user on at least one display of the VST AR device.
Type: Application
Filed: July 27, 2023
Publication date: July 11, 2024
Inventors: Yingen Xiong, Christopher A. Peri
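One way to realize such a pre-warp is to compose the two distortions and invert the composition. The polynomial radial models and fixed-point inversion below are illustrative assumptions, not the patent's calibration:

```python
import numpy as np

def radial(r, k):
    # Two-coefficient polynomial radial distortion model (illustrative).
    return r * (1.0 + k[0] * r**2 + k[1] * r**4)

def combined(r, k_cam, k_disp):
    # Camera-lens distortion followed by display-lens distortion.
    return radial(radial(r, k_cam), k_disp)

def prewarp(r_target, k_cam, k_disp, iters=10):
    # Fixed-point inversion: find the source radius whose combined
    # distortion lands on the desired (undistorted) radius.
    r = np.array(r_target, dtype=float)
    for _ in range(iters):
        r = r_target * r / np.maximum(combined(r, k_cam, k_disp), 1e-9)
    return r
```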
-
Publication number: 20240223742
Abstract: A method includes obtaining images of a scene captured using a stereo pair of imaging sensors of an XR device and depth data associated with the images, where the scene includes multiple objects. The method also includes obtaining volume-based 3D models of the objects. The method further includes, for one or more first objects, performing depth-based reprojection of the one or more 3D models of the one or more first objects to left and right virtual views based on one or more depths of the one or more first objects. The method also includes, for one or more second objects, performing constant-depth reprojection of the one or more 3D models of the one or more second objects to the left and right virtual views based on a specified depth. In addition, the method includes rendering the left and right virtual views for presentation by the XR device.
Type: Application
Filed: December 1, 2023
Publication date: July 4, 2024
Inventors: Yingen Xiong, Christopher A. Peri
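The per-object choice can be illustrated with the standard stereo disparity relation d = fx·b/z; the function and parameter names below are illustrative:

```python
def disparity_px(fx, baseline_m, depth_m):
    # Standard stereo relation: disparity = focal length * baseline / depth.
    return fx * baseline_m / depth_m

def object_disparity(fx, baseline_m, obj_depth_m, use_constant, const_depth_m):
    # First objects keep their own depths; second objects share one depth.
    z = const_depth_m if use_constant else obj_depth_m
    return disparity_px(fx, baseline_m, z)
```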
-
Publication number: 20240223739
Abstract: A method includes obtaining first and second image frames of a scene. The method also includes providing the first image frame as input to an object segmentation model, where the object segmentation model is trained to generate first object segmentation predictions for objects in the scene and a depth or disparity map based on the first image frame. The method further includes generating second object segmentation predictions for the objects in the scene based on the second image frame. The method also includes determining boundaries of the objects in the scene based on the first and second object segmentation predictions. In addition, the method includes generating a virtual view for presentation on a display of an extended reality (XR) device based on the boundaries of the objects in the scene.
Type: Application
Filed: July 27, 2023
Publication date: July 4, 2024
Inventors: Yingen Xiong, Christopher A. Peri
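A hedged sketch of combining the two predictions and extracting boundaries; the agreement rule and 4-neighbor boundary test are illustrative stand-ins, not the patent's method:

```python
import numpy as np

def fuse_masks(pred1, pred2):
    # Keep pixels where the two frames' predictions agree (illustrative rule).
    return np.logical_and(pred1, pred2)

def boundary(mask):
    # A pixel is on the boundary if it is set but some 4-neighbor is not.
    pad = np.pad(mask, 1, constant_values=False)
    core = (pad[1:-1, 1:-1] & pad[:-2, 1:-1] & pad[2:, 1:-1]
            & pad[1:-1, :-2] & pad[1:-1, 2:])
    return mask & ~core
```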
-
Publication number: 20240129448
Abstract: A method includes obtaining a 2D image captured using an imaging sensor. The 2D image is associated with an imaging sensor pose. The method also includes providing the 2D image, the imaging sensor pose, and one or more additional imaging sensor poses to at least one machine learning model that is trained to generate a texture map and a depth map for the imaging sensor pose and each additional imaging sensor pose. The method further includes generating a stereo image pair based on the texture maps and the depth maps. The stereo image pair represents a 2.5D view of the 2D image. The 2.5D view includes a pair of images each including multiple collections of pixels and, for each collection of pixels, a common depth associated with the pixels in the collection of pixels. In addition, the method includes initiating display of the stereo image pair on an XR device.
Type: Application
Filed: July 17, 2023
Publication date: April 18, 2024
Inventors: Yingen Xiong, Christopher A. Peri
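The "collections of pixels with a common depth" can be illustrated by bucketing a predicted depth map into a few constant-depth layers; the quantile bucketing below is an illustrative stand-in:

```python
import numpy as np

def quantize_depth(depth, n_layers=8):
    """Bucket a dense depth map into n_layers collections of pixels,
    each assigned one common depth (the bucket mean)."""
    edges = np.quantile(depth, np.linspace(0.0, 1.0, n_layers + 1))
    idx = np.clip(np.searchsorted(edges, depth, side="right") - 1, 0, n_layers - 1)
    common = np.array([depth[idx == i].mean() if np.any(idx == i) else edges[i]
                       for i in range(n_layers)])
    return idx, common
```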
-
Patent number: 11961184
Abstract: A system and method provide 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions. The method also includes generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. The method further includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
Type: Grant
Filed: March 16, 2022
Date of Patent: April 16, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yingen Xiong, Christopher A. Peri
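For the plane-region path, a plane can be fit to a region's points by least squares; a minimal SVD-based sketch (illustrative, not the patent's reconstruction):

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through an N x 3 point set via SVD."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]            # direction of least variance = plane normal
    return normal, centroid
```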
-
Patent number: 11960345
Abstract: A method includes obtaining a request for one of multiple operational modes from an application installed on an extended reality (XR) device or an XR runtime/renderer of the XR device. The method also includes selecting a first mode of the operational modes, based at least partly on a real-time system performance of the XR device. The method also includes publishing the selected first mode to the XR runtime/renderer or the application. The method also includes performing a task related to at least one of image rendering or computer vision calculations for the application, using an algorithm associated with the selected first mode.
Type: Grant
Filed: May 23, 2022
Date of Patent: April 16, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Christopher A. Peri, Moiz Kaizar Sonasath, Lu Luo
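A hedged sketch of the mode-selection idea; the mode names, performance signals, and thresholds are illustrative assumptions, not the patent's actual policy:

```python
def select_mode(requested_mode, fps, thermal_headroom):
    """Pick an operational mode from real-time performance signals."""
    if fps < 60.0 or thermal_headroom < 0.2:
        return "reduced_quality"   # fall back when the device is struggling
    return requested_mode          # otherwise honor the application's request
```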
-
Patent number: 11900845
Abstract: A system and method for display distortion calibration capture distortion with image patterns and calibrate it with ray tracing for an optical pipeline with lenses. The system includes an image sensor and a processor to perform the method for display distortion calibration. The method includes generating an image pattern to encode display image pixels by encoding display distortion associated with a plurality of image patterns. The method also includes determining a distortion of the image pattern resulting from a lens on a head-mounted display (HMD) and decoding the distorted image patterns to obtain distortion of pixels on a display. A lookup table of the angular distortion of all the pixels on the display is created. The method further includes providing a compensation factor for the distortion by creating distortion correction based on the lookup table of angular distortion.
Type: Grant
Filed: March 16, 2022
Date of Patent: February 13, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yingen Xiong, Christopher A. Peri
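The lookup-table step can be sketched as storing a per-pixel offset between where the calibration pattern landed and where it should have landed, then pre-offsetting render coordinates; the array layouts are illustrative:

```python
import numpy as np

def build_distortion_lut(measured_uv, ideal_uv):
    # Per-pixel distortion = measured pattern position minus ideal position,
    # stored as an H x W x 2 offset table.
    return measured_uv - ideal_uv

def compensate(render_uv, lut):
    # Pre-offset render coordinates so the lens maps them back to the ideal.
    return render_uv - lut
```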
-
Publication number: 20240046583
Abstract: A method includes obtaining images of a scene and corresponding position data of a device that captures the images. The method also includes determining position data and direction data associated with camera rays passing through keyframes of the images. The method further includes using a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP to create sparse feature vectors. The method also includes storing the sparse feature vectors in at least one data structure. The method further includes receiving a request to render the scene on an augmented reality (AR) device associated with a viewing direction. In addition, the method includes rendering the scene associated with the viewing direction using the sparse feature vectors in the at least one data structure.
Type: Application
Filed: July 17, 2023
Publication date: February 8, 2024
Inventors: Yingen Xiong, Christopher A. Peri
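A toy sketch of the two-branch feature idea, using single random linear layers as stand-ins for the trained position-dependent and direction-dependent MLPs (nothing here is the patent's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W_pos = rng.normal(size=(3, 16))   # position-dependent branch (untrained stand-in)
W_dir = rng.normal(size=(3, 16))   # direction-dependent branch (untrained stand-in)

def ray_features(origin, direction):
    """Concatenate features from the two branches for one camera ray."""
    f_pos = np.tanh(origin @ W_pos)
    f_dir = np.tanh(direction @ W_dir)
    return np.concatenate([f_pos, f_dir])   # cacheable sparse feature vector
```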
-
Publication number: 20230410414
Abstract: A method of video transformation for a video see-through (VST) augmented reality (AR) device includes obtaining video frames from multiple cameras associated with the VST AR device, where each video frame is associated with position data. The method also includes generating camera viewpoint depth maps associated with the video frames based on the video frames and the position data. The method further includes performing depth re-projection to transform the video frames from camera viewpoints to rendering viewpoints using the camera viewpoint depth maps. The method also includes performing hole filling of one or more holes created in one or more occlusion areas of at least one of the transformed video frames during the depth re-projection to generate at least one hole-filled video frame. In addition, the method includes displaying the transformed video frames including the at least one hole-filled video frame on multiple displays associated with the VST AR device.
Type: Application
Filed: November 4, 2022
Publication date: December 21, 2023
Inventors: Yingen Xiong, Christopher A. Peri
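This application corresponds to the granted patent 12154219 above; to complement the re-projection sketch given there, here is a deliberately simple stand-in for the hole-filling step (row-wise nearest-valid fill, not the patent's actual inpainting):

```python
import numpy as np

def fill_holes_rowwise(img, valid):
    """Fill invalid pixels from the nearest valid pixel at or to the right in
    the same row (clamped to the last valid one)."""
    out = img.copy()
    for y in range(img.shape[0]):
        cols = np.where(valid[y])[0]
        if cols.size == 0:
            continue   # nothing valid in this row to copy from
        pick = cols[np.searchsorted(cols, np.arange(img.shape[1])).clip(max=cols.size - 1)]
        out[y] = img[y, pick]
    return out
```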
-
Patent number: 11741671
Abstract: Generating a 3D scene reconstruction using depth fusion can include creating a high-resolution sparse depth map by mapping sensor depths from a low-resolution depth map to points corresponding to pixels of a high-resolution color image of a scene. The high-resolution sparse depth map can have the same resolution as the high-resolution color image. A fused sparse depth map can be produced by combining the high-resolution sparse depth map with sparse depths reconstructed from the high-resolution color image. A high-resolution dense depth map can then be generated based on fused sparse depths of the fused sparse depth map.
Type: Grant
Filed: October 13, 2021
Date of Patent: August 29, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yingen Xiong, Christopher A. Peri
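The mapping of low-resolution sensor depths to a high-resolution sparse map can be sketched as a nearest-pixel scatter; the scale handling below is an illustrative assumption:

```python
import numpy as np

def lift_sparse_depth(depth_lo, hi_shape):
    """Scatter low-resolution depth samples into a high-resolution sparse map
    aligned with the color image (nearest-pixel mapping, illustrative)."""
    sparse = np.zeros(hi_shape)
    sy = hi_shape[0] / depth_lo.shape[0]
    sx = hi_shape[1] / depth_lo.shape[1]
    ys, xs = np.nonzero(depth_lo > 0)          # valid sensor depths only
    sparse[(ys * sy).astype(int), (xs * sx).astype(int)] = depth_lo[ys, xs]
    return sparse
```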
-
Patent number: 11741676
Abstract: A method includes obtaining scene data, wherein the scene data includes image data of a scene and depth data of the scene, and the depth data includes depth measurement values of points of a point cloud. The method further includes defining a first detection area, wherein the first detection area includes a spatially defined subset of the scene data, defining a plane model based on points of the point cloud within the first detection area, and defining a plane based on the plane model. The method includes determining at least one value of a usable size of the plane based on points of the point cloud, comparing at least one value of a characteristic size of a digital object to the at least one value of the usable size of the plane, and generating a display including the digital object positioned upon the plane based on the plane model.
Type: Grant
Filed: June 11, 2021
Date of Patent: August 29, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Christopher A. Peri, Yingen Xiong, Haihui Guan
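The size comparison can be sketched by comparing the plane's usable extent (from its inlier points, expressed in 2D plane coordinates) with the object's footprint; a hedged stand-in:

```python
import numpy as np

def fits_on_plane(plane_pts_2d, obj_width, obj_depth):
    """Compare the plane's usable extent against the object's characteristic
    footprint before placing the digital object on the plane."""
    extent = plane_pts_2d.max(axis=0) - plane_pts_2d.min(axis=0)
    return bool(extent[0] >= obj_width and extent[1] >= obj_depth)
```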
-
Publication number: 20230245373
Abstract: A method includes receiving, from a camera, one or more frames of image data of a scene comprising a background and one or more three-dimensional objects, wherein each frame comprises a raster of pixels of image data; detecting layer information of the scene, wherein the layer information is associated with a depth-based distribution of the pixels in the one or more frames; and determining a multi-layer model for the scene, the multi-layer model comprising a plurality of discrete layers comprising first and second discrete layers, wherein each discrete layer is associated with a unique depth value relative to the camera. The method further includes mapping the pixels to the layers of the plurality of discrete layers; rendering the pixels as a first image of the scene as viewed from a first perspective; and rendering the pixels as a second image of the scene as viewed from a second perspective.
Type: Application
Filed: June 8, 2022
Publication date: August 3, 2023
Inventors: Yingen Xiong, Christopher Peri
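A sketch of the layer assignment and the constant per-layer disparity that would separate the two rendered perspectives; all names are illustrative:

```python
import numpy as np

def assign_layers(depth, layer_depths):
    """Map each pixel to the discrete layer whose depth value is closest."""
    return np.abs(depth[..., None] - layer_depths).argmin(axis=-1)

def layer_shift_px(fx, eye_offset_m, layer_depths):
    """Constant per-layer disparity used to synthesize a second perspective."""
    return fx * eye_offset_m / layer_depths
```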
-
Publication number: 20230245396
Abstract: A method includes receiving depth data of a real-world scene from a depth sensor, receiving image data of the scene from an image sensor, receiving movement data of the depth and image sensors from an IMU, and determining an initial 6DOF pose of an apparatus based on the depth data, image data, and/or movement data. The method also includes passing the 6DOF pose to a back end to obtain an optimized pose and generating, based on the optimized pose, image data, and depth data, a three-dimensional reconstruction of the scene. The reconstruction includes a dense depth map, a dense surface mesh, and/or one or more semantically segmented objects. The method further includes passing the reconstruction to a front end and rendering, at the front end, an XR frame. The XR frame includes a three-dimensional XR object projected on one or more surfaces of the scene.
Type: Application
Filed: July 6, 2022
Publication date: August 3, 2023
Inventors: Yingen Xiong, Christopher A. Peri
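This application corresponds to the granted patent 12062145 above; as a complement to the pipeline sketch given there, here is one front-end detail, anchoring a virtual object on a reconstructed surface by projecting onto its plane (an illustrative step, not the patent's renderer):

```python
import numpy as np

def anchor_on_surface(anchor, plane_normal, plane_point):
    """Project a virtual object's anchor point onto a reconstructed surface
    plane, i.e., the closest point on the plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return anchor - np.dot(anchor - plane_point, n) * n
```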
-
Publication number: 20230245322
Abstract: In one embodiment, a method includes identifying, in each image of a stereoscopic pair of images of a scene at a particular time, every pixel as either a static pixel corresponding to a portion of a scene that does not have local motion at that time or a dynamic pixel corresponding to a portion of a scene that has local motion at that time. For each static pixel, the method includes comparing each of a plurality of depth calculations for the pixel, and when the depth calculations differ by at least a threshold amount, then re-labeling that pixel as a dynamic pixel. For each dynamic pixel, the method includes comparing a geometric 3D calculation for the pixel with a temporal 3D calculation for that pixel, and when the geometric 3D calculation and the temporal 3D calculation are within a threshold amount, then re-labeling the pixel as a static pixel.
Type: Application
Filed: July 28, 2022
Publication date: August 3, 2023
Inventors: Yingen Xiong, Christopher Peri
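The two relabeling rules can be sketched directly as per-pixel mask logic; the shared threshold and the two-depth comparison are illustrative assumptions:

```python
import numpy as np

def relabel(static_mask, depth_a, depth_b, geo_3d, temp_3d, thresh=0.05):
    """Apply the two relabeling rules per pixel."""
    disagree = np.abs(depth_a - depth_b) >= thresh              # static -> dynamic
    agree = np.linalg.norm(geo_3d - temp_3d, axis=-1) < thresh  # dynamic -> static
    return (static_mask & ~disagree) | (~static_mask & agree)
```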
-
Patent number: 11715182
Abstract: In one embodiment, a method includes accessing a plurality of image frames captured by one or more cameras, classifying one or more first objects detected in one or more first image frames of the plurality of image frames as undesirable, applying a pixel filtering to the one or more first image frames to replace one or more first pixel sets associated with the one or more first objects with pixels from one or more second image frames of the plurality of image frames to generate a final image frame, and providing the final image frame for display.
Type: Grant
Filed: March 17, 2022
Date of Patent: August 1, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Christopher A. Peri, Euisuk Chung, Yingen Xiong, Lu Luo
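A minimal sketch of the pixel-filtering step, assuming the donor frame is already aligned to the target frame (alignment and warping are elided):

```python
import numpy as np

def filter_undesirable(frame, undesirable_mask, donor_frame):
    """Replace pixels of objects classified as undesirable with the
    corresponding pixels from a second frame."""
    out = frame.copy()
    out[undesirable_mask] = donor_frame[undesirable_mask]
    return out
```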
-
Patent number: 11704877
Abstract: A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
Type: Grant
Filed: June 30, 2021
Date of Patent: July 18, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Christopher A. Peri, Yingen Xiong, Lu Luo
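The feature-point motion that could drive such a re-warp can be sketched with a constant-velocity model (an illustrative assumption, not the patent's spatiotemporal handling):

```python
import numpy as np

def extrapolate_features(points_px, velocities_px, dt):
    """Constant-velocity extrapolation of tracked feature points, usable to
    re-warp the last good frame while the image stream is interrupted."""
    return points_px + velocities_px * dt
```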
-
Publication number: 20230215075
Abstract: A method for deferred rendering on an extended reality (XR) device includes establishing a transport session for content on the XR device with a server. The method also includes performing a loop configuration for the content based on the transport session between the XR device and the server. The method further includes providing pose information based on parameters of the loop configuration to the server. The method also includes receiving pre-rendered content based on the pose information from the server. In addition, the method includes processing and displaying the pre-rendered content on the XR device.
Type: Application
Filed: October 20, 2022
Publication date: July 6, 2023
Inventors: Christopher A. Peri, Eric Ho Ching Yip
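One possible shape of the device-side loop, with `session` as a hypothetical transport object (none of these calls come from the patent or any real API):

```python
def deferred_render_loop(session, get_pose, display):
    """Stream pose up, show pre-rendered frames coming back."""
    while session.active:
        session.send_pose(get_pose())   # pose parameters per the loop config
        frame = session.recv_frame()    # pre-rendered content from the server
        display(frame)                  # process and present on the XR device
```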