Patents by Inventor Yingen Xiong

Yingen Xiong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240129448
    Abstract: A method includes obtaining a 2D image captured using an imaging sensor. The 2D image is associated with an imaging sensor pose. The method also includes providing the 2D image, the imaging sensor pose, and one or more additional imaging sensor poses to at least one machine learning model that is trained to generate a texture map and a depth map for the imaging sensor pose and each additional imaging sensor pose. The method further includes generating a stereo image pair based on the texture maps and the depth maps. The stereo image pair represents a 2.5D view of the 2D image. The 2.5D view includes a pair of images each including multiple collections of pixels and, for each collection of pixels, a common depth associated with the pixels in the collection of pixels. In addition, the method includes initiating display of the stereo image pair on an XR device.
    Type: Application
    Filed: July 17, 2023
    Publication date: April 18, 2024
    Inventors: Yingen Xiong, Christopher A. Peri
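    Illustrative sketch: the 2.5D stereo synthesis above can be pictured as depth-image-based rendering, where pixels shift horizontally in proportion to inverse depth. The minimal Python sketch below uses a pinhole disparity model with made-up names and constants; it is an assumption for illustration, not the patented method.

      import numpy as np

      def synthesize_stereo_pair(texture, depth, baseline=0.06, focal=500.0):
          # Shift each pixel horizontally by a disparity proportional to
          # inverse depth to form left/right views (toy stand-in for the
          # model-generated texture and depth maps per sensor pose).
          h, w = depth.shape
          left, right = np.zeros_like(texture), np.zeros_like(texture)
          ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
          disparity = (focal * baseline / np.maximum(depth, 1e-6)).astype(int)
          left[ys, np.clip(xs + disparity // 2, 0, w - 1)] = texture[ys, xs]
          right[ys, np.clip(xs - disparity // 2, 0, w - 1)] = texture[ys, xs]
          return left, right

      # Two "collections" of pixels sharing a common depth, as in the 2.5D view.
      tex = np.random.default_rng(0).random((120, 160, 3))
      dep = np.full((120, 160), 4.0)
      dep[40:80, 50:110] = 1.5   # nearer collection of pixels
      left_img, right_img = synthesize_stereo_pair(tex, dep)
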
  • Patent number: 11961184
    Abstract: A system and method provide 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions. The method further includes generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. In addition, the method includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
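    Illustrative sketch: one way to picture the plane/surface split is to label depth-map patches by how well a least-squares plane fits them, then send each region set down its own reconstruction path. The patch test and threshold below are assumptions, not the patented segmentation.

      import numpy as np

      def split_plane_surface(depth, patch=8, tol=0.01):
          # Label each patch "plane" if a least-squares plane fits it
          # within tol, else "surface"; the two region sets then feed
          # separate reconstruction paths before being integrated.
          h, w = depth.shape
          plane_mask = np.zeros((h, w), bool)
          ys, xs = np.meshgrid(np.arange(patch), np.arange(patch), indexing="ij")
          A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(patch * patch)])
          for y in range(0, h - patch + 1, patch):
              for x in range(0, w - patch + 1, patch):
                  blk = depth[y:y + patch, x:x + patch].ravel()
                  coef, *_ = np.linalg.lstsq(A, blk, rcond=None)
                  plane_mask[y:y + patch, x:x + patch] = np.abs(blk - A @ coef).max() < tol
          return plane_mask

      dep = np.full((64, 64), 2.0)   # flat wall...
      dep[16:48, 16:48] += 0.2 * np.random.default_rng(1).random((32, 32))   # ...plus a rough object
      regions = split_plane_surface(dep)   # True = plane region, False = surface region
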
  • Publication number: 20240121370
    Abstract: A method includes obtaining a stereo image pair including a first image and a second image. The method also includes generating a first feature map of the first image and a second feature map of the second image, the first and second feature maps including extracted positions associated with pixels in the images. The method further includes generating a disparity map between the first and second images based on a dense depth map. The method also includes generating a verified depth map based on a pixelwise comparison of predicted positions and the extracted positions associated with at least some of the pixels in at least one of the images, the predicted positions determined based on the disparity map. In addition, the method includes generating a first virtual view and a second virtual view to present on a display panel of a VST AR device based on the verified depth map.
    Type: Application
    Filed: April 18, 2023
    Publication date: April 11, 2024
    Inventor: Yingen Xiong
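    Illustrative sketch: the entry rests on the rectified-stereo relation depth = focal x baseline / disparity, and verification can be pictured as checking that each depth re-predicts its matched pixel position. The names, constants, and 1-pixel tolerance below are assumptions.

      import numpy as np

      def depth_from_disparity(disparity, focal=500.0, baseline=0.06):
          # Rectified-stereo relation: z = f * B / d.
          return focal * baseline / np.maximum(disparity, 1e-6)

      def verify_depth(depth, extracted_x, focal=500.0, baseline=0.06):
          # Keep a depth only where the disparity it implies re-predicts
          # the matched pixel position (a stand-in for the pixelwise
          # comparison of predicted vs. extracted feature positions).
          h, w = depth.shape
          xs = np.tile(np.arange(w, dtype=float), (h, 1))
          predicted_x = xs - focal * baseline / np.maximum(depth, 1e-6)
          return np.where(np.abs(predicted_x - extracted_x) < 1.0, depth, np.nan)

      disp = np.full((90, 120), 12.0)
      z = depth_from_disparity(disp)                           # 2.5 m everywhere
      matches = np.tile(np.arange(120, dtype=float), (90, 1)) - 12.0
      verified = verify_depth(z, matches)                      # all pixels pass here
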
  • Publication number: 20240078765
    Abstract: A method includes generating a virtual view image and a virtual depth map based on an image captured using a see-through camera and a corresponding depth map. The virtual view image and the virtual depth map include holes for which image data or depth data cannot be determined. The method also includes searching one or more previous images to locate a region in at least one previous image that includes missing pixels in the holes. The method further includes at least partially filling the holes in the virtual view image and the virtual depth map with image data and depth data associated with the located region to generate a filled virtual view image and a filled virtual depth map. In addition, the method includes generating a virtual view to present on a display panel of a VST AR device using the filled virtual view image and the filled virtual depth map.
    Type: Application
    Filed: July 17, 2023
    Publication date: March 7, 2024
    Inventor: Yingen Xiong
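    Illustrative sketch: the hole-filling step can be pictured as borrowing pixels from previous frames that have valid data where the current virtual view does not. The abstract searches for matching regions; the sketch below simplifies that to a per-pixel lookup, which is an assumption.

      import numpy as np

      def fill_holes_from_history(view, depth, prev_views, prev_depths):
          # Fill hole pixels (NaN) from the most recent previous frame
          # that has valid image and depth data at that location.
          filled_view, filled_depth = view.copy(), depth.copy()
          for pv, pd in zip(prev_views, prev_depths):   # newest first
              usable = np.isnan(filled_depth) & ~np.isnan(pd)
              filled_view[usable] = pv[usable]
              filled_depth[usable] = pd[usable]
          return filled_view, filled_depth

      rng = np.random.default_rng(2)
      cur, curd = rng.random((64, 64)), rng.random((64, 64)) + 1.0
      cur[20:30, 20:30] = curd[20:30, 20:30] = np.nan   # occlusion holes
      prev, prevd = rng.random((64, 64)), rng.random((64, 64)) + 1.0
      view_f, depth_f = fill_holes_from_history(cur, curd, [prev], [prevd])
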
  • Publication number: 20240073514
    Abstract: A method includes, in response to initiating a shooting mode of a camera application on an electronic device, collecting sensor information comprising at least one of: motion data of the electronic device, position data of the electronic device, and image data captured by one or more imaging sensors of the electronic device, wherein the shooting mode represents at least one of: a video record mode and an image capture mode. The method also includes determining, using a trained machine learning model, whether a user intention is to record video or capture an image based on features extracted from the sensor information. The method further includes recording video regardless of the shooting mode in response to determining that the user intention is to record video or capturing the image regardless of the shooting mode in response to determining that the user intention is to capture the image.
    Type: Application
    Filed: April 20, 2023
    Publication date: February 29, 2024
    Inventors: Christopher Anthony Peri, Ravindraraj Mamadgi, Yingen Xiong
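    Illustrative sketch: the intent decision can be pictured as extracting a small feature vector from the sensor window and running it through a classifier. The features, the logistic stand-in for the trained model, and all weights below are assumptions.

      import numpy as np

      def extract_features(motion, position, frames):
          # Toy features: mean device-motion magnitude, positional
          # drift, and inter-frame image change over the sensor window.
          return np.array([
              np.linalg.norm(motion, axis=1).mean(),
              np.linalg.norm(position[-1] - position[0]),
              np.abs(np.diff(frames, axis=0)).mean(),
          ])

      def predict_intent(features, w, b):
          # Logistic stand-in for the trained machine learning model.
          p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
          return "record video" if p > 0.5 else "capture image"

      rng = np.random.default_rng(3)
      feats = extract_features(rng.normal(size=(30, 3)),   # motion samples
                               rng.normal(size=(30, 3)),   # position samples
                               rng.random((5, 32, 32)))    # captured frames
      print(predict_intent(feats, w=np.array([0.8, 0.5, 1.2]), b=-1.0))
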
  • Publication number: 20240062483
    Abstract: A method includes receiving first and second images from first and second see-through cameras with first and second camera viewpoints. The method also includes generating a first virtual image corresponding to a first virtual viewpoint by applying a first mapping to the first image. The first mapping is based on relative positions of the first camera viewpoint and the first virtual viewpoint corresponding to a first eye of a user. The method further includes generating a second virtual image corresponding to a second virtual viewpoint by applying a second mapping to the second image. The second mapping is based on relative positions of the second camera viewpoint and the second virtual viewpoint corresponding to a second eye of the user. In addition, the method includes presenting the first and second virtual images to the first and second virtual viewpoints on at least one display panel of an augmented reality device.
    Type: Application
    Filed: April 5, 2023
    Publication date: February 22, 2024
    Inventor: Yingen Xiong
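    Illustrative sketch: each per-eye mapping can be pictured as warping the camera image into the virtual eye viewpoint. The sketch below uses a fixed 3x3 homography and nearest-neighbor inverse warping; the matrices standing in for the camera-to-eye relative poses are assumptions.

      import numpy as np

      def warp_to_virtual_view(image, H):
          # Inverse-warp an image through homography H (camera viewpoint
          # to virtual eye viewpoint); a real mapping would be derived
          # from the relative positions of camera and eye.
          h, w = image.shape
          ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
          pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
          src = np.linalg.inv(H) @ pts        # where each output pixel samples
          sx = (src[0] / src[2]).round().astype(int).reshape(h, w)
          sy = (src[1] / src[2]).round().astype(int).reshape(h, w)
          valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
          out = np.zeros_like(image)
          out[valid] = image[sy[valid], sx[valid]]
          return out

      # Toy per-eye mappings: horizontal shifts standing in for the offsets
      # between the two see-through cameras and the user's two eyes.
      H_left = np.array([[1.0, 0, 8.0], [0, 1.0, 0], [0, 0, 1.0]])
      H_right = np.array([[1.0, 0, -8.0], [0, 1.0, 0], [0, 0, 1.0]])
      img = np.random.default_rng(4).random((80, 100))
      left_view = warp_to_virtual_view(img, H_left)
      right_view = warp_to_virtual_view(img, H_right)
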
  • Patent number: 11900845
    Abstract: A system and method for display distortion calibration are configured to capture distortion with image patterns and calibrate distortion with ray tracing for an optical pipeline with lenses. The system includes an image sensor and a processor to perform the method for display distortion calibration. The method includes generating an image pattern to encode display image pixels by encoding display distortion associated with a plurality of image patterns. The method also includes determining a distortion of the image pattern resulting from a lens on a head-mounted display (HMD) and decoding the distorted image patterns to obtain distortion of pixels on a display. A lookup table of angular distortion is created for all the pixels on the display. The method further includes providing a compensation factor for the distortion by creating distortion correction based on the lookup table of angular distortion.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: February 13, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
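    Illustrative sketch: the final correction step, compensating pixels through a per-pixel lookup table, can be pictured as a remap. The LUT contents below come from a synthetic radial model, an assumption; the patent derives them by encoding and decoding image patterns through the HMD lens.

      import numpy as np

      def correct_with_lut(image, lut_x, lut_y):
          # Remap each display pixel through a per-pixel lookup table of
          # source coordinates so the image appears undistorted through
          # the HMD lens; the LUT values here are synthetic.
          h, w = image.shape
          sx = np.clip(lut_x.round().astype(int), 0, w - 1)
          sy = np.clip(lut_y.round().astype(int), 0, h - 1)
          return image[sy, sx]

      h, w = 96, 128
      ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
      # Synthetic radial model standing in for the calibrated angular-distortion LUT.
      cx, cy, k = w / 2, h / 2, 1e-6
      r2 = (xs - cx) ** 2 + (ys - cy) ** 2
      lut_x = cx + (xs - cx) * (1 + k * r2)
      lut_y = cy + (ys - cy) * (1 + k * r2)
      corrected = correct_with_lut(np.random.default_rng(5).random((h, w)), lut_x, lut_y)
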
  • Publication number: 20240046576
    Abstract: In one embodiment, a method includes capturing, by a calibration camera, a calibration pattern displayed on a display of a video see-through AR system, where the calibration camera is located at an eye position for viewing content on the video see-through AR system. The method further includes determining, based on the captured calibration pattern, one or more system parameters of the video see-through AR system that represent combined distortion caused by a display lens of the video see-through AR system and caused by a camera lens of the see-through camera; determining, based on the one or more system parameters and on one or more camera parameters that represent distortion caused by the see-through camera, one or more display-lens parameters that represent distortion caused by the display lens; and storing the one or more system parameters and the one or more display-lens parameters as a calibration for the system.
    Type: Application
    Filed: April 18, 2023
    Publication date: February 8, 2024
    Inventor: Yingen Xiong
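    Illustrative sketch: separating the display-lens share from the combined measurement can be pictured with simple first-order radial models whose coefficients approximately add when the distortions are small. That additive approximation, and all values below, are assumptions for illustration, not the patented procedure.

      import numpy as np

      def radial_distort(r, k):
          # First-order radial model: r' = r * (1 + k * r^2).
          return r * (1.0 + k * r * r)

      # Combined coefficient measured through display lens plus see-through
      # camera lens, and the camera-only coefficient from a prior camera
      # calibration; both values are synthetic.
      k_system, k_camera = 0.085, 0.030

      # For small first-order distortions applied in sequence the
      # coefficients approximately add, so the display-lens share is:
      k_display = k_system - k_camera

      r = np.linspace(0.0, 1.0, 5)
      print(radial_distort(r, k_display))   # display-lens-only curve
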
  • Publication number: 20240046575
    Abstract: In one embodiment, a method includes capturing, using a pair of cameras of a video see-through AR system, stereo images of a predetermined calibration object having a predetermined pose and obtaining, by a position sensor, position data for each of the pair of cameras at a time when the stereo images are captured. The method further includes generating a 3D reconstruction of the calibration object. The method further includes performing a registration process by generating, using parameters of each of a pair of stereo virtual cameras, stereo virtual views that include the predetermined calibration object and determining, for each of the stereo virtual views, one or more differences between the predetermined calibration object in the virtual images and a virtual rendering of the calibration object, and then using those differences either to adjust the virtual-camera parameters and repeat the registration process or to store the parameters for the system.
    Type: Application
    Filed: April 18, 2023
    Publication date: February 8, 2024
    Inventor: Yingen Xiong
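    Illustrative sketch: the registration loop can be pictured as rendering calibration markers with the current virtual-camera parameters, measuring the differences against the observed positions, and either adjusting the parameters and repeating or accepting them. The toy projection model and Gauss-Newton update below are assumptions.

      import numpy as np

      # Markers on the predetermined calibration object: (X, Z) per point.
      pts = np.array([[0.2, 1.0], [-0.3, 2.0], [0.5, 1.5]])

      def render(params):
          fx, cx = params               # toy virtual-camera parameters
          return fx * pts[:, 0] / pts[:, 1] + cx

      observed = render(np.array([500.0, 320.0]))   # positions seen in the virtual view

      params = np.array([450.0, 300.0])             # initial parameter guess
      for _ in range(100):
          err = observed - render(params)           # per-marker differences
          if np.abs(err).max() < 1e-3:
              break                                 # small enough: store parameters
          # Jacobian of the projection w.r.t. (fx, cx) is [X/Z, 1] per marker.
          J = np.column_stack([pts[:, 0] / pts[:, 1], np.ones(len(pts))])
          params = params + np.linalg.lstsq(J, err, rcond=None)[0]   # adjust, repeat
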
  • Publication number: 20240046583
    Abstract: A method includes obtaining images of a scene and corresponding position data of a device that captures the images. The method also includes determining position data and direction data associated with camera rays passing through keyframes of the images. The method further includes using a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP to create sparse feature vectors. The method also includes storing the sparse feature vectors in at least one data structure. The method further includes receiving a request to render the scene on an augmented reality (AR) device associated with a viewing direction. In addition, the method includes rendering the scene associated with the viewing direction using the sparse feature vectors in the at least one data structure.
    Type: Application
    Filed: July 17, 2023
    Publication date: February 8, 2024
    Inventors: Yingen Xiong, Christopher A. Peri
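    Illustrative sketch: the two small networks named in the abstract, one conditioned on ray position and one on viewing direction, can be pictured as below, with visited positions cached in a sparse structure. Layer sizes, random weights, and the voxel-keyed cache are assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      def mlp(dims):
          # Random weights stand in for trained parameters.
          return [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
                  for m, n in zip(dims[:-1], dims[1:])]

      def forward(layers, x):
          for w, b in layers[:-1]:
              x = np.tanh(x @ w + b)
          w, b = layers[-1]
          return x @ w + b

      pos_mlp = mlp([3, 64, 16])   # position-dependent branch
      dir_mlp = mlp([3, 32, 16])   # direction-dependent branch
      cache = {}                   # sparse storage keyed by coarse voxel index

      def feature_for(point, direction, voxel=0.5):
          key = tuple((point // voxel).astype(int))
          if key not in cache:     # only visited voxels get stored (sparsity)
              cache[key] = forward(pos_mlp, point)
          return np.concatenate([cache[key], forward(dir_mlp, direction)])

      f = feature_for(np.array([1.2, 0.3, -0.7]), np.array([0.0, 0.0, 1.0]))
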
  • Publication number: 20240046577
    Abstract: In one embodiment, a method includes accessing an image captured by a camera of a pair of stereoscopic cameras of a video see-through augmented-reality (AR) system. The method further includes determining, based on each of one or more models of one or more components of the video see-through AR system, a modification to the image and generating, based on the modification to the image, a transformation map for the camera that captured the accessed image. The transformation map identifies frame-independent transformations to apply for rendering a scene based on one or more subsequent images captured by that camera.
    Type: Application
    Filed: April 18, 2023
    Publication date: February 8, 2024
    Inventor: Yingen Xiong
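    Illustrative sketch: the frame-independent transformation map can be pictured as composing each component model's pixel modification once, then sampling every subsequent frame through the stored map. The toy component models below are assumptions.

      import numpy as np

      def build_transformation_map(shape, component_models):
          # Compose per-component pixel modifications once into a single
          # frame-independent map of source coordinates.
          h, w = shape
          ys, xs = np.meshgrid(np.arange(h, dtype=float),
                               np.arange(w, dtype=float), indexing="ij")
          for model in component_models:
              xs, ys = model(xs, ys)
          return xs, ys

      def apply_map(frame, tmap):
          # Per-frame rendering only samples through the stored map.
          xs, ys = tmap
          h, w = frame.shape
          sx = np.clip(xs.round().astype(int), 0, w - 1)
          sy = np.clip(ys.round().astype(int), 0, h - 1)
          return frame[sy, sx]

      def lens_model(xs, ys):      # toy component model: lateral offset
          return xs + 2.0, ys

      def display_model(xs, ys):   # toy component model: slight magnification
          return xs * 1.01, ys * 1.01

      tmap = build_transformation_map((72, 96), [lens_model, display_model])
      out = apply_map(np.random.default_rng(10).random((72, 96)), tmap)
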
  • Publication number: 20230419595
    Abstract: A method includes capturing an image and associating the image with a camera pose for each of multiple cameras. The method also includes determining, for each camera, a first contribution of the image for a first virtual view for display on a first display and a second contribution of the image for a second virtual view for display on a second display. The method further includes determining, for each camera, a first confidence map for the first virtual view based on the camera pose and a position of the camera in relation to a first virtual camera and a second confidence map for the second virtual view based on the camera pose and the position of the camera in relation to a second virtual camera. In addition, the method includes generating the first virtual view by combining the first contribution using the first confidence map for each of the cameras and the second virtual view by combining the second contribution using the second confidence map for each of the cameras.
    Type: Application
    Filed: November 14, 2022
    Publication date: December 28, 2023
    Inventor: Yingen Xiong
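    Illustrative sketch: combining contributions with confidence maps can be pictured as a per-pixel weighted average over cameras, repeated once per virtual view. The synthetic confidences below stand in for the pose-derived maps in the abstract.

      import numpy as np

      def blend_views(contributions, confidence_maps):
          # Combine each camera's contribution into one virtual view,
          # weighting every pixel by that camera's confidence map.
          num = np.zeros_like(contributions[0])
          den = np.zeros_like(confidence_maps[0])
          for img, conf in zip(contributions, confidence_maps):
              num += img * conf
              den += conf
          return num / np.maximum(den, 1e-6)

      rng = np.random.default_rng(11)
      views = [rng.random((72, 96)) for _ in range(3)]   # per-camera contributions
      confs = [rng.random((72, 96)) for _ in range(3)]   # pose/position-based confidences
      first_virtual_view = blend_views(views, confs)     # repeat per display/eye
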
  • Publication number: 20230410414
    Abstract: A method of video transformation for a video see-through (VST) augmented reality (AR) device includes obtaining video frames from multiple cameras associated with the VST AR device, where each video frame is associated with position data. The method also includes generating camera viewpoint depth maps associated with the video frames based on the video frames and the position data. The method further includes performing depth re-projection to transform the video frames from camera viewpoints to rendering viewpoints using the camera viewpoint depth maps. The method also includes performing hole filling of one or more holes created in one or more occlusion areas of at least one of the transformed video frames during the depth re-projection to generate at least one hole-filled video frame. In addition, the method includes displaying the transformed video frames including the at least one hole-filled video frame on multiple displays associated with the VST AR device.
    Type: Application
    Filed: November 4, 2022
    Publication date: December 21, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
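    Illustrative sketch: depth re-projection can be pictured as unprojecting each pixel with its depth, applying the camera-to-rendering-viewpoint pose, and projecting back; pixels nothing lands on are the occlusion holes the method then fills. The intrinsics, pose, and nearest-neighbor splatting below are assumptions.

      import numpy as np

      def reproject(image, depth, K, T):
          # Unproject with the camera-viewpoint depth map, apply relative
          # pose T (4x4), project back. Pixels no surface lands on stay
          # NaN - the occlusion holes filled in the next step.
          h, w = depth.shape
          ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
          pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
          rays = np.linalg.inv(K) @ pix
          pts = np.vstack([rays * depth.ravel(), np.ones((1, h * w))])
          cam2 = (T @ pts)[:3]
          u = (cam2[0] / cam2[2] * K[0, 0] + K[0, 2]).round().astype(int)
          v = (cam2[1] / cam2[2] * K[1, 1] + K[1, 2]).round().astype(int)
          out = np.full((h, w), np.nan)
          ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (cam2[2] > 0)
          out[v[ok], u[ok]] = image.ravel()[ok]   # nearest splat, no z-buffer
          return out

      K = np.array([[120.0, 0, 48], [0, 120.0, 36], [0, 0, 1]])
      T = np.eye(4); T[0, 3] = 0.03    # 3 cm camera-to-rendering-viewpoint offset
      warped = reproject(np.random.default_rng(8).random((72, 96)),
                         np.full((72, 96), 2.0), K, T)
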
  • Patent number: 11756307
    Abstract: Images are obtained using cameras mounted on a vehicle, and at least a portion of the obtained images are displayed on a screen. Motion of the vehicle can be controlled such that it moves toward a physical destination selected from images obtained using the cameras mounted on the vehicle.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: September 12, 2023
    Assignee: Apple Inc.
    Inventors: Jared A. Crawford, Yingen Xiong, Marco Pontil
  • Patent number: 11741671
    Abstract: Generating a 3D scene reconstruction using depth fusion can include creating a high-resolution sparse depth map by mapping sensor depths from a low-resolution depth map to points corresponding to pixels of a high-resolution color image of a scene. The high-resolution sparse depth map can have the same resolution as the high-resolution color image. A fused sparse depth map can be produced by combining the high-resolution sparse depth map with sparse depths reconstructed from the high-resolution color image. A high-resolution dense depth map can then be generated based on fused sparse depths of the fused sparse depth map.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: August 29, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
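    Illustrative sketch: the first step, lifting low-resolution sensor depths onto the high-resolution image grid, can be pictured as placing each depth sample at its corresponding high-resolution pixel and leaving the rest empty until fusion. The nearest-pixel mapping below is an assumption.

      import numpy as np

      def lift_depth_to_high_res(low_depth, high_shape):
          # Place each low-resolution depth sample at its corresponding
          # pixel in a high-resolution sparse depth map (same resolution
          # as the color image); unmapped entries stay NaN until fusion.
          lh, lw = low_depth.shape
          hh, hw = high_shape
          sparse = np.full(high_shape, np.nan)
          ys, xs = np.meshgrid(np.arange(lh), np.arange(lw), indexing="ij")
          sparse[ys * hh // lh, xs * hw // lw] = low_depth
          return sparse

      low = np.random.default_rng(12).random((60, 80)) + 0.5
      sparse_hr = lift_depth_to_high_res(low, (240, 320))   # matches color image size
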
  • Patent number: 11741676
    Abstract: A method includes obtaining scene data, wherein the scene data includes image data of a scene and depth data of the scene, and the depth data includes depth measurement values of points of a point cloud. The method further includes defining a first detection area, wherein the first detection area includes a spatially defined subset of the scene data, defining a plane model based on points of the point cloud within the first detection area, and defining a plane based on the plane model. The method includes determining at least one value of a usable size of the plane based on points of the point cloud, comparing at least one value of a characteristic size of a digital object to the at least one value of the usable size of the plane, and generating a display including the digital object positioned upon the plane based on the plane model.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: August 29, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Yingen Xiong, Haihui Guan
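    Illustrative sketch: the plane model and usable-size test can be pictured as a least-squares plane fit over the detection-area points plus a footprint comparison. The fit, the extent measure, and all values below are assumptions.

      import numpy as np

      def fit_plane(points):
          # Least-squares plane z = a*x + b*y + c over points inside a
          # detection area (a simple stand-in for the plane model).
          A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
          coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
          return coeffs

      def object_fits(points, obj_w, obj_d):
          # Compare the object's characteristic size to the plane's usable extent.
          extent = points[:, :2].max(axis=0) - points[:, :2].min(axis=0)
          return obj_w <= extent[0] and obj_d <= extent[1]

      rng = np.random.default_rng(13)
      xy = rng.random((200, 2)) * 2.0     # roughly 2 m x 2 m tabletop region
      pts = np.column_stack([xy, 0.75 + 0.02 * rng.random(200)])
      a, b, c = fit_plane(pts)
      ok = object_fits(pts, 0.3, 0.3)     # can a 30 cm object sit on the plane?
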
  • Publication number: 20230245373
    Abstract: A method includes receiving, from a camera, one or more frames of image data of a scene comprising a background and one or more three-dimensional objects, wherein each frame comprises a raster of pixels of image data; detecting layer information of the scene, wherein the layer information is associated with a depth-based distribution of the pixels in the one or more frames; and determining a multi-layer model for the scene, the multi-layer model comprising a plurality of discrete layers comprising first and second discrete layers, wherein each discrete layer is associated with a unique depth value relative to the camera. The method further includes mapping the pixels to the layers of the plurality of discrete layers; rendering the pixels as a first image of the scene as viewed from a first perspective; and rendering the pixels as a second image of the scene as viewed from a second perspective.
    Type: Application
    Filed: June 8, 2022
    Publication date: August 3, 2023
    Inventors: Yingen Xiong, Christopher Peri
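    Illustrative sketch: the multi-layer model can be pictured as quantizing the depth distribution into a few discrete layers, each with one representative depth, then rendering a second perspective by shifting whole layers. The layer count, disparity model, and constants below are assumptions.

      import numpy as np

      def assign_layers(depth, num_layers=4):
          # Map each pixel to one of a few discrete layers, each with a
          # unique representative depth relative to the camera.
          edges = np.linspace(depth.min(), depth.max() + 1e-6, num_layers + 1)
          labels = np.clip(np.digitize(depth, edges) - 1, 0, num_layers - 1)
          return labels, 0.5 * (edges[:-1] + edges[1:])

      def render_shifted(image, labels, layer_depths, eye_offset, focal=300.0):
          # Shift each whole layer by a single disparity - a cheap way
          # to render the scene from a second perspective.
          out = np.zeros_like(image)
          w = image.shape[1]
          for i, z in enumerate(layer_depths):
              dx = int(round(focal * eye_offset / z))
              ys, xs = np.nonzero(labels == i)
              out[ys, np.clip(xs + dx, 0, w - 1)] = image[ys, xs]
          return out

      rng = np.random.default_rng(14)
      img, dep = rng.random((64, 96)), rng.random((64, 96)) * 3 + 1
      labels, zs = assign_layers(dep)
      first = render_shifted(img, labels, zs, +0.03)    # first perspective
      second = render_shifted(img, labels, zs, -0.03)   # second perspective
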
  • Publication number: 20230245396
    Abstract: A method includes receiving depth data of a real-world scene from a depth sensor, receiving image data of the scene from an image sensor, receiving movement data of the depth and image sensors from an IMU, and determining an initial 6DOF pose of an apparatus based on the depth data, image data, and/or movement data. The method also includes passing the 6DOF pose to a back end to obtain an optimized pose and generating, based on the optimized pose, image data, and depth data, a three-dimensional reconstruction of the scene. The reconstruction includes a dense depth map, a dense surface mesh, and/or one or more semantically segmented objects. The method further includes passing the reconstruction to a front end and rendering, at the front end, an XR frame. The XR frame includes a three-dimensional XR object projected on one or more surfaces of the scene.
    Type: Application
    Filed: July 6, 2022
    Publication date: August 3, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
  • Publication number: 20230245322
    Abstract: In one embodiment, a method includes identifying, in each image of a stereoscopic pair of images of a scene at a particular time, every pixel as either a static pixel corresponding to a portion of a scene that does not have local motion at that time or a dynamic pixel corresponding to a portion of a scene that has local motion at that time. For each static pixel, the method includes comparing each of a plurality of depth calculations for the pixel, and when the depth calculations differ by at least a threshold amount, then re-labeling that pixel as a dynamic pixel. For each dynamic pixel, the method includes comparing a geometric 3D calculation for the pixel with a temporal 3D calculation for that pixel, and when the geometric 3D calculation and the temporal 3D calculation are within a threshold amount, then re-labeling the pixel as a static pixel.
    Type: Application
    Filed: July 28, 2022
    Publication date: August 3, 2023
    Inventors: Yingen Xiong, Christopher Peri
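    Illustrative sketch: one pass of the re-labeling logic can be pictured as two threshold tests over per-pixel estimates. The threshold value and the sources of the estimates below are assumptions.

      import numpy as np

      def relabel(static_mask, depth_a, depth_b, geo_3d, temp_3d, tol=0.05):
          # A static pixel whose two depth calculations disagree becomes
          # dynamic; a dynamic pixel whose geometric and temporal 3D
          # calculations agree becomes static.
          out = static_mask.copy()
          out[static_mask & (np.abs(depth_a - depth_b) >= tol)] = False
          out[~static_mask & (np.abs(geo_3d - temp_3d) < tol)] = True
          return out

      rng = np.random.default_rng(15)
      mask = rng.random((48, 64)) > 0.5                       # initial static labels
      da, db = rng.random((48, 64)), rng.random((48, 64))     # two depth calculations
      geo, temp = rng.random((48, 64)), rng.random((48, 64))  # geometric/temporal 3D
      new_mask = relabel(mask, da, db, geo, temp)
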
  • Patent number: 11715182
    Abstract: In one embodiment, a method includes accessing a plurality of image frames captured by one or more cameras, classifying one or more first objects detected in one or more first image frames of the plurality of image frames as undesirable, applying pixel filtering to the one or more first image frames to replace one or more first pixel sets associated with the one or more first objects with pixels from one or more second image frames of the plurality of image frames to generate a final image frame, and providing the final image frame for display.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: August 1, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Euisuk Chung, Yingen Xiong, Lu Luo
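    Illustrative sketch: the pixel-filtering step can be pictured as replacing flagged pixels in the first frame with clean pixels from later frames. The masks below are given directly, an assumption; the patent classifies the undesirable objects first.

      import numpy as np

      def remove_undesirable(frames, masks):
          # Build the final frame by replacing pixels flagged as
          # undesirable in the first frame with pixels from later frames
          # where the same location is clean.
          final, bad = frames[0].copy(), masks[0].copy()
          for frame, mask in zip(frames[1:], masks[1:]):
              usable = bad & ~mask        # still bad here, clean there
              final[usable] = frame[usable]
              bad &= mask                 # bad only if bad in every frame
          return final

      rng = np.random.default_rng(16)
      f1, f2 = rng.random((60, 80)), rng.random((60, 80))
      m1 = np.zeros((60, 80), bool); m1[10:20, 30:50] = True   # passerby in frame 1
      m2 = np.zeros((60, 80), bool)                            # clean in frame 2
      final_frame = remove_undesirable([f1, f2], [m1, m2])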