Patents by Inventor Yingen Xiong

Yingen Xiong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11715182
    Abstract: In one embodiment, a method includes accessing a plurality of image frames captured by one or more cameras, classifying one or more first objects detected in one or more first image frames of the plurality of image frames as undesirable, applying a pixel filtering to the one or more first image frames to replace one or more first pixel sets associated with the one or more first objects with pixels from one or more second image frames of the plurality of image frames to generate a final image frame, and providing the final image frame for display.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: August 1, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Euisuk Chung, Yingen Xiong, Lu Luo
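    The pixel-replacement step described in the abstract above can be approximated in a few lines. Below is a minimal NumPy sketch, assuming the frames are already aligned and the undesirable-object classifier has produced a boolean mask; the function name and arguments are illustrative, not taken from the patent.

        import numpy as np

        def filter_undesirable_pixels(frames, ref_idx, undesirable_mask):
            """frames: (N, H, W, 3) aligned image stack; undesirable_mask: (H, W) bool."""
            ref = frames[ref_idx].copy()
            others = np.delete(frames, ref_idx, axis=0)   # the "second" image frames
            # A per-pixel temporal median of the remaining frames stands in for
            # "pixels from one or more second image frames".
            replacement = np.median(others, axis=0).astype(ref.dtype)
            ref[undesirable_mask] = replacement[undesirable_mask]
            return ref                                    # final image frame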
  • Patent number: 11704877
    Abstract: A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: July 18, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Yingen Xiong, Lu Luo
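    A minimal sketch of the re-warping idea, assuming a pinhole intrinsics matrix K and a relative pose (R, t) supplied by the headset tracker; this is one plausible reading of reprojecting feature points that carry position and depth information, not the patented pipeline.

        import numpy as np

        def rewarp_feature_points(uv, depth, K, R, t):
            """uv: (N, 2) pixel coords, depth: (N,), K: (3, 3), R: (3, 3), t: (3,)."""
            ones = np.ones((uv.shape[0], 1))
            rays = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T   # back-project pixels
            pts_3d = rays * depth[:, None]                          # points in the old camera frame
            pts_new = (R @ pts_3d.T).T + t                          # move into the new pose
            proj = (K @ pts_new.T).T
            return proj[:, :2] / proj[:, 2:3]                       # re-warped pixel coordinates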
  • Publication number: 20230215108
    Abstract: A system and method for adaptive volume-based scene reconstruction for an XR platform application are provided. The system includes an image sensor and a processor to perform the method for adaptive volume-based scene reconstruction. The method includes determining a processor computation load. The method also includes, based on the determined computation load, adjusting one or more parameters for 3D scene reconstruction to compensate for the determined computation load. The method further includes rendering a reconstructed 3D scene.
    Type: Application
    Filed: April 6, 2022
    Publication date: July 6, 2023
    Inventors: Christopher A. Peri, Divi Schmidt, Yingen Xiong, Lu Luo
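    The load-adaptive parameter adjustment can be illustrated with a small sketch; the thresholds, the load probe, and the parameter names (voxel_size_m, max_depth_m) are assumptions for illustration only.

        import os

        def reconstruction_params(cpu_load=None):
            # Rough utilisation estimate; any platform-specific load probe could be used.
            if cpu_load is None:
                cpu_load = os.getloadavg()[0] / os.cpu_count()
            if cpu_load > 0.8:
                return {"voxel_size_m": 0.04, "max_depth_m": 3.0}   # coarse volume, cheap
            if cpu_load > 0.5:
                return {"voxel_size_m": 0.02, "max_depth_m": 4.0}
            return {"voxel_size_m": 0.01, "max_depth_m": 5.0}       # fine volume, expensive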
  • Patent number: 11688073
    Abstract: A method includes accessing image data and depth data corresponding to image frames to be displayed on an extended reality (XR) display device, and determining sets of feature points corresponding to the image frames based on a multi-layer sampling of the image data and the depth data. The method further includes generating a set of sparse feature points based on an integration of the sets of feature points. The set of sparse feature points are determined based on relative changes in depth data with respect to the sets of feature points. The method further includes generating a set of sparse depth points based on the set of sparse feature points and the depth data and sending the set of sparse depth points to the XR display device for reconstruction of a dense depth map corresponding to the image frames utilizing the set of sparse depth points.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: June 27, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Yingen Xiong
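    One plausible reading of the multi-layer sampling and sparse-point selection is to sample the depth map at several grid resolutions and keep points where the relative depth change is large; the sketch below assumes a single aligned depth map, and the strides and threshold are illustrative.

        import numpy as np

        def sparse_depth_points(depth, strides=(32, 16, 8), rel_thresh=0.05):
            h, w = depth.shape
            gy, gx = np.gradient(depth)
            rel_change = np.hypot(gx, gy) / (depth + 1e-6)   # relative change in depth
            points = []
            for s in strides:                                # multi-layer sampling
                ys, xs = np.mgrid[s // 2:h:s, s // 2:w:s]
                for y, x in zip(ys.ravel(), xs.ravel()):
                    # Keep every point of the coarsest layer, plus finer points near depth changes.
                    if s == strides[0] or rel_change[y, x] > rel_thresh:
                        points.append((x, y, float(depth[y, x])))
            return np.array(points)                          # (x, y, depth) triples to send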
  • Patent number: 11670063
    Abstract: An electronic device that reprojects two-dimensional (2D) images to three-dimensional (3D) images includes a memory configured to store instructions, and a processor configured to execute the instructions to: propagate an intensity for at least one pixel of an image based on a depth guide of neighboring pixels of the at least one pixel, wherein the at least one pixel is considered a hole during 2D to 3D image reprojection; propagate a color for the at least one pixel based on an intensity guide of the neighboring pixels of the at least one pixel; and compute at least one weight for the at least one pixel based on the intensity and color propagation.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: June 6, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yingen Xiong, Christopher A. Peri
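    A minimal sketch of the two propagation steps for a single hole pixel, with intensity weighted by a depth guide and colour weighted by an intensity guide; the neighbourhood radius and Gaussian weighting are assumptions rather than the patented scheme.

        import numpy as np

        def fill_hole(y, x, intensity, color, depth, valid, radius=3, sigma=0.1):
            ys = slice(max(0, y - radius), y + radius + 1)
            xs = slice(max(0, x - radius), x + radius + 1)
            v = valid[ys, xs]
            if not v.any():
                return intensity[y, x], color[y, x]
            d = depth[ys, xs][v]
            w_depth = np.exp(-((d - np.median(d)) ** 2) / (2 * sigma ** 2)) + 1e-12   # depth guide
            new_intensity = np.average(intensity[ys, xs][v], weights=w_depth)
            i = intensity[ys, xs][v]
            w_int = np.exp(-((i - new_intensity) ** 2) / (2 * sigma ** 2)) + 1e-12    # intensity guide
            new_color = np.average(color[ys, xs][v], axis=0, weights=w_int)
            return new_intensity, new_color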
  • Publication number: 20230137199
    Abstract: A system and method for display distortion calibration are configured to capture distortion with image patterns and to calibrate distortion with ray tracing for an optical pipeline with lenses. The system includes an image sensor and a processor to perform the method for display distortion calibration. The method includes generating an image pattern to encode display image pixels by encoding display distortion associated with a plurality of image patterns. The method also includes determining a distortion of the image patterns resulting from a lens on a head-mounted display (HMD) and decoding the distorted image patterns to obtain the distortion of pixels on a display. A lookup table of the angular distortion of all pixels on the display is created. The method further includes providing a compensation factor for the distortion by creating a distortion correction based on the lookup table of angular distortion.
    Type: Application
    Filed: March 16, 2022
    Publication date: May 4, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
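    Assuming the pattern-decoding step has already produced matched pairs of display pixels and measured view angles, the lookup table and a simple pre-compensation could look like the sketch below; nearest-neighbour inversion is used here in place of the ray-traced correction described in the abstract, and the function names are illustrative.

        import numpy as np

        def build_distortion_lut(display_xy, observed_angles, shape):
            """display_xy: (N, 2) integer pixel coords; observed_angles: (N, 2) measured angles."""
            lut = np.zeros(shape + (2,))
            lut[display_xy[:, 1], display_xy[:, 0]] = observed_angles
            return lut                                     # angular distortion per display pixel

        def precompensate(target_angles, lut):
            """For each desired view angle, pick the display pixel whose measured angle is closest."""
            h, w, _ = lut.shape
            flat = lut.reshape(-1, 2)
            dists = np.linalg.norm(flat[None, :, :] - target_angles[:, None, :], axis=2)
            idx = np.argmin(dists, axis=1)
            return np.stack([idx % w, idx // w], axis=1)   # compensated pixel coordinates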
  • Publication number: 20230140170
    Abstract: A method includes obtaining first and second image data of a real-world scene, performing feature extraction to obtain first and second feature maps, and performing pose tracking based on at least one of the first image data, the second image data, and pose data to obtain a 6DOF pose of an apparatus. The method also includes generating, based on the 6DOF pose, the first feature map, and the second feature map, a disparity map between the image data and generating an initial depth map based on the disparity map. The method further includes generating a dense depth map based on the initial depth map and a camera model and generating, based on the dense depth map, a three-dimensional reconstruction of at least part of the scene. In addition, the method includes rendering an AR or XR display that includes one or more virtual objects positioned to contact one or more surfaces of the reconstruction.
    Type: Application
    Filed: July 6, 2022
    Publication date: May 4, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
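    The disparity-to-depth and back-projection steps are standard stereo geometry and can be sketched as below, assuming a rectified pair with known focal length (in pixels) and baseline (in metres); the feature extraction, pose tracking, and surface reconstruction stages are not shown.

        import numpy as np

        def depth_from_disparity(disparity, focal_px, baseline_m, min_disp=0.1):
            """Initial depth map: Z = f * B / d, with tiny disparities masked out as NaN."""
            disp = np.where(disparity > min_disp, disparity, np.nan)
            return focal_px * baseline_m / disp

        def back_project(depth, K):
            """Dense 3D points from a depth map and pinhole intrinsics K (the camera model)."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
            rays = np.linalg.inv(K) @ pix
            return (rays * depth.reshape(1, -1)).T.reshape(h, w, 3)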
  • Patent number: 11615594
    Abstract: A method by an extended reality (XR) display device includes accessing image data and sparse depth points corresponding to a plurality of image frames to be displayed on one or more displays of the XR display device. The method further includes determining a plurality of sets of feature points for a current image frame of the plurality of image frames, constructing a cost function configured to propagate the sparse depth points corresponding to the current image frame based on the plurality of sets of feature points, and generating a dense depth map corresponding to the current image frame based on an evaluation of the cost function. The method thus includes rendering the current image frame on the one or more displays of the XR display device based on the dense depth map.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: March 28, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
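    The cost-function propagation can be illustrated with a data term that pins the sparse depths and an image-guided smoothness term, minimised here with a few Jacobi-style iterations; the weights, lambda, and iteration count are illustrative, and the patent's actual cost function is not reproduced.

        import numpy as np

        def densify(sparse_depth, sparse_mask, gray, iters=200, lam=10.0, sigma=0.05):
            depth = np.where(sparse_mask, sparse_depth, sparse_depth[sparse_mask].mean())
            for _ in range(iters):
                acc = np.zeros_like(depth)
                wsum = np.zeros_like(depth)
                for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                    nb = np.roll(depth, (dy, dx), axis=(0, 1))
                    g = np.roll(gray, (dy, dx), axis=(0, 1))
                    w = np.exp(-((gray - g) ** 2) / (2 * sigma ** 2)) + 1e-12   # image-guided weight
                    acc += w * nb
                    wsum += w
                smooth = acc / wsum
                # Data term: pull the solution back toward the measured sparse depths.
                depth = np.where(sparse_mask, (lam * sparse_depth + smooth) / (lam + 1.0), smooth)
            return depth                                   # dense depth map for rendering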
  • Publication number: 20230088963
    Abstract: A system and method are provided for 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions and generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. The method further includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
    Type: Application
    Filed: March 16, 2022
    Publication date: March 23, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
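    A minimal sketch of the plane/surface split, assuming the scene has already been parsed into candidate regions of 3D points: regions that fit a plane tightly are reconstructed as planes, and the rest are left for free-form surface reconstruction (meshing not shown); the RMS threshold is an assumption.

        import numpy as np

        def fit_plane(points):
            """Least-squares plane through an (N, 3) point set; returns (normal, centroid, rms)."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            normal = vt[-1]
            rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
            return normal, centroid, rms

        def split_regions(regions, plane_rms_thresh=0.01):
            planes, surfaces = [], []
            for pts in regions:
                normal, centroid, rms = fit_plane(pts)
                (planes if rms < plane_rms_thresh else surfaces).append((pts, normal, centroid))
            return planes, surfaces      # both sets are integrated into the final 3D scene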
  • Publication number: 20230092248
    Abstract: A method includes obtaining, from an image sensor, image data of a real-world scene; obtaining, from a depth sensor, sparse depth data of the real-world scene; and passing the image data to a first neural network to obtain one or more object regions of interest (ROIs) and one or more feature map ROIs. Each object ROI includes at least one detected object. The method also includes passing the image data and sparse depth data to a second neural network to obtain one or more dense depth map ROIs; aligning the one or more object ROIs, one or more feature map ROIs, and one or more dense depth map ROIs; and passing the aligned ROIs to a fully convolutional network to obtain a segmentation of the real-world scene. The segmentation contains one or more pixelwise predictions of one or more objects in the real-world scene.
    Type: Application
    Filed: June 7, 2022
    Publication date: March 23, 2023
    Inventors: Yingen Xiong, Christopher A. Peri
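    The ROI-alignment step can be sketched as cropping each map to a detected box and resampling the crops to one common grid so that image, feature, and dense-depth ROIs can be stacked for the fully convolutional network; the two networks themselves are assumed to exist elsewhere, and the output size is illustrative.

        import numpy as np

        def align_rois(maps, box, out_size=(56, 56)):
            """maps: list of (H, W) or (H, W, C) arrays; box: (x0, y0, x1, y1) in pixels."""
            x0, y0, x1, y1 = box
            aligned = []
            for m in maps:
                crop = m[y0:y1, x0:x1]
                ys = np.linspace(0, crop.shape[0] - 1, out_size[0]).round().astype(int)
                xs = np.linspace(0, crop.shape[1] - 1, out_size[1]).round().astype(int)
                resized = crop[np.ix_(ys, xs)]            # nearest-neighbour resample
                aligned.append(resized if resized.ndim == 3 else resized[..., None])
            return np.concatenate(aligned, axis=-1)       # stacked per-ROI input for the FCN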
  • Publication number: 20220406013
    Abstract: Generating a 3D scene reconstruction using depth fusion can include creating a high-resolution sparse depth map by mapping sensor depths from a low-resolution depth map to points corresponding to pixels of a high-resolution color image of a scene. The high-resolution sparse depth map can have the same resolution as the high-resolution color image. A fused sparse depth map can be produced by combining the high-resolution sparse depth map with sparse depths reconstructed from the high-resolution color image. A high-resolution dense depth map can then be generated based on the fused sparse depths of the fused sparse depth map.
    Type: Application
    Filed: October 13, 2021
    Publication date: December 22, 2022
    Inventors: Yingen Xiong, Christopher A. Peri
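    The fusion of sensor depths with image-reconstructed sparse depths can be sketched as below, assuming the low-resolution depth map is already registered to the high-resolution colour image and that sensor values take priority where both sources overlap; the densification stage that follows is not shown.

        import numpy as np

        def fuse_sparse_depth(low_depth, hi_shape, image_sparse, image_mask):
            """low_depth: (h, w) sensor depths; hi_shape: (H, W); image_sparse/image_mask: (H, W)."""
            h, w = low_depth.shape
            H, W = hi_shape
            hi_sparse = np.zeros(hi_shape)
            hi_mask = np.zeros(hi_shape, dtype=bool)
            ys = np.arange(h) * H // h                    # high-resolution pixel that each
            xs = np.arange(w) * W // w                    # sensor sample maps to
            hi_sparse[np.ix_(ys, xs)] = low_depth
            hi_mask[np.ix_(ys, xs)] = True
            fused = np.where(hi_mask, hi_sparse, np.where(image_mask, image_sparse, 0.0))
            return fused, hi_mask | image_mask            # input to dense depth map generation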
  • Patent number: 11481871
    Abstract: Updating an image during real-time rendering of images by a display device can include determining a depth for each pixel of a color frame received from a source device and corresponding to the image. Each pixel's depth is determined by image-guided propagation of depths of sparse points extracted from a depth map generated at the source device. With respect to pixels corresponding to an extracted sparse depth point, image-guided depth propagation can include retaining the depth of the corresponding sparse depth point unchanged from the source depth map. With respect to each pixel corresponding to a non-sparse depth point, image-guided depth propagation can include propagating to the corresponding non-sparse depth point a depth of a sparse depth point lying within a neighborhood of the non-sparse depth point. Pixel coordinates of the color frame can be transformed for generating a space-warped rendering of the image.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: October 25, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yingen Xiong, Christopher A. Peri
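    The per-pixel propagation rule can be sketched directly: pixels carrying a sparse depth keep it unchanged, and every other pixel takes an image-weighted average of sparse depths inside a small neighbourhood; the window radius and Gaussian guidance weight are assumptions, and the (slow) double loop is for clarity only.

        import numpy as np

        def propagate_depth(sparse_depth, sparse_mask, gray, radius=8, sigma=0.05):
            h, w = gray.shape
            out = sparse_depth.copy()
            sy, sx = np.nonzero(sparse_mask)
            for y in range(h):
                for x in range(w):
                    if sparse_mask[y, x]:
                        continue                          # sparse depth points stay unchanged
                    near = (np.abs(sy - y) <= radius) & (np.abs(sx - x) <= radius)
                    if not near.any():
                        continue
                    ny, nx = sy[near], sx[near]
                    wgt = np.exp(-((gray[ny, nx] - gray[y, x]) ** 2) / (2 * sigma ** 2)) + 1e-12
                    out[y, x] = np.average(sparse_depth[ny, nx], weights=wgt)
            return out                                    # per-pixel depth before space warping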
  • Patent number: 11468587
    Abstract: A method for reconstructing a downsampled depth map includes receiving, at an electronic device, image data to be presented on a display of the electronic device at a first resolution, wherein the image data includes a color image and the downsampled depth map associated with the color image. The method further includes generating a high resolution depth map by calculating, for each point making up the first resolution, a depth value based on a normalized pose difference across a neighborhood of points for the point, a normalized color texture difference across the neighborhood of points for the point, and a normalized spatial difference across the neighborhood of points. Still further, the method includes outputting, on the display, a reprojected image at the first resolution based on the color image and the high resolution depth map. The downsampled depth map is at a resolution less than the first resolution.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: October 11, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Christopher A. Peri, Yingen Xiong
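    The reconstruction of a high-resolution depth map from a downsampled one can be illustrated in the spirit of joint bilateral upsampling, weighting low-resolution samples by normalised colour and spatial differences; the pose-difference term from the abstract is omitted here, and the sigmas and radius are illustrative.

        import numpy as np

        def upsample_depth(low_depth, color_hi, radius=2, sigma_c=0.1, sigma_s=2.0):
            h, w = low_depth.shape
            H, W, _ = color_hi.shape
            out = np.zeros((H, W))
            for Y in range(H):
                for X in range(W):
                    cy, cx = Y * h // H, X * w // W       # centre sample in the low-res map
                    ys = np.clip(np.arange(cy - radius, cy + radius + 1), 0, h - 1)
                    xs = np.clip(np.arange(cx - radius, cx + radius + 1), 0, w - 1)
                    d = low_depth[np.ix_(ys, xs)]
                    c = color_hi[np.minimum(ys * H // h, H - 1)][:, np.minimum(xs * W // w, W - 1)]
                    dc = np.linalg.norm(c - color_hi[Y, X], axis=-1)    # colour texture difference
                    ds = np.hypot(ys[:, None] - cy, xs[None, :] - cx)   # spatial difference
                    wgt = np.exp(-dc ** 2 / (2 * sigma_c ** 2)) * np.exp(-ds ** 2 / (2 * sigma_s ** 2)) + 1e-12
                    out[Y, X] = np.sum(wgt * d) / np.sum(wgt)
            return out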
  • Publication number: 20220292631
    Abstract: Updating an image during real-time rendering of images by a display device can include determining a depth for each pixel of a color frame received from a source device and corresponding to the image. Each pixel's depth is determined by image-guided propagation of depths of sparse points extracted from a depth map generated at the source device. With respect to pixels corresponding to an extracted sparse depth point, image-guided depth propagation can include retaining the depth of the corresponding sparse depth point unchanged from the source depth map. With respect to each pixel corresponding to a non-sparse depth point, image-guided depth propagation can include propagating to the corresponding non-sparse depth point a depth of a sparse depth point lying within a neighborhood of the non-sparse depth point. Pixel coordinates of the color frame can be transformed for generating a space-warped rendering of the image.
    Type: Application
    Filed: August 13, 2021
    Publication date: September 15, 2022
    Inventors: Yingen Xiong, Christopher A. Peri
  • Publication number: 20220270374
    Abstract: Images are obtained using cameras mounted on a vehicle, and at least a portion of the obtained images are displayed on a screen. Motion of the vehicle can be controlled such that it moves toward a physical destination selected from the displayed images.
    Type: Application
    Filed: May 11, 2022
    Publication date: August 25, 2022
    Inventors: Jared A. Crawford, Yingen Xiong, Marco Pontil
  • Publication number: 20220230397
    Abstract: A method by an extended reality (XR) display device includes accessing image data and sparse depth points corresponding to a plurality of image frames to be displayed on one or more displays of the XR display device. The method further includes determining a plurality of sets of feature points for a current image frame of the plurality of image frames, constructing a cost function configured to propagate the sparse depth points corresponding to the current image frame based on the plurality of sets of feature points, and generating a dense depth map corresponding to the current image frame based on an evaluation of the cost function. The method thus includes rendering the current image frame on the one or more displays of the XR display device based on the dense depth map.
    Type: Application
    Filed: June 3, 2021
    Publication date: July 21, 2022
    Inventors: Yingen Xiong, Christopher A. Peri
  • Publication number: 20220230398
    Abstract: A method includes obtaining scene data, wherein the scene data includes image data of a scene and depth data of the scene, and the depth data includes depth measurement values of points of a point cloud. The method further includes defining a first detection area, wherein the first detection area includes a spatially defined subset of the scene data, defining a plane model based on points of the point cloud within the first detection area, and defining a plane based on the plane model. The method includes determining at least one value of a usable size of the plane based on points of the point cloud, comparing at least one value of a characteristic size of a digital object to the at least one value of the usable size of the plane, and generating a display including the digital object positioned upon the plane based on the plane model.
    Type: Application
    Filed: June 11, 2021
    Publication date: July 21, 2022
    Inventors: Christopher A. Peri, Yingen Xiong, Haihui Guan
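    The plane-model fit, usable-size measurement, and size comparison can be sketched as below, assuming the detection area has already been reduced to a subset of point-cloud points; the inlier threshold and the footprint comparison are illustrative.

        import numpy as np

        def plane_fits_object(points, object_size_xy, inlier_thresh=0.02):
            """points: (N, 3) points inside the detection area; object_size_xy: (width, depth) in metres."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            u, v, normal = vt[0], vt[1], vt[2]            # plane model: two in-plane axes + normal
            dist = np.abs((points - centroid) @ normal)
            inliers = points[dist < inlier_thresh]
            coords = np.stack([(inliers - centroid) @ u, (inliers - centroid) @ v], axis=1)
            usable = coords.max(axis=0) - coords.min(axis=0)   # usable size of the plane
            fits = bool(np.all(np.sort(usable) >= np.sort(np.asarray(object_size_xy, dtype=float))))
            return fits, (normal, centroid), usable       # place the digital object only if it fits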
  • Publication number: 20220207665
    Abstract: In one embodiment, a method includes accessing a plurality of image frames captured by one or more cameras, classifying one or more first objects detected in one or more first image frames of the plurality of image frames as undesirable, applying a pixel filtering to the one or more first image frames to replace one or more first pixel sets associated with the one or more first objects with pixels from one or more second image frames of the plurality of image frames to generate a final image frame, and providing the final image frame for display.
    Type: Application
    Filed: March 17, 2022
    Publication date: June 30, 2022
    Inventors: Christopher A. Peri, Euisuk Chung, Yingen Xiong, Lu Luo
  • Publication number: 20220165041
    Abstract: An electronic device that reprojects two-dimensional (2D) images to three-dimensional (3D) images includes a memory configured to store instructions, and a processor configured to execute the instructions to: propagate an intensity for at least one pixel of an image based on a depth guide of neighboring pixels of the at least one pixel, wherein the at least one pixel is considered a hole during 2D to 3D image reprojection; propagate a color for the at least one pixel based on an intensity guide of the neighboring pixels of the at least one pixel; and compute at least one weight for the at least one pixel based on the intensity and color propagation.
    Type: Application
    Filed: August 31, 2021
    Publication date: May 26, 2022
    Inventors: Yingen Xiong, Christopher A. Peri
  • Patent number: 11341752
    Abstract: Images are obtained using cameras mounted on a vehicle, and at least a portion of the obtained images are displayed on a screen. Motion of the vehicle can be controlled such that it moves toward a physical destination selected from the displayed images.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: May 24, 2022
    Assignee: Apple Inc.
    Inventors: Jared A. Crawford, Yingen Xiong, Marco Pontil