Abstract: A system and method for processing volumetric video is disclosed. The process includes receiving three-dimensional mesh data and texture data defining three-dimensional and textural characteristics of a physical location captured as volumetric video, dividing the three-dimensional mesh data into sub-parts defining a mesh tile, each mesh tile making up only a portion of the three-dimensional mesh data, and identifying a sub-part of the texture data defining a texture tile, the texture tile corresponding to each of the sub-parts of the mesh tiles and including only texture data relevant to an associated mesh tile. The volumetric video may be encoded by encoding each mesh tile independent of the three-dimensional mesh data as a mesh tile video and each texture tile independent of the texture data as a texture tile video.
Type:
Grant
Filed:
June 25, 2020
Date of Patent:
September 14, 2021
Assignee:
HypeVR
Inventors:
Caoyang Jiang, Xiaolin Liu, Jason Juang, Qi Yao, Anthony Tran
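The tiling idea above — spatially partitioning the mesh and pairing each mesh tile with only the texture data its triangles reference — can be sketched as follows. This is an illustrative outline, not the patented implementation; the function name, the centroid-based grid assignment, and the texture-region ids are all assumptions.

```python
# Sketch: divide mesh triangles into spatial tiles, each paired with the
# set of texture regions its triangles reference, so every tile can later
# be encoded independently of the full mesh. Names are hypothetical.
from collections import defaultdict

def tile_mesh(triangles, grid_size):
    """Group triangles into tiles keyed by the grid cell of their centroid.

    triangles: list of (vertex_triple, texture_region_id) pairs, where
               vertex_triple is three (x, y, z) tuples.
    Returns {tile_key: {"mesh": [triangles], "texture_regions": set(ids)}}.
    """
    tiles = defaultdict(lambda: {"mesh": [], "texture_regions": set()})
    for verts, tex_region in triangles:
        # Assign the triangle to the grid cell containing its centroid.
        cx = sum(v[0] for v in verts) / 3.0
        cy = sum(v[1] for v in verts) / 3.0
        cz = sum(v[2] for v in verts) / 3.0
        key = (int(cx // grid_size), int(cy // grid_size), int(cz // grid_size))
        tiles[key]["mesh"].append(verts)
        tiles[key]["texture_regions"].add(tex_region)
    return dict(tiles)
```

Each entry of the returned dictionary holds only a portion of the mesh plus the texture regions relevant to it, mirroring the abstract's mesh-tile/texture-tile pairing.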
Abstract: A method for compressing geometric data and video is disclosed. The method includes receiving video and associated geometric data for a physical location, generating a background video from the video, and generating background geometric data for the geometric data outside of a predetermined distance from a capture point for the video as a skybox sphere at a non-parallax distance. The method further includes generating a geometric shape for a first detected object within the predetermined distance from the capture point from the geometric data, generating shape textures for the geometric shape from the video, and encoding the background video and shape textures as compressed video along with the geometric shape and the background geometric data as encoded volumetric video.
Type:
Grant
Filed:
April 24, 2017
Date of Patent:
June 1, 2021
Assignee:
HypeVR
Inventors:
Jason Juang, Anthony Tran, Jiang Lan, Caoyang Jiang
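The core split described above — geometry within a predetermined distance of the capture point kept as shapes, geometry beyond it collapsed onto a skybox sphere at a fixed non-parallax radius — can be sketched like this. The function name and the point-cloud representation are assumptions for illustration.

```python
# Sketch: partition geometry points by distance from the capture point.
# Near points are kept for per-object shape generation; far points are
# projected onto a skybox sphere of fixed radius, where parallax is
# negligible. Illustrative only; names are hypothetical.
import math

def split_by_distance(points, capture_point, max_dist):
    """Return (near_points, far_points_on_sphere)."""
    near, far = [], []
    for p in points:
        d = math.dist(p, capture_point)
        if d <= max_dist:
            near.append(p)
        else:
            # Project the far point onto the skybox sphere of radius max_dist.
            scale = max_dist / d
            far.append(tuple(capture_point[i] + (p[i] - capture_point[i]) * scale
                             for i in range(3)))
    return near, far
```

Collapsing distant geometry to a single sphere is what allows the background to be stored as ordinary compressed video rather than full per-frame geometry.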
Abstract: A system for calculating a consensus foreground object relative to a background within a series of video frames is disclosed. The system relies upon a multi-prong approach that takes a consensus selection for the foreground object based upon at least three different models, then outputs the foreground object for application of alpha matting for use in augmented reality or virtual reality filmmaking.
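The consensus selection across at least three models can be sketched as a per-pixel majority vote over binary foreground masks. This is a minimal stand-in for the patent's multi-prong approach; the function name and mask representation are assumptions.

```python
# Sketch: per-pixel majority vote across three or more binary foreground
# masks (one per segmentation model). A pixel is foreground in the
# consensus only if more than half the models agree. Names hypothetical.

def consensus_mask(masks):
    """masks: list of equal-sized 2D lists of 0/1. Returns the consensus mask."""
    assert len(masks) >= 3, "the approach relies on at least three models"
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(m[y][x] for m in masks)
            out[y][x] = 1 if votes * 2 > len(masks) else 0
    return out
```

The resulting consensus mask would then seed alpha matting to recover soft edges for compositing.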
Abstract: There is disclosed a system and method for encoding and decoding a geometry sequence. The method includes performing intraframe and interframe comparisons of geometry within the geometry sequence, selecting one or more faces as index faces, and encoding only the index faces and the differences relative to those index faces as a bit stream for transmission. The method further includes enabling decoding of the faces based upon the prediction type and encoding method selected during the encoding process.
Type:
Grant
Filed:
October 5, 2017
Date of Patent:
June 23, 2020
Assignee:
HypeVR
Inventors:
Caoyang Jiang, Jason Juang, Anthony Tran, Jiang Lan
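The index-face idea — encode a reference face fully and the remaining faces only as differences from it — can be illustrated with a toy delta coder. This stands in for the patent's intra/inter prediction; the flat-tuple face representation and function names are assumptions.

```python
# Sketch: encode one face fully as the index face and the rest as
# per-coordinate deltas against it; decoding reverses the deltas.
# A toy stand-in for the patent's prediction-based face coding.

def encode_faces(faces):
    """faces: list of equal-length tuples of coordinates.
    Returns (index_face, list_of_delta_tuples)."""
    index = faces[0]
    deltas = [tuple(c - ic for c, ic in zip(f, index)) for f in faces[1:]]
    return index, deltas

def decode_faces(index, deltas):
    """Reconstruct the full face list from the index face and deltas."""
    return [index] + [tuple(ic + d for ic, d in zip(index, dl)) for dl in deltas]
```

When neighboring faces are similar, the deltas are small and compress far better than the raw coordinates would.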
Abstract: There is disclosed a system and method for streaming of volumetric three-dimensional video content. The system includes a separate rendering server and display device such that the rendering server receives pose and motion data from the display device and generates completed frames of video for the display device. The frames of video are transmitted to the display device for display. Predictive algorithms enable the rendering server to predict display device pose from frame-to-frame to thereby reduce overall latency in communications between the rendering server and display device.
Type:
Grant
Filed:
September 28, 2017
Date of Patent:
November 5, 2019
Assignee:
HypeVR
Inventors:
Caoyang Jiang, Jiang Lan, Jason Juang, Anthony Tran
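A simple form of the pose prediction described above is linear extrapolation from the last two reported poses, letting the server render a frame for where the device will be rather than where it was. The abstract does not specify the predictor; this linear model and the function name are assumptions.

```python
# Sketch: linearly extrapolate the display device's next pose from its
# two most recent samples, so the rendering server can produce a frame
# ahead of the round-trip delay. Illustrative; names hypothetical.

def predict_pose(prev_pose, curr_pose, dt_ahead, dt_between):
    """Extrapolate pose dt_ahead seconds past curr_pose, given that
    prev_pose and curr_pose were sampled dt_between seconds apart."""
    rate = dt_ahead / dt_between
    return tuple(c + (c - p) * rate for p, c in zip(prev_pose, curr_pose))
```

Real systems would likely predict orientation separately (e.g., on quaternions) and blend in motion-sensor data, but the latency-hiding principle is the same.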
Abstract: A system for capturing live-action three-dimensional video is disclosed. The system includes pairs of stereo cameras and a LIDAR for generating stereo images and three-dimensional LIDAR data from which three-dimensional data may be derived. A depth-from-stereo algorithm may be used to generate the three-dimensional camera data for the three-dimensional space from the stereo images, and this camera data may be combined with the three-dimensional LIDAR data, with the LIDAR data taking precedence over the camera data, to thereby generate three-dimensional data corresponding to the three-dimensional space.
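The combination rule stated above — LIDAR data takes precedence over depth-from-stereo where both exist — can be sketched as a per-pixel fusion of two depth maps. The map representation (None marking pixels the sparse LIDAR did not hit) and the function name are assumptions for illustration.

```python
# Sketch: fuse a dense depth-from-stereo map with a sparse LIDAR depth
# map, preferring the LIDAR measurement wherever one exists and falling
# back to stereo depth elsewhere. Illustrative; names hypothetical.

def fuse_depth(stereo_depth, lidar_depth):
    """Both arguments are equal-sized 2D lists; lidar_depth uses None
    for pixels without a LIDAR return. Returns the fused depth map."""
    return [[l if l is not None else s
             for s, l in zip(srow, lrow)]
            for srow, lrow in zip(stereo_depth, lidar_depth)]
```

Precedence for LIDAR reflects that its range measurements are typically more accurate than stereo matching, while stereo fills in the regions the sparse LIDAR scan misses.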