Signal Formatting Patents (Class 348/43)
  • Patent number: 10838206
    Abstract: A Head-Mounted Display system together with associated techniques for performing accurate and automatic inside-out positional, user body and environment tracking for virtual or mixed reality are disclosed. The system uses computer vision methods and data fusion from multiple sensors to achieve real-time tracking. High frame rate and low latency is achieved by performing part of the processing on the HMD itself.
    Type: Grant
    Filed: February 20, 2017
    Date of Patent: November 17, 2020
    Assignee: Apple Inc.
    Inventors: Simon Fortin-Deschênes, Vincent Chapdelaine-Couture, Yan Côté, Anthony Ghannoum
  • Patent number: 10827198
    Abstract: A motion estimation method and apparatus, and a non-transitory computer-readable storage medium are provided. In the method, for a prediction unit (PU) in a to-be-coded image, a candidate motion vector (MV) list for the PU is constructed based on advanced motion vector prediction (AMVP). A rate distortion (RD) cost value of each MV in the candidate MV list is calculated. A target MV of the AMVP corresponding to the smallest RD cost value of the AMVP is obtained. Integer motion estimation (IME) is performed on the PU based on a mapping point of the target MV of the AMVP in a reference frame. A target MV of the IME is obtained. The target MV of the IME is converted to quarter pixel precision, to obtain a reference target MV for quarter motion estimation (QME). Further, a final result of the motion estimation process is determined.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: November 3, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Hongshun Zhang
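The entry above describes selecting a target MV by minimum rate-distortion cost, then handing an integer-precision result off to quarter-pel refinement. A minimal sketch of those two steps follows; the RD model (J = D + λ·R), the cost callbacks, and the λ value are illustrative assumptions, not the patented method.

```python
def select_amvp_target(candidates, sad_cost, bit_cost, lam=0.85):
    """Return the candidate MV minimizing an assumed RD cost J = D + lambda * R."""
    best_mv, best_cost = None, float("inf")
    for mv in candidates:
        j = sad_cost(mv) + lam * bit_cost(mv)
        if j < best_cost:
            best_mv, best_cost = mv, j
    return best_mv

def to_quarter_pel(mv):
    """Convert an integer-pel MV to quarter-pel units (the IME -> QME hand-off)."""
    return (mv[0] * 4, mv[1] * 4)
```

For example, with a distortion function favoring the zero vector, `select_amvp_target` returns `(0, 0)` and `to_quarter_pel((2, -1))` yields `(8, -4)`.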
  • Patent number: 10827194
    Abstract: A picture prediction method and a related apparatus are disclosed. The picture prediction method includes: determining motion vector predictors of K pixel samples in a current picture block, where K is an integer greater than 1, the K pixel samples include a first vertex angle pixel sample in the current picture block, a motion vector predictor of the first vertex angle pixel sample is obtained based on a motion vector of a preset first spatially adjacent picture block of the current picture block, and the first spatially adjacent picture block is spatially adjacent to the first vertex angle pixel sample; and performing, based on a non-translational motion model and the motion vector predictors of the K pixel samples, pixel value prediction on the current picture block. Solutions in the embodiments of the present application are helpful in reducing calculation complexity of picture prediction based on a non-translational motion model.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: November 3, 2020
    Assignees: Huawei Technologies Co., Ltd., University of Science and Technology of China
    Inventors: Li Li, Houqiang Li, Zhuoyi Lv, Sixin Lin
  • Patent number: 10816331
    Abstract: The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. Via the moving light pattern captured over a set of frames, e.g., by a stereo camera system, and estimating light intensity at sub-pixel locations in each stereo frame, higher resolution depth information at a sub-pixel level may be computed than is captured by the native camera resolution.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: October 27, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sing Bing Kang, Shahram Izadi
  • Patent number: 10817989
    Abstract: An artificial intelligence (AI) decoding apparatus includes a memory storing one or more instructions, and a processor configured to execute the stored one or more instructions, to obtain image data corresponding to a first image that is encoded, obtain a second image corresponding to the first image by decoding the obtained image data, determine whether to perform AI up-scaling of the obtained second image, based on the AI up-scaling of the obtained second image being determined to be performed, obtain a third image by performing the AI up-scaling of the obtained second image through an up-scaling deep neural network (DNN), and output the obtained third image, and based on the AI up-scaling of the obtained second image being determined to be not performed, output the obtained second image.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: October 27, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaehwan Kim, Jongseok Lee, Sunyoung Jeon, Kwangpyo Choi, Minseok Choi, Quockhanh Dinh, Youngo Park
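The decode path in the abstract above reduces to a simple branch: decode the second image, then either run it through the up-scaling DNN or output it as-is. A hedged sketch, where `decode`, `should_upscale`, and `upscale_dnn` are placeholder callables standing in for the apparatus's components:

```python
def ai_decode(image_data, decode, should_upscale, upscale_dnn):
    """Decode image data; AI-upscale the result only when the decision says so."""
    second = decode(image_data)
    return upscale_dnn(second) if should_upscale(second) else second
```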
  • Patent number: 10819974
    Abstract: An apparatus may include a control unit to determine whether a display state notification indicating a dimensional display process exists. The control unit may set an output form of display data to be displayed on image data to an output form corresponding to a display state indicated by the display state notification.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: October 27, 2020
    Assignee: Saturn Licensing LLC
    Inventors: Shinsuke Takuma, Michio Miyano
  • Patent number: 10805628
    Abstract: A motion vector decoding method includes: determining a prediction motion vector of a to-be-decoded unit based on a motion vector of a prediction unit of the to-be-decoded unit; when a face image in which the to-be-decoded unit is located and at least one of a face image in which a first reference unit is located and a face image in which the prediction unit is located are not face images in a same orientation, performing a first update on the prediction motion vector, where the first update is used to determine a mapping vector that is of the prediction motion vector and that is in a plane of the face image in which the to-be-decoded unit is located; and obtaining a motion vector of the to-be-decoded unit based on the prediction motion vector obtained after the first update.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: October 13, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Xiang Ma, Haitao Yang
  • Patent number: 10803905
    Abstract: A video processing apparatus, a video processing method thereof, and a non-transitory computer readable medium are provided. In the method, at least two original video files are obtained, where each original video file is recorded in a different shoot direction. The video frames of the original video files are stitched to generate multiple stitched video frames. In response to generating each stitched video frame, the stitched video frame is provided directly for playback, without encoding the stitched video frames into a video file. Accordingly, a real-time and smooth playback effect is achieved.
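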
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: October 13, 2020
    Assignee: Acer Incorporated
    Inventors: Chih-Wen Huang, Chao-Kuang Yang
  • Patent number: 10802594
    Abstract: A remote control system includes an object detection unit, an object determination unit, and a static gesture processing unit. The object detection unit detects an object corresponding to an operator according to a depth image including the operator and a face detection result corresponding to the operator. The object determination unit utilizes a combination of a gesture database, a color image of the object, and a two-dimensional image corresponding to the depth map to determine a gesture formed by the object when the operator moves the object to a predetermined position. The operator moves the object to the predetermined position and, within a first predetermined period of doing so, pulls the object. The static gesture processing unit generates a first control command to control an electronic device according to at least one static gesture determined by the object determination unit.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: October 13, 2020
    Assignee: eYs3D Microelectronics, Co.
    Inventor: Chi-Feng Lee
  • Patent number: 10795022
    Abstract: Machine learning is applied to both 2D images from an infrared imager imaging laser reflections from an object and to the 3D depth map of the object that is generated using the 2D images and time of flight (TOF) information. In this way, the 3D depth map accuracy can be improved without increasing laser power or using high resolution imagers.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: October 6, 2020
    Assignees: SONY CORPORATION, Sony Interactive Entertainment Inc.
    Inventors: Peter Shintani, Morio Usami, Kissei Matsumoto, Kazuyuki Shikama, Bibhudendu Mohapatra, Keith Resch
  • Patent number: 10785482
    Abstract: A method for encoding a picture of a video sequence in a bit stream that constrains tile processing overhead is provided. The method includes computing a maximum tile rate for the video sequence, computing a maximum number of tiles for the picture based on the maximum tile rate, and encoding the picture wherein a number of tiles used to encode the picture is enforced to be no more than the maximum number of tiles.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: September 22, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventor: Minhua Zhou
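The constraint described above amounts to deriving a per-picture tile cap from a maximum tile rate and enforcing it during encoding. A minimal sketch; the exact derivation (tiles per second divided by frame rate, floored) is an illustrative assumption:

```python
def max_tiles_per_picture(max_tile_rate, frame_rate):
    """Cap on tiles per picture, assuming max_tile_rate is in tiles/second."""
    return max(1, max_tile_rate // frame_rate)

def enforce_tile_count(requested_tiles, max_tile_rate, frame_rate):
    """Clamp the encoder's requested tile count to the derived maximum."""
    return min(requested_tiles, max_tiles_per_picture(max_tile_rate, frame_rate))
```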
  • Patent number: 10785497
    Abstract: In accordance with a first aspect, the intra prediction direction of a neighboring, intra-predicted block is used in order to predict the extension direction of the wedgelet separation line of a current block, thereby reducing the side information rate necessitated in order to convey the partitioning information. In accordance with a second aspect, the idea is that previously reconstructed samples, i.e. reconstructed values of blocks preceding the current block in accordance with the coding/decoding order allow for at least a prediction of a correct placement of a starting point of the wedgelet separation line, namely by placing the starting point of the wedgelet separation line at a position of a maximum change between consecutive ones of a sequence of reconstructed values of samples of a line of samples extending adjacent to the current block along a circumference thereof. Both aspects may be used individually or in combination.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: September 22, 2020
    Assignee: GE VIDEO COMPRESSION, LLC
    Inventors: Philipp Merkle, Christian Bartnik, Haricharan Lakshman, Detlev Marpe, Karsten Mueller, Thomas Wiegand, Gerhard Tech
  • Patent number: 10785486
    Abstract: Innovations in unified intra block copy (“BC”) and inter prediction modes are presented. In some example implementations, bitstream syntax, semantics of syntax elements and many coding/decoding processes for inter prediction mode are reused or slightly modified to enable intra BC prediction for blocks of a frame. For example, to provide intra BC prediction for a current block of a current picture, a motion compensation process applies a motion vector that indicates a displacement within the current picture, with the current picture being used as a reference picture for the motion compensation process. With this unification of syntax, semantics and coding/decoding processes, various coding/decoding tools designed for inter prediction mode, such as advanced motion vector prediction, merge mode and skip mode, can also be applied when intra BC prediction is used, which simplifies implementation of intra BC prediction.
    Type: Grant
    Filed: June 19, 2014
    Date of Patent: September 22, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bin Li, Ji-Zheng Xu
  • Patent number: 10778895
    Abstract: A video, such as a spherical video, may include motion due to motion of one or more image capture devices during capture of the video. Motion of the image capture devices during the capture of the video may cause the playback of the video to appear jerky/shaky. The video may be stabilized by using both a horizontal feature and a fixed feature captured within the video.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 15, 2020
    Assignee: GoPro, Inc.
    Inventor: Vincent Garcia
  • Patent number: 10778950
    Abstract: An image stitching method using viewpoint transformation and a system therefor are provided. The method includes obtaining images captured by a plurality of cameras included in the camera system, performing viewpoint transformation for each of the images using a depth map for the images, and stitching the images, the viewpoint transformation of which is performed.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: September 15, 2020
    Assignee: CENTER FOR INTEGRATED SMART SENSORS FOUNDATION
    Inventor: Ki Yeong Park
  • Patent number: 10778993
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to derive a composite track. Three-dimensional video data includes a plurality of two-dimensional sub-picture tracks associated with a viewport. A composite track derivation for composing the plurality of two-dimensional sub-picture tracks for the viewport includes data indicative of the plurality of two-dimensional sub-picture tracks belonging to a same group, placement information to compose sample images from the plurality of two-dimensional tracks into a canvas associated with the viewport, and a composition layout operation to adjust the composition if the canvas comprises a composition layout created by two or more of the plurality of two-dimensional sub-picture tracks composed on the canvas. The composite track derivation can be encoded and/or used to decode the three-dimensional video data.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: September 15, 2020
    Assignee: MediaTek Inc.
    Inventors: Xin Wang, Lulin Chen, Shuai Zhao
  • Patent number: 10767975
    Abstract: Examples of the present disclosure describe systems and methods for capturing data to acquire indoor and outdoor geometry. In aspects, a data capture system may be configured to acquire texture data, geometry data, navigation data and/or orientation data to support geolocation and georeferencing within indoor and outdoor environments. The data capture system may further be configured to acquire seamless texture data from a 360° horizontal and vertical perspective to support panoramic video and images.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: September 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zanin Cosic, Hannes Hegenbarth, Martin Ponticelli, Gerald Schweighofer, Mario Sormann
  • Patent number: 10769805
    Abstract: A method, an image processing device, and a system for generating a depth map are proposed. The method includes the following steps. A first original image and a second original image are obtained, and first edge blocks corresponding to the first original image and second edge blocks corresponding to the second original image are obtained. Depth information of edge blocks is generated according to the first edge blocks and the second edge blocks, and depth information of non-edge blocks is set according to the depth information of the edge blocks. The depth map is generated by using the depth information of the edge blocks and the depth information of the non-edge blocks.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: September 8, 2020
    Assignee: Wistron Corporation
    Inventor: Li-Jen Chen
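The abstract above estimates depth only at edge blocks and then sets non-edge depth from the edge results. One simple way to realize the propagation step, sketched per row with nearest-known-value filling; this fill strategy is an assumption, not the patented assignment rule:

```python
def fill_depth(row_depths):
    """row_depths: one row of per-block depths; None marks non-edge blocks.
    Fill each None from the nearest preceding edge depth, then back-fill
    any leading Nones from the first known depth."""
    out = list(row_depths)
    last = None
    for i, d in enumerate(out):
        if d is not None:
            last = d
        elif last is not None:
            out[i] = last
    for i in range(len(out) - 1, -1, -1):
        if out[i] is not None:
            last = out[i]
        else:
            out[i] = last
    return out
```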
  • Patent number: 10771793
    Abstract: The way of predicting a current block by assigning constant partition values to the partitions of a bi-partitioning of a block is quite effective, especially in case of coding sample arrays such as depth/disparity maps where the content of these sample arrays is mostly composed of plateaus or simple connected regions of similar value separated from each other by steep edges. The transmission of such constant partition values would, however, still need a considerable amount of side information which should be avoided. This side information rate may be further reduced if mean values of values of neighboring samples associated or adjoining the respective partitions are used as predictors for the constant partition values.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: September 8, 2020
    Assignee: GE VIDEO COMPRESSION, LLC
    Inventors: Philipp Merkle, Christian Bartnik, Haricharan Lakshman, Detlev Marpe, Karsten Mueller, Thomas Wiegand, Gerhard Tech
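The predictor described above can be sketched directly: for each of the two partitions, average the already-reconstructed neighbor samples that adjoin it. The flat neighbor-to-partition labeling used here is a simplifying assumption about how adjacency is determined:

```python
def predict_constant_values(neighbor_samples, neighbor_partition):
    """neighbor_samples: reconstructed values along the block's top/left edge;
    neighbor_partition: 0/1 label per neighbor saying which partition it adjoins.
    Returns the mean-value predictor for each partition's constant value."""
    sums, counts = [0, 0], [0, 0]
    for v, p in zip(neighbor_samples, neighbor_partition):
        sums[p] += v
        counts[p] += 1
    return [sums[p] / counts[p] if counts[p] else 0 for p in (0, 1)]
```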
  • Patent number: 10771794
    Abstract: Although wedgelet-based partitioning seems to represent a better tradeoff between side information rate on the one hand and achievable variety in partitioning possibilities on the other hand, compared to contour partitioning, the ability to alleviate the constraints of the partitioning to the extent that the partitions have to be wedgelet partitions, enables applying relatively uncomplex statistical analysis onto overlaid spatially sampled texture information in order to derive a good predictor for the bi-segmentation in a depth/disparity map. Thus, in accordance with a first aspect it is exactly the increase of the freedom which alleviates the signaling overhead provided that co-located texture information in form of a picture is present. Another aspect pertains the possibility to save side information rate involved with signaling a respective coding mode supporting irregular partitioning.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: September 8, 2020
    Assignee: GE VIDEO COMPRESSION, LLC
    Inventors: Philipp Merkle, Christian Bartnik, Haricharan Lakshman, Detlev Marpe, Karsten Mueller, Thomas Wiegand, Gerhard Tech
  • Patent number: 10771758
    Abstract: Techniques related to generating a virtual view from multi-view images for presentation to a viewer are discussed. Such techniques include determining, based on a viewer position relative to a display region, first and second crop positions of planar image and cropping the planar image to a cropped planar image to fill the display region using the first and second crop positions such that the first and second crop positions define an asymmetric frustum between the cropped planar image and a virtual window corresponding to the display region.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Oscar Nestares, Kalpana Seshadrinathan, Vladan Popovic, Horst Haussecker
  • Patent number: 10764615
    Abstract: Convenience is achieved in performing processing depending on decoding capability on the reception side. High-frame-rate ultra-high-definition image data is processed to obtain first image data for acquisition of a base-frame-rate high-definition image, second image data for acquisition of a high-frame-rate high-definition image by being used with the first image data, third image data for acquisition of a base-frame-rate ultra-high-definition image by being used with the first image data, and fourth image data for acquisition of a high-frame-rate ultra-high-definition image by being used with the first to third image data. A container is transmitted including a predetermined number of video streams including encoded image data of the first to fourth image data. Information is inserted into the container, the information corresponding to information that is inserted into each of the predetermined number of video streams and associated with image data included in the video streams.
    Type: Grant
    Filed: February 6, 2017
    Date of Patent: September 1, 2020
    Assignee: SONY CORPORATION
    Inventor: Ikuo Tsukagoshi
  • Patent number: 10764560
    Abstract: A system for producing 3D video using first and second cameras on first and second axes. The second camera has a field of view (FOV) overlapping with the first camera's FOV. The second axis is at a convergence angle relative to the first axis. A control computer changes the inter-camera distance by effectively moving the second camera laterally, and changes convergence angle by effectively rotating the second camera. The control computer automatically calculates the inter-camera distance and convergence angle based on the distance of a user to the display screen, working distance of the cameras, zoom settings, and size of the screen, and effectively moves the second camera accordingly. A keystone correction is performed to account for the camera projections, the frames are rotationally aligned, and the corrected/aligned frames are combined to produce a 3D image frame that is displayed on a 3D display screen.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 1, 2020
    Inventors: Piotr Kuchnio, Christopher Thomas Jamieson, Siu Wai Jacky Mak, Tammy Kee-Wai Lee, Yusuf Bismilla, Sam Anthony Leitch
  • Patent number: 10762712
    Abstract: Augmented reality (AR) telepresence systems and methods are disclosed for obtaining a 3D model of a physical location from a 3D-capture system comprising one or more 3D depth sensors disposed throughout the physical location, generating a truncated 3D model of the physical location, the truncated 3D model corresponding to the intersection of the generated 3D model and a field of view of a user terminal camera at the physical location, and transmitting the truncated 3D model to a remote location.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: September 1, 2020
    Assignee: PCMS Holdings, Inc.
    Inventors: Seppo T. Valli, Pekka K. Siltanen
  • Patent number: 10755471
    Abstract: To make it possible to obtain a natural virtual viewpoint image in which a structure or the like existing within an image capturing scene is represented three-dimensionally so as to be the same as a real one while suppressing a network load at the time of transmission of multi-viewpoint image data. The generation device according to the present invention generates a virtual viewpoint image based on three-dimensional shape data corresponding to an object, three-dimensional shape data corresponding to a structure, background data corresponding to a background different at least from the object and the structure, and information indicating a virtual viewpoint.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: August 25, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Naoki Umemura
  • Patent number: 10748341
    Abstract: A terminal device includes a memory configured to store computer-readable instructions and a processor configured to perform the computer-readable instructions. The processor is configured to: cause a real space camera in a real space to capture a real space image including a real player; cause a virtual space camera in a virtual space to capture a virtual space image including a virtual object, the real player performing an instruction input to the virtual object; create a composite image that is formed by composing part of the virtual space image stored in the memory and a player image in the real space image stored in the memory; and output the composite image to a display so that the display is configured to display the composite image.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: August 18, 2020
    Assignee: GungHo Online Entertainment, Inc.
    Inventor: Hiroyuki Ogasawara
  • Patent number: 10742964
    Abstract: Methods and apparatus for using a display in a manner which results in a user perceiving a higher resolution than would be perceived if a user viewed the display from a head on position are described. In some embodiments one or more displays are mounted at an angle, e.g., in a range from above 0 degrees to 45 degrees relative to a user's face and thus eyes. The user sees more pixels, e.g., dots corresponding to light emitting elements, per square inch of eye area than the user would see if the user were viewing the display head on due to the angle at which the display or displays are mounted. The methods and display mounting arrangement are well suited for use in head mounted displays, e.g., Virtual Reality (VR) displays for stereoscopic viewing (e.g., 3D) and/or non-stereoscopic viewing of displayed images.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: August 11, 2020
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Patent number: 10735752
    Abstract: Disclosed is technology associated with video encoding and decoding having a structure including one or more layers (quality, spatial, and view), and with a method that predicts a higher layer signal by using one or more reference layers when encoding and decoding the higher layer. In more detail, interlayer prediction can be performed in a way that considers the characteristic of each layer by separating a spatial/quality reference layer list, constituted by spatial and quality layers to be referred to at the same view as a target layer, from a view reference layer list, constituted by layers of the same spatial and quality level as the target layer, when encoding and decoding pictures of the higher layer, thereby improving encoding and decoding efficiency.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: August 4, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jung Won Kang, Ha Hyun Lee, Jin Ho Lee, Jin Soo Choi, Jin Woong Kim
  • Patent number: 10721452
    Abstract: A three dimensional system including rendering with variable displacement.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: July 21, 2020
    Assignee: VEFXi Corporation
    Inventors: Craig Peterson, Markus Roberts, Sergey Lomov, Manuel Muro, Pat Doyle
  • Patent number: 10721453
    Abstract: Stereoscopic images are subsampled and placed in a “checkerboard” pattern in an image. The image is encoded in a monoscopic video format. The monoscopic video is transmitted to a device where the “checkerboard” is decoded. Portions of the checkerboard (e.g., “black” portions) are used to reconstruct one of the stereoscopic images and the other portion of the checkerboard (e.g., “white” portions) are used to reconstruct the other image. The subsamples are, for example, taken from the image in a location coincident to the checkerboard position in which the subsamples are encoded.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: July 21, 2020
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Walter J. Husak, David Ruhoff, Alexandros Tourapis, Athanasios Leontaris
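The checkerboard packing described above interleaves quincunx-subsampled left and right views into one monoscopic frame, from which each view can be pulled back out by parity. A minimal sketch on 2-D lists; hole filling on reconstruction (which a real decoder would do by interpolation) is omitted:

```python
def pack_checkerboard(left, right):
    """Even (r+c) cells take the left view's sample, odd cells take the right's."""
    h, w = len(left), len(left[0])
    return [[left[r][c] if (r + c) % 2 == 0 else right[r][c] for c in range(w)]
            for r in range(h)]

def unpack_view(packed, parity):
    """parity 0 recovers the left-view samples, 1 the right; missing cells -> None."""
    return [[v if (r + c) % 2 == parity else None
             for c, v in enumerate(row)] for r, row in enumerate(packed)]
```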
  • Patent number: 10721487
    Abstract: Motion compensation requires a significant amount of memory bandwidth, especially for smaller prediction unit sizes. The worst case bandwidth requirements can occur when bi-predicted 4×8 or 8×4 PUs are used. To reduce the memory bandwidth requirements for such smaller PUs, methods are provided for restricting inter-coded PUs of small block sizes to be coded only in a uni-predictive mode, i.e., forward prediction or backward prediction. More specifically, PUs of specified restricted sizes in bi-predicted slices (B slices) are forced to be uni-predicted.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: July 21, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventor: Minhua Zhou
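The restriction above is a mode gate: bi-prediction is disallowed for 4x8 and 8x4 PUs in B slices, leaving only uni-prediction. The size set and slice/mode names come from the abstract; the function shape is an illustrative sketch:

```python
RESTRICTED_PU_SIZES = {(4, 8), (8, 4)}

def allowed_prediction_modes(pu_width, pu_height, slice_type):
    """Uni-prediction is always allowed; bi-prediction only for unrestricted
    PU sizes in B slices."""
    modes = ["forward", "backward"]
    if slice_type == "B" and (pu_width, pu_height) not in RESTRICTED_PU_SIZES:
        modes.append("bi")
    return modes
```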
  • Patent number: 10715782
    Abstract: A three dimensional system including a marker mode.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: July 14, 2020
    Assignee: VEFXi Corporation
    Inventor: Craig Peterson
  • Patent number: 10708606
    Abstract: A method for decoding a multilayer video signal, according to the present invention, is characterized by: selecting candidate reference pictures for a current picture from among corresponding pictures in one or more reference layers by using either a maximum temporal level indicator for a current layer or a temporal level identifier for the current picture belonging to the current layer; inducing the number of active reference layer pictures for the current picture on the basis of the number of the candidate reference pictures; obtaining a reference layer identifier on the basis of the number of the active reference layer pictures; determining an active reference picture for use in the interlayer prediction of the current picture by using the reference layer identifier; and performing interlayer prediction on the current picture by using the active reference picture.
    Type: Grant
    Filed: January 6, 2015
    Date of Patent: July 7, 2020
    Assignee: KT CORPORATION
    Inventors: Bae Keun Lee, Joo Young Kim
  • Patent number: 10694182
    Abstract: Information available from coding/decoding the base layer, i.e. base-layer hints, is exploited to render the motion-compensated prediction of the enhancement layer more efficient by more efficiently coding the enhancement layer motion parameters.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: June 23, 2020
    Assignee: GE VIDEO COMPRESSION, LLC
    Inventors: Tobias Hinz, Haricharan Lakshman, Jan Stegemann, Philipp Helle, Mischa Siekmann, Karsten Suehring, Detlev Marpe, Heiko Schwarz, Christian Bartnik, Ali Atef Ibrahim Khairat Abdelhamid, Heiner Kirchhoffer, Thomas Wiegand
  • Patent number: 10694081
    Abstract: The present disclosure relates to an information processing apparatus and an information processing method that are configured to be capable of efficiently acquiring information for use in generating three-dimensional data from two-dimensional image data. A grouping block sorts two or more virtual cameras for acquiring two-dimensional image data into two or more groups. A global table generation block generates a global table in which group information related with each of two or more groups is registered. A group table generation block generates, for each group, a group table in which camera information for use in generating three-dimensional data from two-dimensional image data acquired by a virtual camera sorted into a group is registered. The present disclosure is applicable to an encoding apparatus and the like, for example.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: June 23, 2020
    Assignee: SONY CORPORATION
    Inventor: Kendai Furukawa
  • Patent number: 10687119
    Abstract: An apparatus, method, and computer readable medium for a virtual reality (VR) client and a contents server for live virtual reality events are provided. The contents server receives feeds from each 360° camera at a venue. The contents server transmits a primary first 360° video in a first stream and converted secondary non-360° videos in a second stream to a VR client. The VR client determines relative positions of each non-360° video in order to display rendered thumbnails of the non-360° videos in the first 360° video. The VR client transmits a selection of a non-360° video to the contents server. The contents server transmits the second 360° video related to the selection in a first stream and the converted first non-360° video in the second stream.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: June 16, 2020
    Inventors: Jonghyun Kahng, Seokhyun Yoon
  • Patent number: 10679373
    Abstract: A system-on-chip is provided which is configured for real-time depth estimation of video data. The system-on-chip includes a monoscopic depth estimator configured to perform monoscopic depth estimation from monoscopic-type video data, and a stereoscopic depth estimator configured to perform stereoscopic depth estimation from stereoscopic-type video data. The system-on-chip is reconfigurable to perform either the monoscopic depth estimation or the stereoscopic depth estimation on the basis of configuration data defining a selected depth estimation mode. Both depth estimators include shared circuits which are instantiated in hardware and reconfigurable to account for differences in the functionality of the circuit in each depth estimator.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: June 9, 2020
    Assignee: ULTRA-D COÖPERATIEF U.A.
    Inventor: Abraham Karel Riemens
  • Patent number: 10681332
    Abstract: A viewing direction may define the angle/visual portion of a spherical video at which a viewing window is directed. A trajectory of viewing directions may include changes in viewing direction over playback of the spherical video. Abrupt changes in the viewing directions may result in jerky or shaky views of the spherical video. Changes in the viewing directions may be stabilized to provide stabilized views of the spherical video. The amount of stabilization may be limited by a margin constraint.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: June 9, 2020
    Assignee: GoPro, Inc.
    Inventors: Daryl Stimm, William Edward MacDonald, Kyler William Schwartz
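The stabilize-within-a-margin idea in this abstract can be sketched as a smoothing filter whose output is clamped so it never drifts more than a fixed margin from the raw viewing direction. This is a hypothetical one-axis (yaw, in degrees) sketch, not the patented algorithm; `alpha` and `margin` are illustrative parameters.

```python
def stabilize(directions, alpha=0.9, margin=10.0):
    """Smooth a trajectory of viewing directions (yaw angles, degrees).

    alpha  -- smoothing strength (0 = no smoothing, closer to 1 = smoother)
    margin -- maximum allowed deviation of the stabilized direction from
              the raw one (the "margin constraint" in the abstract)
    """
    out = []
    smoothed = directions[0]
    for raw in directions:
        # Exponential smoothing suppresses abrupt, shaky direction changes.
        smoothed = alpha * smoothed + (1 - alpha) * raw
        # Margin constraint: never drift more than `margin` degrees
        # away from the actual viewing direction.
        smoothed = max(raw - margin, min(raw + margin, smoothed))
        out.append(smoothed)
    return out
```

With a sudden 90° jump, the clamped output snaps to within the margin of the new direction and then eases in, instead of lagging arbitrarily far behind.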
  • Patent number: 10681369
    Abstract: According to the present invention, there is provided a method of encoding a three-dimensional (3D) image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; determining whether a reference block corresponding to the current block in a reference picture has motion information; when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block; and deriving a prediction sample for the current block based on the motion information on the current block.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: June 9, 2020
    Assignee: UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY
    Inventors: Gwang Hoon Park, Min Seong Lee, Young Su Heo, Yoon Jin Lee
  • Patent number: 10666977
    Abstract: Embodiments of the present invention provide methods and devices for coding and decoding a depth map, including: determining to perform simplified depth coding (SDC) decoding according to a flag of an SDC mode; determining a size of an image block and a maximum prediction size; determining an intra-frame prediction mode; in a case in which the size of the image block is greater than the maximum prediction size, splitting the image block to obtain N split image blocks; and performing the SDC decoding on the N split image blocks by using the intra-frame prediction mode. In this way, processing efficiency of coding and decoding a depth map can be improved.
    Type: Grant
    Filed: October 9, 2015
    Date of Patent: May 26, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Xiaozhen Zheng
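The splitting step in this abstract (when an image block exceeds the maximum prediction size, split it into N smaller blocks and run prediction on each) can be sketched as follows. The flat grid of `max_pred_size` squares is an illustrative simplification; power-of-two block sizes are assumed.

```python
def split_blocks(block_size, max_pred_size):
    """If a (square) image block exceeds the maximum prediction size,
    split it into N sub-blocks small enough to predict, as in the
    abstract. Returns a list of (x, y, size) sub-blocks. Assumes
    power-of-two sizes, so the grid divides the block evenly.
    """
    if block_size <= max_pred_size:
        # Small enough: predict the block as-is.
        return [(0, 0, block_size)]
    step = max_pred_size
    return [(x, y, step)
            for y in range(0, block_size, step)
            for x in range(0, block_size, step)]
```

SDC decoding with the chosen intra-frame prediction mode would then be applied to each returned sub-block in turn.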
  • Patent number: 10659760
    Abstract: A method of processing a file including video data, including processing a file including fisheye video data, the file including a syntax structure with a plurality of syntax elements that specify attributes of the fisheye video data, wherein the plurality of syntax elements include one or more bits that indicate fisheye video type information; determining, based on the one or more bits of the syntax structure, the fisheye video type information for the fisheye video data; and outputting, based on the determination, the fisheye video data for rendering.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: May 19, 2020
    Assignee: QUALCOMM Incorporated
    Inventor: Yekui Wang
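Reading type information out of a few bits of a syntax structure, as this abstract describes, amounts to masking and shifting a syntax byte. The bit layout below is purely an assumption for demonstration; it is not the actual file-format syntax the patent specifies.

```python
def parse_fisheye_type(syntax_byte):
    """Illustrative parse of fisheye video type information carried in a
    few bits of a syntax structure. The layout (2 type bits in the high
    bits, rest reserved) is a hypothetical example, not the real syntax.
    """
    view_dimension = (syntax_byte >> 6) & 0x3  # e.g. 0 = mono, 1 = stereo
    reserved = syntax_byte & 0x3F              # remaining reserved bits
    return {"view_dimension": view_dimension, "reserved": reserved}
```

A renderer would branch on the decoded type (mono vs. stereo fisheye) before mapping the circular fisheye images onto the viewing sphere.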
  • Patent number: 10659814
    Abstract: The present invention relates to a 3D video coding device and method, and provides a 3D video decoding method. The decoding method comprises the steps of: obtaining a disparity value on the basis of a reference view and a predetermined value; deriving motion information of a current block in a depth picture on the basis of the disparity value; and generating a prediction sample of the current block on the basis of the motion information, wherein the reference view is a view of a reference picture in a reference picture list. According to the present invention, even when a base view cannot be accessed, a disparity vector can be derived on the basis of an available reference view index in the decoded picture buffer (DPB), and coding efficiency can be enhanced.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: May 19, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Junghak Nam, Sehoon Yea, Jungdong Seo, Sunmi Yoo
  • Patent number: 10659813
    Abstract: Disclosed is a method for coding and decoding depth information. The method includes: combining the data in the original Depth Look-up Tables (DLTs) of selected views to establish a unified DLT; coding the number of elements in the unified DLT and each element in the unified DLT, and transmitting the coded value and each coded element to a decoder; and coding the depth information of each view by taking the unified DLT as a whole or in portions, and transmitting the coded depth information of each view to the decoder. Further disclosed are a system and a device for coding and decoding depth information. By means of the disclosure, it is possible to reduce the redundancy of coding information and improve coding and decoding efficiency.
    Type: Grant
    Filed: March 25, 2014
    Date of Patent: May 19, 2020
    Assignee: ZTE CORPORATION
    Inventors: Hongwei Li, Ming Li, Ping Wu, Guoqiang Shang, Yutang Xie
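The combining step in this abstract (merge the per-view DLTs into one unified table, then code its element count followed by its elements) can be sketched directly. This is a simplified illustration: real DLT coding entropy-codes the values, whereas here they are just listed.

```python
def build_unified_dlt(view_dlts):
    """Combine the depth look-up tables (DLTs) of the selected views
    into a single unified DLT: the union of all depth values present
    in any view, sorted and de-duplicated."""
    return sorted(set().union(*view_dlts))

def encode_unified_dlt(unified):
    # As in the abstract: code the number of elements first, then each
    # element. (A real codec would entropy-code these values.)
    return [len(unified)] + unified
```

Each view's depth samples can then be coded as indices into the unified DLT instead of repeating overlapping per-view tables, which is where the redundancy reduction comes from.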
  • Patent number: 10659698
    Abstract: A computer-implemented system and method of configuring a path of a virtual camera. The method comprises receiving user steering information to control the path of the virtual camera in a scene; determining a primary target based upon a field of view of the virtual camera; and estimating a future path and a corresponding future field of view of the virtual camera, based on the received steering information. The method further comprises determining a secondary target of the scene proximate to the estimated future path of the virtual camera based on a preferred perspective of the secondary target; and configuring the path to capture the secondary target from the preferred perspective.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: May 19, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Berty Jacques Alain Bhuruth
  • Patent number: 10648883
    Abstract: A method for developing a virtual testing model of a subject for use in simulated aerodynamic testing comprises providing a computer generated generic 3D mesh of the subject, identifying a dimension of the subject and at least one reference point on the subject, imaging the subject to develop point cloud data representing at least the subject's outer surface and adapting the generic 3D mesh to the subject. The generic 3D mesh is adapted by modifying it to have a corresponding dimension and at least one corresponding reference point, and applying at least a portion of the point cloud data from the imaged subject's outer surface at selected locations to scale the generic 3D mesh to correspond to the subject, thereby developing the virtual testing model specific to the subject.
    Type: Grant
    Filed: October 19, 2017
    Date of Patent: May 12, 2020
    Inventor: Jay White
  • Patent number: 10650574
    Abstract: Various embodiments of the present disclosure relate generally to systems and processes for generating stereo pairs for virtual reality. According to particular embodiments, a method comprises obtaining a monocular sequence of images using the single lens camera during a capture mode. The sequence of images is captured along a camera translation. Each image in the sequence of images contains at least a portion of overlapping subject matter, which includes an object. The method further comprises generating stereo pairs, for one or more points along the camera translation, for virtual reality using the sequence of images. Generating the stereo pairs may include: selecting frames for each stereo pair based on a spatial baseline; interpolating virtual images in between captured images in the sequence of images; correcting selected frames by rotating the images; and rendering the selected frames by assigning each image in the selected frames to left and right eyes.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: May 12, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
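The frame-selection step in this abstract (pick left/right frames for each stereo pair based on a spatial baseline along the camera translation) can be sketched in one dimension. This is a simplification under assumptions: camera positions are reduced to 1-D offsets along the translation, and the interpolation, rotation-correction, and rendering steps are omitted.

```python
def select_stereo_pairs(positions, baseline):
    """For each frame along the camera translation, pick the partner
    frame whose camera position is closest to `baseline` away, forming
    a (left_index, right_index) stereo pair. `positions` are 1-D
    offsets of each captured frame along the translation.
    """
    pairs = []
    for i, p in enumerate(positions):
        # Partner frame minimizing |distance - baseline|.
        best = min(
            (j for j in range(len(positions)) if j != i),
            key=lambda j: abs(abs(positions[j] - p) - baseline),
        )
        # Order the pair so the leftmost camera is the left eye.
        left, right = (i, best) if positions[i] < positions[best] else (best, i)
        pairs.append((left, right))
    return pairs
```

A baseline near the human interocular distance (roughly 6.5 cm) is the usual target for comfortable stereo viewing.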
  • Patent number: 10638119
    Abstract: A display image for a display panel (503) of an autostereoscopic display projecting the display image in a plurality of view cones is generated. A source (803) provides a three dimensional representation of a scene to be displayed and a generator (805) generates the display image from the representation. For each pixel, the generator (805) determines, in response to a direction mapping function, a scene viewpoint direction indication reflecting a view point direction for the scene, and a view cone projection direction indication reflecting a projection direction for the pixel within the view cones. The direction mapping function reflects a relationship between view cone projection directions and scene view point directions. The pixel value corresponding to the view point direction is then generated from the three dimensional representation. In addition, a processor (809) determines a viewer characteristic, and an adapter (811) adapts the direction mapping function in response to the viewer characteristic.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: April 28, 2020
    Assignee: Koninklijke Philips N.V.
    Inventor: Bart Kroon
  • Patent number: 10638123
    Abstract: To enhance a mono-output-only controller such as a mobile OS to support selective mono/stereo/mixed output, a stereo controller is instantiated in communication with the mono controller. The stereo controller coordinates stereo output, but calls and adapts functions already present in the mono controller for creating surface and image buffers, rendering, compositing, and/or merging. For content designated for 2D display, left and right surfaces are rendered from a mono perspective; for content designated for 3D display, left and right surfaces are rendered from left and right stereo perspectives, respectively. Some, all, or none of available content may be delivered to a stereo display in 3D, with a remainder delivered in 2D, and with comparable content still delivered in 2D to the mono display. The stereo controller is an add-on; the mono controller need not be replaced, removed, deactivated, or modified, facilitating transparency and backward compatibility.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: April 28, 2020
    Assignee: Atheer, Inc.
    Inventors: Mario Kosmiskas, Nathan Abercrombie, Sleiman Itani
  • Patent number: 10631008
    Abstract: The invention relates to a method where first and second stereoscopic images are formed, each comprising a left eye image and a right eye image. A first central image region and a first peripheral image region are determined in the first stereoscopic image, the first central image region comprising a first central scene feature and the first peripheral image region comprising a first peripheral scene feature, and a second central image region and a second peripheral image region are determined in the second stereoscopic image. Based on determining that said second central image region comprises said first peripheral scene feature, said first stereoscopic image is encoded such that said first peripheral image region is encoded with a reduced quality with respect to said first central image region.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: April 21, 2020
    Assignee: Nokia Technologies Oy
    Inventor: Payman Aflaki Beni
  • Patent number: 10630973
    Abstract: The invention concerns the encoding of at least one current integral image (IIj) captured by an image capture device, comprising the steps of: decomposing (C1) the current integral image into at least one frame (Vu) representing a given perspective of a scene, based on at least one image capturing parameter associated with the image capture device; encoding (C2) said at least one frame; decoding (C4) said at least one frame; and recomposing (C5) the current integral image from said at least one decoded frame, by applying an inverse of said decomposition and using said at least one image capturing parameter associated with the image capture device. The encoding method is characterised in that it implements the steps of: determining (C6) a residual integral image by comparing said at least one current integral image with said recomposed integral image, and encoding (C7) the data associated with the residual integral image and said at least one image capturing parameter.
    Type: Grant
    Filed: September 21, 2015
    Date of Patent: April 21, 2020
    Assignee: Orange
    Inventors: Joel Jung, Antoine Dricot
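The closed-loop structure in this abstract (encode a decomposed view, recompose the integral image from the *decoded* view, then encode the residual between original and recomposition) can be sketched with stand-in callables. All five callables below are hypothetical placeholders for the patent's steps, and images are flattened to lists of samples for brevity.

```python
def encode_with_residual(current, encode_view, decode_view, decompose, recompose):
    """Sketch of the residual-coding loop from the abstract. The
    residual is computed against the decoder-side recomposition, so
    encoder and decoder stay in sync even with a lossy view codec.
    """
    view = decompose(current)              # C1: integral image -> view(s)
    bitstream = encode_view(view)          # C2: encode the view
    decoded_view = decode_view(bitstream)  # C4: decode as the decoder would
    recomposed = recompose(decoded_view)   # C5: inverse decomposition
    residual = [a - b for a, b in zip(current, recomposed)]  # C6
    return bitstream, residual             # C7 would code both of these
```

With a toy lossy codec that halves and doubles sample values, the residual captures exactly the rounding loss, which is what the decoder needs to reconstruct the original image.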