Signal Formatting Patents (Class 348/43)
-
Patent number: 11140373
Abstract: A 360-degree video data processing method performed by a 360-degree video reception apparatus, according to the present invention, comprises the steps of: receiving 360-degree video data for a plurality of views; deriving metadata and information on a packed picture; decoding the packed picture based on the information on the packed picture; deriving a specific packed region for a target view from the packed picture based on the metadata; deriving a projected picture of the target view based on the specific packed region and the metadata; and rendering the projected picture based on the metadata, wherein the metadata includes multiview region-wise packing information, and wherein the multiview region-wise packing information includes information about a packed region in the packed picture and information about the target view.
Type: Grant
Filed: April 3, 2019
Date of Patent: October 5, 2021
Assignee: LG ELECTRONICS INC.
Inventors: Hyunmook Oh, Sejin Oh
-
Patent number: 11138804
Abstract: Methods and apparatuses are described for intelligent smoothing of 3D alternative reality applications for secondary 2D viewing. A computing device receives a first data set corresponding to a first position of an alternative reality viewing device. The computing device generates a 3D virtual environment for display on the alternative reality viewing device using the first data set, and a 2D rendering of the virtual environment for display on a display device using the first data set. The computing device receives a second data set corresponding to a second position of the alternative reality viewing device after movement of the alternative reality viewing device. The computing device determines whether a difference between the first data set and the second data set is above a threshold. The computing device updates the 2D rendering of the virtual environment on the display device using the second data set, when the difference is above the threshold value.
Type: Grant
Filed: July 29, 2020
Date of Patent: October 5, 2021
Assignee: FMR LLC
Inventors: Adam Schouela, David Martin, Brian Lough, James Andersen, Cecelia Brooks
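The thresholded update decision in this abstract can be sketched as a small Python helper. Representing each data set as an (x, y, z) device position and using Euclidean distance as the "difference" are assumptions for illustration; the claim does not specify the metric:

```python
import math

def should_update_2d_rendering(first_pose, second_pose, threshold):
    """Return True when the viewing device has moved far enough that the
    secondary 2D rendering should be refreshed (hypothetical helper, not
    the patented implementation).

    Poses are (x, y, z) positions of the alternative reality viewing
    device; the difference is the Euclidean distance between them.
    """
    difference = math.dist(first_pose, second_pose)
    return difference > threshold

# Small jitters are ignored; only a significant move triggers an update.
print(should_update_2d_rendering((0, 0, 0), (0.01, 0, 0), 0.05))  # False
print(should_update_2d_rendering((0, 0, 0), (0.2, 0.1, 0), 0.05))  # True
```

Gating updates this way smooths the 2D view for spectators, since head-tracking noise below the threshold never reaches the display device.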
-
Patent number: 11140403
Abstract: A method and apparatus for video decoding includes decoding a binary coded syntax element carrying an identification of a picture segment in a high level syntax structure comprising fixed length codewords and reconstructing the picture segment.
Type: Grant
Filed: May 6, 2019
Date of Patent: October 5, 2021
Assignee: TENCENT AMERICA LLC
Inventors: Byeongdoo Choi, Stephan Wenger, Shan Liu
-
Patent number: 11128890
Abstract: The invention relates to a method and apparatus for implementing the method. The method comprises determining movement of a multicamera device between a first time and a second time, the multicamera device comprising at least a first camera and a second camera; selecting a frame from the first camera at the first time; and entering the selected frame to a reference frame list of a frame from the second camera at the second time; where the position and direction of the first camera at the first time are the same as the position and direction of the second camera at the second time, and wherein the first camera and the second camera are different.
Type: Grant
Filed: July 13, 2017
Date of Patent: September 21, 2021
Assignee: NOKIA TECHNOLOGIES OY
Inventor: Payman Aflaki Beni
-
Patent number: 11122229
Abstract: The present disclosure relates to a solid-state imaging device, a signal processing method therefor, and an electronic apparatus enabling sensitivity correction in which a sensitivity difference between solid-state imaging devices is suppressed. The solid-state imaging device includes a pixel unit in which one microlens is formed for a plurality of pixels in a manner such that a boundary of the microlens coincides with boundaries of the pixels. A correction circuit corrects a sensitivity difference between the pixels inside the pixel unit based on a correction coefficient. The present disclosure is applicable to, for example, a solid-state imaging device and the like.
Type: Grant
Filed: October 1, 2019
Date of Patent: September 14, 2021
Assignee: SONY CORPORATION
Inventors: Youji Sakioka, Masayuki Tachi, Hisashi Nishimaki, Taishin Yoshida
-
Patent number: 11120567
Abstract: A depth map generation device for merging multiple depth maps includes at least three image capturers, a depth map generator, and a mixer. The at least three image capturers form at least two image capturer pairs. The depth map generator is coupled to the at least three image capturers for generating a depth map corresponding to each image capturer pair of the at least two image capturer pairs according to an image pair captured by that image capturer pair. The mixer is coupled to the depth map generator for merging at least two depth maps corresponding to the at least two image capturer pairs to generate a final depth map, wherein the at least two depth maps have different characteristics.
Type: Grant
Filed: April 8, 2020
Date of Patent: September 14, 2021
Assignee: eYs3D Microelectronics, Co.
Inventor: Chi-Feng Lee
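The mixer's merge step might look roughly like the Python sketch below. The merge policy (average where both maps have a measurement, otherwise take whichever is valid) is an illustrative assumption; the abstract does not specify how depth maps with different characteristics are combined:

```python
def merge_depth_maps(depth_a, depth_b, valid=lambda d: d > 0):
    """Merge two per-pixel depth maps into a final depth map.

    Toy policy: average where both maps carry a valid (non-zero)
    measurement, otherwise keep the valid one. This policy is an
    assumption, not the patented mixer behavior.
    """
    merged = []
    for row_a, row_b in zip(depth_a, depth_b):
        row = []
        for a, b in zip(row_a, row_b):
            if valid(a) and valid(b):
                row.append((a + b) / 2)
            elif valid(a):
                row.append(a)
            else:
                row.append(b)
        merged.append(row)
    return merged

near = [[0.5, 0.0], [0.25, 0.0]]  # near-range map, 0.0 = no measurement
far = [[0.0, 3.0], [0.75, 2.5]]   # far-range map
print(merge_depth_maps(near, far))  # [[0.5, 3.0], [0.5, 2.5]]
```

The appeal of merging pairs with different characteristics (e.g. different baselines) is that each pair contributes the depth range it measures best.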
-
Patent number: 11107271
Abstract: An apparatus includes a memory configured to store multiple sets of image data. Each of the sets corresponds to a respective portion of a surface of an object and a respective portion of a structured light pattern projected onto the surface. The apparatus includes a processor configured to perform structured light reconstruction of the sets, including matching a first group of image pixels that correspond to a projected pixel of the structured light pattern in a first set of image data with a second group of image pixels that correspond to the projected pixel in a second set of image data. The processor is configured to perform stereo reconstruction of the sets, including matching one or more features detected within the first group of image pixels with one or more features detected within the second group of image pixels, to generate three-dimensional point data of the surface.
Type: Grant
Filed: November 5, 2019
Date of Patent: August 31, 2021
Assignee: THE BOEING COMPANY
Inventors: Luke C. Ingram, Anthony W. Baker
-
Patent number: 11095867
Abstract: Systems and methods are provided that involve processing video to identify a plurality of people in the video; obtaining a plurality of gaze part affinity fields (PAFs) and torso PAFs from the identified plurality of people; determining orthogonal vectors from first vectors derived from the torso PAFs; determining an intersection between second vectors derived from the gaze PAFs and the orthogonal vectors; and changing a viewpoint of the video based on the intersection.
Type: Grant
Filed: February 13, 2020
Date of Patent: August 17, 2021
Assignee: FUJIFILM BUSINESS INNOVATION CORP.
Inventors: Hu-Cheng Lee, Lyndon Kennedy, David Ayman Shamma
-
Patent number: 11094130
Abstract: The embodiments relate to a method, and a technical equipment for implementing the method. The method comprises generating a bitstream defining a presentation, the presentation comprising an omnidirectional visual media content; and indicating in the bitstream a definition for an external media to be overlaid on the omnidirectional visual media content during rendering; wherein the definition comprises at least an overlay placement information for the external media on the omnidirectional visual media content. The embodiments also relate to a method and technical equipment for decoding the bitstream.
Type: Grant
Filed: January 17, 2020
Date of Patent: August 17, 2021
Assignee: Nokia Technologies Oy
Inventors: Igor Curcio, Sujeet Mate, Kashyap Kammachi Sreedhar, Emre Aksu, Miska Hannuksela, Ari Hourunranta
-
Patent number: 11089213
Abstract: Reproduction position information of a free viewpoint video is formatted to be easy for a viewer to understand, and managed. Words of a photographer, the words being uttered when the photographer was impressed at a time of shooting a scenery of the real world, are used as a tag of the shot video, and the words are managed in association with a viewpoint position and a viewing direction, and shooting time. Alternatively, words of the viewer viewing the free viewpoint video, the words being uttered when the viewer was impressed by a certain scene, are used as a tag of the shot video, and the words are managed in association with a viewpoint position and a viewing direction, and shooting time corresponding to the reproduction position. The viewer can cue a desired free viewpoint video by using a tag with which tagging has been performed by the viewer himself/herself or another person.
Type: Grant
Filed: May 13, 2016
Date of Patent: August 10, 2021
Inventor: Yuichi Hasegawa
-
Patent number: 11079475
Abstract: An optical device is disclosed. An optical device of the present invention that generates a depth map for an object comprises: a projector for irradiating electromagnetic waves onto an object; an optical receiver for receiving reflected waves reflected from the object; and a processor for generating a first depth map for the object on the basis of the direction of arrival of the received reflected waves, generating a second depth map for the object on the basis of the arrival time of the received reflected waves, and combining the first depth map and the second depth map and generating a depth map for the object.
Type: Grant
Filed: February 24, 2016
Date of Patent: August 3, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sergii Gryshchenko, Iegor Vdovychenko, Ivan Safonov, Andrii But, Vitaliy Bulygin, Volodymyr Khristyan
-
Patent number: 11070783
Abstract: A three dimensional system including rendering with variable displacement.
Type: Grant
Filed: July 9, 2020
Date of Patent: July 20, 2021
Assignee: VEFXi Corporation
Inventors: Craig Peterson, Markus Roberts, Sergey Lomov, Manuel Muro, Pat Doyle
-
Patent number: 11059581
Abstract: A method for adaptive mission execution by an unmanned aerial vehicle includes receiving a set of pre-calculated mission parameters corresponding to an initial UAV mission; collecting UAV operation data during flight of the unmanned aerial vehicle; calculating a set of modified mission parameters from the set of pre-calculated mission parameters and the UAV operation data, the set of modified mission parameters corresponding to a modified UAV mission; and executing the modified UAV mission on the unmanned aerial vehicle.
Type: Grant
Filed: December 21, 2018
Date of Patent: July 13, 2021
Assignee: DroneDeploy, Inc.
Inventors: Michael Winn, Jonathan Millin, Nicholas Pilkington, Jeremy Eastwood
-
Patent number: 11064218
Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding for virtual view synthesis may include decoding texture information and depth information of at least one or more basic view images and at least one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
Type: Grant
Filed: March 19, 2020
Date of Patent: July 13, 2021
Assignees: Electronics and Telecommunications Research Institute, Poznan University of Technology
Inventors: Gwang Soon Lee, Jun Young Jeong, Hong Chang Shin, Kug Jin Yun, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
-
Patent number: 11049266
Abstract: An apparatus comprises a processor to divide a first point cloud data set frame representing a three dimensional space at a first point in time into a matrix of blocks, determine at least one three dimensional (3D) motion vector for at least a subset of blocks in the matrix of blocks, generate a predicted second point cloud data set frame representing a prediction of the three dimensional space at a second point in time by applying the at least one 3D motion vector to the subset of blocks in the matrix of blocks, compare the predicted second point cloud data set frame to a second point cloud data set frame representing the three dimensional space at the second point in time to generate a prediction error parameter, and encode the second point cloud data set frame as a function of the first point cloud data set frame and the at least one 3D motion vector when the prediction error parameter is beneath an error threshold to produce an encoded second point cloud data set frame.
Type: Grant
Filed: July 31, 2018
Date of Patent: June 29, 2021
Assignee: INTEL CORPORATION
Inventors: Scott Janus, Barnan Das, Hugues Labbe, Jong Dae Oh, Gokcen Cilingir, James Holland, Narayan Biswal, Yi-Jen Chiu, Qian Xu, Mayuresh Varerkar, Sang-Hee Lee, Stanley Baran, Srikanth Potluri, Jason Ross, Maruthi Sandeep Maddipatla
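The motion-compensated prediction loop this abstract describes can be sketched in a few lines of Python. The block and point representations, the per-point Euclidean error, and the threshold value are all illustrative assumptions:

```python
import math

def predict_frame(blocks, motion_vectors):
    """Apply one 3D motion vector per block to every point in the block."""
    predicted = []
    for block, (mx, my, mz) in zip(blocks, motion_vectors):
        predicted.append([(x + mx, y + my, z + mz) for (x, y, z) in block])
    return predicted

def prediction_error(predicted_frame, actual_frame):
    """Mean per-point Euclidean distance between predicted and actual blocks."""
    total, count = 0.0, 0
    for pred_block, actual_block in zip(predicted_frame, actual_frame):
        for p, a in zip(pred_block, actual_block):
            total += math.dist(p, a)
            count += 1
    return total / count

blocks = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]]  # one block of two points
motion = [(0.5, 0.0, 0.0)]                     # its 3D motion vector
predicted = predict_frame(blocks, motion)
actual = [[(0.5, 0.0, 0.0), (1.5, 0.0, 0.0)]]
error = prediction_error(predicted, actual)
# Encode motion-compensated only when the error is beneath the threshold.
print(error < 0.1)  # True
```

When the error parameter stays beneath the threshold, only the motion vectors (plus residuals) need to be encoded rather than the full second frame, which is the compression win the claim targets.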
-
Patent number: 11044454
Abstract: Multi-layered frame-compatible video delivery is described. Multi-layered encoding and decoding methods, comprising a base layer and at least one enhancement layer with reference processing, are provided. In addition, multi-layered encoding and decoding methods with inter-layer dependencies are described. Encoding and decoding methods that are capable of frame-compatible 3D video delivery are also described.
Type: Grant
Filed: October 24, 2018
Date of Patent: June 22, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Athanasios Leontaris, Alexandros Tourapis, Peshala V. Pahalawatta, Kevin Stec, Walter Husak
-
Patent number: 11037031
Abstract: An image recognition method includes: determining a first feature map of a current frame image by using a convolutional neural network based on a type of the current frame image; determining a second feature map of a key frame image before the current frame image; performing feature alignment on the first feature map and the second feature map to obtain a first aligned feature map; fusing the first feature map and the first aligned feature map to obtain a first fused feature map; and recognizing content in the current frame image based on the first fused feature map.
Type: Grant
Filed: February 12, 2020
Date of Patent: June 15, 2021
Assignee: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
Inventors: Chaoxu Guo, Qian Zhang, Guoli Wang, Chang Huang
-
Patent number: 11039116
Abstract: A subtitle-embedding method for a virtual-reality (VR) video is provided. The method includes the following steps: obtaining a VR video; in response to execution of a display operation of the VR video, analyzing a current stereoscopic image of the VR video to obtain at least one object and an object parallax corresponding to the object in the current stereoscopic image; adjusting a subtitle parallax of a subtitle to be superimposed onto the current stereoscopic image according to the object parallax, wherein the subtitle parallax is greater than the object parallax; and superimposing the subtitle onto the current stereoscopic image using the calculated subtitle parallax.
Type: Grant
Filed: October 29, 2019
Date of Patent: June 15, 2021
Assignee: HTC Corporation
Inventor: Kuan-Wei Li
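The parallax adjustment rule (subtitle parallax strictly greater than the object parallax, so the subtitle appears in front of the scene) can be sketched as follows; taking the maximum over all detected objects and adding a fixed margin are assumptions, since the abstract only requires the inequality:

```python
def subtitle_parallax(object_parallaxes, margin=2.0):
    """Choose a subtitle parallax strictly greater than every object
    parallax in the current stereoscopic frame, so the subtitle is
    perceived in front of all scene objects. The fixed margin is an
    illustrative assumption."""
    return max(object_parallaxes) + margin

# Objects at parallaxes 4.0, 7.5 and 3.2: the subtitle is placed in
# front of the nearest one.
print(subtitle_parallax([4.0, 7.5, 3.2]))  # 9.5
```

Keeping the subtitle in front of every object avoids the depth-conflict artifact where text appears to be cut into by closer scene geometry.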
-
Patent number: 11032555
Abstract: The way of predicting a current block by assigning constant partition values to the partitions of a bi-partitioning of a block is quite effective, especially in case of coding sample arrays such as depth/disparity maps where the content of these sample arrays is mostly composed of plateaus or simple connected regions of similar value separated from each other by steep edges. The transmission of such constant partition values would, however, still need a considerable amount of side information which should be avoided. This side information rate may be further reduced if mean values of the values of neighboring samples associated with or adjoining the respective partitions are used as predictors for the constant partition values.
Type: Grant
Filed: September 4, 2020
Date of Patent: June 8, 2021
Assignee: GE Video Compression, LLC
Inventors: Philipp Merkle, Christian Bartnik, Haricharan Lakshman, Detlev Marpe, Karsten Mueller, Thomas Wiegand, Gerhard Tech
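The predictor described in the last sentence reduces to a per-partition mean over already-reconstructed neighbors. The data layout below (a map from partition id to the list of neighboring reconstructed sample values adjoining that partition) is an assumption for illustration:

```python
def predict_constant_partition_values(neighbor_samples):
    """Predict the constant value of each partition of a bi-partitioned
    block as the mean of the already-reconstructed neighboring samples
    adjoining that partition, so only a (small) correction needs to be
    transmitted rather than the full constant values.

    `neighbor_samples` maps partition id (0 or 1) to the reconstructed
    neighboring sample values adjoining that partition (assumed layout).
    """
    return {part: sum(vals) / len(vals) for part, vals in neighbor_samples.items()}

# A depth block split into two plateaus; each partition adjoins some
# already-decoded neighboring samples.
predictors = predict_constant_partition_values({0: [100, 102, 98], 1: [200, 204]})
print(predictors)  # {0: 100.0, 1: 202.0}
```

Because depth maps are dominated by flat plateaus, these means are usually very close to the true constants, which is exactly why the residual side information shrinks.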
-
Patent number: 11032562
Abstract: In accordance with a first aspect, the intra prediction direction of a neighboring, intra-predicted block is used in order to predict the extension direction of the wedgelet separation line of a current block, thereby reducing the side information rate necessitated in order to convey the partitioning information. In accordance with a second aspect, the idea is that previously reconstructed samples, i.e. reconstructed values of blocks preceding the current block in accordance with the coding/decoding order, allow for at least a prediction of a correct placement of a starting point of the wedgelet separation line, namely by placing the starting point of the wedgelet separation line at a position of a maximum change between consecutive ones of a sequence of reconstructed values of samples of a line of samples extending adjacent to the current block along a circumference thereof. Both aspects may be used individually or in combination.
Type: Grant
Filed: September 3, 2020
Date of Patent: June 8, 2021
Assignee: GE Video Compression, LLC
Inventors: Philipp Merkle, Christian Bartnik, Haricharan Lakshman, Detlev Marpe, Karsten Mueller, Thomas Wiegand, Gerhard Tech
-
Patent number: 11032530
Abstract: Improved techniques for generating depth maps are disclosed. A stereo pair of images of an environment is accessed. This stereo pair of images includes first and second texture images. A signal to noise ratio (SNR) is identified within one or both of those images. Based on the SNR, which may be based on the texture image quality or the quality of the stereo match, there is a process of selectively computing and imposing a smoothness penalty against a smoothness term of a cost function used by a stereo depth matching algorithm. A depth map is generated by using the stereo depth matching algorithm to perform stereo depth matching on the stereo pair of images. The stereo depth matching algorithm performs the stereo depth matching using the smoothness penalty.
Type: Grant
Filed: May 15, 2020
Date of Patent: June 8, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
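One way to picture an SNR-dependent smoothness penalty is below: the noisier the texture image, the more the cost function favors smooth disparity. The linear ramp, constants, and the simple `data + weight * |disparity jump|` cost shape are assumptions, not the patented formulation:

```python
def smoothness_weight(snr, base_weight=1.0, low_snr=10.0, max_scale=4.0):
    """Scale the smoothness term of a stereo-matching cost function by
    the signal-to-noise ratio: low SNR (noisy texture) gets a stronger
    smoothness penalty. Ramp shape and constants are illustrative."""
    if snr >= low_snr:
        return base_weight
    scale = 1.0 + (max_scale - 1.0) * (low_snr - snr) / low_snr
    return base_weight * scale

def matching_cost(data_cost, disparity_jump, snr):
    """Total cost = data term + SNR-weighted smoothness term."""
    return data_cost + smoothness_weight(snr) * abs(disparity_jump)

print(smoothness_weight(20.0))  # 1.0  (clean image: no extra penalty)
print(smoothness_weight(5.0))   # 2.5  (noisy image: smoother depth favored)
```

The trade-off is the usual one in regularized stereo: a heavier smoothness term suppresses noise-induced disparity speckle at the cost of slightly rounded depth edges.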
-
Patent number: 11024088
Abstract: A method for creating an augmented reality scene, the method comprising, by a computing device with a processor and a memory, receiving a first video image data and a second video image data; calculating an error value for a current pose between the two images by comparing the pixel colors in the first video image data and the second video image data; warping pixel coordinates into a second video image data through the use of the map of depth hypotheses for each pixel; varying the pose between the first video image data and the second video image data to find a warp that corresponds to a minimum error value; calculating, using the estimated poses, a new depth measurement for each pixel that is visible in both the first video image data and the second video image data.
Type: Grant
Filed: May 26, 2017
Date of Patent: June 1, 2021
Assignee: HoloBuilder, Inc.
Inventors: Simon Heinen, Lars Tholen, Mostafa Akbari-Hochberg, Gloria Indra Dhewani Abidin
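The "vary the pose, keep the warp with minimum photometric error" loop can be shown with a deliberately reduced model: a 1-D horizontal warp whose pixel shift is inversely proportional to the depth hypothesis. A real direct-method system would use the full 6-DoF pose and camera intrinsics; this reduction is purely an assumption for illustration:

```python
def photometric_error(image1, image2, depths, pose_shift):
    """Simplified photometric error for a candidate pose: warp each
    pixel of image1 into image2 by a horizontal shift scaled by the
    inverse of its depth hypothesis, then sum squared intensity
    differences over pixels that land inside image2."""
    h, w = len(image1), len(image1[0])
    error = 0.0
    for y in range(h):
        for x in range(w):
            shift = int(round(pose_shift / depths[y][x]))
            x2 = x + shift
            if 0 <= x2 < w:
                error += (image1[y][x] - image2[y][x2]) ** 2
    return error

img1 = [[0, 10, 20, 30]]
img2 = [[10, 20, 30, 40]]        # img1 shifted left by one pixel
depths = [[1.0, 1.0, 1.0, 1.0]]  # depth hypothesis per pixel
# Vary the pose and keep the warp with the minimum error value.
errors = {s: photometric_error(img1, img2, depths, s) for s in (-2, -1, 0, 1, 2)}
best = min(errors, key=errors.get)
print(best)  # -1
```

The recovered shift of -1 matches how `img2` was constructed; with the pose fixed, the same error can then be minimized per pixel over depth to refine the depth hypotheses, as the final step of the claim describes.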
-
Patent number: 11023748
Abstract: A method of estimating a position includes: determining a first rotation reference point based on localization information estimated for a target and map data; estimating a second rotation reference point from image data associated with a front view of the target; and correcting the localization information using a rotation parameter calculated based on the first rotation reference point and the second rotation reference point.
Type: Grant
Filed: March 7, 2019
Date of Patent: June 1, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: DongHoon Sagong, Hyun Sung Chang, Minjung Son
-
Patent number: 11025883
Abstract: A method (300) and apparatus (400) for three-dimensional television (3DTV) image adjustment includes loading (342, 344) default 2D-to-3D image setting values from a default settings memory to a user adjustment settings memory, annunciating (346) the default 2D-to-3D image setting values, receiving (361, 362) a 2D-to-3D image settings value adjustment, saving (370) the 2D-to-3D image settings value adjustment in the user adjustment settings memory, and applying (390) the 2D-to-3D image settings value adjustment to a 2D-to-3D converted image. These methods and apparatuses allow individual users to set 3DTV image settings to their personal preferences to compensate for brightness reductions caused by 3DTV glasses, depth perception sensitivities, and other image quality factors.
Type: Grant
Filed: August 19, 2019
Date of Patent: June 1, 2021
Assignee: Google Technology Holdings LLC
Inventors: Ajay K. Luthra, Michael A. Grossman, Jae Hoon Kim, Arjun Ramamurthy
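The load/adjust/save/apply flow of this claim can be mirrored in a tiny class. The setting names and the brightness-gain arithmetic in `apply` are illustrative assumptions:

```python
class ImageSettings3D:
    """Sketch of the described settings flow: defaults are loaded into a
    user-adjustment memory, adjusted, saved, and applied. Field names
    and the apply() arithmetic are assumptions, not the patented design."""

    def __init__(self, defaults):
        self.defaults = dict(defaults)  # default settings memory
        self.user = dict(defaults)      # user adjustment settings memory (load step)

    def adjust(self, name, value):
        self.user[name] = value         # receive + save the adjustment

    def apply(self, pixel_brightness):
        # Apply brightness compensation (e.g. for dimming by 3DTV glasses).
        return pixel_brightness * self.user["brightness_gain"]

settings = ImageSettings3D({"brightness_gain": 1.0, "depth": 5})
settings.adjust("brightness_gain", 1.3)  # viewer compensates for dark glasses
print(settings.apply(100))  # 130.0
```

Keeping the defaults in a separate memory means the user adjustments can always be reset without re-deriving the factory values.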
-
Patent number: 11025880
Abstract: A region of interest (ROI)-based virtual reality (VR) content streaming server and a method are disclosed. The streaming server includes a communication unit that receives a request signal for a media presentation description (MPD) file regarding tiles of a tiled video, region of interest (ROI) information, and a request signal for a segment file from an electronic device, and transmits the MPD file and the segment file corresponding to the request signal to the electronic device. The streaming server further includes a controller configured to, when the request signal for the MPD file is received, control the MPD file corresponding to the request signal to be transmitted to the electronic device.
Type: Grant
Filed: December 13, 2019
Date of Patent: June 1, 2021
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Junhwan Jang, Woochool Park, Youngwha Kim, Jinwook Yang, Sangpil Yoon, Hyunwook Kim, Eunkyung Cho, Minsu Choi, Junsuk Lee, Jaeyoung Yang
-
Patent number: 11012675
Abstract: A method for automatic selection of viewpoint characteristics and trajectories in volumetric video presentations includes receiving a plurality of video streams depicting a scene, wherein the plurality of video streams provides images of the scene from a plurality of different viewpoints, identifying a set of desired viewpoint characteristics for a volumetric video traversal of the scene, determining a trajectory through the plurality of video streams that is consistent with the set of desired viewpoint characteristics, rendering a volumetric video traversal that follows the trajectory, wherein the rendering comprises compositing the plurality of video streams, and publishing the volumetric video traversal for viewing on a user endpoint device.
Type: Grant
Filed: April 16, 2019
Date of Patent: May 18, 2021
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: David Crawford Gibbon, Tan Xu, Zhu Liu, Behzad Shahraray, Eric Zavesky
-
Patent number: 11002840
Abstract: Embodiments of the present disclosure provide a multi-sensor calibration method, a multi-sensor calibration device, a computer device, a medium and a vehicle. The method includes: acquiring data acquired by each of at least three sensors in a same time period in a traveling process of a vehicle; determining a trajectory of each of the at least three sensors according to the data acquired by each of the at least three sensors; and performing a joint calibration on the at least three sensors by performing trajectory alignment on the trajectories of the at least three sensors.
Type: Grant
Filed: September 18, 2019
Date of Patent: May 11, 2021
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Shirui Li, Yuanfan Xie, Xun Zhou, Liang Wang
-
Patent number: 11004257
Abstract: A method and apparatus for image conversion according to an embodiment of the present disclosure includes receiving original image data, separating the original image data into a front view image and a back view image for performing 3D conversion processing of the original image data, and generating a converted 3D image by restoring a background space between the front view image and the back view image using a 3D conversion processing neural network. The 3D conversion processing neural network according to the present disclosure may be a deep neural network generated by machine learning, and input and output of images may be performed in an Internet of things environment using a 5G network.
Type: Grant
Filed: January 24, 2020
Date of Patent: May 11, 2021
Assignee: LG ELECTRONICS INC.
Inventors: Soo Hyun Han, Jung In Kwon, Min Cheol Shin, Seung Hyun Song
-
Patent number: 10999535
Abstract: A non-transitory computer-readable recording medium having stored therein a video generation program for causing a computer to execute a process comprising: tracking a position of a three-dimensional model of each of a plurality of subjects in a three-dimensional space generated by combining, for the subjects, a plurality of imaging frames captured by a plurality of cameras in a plurality of directions; obtaining positional information of the subjects included in the three-dimensional space; obtaining information related to the subjects; and generating synthesized video by combining the information related to the subjects with a background area near the subjects based on the positional information of the subjects among areas of free viewpoint video generated on the basis of the three-dimensional model of each of the subjects.
Type: Grant
Filed: December 2, 2019
Date of Patent: May 4, 2021
Assignee: FUJITSU LIMITED
Inventors: Takaharu Shuden, Kazumi Kubota
-
Patent number: 10996757
Abstract: Virtual reality apparatus includes a display generator to generate images of a virtual environment, including a virtual representation of a display object and at least part of an avatar, for display to a user; a haptic interface including one or more actuators to provide a physical interaction with the user in response to a haptic interaction signal; a detector arrangement configured to detect two or more haptic detections applicable to a current configuration of the avatar relative to the object in the virtual environment; and a haptic generator to generate the haptic interaction signal in dependence upon the two or more haptic detections.
Type: Grant
Filed: February 20, 2018
Date of Patent: May 4, 2021
Assignee: Sony Interactive Entertainment Inc.
Inventors: Sharwin Winesh Raghoebardajal, Simon Mark Benson
-
Patent number: 10992964
Abstract: This application provides a method for determining a coding tree node split mode and a coding device. The method includes the step of determining a non-split based coding cost for coding a current image area corresponding to the current node and determining a binary tree split based coding cost for coding the current image area. The method further includes determining, based on the non-split based coding cost and the binary tree split based coding cost, whether a triple tree split based coding cost for coding the current image area needs to be obtained. If the triple tree split based coding cost needs to be obtained, a triple tree split is performed on the current node. The method then determines a split mode of the current node that corresponds to a minimum coding cost.
Type: Grant
Filed: March 13, 2020
Date of Patent: April 27, 2021
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yin Zhao, Haitao Yang, Shan Liu
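The selection logic (only evaluate the expensive triple-tree cost when the cheaper costs suggest splitting is worthwhile, then pick the minimum-cost mode) can be sketched as follows. The 1.2 gating factor is an invented placeholder; the abstract does not disclose the actual decision rule:

```python
def choose_split_mode(costs):
    """Pick the split mode of the current coding-tree node with the
    minimum coding cost. The triple-tree cost is only evaluated when a
    split already looks competitive with not splitting; the gating
    heuristic (factor 1.2) is an illustrative assumption."""
    modes = {"no_split": costs["no_split"], "binary": costs["binary"]}
    if costs["binary"] < 1.2 * costs["no_split"]:
        modes["triple"] = costs["triple"]  # worth computing the third cost
    return min(modes, key=modes.get)

print(choose_split_mode({"no_split": 100, "binary": 90, "triple": 80}))  # triple
print(choose_split_mode({"no_split": 50, "binary": 95, "triple": 40}))   # no_split
```

In the second call the triple-tree cost is never consulted, which is the encoder-speed benefit the method is after: skipping a full rate-distortion evaluation when early costs make it unlikely to win.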
-
Patent number: 10992959
Abstract: First and second pluralities of residual elements useable to reconstruct first and second respective parts of a representation of a signal are obtained. A transformation operation is performed to generate at least one correlation element. The transformation operation involves at least one residual element in the first plurality and at least one residual element in the second plurality. The at least one correlation element is dependent on an extent of correlation between the at least one residual element in the first plurality and the at least one residual element in the second plurality. The transformation operation is performed prior to the at least one correlation element being encoded.
Type: Grant
Filed: March 7, 2019
Date of Patent: April 27, 2021
Assignee: V-NOVA INTERNATIONAL LIMITED
Inventor: Ivan Damnjanovic
-
Patent number: 10986352
Abstract: Although wedgelet-based partitioning seems to represent a better tradeoff between side information rate on the one hand and achievable variety in partitioning possibilities on the other hand, compared to contour partitioning, the ability to alleviate the constraints of the partitioning to the extent that the partitions have to be wedgelet partitions, enables applying relatively uncomplex statistical analysis onto overlaid spatially sampled texture information in order to derive a good predictor for the bi-segmentation in a depth/disparity map. Thus, in accordance with a first aspect it is exactly the increase of the freedom which alleviates the signaling overhead, provided that co-located texture information in the form of a picture is present. Another aspect pertains to the possibility of saving the side information rate involved with signaling a respective coding mode supporting irregular partitioning.
Type: Grant
Filed: September 4, 2020
Date of Patent: April 20, 2021
Assignee: GE Video Compression, LLC
Inventors: Philipp Merkle, Christian Bartnik, Haricharan Lakshman, Detlev Marpe, Karsten Mueller, Thomas Wiegand, Gerhard Tech
-
Patent number: 10979727
Abstract: A method comprising: decoding, from a bitstream, a first encoded region of first picture into a first preliminary reconstructed region; forming a first reconstructed region from the first preliminary reconstructed region, wherein the forming comprises resampling and/or rearranging the first preliminary reconstructed region, wherein the rearranging comprises relocating, rotating and/or mirroring; and decoding at least a second region, wherein the first reconstructed region is used as a reference for prediction in decoding the at least second region and the second region either belongs to a second picture and is spatially collocated with the first reconstructed region or belongs to the first picture.
Type: Grant
Filed: June 15, 2017
Date of Patent: April 13, 2021
Assignee: Nokia Technologies Oy
Inventors: Miska Hannuksela, Jani Lainema, Alireza Aminlou, Ramin Ghaznavi Youvalari
-
Patent number: 10979688
Abstract: A system includes one or more memories and a control module. The one or more memories stores first pixel data corresponding to a first image frame and second pixel data corresponding to a second image frame. The control module performs stereo vision matching including: accessing the first pixel data and the second pixel data; determining initial disparity values, where the initial disparity values indicate differences between the first pixel data and the second pixel data; determining cost aggregation values based on the initial disparity values; determining matching merit values for the first pixel data based on the cost aggregation values; based on the matching merit values, determining first histogram aggregation values for first pixels of the first image frame; and refining the initial disparity values based on the first histogram aggregation values. The control module also estimates a depth of a feature or an object based on the refined disparity values.
Type: Grant
Filed: January 23, 2020
Date of Patent: April 13, 2021
Assignee: Marvell Asia Pte, Ltd.
Inventor: Huai Dong Li
-
Patent number: 10971192
Abstract: A computer-implemented method, comprising: obtaining motion indicators for a plurality of samples of a video stream; obtaining an anomaly state for each of a plurality of time windows of the video stream, each of the time windows spanning a subset of the samples, by (i) obtaining estimated statistical parameters for the given time window based on measured statistical parameters characterizing the motion indicators for the samples in at least one time window of the video stream that precedes the given time window and (ii) determining the anomaly state for the given time window based on the plurality of motion indicators obtained for the samples in the given time window and the estimated statistical parameters; and processing the video stream based on the anomaly state for various ones of the time windows.
Type: Grant
Filed: November 7, 2019
Date of Patent: April 6, 2021
Inventor: Sean Lawlor
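The two-step scheme (estimate statistics from preceding windows, then flag the current window against them) can be sketched with a z-score test. The Gaussian model, window size, and the k-sigma threshold are all assumptions; the claim only speaks of "estimated statistical parameters" from earlier windows:

```python
from statistics import mean, stdev

def window_anomaly_states(motion, window=4, k=3.0):
    """Flag each time window whose mean motion indicator deviates from
    statistics estimated on the preceding windows by more than k
    standard deviations. The z-score model is an illustrative
    assumption, not the patented estimator."""
    windows = [motion[i:i + window] for i in range(0, len(motion), window)]
    means = [mean(w) for w in windows]
    states = [False]  # the first window has no history to compare against
    for i in range(1, len(means)):
        history = means[:i]
        mu = mean(history)
        sigma = stdev(history) if len(history) > 1 else 1.0
        states.append(abs(means[i] - mu) > k * max(sigma, 1e-9))
    return states

# Steady low motion, then a sudden burst in the last window.
motion = [1, 1, 2, 1, 1, 2, 1, 1, 9, 9, 8, 9]
print(window_anomaly_states(motion))  # [False, False, True]
```

Because the parameters are re-estimated from whatever windows precede the current one, the detector adapts to each scene's baseline motion level rather than relying on a fixed global threshold.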
-
Patent number: 10971087
Abstract: According to an aspect of the present disclosure, there is provided a display device including a display panel having a left-eye display area and a right-eye display area, and a timing controller configured to receive video signals having a first frequency to generate video data having a second frequency, where the second frequency is lower than the first frequency.
Type: Grant
Filed: September 19, 2018
Date of Patent: April 6, 2021
Assignee: LG DISPLAY CO., LTD.
Inventor: Woongjin Seo
-
Patent number: 10972737
Abstract: A system and methods for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming and other light field display applications are provided, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depths as the distance between a given layer and the plane of the display increases. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed, (layered) core representation of the multi-dimensional scene data is produced at predictable rates, reconstructed and merged at the light field display in real-time by applying view synthesis protocols, including edge adaptive interpolation, to reconstruct pixel arrays in stages (e.g. columns then rows) from reference elemental images.Type: Grant
Filed: August 15, 2019
Date of Patent: April 6, 2021
Inventors: Matthew Hamilton, Chuck Rumbolt, Donovan Benoit, Matthew Troke, Robert Lockyer
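The core idea of layered scene decomposition, binning scene content into depth layers whose extent grows with distance from the display plane, can be sketched as follows. The thickness-doubling rule and the point representation are assumptions for illustration; the patent's actual plenoptic sampling scheme is not reproduced here.

```python
# Sketch of depth-layer decomposition: layer k spans a slab twice as
# thick as layer k-1, starting at the display plane z = 0. The doubling
# rule is an illustrative assumption, not the patent's sampling scheme.

def depth_layers(points, first_thickness=1.0, num_layers=4):
    """Assign (x, y, z) points to depth slabs of increasing thickness."""
    bounds, near, t = [], 0.0, first_thickness
    for _ in range(num_layers):
        bounds.append((near, near + t))
        near, t = near + t, t * 2
    layers = [[] for _ in bounds]
    for p in points:
        for k, (lo, hi) in enumerate(bounds):
            if lo <= p[2] < hi:
                layers[k].append(p)
                break
    return bounds, layers
```

Each layer would then be rendered and encoded independently, which is what makes the layered core representation compressible at predictable rates.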
-
Patent number: 10972714
Abstract: Image data of a photographic image is inputted, and based on information related to a distance from a focal plane when capture is performed, the sharpness of an image that the inputted image data represents is controlled. Image data resulting from the sharpness control being performed is outputted. In the sharpness control, the sharpness control amount is set in accordance with the luminance of a peripheral region that neighbors an in-focus region that is determined to be in-focus in the image.
Type: Grant
Filed: February 14, 2019
Date of Patent: April 6, 2021
Assignee: Canon Kabushiki Kaisha
Inventors: Yumi Yanai, Shinichi Miyazaki, Yuji Akiyama
-
Patent number: 10964089
Abstract: A method for coding view-dependent texture attributes of points in a 3D point cloud prior to transmission and decoding includes creating a supplemental enhancement information (SEI) message for the 3D point cloud prior to transmission and decoding. The SEI message includes parameters related to texture attributes of individual points in the 3D point cloud for a plurality of viewing states at an initial time, such that when the SEI message is received at a decoder, the decoder is enabled to use the message to classify the texture attributes and apply one or more texture attributes to individual points such that the texture of each individual point in the decoded 3D point cloud is a correct representation of texture of that individual point in the 3D point cloud prior to transmission and decoding for each of the viewing states.
Type: Grant
Filed: January 31, 2020
Date of Patent: March 30, 2021
Assignee: SONY CORPORATION
Inventor: Danillo Graziosi
-
Patent number: 10957067
Abstract: A control apparatus capable of efficiently detecting a target object even when the target object is shielded by other objects is provided. An object recognition unit 114 recognizes a target object 80 present in a 3D environment 4 by using measurement data acquired from a sensor 12. An information generation unit 116 generates 3D environmental information by integrating a plurality of measurement data. A position determination unit 120 determines an optimal position of the sensor 12 for performing the next measurement. A sensor control unit 140 moves the sensor 12 to the determined optimal position. Using the 3D environmental information, the position determination unit 120 determines, as the optimal position, a position of the sensor 12 from which the sensor 12 can take an image in which the size of the area shielded by the at least one first object is larger.
Type: Grant
Filed: March 20, 2019
Date of Patent: March 23, 2021
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Kei Yoshikawa, Shintaro Yoshizawa, Yuto Mori
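The next-best-view step described above, scoring candidate sensor poses against the accumulated 3D map and moving the sensor to the winner, can be sketched with a toy occupancy model. The 1-D grid, the two edge poses, and the coverage-based score are stand-ins; the patent's actual visibility criterion over the shielded area is not reproduced.

```python
# Toy next-measurement-position search: score each candidate sensor
# pose by how many cells of a 1-D occupancy row it can see, then pick
# the best. The grid and visibility model are illustrative stand-ins.

def visible_cells(grid, sensor_x):
    """Cells seen from a sensor at the left (0) or right edge of the row:
    scanning inward, everything up to and including the first occupied
    cell is visible; cells behind it are shielded."""
    order = range(len(grid)) if sensor_x == 0 else range(len(grid) - 1, -1, -1)
    seen = []
    for i in order:
        seen.append(i)
        if grid[i] == 1:  # occupied cell blocks the view
            break
    return set(seen)

def best_position(grid, candidates):
    """Candidate pose that maximises the observed portion of the scene."""
    return max(candidates, key=lambda x: len(visible_cells(grid, x)))
```

An obstacle near the left edge shields most of the row from a left-edge sensor, so the search prefers the right-edge pose.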
-
Patent number: 10958920
Abstract: In some embodiments, an encoder device is disclosed to generate single-channel standard dynamic range/high dynamic range content predictors. The device receives a standard dynamic range image content and a representation of a high dynamic range image content. The device determines a first mapping function to map the standard dynamic range image content to the high dynamic range image content. The device generates single-channel prediction metadata based on the first mapping function, such that a decoder device can subsequently render a predicted high dynamic range image content by applying the metadata to transform the standard dynamic range image content to the predicted high dynamic range image content.
Type: Grant
Filed: July 24, 2018
Date of Patent: March 23, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Guan-Ming Su, Tao Chen, Qian Chen
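The encoder/decoder split described above, fit a mapping function on the SDR/HDR pair, ship it as metadata, apply it at the decoder, can be sketched with a per-bin lookup table standing in for the mapping function. The bin count, the 8-bit SDR range, and the averaging fit are illustrative assumptions, not Dolby's actual predictor.

```python
# Minimal sketch of a single-channel SDR->HDR predictor. The encoder
# fits a per-bin lookup table (the "mapping function") and ships it as
# metadata; the decoder applies it to the SDR signal. Illustrative only.

def build_metadata(sdr, hdr, bins=8):
    """Average observed HDR value for each 8-bit SDR bin."""
    table = []
    for b in range(bins):
        lo, hi = b * 256 // bins, (b + 1) * 256 // bins
        vals = [h for s, h in zip(sdr, hdr) if lo <= s < hi]
        table.append(sum(vals) // len(vals) if vals else 0)
    return table

def predict_hdr(sdr, table):
    """Decoder side: transform SDR samples using the shipped table."""
    bins = len(table)
    return [table[min(s * bins // 256, bins - 1)] for s in sdr]

# Synthetic pair where HDR is a 16x linear expansion of SDR.
sdr = [0, 32, 64, 128, 255]
hdr = [s * 16 for s in sdr]
metadata = build_metadata(sdr, hdr)
```

Only the small table travels in the bitstream, which is the point of predictor metadata: the decoder reconstructs an HDR approximation from the SDR base layer alone.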
-
Patent number: 10944955
Abstract: A multiple viewpoint image capturing system includes: a plurality of cameras that capture videos in a predetermined space from different positions; a circumstance sensing unit that senses at least one of circumstances of the respective cameras and circumstances of the predetermined space, and outputs the sensed circumstances in a form of capturing circumstance information; and an event detector that detects a predetermined event based on the capturing circumstance information, determines whether to perform camera calibration in a case of detecting the predetermined event, and outputs camera calibration information that indicates the camera calibration to be performed in a case of determining that the camera calibration is to be performed.
Type: Grant
Filed: February 25, 2019
Date of Patent: March 9, 2021
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Tatsuya Koyama, Toshiyasu Sugio, Toru Matsunobu, Satoshi Yoshikawa
-
Patent number: 10945000
Abstract: There is provided a file generation apparatus and a file generation method, as well as a reproduction apparatus and a reproduction method, by which a file in which quality information of a depth-related image at least including a depth image is efficiently stored can be managed. An MPD file generation unit generates an MPD file. The MPD file is a management file that manages a file in which quality information representative of the quality of a plurality of depth-related images including at least a depth image is disposed, divided into a plurality of tracks or subsamples, and in which a correspondence relationship between the respective tracks and an associationID that specifies the depth-related images is described. The present disclosure can be applied when a segment file and an MPD file of a video content are distributed, for example, in a method that complies with MPEG-DASH.
Type: Grant
Filed: February 8, 2017
Date of Patent: March 9, 2021
Assignee: SONY CORPORATION
Inventors: Mitsuru Katsumata, Mitsuhiro Hirabayashi, Toshiya Hamada
-
Patent number: 10939087
Abstract: A video server is configured to convert frame data of a spherical image to frame data of an equirectangular image such that a first area corresponding to a field of view received from a client device is a middle area of the equirectangular image. The video server is further configured to scale the first area at a first resolution, scale a second area of the equirectangular image adjacent to the first area at a second resolution smaller than the first resolution, scale a third area of the equirectangular image that is adjacent to the first area and is not adjacent to the second area, at a third resolution smaller than the first resolution, and rearrange the scaled first area, second area and third area such that the scaled second area and the scaled third area are adjacent to each other, to generate reformatted equirectangular image frame data to be encoded.
Type: Grant
Filed: November 20, 2019
Date of Patent: March 2, 2021
Assignee: AlcaCruz Inc.
Inventors: SangYoo Ha, SungBae Kim
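The first/second/third-area scheme above, full resolution inside the field of view, reduced resolution outside, then a rearrangement that packs the scaled side areas together, can be sketched on a single 1-D pixel row. The factor-of-2 decimation and the row model are simplifications for illustration.

```python
# Sketch of view-dependent scaling and repacking for one row of an
# equirectangular frame. The FOV area keeps full resolution; the two
# areas outside it are downsampled, then moved next to each other.
# 1-D rows and naive decimation are illustrative simplifications.

def downsample(row, factor):
    """Naive decimation: keep every factor-th pixel."""
    return row[::factor]

def repack(frame_row, fov_start, fov_len, factor=2):
    """Full-res FOV area followed by the two downsampled side areas."""
    first = frame_row[fov_start:fov_start + fov_len]      # field of view
    second = downsample(frame_row[:fov_start], factor)    # left of FOV
    third = downsample(frame_row[fov_start + fov_len:], factor)  # right
    return first + second + third
```

The repacked row is shorter than the original, so the encoder spends fewer bits on the regions the viewer is not looking at, while the client can still reconstruct a coarse periphery.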
-
Patent number: 10931980
Abstract: An apparatus for providing a 360 degree virtual reality broadcasting service uses an image acquired through a 3D virtual reality (VR) camera to transmit a 360 degree full image stream having a first resolution, and transmits an image stream of a view of interest (VoI) region having a second resolution different from the first resolution according to a user selection mode and movement information if the user selection mode and the movement information are received from a receiving apparatus.
Type: Grant
Filed: July 19, 2017
Date of Patent: February 23, 2021
Assignee: Electronics and Telecommunications Research Institute
Inventors: Gun Bang, Kug Jin Yun, Youngsoo Park
-
Patent number: 10931883
Abstract: An example method includes setting an exposure time of a camera of a distance sensor to a first value, instructing the camera to acquire a first image of an object in a field of view of the camera, where the first image is acquired while the exposure time is set to the first value, instructing a pattern projector of the distance sensor to project a pattern of light onto the object, setting the exposure time of the camera to a second value that is different than the first value, and instructing the camera to acquire a second image of the object, where the second image includes the pattern of light, and where the second image is acquired while the exposure time is set to the second value.
Type: Grant
Filed: March 14, 2019
Date of Patent: February 23, 2021
Assignee: MAGIK EYE INC.
Inventor: Akiteru Kimura
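The capture sequence in this abstract is essentially a small protocol: set exposure A, capture the object, project the pattern, set exposure B, capture again. A sketch with stand-in `Camera` and `Projector` classes makes the ordering explicit; the patent does not specify an API, so every name here is hypothetical.

```python
# Illustrative dual-exposure capture sequence for a distance sensor.
# Camera and Projector are stand-ins for a real sensor API (assumed).

class Camera:
    def __init__(self):
        self.exposure = None
    def set_exposure(self, value):
        self.exposure = value
    def capture(self, scene):
        return {"exposure": self.exposure, "scene": scene}

class Projector:
    def project_pattern(self, scene):
        return scene + "+pattern"

def acquire_pair(camera, projector, scene, exp_object, exp_pattern):
    camera.set_exposure(exp_object)
    first = camera.capture(scene)            # object image, no pattern
    lit = projector.project_pattern(scene)   # pattern projected onto object
    camera.set_exposure(exp_pattern)
    second = camera.capture(lit)             # pattern image, second exposure
    return first, second
```

Using two exposure values lets one image be tuned for the object's ambient appearance and the other for the projected pattern, which typically needs a much shorter exposure.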
-
Patent number: 10924748
Abstract: Although wedgelet-based partitioning seems to represent a better tradeoff between side-information rate on the one hand and achievable variety in partitioning possibilities on the other hand, compared to contour partitioning, the ability to relax the constraint that the partitions must be wedgelet partitions enables applying relatively uncomplex statistical analysis to overlaid spatially sampled texture information in order to derive a good predictor for the bi-segmentation in a depth/disparity map. Thus, in accordance with a first aspect, it is exactly this increase in freedom which alleviates the signaling overhead, provided that co-located texture information in the form of a picture is present. Another aspect pertains to the possibility of saving side-information rate involved with signaling a respective coding mode supporting irregular partitioning.
Type: Grant
Filed: September 4, 2020
Date of Patent: February 16, 2021
Assignee: GE Video Compression, LLC
Inventors: Philipp Merkle, Christian Bartnik, Haricharan Lakshman, Detlev Marpe, Karsten Mueller, Thomas Wiegand, Gerhard Tech
-
Patent number: 10916022
Abstract: Disclosed are a texture synthesis method and a device using the same. The texture synthesis method adopts a source labeled diagram and a target labeled diagram in combination to guide the texture synthesis process, so that the texture synthesis is in a controlled state, thus effectively improving the accuracy and efficiency of the computer in processing complex texture information such as textures composed of multiple materials or textures involving non-uniform gradients. Meanwhile, determination of the accuracy of the texture features of the labeled diagram is introduced to the process of producing the labeled diagram, and the labeled diagram with low accuracy is re-abstracted and re-segmented so that the classification of the texture features is more accurate. This interactive iterative method improves the accuracy of the labeled diagram generation process. The device using this method also provides the same technical effects.
Type: Grant
Filed: March 27, 2017
Date of Patent: February 9, 2021
Assignee: SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY CHINESE ACADEMY OF SCIENCES
Inventors: Huajie Shi, Yang Zhou, Hui Huang
-
Patent number: 10911779
Abstract: A moving image encoding/decoding apparatus that performs encoding/decoding while predicting a multiview moving image including moving images of a plurality of different views includes: a corresponding region setting unit that sets a corresponding region on a depth map for an encoding/decoding target region; a region dividing unit that sets a prediction region that is one of regions obtained by dividing the encoding/decoding target region; a disparity vector generation unit that generates, for the prediction region, a disparity vector for a reference view using depth information for a region within the corresponding region that corresponds to the prediction region; a motion information generation unit that generates motion information in the prediction region from the reference view motion information based on the disparity vector for the reference view; and a prediction image generation unit that generates a prediction image for the prediction region using the motion information in the prediction region.
Type: Grant
Filed: October 15, 2014
Date of Patent: February 2, 2021
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shinya Shimizu, Shiori Sugimoto, Akira Kojima
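The disparity-vector step above rests on the standard relation for a horizontally aligned stereo pair: disparity = focal_length × baseline / depth. The sketch below derives a disparity vector from a depth value and uses it to fetch motion information from a reference view; the dictionary-based motion lookup and all parameter values are illustrative assumptions, as real codecs work on quantized depth indices and block grids.

```python
# Sketch: depth -> disparity vector -> reference-view motion lookup.
# Illustrative only; not the patent's actual derivation or data layout.

def disparity_vector(depth, focal_length, baseline):
    """Horizontal disparity (pixels) toward the reference view,
    using disparity = f * B / Z for a rectified stereo pair."""
    if depth <= 0:
        raise ValueError("depth must be positive")
    return (focal_length * baseline / depth, 0.0)

def motion_from_reference(ref_motion, position, dv):
    """Fetch reference-view motion at the disparity-shifted position.
    ref_motion maps integer (x, y) positions to motion vectors."""
    x, y = position
    shifted = (int(round(x + dv[0])), int(round(y + dv[1])))
    return ref_motion.get(shifted)
```

The fetched motion vector then serves as the prediction region's motion information, so the current view inherits motion from the reference view without signaling it explicitly.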