Abstract: A three-dimensional motion obtaining apparatus includes: a light source; a charge amount obtaining circuit that includes pixels and obtains, for each of the pixels, a first charge amount under a first exposure pattern and a second charge amount under a second exposure pattern having an exposure period that at least partially overlaps an exposure period of the first exposure pattern; and a processor that controls a light emission pattern for the light source, the first exposure pattern, and the second exposure pattern. The processor estimates a distance to a subject for each of the pixels on the basis of the light emission pattern and on the basis of the first charge amount and the second charge amount of each of the pixels obtained by the charge amount obtaining circuit, and estimates an optical flow for each of the pixels on the basis of the first exposure pattern, the second exposure pattern, and the first charge amount and the second charge amount obtained by the charge amount obtaining circuit.
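The distance-estimation step above can be illustrated with a minimal indirect time-of-flight sketch. Everything here is an assumption for illustration, not the patent's method: a light pulse of width T is emitted, a first exposure window integrates charge over [0, T] and a second over [T, 2T], so the fraction of returned light landing in the second window grows linearly with round-trip delay.

```python
# Minimal indirect time-of-flight sketch (illustrative model, not the
# patent's exact scheme): charge q1 from exposure window [0, T], charge
# q2 from window [T, 2T]; the ratio q2/(q1+q2) gives the fraction of the
# pulse width by which the return is delayed.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(q1, q2, pulse_width_s):
    """Estimate distance (m) for one pixel from its two charge amounts."""
    total = q1 + q2
    if total == 0:
        return None  # no return signal at this pixel
    delay = pulse_width_s * (q2 / total)  # round-trip delay within [0, T]
    return C * delay / 2.0                # halve for the round trip

# With a 30 ns pulse, equal charges imply a 15 ns round-trip delay,
# i.e. roughly 2.25 m to the subject.
d = tof_distance(100.0, 100.0, 30e-9)
```

In practice a real sensor would subtract ambient-light charge and run this per pixel; the sketch keeps only the ratio arithmetic that the two overlapping exposure patterns make possible.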
Abstract: A video system for coding a stream of video data that includes a stream of video frames divides each video frame into a matrix of subblocks, wherein each subblock includes a plurality of pixels. The video system operates in accordance with nine prediction modes, each of which determines a manner in which a present subblock is to be coded. One of the nine prediction modes is selected to encode the present subblock, wherein the selected prediction mode provides a minimum error value for the present subblock.
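The minimum-error mode selection can be sketched as follows. The three toy predictors below (DC, vertical, horizontal) are illustrative stand-ins, not the patent's nine modes: each candidate mode predicts the subblock from neighboring pixels, and the mode with the smallest sum of absolute differences (SAD) is chosen.

```python
# Hedged sketch of minimum-error prediction-mode selection. The mode
# set is a simplified assumption: DC, vertical, and horizontal
# predictors built from the row above ("top") and column left ("left").

def sad(block, pred):
    """Sum of absolute differences between a block and its prediction."""
    return sum(abs(a - b) for row_b, row_p in zip(block, pred)
               for a, b in zip(row_b, row_p))

def predict_dc(top, left, n):
    dc = (sum(top) + sum(left)) // (2 * n)   # mean of the neighbors
    return [[dc] * n for _ in range(n)]

def predict_vertical(top, left, n):
    return [list(top) for _ in range(n)]     # repeat the row above

def predict_horizontal(top, left, n):
    return [[left[r]] * n for r in range(n)] # repeat the column to the left

MODES = {"dc": predict_dc, "vertical": predict_vertical,
         "horizontal": predict_horizontal}

def best_mode(block, top, left):
    n = len(block)
    costs = {name: sad(block, f(top, left, n)) for name, f in MODES.items()}
    return min(costs, key=costs.get)         # mode with minimum error

# A block whose rows repeat the pixels above it is predicted exactly
# by the vertical mode (SAD of zero).
block = [[10, 20, 30, 40]] * 4
mode = best_mode(block, top=[10, 20, 30, 40], left=[10, 10, 10, 10])
```

A real encoder would weigh the error against the bits needed to signal the mode; the sketch keeps only the error-minimizing selection the abstract describes.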
Abstract: Systems and methods for integrated graphics rendering are disclosed. In certain embodiments, the systems and methods utilize a graphics engine, a video encoding engine, and a remote client coding engine to render graphics over a network. The systems and methods involve the generation of per-pixel motion vectors, which are converted to per-block motion vectors at the graphics engine. The graphics engine injects these per-block motion vectors into a video encoding engine, such that the video encoding engine may convert those vectors into encoded video data for transmission to the remote client coding engine.
Abstract: A video encoding apparatus comprises an encoder that encodes input video; a decoder that decodes the encoded video data; and a filter that compensates for pixel values of the decoded video data. An adaptive loop filter (ALF) parameter predictor generates an ALF filter parameter using the decoded video data. The ALF filter parameter is applied to an ALF filter that compensates for a current pixel by using pixels adjacent to the current pixel and a filter coefficient for each neighboring pixel. A sample adaptive offset (SAO) filter unit, applied to the decoded video data, compensates for a current pixel by using at least one of an edge offset and a band offset; an ALF filter unit applies the ALF filter, with the ALF filter parameter, to video data to which the SAO filter has been applied; and an entropy encoder performs entropy encoding on the ALF filter parameter.
Type:
Grant
Filed:
March 25, 2019
Date of Patent:
June 30, 2020
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Sung-dae Kim, Ki-won Yoo, Jae-moon Kim, Sang-kwon Na
Abstract: Methods and apparatus for supporting the capture of images of surfaces of an environment visible from a default viewing position and capturing images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are packed into one or more frames and communicated to a playback device for use as textures which can be applied to a model of the environment where the images were captured. An environmental model includes a model of surfaces which are occluded from view from a default viewing position but which may be viewed if the user shifts the user's viewing location. Occluded image content can be incorporated directly into a frame that also includes non-occluded image data, or sent in frames of a separate, e.g., auxiliary, content stream that is multiplexed with the main content stream which communicates image data corresponding to non-occluded environmental portions.
Abstract: An image processing apparatus includes processing circuitry configured to generate a map formed of pixels that indicate information including left and right position information, distance information, and frequency values associated with the pixels; detect a pixel block formed of a plurality of the pixels having a common feature amount in the map; and generate a detection frame defining a search target region used for detecting a body from the distance information, based on the information indicated by the pixels forming the detected pixel block.
Abstract: Sampling grid information may be determined for multi-layer video coding systems. The sampling grid information may be used to align the video layers of a coding system. Sampling grid correction may be performed based on the sampling grid information. The sampling grids may also be detected. In some embodiments, a sampling grid precision may also be detected and/or signaled.
Abstract: An apparatus configured to provide a view around a vehicle, the apparatus including: a plurality of cameras attached to a body of the vehicle and configured to acquire respective images of surroundings of the vehicle, wherein at least one of the plurality of cameras is attached to a moving part of the body of the vehicle; a display unit; and at least one processor. The at least one processor is configured to: obtain movement information of the moving part of the body of the vehicle; generate an around view image by compositing respective images acquired by the plurality of cameras; correct the around view image based on the movement information of the moving part of the body of the vehicle; and control the display unit to display the corrected around view image.
Abstract: A method including selecting a first definition and a second definition from multiple levels of definition, and determining a first resolution and a second resolution that respectively correspond to the first definition and the second definition; transcoding a first part of content of a media file based on the first resolution and the second resolution and according to a preset transcoding rule, and recording a first real-time quantization parameter (QP) value and a second real-time QP value in the transcoding process; determining whether the first real-time QP value and the second real-time QP value meet a preset detection rule; if the determining result is no, adjusting the first resolution and/or adjusting the second resolution; and transcoding the non-transcoded part in the media file according to the adjusted first resolution and/or second resolution. The transcoding method and apparatus increase the transcoding speed of a media file.
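The adjust-then-continue step of this transcoding abstract can be sketched as a simple control rule. The resolution ladder and QP bounds below are invented for illustration, not the patent's preset detection rule: a high real-time QP means the encoder is struggling at that resolution, so the resolution steps down; a very low QP means there is headroom, so it steps up.

```python
# Illustrative sketch of QP-driven resolution adjustment. LADDER,
# QP_HIGH, and QP_LOW are hypothetical values standing in for the
# abstract's "preset detection rule".

LADDER = [(640, 360), (1280, 720), (1920, 1080)]  # assumed options
QP_HIGH, QP_LOW = 38, 22                          # assumed bounds

def adjust_resolution(resolution, realtime_qp):
    """Return a (possibly adjusted) resolution for the untranscoded part."""
    i = LADDER.index(resolution)
    if realtime_qp > QP_HIGH and i > 0:
        return LADDER[i - 1]            # QP too high: lower the resolution
    if realtime_qp < QP_LOW and i < len(LADDER) - 1:
        return LADDER[i + 1]            # ample headroom: raise it
    return resolution                   # QP within bounds: keep as-is

r1 = adjust_resolution((1280, 720), realtime_qp=45)  # steps down to 360p
r2 = adjust_resolution((1280, 720), realtime_qp=30)  # unchanged
```

In the abstract's flow, this check runs once on the QP values recorded while transcoding the first part of the file, and the remainder is transcoded at the adjusted resolutions.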
Abstract: Techniques for controlling optical behavior of a multi-view display apparatus comprising a first layer comprising first optical elements and a second layer comprising second optical elements.
Type:
Grant
Filed:
July 30, 2018
Date of Patent:
May 5, 2020
Assignee:
Lumii, Inc.
Inventors:
Thomas Anthony Baran, Matthew Waggener Hirsch, Daniel Leithinger
Abstract: A system is provided for use with a video input signal and a video unit. The video input signal can be one of a two dimensional video signal and a three dimensional video signal. The video unit can display a three dimensional video and a two dimensional video. The system includes a receiver portion, a processing portion, a switching portion and an output portion. The receiver portion can receive the video input signal. The processing portion can output a first signal in a first mode of operation and a second signal in a second mode of operation, wherein both the first signal and the second signal are based on the video input signal. The switching portion can switch the processing portion from the first mode of operation to the second mode of operation.
Abstract: A system mounted on a vehicle for detecting an obstruction on a surface of a window of the vehicle. A primary camera is mounted inside the vehicle behind the window and is configured to acquire images of the environment through the window. A secondary camera is focused on an external surface of the window, and operates to image the obstruction. A portion of the window, i.e., a window region, is subtended respectively by the field of view of the primary camera and the field of view of the secondary camera. A processor processes respective sequences of image data from both the primary camera and the secondary camera.
Abstract: The present invention provides a method and a device for deriving an inter-view motion merging candidate. A method for deriving an inter-view motion merging candidate, according to an embodiment of the present invention, can comprise the steps of: determining, on the basis of encoding information of an inter-view reference block derived by means of a disparity vector of a current block, whether or not inter-view motion merging of the current block is possible; and, if inter-view motion merging of the current block is not possible, generating an inter-view motion merging candidate of the current block by using encoding information of an adjacent block that is spatially adjacent to the inter-view reference block.
Abstract: A dome type surveillance camera includes a planar base and a plurality of cameras. Each of the plurality of cameras is connected to the base so as to be movable along a circumference on the base. The plurality of cameras includes three or more cameras. Each of the plurality of cameras is rotatable in both right and left directions about its imaging direction, the imaging direction being the optical axis direction of that camera, such that a panoramic image formed from the images captured by the plurality of cameras is continuous in the horizontal direction when the plurality of cameras are disposed equidistant from each other on a semicircular arc.
Abstract: A method receives an image from a video. The image is split into a first set of first blocks of a first size, and the first blocks are then split into a second set of second blocks of a second size. The method tests a first set of down-sampling patterns on the second blocks in a first block to determine whether the quality of reconstruction of the down-sampled second blocks meets a threshold associated with the first set of down-sampling patterns. Second blocks satisfying the threshold are down-sampled using the first set of down-sampling patterns. The method also tests a second set of down-sampling patterns on the first block to select one of the second set of down-sampling patterns to use to down-sample the second blocks that did not satisfy the threshold.
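The threshold test at the heart of this abstract can be sketched in one dimension. The keep-every-other-sample pattern and the neighbor-averaging reconstruction below are illustrative assumptions, not the patent's pattern sets: a block is down-sampled, the dropped samples are rebuilt, and the pattern is accepted only if the reconstruction error stays under the threshold.

```python
# Rough sketch of threshold-testing one down-sampling pattern on a row
# of samples (assumed pattern: keep even-indexed samples; assumed
# reconstruction: average the two kept neighbors).

def downsample_even(row):
    return row[0::2]                      # keep even-indexed samples

def reconstruct(kept, n):
    out = [0.0] * n
    out[0::2] = kept                      # restore the kept samples
    for i in range(1, n, 2):              # rebuild dropped samples
        right = out[i + 1] if i + 1 < n else out[i - 1]
        out[i] = (out[i - 1] + right) / 2
    return out

def pattern_ok(row, threshold):
    """True if this pattern reconstructs the row within the threshold."""
    rec = reconstruct(downsample_even(row), len(row))
    max_err = max(abs(a - b) for a, b in zip(row, rec))
    return max_err <= threshold

smooth = [0, 1, 2, 3, 4, 5, 6]            # linear ramp: reconstructs exactly
edgy = [0, 9, 0, 9, 0, 9, 0]              # alternating: reconstructs poorly
ok_smooth = pattern_ok(smooth, threshold=0.5)
ok_edgy = pattern_ok(edgy, threshold=0.5)
```

Blocks like `smooth` would keep this pattern, while blocks like `edgy` would fall through to the second set of patterns, mirroring the two-stage test the abstract describes.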
Abstract: An entropy encoder includes an entropy encoding circuit and a size determining circuit. The entropy encoding circuit receives symbols of a pixel group, and entropy encodes data derived from the symbols of the pixel group to generate a bitstream segment which is composed of a first bitstream portion and a second bitstream portion. The first bitstream portion contains encoded magnitude data of the symbols of the pixel group, and the second bitstream portion contains encoded sign data of at least a portion of the symbols of the pixel group. The size determining circuit determines a size of a bitstream portion, wherein the bitstream portion comprises at least one of the first bitstream portion and the second bitstream portion.
Abstract: Systems and methods for reducing latency through motion estimation and compensation techniques are disclosed. The systems and methods include a client device that uses transmitted lookup tables from a remote server to match user input to motion vectors, and tag and sum those motion vectors. When a remote server transmits encoded video frames to the client, the client decodes those video frames and applies the summed motion vectors to the decoded frames to estimate motion in those frames. In certain embodiments, the systems and methods generate motion vectors at a server based on predetermined criteria and transmit the generated motion vectors and one or more invalidators to a client, which caches those motion vectors and invalidators. The server instructs the client to receive input from a user, and use that input to match to cached motion vectors or invalidators. Based on that comparison, the client then applies the matched motion vectors or invalidators to effect motion compensation in a graphic interface.
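The client-side path of this latency-reduction scheme can be sketched as a lookup-sum-apply pipeline. The lookup table contents and the single tracked block below are invented for illustration: user inputs are matched against a server-supplied table, the matched per-input motion vectors are summed, and the sum shifts the decoded content to estimate motion before the next authoritative frame arrives.

```python
# Simplified sketch of client-side motion estimation from cached
# motion vectors. LOOKUP is a hypothetical server-transmitted table
# mapping inputs to (dx, dy) vectors.

LOOKUP = {
    "key_w": (0, -4),   # move forward: content shifts up
    "key_s": (0, 4),
    "key_d": (4, 0),
}

def summed_vector(inputs):
    """Match each input to a cached vector and sum the matches."""
    dx = sum(LOOKUP[i][0] for i in inputs if i in LOOKUP)
    dy = sum(LOOKUP[i][1] for i in inputs if i in LOOKUP)
    return dx, dy

def apply_to_block(position, inputs):
    """Shift a decoded block's position by the summed motion vector."""
    dx, dy = summed_vector(inputs)
    return position[0] + dx, position[1] + dy

# Two rightward inputs and one forward input accumulated since the
# last decoded frame shift the block by (+8, -4).
new_pos = apply_to_block((100, 100), ["key_w", "key_d", "key_d"])
```

The invalidators mentioned in the abstract would clear entries from a cache like `LOOKUP` when the server decides a cached vector no longer applies; that bookkeeping is omitted here.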
Abstract: The present invention relates to an image processing device and method enabling deterioration in encoding efficiency to be suppressed. A control information adding unit 184 embeds one picture worth of control information held in a control information holding unit 183 into a slice header of a predetermined slice, in encoded data held in an encoded data holding unit 182. For example, the control information adding unit 184 embeds one picture worth of control information in the slice header of the first-transmitted slice in the frame to be processed in the encoded data. The control information adding unit 184 outputs encoded data to which the control information has been added, in a predetermined order. The present invention can be applied to, for example, an image processing device.
Abstract: A moving picture coding apparatus includes a counter unit which counts the number of pictures following an intra coded picture, and a motion estimation unit which compares, with a picture signal, only those reference pictures that are the intra coded picture or pictures following it, selected from among a reference picture Ref1, a reference picture Ref2, and a reference picture Ref3 stored in memories, and determines the reference picture whose inter picture differential value is smallest.
Type:
Grant
Filed:
February 25, 2019
Date of Patent:
March 3, 2020
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
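The selection step in the moving picture coding abstract above can be sketched as a filtered minimum search. The picture representation and the sum-of-absolute-differences cost below are illustrative assumptions: only reference pictures at or after the most recent intra coded picture are compared with the current picture signal, and the one with the smallest inter-picture differential is chosen.

```python
# Sketch of reference-picture selection restricted to the intra coded
# picture and its followers (assumed cost: sum of absolute differences
# over flattened samples).

def differential(a, b):
    """Inter-picture differential between two sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def select_reference(current, refs, intra_index):
    """refs: list of (picture_index, samples). Only pictures whose index
    is at or after intra_index are eligible."""
    eligible = [(idx, pic) for idx, pic in refs if idx >= intra_index]
    best_idx, _ = min(eligible, key=lambda r: differential(current, r[1]))
    return best_idx

refs = [(5, [10, 10, 10]),   # precedes the intra picture: excluded
        (8, [20, 25, 20]),   # the intra coded picture
        (9, [21, 24, 23])]   # a following picture
chosen = select_reference([21, 25, 22], refs, intra_index=8)
```

Restricting the search this way keeps prediction chains from reaching behind the last intra picture, which is the property the counter unit in the abstract appears to enforce.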