Abstract: A video coder performs a motion-compensated prediction both in the base layer and in an enhancement layer to determine motion data of the enhancement layer by using the motion data from the base layer and/or to predict sequences of residual error pictures after the motion-compensated prediction in the enhancement layer by using sequences of residual error pictures from the base layer via an intermediate layer predictor. On the decoder side, an intermediate layer combiner is used for canceling this intermediate layer prediction. Thereby, the data rate is reduced, at the same picture quality, compared to scalability schemes without intermediate layer prediction.
Type:
Grant
Filed:
February 24, 2005
Date of Patent:
July 10, 2012
Assignee:
Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors:
Heiko Schwarz, Detlev Marpe, Thomas Wiegand
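The inter-layer residual prediction in the abstract above can be sketched as follows. The function names and the plain pixel-wise difference are illustrative assumptions; the abstract does not fix the exact prediction operation:

```python
import numpy as np

def encode_enhancement_residual(enh_residual, base_residual):
    """Intermediate layer prediction: code only the difference between the
    enhancement-layer residual and the co-located base-layer residual."""
    return enh_residual - base_residual

def decode_enhancement_residual(coded_diff, base_residual):
    """Intermediate layer combiner: cancel the prediction at the decoder."""
    return coded_diff + base_residual

# Toy 2x2 residual blocks after motion-compensated prediction in each layer.
base = np.array([[1, 2], [3, 4]])
enh  = np.array([[2, 2], [4, 5]])

diff  = encode_enhancement_residual(enh, base)   # small values, cheaper to code
recon = decode_enhancement_residual(diff, base)  # exact reconstruction
```

Because the two layers' residuals are correlated, the coded difference has a smaller magnitude than the enhancement residual itself, which is where the rate saving comes from.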
Abstract: A wavelet transform encoding apparatus includes a coefficient encoding unit which encodes each group of multiple wavelet transform coefficients LH, HL, and HH located spatially at the same position within multiple high-frequency subbands belonging to the same hierarchy. At that time, the coefficient encoding unit calculates an encoding parameter for the wavelet transform coefficient of an encoding object based on multiple encoded vicinal wavelet transform coefficients within the multiple high-frequency subbands belonging to the same hierarchy, and encodes the wavelet transform coefficient of the encoding object into a variable-length code by utilizing the calculated encoding parameter.
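The core idea above, deriving a coding parameter from already-coded vicinal coefficients and using it for a variable-length code, can be sketched with a Golomb-Rice code. The parameter rule (mean magnitude of the neighbours) and the Rice code itself are illustrative assumptions, not the patent's actual method:

```python
def context_parameter(neighbors):
    """Derive the coding parameter k from already-coded vicinal coefficients
    across the LH/HL/HH subbands (here: from their mean magnitude)."""
    mean = sum(abs(n) for n in neighbors) / max(len(neighbors), 1)
    k = 0
    while (1 << k) < mean:
        k += 1
    return k

def rice_encode(value, k):
    """Golomb-Rice variable-length code: unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"                 # unary quotient, 0-terminated
    if k:
        bits += format(r, f"0{k}b")      # fixed k-bit remainder
    return bits

k = context_parameter([2, 3, 4])         # neighbours in LH, HL, HH
code = rice_encode(5, k)                 # code the current coefficient
```

Larger neighbouring magnitudes yield a larger k, so the code length adapts to the local statistics of the subband group.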
Abstract: According to one embodiment, an electronic apparatus includes a housing, a display module, and a reception member. The housing includes a first inner surface and a second inner surface opposed to the first inner surface. The display module includes a unit being stacked on the first inner surface and accommodated in the housing and a panel being stacked on the unit and opposed to the second inner surface. The reception member projects from the first inner surface. The reception member includes a base and an extended portion. The base faces a side of the unit. The extended portion projects from the top of the base and faces a side of the panel. The extended portion includes a first periphery and a second periphery inclined toward the first periphery.
Abstract: Rate-QP estimation for a B picture is disclosed which involves: providing an input group of pictures (GOP); selecting an input B picture within the GOP; and outputting, to a computer readable medium, a bit rate corrected Rate-QP, R(QP), for the input B picture. The outputting step may involve calculating intra/non-intra luma and chroma Rate-QP estimates from corresponding intra/non-intra luma and chroma histograms; offsetting the intra/non-intra chroma Rate-QP estimate to form respective offset intra/non-intra chroma estimates; and setting a bit rate corrected Rate-QP for the input B picture to a corrected sum of the previous estimates. The histograms are formed using an input of the lowest SATD forward, backward, and bidirectional prediction coefficients, and the intra prediction coefficients, where the intra/non-intra mode resulting in the lowest SATD is selected for each macroblock in the GOP. The methods may be implemented in a computer program, possibly resident in advanced video encoders.
Type:
Grant
Filed:
April 15, 2008
Date of Patent:
June 12, 2012
Assignees:
Sony Corporation, Sony Electronics Inc.
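The step of turning a coefficient histogram into a Rate-QP estimate can be illustrated with a deliberately crude model: coefficients below the quantization dead-zone cost roughly nothing, and surviving coefficients cost a fixed average. The step-size rule (H.264-style doubling every 6 QP) and the per-coefficient cost are illustrative assumptions only; the patent's actual estimator is not given in the abstract:

```python
def rate_estimate(histogram, qp, bits_per_nonzero=4.0):
    """Rough R(QP) estimate from a magnitude histogram: coefficients whose
    magnitude falls below the quantization step are assumed to quantize to
    zero and cost ~0 bits; the rest cost a fixed average number of bits."""
    step = 2 ** (qp / 6.0)               # step size doubles every 6 QP
    return sum(count * bits_per_nonzero
               for magnitude, count in histogram.items()
               if magnitude >= step)

# Histogram of SATD-selected prediction-coefficient magnitudes: {magnitude: count}.
hist = {1: 10, 4: 5, 16: 2}
curve = {qp: rate_estimate(hist, qp) for qp in (0, 12, 24)}
```

The resulting R(QP) is monotonically non-increasing in QP, which is the shape a rate-control loop needs to invert.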
Abstract: A communication system for multiple users whereby an automatic indication of away status is prompted immediately upon a user's departure from the vicinity of a computer or other medium. In a preferred embodiment, this is accomplished, in an instant messaging environment, via a video camera arrangement whereby, upon detection of a user's absence from the immediate vicinity, an automatic prompt is made to indicate away status for the user.
Type:
Grant
Filed:
July 22, 2008
Date of Patent:
May 29, 2012
Assignee:
International Business Machines Corporation
Abstract: A method for acquiring surveillance data corresponding to a region of interest comprising: installing a plurality of vehicle-mounted recording systems on a plurality of vehicles; capturing visual data of exterior perimeters of the vehicles having vehicle-mounted recording systems when the vehicles are in motion, wherein the visual data is marked with location data and time data; storing the visual data, the location data, and the time data so that each portion of the visual data is locatable by at least one of a time of video data capture and a location of video data capture; transmitting a request for recorded surveillance data corresponding to a region of interest; and receiving a reply transmission to the request for recorded surveillance data, the reply transmission comprising surveillance data corresponding to at least a portion of the region of interest recorded by at least one of the vehicle-mounted recording systems.
Abstract: Thumbnails are used to facilitate operations on an object that changes from moment to moment, as in a moving picture, thereby preventing failure to capture important objects.
Abstract: In accordance with an example embodiment of the present invention, the present invention provides a method and an apparatus for motion compensated prediction. Apart from translational motion, zoom motion is taken into account by sampling an interpolated frame with one or more selected sampling rates to generate one or more zoom reference frames; matching a frame with the zoom reference frames; and determining one or more motion data.
Type:
Grant
Filed:
October 16, 2009
Date of Patent:
May 1, 2012
Assignee:
Hong Kong Applied Science and Technology Research Institute Company Limited
Inventors:
Lai-Man Po, Ka Man Wong, Kwok Wai Cheung, Ka Ho Ng, Yu Liu
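The zoom-reference idea above can be sketched as follows: sample a reference frame at several rates to model zoom, then pick the rate that best matches the current frame. Nearest-neighbour sampling and a whole-frame SAD criterion are simplifying assumptions (the patent samples an interpolated frame and matches per block):

```python
import numpy as np

def zoom_reference(frame, scale):
    """Sample the (conceptually interpolated) frame at rate `scale` to model
    zoom; scale > 1 zooms out, scale < 1 zooms in."""
    h, w = frame.shape
    ys = np.clip((np.arange(h) * scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(w) * scale).astype(int), 0, w - 1)
    return frame[np.ix_(ys, xs)]

def best_zoom(current, reference, scales):
    """Match the current frame against the zoom references; return the scale
    with the lowest sum of absolute differences (SAD)."""
    return min(scales,
               key=lambda s: np.abs(current - zoom_reference(reference, s)).sum())
```

The selected scale, together with the translational displacement found by ordinary block matching, forms the motion data of the abstract.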
Abstract: A system and method of determining three-dimensional data for an object by performing optical flow analysis to calculate surface profile analysis for a group of monocular component images that vary in wavelength distribution, and determining three-dimensional data relating to the object from the component images and the surface profile analysis. Stereo image features are obtained from a single monocular polychromatic raw image or multiple monocular grayscale images having known spectral imaging collection data. Apparatus are described which allow three-dimensional data for an object to be determined from one or more two-dimensional images of the object.
Type:
Grant
Filed:
November 5, 2010
Date of Patent:
April 3, 2012
Assignee:
The United States of America as represented by the Secretary of the Army
Inventors:
Ronald Everett Meyers, David Lawrence Rosen, Keith Scott Deacon
Abstract: A deblocking unit may include a buffer, an edge mask generator, and a deblocking filter. The buffer may store video data including blocks. The blocks may correspond to at least a portion of a macroblock. The edge mask generator may generate a particular edge mask that defines edges between blocks to be deblocked. The edge mask generator may include an edge mask memory to store a number of edge masks and logic to choose the particular edge mask among the number of edge masks. The logic may choose based on a type of the video data in the buffer and a position offset of the macroblock. The deblocking filter may deblock edges between blocks of video data in the buffer based on the particular edge mask from the edge mask generator.
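The mask-selection logic above can be sketched as a small table lookup. The mask values, the (type, offset) keys, and the stub filter are all hypothetical; the abstract only states that the choice depends on the data type and the macroblock position offset:

```python
# Hypothetical masks: one bit per vertical edge in a row of four blocks.
EDGE_MASKS = {
    ("luma",   0): 0b1110,   # offset 0: skip the macroblock's left boundary
    ("luma",   4): 0b1111,   # interior offset: filter all edges
    ("chroma", 0): 0b0010,   # chroma: only the centre edge
}

def select_edge_mask(data_type, position_offset):
    """Choose an edge mask based on the buffered data's type and the
    macroblock position offset, as the edge mask generator's logic does."""
    return EDGE_MASKS[(data_type, position_offset)]

def deblock(edges, mask):
    """Filter only the edges whose mask bit is set (filter is a stub)."""
    return [f"filtered({e})" if mask & (1 << i) else e
            for i, e in enumerate(edges)]
```

Keeping precomputed masks in a memory and selecting one per macroblock avoids re-deriving per-edge filter decisions for every block.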
Abstract: A plurality of modules interact to form an adaptive network in which each module transmits and receives data signals indicative of proximity of objects. A central computer accumulates the data produced, received, or relayed by each module, analyzes the proximity responses, and transmits control signals through the adaptive network to a selectively addressed module in response to its analyses of the data accumulated from the modules forming the adaptive network. Interactions of local processors in modules that sense an intrusion determine the location and path of movements of the intruding object and control cameras in the modules to retrieve video images of the intruding object.
Abstract: A method and apparatus are disclosed for interpolating an object reference pixel in an annular image. In one embodiment, reference pixels selected based on a distorted shape of the annular image are arranged in a direction of distortion of the annular image, and an object reference pixel in the annular image is interpolated based on the selected reference pixels.
Type:
Grant
Filed:
August 11, 2005
Date of Patent:
March 6, 2012
Assignees:
Samsung Electronics Co., Ltd., Industry Academic Cooperation Foundation Kyunghee University
Abstract: Provided are an object-based multi-view video encoding apparatus and method supporting multiple displays, and an object-based transmission/reception system and method using the encoding apparatus and method.
Type:
Grant
Filed:
December 11, 2002
Date of Patent:
February 14, 2012
Assignee:
Electronics and Telecommunications Research Institute
Inventors:
Yun jung Choi, Suk-Hee Choi, Kug Jin Yun, Jinhwan Lee, Young Kwon Hahm, Chieteuk Anh
Abstract: Disclosed are methods and devices for compressing and decompressing video data streams, according to which the statistical relationship between image symbols and the context assigned thereto is used for compression. Particularly disclosed is a context-sensitive encoding unit in which the image symbols stored in an image memory are assigned to different encoding branches via a context switch, where they are encoded and compressed by a Golomb encoder and a run-length encoder.
Type:
Grant
Filed:
February 4, 2003
Date of Patent:
February 14, 2012
Assignee:
Siemens Aktiengesellschaft
Inventors:
Gero Bäse, Klaus Illgner-Fehns, Robert Kutka, Jürgen Pandel
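The context switch described in the abstract above can be sketched as routing each image symbol to an encoding branch according to its context. The two-branch split (run-length for one context, Golomb for the other) and the context rule are illustrative assumptions:

```python
def run_length(symbols):
    """Run-length encode a symbol list into (value, count) pairs."""
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return [tuple(p) for p in out]

def context_switch_encode(image_symbols, context_of):
    """Route each symbol to an encoding branch by its context: branch 0
    collects symbols for run-length coding, branch 1 collects symbols that
    would go to a Golomb encoder (left as raw symbols in this sketch)."""
    branches = {0: [], 1: []}
    for s in image_symbols:
        branches[context_of(s)].append(s)
    return run_length(branches[0]), branches[1]

# Example context rule: zero-valued symbols (long runs) vs. significant ones.
rl, golomb_input = context_switch_encode([0, 0, 0, 5, 0, 7],
                                         lambda s: 0 if s == 0 else 1)
```

Splitting by context lets each branch exploit the statistics it is best at: long zero runs compress well with run lengths, isolated magnitudes with Golomb codes.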
Abstract: A method for encoding a first set of pixels in a first image in a sequence of images is described. From a set of encoding modes, the method selects a first mode for encoding the first set of pixels. The method then determines whether encoding the first set of pixels in the first mode satisfies a set of quality criteria. The method foregoes encoding the first set of pixels in a second mode from the set of encoding modes when the first mode encoding satisfies the set of quality criteria. Also described is a video encoding method that examines several different methods for encoding a set of pixels in a first image. From a list of possible encoding modes, the method eliminates a set of encoding modes that are not likely to provide a suitable encoding solution. The method then examines different encoding solutions based on the remaining encoding modes in the list.
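The early-termination idea above can be sketched in a few lines: try modes in order and stop as soon as one satisfies the quality criteria, foregoing the rest. The callback signatures are illustrative assumptions:

```python
def encode_block(pixels, modes, encode, quality_ok):
    """Try encoding modes in order; return at the first mode whose result
    satisfies the quality criteria, foregoing the remaining modes."""
    result = None
    for mode in modes:
        result = encode(pixels, mode)
        if quality_ok(result):
            return mode, result        # remaining modes are never tried
    return modes[-1], result           # no mode satisfied: keep last attempt

# Example: cost-per-mode stub; criteria = "cost below a threshold".
cost_of = lambda pixels, mode: {"intra": 10, "inter": 2}[mode]
```

With a lenient threshold the first mode is accepted immediately and the second is never evaluated, which is exactly the saving the method targets.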
Abstract: An integrated circuit receives a compressed input stream having a first compression format. A media processing module converts the compressed input stream to an intermediary compression format for processing without fully decompressing the compressed input stream. After processing, a compressed output stream having a second compression format is generated from the intermediary compression format. Processing is dynamically adjusted responsive to changing network conditions. Optionally, the integrated circuit can receive live, raw video, partially encode it into the intermediary compression format, process it with the media processing module, as well as take the intermediary compression format, decode, and output the live, raw video.
Abstract: Apparatus, systems and techniques based on an integer transform for encoding and decoding video or image signals, including the transform-based encoding and decoding of image and video signals and the generation of an order-2N transform W from an order-N transform T in the field of image and video coding. For example, a retrieving unit is configured to retrieve an order-N transform T, where N is an integer; a deriving unit is configured to derive an order-2N transform W from the retrieved order-N transform T; and a transforming unit is configured to generate order-2N data Z using the derived transform W.
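One classic way to derive an order-2N transform from an order-N one, shown here purely as an illustration since the abstract does not fix the rule, is the Hadamard-style block doubling W = [[T, T], [T, -T]]:

```python
import numpy as np

def derive_order_2n(T):
    """Derive an order-2N transform W from an order-N transform T by
    Hadamard-style doubling: W = [[T, T], [T, -T]]. This preserves
    orthogonality (up to scaling) and is one common construction; the
    patent's exact derivation may differ."""
    return np.block([[T, T], [T, -T]])

T1 = np.array([[1]])
W2 = derive_order_2n(T1)      # the 2x2 Hadamard matrix
W4 = derive_order_2n(W2)      # order-4 transform from the order-2 one
Z  = W4 @ np.array([1, 2, 3, 4])   # transform order-2N data with W
```

Doubling lets an encoder support larger block sizes while reusing the smaller transform as a building block, which keeps the integer arithmetic simple.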
Abstract: A three-dimensional imaging system is disclosed which includes a three-dimensional display (12), three-dimensional calibration equipment (16), and one or more two-dimensional (15) or three-dimensional (14) image scanners. The three-dimensional display (12) uses optical pulses (32a-32k) and a non-linear optical mixer (18) to display a three-dimensional image (17). The three-dimensional image (17) is generated in voxels of the display volume (28) as the optical mixer (18) sweeps the display volume (28). The three-dimensional calibration equipment (16) uses a hologram projected proximal to a desired object (164) to calibrate optical imaging devices (162a-162c) and to simplify the combination of the images from one or more optical imaging devices (162a-162c) into three-dimensional information. The three-dimensional image scanner (14) employs optical pulses (136, 138) and a non-linear optical mixer (128) to acquire three-dimensional images of a desired object (134).
Type:
Grant
Filed:
December 22, 2004
Date of Patent:
January 17, 2012
Assignee:
The Trustees of The Stevens Institute of Technology
Abstract: A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remaining being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single direction technique where just one reference frame, and so one chain, is used to predict each non-keyframe, or it can be a multi-directional technique where two or more reference frames, and so chains, are used to predict each non-keyframe.
Type:
Grant
Filed:
July 15, 2005
Date of Patent:
January 17, 2012
Assignee:
Microsoft Corporation
Inventors:
Simon Winder, Matthew Uyttendaele, Charles Zitnick, III, Richard Szeliski, Sing Bing Kang
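The multi-directional spatial prediction in the abstract above can be sketched by predicting each non-keyframe from two or more reference frames and coding only the residual. Using the pixel-wise mean of the references is a simplifying assumption; a real predictor would use disparity-compensated warping between viewpoints:

```python
import numpy as np

def predict_non_keyframe(references):
    """Multi-directional spatial prediction: predict a non-keyframe from two
    or more reference frames (here: their pixel-wise mean)."""
    return np.mean(references, axis=0)

def compress(frame, references):
    """Code only the residual against the multi-directional prediction."""
    return frame - predict_non_keyframe(references)

def decompress(residual, references):
    """Invert the prediction at the decoder."""
    return residual + predict_non_keyframe(references)

refs  = [np.full((2, 2), 2.0), np.full((2, 2), 4.0)]  # two reference frames
frame = np.full((2, 2), 3.5)                          # non-keyframe to code
residual = compress(frame, refs)
```

With two references (and so two chains) the prediction tracks the scene from both sides of the non-keyframe's viewpoint, shrinking the residual relative to single-direction prediction.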