Abstract: An apparatus for encoding a digital video signal to reduce its transmission rate, which comprises a feature point based motion compensation circuit for selecting a set of feature points from the reconstructed reference frame, detecting a set of motion vectors between a current frame and an original reference frame corresponding to the set of feature points by using a feature point based motion estimation, and generating a second predicted frame based on the set of motion vectors and the reconstructed reference frame. The feature point based motion estimation employs a convergence process in which a displacement of each feature point is added to its motion vector, and the six triangles of each hexagon are affine-transformed independently using the displacements of their vertex feature points. If a displacement yields a better PSNR, the motion vector of the subject feature point is sequentially updated.
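The convergence step above can be sketched as follows. This is an illustrative simplification: the function names, the candidate-displacement set, and the flat-list block representation are hypothetical, and only the PSNR-driven sequential update of one feature point's motion vector is shown, not the hexagonal affine warping itself.

```python
import math

def psnr(a, b):
    """Peak signal-to-noise ratio between two equal-size 8-bit pixel blocks."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(255.0 ** 2 / mse)

def refine_motion_vector(current, predict, mv, candidates):
    """Sequentially update one feature point's motion vector: a candidate
    displacement is kept only if the resulting prediction improves PSNR.
    `predict` is a hypothetical callable returning the predicted block for
    a given motion vector."""
    best_mv, best_psnr = mv, psnr(current, predict(mv))
    for d in candidates:
        trial = (mv[0] + d[0], mv[1] + d[1])
        p = psnr(current, predict(trial))
        if p > best_psnr:
            best_mv, best_psnr = trial, p
    return best_mv
```

Iterating this update over all feature points of a hexagonal grid until no displacement improves PSNR is the convergence process the abstract refers to.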
Abstract: Encoded data are given to a variable-length decoding circuit, decoded, and stored in a memory. The same data are read out from the memory again, so that decoding of the encoded data is executed twice within one frame period. The decoded data from the two decoding passes are stored in the memory at its output part. Data of odd fields are read out in display order in the first half of the display period of one frame; data of even fields are read out in display order in the latter half. Thus, even when restored image data of a B-picture are output in interlaced form, the memory capacity can be reduced.
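The two-pass field output described above can be sketched as a generator. Here `decode_frame` is a hypothetical callable standing in for the decoding circuit; lines are indexed from zero, with even indices taken as the odd (first-displayed) field, so only one field ever needs to be retained per pass.

```python
def output_interlaced(decode_frame):
    """Decode the same coded frame twice: the first pass emits only the
    odd-field lines, the second pass only the even-field lines, so a full
    frame never has to be buffered at once."""
    # First half of the display period: odd field (zero-based indices 0, 2, 4, ...)
    for i, line in enumerate(decode_frame()):
        if i % 2 == 0:
            yield ("odd", i, line)
    # Second half: decode again and emit the even field (indices 1, 3, 5, ...)
    for i, line in enumerate(decode_frame()):
        if i % 2 == 1:
            yield ("even", i, line)
```

The memory saving comes from trading a second decoding pass for a half-size output buffer.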
Abstract: In a predictive coding apparatus, an interframe/interfield predictive signal is generated in which low frequency components are suppressed. A predictive error signal is generated on the basis of an input video signal and the interframe/interfield predictive signal. The predictive error signal is then interframe/interfield-coded. In a predictive decoding apparatus, a code transmitted from the coding apparatus is interframe/interfield-decoded to obtain a decoded signal. An interframe/interfield predictive signal in which low frequency components are suppressed is generated. The video signal is then reproduced by adding the decoded signal and the predictive signal.
Abstract: A video system is described that determines an indication of interpolation error for each of a set of available subsampling modes for encoding a target video frame containing a set of pixel data values corresponding to an image scene. Each available subsampling mode provides a differing degree of subsampling of the target video frame. The video system determines a selected subsampling mode from the available subsampling modes such that the selected subsampling mode provides an attainable degree of subsampling with minimal loss in image quality.
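A minimal sketch of the mode-selection idea, assuming one-dimensional pixel data, nearest-neighbour interpolation, and a hypothetical error budget; the actual system would evaluate full frames:

```python
def interpolation_error(pixels, step):
    """Subsample `pixels` by keeping every `step`-th value, reconstruct the
    dropped values by nearest-neighbour interpolation, and sum the absolute
    reconstruction error."""
    kept = pixels[::step]
    reconstructed = [kept[i // step] for i in range(len(pixels))]
    return sum(abs(a - b) for a, b in zip(pixels, reconstructed))

def select_mode(pixels, steps, max_error):
    """Pick the strongest subsampling (largest step) whose interpolation
    error stays within `max_error`; fall back to no subsampling (step 1)."""
    best = 1
    for step in sorted(steps):
        if interpolation_error(pixels, step) <= max_error:
            best = step
    return best
```

For data that is locally constant, a larger subsampling step passes the error test and is chosen; detailed data forces a smaller step.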
Abstract: When a digital video signal is compressed, the amount of resulting data is predicted and the quantization level is controlled so as to obtain a constant bit rate. The prediction is made by computing a linear combination of a standard deviation and a number of non-zero coefficients, or by computing a sum of absolute differences between adjacent pixel values, or by computing a dynamic range of pixel values. The bit rate can also be controlled by deleting high-frequency coefficients. To avoid image degradation, the quantization level can also be controlled according to the sensitivity of small image areas to quantization noise. Sensitivity is determined by dividing an area into subblocks, which may overlap, and calculating statistics in each subblock. To reduce the amount of computation required in motion estimation, chrominance motion vectors are derived from luminance motion vectors.
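The first two predictors mentioned above can be written down directly. The weights `a`, `b`, `c` are hypothetical and would be fitted empirically against observed coded sizes:

```python
def predicted_bits(coefficients, a, b, c):
    """Predict the coded data amount as a linear combination of the block's
    standard deviation and its count of non-zero coefficients."""
    n = len(coefficients)
    mean = sum(coefficients) / n
    std = (sum((x - mean) ** 2 for x in coefficients) / n) ** 0.5
    nonzero = sum(1 for x in coefficients if x != 0)
    return a * std + b * nonzero + c

def activity_sad(pixels):
    """Alternative predictor: sum of absolute differences between
    horizontally adjacent pixel values."""
    return sum(abs(p - q) for p, q in zip(pixels, pixels[1:]))
```

The quantization level would then be raised or lowered until the predicted amount matches the target bit rate.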
Abstract: A video signal is coded by intraframe coding. The coded video signal is then decoded by intraframe/field decoding. A decoded local video signal is generated, which corresponds to a specific region in one frame of a video signal coded one frame prior to the video signal. A first motion signal is generated, which indicates motion of pictures in both the decoded video signal and the decoded local video signal. The motion signal is then subtracted from the video signal to generate a predictive error signal which is then coded by intraframe coding. The coded video signal and the coded predictive error signal are decoded by intraframe/field-decoding. A second motion signal is generated, which indicates motion of pictures in both the decoded video signal and a video signal decoded one frame prior to a video signal to be reproduced. The decoded predictive error signal and the second motion signal are added to each other to reproduce the video signal.
Abstract: An adaptive system and method for data storage and communication which is primarily used for the storage and transmission of large volumes of data, especially video data, over band-limited communication channels. The video coder of the system uses a prediction video signal which is subtracted from the input image to define a processing region in which image blocks of predetermined dimensions have large prediction error. The prediction image is successively improved by iteratively processing large error image blocks until the error is reduced below a predetermined threshold. At different processing iterations and image blocks, different compression techniques can be used to improve the overall system efficiency.
Abstract: The present invention teaches an improved parallel telecine for converting a plurality of recorded images or frames of film, defined by a first and a second set of frames of film, to a digital data stream. The improved telecine comprises a plurality of image transfer or digitizing systems for respectively digitizing each of the recorded images or frames of film or groupings of frames of film. Each image transfer and digitizing system comprises an illuminator system for illuminating the respective frames or groupings of frames, and a camera system for converting the image of the respective frame or groupings of frames into a digital data stream. Each camera system additionally comprises a position sensor for detecting a first and a second pair of edges on a coordinated position tag at the edge of the film frame. Further, each camera system comprises an aligning mechanism for aligning the camera system in response to the set of edges of the respective coordinated position tag detected by the position sensor.
Abstract: A system and method for imaging and viewing, by a viewer, color and monochrome images as simultaneously three-dimensional and two-dimensional images. The invention comprises a camera device, a viewing device for displaying the images, a transmitter for transmitting a drive signal, and at least one pair of viewing glasses. The camera device includes a single imaging lens having a bifurcated, dual-aperture light valve, and a single image space for receiving and overlaying a plurality of left-eye images and a plurality of right-eye images at a field rate driven by the drive signal. The pair of viewing glasses includes a left-viewing-light valve and a right-viewing-light valve. In response to receiving the drive signal, the left-viewing-light valve opens and closes, synchronized with the field rate, for viewing the plurality of left-eye images.
Abstract: A subband coding method divides a luminance signal or a color difference signal in a digital video signal into a plurality of frequency bands in the vertical and horizontal directions of the spatial frequency region by executing a two-dimensional multilayer wavelet transform on each processing unit (field) of the digital video signal. Of the divided frequency bands of a luminance or color difference signal in the layer consisting of the highest frequency bands, an LH band consists of high frequency components in the vertical direction and low frequency components in the horizontal direction, an HL band consists of low frequency components in the vertical direction and high frequency components in the horizontal direction, and an HH band consists of high frequency components in both the horizontal and vertical directions; their quantization step sizes (Q_STEP_SIZE) are set in the relationship Q_STEP_SIZE(LH) < Q_STEP_SIZE(HL) < Q_STEP_SIZE(HH).
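Assuming hypothetical concrete step sizes that obey the ordering Q_STEP_SIZE(LH) < Q_STEP_SIZE(HL) < Q_STEP_SIZE(HH), band-wise uniform quantization can be sketched as:

```python
# Hypothetical step sizes: the LH band (high vertical / low horizontal
# frequencies) is quantized most finely, the HH band most coarsely.
Q_STEP_SIZE = {"LH": 8, "HL": 12, "HH": 16}

def quantize_band(coefficients, band):
    """Uniformly quantize one subband's wavelet coefficients with that
    band's step size."""
    step = Q_STEP_SIZE[band]
    return [round(c / step) for c in coefficients]
```

The ordering spends proportionally more bits on the band the eye is most sensitive to, at the expense of diagonal (HH) detail.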
Abstract: A data structure is disclosed for use in a two-way wireless system for processing equity trades and the like. The data structure is stored in a computer-readable memory and includes information used by an application program. The data structure comprises a plurality of data packets, each of which contains the information used by the application program as well as a sequence code and a volley code. The sequence code associates a subset of the data packets together. The volley code defines a hierarchical relationship among the subset of data packets. An order data packet has a hierarchical level that differs from that of one or more execution data packets. A many-to-one relationship exists between the execution data packets and the order data packet. Each execution data packet has an execution sequence number uniquely assigned by the application program. A two-way wireless system using such a data structure is also disclosed.
June 7, 1995
Date of Patent: August 18, 1998
Papyrus Technology Corp.
L. Thomas Patterson, Jr., Desmond Sean O'Neill, Stephen Tyler Carroll
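The packet layout described in the abstract above can be sketched with hypothetical field names; only the sequence-code grouping and the order/execution hierarchy from the abstract are modelled:

```python
from dataclasses import dataclass

@dataclass
class DataPacket:
    """One packet: application payload plus a sequence code tying related
    packets together and a volley code giving the hierarchical level."""
    payload: dict
    sequence_code: int  # shared by all packets belonging to one trade
    volley_code: int    # hierarchical level; order differs from execution

ORDER_LEVEL, EXECUTION_LEVEL = 0, 1

def executions_for_order(packets, order):
    """Many-to-one relationship: collect the execution packets that belong
    to a single order packet via the shared sequence code."""
    return [p for p in packets
            if p.sequence_code == order.sequence_code
            and p.volley_code == EXECUTION_LEVEL]
```

An order filled in several partial executions thus maps to one order packet and several execution packets under the same sequence code.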
Abstract: A system and method for wide angle imaging create a high resolution image using a convex primary mirror concentrically positioned relative to a concave secondary mirror and one or more detectors spherically juxtaposed. The radii of the primary and secondary mirrors are related by the square of the "golden ratio" to reduce low order aberrations. A fiber optic faceplate coupled to each detector corrects field curvature of the image which may then be detected with a conventional flat detector, such as a CCD camera.
Abstract: An apparatus and method for reducing shade effects associated with print quality control, typically generated while viewing or imaging a partly transparent/partly opaque printed material against a light diffusing surface. The reduction in shade effects is achieved by illuminating the printed material from above and viewing or imaging the printed material against a multiple volume light scatterer, thereby redistributing at least part of the light so that at least part of the scattered light illuminates shaded regions in the image or view obtained.
Abstract: A panoramic television surveillance system includes an image-sensing station having an imaging sensor with a line-format field of view mounted on a platform rotatable about an azimuthal scan-rotational axis. The imaging sensor collects a "live" television-like panoramic synoptic surveillance image of the full panoramic wide-angle field. The image data is delivered by an image-data-delivery line, which incorporates a rotary-joint data link, to an image-monitoring station. The image-monitoring station includes an image-data-processor, display-buffer-storage, and a synoptic panoramic wide-angle array of visual monitor displays to permit continuous synoptic evaluation of the full panoramic scene, up to 360° wide, at an observer-personnel position. Alternatively, the image-monitoring station includes the image-data processor and an image-data recorder for continuous recording of the image data for off-line evaluation.
Abstract: A multiple room portable camera system having a plurality of rooms, a camera unit, a switching unit, and a video printer. Each of the plurality of rooms has a monitor, a signal connector, and a control connector. The camera unit has a signal connector and a control connector for connecting to the signal connectors and control connectors in the plurality of rooms. The switching unit connects to each of the monitors, signal connectors, and control connectors of each of the rooms. Based on the connection of the control connector of the camera into the control connector of a particular room, the switching unit will route the camera signals from the camera, through the video printer, and return those signals only to the monitor of the room containing the camera.
February 17, 1994
Date of Patent: July 28, 1998
James D. Pritchett, Larry Gene Kaatz, Boris Germanishkis
Abstract: A furnace throat monitoring camera monitors the throat of a blast furnace through an opening defined in a surrounding wall of the throat. The furnace throat monitoring camera includes a wide-angle lens system for producing an optical image representing a condition in the throat, a zoom lens system for varying a size of the optical image produced by the wide-angle lens system, and an imaging device for capturing the optical image varied in size by the zoom lens system. The zoom lens system and the imaging device can be tilted in unison with each other by a tilting unit to observe any area of the optical image produced by the wide-angle lens system.
Abstract: A method is disclosed for encoding a signal with a three-dimensional image sequence using a series of left and right images. Each image in the left image series is a picture formed by non-interlaced or interlaced scanned left line images, and each image in the right image series is a picture formed by non-interlaced or interlaced scanned right line images. The left line images contained in the left picture are merged with the right line images contained in the right picture to produce an alternately arranged left and right line merged picture. The merged picture is encoded using an MPEG-2 compliant encoder.
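The line-merging step above can be sketched directly, with lines represented as arbitrary values:

```python
def merge_stereo(left_lines, right_lines):
    """Interleave the lines of a left picture and a right picture into one
    alternately arranged merged picture (left line first)."""
    merged = []
    for l, r in zip(left_lines, right_lines):
        merged.append(l)
        merged.append(r)
    return merged
```

The merged picture is then fed to an ordinary MPEG-2 encoder, which needs no knowledge that the input is stereoscopic.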
Abstract: An apparatus and a method of the present invention for coding input picture data are so contrived as to adaptively change the number (N) of unitary frames, which constitute a group of pictures (GOP), in conformity with the frame rate of the input picture data, thereby forming picture data of a predetermined unitary time in a predetermined unit of GOPs.
Abstract: A variable bit rate video (VBR) coding method includes the steps of (a) dividing an entire sequence of moving pictures into N blocks, (b) determining a quantizer scale group having M elements, each element being applied to MPEG I, P, and B pictures, (c) quantizing the N blocks divided in step (a) by the M quantizer scales determined in step (b), (d) producing M.times.N bit rate-distortion pairs as a result of step (c), (e) assigning optimal bits to the N blocks divided in step (a) by applying the BFOS algorithm to the M.times.N bit rate-distortion pairs produced in step (d), and (f) variable bit rate-coding the entire sequence of moving pictures by using the optimal number of bits assigned to each block in step (e).
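Step (e) above uses the BFOS algorithm; as a simplified stand-in (not the actual BFOS procedure), a Lagrangian sweep over the M×N rate-distortion pairs picks, for each block, the quantizer minimizing D + λ·R. Sweeping λ traces out operating points on the lower convex hull of each block's rate-distortion curve, which is the same set BFOS searches.

```python
def allocate_bits(rd_table, lam):
    """rd_table[n] is block n's list of (rate, distortion) pairs, one per
    quantizer scale. Return, per block, the index of the quantizer that
    minimizes distortion + lam * rate."""
    choice = []
    for pairs in rd_table:
        best = min(range(len(pairs)),
                   key=lambda m: pairs[m][1] + lam * pairs[m][0])
        choice.append(best)
    return best if False else choice
```

A small λ favours quality (more bits per block); a large λ favours rate, and λ is adjusted until the total assigned rate meets the target.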
Abstract: A video compression system which is based on the image data compression system developed by the Motion Picture Experts Group (MPEG) uses various group-of-fields configurations to reduce the number of binary bits used to represent an image composed of odd and even fields of video information, where each pair of odd and even fields defines a frame. According to a first method, each field in the group of fields is predicted using the closest field which has previously been predicted as an anchor field. According to a second method, intra fields (I-fields) and predictive fields (P-fields) are distributed in the sequence so that no two I-fields and/or no two P-fields are at adjacent locations in the sequence.
October 2, 1995
Date of Patent: December 29, 1998
Matsushita Electric Corporation of America
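The first method of the last abstract, predicting each field from the closest field already coded, reduces to a nearest-anchor lookup; `coded_order` is a hypothetical list of display indices of fields that have already been reconstructed:

```python
def closest_anchor(coded_order, target):
    """Return the display index of the already-coded field nearest to the
    field at display index `target`; that field serves as the anchor for
    prediction."""
    return min(coded_order, key=lambda i: abs(i - target))
```

Using the temporally closest anchor keeps the motion between anchor and predicted field small, which is what makes the prediction cheap to code.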