Abstract: The present invention selectively applies one of a plurality of VLC tables stored in a memory to encode the coded block pattern of a macroblock according to the number of blocks within the macroblock that contain an object, that number being obtained from shape information, thereby reducing the amount of data transmitted and increasing coding efficiency.
The present invention also selectively applies one of a plurality of VLD tables stored in a memory to decode the coded block pattern of a macroblock according to the number of blocks within the macroblock that contain an object, that number being obtained from shape information.
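The selection mechanism the abstract describes can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the codewords in `VLC_TABLES`, the function names, and the 16x16 macroblock / 8x8 block layout are all assumptions made for the example.

```python
# Hypothetical sketch: the shape (alpha) mask determines how many 8x8
# blocks of a 16x16 macroblock contain object pixels, and that count
# selects which VLC table encodes the coded block pattern. Codewords
# below are made up for illustration, not taken from any standard.

# One illustrative VLC table per object-block count (only counts 1 and 2
# shown; a full coder would also hold tables for counts 3 and 4).
VLC_TABLES = {
    1: {0: "1", 1: "01"},
    2: {0: "1", 1: "01", 2: "001", 3: "0001"},
}

def count_object_blocks(alpha_mask):
    """Count the 8x8 blocks of a 16x16 macroblock that contain object pixels."""
    count = 0
    for by in (0, 8):
        for bx in (0, 8):
            if any(alpha_mask[y][x]
                   for y in range(by, by + 8)
                   for x in range(bx, bx + 8)):
                count += 1
    return count

def encode_cbp(cbp, n_object_blocks, tables=VLC_TABLES):
    """Look up the coded block pattern in the table chosen by the count."""
    return tables[n_object_blocks][cbp]
```

A table sized to the object-block count has fewer entries and therefore shorter average codewords when few blocks hold objects, which is the abstract's stated source of coding efficiency; the decoder side would mirror this with VLD tables indexed by the same count.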
Type:
Grant
Filed:
May 7, 1999
Date of Patent:
May 27, 2003
Assignee:
Hyundai Curitel, Inc.
Inventors:
Jae-Kyoon Kim, Jin-Hak Lee, Kwang-Hoon Park, Joo-Hee Moon, Sung-Moon Chun, Jae Won Chung
Abstract: A modem comprising entirely digital components and circuitry for receiving a digital signal of a first format from a first device, converting the digital signal to a digital signal of a second format and transmitting the digital signal of a second format to a second device. In one embodiment, a digital signal processor replaces at least a portion of the digital components and circuitry for receiving a digital signal of a first format and converting the digital signal to a digital signal of a second format and transmitting the digital signal of a second format to a second device.
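The receive-convert-transmit path the abstract describes can be sketched in software terms. The concrete formats here (unsigned 8-bit samples in, signed 16-bit samples out) and the function names are illustrative assumptions, not details from the patent; the point is only that the signal stays digital end to end.

```python
# Minimal sketch of an all-digital conversion path: samples arrive from
# the first device in one digital format and leave for the second device
# in another, with no analog stage in between. The formats chosen here
# (unsigned 8-bit -> signed 16-bit linear) are illustrative assumptions.

def convert(samples_u8):
    """Re-map unsigned 8-bit samples to signed 16-bit linear values."""
    return [(s - 128) * 256 for s in samples_u8]

def modem_path(incoming_u8):
    """Receive first-format samples, convert, and hand off for transmission."""
    return convert(incoming_u8)
```

In the embodiment the abstract mentions, a digital signal processor would perform the role of `convert`, replacing dedicated conversion circuitry with a programmable stage.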
Abstract: A method for increasing the accuracy of an analog neural network which computes a sum-of-products between an input vector and a stored weight pattern is described. In one embodiment of the present invention, the method comprises initially training the network by programming the synapses with a certain weight pattern. The training may be carried out using any standard learning algorithm. Preferably, a back-propagation learning algorithm is employed. Next, the network is baked at an elevated temperature to effectuate a change in the weight pattern previously programmed during initial training. This change results from a charge redistribution which occurs within each of the synapses of the network. After baking, the network is then retrained to compensate for the change resulting from the charge redistribution. The baking and retraining steps may be successively repeated to increase the accuracy of the neural network to any desired level.
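The train-bake-retrain cycle above can be simulated in miniature. This is a toy sketch under loudly stated assumptions: the network is reduced to a single linear neuron trained by stochastic gradient descent (standing in for "any standard learning algorithm"), and baking is modeled as a uniform drift of the weights toward zero, a stand-in for the charge redistribution the abstract attributes to elevated temperature.

```python
# Toy simulation of the bake-and-retrain cycle. The single linear neuron,
# the drift factor, and the training data are illustrative assumptions,
# not the patented analog hardware.

def sum_of_products(w, x):
    """The network's output: dot product of weights and input vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train(w, data, lr=0.05, epochs=500):
    """Program the weights by stochastic gradient descent on (input, target) pairs."""
    w = list(w)
    for _ in range(epochs):
        for x, target in data:
            err = target - sum_of_products(w, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def bake(w, drift=0.9):
    """Model charge redistribution as a uniform drift of the stored weights."""
    return [wi * drift for wi in w]

def mse(w, data):
    """Mean squared error of the network over the data set."""
    return sum((t - sum_of_products(w, x)) ** 2 for x, t in data) / len(data)

# Target function y = 2*x1 - x2, exactly realizable by the neuron.
DATA = [([1, 0], 2), ([0, 1], -1), ([1, 1], 1), ([2, 1], 3)]
```

Running one cycle shows the pattern the abstract relies on: baking degrades the programmed weights and raises the error, and retraining afterwards compensates for the drift, restoring accuracy.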