Patents by Inventor Manuel J. Alvarez

Manuel J. Alvarez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20020101367
    Abstract: Embodiments of a compression/decompression (codec) system may include a plurality of data compression engines each implementing a different data compression algorithm. A codec system may be designed for the reduction of data bandwidth and storage requirements and for compressing/decompressing data. Uncompressed data may be compressed using a plurality of compression engines in parallel, with each engine compressing the data using a different lossless data compression algorithm. At least one of the data compression engines may implement a parallel lossless data compression algorithm designed to process stream data at more than a single byte or symbol at one time. The plurality of different versions of compressed data generated by the different compression algorithms may be examined to determine an optimal version of the compressed data according to one or more predetermined criteria. A codec system may be integrated in a processor, a system memory controller or elsewhere within a system.
    Type: Application
    Filed: January 11, 2002
    Publication date: August 1, 2002
    Applicant: Interactive Silicon, Inc.
    Inventors: Peter D. Geiger, Manuel J. Alvarez, Thomas A. Dye
  • Publication number: 20020091905
    Abstract: Embodiments of a compression/decompression (codec) system may include a plurality of parallel data compression and/or parallel data decompression engines designed for the reduction of data bandwidth and storage requirements and for compressing/decompressing data. The plurality of compression/decompression engines may each implement a parallel lossless data compression/decompression algorithm. The codec system may split incoming uncompressed or compressed data among the plurality of compression/decompression engines. Each of the plurality of compression/decompression engines may compress or decompress a particular part of the data. The codec system may then merge the portions of compressed or uncompressed data output from the plurality of compression/decompression engines. The codec system may implement a method for performing parallel data compression and/or decompression designed to process stream data at more than a single byte or symbol at one time.
    Type: Application
    Filed: January 11, 2002
    Publication date: July 11, 2002
    Applicant: Interactive Silicon, Inc.
    Inventors: Peter D. Geiger, Manuel J. Alvarez, Thomas A. Dye
  • Publication number: 20020073298
    Abstract: A method and system for allowing a processor or I/O master to address more system memory than physically exists are described. A Compressed Memory Management Unit (CMMU) may keep least recently used pages compressed, and most recently and/or frequently used pages uncompressed in physical memory. The CMMU translates system addresses into physical addresses, and may manage the compression and/or decompression of data at the physical addresses as required. The CMMU may provide data to be compressed or decompressed to a compression/decompression engine. In some embodiments, the data to be compressed or decompressed may be provided to a plurality of compression/decompression engines that may be configured to operate in parallel. The CMMU may pass the resulting physical address to the system memory controller to access the physical memory. A CMMU may be integrated in a processor, a system memory controller or elsewhere within the system.
    Type: Application
    Filed: July 26, 2001
    Publication date: June 13, 2002
    Inventors: Peter Geiger, Manuel J. Alvarez, Thomas A. Dye
  • Publication number: 20010054131
    Abstract: A system and method for performing parallel data compression which processes stream data at more than a single byte or symbol (character) at a time. The parallel compression engine modifies a single-stream, dictionary-based data compression method to provide scalable, high-bandwidth compression. The parallel compression method examines a plurality of symbols in parallel, thus providing improved compression performance. Several types of devices and components are described that may include the parallel compression engine.
    Type: Application
    Filed: March 27, 2001
    Publication date: December 20, 2001
    Inventors: Manuel J. Alvarez, Peter Geiger, Thomas A. Dye
  • Publication number: 20010038642
    Abstract: A parallel decompression system and method that decompresses input compressed data in one or more decompression cycles, with a plurality of tokens typically being decompressed in each cycle in parallel. A parallel decompression engine may include an input for receiving compressed data, a history window, and a plurality of decoders for examining and decoding a plurality of tokens from the compressed data in parallel in a series of decompression cycles. Several devices are described that may include the parallel decompression engine, including intelligent devices, network devices, adapters and other network connection devices, consumer devices, set-top boxes, digital-to-analog and analog-to-digital converters, digital data recording, reading and storage devices, optical data recording, reading and storage devices, solid state storage devices, processors, bus bridges, memory modules, and cache controllers.
    Type: Application
    Filed: March 28, 2001
    Publication date: November 8, 2001
    Applicant: Interactive Silicon, Inc.
    Inventors: Manuel J. Alvarez, Peter Geiger, Thomas A. Dye
  • Patent number: 6208273
    Abstract: A system and method for performing parallel data compression which processes stream data at more than a single byte or symbol (character) at a time. The parallel compression engine modifies a single-stream, dictionary-based (or history-table-based) data compression method, such as that described by Lempel and Ziv, to provide scalable, high-bandwidth compression. The parallel compression method examines a plurality of symbols in parallel, thus providing greatly increased compression performance. The method first involves receiving uncompressed data, wherein the uncompressed data comprises a plurality of symbols. The method maintains a history table comprising entries, wherein each entry comprises at least one symbol. The method operates to compare a plurality of symbols with entries in the history table in a parallel fashion, wherein this comparison produces compare results. The method then determines match information for each of the plurality of symbols based on the compare results.
    Type: Grant
    Filed: October 20, 1999
    Date of Patent: March 27, 2001
    Assignee: Interactive Silicon, Inc.
    Inventors: Thomas A. Dye, Manuel J. Alvarez, II, Peter Geiger
  • Patent number: 5608802
    Abstract: A data ciphering device that has special application in implementing the Digital European Cordless Telephone (DECT) standard data ciphering algorithm, which requires a lengthy procedure of key loading and logic operations during the pre-ciphering and ciphering stages, each requiring clocks operating at different frequencies. The device performs parallel-mode loading of the shift registers with a ciphering keyword. It also calculates, in a first cycle, during the pre-ciphering, the values of the bits of each shift register that determine the value of the next shift in order to, in a second cycle, effect parallel-mode shifting in these registers with a value equal to the sum of the two previous shift values. During the ciphering process, the shifting is done in the registers, in parallel mode and in a single data clock cycle, with a value equivalent to the serial value obtained by the algorithm.
    Type: Grant
    Filed: December 27, 1994
    Date of Patent: March 4, 1997
    Assignee: Alcatel Standard Electrica S.A.
    Inventor: Manuel J. Alvarez Alvarez
  • Patent number: 5454093
    Abstract: A computer system comprises a data processor, a main memory, a cache memory and an inpage buffer. The cache memory is coupled to the main memory to receive data therefrom and is coupled to the processor to transfer data thereto. The inpage buffer is coupled to the main memory to receive data therefrom, coupled to the cache memory to transfer data thereto, and coupled to the processor to transfer data thereto. Part of a line of data is originally transferred to the cache memory bypassing the inpage buffer to give the processor immediate access to the data which it needs. The remainder of the line of data is subsequently transferred to the inpage buffer, and then the processor is given access to the contents of the inpage buffer. The processor accesses the data in the cache memory with one set of clocks while the remainder of the line of data is transferred to the inpage buffer with another set of clocks. The two sets of clocks optimize the operation of the processor and the main memory.
    Type: Grant
    Filed: February 25, 1991
    Date of Patent: September 26, 1995
    Assignee: International Business Machines Corporation
    Inventors: Jamee Abdulhafiz, Manuel J. Alvarez, II, Glenn D. Gilda
  • Patent number: 4953077
    Abstract: A data processing system having a first logical device capable of sending and receiving clocked electronic data and a second logical device connected to the first logical device, the second logical device also being capable of sending and receiving clocked electronic data. A controller is connected to the first and second logical devices for controlling data transfer therebetween. The controller includes a clock edge generator and gating logic connected thereto for allowing the first logical device to accept data and the second logical device to send data during a time interval of an odd number of clock edges.
    Type: Grant
    Filed: May 15, 1987
    Date of Patent: August 28, 1990
    Assignee: International Business Machines Corporation
    Inventors: Manuel J. Alvarez, II, Earl W. Jackson, Jr.
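
The history-table compression described in patent 6208273 and publication 20010054131 above can be illustrated with a toy sketch. The window size, match threshold, and token format here are illustrative assumptions, not the patented design, and the concurrent hardware comparisons are simulated with a loop:

```python
WINDOW = 16  # history window size (illustrative choice)

def compress(data: bytes):
    """Return a list of tokens: ('lit', byte) or ('match', offset, length)."""
    tokens = []
    i = 0
    while i < len(data):
        start = max(0, i - WINDOW)
        history = data[start:i]
        best_off, best_len = 0, 0
        # Compare the current symbols against every history offset;
        # in hardware these comparisons would run concurrently.
        for off in range(1, len(history) + 1):
            length = 0
            while (i + length < len(data)
                   and data[i - off + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = off, length
        if best_len >= 2:          # emit a back-reference token
            tokens.append(('match', best_off, best_len))
            i += best_len
        else:                      # emit a literal symbol
            tokens.append(('lit', data[i]))
            i += 1
    return tokens

def decompress(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):
                out.append(out[-off])  # copy from the history window
    return bytes(out)
```

Repeated input such as `b"abcabcabcx"` yields literal tokens followed by a single long match, which round-trips through `decompress`.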
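
The Compressed Memory Management Unit of publication 20020073298 above keeps recently used pages uncompressed and compresses the least recently used ones. A minimal software model of that policy, with `zlib` standing in for the hardware codec engines and the page size and uncompressed-page budget chosen arbitrarily for illustration:

```python
import zlib
from collections import OrderedDict

PAGE = 4096          # page size (illustrative)
MAX_UNCOMPRESSED = 4 # pages kept uncompressed (illustrative)

class CMMU:
    """Toy model: most recently used pages stay uncompressed,
    evicted pages are stored compressed."""
    def __init__(self):
        self.hot = OrderedDict()   # page number -> raw bytes (MRU last)
        self.cold = {}             # page number -> compressed bytes

    def _touch(self, pno):
        if pno in self.hot:
            self.hot.move_to_end(pno)          # mark most recently used
        else:
            raw = (zlib.decompress(self.cold.pop(pno))
                   if pno in self.cold else bytes(PAGE))
            self.hot[pno] = raw
        while len(self.hot) > MAX_UNCOMPRESSED:
            old, raw = self.hot.popitem(last=False)  # evict LRU page
            self.cold[old] = zlib.compress(raw)      # keep it compressed
        return self.hot[pno]

    def read(self, addr):
        page = self._touch(addr // PAGE)
        return page[addr % PAGE]

    def write(self, addr, value):
        pno, off = divmod(addr, PAGE)
        raw = bytearray(self._touch(pno))
        raw[off] = value
        self.hot[pno] = bytes(raw)
```

Touching enough distinct pages pushes an older page into the compressed pool; a later access transparently decompresses it again, which mirrors the address translation and on-demand codec dispatch the abstract describes.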
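
The ciphering device of patent 5608802 above generates a keystream from shift registers. The sketch below is a generic feedback-shift-register keystream cipher for illustration only; it is not the actual DECT ciphering algorithm, and the register lengths, tap positions, and key-loading scheme are arbitrary assumptions:

```python
def step(state, taps, nbits):
    """Advance one Fibonacci LFSR by a single bit; return (state, out)."""
    out = state & 1
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1          # feedback from the tap bits
    state = (state >> 1) | (fb << (nbits - 1))
    return state, out

def keystream(key, length):
    """Combine three LFSRs loaded from 'key' into a byte keystream."""
    regs = [  # [state, taps, width] -- illustrative parameters
        [key & 0x7FFF or 1, (0, 14), 15],
        [(key >> 15) & 0x1FFFF or 1, (0, 11), 17],
        [(key >> 32) & 0x7FFFF or 1, (0, 13), 19],
    ]
    out = bytearray()
    for _ in range(length):
        byte = 0
        for _ in range(8):
            bit = 0
            for r in regs:
                r[0], b = step(r[0], r[1], r[2])
                bit ^= b                # XOR the register outputs
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def cipher(key, data):
    """XOR data with the keystream; the same call deciphers."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))
```

Because ciphering is a keystream XOR, applying `cipher` twice with the same key restores the plaintext; the parallel-shift trick in the abstract is a hardware optimization of the bit-serial stepping shown here.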