Patents Examined by J. Peikari
  • Patent number: 5742831
    Abstract: Methods and apparatus for maintaining cache coherency for pending load operations. The processor is selectively stalled only when certain relationships exist between the address of an incoming store instruction and the addresses of the pending load instructions. The address specified by an incoming store instruction is compared with all the addresses specified by the pending load instructions that are stored in a bus queue. The processor is stalled from issuing subsequent instructions and executing the store instruction if the comparison results in a match of the store instruction address with any of the addresses of the pending load instructions. Instruction issue and execution of the store instruction are unstalled when data from the matching load instruction address returns. Alternatively, a count of the number of load instructions pending in the bus queue for each specified address may be maintained.
    Type: Grant
    Filed: June 30, 1994
    Date of Patent: April 21, 1998
    Assignee: Intel Corporation
    Inventor: Kenneth Creta
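The stall check in the abstract above can be illustrated with a minimal sketch. This is not Intel's implementation; the class and method names are invented for illustration, and it models the alternative embodiment that keeps a per-address count of pending loads:

```python
class BusQueue:
    """Tracks addresses of load instructions whose data has not yet returned."""

    def __init__(self):
        # Alternative embodiment from the abstract: a count of pending
        # loads per address rather than one entry per load.
        self.pending = {}  # address -> number of pending loads

    def issue_load(self, addr):
        self.pending[addr] = self.pending.get(addr, 0) + 1

    def load_data_returned(self, addr):
        # When data returns, decrement the count; once it reaches zero,
        # a stalled store to this address may be unstalled.
        if self.pending.get(addr, 0) > 0:
            self.pending[addr] -= 1
            if self.pending[addr] == 0:
                del self.pending[addr]

    def store_must_stall(self, addr):
        # Compare the store address against every pending load address;
        # a match means the store (and subsequent issue) must stall.
        return addr in self.pending
```

The key point of the scheme is the selectivity: a store to an address with no pending load proceeds without any stall.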
  • Patent number: 5737564
    Abstract: A cache memory system includes a plurality of processors and a plurality of caches respectively assigned to the plurality of processors. Each cache is mapped to a different region of the main memory, so that memory contention is lessened to a great extent. Based on a memory address received by a cache, the cache compares the memory address to its assigned region of addresses. If the memory address falls within the assigned region for the cache, the cache then examines its contents to determine if there is an address hit in the cache. If the memory address does not fall within the assigned region for the cache, the cache does not examine its contents to determine if there is an address hit in the cache, since an address hit is not possible in that case.
    Type: Grant
    Filed: June 5, 1995
    Date of Patent: April 7, 1998
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Salim A. Shah
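The region check described above can be sketched as follows. This is a toy, fully associative model under assumed names, not the patented circuit; the point is that the content examination is skipped entirely for out-of-region addresses:

```python
class RegionCache:
    def __init__(self, region_start, region_end):
        self.region_start = region_start   # inclusive
        self.region_end = region_end       # exclusive
        self.lines = {}                    # address -> data (toy model)
        self.tag_lookups = 0               # content examinations performed

    def in_region(self, addr):
        return self.region_start <= addr < self.region_end

    def lookup(self, addr):
        # Outside the assigned region a hit is impossible, so the cache
        # never examines its contents for such an address.
        if not self.in_region(addr):
            return None
        self.tag_lookups += 1
        return self.lines.get(addr)

    def fill(self, addr, data):
        if self.in_region(addr):
            self.lines[addr] = data
```

Because each cache owns a disjoint region, at most one cache per access performs the lookup, which is the source of the reduced contention.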
  • Patent number: 5737636
    Abstract: A load queue is provided in a load/store unit of a superscalar processor that includes a real page number buffer for storing a real page number for each instruction entry in the load queue. The load queue also includes a real page number comparator coupled to the real page number buffer for comparing executing load instruction entries with queued load instruction entries in the load queue. The load queue further includes a cache line modified register coupled to the data cache. The cache line modified register marks the queued load instruction entries when a cache line of the data cache addressed by the queued load instruction entry has been modified. In a preferred embodiment, when the executing load instruction is out of program order with respect to one of the queued load instructions, and the modified cache line register has marked the queued load instruction, the load queue signals a sequencer unit to cancel the queued load instruction.
    Type: Grant
    Filed: January 18, 1996
    Date of Patent: April 7, 1998
    Assignee: International Business Machines Corporation
    Inventors: David George Caffo, Christopher Anthony Freymuth
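The cancellation rule in the abstract above can be modeled roughly as follows. The names and the sequence-number convention are assumptions (queued entries are loads that executed early; the executing load is older in program order), not the patent's exact structure:

```python
class LoadQueue:
    def __init__(self):
        # Each entry: sequence number, real page number, modified flag
        # (the "cache line modified register" marking of the abstract).
        self.entries = []

    def enqueue(self, seq, real_page_number):
        self.entries.append(
            {"seq": seq, "rpn": real_page_number, "modified": False})

    def cache_line_modified(self, real_page_number):
        # Mark queued entries whose addressed cache line was modified.
        for e in self.entries:
            if e["rpn"] == real_page_number:
                e["modified"] = True

    def execute_load(self, exec_seq, real_page_number):
        # A queued (already executed) load is cancelled when the currently
        # executing load is older in program order, the real page numbers
        # match, and the line was modified in between.
        cancelled = [e["seq"] for e in self.entries
                     if e["rpn"] == real_page_number
                     and e["seq"] > exec_seq
                     and e["modified"]]
        self.entries = [e for e in self.entries if e["seq"] not in cancelled]
        return cancelled  # sequencer unit would replay these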
  • Patent number: 5721956
    Abstract: Buffer space and disk bandwidth resources in a continuous media server are continuously re-allocated in order to optimize the number of continuous media requests which may be concurrently serviced at guaranteed transfer rates using on demand paging. Disk scheduling is provided to ensure that whenever an admitted request references a page of data, the page is available in a buffer for transfer to a client. Data for continuous media data files are stored on disk or held in the buffer to eliminate disk bandwidth limitations associated with concurrently servicing any number or combination of requests, provided buffer space is sufficient. Multiple requests for continuous media data files are selectively included in groups for servicing in order to provide that buffer and disk bandwidth requirements are maintained at a minimum and within available resource capabilities.
    Type: Grant
    Filed: May 15, 1995
    Date of Patent: February 24, 1998
    Assignee: Lucent Technologies Inc.
    Inventors: Clifford Eric Martin, Banu Ozden, Rajeev Rastogi, Abraham Silberschatz
  • Patent number: 5721863
    Abstract: A structure and method of operation of a cache memory are provided. The cache memory is organized such that the data on a given line of any page of the main memory is stored on the same line of a page of the cache memory. Two address memories are provided, one containing the first eight bits of the virtual address of the page of the data in main memory and the second containing the entire real page address in main memory. When an address is asserted on the bus, the line component of the address causes the corresponding line of each page of the cache memory to be read out to a multiplexor. At the same time, the eight-bit component of the virtual address is compared to the eight bits of each line stored in the first memory, and if a match is found, the data on that line from that page of the cache memory is read to the CPU. Also, the entire real address is compared in the second memory, and if a match does not occur, the data sent from the cache to the CPU is flagged as invalid.
    Type: Grant
    Filed: January 29, 1996
    Date of Patent: February 24, 1998
    Assignee: International Business Machines Corporation
    Inventors: James J. Covino, Roy Childs Flaker, Alan Lee Roberts, Jose Roriz Sousa
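The two-compare scheme above (fast short virtual tag, slower full real tag) can be reduced to a small sketch. Field and method names are illustrative, not from the patent:

```python
class TwoTagCacheLine:
    def __init__(self):
        self.vtag = None   # 8-bit virtual page tag (fast compare)
        self.rtag = None   # full real page address (validation compare)
        self.data = None

    def fill(self, vtag, rtag, data):
        self.vtag, self.rtag, self.data = vtag & 0xFF, rtag, data

    def access(self, vtag, rtag):
        # Fast path: the 8-bit virtual compare decides whether data is
        # read out to the CPU at all.
        if self.vtag != (vtag & 0xFF):
            return None, False            # no data read out
        # The full real-address compare completes later; a mismatch
        # flags the already-delivered data as invalid.
        valid = (self.rtag == rtag)
        return self.data, valid
```

The design trades a rare false virtual-tag hit (caught by the real compare) for a much faster common-case delivery.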
  • Patent number: 5717953
    Abstract: An information transfer system comprises a random-accessible recording medium, a request information input section connected to a detachable information recording device, and an information output section connected to the detachable information recording device, so that information read out at random from the recording medium, on the basis of request information input through the request information input section, is output through the information output section. The information recording device stores information input through an information input section into a temporary recording section and thereafter records it on a recording medium comprised of a plurality of recording medium pieces, thereby making it possible to record information correctly and at high speed even when a recording medium having a relatively low write speed is employed.
    Type: Grant
    Filed: January 11, 1995
    Date of Patent: February 10, 1998
    Assignee: Sony Corporation
    Inventors: Kyoya Tsutsui, Naoya Haneda
  • Patent number: 5717901
    Abstract: A programmable variable depth and width random-access memory circuit is provided. The memory circuit contains rows and columns of memory cells for storing data. A row decoder is used to address individual rows of the memory cells. Column address circuitry receives a column address signal and a width and depth selection signal. A column decoder within the column address circuitry addresses one or more columns of memory cells of the RAM array based on the selected width of the array. The output of the column decoder is routed to the appropriate column or columns of memory cells by a pattern of fixed connections and a group of programmable multiplexers. The number of data output lines to which data signals are provided is determined by the selected width of the RAM array. The output circuitry contains a group of programmable demultiplexers and a routing array having a pattern of fixed connections suitable for passing data signals from the RAM array to the selected number of data output lines.
    Type: Grant
    Filed: November 8, 1995
    Date of Patent: February 10, 1998
    Assignee: Altera Corporation
    Inventors: Chiakang Sung, Wanli Chang, Joseph Huang, Richard G. Cliff
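The width/depth trade-off above can be illustrated with a software model of a fixed cell pool presented at a selectable width, with the address routed to the corresponding group of columns. The sizes and names are assumptions for illustration, not Altera's circuit:

```python
class ConfigurableRAM:
    TOTAL_BITS = 2048  # fixed physical cell count

    def __init__(self, width):
        # The width/depth selection signal: depth follows from the
        # fixed cell count and the chosen width.
        assert self.TOTAL_BITS % width == 0
        self.width = width
        self.depth = self.TOTAL_BITS // width
        self.cells = [0] * self.TOTAL_BITS

    def write(self, addr, value):
        # Route the data bits to the columns selected for this width.
        base = addr * self.width
        for i in range(self.width):
            self.cells[base + i] = (value >> i) & 1

    def read(self, addr):
        base = addr * self.width
        return sum(self.cells[base + i] << i for i in range(self.width))
```

In the patent the routing is done by fixed connections plus programmable multiplexers/demultiplexers; here the multiplication by `width` plays that role.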
  • Patent number: 5710903
    Abstract: A data processing system capable of simultaneously processing a plurality of partial purge requests stored in a purge address stack and of reducing the effective time necessary for carrying out a partial purge process for the partial purging of an address translation lookaside buffer is disclosed. The data processing system comprises a purge address stack (106) having a plurality of registers (106a to 106d) for storing a plurality of partial purge requests, a plurality of comparators (109a to 109d) associated with the registers of the purge address stack respectively and functioning for both gaining normal access to an address translation lookaside buffer and carrying out a partial purge process, and a comparator (110) to be used when an address translator (107) operates.
    Type: Grant
    Filed: March 9, 1995
    Date of Patent: January 20, 1998
    Assignees: Hitachi, Ltd., Hitachi Computer Engineering Co., Ltd.
    Inventors: Taiji Horiuchi, Kuniki Toubaru, Hiromichi Kainou
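The purge address stack above can be sketched as follows. The idea, under assumed names, is that a TLB lookup compares against all queued partial-purge requests in parallel, so a hit on a to-be-purged page is treated as a miss without waiting for the purge to complete:

```python
class TLBWithPurgeStack:
    STACK_SIZE = 4  # registers 106a to 106d in the abstract

    def __init__(self):
        self.entries = {}      # virtual page -> real page
        self.purge_stack = []  # pending partial purge addresses

    def request_partial_purge(self, vpage):
        if len(self.purge_stack) < self.STACK_SIZE:
            self.purge_stack.append(vpage)
            return True
        return False           # stack full; requester must wait

    def lookup(self, vpage):
        # Comparators check the purge stack in parallel with normal
        # access: a hit on a queued purge address behaves as a miss.
        if vpage in self.purge_stack:
            return None
        return self.entries.get(vpage)

    def complete_purges(self):
        # Background draining: actually invalidate and empty the stack.
        for vpage in self.purge_stack:
            self.entries.pop(vpage, None)
        self.purge_stack.clear()
```

Holding several requests at once is what reduces the effective partial-purge time: requesters are not serialized behind each purge.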
  • Patent number: 5701437
    Abstract: According to this invention, a dual-memory managing apparatus is applied to a system in which a plurality of memories and a plurality of processors are connected to each other through a data bus. The dual-memory managing apparatus performs control when a memory copy operation is carried out from at least one first memory to at least one second memory.
    Type: Grant
    Filed: September 10, 1996
    Date of Patent: December 23, 1997
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Morishige Kinjo, Eiji Ishibashi
  • Patent number: 5692152
    Abstract: A cache system has a large master cache and smaller slave caches. The slave caches are coupled to the processor's pipelines and are kept small and simple to increase their speed. The master cache is set-associative and performs many of the complex cache management operations for the slave caches, freeing the slaves of these bandwidth-robbing duties. The master cache has a tag pipeline for accessing the tag RAM array, and a data pipeline for accessing the data RAM array. The tag pipeline is optimized for fast access of the tag RAM array, while the data pipeline is optimized for overall data transfer bandwidth. The tag pipeline and the data pipeline are bound together for retrieving the first sub-line of a new miss from the slave cache. Subsequent sub-lines only use the data pipeline, freeing the tag pipeline for other operations. Bus snoops and cache management operations can use just the tag pipeline without impacting data bandwidth.
    Type: Grant
    Filed: May 14, 1996
    Date of Patent: November 25, 1997
    Assignee: Exponential Technology, Inc.
    Inventors: Earl T. Cohen, Jay C. Pattin

  • Patent number: 5680574
    Abstract: The control unit arbitrarily selects a disk unit among the disk units that are inactive when the control unit receives an input/output request involving either a read or staging. For a write request, the control unit selects a master disk unit in the disk unit group for the immediate writing of data, while a disk unit other than the master disk unit is preferably selected to execute reads and staging. After-write processing is used to write the data into the other disk units once the write request has been completed by writing to the master disk unit.
    Type: Grant
    Filed: December 12, 1994
    Date of Patent: October 21, 1997
    Assignee: Hitachi, Ltd.
    Inventors: Akira Yamamoto, Takao Satoh, Shigeo Honma, Yoshihiro Asaka, Yoshiaki Kuwahara, Hiroyuki Kitajima
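The selection policy above can be sketched with a toy mirrored-group model. Class and method names are hypothetical; the essentials are that reads prefer an inactive non-master unit and that writes complete on the master immediately, with the remaining units updated later:

```python
import random

class MirroredDiskGroup:
    def __init__(self, unit_names, master):
        self.units = {name: {"busy": False, "data": {}}
                      for name in unit_names}
        self.master = master
        self.after_write_queue = []   # (unit, block, value) updates owed

    def select_for_read(self):
        # Arbitrarily pick an inactive unit other than the master.
        idle = [n for n, u in self.units.items()
                if not u["busy"] and n != self.master]
        return random.choice(idle) if idle else self.master

    def write(self, block, value):
        # The write request completes as soon as the master is written...
        self.units[self.master]["data"][block] = value
        # ...and the other units are brought up to date by after-write.
        for name in self.units:
            if name != self.master:
                self.after_write_queue.append((name, block, value))

    def run_after_write(self):
        for name, block, value in self.after_write_queue:
            self.units[name]["data"][block] = value
        self.after_write_queue.clear()
```

Steering reads away from the master keeps the master free to absorb writes with low latency.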
  • Patent number: 5675544
    Abstract: A memory circuit (14) is provided having a data register (20) coupled to the output of the memory cell array (16). The output of the data register (20) may be selectively output, allowing a plurality of memory circuits (14) to be tested in parallel with a substantial increase in efficiency. Furthermore, test data can be written to the memory cell arrays (16) while previous test data is read from the memory circuits, for optimum efficiency.
    Type: Grant
    Filed: April 21, 1992
    Date of Patent: October 7, 1997
    Assignee: Texas Instruments Incorporated
    Inventor: Masashi Hashimoto
  • Patent number: 5675767
    Abstract: A method for dynamically detecting loss of map integrity in a form of system-managed storage (SMS). In SMS, maps are used to define access paths to data and to allocate and reallocate storage resources among applications running thereon. The method steps include incorporating, as an indivisible part of an overwriting command, the duplication of map information by appending a portion of it to each data block in store, and detecting loss of map integrity as a function of a comparison mismatch between the portion stored with a counterpart data block and the map upon each read/write access.
    Type: Grant
    Filed: November 10, 1992
    Date of Patent: October 7, 1997
    Assignee: International Business Machines Corporation
    Inventors: Robert Baird, Thomas Beretvas, Gerald Parks Bozman, Richard Roland Guyette, Paul Hodges, Alexander Stafford Lett, James Joseph Myers, William Harold Tetzlaff
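The integrity check above reduces to a small sketch. The choice of map fragment appended to each block (here, the block's own map entry) is an assumption for illustration; the patent only requires that a portion of the map travel with the data:

```python
class MappedStore:
    def __init__(self):
        self.map = {}     # logical block -> physical location
        self.blocks = {}  # physical location -> (data, map fragment)

    def write(self, logical, physical, data):
        # Indivisible part of the overwrite: duplicate map information
        # by appending a fragment of it to the data block itself.
        self.map[logical] = physical
        self.blocks[physical] = (data, ("map-entry", logical, physical))

    def read(self, logical):
        physical = self.map[logical]
        data, fragment = self.blocks[physical]
        # Loss of map integrity shows up as a mismatch between the
        # fragment stored with the block and the live map.
        if fragment != ("map-entry", logical, physical):
            raise RuntimeError("map integrity lost for block %r" % logical)
        return data
```

A corrupted map that aliases a logical block to the wrong location is caught on the very next access, rather than silently returning the wrong data.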
  • Patent number: 5668974
    Abstract: A memory having variable interleaving levels and associated configurator circuit which provides for the optimum level of interleaving based on the memory configuration. A number of independently addressable storage modules may be installed in the memory, the modules having various capacities which are usually multiples of a basic capacity. The configurator circuit receives a first field (ALOW) of least significant bits from the address for the desired memory entry and a second field (AHIGH) of bits of greater weight from the memory address. According to the number of the modules present in the memory and their capacities, the configurator circuit generates a module selection signal for selecting from the various modules present and a plurality of signals (MBIT) representing the memory module address bits. The configurator circuit thereby configures the memory with the highest levels of interleaving allowed by the capacity of the modules installed and properly addresses the selected module.
    Type: Grant
    Filed: May 23, 1994
    Date of Patent: September 16, 1997
    Assignee: Bull HN Information Systems
    Inventors: Antonio Grassi, Daniele Zanzottera
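The address split above can be illustrated with a minimal configurator model: the low-order bits (ALOW) select the module, the high-order bits (AHIGH) form the within-module address (MBIT), and the interleave level is the largest power of two the installed module count allows. Equal module capacities are assumed here for simplicity:

```python
class InterleaveConfigurator:
    def __init__(self, num_modules):
        # Highest power-of-two interleave allowed by the configuration.
        self.ways = 1
        while self.ways * 2 <= num_modules:
            self.ways *= 2
        self.low_bits = self.ways.bit_length() - 1

    def decode(self, address):
        alow = address & (self.ways - 1)   # module selection signal
        ahigh = address >> self.low_bits   # module address bits (MBIT)
        return alow, ahigh
```

With three modules installed, for example, only two can be interleaved, so the configurator falls back to two-way interleaving rather than losing interleaving altogether.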
  • Patent number: 5652912
    Abstract: A memory controller includes an input data path and an output data path. First circuitry generates signals to put the input data into at least one variably-dimensioned logical array of memory cells of a memory. Second circuitry generates signals to extract from the memory the contents of at least one variably-dimensioned logical array of memory cells. The memory may be double buffered such that data input to one of the portions may take place simultaneously as data output from the other of the portions. In a preferred embodiment, any combination of up to 254 total variable-dimensioned logical arrays of memory cells may be defined for input to and output from the memory. The memory controller may be viewed as supporting two simultaneous processes, an input "windowing" process for receiving windows of data and an output "windowing" process for simultaneously passing out windows of data.
    Type: Grant
    Filed: November 28, 1990
    Date of Patent: July 29, 1997
    Assignee: Martin Marietta Corporation
    Inventors: John D. Lofgren, Richard W. Benton
  • Patent number: 5649217
    Abstract: A multimedia data processing apparatus for exchanging asynchronous transfer mode cells, each cell having a data portion and a header portion including destination information. Cells input through input lines are stored in locations in respective buffer memories selected by an input spatial switch. The locations in the buffer memories are addressable for reading and writing. Cells can be read from the buffer memories in a manner which is independent of the order in which the cells are written. Addresses of the stored cells in the buffer memories are managed for each of the destinations of the cells. In accordance with the managed addresses for each destination, the cells stored in the buffer memories are read and output, through an output line spatial switch to desired output lines connected to the buffer memories.
    Type: Grant
    Filed: September 2, 1993
    Date of Patent: July 15, 1997
    Assignee: Mitsubishi Denki Kabushiki Kaisha
    Inventors: Hideaki Yamanaka, Kazuyoshi Oshima, Setsuko Miura
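The shared-buffer switching above can be modeled in a few lines: cells land at arbitrary free buffer addresses, while per-destination address lists let each output line read its cells independently of write order. Names are illustrative:

```python
from collections import deque

class CellBuffer:
    def __init__(self, size):
        self.slots = [None] * size
        self.free = deque(range(size))
        self.per_dest = {}   # destination -> deque of buffer addresses

    def write_cell(self, dest, payload):
        # Input spatial switch: store at any free location, then record
        # the address under the cell's destination.
        addr = self.free.popleft()
        self.slots[addr] = payload
        self.per_dest.setdefault(dest, deque()).append(addr)

    def read_cell(self, dest):
        # Reads follow the per-destination address list, not write order.
        queue = self.per_dest.get(dest)
        if not queue:
            return None
        addr = queue.popleft()
        payload, self.slots[addr] = self.slots[addr], None
        self.free.append(addr)
        return payload
```

Managing addresses per destination is what decouples the read order from the arrival order, so one congested output cannot block cells bound for the others.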
  • Patent number: 5649232
    Abstract: A structure and a method are provided for refilling a block of memory words stored in a cache memory. The structure and method provide a read buffer to optimally match the processor speed with the main memory using read clock enable (RdCEn) and acknowledge (Ack) signals. The RdCEn signal is provided as each memory word is available from the main memory. The Ack signal is provided to indicate the time at which the processor may empty the read buffer at the processor clock rate without subsequently executing a wait cycle to wait for any remaining memory words in the block to arrive. The benefit of the present invention is obtained without incurring a performance penalty on the single word read operation.
    Type: Grant
    Filed: April 13, 1995
    Date of Patent: July 15, 1997
    Assignee: Integrated Device Technology, Inc.
    Inventors: Philip A. Bourekas, Avigdor Willenz, Yeshayahu Mor, Scott Revak
  • Patent number: 5649231
    Abstract: The storage control system reduces the system load on a main storage unit by staggering data transfer to minimize a busy condition on the main memory bus, while maintaining a block size sufficient to ensure a good hit rate in a buffer storage. To achieve such a reduction, the system transfers a variable quantity of data from the buffer storage to the main storage unit, wherein the quantity of data is predetermined based on a detected load on the main storage unit. When the detected load is reduced, another variable quantity of data may be transferred.
    Type: Grant
    Filed: October 21, 1994
    Date of Patent: July 15, 1997
    Assignee: Fujitsu Limited
    Inventor: Yukihiko Kitano
  • Patent number: 5644786
    Abstract: A procedure for scheduling multiple process requests for read/write access to a disk memory device within a computer system. The procedure considers disk characteristics, such as the number of sectors per track, the number of tracks per cylinder, speed of disk rotation and disk controller queuing capability in determining the optimal order for executing process requests. Process requests are placed in packets within an execution queue, each packet including up to a predetermined maximum number of requests. Within the packets, the process requests are sorted in ascending/descending order by the cylinder number to which the requests desire access, while within each cylinder the requests are placed in next-closest-in-time sequence.
    Type: Grant
    Filed: November 8, 1990
    Date of Patent: July 1, 1997
    Assignees: AT&T Global Information Solutions Company, Hyundai Electronics America, Symbios Logic Inc.
    Inventors: Michael J. Gallagher, Ray M. Jantz
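The packet-building step above can be sketched as follows. This simplifies the patent's ordering rule: the within-cylinder "next closest in time" tie-break is approximated by sector order, and the sweep direction simply alternates packet to packet (elevator-style):

```python
def build_packets(requests, max_per_packet):
    """Group (cylinder, sector) requests into packets and order each
    packet by cylinder, alternating ascending/descending sweeps."""
    packets = []
    for i in range(0, len(requests), max_per_packet):
        packet = requests[i:i + max_per_packet]
        # Alternate the sweep direction from one packet to the next.
        descending = (len(packets) % 2 == 1)
        packet.sort(key=lambda r: (r[0], r[1]), reverse=descending)
        packets.append(packet)
    return packets
```

A fuller model would also weigh sectors per track, tracks per cylinder, rotation speed, and controller queue depth, as the abstract notes; those inputs are omitted here.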
  • Patent number: 5644752
    Abstract: A master-slave cache system has a large master cache and smaller slave caches, including a slave data cache for supplying operands to an execution pipeline of a processor. The master cache performs all cache coherency operations, freeing the slaves to supply the processor's pipelines at their maximum bandwidth. A store queue is shared between the master cache and the slave data cache. Store data from the processor's execute pipeline is written from the store queue directly into both the master cache and the slave data cache, eliminating the need for the slave data cache to write data back to the master cache. Additionally, fill data from the master cache to the slave data cache is first written to the store queue. This fill data is available for use while in the store queue because the store queue acts as an extension to the slave data cache. Cache operations, diagnostic stores and TLB entries are also loaded into the store queue. A new store or line fill can be merged into an existing store queue entry.
    Type: Grant
    Filed: December 7, 1994
    Date of Patent: July 1, 1997
    Assignee: Exponential Technology, Inc.
    Inventors: Earl T. Cohen, Russell W. Tilleman, Jay C. Pattin