Patents Examined by Yamir Encarnacion
  • Patent number: 6430654
    Abstract: A multi-level cache and method for operation therefor include a first non-blocking cache receiving access requests from a device in a processor, and a first miss queue storing entries corresponding to access requests not serviced by the first non-blocking cache. A second non-blocking cache is provided for receiving access requests from the first miss queue, and a second miss queue is provided for storing entries corresponding to access requests not serviced by the second non-blocking cache. Other queueing structures, such as a victim queue and a write queue, are provided depending on the particular structure of the cache level within the multi-level cache hierarchy.
    Type: Grant
    Filed: January 21, 1998
    Date of Patent: August 6, 2002
    Assignee: Sun Microsystems, Inc.
    Inventors: Sharad Mehrotra, Ricky C. Hetherington
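    Sketch: A minimal Python model (an editorial illustration, not code from the patent) of a non-blocking cache level whose misses are queued and forwarded to the next level; the CacheLevel class, miss_queue, and backing names are assumptions.

```python
# Behavioral sketch of two non-blocking cache levels with miss queues.
from collections import deque

class CacheLevel:
    def __init__(self, name, backing=None):
        self.name = name
        self.lines = {}            # address -> data
        self.miss_queue = deque()  # unserviced requests, forwarded downward
        self.backing = backing     # next cache level (None at the last level)

    def access(self, addr):
        if addr in self.lines:
            return self.lines[addr]          # hit: service immediately
        self.miss_queue.append(addr)         # miss: queue it, stay non-blocking
        return None

    def drain_misses(self, memory):
        # Forward queued misses to the next level (or memory) and fill lines.
        while self.miss_queue:
            addr = self.miss_queue.popleft()
            data = self.backing.access(addr) if self.backing else memory[addr]
            if data is None:                 # the next level also missed
                self.backing.drain_misses(memory)
                data = self.backing.lines[addr]
            self.lines[addr] = data

memory = {0x40: "word@0x40"}
l2 = CacheLevel("L2")
l1 = CacheLevel("L1", backing=l2)
assert l1.access(0x40) is None   # first access misses in both levels
l1.drain_misses(memory)
assert l1.access(0x40) == "word@0x40"
```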
  • Patent number: 6425054
    Abstract: To achieve high performance at low cost, an integrated digital signal processor uses an architecture which includes both a general purpose processor and a vector processor. The integrated digital signal processor also includes a cache subsystem, a first bus and a second bus. The cache subsystem provides caching and data routing for the processors and buses. Multiple simultaneous communication paths can be used in the cache subsystem for the processors and buses. Furthermore, simultaneous reads and writes are supported to a cache memory in the cache subsystem.
    Type: Grant
    Filed: October 10, 2000
    Date of Patent: July 23, 2002
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Le Trong Nguyen
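    Sketch: A loose Python illustration (editorial, not the patented design) of a shared cache accepting a read and a write in the same cycle via independent banks; the DualPortedCache class, bank count, and port names are assumptions.

```python
# Sketch of a cache that services a read and a write concurrently in one cycle.
class DualPortedCache:
    def __init__(self, num_banks=4):
        self.banks = [{} for _ in range(num_banks)]

    def _bank(self, addr):
        return self.banks[addr % len(self.banks)]

    def cycle(self, read_addr=None, write=None):
        """Service one read and one write in a single cycle."""
        result = self._bank(read_addr).get(read_addr) if read_addr is not None else None
        if write is not None:
            waddr, wdata = write
            self._bank(waddr)[waddr] = wdata
        return result

cache = DualPortedCache()
cache.cycle(write=(0x10, "vector data"))                          # one requestor writes
print(cache.cycle(read_addr=0x10, write=(0x11, "scalar data")))   # simultaneous read + write
```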
  • Patent number: 6415356
    Abstract: One embodiment of the present invention provides a system that prefetches from memory by using an assist processor that executes in advance of a primary processor. The system operates by executing executable code on the primary processor, and simultaneously executing a reduced version of the executable code on the assist processor. This reduced version runs more quickly than the executable code, and generates the same pattern of memory references as the executable code. This allows the assist processor to generate the same pattern of memory references that the primary processor generates in advance of when the primary processor generates the memory references. The system stores results of memory references generated by the assist processor in a store that is shared with the primary processor so that the primary processor can access the results of the memory references. In one embodiment of the present invention, this store is a cache memory.
    Type: Grant
    Filed: May 4, 2000
    Date of Patent: July 2, 2002
    Assignee: Sun Microsystems, Inc.
    Inventors: Shailender Chaudhry, Marc Tremblay
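    Sketch: A minimal Python model (editorial illustration, not the patented implementation) of the assist-processor idea: a reduced version of the code reproduces only the memory-reference pattern and warms a shared store before the primary computation runs; the shared_cache dict, slow_load helper, and use of a thread are assumptions.

```python
import threading, time

shared_cache = {}                     # stands in for the store shared with the primary processor
MEMORY = {i: i * i for i in range(1000)}

def slow_load(addr):                  # models a long-latency memory access
    time.sleep(0.001)
    return MEMORY[addr]

def assist_processor(addrs):
    # Reduced code: same reference pattern, no computation on the values.
    for a in addrs:
        if a not in shared_cache:
            shared_cache[a] = slow_load(a)

def primary_processor(addrs):
    total = 0
    for a in addrs:
        value = shared_cache.get(a)   # hit if the assist thread got there first
        if value is None:
            value = slow_load(a)
        total += value                # the full computation happens only here
    return total

addresses = list(range(0, 1000, 7))
helper = threading.Thread(target=assist_processor, args=(addresses,))
helper.start()
print(primary_processor(addresses))
helper.join()
```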
  • Patent number: 6412056
    Abstract: A software distributed shared memory system includes a translation lookaside buffer extended to include fine-grain memory block-state bits associated with each block of information within a page stored in memory. The block-state bits provide multiple block states for each block. The block-state bits are used to check the state of each block, thereby alleviating the need for software checks and reducing checking overheads associated therewith.
    Type: Grant
    Filed: October 1, 1997
    Date of Patent: June 25, 2002
    Assignee: Compaq Information Technologies Group, LP
    Inventors: Kourosh Gharachorloo, Daniel J. Scales
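    Sketch: A minimal Python illustration (editorial, not code from the patent) of a TLB entry extended with per-block state bits so a fine-grain access check is a table lookup rather than an inserted software check; the block size, state names, and Python structures are assumptions.

```python
from enum import Enum

class BlockState(Enum):
    INVALID = 0
    READ_ONLY = 1
    READ_WRITE = 2

PAGE_SIZE, BLOCK_SIZE = 4096, 128     # 32 blocks per page (assumed sizes)

class TLBEntry:
    def __init__(self, vpn, pfn):
        self.vpn, self.pfn = vpn, pfn
        self.block_state = [BlockState.INVALID] * (PAGE_SIZE // BLOCK_SIZE)

def check_access(entry, offset, is_write):
    state = entry.block_state[offset // BLOCK_SIZE]
    if state is BlockState.INVALID or (is_write and state is not BlockState.READ_WRITE):
        raise PermissionError("block fault: shared-memory protocol takes over")
    return (entry.pfn * PAGE_SIZE) + offset

entry = TLBEntry(vpn=7, pfn=42)
entry.block_state[0] = BlockState.READ_WRITE
print(hex(check_access(entry, offset=16, is_write=True)))   # access allowed
```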
  • Patent number: 6397309
    Abstract: A computer program product is used with a programmable device to provide a data item reconstruction element for reconstructing information stored on a back-up information storage subsystem associated with at least one protected volume. The back-up information storage subsystem includes a plurality of storage media, each associated with one of a plurality of sets, the information associated with the protected volume being stored on storage media associated with one of the sets. During the reconstruction operation, the data item reconstruction element retrieves information in parallel from a plurality of the storage media associated with the set on which the protected volume's information is stored, in order to obtain the information associated with the protected volume.
    Type: Grant
    Filed: March 12, 2001
    Date of Patent: May 28, 2002
    Assignee: EMC Corporation
    Inventors: Nadav Kedem, Haim Bitner
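    Sketch: A minimal Python illustration (editorial, not code from the patent) of retrieving a protected volume's pieces from several backup media in parallel; the media layout and use of a thread pool are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Each "medium" in the set holds some of the protected volume's data items.
media_set = [
    {0: "block-0", 3: "block-3"},
    {1: "block-1", 4: "block-4"},
    {2: "block-2"},
]

def read_medium(medium):
    return medium                      # stands in for a (slow) sequential read

def reconstruct(media):
    volume = {}
    with ThreadPoolExecutor(max_workers=len(media)) as pool:
        for piece in pool.map(read_medium, media):   # retrieve in parallel
            volume.update(piece)
    return [volume[i] for i in sorted(volume)]

print(reconstruct(media_set))
```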
  • Patent number: 6374341
    Abstract: The present invention provides an apparatus and a method for variable size pages using fixed size TLB (Translation Lookaside Buffer) entries. In one embodiment, an apparatus for variable size pages using fixed size TLB entries includes a first TLB for fixed size pages and a second TLB for variable size pages. In particular, the second TLB stores fixed size TLB entries for variable size pages. Further, in one embodiment, an input of an OR device is connected to the second TLB to provide a cost-effective and efficient implementation for translating linear addresses to physical addresses using fixed size TLB entries stored in the second TLB.
    Type: Grant
    Filed: September 2, 1998
    Date of Patent: April 16, 2002
    Assignee: ATI International SRL
    Inventors: Sandeep Nijhawan, Denis Gulsen, John S. Yates, Jr.
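    Sketch: A minimal Python illustration (editorial, not the patented circuit) of translating through two TLBs, where a variable-size page is represented by several fixed-size entries and the physical address is formed by OR-ing the frame base with the offset, echoing the OR device in the abstract; the page sizes and dictionary structures are assumptions.

```python
PAGE_SHIFT = 12                              # 4 KB translation granule (assumed)

fixed_tlb = {0x00401: 0x09000}               # vpn -> physical frame base
variable_tlb = {}                            # fixed-size entries for variable-size pages

def install_large_page(vbase, pbase, size):
    # A variable-size page is covered by many fixed-size TLB entries.
    for off in range(0, size, 1 << PAGE_SHIFT):
        variable_tlb[(vbase + off) >> PAGE_SHIFT] = pbase + off

def translate(vaddr):
    vpn, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
    frame = fixed_tlb.get(vpn, variable_tlb.get(vpn))
    if frame is None:
        raise LookupError("TLB miss")
    return frame | offset                    # frame base is aligned, so OR == add

install_large_page(vbase=0x40000000, pbase=0x10000000, size=4 << 20)
print(hex(translate(0x40000123)), hex(translate(0x00401abc)))
```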
  • Patent number: 6363457
    Abstract: A method and system are provided in which, once the disk array subsystem has been initialized and mapped, logical drives can be added and deleted without interruption to system operation. When adding a logical device, the method determines the amount of space required for the new logical device and the amount of physical space available for the operation; the data of the new logical device may be placed anywhere on the physical drives. When deleting a logical device, the space freed by deleting its data creates a gap in the original mapping, and the gap can be filled by subsequent additions of new logical devices. Once a new mapping is determined, it is sent to the adapters and device controllers to update the mapping information available to them, so that the adapters and device controllers can properly address data in the disk array subsystem.
    Type: Grant
    Filed: February 8, 1999
    Date of Patent: March 26, 2002
    Assignee: International Business Machines Corporation
    Inventor: Gerald Franklin Sundberg
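    Sketch: A minimal Python illustration (editorial, not code from the patent) of on-line add/delete of logical drives against a physical extent map, where a deletion leaves a gap that later additions can reuse; the extent representation and first-fit policy are assumptions.

```python
PHYSICAL_BLOCKS = 1000
free_extents = [(0, PHYSICAL_BLOCKS)]        # (start, length) of free physical space
logical_map = {}                             # logical drive id -> (start, length)

def push_map_to_controllers():
    # Adapters and device controllers receive the new mapping so addressing stays valid.
    return dict(logical_map)

def add_logical(drive_id, blocks):
    for i, (start, length) in enumerate(free_extents):
        if length >= blocks:                 # enough physical space available?
            logical_map[drive_id] = (start, blocks)
            free_extents[i] = (start + blocks, length - blocks)
            return push_map_to_controllers()
    raise RuntimeError("not enough physical space")

def delete_logical(drive_id):
    free_extents.insert(0, logical_map.pop(drive_id))   # deletion leaves a reusable gap
    return push_map_to_controllers()

add_logical("LD0", 200); add_logical("LD1", 300)
delete_logical("LD0")                        # gap at blocks 0..199
print(add_logical("LD2", 150))               # new drive lands in the gap
```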
  • Patent number: 6343354
    Abstract: During a compression phase, memory (20) is divided into cache line blocks (500). Each cache line block is compressed and modified by replacing address destinations of address indirection instructions with compressed address destinations. Each cache line block is modified to have a flow indirection instruction as the last instruction in each cache line. The compressed cache line blocks (500) are stored in a memory (858). During a decompression phase, a cache line (500) is accessed based on an instruction pointer (902) value. The cache line is decompressed and stored in cache. The cache tag is determined based on the instruction pointer (902) value.
    Type: Grant
    Filed: April 19, 2000
    Date of Patent: January 29, 2002
    Assignee: Motorola Inc.
    Inventors: Mauricio Breternitz, Jr., Roger A. Smith
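    Sketch: A minimal Python illustration (editorial, not the patented scheme) of storing code as independently compressed cache-line blocks and decompressing one block on demand, with the cache tag derived from the instruction pointer; zlib, the 64-byte line size, and the omission of the branch-target rewriting are assumptions made for brevity.

```python
import zlib

LINE_SIZE = 64
program = bytes(range(256)) * 8                 # stand-in for machine code

# Compression phase: split into line-sized blocks and compress each one.
compressed_lines = [
    zlib.compress(program[i:i + LINE_SIZE])
    for i in range(0, len(program), LINE_SIZE)
]

icache = {}                                     # tag -> decompressed cache line

def fetch(instruction_pointer):
    tag = instruction_pointer // LINE_SIZE      # tag comes from the IP value
    if tag not in icache:                       # decompress on a cache miss
        icache[tag] = zlib.decompress(compressed_lines[tag])
    return icache[tag][instruction_pointer % LINE_SIZE]

assert fetch(130) == program[130]
print(len(icache), "line(s) decompressed so far")
```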
  • Patent number: 6330645
    Abstract: Multiple memory access streams issued by multiple memory controller access devices are used for accessing DRAM or other memory. The memory controller can arbitrate among requests from multiple requestors, such as in a multiprocessor environment. Coherency can be provided by snooping a write buffer and returning write buffer contents directly in response to a read request for a matching address. Coherency need not be implemented through the entirety of an address space and preferably can be enabled for only a selected portion or portions of the address space, reducing unnecessary coherency checking overhead.
    Type: Grant
    Filed: December 21, 1998
    Date of Patent: December 11, 2001
    Assignee: Cisco Technology, Inc.
    Inventor: Guy Harriman
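    Sketch: A minimal Python model (editorial illustration, not code from the patent) of a memory controller that snoops its write buffer on reads, but only for addresses inside a configured coherent region; the region bounds and class structure are assumptions.

```python
class MemoryController:
    def __init__(self, coherent_range):
        self.dram = {}
        self.write_buffer = {}                  # posted writes not yet in DRAM
        self.coherent_range = coherent_range    # (lo, hi) with snooping enabled

    def write(self, addr, data):
        self.write_buffer[addr] = data          # buffered; drained later

    def read(self, addr):
        lo, hi = self.coherent_range
        if lo <= addr < hi and addr in self.write_buffer:
            return self.write_buffer[addr]      # snoop hit: return buffer contents directly
        return self.dram.get(addr)

    def drain(self):
        self.dram.update(self.write_buffer)
        self.write_buffer.clear()

mc = MemoryController(coherent_range=(0x1000, 0x2000))
mc.write(0x1800, "fresh")
print(mc.read(0x1800))   # 'fresh' via the snoop, even before the buffer drains
```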
  • Patent number: 6304949
    Abstract: A data processing apparatus for handling multi-thread programs comprises a data processor coupled with a random-access memory (HM) containing a plurality of data objects (DO). Each data object is accessed via respective pointers carried by memory stacks (SF) associated with respective threads. Periodically, a garbage collection procedure is applied to the random-access memory with those data objects (DO) having no extant pointers thereto from any source being identified and deleted. Subject to a locking constraint applied to some of the data objects, the remainder are compacted to free space in the memory (HM). To enable localizing of the garbage collection procedure, reference stacks (RS) are provided for each thread stack frame (SF) such as to identify, preferably via a per-thread reference table (TT), data objects (DO) referenced from only a single frame, which objects are deleted on conclusion of that frame.
    Type: Grant
    Filed: August 24, 1998
    Date of Patent: October 16, 2001
    Assignee: U.S. Philips Corporation
    Inventor: Richard J. Houldsworth
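    Sketch: A minimal Python illustration (editorial, not the patented mechanism) of the per-frame reference idea: a data object referenced from only one stack frame is reclaimed as soon as that frame concludes, without a full garbage-collection pass; the Frame class, heap dict, and counting scheme are assumptions.

```python
heap = {}                                       # object id -> data
ref_counts = {}                                 # object id -> number of frames referencing it

class Frame:
    def __init__(self):
        self.reference_stack = []               # objects referenced from this frame

    def reference(self, obj_id):
        self.reference_stack.append(obj_id)
        ref_counts[obj_id] = ref_counts.get(obj_id, 0) + 1

    def conclude(self):
        for obj_id in self.reference_stack:
            ref_counts[obj_id] -= 1
            if ref_counts[obj_id] == 0:         # referenced from this frame only
                del heap[obj_id]                # reclaim without a global collection

heap["a"] = "shared object"; heap["b"] = "frame-local object"
outer, inner = Frame(), Frame()
outer.reference("a")
inner.reference("a"); inner.reference("b")
inner.conclude()
print(sorted(heap))   # ['a'] -- 'b' was freed when its only frame concluded
```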
  • Patent number: 6301635
    Abstract: A memory management method comprises storing parametric data in volatile memory such as RAM, and periodically updating the data stored in RAM to non-volatile memory such as Flash Memory. Updating of data to Flash Memory is dependent on the time since the last update or the importance of the data in the RAM.
    Type: Grant
    Filed: September 26, 1997
    Date of Patent: October 9, 2001
    Assignee: Nokia Mobile Phones Limited
    Inventors: Leslie Innes Bothwell, Andrew Blake
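    Sketch: A minimal Python model (editorial illustration, not code from the patent) of the update policy: parametric data lives in RAM and is copied to flash when it is marked important or when enough time has elapsed since the last update; the interval value and class names are assumptions.

```python
import time

UPDATE_INTERVAL_S = 60.0                    # assumed threshold

class ParamStore:
    def __init__(self):
        self.ram = {}                       # fast, volatile working copy
        self.flash = {}                     # stands in for non-volatile memory
        self.last_flush = time.monotonic()

    def set(self, key, value, important=False):
        self.ram[key] = value
        elapsed = time.monotonic() - self.last_flush
        if important or elapsed >= UPDATE_INTERVAL_S:
            self.flush()

    def flush(self):
        self.flash.update(self.ram)         # the (slow) write to flash memory
        self.last_flush = time.monotonic()

store = ParamStore()
store.set("signal_level", 3)                    # stays in RAM for now
store.set("pin_code", "1234", important=True)   # importance forces an immediate update
print(store.flash)
```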
  • Patent number: 6298424
    Abstract: A computer system includes one or more microprocessors. The microprocessors assign a priority level to each memory operation as the memory operations are initiated. In one embodiment, the priority levels employed by the microprocessors include a fetch priority level and a prefetch priority level. The fetch priority level is higher priority than the prefetch priority level, and is assigned to memory operations which are the direct result of executing an instruction. The prefetch priority level is assigned to memory operations which are generated according to a prefetch algorithm implemented by the microprocessor. As memory operations are routed through the computer system to main memory and corresponding data transmitted, the elements involved in performing the memory operations are configured to interrupt the transfer of data for the lower priority memory operation in order to perform the data transfer for the higher priority memory operation.
    Type: Grant
    Filed: March 10, 2000
    Date of Patent: October 2, 2001
    Assignee: Advanced Micro Devices, Inc.
    Inventors: W. Kurt Lewchuk, Brian D. McMinn, James K. Pickett
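    Sketch: A minimal Python model (editorial illustration, not the patented hardware) of the two priority levels, in which a demand fetch interrupts an in-progress prefetch transfer; the cycle-based scheduler and its names are assumptions.

```python
import heapq, itertools

FETCH, PREFETCH = 0, 1                        # lower value = higher priority

class MemoryScheduler:
    def __init__(self, cycles_per_transfer=4):
        self.queue, self.order = [], itertools.count()
        self.active = None                    # (priority, addr, cycles_left)
        self.cycles = cycles_per_transfer

    def issue(self, priority, addr):
        heapq.heappush(self.queue, (priority, next(self.order), addr))

    def tick(self):
        # A higher-priority request interrupts the transfer in progress.
        if self.queue and (self.active is None or self.queue[0][0] < self.active[0]):
            if self.active:
                self.issue(self.active[0], self.active[1])   # resume later (restarted for simplicity)
            prio, _, addr = heapq.heappop(self.queue)
            self.active = (prio, addr, self.cycles)
        if self.active:
            prio, addr, left = self.active
            self.active = None if left == 1 else (prio, addr, left - 1)
            if self.active is None:
                kind = "fetch" if prio == FETCH else "prefetch"
                print(f"completed {kind} of {hex(addr)}")

sched = MemoryScheduler()
sched.issue(PREFETCH, 0x100)
sched.tick()                                  # prefetch transfer starts
sched.issue(FETCH, 0x200)                     # demand fetch interrupts it
for _ in range(10):
    sched.tick()                              # fetch completes first, then the prefetch
```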
  • Patent number: 6295595
    Abstract: A circuit and method for producing defect tolerant high density memory cells at a low cost is disclosed. Rather than using redundant memory cells to salvage a memory circuit having non-functional memory cells, an address mapping circuit is used to remap addresses for non-functional memory cells into addresses for functional memory cells. Specifically, if the memory array of a memory circuit includes non-functional memory cells, an address mapping scheme is selected to reduce the effective size of the memory circuit so only functional memory cells are addressed. Because redundant memory cells are not included in the memory circuit, the semiconductor area and the cost of the memory circuit are reduced.
    Type: Grant
    Filed: April 21, 1999
    Date of Patent: September 25, 2001
    Assignee: Tower Semiconductor Ltd.
    Inventors: Eli Wildenberg, Gennady Goltman
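    Sketch: A minimal Python illustration (editorial, not the patented circuit) of remapping addresses so only functional rows are ever addressed and the effective size shrinks instead of spare rows being used; the row-based scheme and sizes are assumptions.

```python
TOTAL_ROWS = 16
bad_rows = {3, 9}                                   # non-functional rows found at test time

# Build the remap table once: logical row -> functional physical row.
good_rows = [r for r in range(TOTAL_ROWS) if r not in bad_rows]
EFFECTIVE_ROWS = len(good_rows)                     # reduced effective memory size

def remap(logical_row):
    if logical_row >= EFFECTIVE_ROWS:
        raise IndexError("address beyond the reduced memory size")
    return good_rows[logical_row]

print(EFFECTIVE_ROWS, [remap(r) for r in range(EFFECTIVE_ROWS)])
```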
  • Patent number: 6286082
    Abstract: A hazard control circuit for a cache controller that prevents overwriting of modified cache data without write back. The cache controller controls a non-blocking, N-way set associative cache that uses a write-back cache-coherency protocol. The hazard control circuit prevents data loss by deferring way assignment until a pending fill for that way has completed. The hazard control circuit of the present invention includes a transit hazard buffer, a stall assertion circuit and a way assignment circuit.
    Type: Grant
    Filed: April 19, 1999
    Date of Patent: September 4, 2001
    Assignee: Sun Microsystems, Inc.
    Inventors: Anuradha N. Moudgal, Belliappa M. Kuttanna
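    Sketch: A minimal Python model (editorial illustration, not the patented circuit) of the hazard check: before a miss is assigned a way, the set of in-transit fills is consulted, and if the chosen way still has a fill outstanding the assignment is deferred and the requester stalls; the structure names are assumptions.

```python
N_WAYS = 4

class SetState:
    def __init__(self):
        self.transit_hazard = set()           # ways with a pending fill in transit
        self.next_victim = 0

    def assign_way(self):
        way = self.next_victim
        if way in self.transit_hazard:        # hazard: defer the way assignment
            return None                       # caller asserts a stall
        self.transit_hazard.add(way)          # a fill for this way is now pending
        self.next_victim = (way + 1) % N_WAYS
        return way

    def fill_complete(self, way):
        self.transit_hazard.discard(way)

s = SetState()
w = s.assign_way()            # way 0 assigned, fill pending
s.next_victim = 0             # force the same victim to show the stall case
print(s.assign_way())         # None -> stall until fill_complete(0) is called
s.fill_complete(w)
print(s.assign_way())         # now way 0 can be assigned again
```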
  • Patent number: 6279095
    Abstract: A method and apparatus that operates within a virtual memory system in which portions of the memory system can be “omnibus wired.” Omnibus wiring a page guarantees that the page is present in memory and that the mapping tables pointing to the page's location are also present and filled in for all possible address references to the page. Because the system in which the present invention is implemented allows memory sharing between processes, there can be several virtual addresses that address a single page. Various elements in the virtual memory system include a “cached MSCR” field. The cached MSCR field is used to determine whether it is necessary to continue recursing upwards when performing omnibus wiring, since there is a special case associated with “uplevel references” and omnibus wiring.
    Type: Grant
    Filed: November 9, 1998
    Date of Patent: August 21, 2001
    Assignee: Compaq Computer Corporation
    Inventor: Charles Robert Landau
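    Sketch: A loose Python analogue (editorial, not the patented data structures) of omnibus wiring: pinning a page also pins every mapping table on the path to it, recursing upward and stopping early when an ancestor is already known to be wired, roughly the role the abstract gives the cached MSCR field; the tree structure and field names are assumptions.

```python
class MappingTable:
    def __init__(self, parent=None):
        self.parent = parent
        self.wired = False          # stands in for a cached "already wired" indication

def omnibus_wire(leaf_table):
    table = leaf_table
    while table is not None and not table.wired:
        table.wired = True          # table must stay resident for all references
        table = table.parent        # continue recursing upward
    # if an ancestor was already wired, no further recursion is needed

root = MappingTable()
mid = MappingTable(parent=root)
leaf_a, leaf_b = MappingTable(parent=mid), MappingTable(parent=mid)
omnibus_wire(leaf_a)                # wires leaf_a, mid and root
omnibus_wire(leaf_b)                # stops after leaf_b: mid is already wired
print(root.wired, mid.wired, leaf_a.wired, leaf_b.wired)
```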
  • Patent number: 6279091
    Abstract: The present invention provides a program execution environment specification method capable of carrying out effective management of resources used for execution of a program. When a command to specify allocation of an area to store a logical address is issued, first, an address space structure is allocated as an area to store the logical address and an execution space structure is allocated to store information about an area for sled execution. Next, a pointer to the address space structure is stored in a pointer to the execution space structure, and the pointer to the execution space structure is linked into the list of pointers held by the address space structure. After this, a sled structure is allocated as an area to store sled information, the pointer to the execution space structure is stored in the pointer to the sled structure, and the pointer to the sled structure is linked into the list of pointers held by the execution space structure.
    Type: Grant
    Filed: April 9, 1998
    Date of Patent: August 21, 2001
    Assignee: Sony Corporation
    Inventors: Toshiki Kikuchi, Yasuhiko Yokote
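    Sketch: A minimal Python illustration (editorial, not the patented structures) of the allocation and linking order the abstract walks through: an address-space structure, an execution-space structure pointing back to it, and a sled structure pointing back to the execution space, each linked onto its parent's list; the class and field names are assumptions.

```python
class AddressSpace:
    def __init__(self):
        self.execution_spaces = []       # list of pointers to execution space structures

class ExecutionSpace:
    def __init__(self, address_space):
        self.address_space = address_space            # pointer to the address space structure
        address_space.execution_spaces.append(self)   # link onto its list
        self.sleds = []                                # list of pointers to sled structures

class Sled:                              # the abstract's unit of execution
    def __init__(self, execution_space):
        self.execution_space = execution_space        # pointer to the execution space structure
        execution_space.sleds.append(self)            # link onto its list

aspace = AddressSpace()
espace = ExecutionSpace(aspace)
sled = Sled(espace)
print(sled.execution_space.address_space is aspace)   # True: the chain is linked
```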
  • Patent number: 6279081
    Abstract: The present invention is generally directed to a system and method for fetching data from a system memory to an ATM card. The method includes the steps of receiving a request (via a PCI bus) to fetch data from memory, and identifying the request as an ATM request. The method then determines, based on the start address, the number of cache lines that will be implicated by the fetch. Then, the method automatically fetches the appropriate number of cache lines into the cache, and then passes the data to the ATM card, via the PCI bus. In accordance with another aspect of the present invention, a system is provided for fetching data from memory for an ATM card. Broadly, the system includes a system memory for data storage and a cache memory for providing high-speed (retrieval) temporary storage of data, the cache memory being disposed in communication with the system memory via a high-speed system bus. The system further includes a PCI bus in communication with the cache memory via an input/output (I/O) bus.
    Type: Grant
    Filed: December 22, 1998
    Date of Patent: August 21, 2001
    Assignee: Hewlett-Packard Company
    Inventors: Thomas V. Spencer, Robert J. Horning, Monish S. Shah
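    Sketch: A minimal Python illustration (editorial, not code from the patent) of the cache-line calculation for an identified ATM request: given the start address and transfer length, determine how many lines are implicated, fetch them, and hand the requested bytes to the card; the 32-byte line size and the in-memory stand-ins are assumptions.

```python
LINE_SIZE = 32
system_memory = bytes(range(256)) * 16          # stand-in for system memory contents
cache = {}                                      # line index -> line bytes

def fetch_for_atm(start, length):
    first_line = start // LINE_SIZE
    last_line = (start + length - 1) // LINE_SIZE
    for line in range(first_line, last_line + 1):        # every implicated cache line
        cache[line] = system_memory[line * LINE_SIZE:(line + 1) * LINE_SIZE]
    data = b"".join(cache[l] for l in range(first_line, last_line + 1))
    offset = start - first_line * LINE_SIZE
    return data[offset:offset + length]          # passed to the ATM card over the PCI bus

payload = fetch_for_atm(start=70, length=53)
print(len(payload), "bytes returned;", len(cache), "cache lines fetched")
```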
  • Patent number: 6279094
    Abstract: A method and apparatus that operates within an object oriented virtual memory management system. The invention relates to invalidating mapping tables that are pointed to by PMOs (Partitioned Memory Objects). The PMOs specify locations of pages within the entire memory space of a process. The mapping tables specify locations of pages that have been swapped into a primary memory. In a preferred embodiment of the present invention, each PMO includes a plurality of MORs. Each MOR includes an involved bit. When a page is swapped into memory, the involved bits in all MORs relating to the page are set (except for the last MOR on a level). When a page is swapped out of memory, the present invention allows the mapping tables for the page to be invalidated in an efficient manner. Once a MOR having an involved bit clear is detected, there is no requirement to invalidate additional mapping tables for the path.
    Type: Grant
    Filed: August 20, 1998
    Date of Patent: August 21, 2001
    Assignee: Compaq Computer Corporation
    Inventor: Charles R. Landau
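    Sketch: A loose Python analogue (editorial, not the patented structures) of the involved-bit shortcut: swapping a page in sets the bit along the path of MORs, and invalidation on swap-out walks the same path and stops at the first MOR whose involved bit is clear; representing the path as a simple list is an assumption.

```python
class MOR:
    def __init__(self):
        self.involved = False
        self.mapping_table_valid = False

def swap_in(path):
    for mor in path:
        mor.involved = True
        mor.mapping_table_valid = True

def swap_out(path):
    for mor in path:
        if not mor.involved:
            break                        # nothing further on the path needs invalidating
        mor.involved = False
        mor.mapping_table_valid = False

path = [MOR(), MOR(), MOR()]
swap_in(path)
path[1].involved = False                 # pretend the lower tables were never involved
swap_out(path)
print([m.involved for m in path])        # [False, False, True]: the walk stopped early
```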
  • Patent number: 6272606
    Abstract: A method of scheduling data storage or retrieval jobs in a data storage and/or retrieval system in which stored data is distributed between multiple data storage volumes comprises: (i) maintaining a queue of data storage or retrieval jobs for execution; and (ii) adding a newly initiated job to the queue so that: (a) if the newly initiated job requires access to the same data storage volume as a further job already in the queue, the newly initiated job is added to the queue at an adjacent queue position to that further job; and (b) if the newly initiated job does not require access to the same data storage volume as any other job already in the queue, adding the newly initiated job to the queue at a queue position independent of the data storage volumes of other jobs in the queue.
    Type: Grant
    Filed: April 1, 1998
    Date of Patent: August 7, 2001
    Assignee: Sony United Kingdom Limited
    Inventors: Martin Rex Dorricott, Simon Chandler
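    Sketch: A minimal Python illustration (editorial, not code from the patent) of the queueing rule: a newly initiated job is placed adjacent to an existing job for the same storage volume, and otherwise joins the queue at a position independent of the other jobs' volumes (here, the tail); the job tuple format is an assumption.

```python
queue = []                                   # list of (volume, job_name)

def add_job(volume, job_name):
    for i in range(len(queue) - 1, -1, -1):  # find the last queued job for this volume
        if queue[i][0] == volume:
            queue.insert(i + 1, (volume, job_name))   # adjacent queue position
            return
    queue.append((volume, job_name))         # volume not queued: position independent of volumes

add_job("tapeA", "store-1")
add_job("tapeB", "store-2")
add_job("tapeA", "retrieve-3")               # slots in right after store-1
print(queue)
```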
  • Patent number: 6260129
    Abstract: Disclosed is a system for managing pages in a volatile memory device for data transfer operations between a first storage area and a second storage area. The first storage area is queried to determine a number of data sets to include in a data transfer operation. A number of pages in the volatile memory device needed for the data transfer operation is then determined. A determination is then made as to whether the number of pages needed for the data transfer operation is greater than available fixed pages in a pool of pages. Available fixed pages in the pool are allocated to the data transfer operation after determining that the number of pages needed to process the data transfer operation is less than or equal to the available fixed pages in the pool.
    Type: Grant
    Filed: September 8, 1998
    Date of Patent: July 10, 2001
    Assignee: International Business Machines Corporation
    Inventors: Robert Nelson Crockett, Ronald Maynard Kern, Gregory Edward McBride, David Michael Shackelford, Stephen Charles West
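    Sketch: A minimal Python illustration (editorial, not code from the patent) of the allocation check: the number of data sets in the transfer is converted to a page count, and fixed pages are allocated from the pool only if enough are available; the page-size arithmetic and class names are assumptions.

```python
PAGE_SIZE = 4096

class PagePool:
    def __init__(self, fixed_pages):
        self.available = fixed_pages

    def allocate_for_transfer(self, data_set_sizes):
        # Pages needed for all data sets in this data transfer operation.
        needed = sum((size + PAGE_SIZE - 1) // PAGE_SIZE for size in data_set_sizes)
        if needed > self.available:
            return None                      # not enough fixed pages in the pool
        self.available -= needed
        return needed

pool = PagePool(fixed_pages=64)
data_sets = [10_000, 4_096, 123]             # sizes reported by the first storage area
print(pool.allocate_for_transfer(data_sets), "pages allocated,",
      pool.available, "left in the pool")
```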