Patents Examined by B. R. Peugh
  • Patent number: 6725344
    Abstract: The present invention includes a microprocessor having a system bus for exchanging data with a computer system, and a private bus for exchanging data with a cache memory system. Because the processor exchanges data with the cache memory system through the private bus, cache memory operations do not require use of the system bus, allowing other portions of the computer system to continue to function through the system bus. Additionally, the cache memory and the processor are able to exchange data in a burst mode while the processor determines from the tag data when a read or write miss is occurring (an illustrative sketch follows this entry).
    Type: Grant
    Filed: August 6, 2002
    Date of Patent: April 20, 2004
    Assignee: Micron Technology, Inc.
    Inventor: Joseph T. Pawlowski
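    Sketch (illustrative, not from the patent): the abstract does not spell out the tag comparison, so the C fragment below only models, in software, how a cache controller might use stored tag data to decide whether a read or write hits or misses; the line count, line size, and structure names are all assumptions.
      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical direct-mapped bookkeeping: 256 lines of 32 bytes each. */
      #define NUM_LINES  256u
      #define LINE_BYTES 32u

      struct cache_line {
          uint32_t tag;    /* high-order address bits stored for this line */
          bool     valid;  /* line holds data fetched earlier over the private bus */
      };

      static struct cache_line lines[NUM_LINES];

      /* Returns true on a hit; false is the read/write miss the processor
       * detects from the tag data while bursting over the private bus. */
      static bool tag_lookup(uint32_t addr)
      {
          uint32_t index = (addr / LINE_BYTES) % NUM_LINES;
          uint32_t tag   = addr / (LINE_BYTES * NUM_LINES);
          return lines[index].valid && lines[index].tag == tag;
      }

      int main(void)
      {
          /* Pretend the line covering address 0x1000 was filled earlier. */
          lines[(0x1000u / LINE_BYTES) % NUM_LINES] =
              (struct cache_line){ .tag = 0x1000u / (LINE_BYTES * NUM_LINES), .valid = true };
          return tag_lookup(0x1000u) && !tag_lookup(0x9000u) ? 0 : 1;
      }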
  • Patent number: 6636938
    Abstract: A sound generator improves DRAM download speed and reduces power consumption during DRAM downloads by applying dedicated download logic; the download speed may increase by a factor of at least 8 and up to 62, and power consumption is reduced by eliminating unnecessary clocking. In addition, because the sound generator according to the present invention does not access a parameter memory while downloading, previously processed data is not erroneously handled, and there is no need to rewrite new data to an internal memory after the download operation is completed.
    Type: Grant
    Filed: August 12, 1998
    Date of Patent: October 21, 2003
    Assignee: Hynix Semiconductor, Inc.
    Inventor: Yeon Ok Kim
  • Patent number: 6574721
    Abstract: An apparatus and method provide simultaneous local and global addressing capabilities in a computer system. A global address space is defined that may be accessed by all processes. In addition, each process has a local address space that is local to, and therefore available only to, that process. An address space processor is implemented in software to perform system functions that distinguish between local addresses and global addresses. In the preferred embodiments, the local address space has a size that is a multiple of the size of a segment of global address space. When the hardware indicates a page fault, the address space processor determines whether the address being translated is a local address or a global address. If the address is a local address, the address space processor uses a local directory to process the page fault; if it is a global address, the address space processor uses a global directory (an illustrative sketch follows this entry).
    Type: Grant
    Filed: August 31, 1999
    Date of Patent: June 3, 2003
    Assignee: International Business Machines Corporation
    Inventors: Patrick James Christenson, Brian Eldridge Clark, Michael J. Corrigan, Paul LuVerne Godtland, Richard Karl Kirkman, Donald Arthur Morrison, Scott Alan Plaetzer
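    Sketch (illustrative, not from the patent): a C model of the described fault routing under assumed constants; GLOBAL_BASE, SEGMENT_SIZE, and the resolver function names are hypothetical, and the local space is assumed to span exactly one global-segment-sized region.
      #include <stdint.h>
      #include <stdio.h>

      /* Assumed layout: the per-process local space occupies one segment-sized
       * region at the bottom of the address range; everything above is global. */
      #define SEGMENT_SIZE ((uint64_t)1 << 34)
      #define GLOBAL_BASE  SEGMENT_SIZE

      static void resolve_with_local_directory(uint64_t a)  { printf("local fault  %#llx\n", (unsigned long long)a); }
      static void resolve_with_global_directory(uint64_t a) { printf("global fault %#llx\n", (unsigned long long)a); }

      /* Software page-fault path: pick the directory that resolves the fault. */
      static void handle_page_fault(uint64_t faulting_addr)
      {
          if (faulting_addr < GLOBAL_BASE)
              resolve_with_local_directory(faulting_addr);   /* per-process space */
          else
              resolve_with_global_directory(faulting_addr);  /* shared space */
      }

      int main(void)
      {
          handle_page_fault(0x1000);                /* local address  */
          handle_page_fault(GLOBAL_BASE + 0x1000);  /* global address */
          return 0;
      }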
  • Patent number: 6560689
    Abstract: A prevalidation content addressable memory (CAM) is used to pre-decode a virtual address region extension and enable it for use by a translation look-aside buffer (TLB). The prevalidation CAM removes the region extensions stored in region registers from a serial TLB look-up path.
    Type: Grant
    Filed: March 31, 2000
    Date of Patent: May 6, 2003
    Assignee: Intel Corporation
    Inventors: Gregory S. Mathews, Gary Hammond
  • Patent number: 6549990
    Abstract: A processor employing a dependency link file. Upon detection of a load which hits a store for which store data is not available, the processor allocates an entry within the dependency link file for the load. The entry stores a load identifier identifying the load and a store data identifier identifying the source of the store data. The dependency link file monitors results generated by execution units within the processor and, upon detecting that the store data has been provided, causes the store data to be forwarded as the load data. The latency from store data being provided to load data being forwarded may thereby be minimized. In particular, the load data may be forwarded without requiring that the load memory operation be scheduled (an illustrative sketch follows this entry).
    Type: Grant
    Filed: May 21, 2001
    Date of Patent: April 15, 2003
    Assignee: Advanced Micro Devices, Inc.
    Inventors: William Alexander Hughes, Derrick R. Meyer
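    Sketch (illustrative, not from the patent): a minimal software model of a dependency link file, assuming a four-entry table and made-up identifier names; it allocates an entry for a load whose store data is pending and forwards the data when a matching result is broadcast.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical dependency-link-file entry: a load waiting on store data. */
      struct link_entry {
          bool     valid;
          uint32_t load_id;    /* identifies the stalled load           */
          uint32_t store_tag;  /* identifies the producer of store data */
      };

      #define LINK_ENTRIES 4
      static struct link_entry link_file[LINK_ENTRIES];

      /* Allocate an entry when a load hits a store whose data is not yet ready. */
      static void link_allocate(uint32_t load_id, uint32_t store_tag)
      {
          for (int i = 0; i < LINK_ENTRIES; i++) {
              if (!link_file[i].valid) {
                  link_file[i] = (struct link_entry){ true, load_id, store_tag };
                  return;
              }
          }
      }

      /* Snoop execution results; when the awaited store data appears, forward
       * it directly as the load's result instead of rescheduling the load. */
      static void result_broadcast(uint32_t store_tag, uint64_t data)
      {
          for (int i = 0; i < LINK_ENTRIES; i++) {
              if (link_file[i].valid && link_file[i].store_tag == store_tag) {
                  printf("forward %#llx to load %u\n",
                         (unsigned long long)data, link_file[i].load_id);
                  link_file[i].valid = false;
              }
          }
      }

      int main(void)
      {
          link_allocate(/*load_id=*/7, /*store_tag=*/42);
          result_broadcast(42, 0xdeadbeef);   /* store data arrives, load 7 is fed */
          return 0;
      }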
  • Patent number: 6542967
    Abstract: A cache object store is organized to provide fast and efficient storage of data as cache objects organized into cache object groups. The cache object store preferably embodies a multi-level hierarchical storage architecture comprising a primary memory-level cache store and, optionally, a secondary disk-level cache store, each of which is configured to optimize access to the cache object groups. These levels of the cache object store further exploit persistent and non-persistent storage characteristics of the inventive architecture.
    Type: Grant
    Filed: June 22, 1999
    Date of Patent: April 1, 2003
    Assignee: Novell, Inc.
    Inventor: Robert Drew Major
  • Patent number: 6542980
    Abstract: A memory access system maps a block of data between a time stamp and an addressable memory location in a manner which mimics actual transmission slot timing. The system includes a slot time module, a calculation conversion module, and an addressing module. The slot time module determines the time stamp for the block of data, and the calculation conversion module generates a preliminary memory location based on the time stamp and a predetermined conversion factor. The predetermined conversion factor has a preliminary component which can be removed by the addressing module such that an addressable memory location is generated. The calculation conversion module has a base conversion module, a padding module, a left-shift module, and a summation module for constructing the conversion factor and applying the conversion factor to the retrieved time stamp.
    Type: Grant
    Filed: August 11, 2000
    Date of Patent: April 1, 2003
    Assignee: TRW Inc.
    Inventors: Gregory P. Chapelle, Stephane Mailleau
  • Patent number: 6519679
    Abstract: A storage device configuration manager implemented in software for a computer system including a processor, a memory coupled to the processor, and at least one storage device coupled to the processor can advantageously allow a user having relatively limited knowledge to configure storage devices for use with specific applications. The storage device configuration manager includes a user interface allowing for selecting, editing, deleting, and/or activating storage policies. The storage policies include information useful for configuring the storage device to operate efficiently with a particular application, or within a particular user environment.
    Type: Grant
    Filed: June 11, 1999
    Date of Patent: February 11, 2003
    Assignee: Dell USA, L.P.
    Inventors: Narayan Devireddy, Xu Chen
  • Patent number: 6513104
    Abstract: An apparatus and method within a pipeline microprocessor are provided for allocating a cache line within an internal data cache upon a write miss to the data cache. The apparatus and method allow data to be written to the allocated cache line before fill data for the allocated cache line is received from external memory over a system bus. The apparatus includes write allocate logic and a fill controller. The write allocate logic stores first bytes within the cache line corresponding to the write, and updates the remaining bytes of the cache line from memory. The fill controller, coupled to the write allocate logic, issues a fill command over the system bus directing the external memory to provide the remaining bytes, where the fill command is issued in parallel with storage of the first bytes within the cache line (an illustrative sketch follows this entry).
    Type: Grant
    Filed: March 29, 2000
    Date of Patent: January 28, 2003
    Assignee: I.P-First, LLC
    Inventor: Darius D. Gaskins
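    Sketch (illustrative, not from the patent): a software stand-in for the write-allocate behavior, assuming a 32-byte line and per-byte written flags; the fill command itself is only indicated by a comment, and the merge step shows how fill data avoids overwriting bytes the write already supplied.
      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>

      #define LINE_BYTES 32

      /* Hypothetical allocated line: per-byte flags record which bytes the
       * write already supplied so the later fill does not overwrite them. */
      struct alloc_line {
          uint8_t data[LINE_BYTES];
          bool    written[LINE_BYTES];
      };

      /* Write miss: store the first bytes immediately; the fill command for
       * the remaining bytes would be issued in parallel over the system bus. */
      static void write_allocate(struct alloc_line *line, unsigned off,
                                 const uint8_t *src, unsigned len)
      {
          for (unsigned i = 0; i < len && off + i < LINE_BYTES; i++) {
              line->data[off + i]    = src[i];
              line->written[off + i] = true;
          }
          /* issue_fill_command(line) would go out over the system bus here */
      }

      /* Fill data returns from external memory: update only untouched bytes. */
      static void fill_complete(struct alloc_line *line, const uint8_t fill[LINE_BYTES])
      {
          for (unsigned i = 0; i < LINE_BYTES; i++)
              if (!line->written[i])
                  line->data[i] = fill[i];
      }

      int main(void)
      {
          struct alloc_line line = {0};
          uint8_t store[4] = {1, 2, 3, 4}, memory[LINE_BYTES];
          memset(memory, 0xAA, sizeof memory);

          write_allocate(&line, 8, store, sizeof store);  /* bytes 8..11 from the write   */
          fill_complete(&line, memory);                   /* remaining bytes from memory  */
          return line.data[8] == 1 && line.data[0] == 0xAA ? 0 : 1;
      }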
  • Patent number: 6507895
    Abstract: An embodiment of the present invention provides an apparatus for memory access demarcation. Data is accessed from a first cache, which comprises a first set of addresses and corresponding data at each of the addresses in the first set. A plurality of addresses is generated for a second set of addresses that follows the first set; these addresses are calculated based on a fixed stride and are associated with data from a first stream. A plurality of addresses is also generated for a third set of addresses that follows the first set; each address in the third set is generated by tracing a link associated with another address in the third set, and the third set is associated with data from a second stream (an illustrative sketch follows this entry).
    Type: Grant
    Filed: March 30, 2000
    Date of Patent: January 14, 2003
    Assignee: Intel Corporation
    Inventors: Hong Wang, Ralph Kling, Jeff Baxter, Konrad Lai
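    Sketch (illustrative, not from the patent): the two address streams described above, generated in C under assumed types; one stream extends the cached range by a fixed stride, the other walks a link (here a singly linked list) to find each next address.
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Stream 1: addresses that follow the cached set at a fixed stride. */
      static void gen_stride_addresses(uintptr_t last_cached, ptrdiff_t stride,
                                       uintptr_t out[], size_t n)
      {
          for (size_t i = 0; i < n; i++)
              out[i] = last_cached + (ptrdiff_t)(i + 1) * stride;
      }

      /* Stream 2: addresses found by tracing a link embedded in each element. */
      struct node { struct node *next; int payload; };

      static void gen_linked_addresses(const struct node *head,
                                       uintptr_t out[], size_t n)
      {
          for (size_t i = 0; i < n && head != NULL; i++, head = head->next)
              out[i] = (uintptr_t)head;
      }

      int main(void)
      {
          uintptr_t stride_stream[4], link_stream[3];
          struct node c = {NULL, 3}, b = {&c, 2}, a = {&b, 1};

          gen_stride_addresses(0x1000, 64, stride_stream, 4); /* 0x1040, 0x1080, ... */
          gen_linked_addresses(&a, link_stream, 3);           /* &a, &b, &c          */

          printf("first strided %#lx, first linked %#lx\n",
                 (unsigned long)stride_stream[0], (unsigned long)link_stream[0]);
          return 0;
      }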
  • Patent number: 6499083
    Abstract: A disk-based storage system for storing a plurality of data segments responds to a direction-selection signal by autonomously providing the data segments in a selected sequence so as to be concatenated together to define a continuous data stream. The disk-based storage system comprises nonvolatile storage including rotating disk media having a plurality of addressable locations. Each of the data segments is stored in a respective one of the addressable locations, and each of the addressable locations has a leading end and a trailing end. A first one of the addressable locations has a trailing end on a first track, and a second one has a leading end on a second track spaced from the first track. The nonvolatile storage also locally stores a doubly-linked list of pointers (an illustrative sketch follows this entry).
    Type: Grant
    Filed: September 15, 1999
    Date of Patent: December 24, 2002
    Assignee: Western Digital Ventures, Inc.
    Inventor: Christopher L. Hamlin
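    Sketch (illustrative, not from the patent): a small in-memory model of the doubly-linked list of pointers, with a direction flag standing in for the direction-selection signal; segment contents are reduced to labels purely for illustration.
      #include <stdio.h>

      /* Hypothetical in-memory mirror of the on-disk doubly-linked list:
       * each entry points at the previous and next data segment. */
      struct segment {
          const char     *label;   /* stands in for the segment's stored data */
          struct segment *prev, *next;
      };

      /* Emit the segments as one continuous stream, walking forward or
       * backward depending on the direction-selection signal. */
      static void stream_segments(struct segment *start, int forward)
      {
          for (struct segment *s = start; s != NULL; s = forward ? s->next : s->prev)
              printf("%s", s->label);
          printf("\n");
      }

      int main(void)
      {
          struct segment s3 = {"C", NULL, NULL};
          struct segment s2 = {"B", NULL, &s3};
          struct segment s1 = {"A", NULL, &s2};
          s2.prev = &s1; s3.prev = &s2;

          stream_segments(&s1, 1);  /* forward:  ABC */
          stream_segments(&s3, 0);  /* backward: CBA */
          return 0;
      }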
  • Patent number: 6499088
    Abstract: Methods and apparatus for populating a network cache are described. A router associated with the cache is enabled to compile flow data relating to object traffic. The flow data are analyzed to determine a first plurality of frequently requested objects, and the network cache is populated with those objects. Subsequent to populating the network cache, the network cache is operated in conjunction with the router to cache a second plurality of requested objects (an illustrative sketch follows this entry).
    Type: Grant
    Filed: July 9, 2001
    Date of Patent: December 24, 2002
    Assignee: Cisco Technology, Inc.
    Inventors: Marvin Wexler, Sunil Gaitonde
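    Sketch (illustrative, not from the patent): a toy C pass over flow records that tallies how often each object is requested and flags the frequent ones for pre-population; the threshold, record format, and function names are assumptions.
      #include <stdio.h>
      #include <string.h>

      /* Toy flow record: one requested object URL per flow entry. */
      #define MAX_OBJECTS 16

      struct tally { const char *url; int hits; };

      /* Count how often each object shows up in the router's flow data. */
      static int tally_flows(const char *flows[], int nflows, struct tally out[])
      {
          int nobj = 0;
          for (int i = 0; i < nflows; i++) {
              int j;
              for (j = 0; j < nobj && strcmp(out[j].url, flows[i]) != 0; j++)
                  ;
              if (j == nobj && nobj < MAX_OBJECTS)
                  out[nobj++] = (struct tally){flows[i], 0};
              if (j < MAX_OBJECTS)
                  out[j].hits++;
          }
          return nobj;
      }

      int main(void)
      {
          const char *flows[] = {"/a", "/b", "/a", "/c", "/a", "/b"};
          struct tally t[MAX_OBJECTS];
          int n = tally_flows(flows, 6, t);

          /* Pre-populate the cache with the most frequently requested objects. */
          for (int i = 0; i < n; i++)
              if (t[i].hits >= 2)
                  printf("prefetch into cache: %s (%d requests)\n", t[i].url, t[i].hits);
          return 0;
      }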
  • Patent number: 6490654
    Abstract: A cache memory replacement algorithm replaces cache lines based on the likelihood that their contents will not be needed soon. A cache memory in accordance with the present invention includes a plurality of cache lines that are accessed associatively, with a count entry associated with each cache line storing a count value that defines a replacement class. The count entry is typically loaded when the cache line is accessed, with the count value indicating how likely the line's contents are to be needed soon: data that is likely to be needed soon is assigned a higher replacement class, while data that is more speculative and less likely to be needed soon is assigned a lower replacement class. When the cache memory becomes full, the replacement algorithm selects for replacement those cache lines having the lowest replacement class (an illustrative sketch follows this entry).
    Type: Grant
    Filed: July 31, 1998
    Date of Patent: December 3, 2002
    Assignee: Hewlett-Packard Company
    Inventors: John A. Wickeraad, Stephen B. Lyle, Brendan A. Voge
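    Sketch (illustrative, not from the patent): victim selection over a 4-way set, where each way carries a count entry used as its replacement class and the way with the lowest class is replaced; the associativity and field names are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      /* Each way's count entry holds its replacement class: higher means
       * "likely needed soon", lower means speculative data that can go first. */
      #define WAYS 4

      struct way { uint32_t tag; uint8_t replacement_class; };

      /* Pick the victim: the way holding the lowest replacement class. */
      static int pick_victim(const struct way set[WAYS])
      {
          int victim = 0;
          for (int w = 1; w < WAYS; w++)
              if (set[w].replacement_class < set[victim].replacement_class)
                  victim = w;
          return victim;
      }

      int main(void)
      {
          struct way set[WAYS] = {
              {0x10, 3}, {0x22, 1}, {0x31, 2}, {0x47, 1},
          };
          printf("evict way %d\n", pick_victim(set));  /* way 1: lowest class, found first */
          return 0;
      }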
  • Patent number: 6484231
    Abstract: A memory device is provided that latches more data bits than the number of input or output bits and sequentially controls the transmission of the data, preferably using a higher-speed clock. The memory device can be a synchronous SRAM circuit that includes a control unit outputting a burst mode signal; an address decoder receiving an externally input address signal and the burst mode signal and outputting an internal address signal and a block coding signal; and a counter enabled by the burst mode signal that counts the block coding signal and outputs a coding signal. A multiplexer receives cell data from a plurality of sense amplifiers, concurrently latches the cell data (whose prescribed number of bits is larger than the number of external input and output bits), and outputs one of the latched cell data in accordance with the coding signal. The latched data can be sequentially output using the counter.
    Type: Grant
    Filed: June 22, 1999
    Date of Patent: November 19, 2002
    Assignee: Hyundai Electronics Industries Co., Ltd.
    Inventor: Kyung Saeng Kim
  • Patent number: 6480930
    Abstract: A method balances workloads of storage devices of a storage subsystem. The method includes reading a mailbox to obtain control parameters and collecting historical data on the numbers of accesses to storage volumes of the storage devices. The control parameters are written into the mailbox by host devices. The method also includes selecting data swaps that lead to better-balanced workloads for the storage devices based on the historical data. The act of selecting and/or the act of collecting is initialized by the control parameters (an illustrative sketch follows this entry).
    Type: Grant
    Filed: September 15, 1999
    Date of Patent: November 12, 2002
    Assignee: EMC Corporation
    Inventors: Avinoam Zakai, David Wayne DesRoches, Victoria Dubrovsky, Shai Bar-Nefy, Ruben Michel
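    Sketch (illustrative, not from the patent): a greedy pass, under made-up access counts, that proposes the single volume swap which most narrows the gap between two devices' historical access totals; the real method's pairing rule and control parameters are not described at this level of detail.
      #include <stdio.h>

      /* Historical access counts per storage volume on two devices (toy numbers). */
      #define VOLS 3

      static int busy_dev[VOLS] = {900, 400, 300};   /* volumes on the hot device  */
      static int idle_dev[VOLS] = {100, 150, 50};    /* volumes on the cold device */

      int main(void)
      {
          int busy_total = 0, idle_total = 0;
          for (int i = 0; i < VOLS; i++) {
              busy_total += busy_dev[i];
              idle_total += idle_dev[i];
          }

          /* Propose the single swap that most reduces the gap between the two
           * devices' access totals. */
          int best_i = -1, best_j = -1, best_gap = busy_total - idle_total;
          for (int i = 0; i < VOLS; i++)
              for (int j = 0; j < VOLS; j++) {
                  int delta = busy_dev[i] - idle_dev[j];
                  int gap = busy_total - idle_total - 2 * delta;
                  if (gap < 0) gap = -gap;
                  if (gap < best_gap) { best_gap = gap; best_i = i; best_j = j; }
              }

          if (best_i >= 0)
              printf("swap busy volume %d with idle volume %d (gap %d -> %d)\n",
                     best_i, best_j, busy_total - idle_total, best_gap);
          return 0;
      }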
  • Patent number: 6477627
    Abstract: A data processing network including a local system and a geographically remote system. Each of the local and remote systems includes a data storage facility. The remote data storage facility mirrors the local data storage facility. In a normal operating mode, the local and remote systems operate in near synchronism or in synchronism. In an alternate operating mode, writing operations at the local system immediately update the storage devices in the local data storage facility. Transfers of corresponding data to the remote data storage facility are made independently of and asynchronously with respect to the operation of the local system.
    Type: Grant
    Filed: March 15, 1999
    Date of Patent: November 5, 2002
    Assignee: EMC Corporation
    Inventor: Yuval Ofek
  • Patent number: 6470414
    Abstract: A bank selector circuit for a simultaneous operation flash memory device with a flexible bank partition architecture comprises a memory boundary option, a bank selector encoder coupled to receive a memory partition indicator signal from the memory boundary option, and a bank selector decoder coupled to receive a bank selector code from the bank selector encoder. Upon receiving a memory address, the decoder outputs a bank selector output signal to point the memory address to either a lower memory bank or an upper memory bank in the simultaneous operation flash memory device, in dependence upon the selected memory partition boundary (an illustrative sketch follows this entry).
    Type: Grant
    Filed: June 26, 2001
    Date of Patent: October 22, 2002
    Assignees: Advanced Micro Devices, Inc., Fujitsu Limited
    Inventors: Tiao-Hua Kuo, Yasushi Kasa, Nancy Leong, Johnny Chen, Michael Van Buskirk
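    Sketch (illustrative, not from the patent): the decode step reduced to a comparison against a programmable partition boundary; the boundary value and enum names are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      /* The partition boundary is programmable; this value only stands in
       * for the memory boundary option's output. */
      static uint32_t partition_boundary = 0x40000;  /* 256 KB into the array */

      /* Bank selector: steer an address to the lower or upper bank so one
       * bank can be read while the other is being programmed or erased. */
      enum bank { LOWER_BANK = 0, UPPER_BANK = 1 };

      static enum bank select_bank(uint32_t addr)
      {
          return addr < partition_boundary ? LOWER_BANK : UPPER_BANK;
      }

      int main(void)
      {
          printf("%d %d\n", select_bank(0x1000), select_bank(0x80000));  /* 0 1 */
          return 0;
      }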
  • Patent number: 6460116
    Abstract: A microprocessor configured to rapidly decode variable-length instructions is disclosed. The microprocessor is configured with a predecoder and an instruction cache. The predecoder expands variable-length instructions into fixed-length instructions by padding the instruction fields within each variable-length instruction with constants until each field reaches a predetermined maximum width. The fixed-length instructions are then stored within the instruction cache and output for execution when a corresponding requested address is received. The instruction cache may store both variable- and fixed-length instructions, or only fixed-length instructions, and an array of pointers may be used to access particular fixed-length instructions. The fixed-length instructions may all share the same fields and lengths, or they may be divided into groups in which instructions within each group share the same fields and lengths (an illustrative sketch follows this entry).
    Type: Grant
    Filed: September 21, 1998
    Date of Patent: October 1, 2002
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rupaka Mahalingaiah
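    Sketch (illustrative, not from the patent): padding a variable-length instruction's fields out to fixed widths with zero constants; the field set and widths chosen here are assumptions, not the patent's format.
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Illustrative fixed format: every predecoded instruction gets a 2-byte
       * opcode field, a 4-byte displacement field, and a 4-byte immediate
       * field; fields the original instruction lacked are padded with zeros. */
      struct fixed_insn {
          uint8_t opcode[2];
          uint8_t displacement[4];
          uint8_t immediate[4];
      };

      static struct fixed_insn predecode(const uint8_t *op, unsigned oplen,
                                         const uint8_t *disp, unsigned displen,
                                         const uint8_t *imm, unsigned immlen)
      {
          struct fixed_insn out;
          memset(&out, 0, sizeof out);              /* the padding constants  */
          memcpy(out.opcode, op, oplen);            /* copy whatever fields   */
          memcpy(out.displacement, disp, displen);  /* the variable-length    */
          memcpy(out.immediate, imm, immlen);       /* instruction provided   */
          return out;
      }

      int main(void)
      {
          uint8_t op[1] = {0x90}, none[1] = {0};
          struct fixed_insn f = predecode(op, 1, none, 0, none, 0);  /* 1-byte NOP */
          printf("fixed length: %zu bytes\n", sizeof f);             /* 10, no padding */
          return 0;
      }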
  • Patent number: 6457100
    Abstract: A novel structure for a highly scalable, high-performance shared-memory computer system having simplified manufacturability. The computer system contains a repetition of system cells, in which each cell comprises a processor chip and a memory subset (having memory chips such as DRAMs or SRAMs) connected to the processor chip by a local memory bus. A unique type of intra-nodal busing connects each system cell in a node to every other cell in the same node. The memory subsets in the different cells need not have equal sizes, and the different nodes need not have the same number of cells. Each node has a nodal cache, a nodal directory, and nodal electronic switches to manage all transfers and data coherence among all cells in the same node and in different nodes. The collection of all memory subsets in the computer system comprises the system shared memory, in which data stored in any memory subset is accessible to the processors on the other processor chips in the system.
    Type: Grant
    Filed: September 15, 1999
    Date of Patent: September 24, 2002
    Assignee: International Business Machines Corporation
    Inventors: Michael Ignatowski, Thomas James Heller, Jr., Gottfried Andreas Goldiran
  • Patent number: 6453400
    Abstract: A semiconductor integrated circuit device comprises a main memory portion made up of memory cells arranged in a plurality of rows and columns and a sub memory portion made up of memory cells arranged in a plurality of rows and columns, wherein at least one of the address input terminals assigning rows or columns of the main memory portion and at least one of the address input terminals assigning rows or columns of the sub memory portion are shared, and the total number of address input terminals is equal to or smaller than the number of address input terminals assigning rows or columns of the main memory portion. The semiconductor integrated circuit device of the present invention therefore has a main memory suitable for being accessed by a plurality of data processors.
    Type: Grant
    Filed: September 16, 1998
    Date of Patent: September 17, 2002
    Assignee: NEC Corporation
    Inventors: Taketo Maesako, Kouki Yamamoto, Yoshinori Matsui, Kenichi Sakakibara