Patents Examined by Yamir Encarnacion
  • Patent number: 6253284
    Abstract: A memory module controlling system. The controlling system has a multiplexer, an automatic detector and a terminal device. The system has several slots with each slot having a signaling line to the automatic detector so that the presence or absence of a memory module in each slot can be determined. The automatic detector gathers all the signals from the slots and is able to relay a control signal to the multiplexer. The signal output terminal of each slot is coupled to the input terminals of the multiplexer, and the output terminal of the multiplexer is coupled to the terminal device. As soon as the multiplexer receives a control signal regarding the state of the memory slots from the automatic detector, one of the input terminals of the multiplexer automatically connects to its output terminal. Hence, the output signal from the last memory-plugged slot is connected to the terminal device, thereby forming a complete data and clock pulse transmission channel.
    Type: Grant
    Filed: February 8, 1999
    Date of Patent: June 26, 2001
    Assignee: Asustek Computer Inc.
    Inventor: Hsien-Yueh Hsu
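The core of the abstract above (6253284) is choosing which slot's signal the multiplexer should route to the terminal device, namely the last slot that actually holds a module. A minimal C sketch of that selection, assuming a simple presence bitmask in place of the patent's per-slot signaling lines; NUM_SLOTS and select_last_populated_slot are illustrative names, not from the patent:

```c
#include <stdio.h>

#define NUM_SLOTS 4  /* illustrative slot count */

/* Given one presence bit per slot (bit i set = module present in slot i),
 * return the index of the last populated slot, i.e. the multiplexer input
 * that should be routed to the terminal device; -1 if no module is present. */
static int select_last_populated_slot(unsigned presence_bits)
{
    for (int slot = NUM_SLOTS - 1; slot >= 0; slot--)
        if (presence_bits & (1u << slot))
            return slot;
    return -1;  /* no memory module installed */
}

int main(void)
{
    /* Modules in slots 0 and 2: the mux should route slot 2's output. */
    printf("mux input = %d\n", select_last_populated_slot(0x5));
    return 0;
}
```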
  • Patent number: 6243790
    Abstract: Modern disk array apparatuses are capable of providing a plurality of logical disks within one cabinet. The present invention provides a disk array apparatus in which logical disks can be easily re-arranged within the array, or added to the array. An array controller logically controls at least one disk apparatus as one logical disk. The array controller also changes positional information in the drive modules stored within the disk apparatus so that it matches the position of the relevant disk apparatus within the logical disk. Therefore, after transposition of a particular logical disk, a new logical disk can be accurately recognized.
    Type: Grant
    Filed: March 25, 1997
    Date of Patent: June 5, 2001
    Assignee: Fujitsu Limited
    Inventor: Keiichi Yorimitsu
  • Patent number: 6237060
    Abstract: In general, a method and apparatus for managing available cache memory in a browser are disclosed. Any document stored in a cache memory that does not have a strong reference associated with it is subject to being reclaimed by a garbage collector. The most recently requested documents, however, are stored in the cache memory with strong references associated therewith, thereby precluding them from being reclaimed until such time as the strong reference is abolished. The strong reference is abolished when the document identifier associated with the document stored in the cache memory is no longer present in the document stack. Therefore, only the most recently requested documents remain stored in the cache memory, depending upon the depth of the document stack.
    Type: Grant
    Filed: April 23, 1999
    Date of Patent: May 22, 2001
    Assignee: Sun Microsystems, Inc.
    Inventors: Matthew F. Shilts, Michael R. Allen
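A rough C sketch of the caching policy in 6237060 above. Since C has no garbage collector, the strong reference is modeled as a flag and reclamation as an explicit sweep; the stack depth, slot count, and all function names are assumptions for illustration:

```c
#include <stdio.h>
#include <string.h>

#define STACK_DEPTH 3    /* depth of the document stack (illustrative) */
#define CACHE_SLOTS 8

/* One cached document: its identifier and whether a "strong reference"
 * currently protects it from reclamation. */
struct cached_doc {
    int id;       /* document identifier, 0 = empty slot */
    int strong;   /* nonzero while the id is still on the document stack */
};

static struct cached_doc cache[CACHE_SLOTS];
static int doc_stack[STACK_DEPTH];   /* most recently requested document ids */
static int stack_len;

/* Handle a document request: cache it if needed, push its id onto the stack
 * (dropping the oldest id when the stack is full), then refresh the strong
 * flags so only documents whose ids remain on the stack stay protected. */
static void on_request(int id)
{
    int cached = 0;
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].id == id)
            cached = 1;
    if (!cached)
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (cache[i].id == 0) { cache[i].id = id; break; }

    if (stack_len == STACK_DEPTH)
        memmove(doc_stack, doc_stack + 1, (STACK_DEPTH - 1) * sizeof doc_stack[0]);
    else
        stack_len++;
    doc_stack[stack_len - 1] = id;

    for (int i = 0; i < CACHE_SLOTS; i++) {
        cache[i].strong = 0;
        for (int j = 0; j < stack_len; j++)
            if (cache[i].id && cache[i].id == doc_stack[j])
                cache[i].strong = 1;
    }
}

/* Stand-in for the garbage collector: reclaim every cached document whose
 * strong reference has been abolished. */
static void collect(void)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].id && !cache[i].strong) {
            printf("reclaimed document %d\n", cache[i].id);
            cache[i].id = 0;
        }
}

int main(void)
{
    for (int id = 1; id <= 5; id++)   /* request documents 1..5 */
        on_request(id);
    collect();                        /* only the last STACK_DEPTH survive */
    return 0;
}
```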
  • Patent number: 6233650
    Abstract: The present invention discloses a method and apparatus for interfacing a memory array to a memory controller using a field-effect transistor (FET) switch. The memory controller has a bus which comprises a plurality of signal lines. The memory array is coupled to the memory controller. The memory array is divided into N groups of memory devices; each group has K memory devices. K memory devices in each of the N groups share memory signal lines. The FET switch couples the bus to one of the N groups of the shared memory signal lines at different times in response to a switch control indication.
    Type: Grant
    Filed: April 1, 1998
    Date of Patent: May 15, 2001
    Assignee: Intel Corporation
    Inventors: Brian P. Johnson, Dave Freker
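A small sketch of the group-selection idea in 6233650 above: the switch control simply names which of the N groups of shared signal lines should be coupled to the memory-controller bus for the current access. The linear address-to-group mapping and the constants are assumptions, not details from the patent:

```c
#include <stdint.h>
#include <stdio.h>

#define N_GROUPS          4   /* groups of memory devices (illustrative) */
#define DEVICES_PER_GROUP 2   /* K devices per group; not needed for selection */
#define BYTES_PER_GROUP (64u * 1024u * 1024u)  /* assumed contiguous mapping */

/* Decide which group of shared memory signal lines the FET switch should
 * connect to the memory-controller bus for a given physical address.  The
 * patent only requires that one group be coupled at a time in response to a
 * switch control indication; the mapping used here is an assumption. */
static unsigned switch_control_for(uint32_t phys_addr)
{
    return (phys_addr / BYTES_PER_GROUP) % N_GROUPS;
}

int main(void)
{
    uint32_t addr = 0x0A000000;  /* example address */
    printf("couple group %u to the bus\n", switch_control_for(addr));
    return 0;
}
```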
  • Patent number: 6230231
    Abstract: A cache line index for an address cache entry is calculated by organizing an address such as a Media Access Control (“MAC”) address into a plurality of intermediate elements, barrel shifting the bits of at least one of the intermediate elements in accordance with predetermined criteria, and folding the intermediate address elements together with an exclusive-OR function. A Virtual Local Area Network (“VLAN”) index may also be included in cache line index calculation. The VLAN index enables segmentation of the cache into virtual tables. The tag portion of the cache entry includes a subset of the complete set of intermediate elements. The intermediate elements in the cache entry can be employed in conjunction with the cache line index to recover the original MAC address. Hence, the size of the tag portion of the address cache entry is reduced relative to the full MAC address without a reduction in the information content of the entry.
    Type: Grant
    Filed: March 19, 1998
    Date of Patent: May 8, 2001
    Assignee: 3Com Corporation
    Inventors: Kenneth J. DeLong, David S. Miller
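The index calculation in 6230231 above folds the MAC address and VLAN index together and lets part of the MAC be recovered from the tag plus the index. A C sketch under assumed element widths (three 16-bit elements), an assumed rotation amount, and illustrative names:

```c
#include <stdint.h>
#include <stdio.h>

#define BARREL_SHIFT 5  /* rotation amount: an assumption, not from the patent */

/* Rotate a 16-bit value left by n bits. */
static uint16_t rotl16(uint16_t v, unsigned n)
{
    n &= 15;
    return (uint16_t)((v << n) | (v >> (16 - n)));
}

/* Fold a 48-bit MAC address (as three 16-bit intermediate elements) and a
 * VLAN index into a cache line index: barrel-shift one element, then XOR
 * everything together.  Which element is shifted is an illustrative choice. */
static uint16_t line_index(uint16_t e0, uint16_t e1, uint16_t e2, uint16_t vlan)
{
    return e0 ^ rotl16(e1, BARREL_SHIFT) ^ e2 ^ vlan;
}

/* The tag stores only e0 and e1; the third element is recoverable from the
 * line index, so the full MAC can be rebuilt from tag + index + VLAN. */
static uint16_t recover_e2(uint16_t idx, uint16_t e0, uint16_t e1, uint16_t vlan)
{
    return idx ^ e0 ^ rotl16(e1, BARREL_SHIFT) ^ vlan;
}

int main(void)
{
    uint16_t e0 = 0x0011, e1 = 0x2233, e2 = 0x4455, vlan = 0x0007;
    uint16_t idx = line_index(e0, e1, e2, vlan);
    printf("index=0x%04x recovered e2=0x%04x\n", idx, recover_e2(idx, e0, e1, vlan));
    return 0;
}
```

Because the XOR fold is invertible, dropping one element from the tag loses no information, which is the size reduction the abstract claims.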
  • Patent number: 6223255
    Abstract: A microprocessor includes a multiply-accumulate unit (MAU) for performing high-speed signal processing operations. First and second caches provide first and second operands (x, y) directly to the MAU when a multiply-accumulate (MAC) instruction is executed. In addition, a multiplexer is included to select data from either the first or the second cache when a normal instruction is executed. A translation look-aside buffer may be included whose page table entries contain additional "reconfigure" and "way" bits to control writing data into the caches. In this manner, the microprocessor may use a conventional n-way set-associative cache to simultaneously access two or more operands.
    Type: Grant
    Filed: January 5, 1998
    Date of Patent: April 24, 2001
    Assignee: Lucent Technologies
    Inventor: Pramod V. Argade
  • Patent number: 6216213
    Abstract: During a compression portion, memory (20) is divided into cache line blocks (500). Each cache line block is compressed and modified by replacing the address destinations of address indirection instructions with compressed address destinations. Each cache line block is also modified so that its last instruction is a flow indirection instruction. The compressed cache line blocks (500) are stored in a memory (858). During a decompression portion, a cache line (500) is accessed based on an instruction pointer (902) value. The cache line is decompressed and stored in cache. The cache tag is determined based on the instruction pointer (902) value.
    Type: Grant
    Filed: June 7, 1996
    Date of Patent: April 10, 2001
    Assignee: Motorola, Inc.
    Inventors: Mauricio Breternitz, Jr., Roger A. Smith
  • Patent number: 6216198
    Abstract: A tag array includes, for each line, backward and forward links that hold the numbers of the lines storing adjacent data in a stream of continuous data. When the continuous data is stored in the tag array and a data array, the backward and forward links are generated by retaining the number of the line used for storage immediately before. When the continuous data is read from the tag array and the data array, the content of the forward link of the line accessed immediately before is retained, so the number of the line storing the next piece of continuous data is obtained without indexing the cache memory.
    Type: Grant
    Filed: September 4, 1998
    Date of Patent: April 10, 2001
    Assignee: NEC Corporation
    Inventor: Seiji Baba
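A C sketch of the linked tag array in 6216198 above: each entry keeps backward and forward line numbers, the store path remembers the line used immediately before, and the read path follows the forward link instead of re-indexing. Line counts and names are illustrative:

```c
#include <stdio.h>

#define CACHE_LINES 8

/* Each tag entry carries backward/forward links naming the lines that hold
 * the adjacent pieces of a continuous data stream. -1 means "no link". */
struct tag_entry {
    int tag;        /* simplified address tag */
    int back_link;  /* line holding the previous piece of the stream */
    int fwd_link;   /* line holding the next piece of the stream */
};

static struct tag_entry tags[CACHE_LINES];
static int prev_line = -1;  /* line used for the immediately preceding store */

/* Store the next piece of a continuous stream into 'line', wiring it to the
 * line used just before so that later reads can follow the links. */
static void store_continuous(int line, int tag)
{
    tags[line].tag = tag;
    tags[line].back_link = prev_line;
    tags[line].fwd_link = -1;
    if (prev_line >= 0)
        tags[prev_line].fwd_link = line;  /* previous line now points forward */
    prev_line = line;
}

/* When reading a stream, the forward link of the line accessed immediately
 * before names the line holding the next piece, so no index lookup is needed. */
static int next_line_after(int line)
{
    return tags[line].fwd_link;
}

int main(void)
{
    store_continuous(4, 100);   /* first piece of the stream lands in line 4 */
    store_continuous(7, 101);   /* next piece lands in line 7 */
    printf("after line 4, read line %d\n", next_line_after(4));
    return 0;
}
```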
  • Patent number: 6212603
    Abstract: A processor prefetches instructions in a pipelined manner from a first (L1) cache to a local instruction cache, with an instruction pointer device being utilized to select one of a plurality of incoming addresses for fetching purposes. Instructions returned from the L1 cache are stored in an instruction streaming buffer before they are actually written into the instruction cache. A way multiplexer outputs instructions to dispersal logic in the processor, and is fed by either the local cache or a bypass path that provides the instruction to the way multiplexer from a plurality of bypass sources, which includes the instruction streaming buffer. A request address buffer registers physical and virtual addresses associated with an instruction of a miss request by the processor to the L1 cache. Each entry of the request address buffer has an ID that is sent to the L1 cache with the miss request.
    Type: Grant
    Filed: April 9, 1998
    Date of Patent: April 3, 2001
    Assignee: Institute for the Development of Emerging Architectures, L.L.C.
    Inventors: Rory McInerney, Eric Sindelar, Tse-Yu Yeh
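A small sketch of the request address buffer described in 6212603 above: each outstanding miss holds its virtual and physical addresses, and the entry index serves as the ID sent to the L1 cache. The entry count and names are assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define RAB_ENTRIES 4   /* number of outstanding miss requests (illustrative) */

/* One request-address-buffer entry: the virtual and physical addresses of an
 * instruction that missed locally, tracked until the L1 cache responds. */
struct rab_entry {
    uint64_t vaddr;
    uint64_t paddr;
    int      valid;
};

static struct rab_entry rab[RAB_ENTRIES];

/* Allocate an entry for a miss; the returned index doubles as the ID that
 * accompanies the request to the L1 cache, so the reply can be matched up.
 * Returns -1 when all entries are already in flight. */
static int rab_allocate(uint64_t vaddr, uint64_t paddr)
{
    for (int id = 0; id < RAB_ENTRIES; id++)
        if (!rab[id].valid) {
            rab[id] = (struct rab_entry){ vaddr, paddr, 1 };
            return id;
        }
    return -1;
}

/* The L1 returns the ID with the fill data; free the matching entry. */
static void rab_complete(int id)
{
    if (id >= 0 && id < RAB_ENTRIES)
        rab[id].valid = 0;
}

int main(void)
{
    int id = rab_allocate(0x7fff1000, 0x1000);
    printf("miss request sent with id %d\n", id);
    rab_complete(id);
    return 0;
}
```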
  • Patent number: 6202135
    Abstract: A digital data processing system comprises a host information generating device, a mass storage subsystem, and a back-up information storage subsystem. The host information generating device generates information and provides it to the mass storage subsystem for storage. The mass storage subsystem receives the generated information from the host information generating device and transfers the generated information to the storage element for storage, and further transfers the generated information to the back-up information storage subsystem. The back-up information storage subsystem receives and stores the generated information from the mass storage subsystem's control element. The back-up information storage subsystem includes a filter/buffer module, a tape log module and a reconstruction module. The filter/buffer module filters and buffers the information received from the mass storage subsystem and provides the buffered information to the tape log module for storage.
    Type: Grant
    Filed: December 23, 1996
    Date of Patent: March 13, 2001
    Assignee: EMC Corporation
    Inventors: Nadav Kedem, Haim Bitner
  • Patent number: 6170034
    Abstract: The present invention includes a method of transferring data when some of the data is masked. A mask table is provided to a storage device, where it is duplicated and stored together with the duplicate. The duplicate data is compared to the original data to provide a data protection function. A mask index counter and a mask bit counter maintain and provide values for the specific data that are to be processed. The counters are programmable, so that if a transfer error occurs, the counter values for the next data after the previously transferred good data are calculated and loaded therein. The present invention also has the capability not to transfer the last requested sector if that sector is masked. The present invention evaluates whether a stop count value equals a stop threshold value when a sector is identified as being masked. The stop count value is incremented for each sector that is read from the first storage device, regardless of whether that sector is to be transferred or masked.
    Type: Grant
    Filed: March 31, 1998
    Date of Patent: January 2, 2001
    Assignee: LSI Logic Corporation
    Inventors: Graeme Weston-Lewis, David M. Springberg, Stephen D. Hanna
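The abstract of 6170034 above gives the counters and stop condition but not the full control flow; the C sketch below is one plausible reading, in which every sector read bumps the stop count and the transfer ends when a masked sector is seen with the stop count at the threshold. Names and the mask table contents are illustrative:

```c
#include <stdio.h>

#define SECTORS 10

/* 1 = sector is masked (do not transfer), 0 = transfer it. */
static const int mask_table[SECTORS] = { 0, 0, 1, 0, 1, 1, 0, 0, 0, 1 };

/* Transfer the requested range, skipping masked sectors.  The stop count is
 * incremented for every sector read, masked or not; when a masked sector is
 * seen and the stop count has reached the stop threshold, the transfer ends,
 * which also covers "do not transfer a masked final sector". */
static void masked_transfer(int first, int last, int stop_threshold)
{
    int stop_count = 0;
    for (int s = first; s <= last; s++) {
        stop_count++;                      /* counts every sector read */
        if (mask_table[s]) {
            if (stop_count >= stop_threshold)
                break;                     /* stop condition met on a mask */
            continue;                      /* masked: read but not transferred */
        }
        printf("transfer sector %d\n", s);
    }
}

int main(void)
{
    masked_transfer(0, SECTORS - 1, SECTORS);  /* threshold = sectors requested */
    return 0;
}
```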
  • Patent number: 6167499
    Abstract: A technique for conserving digital memory space is disclosed. This technique includes sequentially transmitting a first address and a second address on a first bus coupled to a FIFO memory. The first address is stored in the memory and compared to the second address to determine a first value corresponding to a difference between the first and second addresses. This first value is written in the memory to represent the second address and has a bit size smaller than the second address. A method to decode the first value to regenerate the second address is also disclosed. These techniques may be further enhanced by only storing an address in a sequential access memory when it differs from the most recently stored address in the memory.
    Type: Grant
    Filed: May 20, 1997
    Date of Patent: December 26, 2000
    Assignee: VLSI Technology, Inc.
    Inventor: Lawrence Letham
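A C sketch of the delta-encoded address FIFO in 6167499 above: the first address is stored in full, later addresses as 16-bit differences when they fit, and repeated addresses are not stored at all; decoding re-adds each delta to the previously regenerated address. Widths, the FIFO depth, and names are assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define FIFO_DEPTH 16

/* Each FIFO word is either a full address or a short delta from the address
 * stored just before it; a flag distinguishes the two forms. */
struct fifo_word {
    int      is_delta;   /* 1 = 'value' is an offset from the prior address */
    uint32_t value;      /* full address, or a delta that fits in 16 bits */
};

static struct fifo_word fifo[FIFO_DEPTH];
static int count;
static uint32_t last_stored;
static int have_last;

/* Encode: store the first address in full, later ones as deltas when the
 * difference fits in a signed 16-bit value; identical consecutive addresses
 * are skipped entirely. */
static void fifo_store(uint32_t addr)
{
    if (have_last && addr == last_stored)
        return;                                   /* repeat: not stored */
    uint32_t diff = addr - last_stored;           /* modular difference */
    if (have_last && (diff < 0x8000u || diff >= 0xFFFF8000u))
        fifo[count++] = (struct fifo_word){ 1, diff };   /* small delta form */
    else
        fifo[count++] = (struct fifo_word){ 0, addr };   /* full address form */
    last_stored = addr;
    have_last = 1;
}

/* Decode: regenerate the original address sequence by adding each delta to
 * the previously regenerated address. */
static void fifo_dump(void)
{
    uint32_t addr = 0;
    for (int i = 0; i < count; i++) {
        addr = fifo[i].is_delta ? addr + fifo[i].value : fifo[i].value;
        printf("address 0x%08x\n", (unsigned)addr);
    }
}

int main(void)
{
    fifo_store(0x80001000);   /* stored in full */
    fifo_store(0x80001004);   /* stored as delta +4 */
    fifo_store(0x80001004);   /* repeat: not stored */
    fifo_store(0x90000000);   /* delta too large: stored in full */
    fifo_dump();
    return 0;
}
```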
  • Patent number: 6167493
    Abstract: A CPU performs read access to a plurality of resources. A plurality of buffers connect the plurality of resources to the CPU, respectively. The CPU causes one of the plurality of buffers connected to one of the plurality of resources to be in an active state so that the CPU can perform read access to the one of the plurality of resources via the one of the plurality of buffers, the one of the plurality of resources being given priority.
    Type: Grant
    Filed: May 23, 1997
    Date of Patent: December 26, 2000
    Assignee: Fujitsu Limited
    Inventors: Hidetaka Ebeshu, Hirotoshi Okada, Hideaki Tomatsuri
  • Patent number: 6167497
    Abstract: A data processing apparatus includes physical registers larger in number than logical registers specified by a register specification field of an instruction executed by the apparatus. The physical registers are classified into a plurality of banks. In response to a particular instruction, an execution control section supplies a register address converter with bank information to select a bank of the physical register. The converter stores the bank information in a bank register. Receiving logical register address information specified by the register specification field of the instruction, the address converter adds the bank information set to the bank register to at least a portion of the logical register address information, thereby producing a physical register address which can specify any one of the physical registers.
    Type: Grant
    Filed: June 10, 1997
    Date of Patent: December 26, 2000
    Assignee: Hitachi, Ltd.
    Inventors: Yasuhiro Nakatsuka, Koyo Katsura
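A minimal sketch of the register address conversion in 6167497 above: a particular instruction latches bank information, and the converter combines it with the logical register number from the instruction's register specification field to form a physical register address. Placing the bank in the high bits is an assumption; the patent only requires that the bank information be combined with at least a portion of the logical address:

```c
#include <stdio.h>

#define LOGICAL_REGS 16   /* registers addressable by the instruction field */
#define BANKS         4   /* physical registers = BANKS * LOGICAL_REGS */

static unsigned bank_register;  /* bank info set by a particular instruction */

/* Executed for the "particular instruction": latch the bank selection. */
static void set_bank(unsigned bank)
{
    bank_register = bank % BANKS;
}

/* Register address converter: bank information plus the logical register
 * number yields a physical register address that can reach any bank. */
static unsigned physical_reg(unsigned logical_reg)
{
    return bank_register * LOGICAL_REGS + (logical_reg % LOGICAL_REGS);
}

int main(void)
{
    set_bank(2);
    printf("logical r5 -> physical r%u\n", physical_reg(5));
    return 0;
}
```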
  • Patent number: 6157991
    Abstract: In a computer system including a CPU, a first storage system coupled to the CPU, a second storage system, and a communication link coupling the second storage system to the first storage system, a method and apparatus for asynchronously mirroring, to the second storage system, a plurality of units of data written by the CPU to the first storage system. In one aspect of the invention, the CPU writes units of data to the first storage system in a first order, and the units of data are asynchronously transmitted over the communication link from the first storage system to the second storage system in a second order that is different than the first order. In another aspect, the units of data are committed in the second storage system in an order that is independent of the order in which the units of data are received at the second storage system.
    Type: Grant
    Filed: April 1, 1998
    Date of Patent: December 5, 2000
    Assignee: EMC Corporation
    Inventor: Dan Arnon
  • Patent number: 6154767
    Abstract: Building resource (e.g., Internet content) and attribute transition probability models and using such models for pre-fetching resources, editing resource link topology, building resource link topology templates, and collaborative filtering.
    Type: Grant
    Filed: January 15, 1998
    Date of Patent: November 28, 2000
    Assignee: Microsoft Corporation
    Inventors: Steven J. Altschuler, Greg Ridgeway
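The abstract of 6154767 above is terse, but the prefetching use of a transition probability model can be illustrated simply: count how often one resource follows another, and prefetch the most probable successor. Everything below (resource count, names, the example trace) is illustrative rather than from the patent:

```c
#include <stdio.h>

#define RESOURCES 4   /* number of distinct resources (illustrative) */

/* transition[i][j] counts how often resource j was requested right after i;
 * normalizing a row gives an estimated transition probability model. */
static unsigned transition[RESOURCES][RESOURCES];
static int prev_resource = -1;

/* Record an observed request, updating the transition counts. */
static void observe(int resource)
{
    if (prev_resource >= 0)
        transition[prev_resource][resource]++;
    prev_resource = resource;
}

/* Pre-fetch decision: after 'current', fetch the resource with the highest
 * observed transition count (i.e. the most probable next resource). */
static int prefetch_candidate(int current)
{
    int best = -1;
    unsigned best_count = 0;
    for (int j = 0; j < RESOURCES; j++)
        if (transition[current][j] > best_count) {
            best_count = transition[current][j];
            best = j;
        }
    return best;   /* -1 if nothing has ever followed 'current' */
}

int main(void)
{
    int trace[] = { 0, 1, 0, 2, 0, 1, 0, 1 };   /* example request sequence */
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        observe(trace[i]);
    printf("after resource 0, prefetch resource %d\n", prefetch_candidate(0));
    return 0;
}
```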
  • Patent number: 6148386
    Abstract: An improved apparatus and method for providing addresses for accessing circular memory buffers are disclosed. The apparatus is comprised of a first feedback circuit, a second feedback circuit, a beginning address register, an ending address register, and a comparator circuit. A control circuit is also provided. The beginning and ending address registers preferably hold the beginning and ending addresses, respectively, of a circular memory buffer. The first feedback circuit is comprised of a first register, a first phase delay register, a first adder, a first displacement register, and a first multiplexer. The second feedback circuit is preferably comprised of a second register, a second phase delay register, a second adder, and a second displacement register.
    Type: Grant
    Filed: March 19, 1998
    Date of Patent: November 14, 2000
    Assignee: Lucent Technologies Inc.
    Inventors: Douglas Rhodes, Mark Thierbach
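A C sketch of the address generation in 6148386 above: the displacement is added to the previous address on every access, and the comparator's wrap against the beginning and ending address registers is modeled as a modulo. The phase delay registers and the second channel are omitted; names and sizes are assumptions:

```c
#include <stdint.h>
#include <stdio.h>

/* One address-generation channel: mirrors the patent's register + adder +
 * displacement register feedback loop, with the comparator providing the
 * wrap against the beginning/ending address registers. */
struct circ_addr_gen {
    uint32_t begin_addr;    /* beginning address register */
    uint32_t end_addr;      /* ending address register (inclusive) */
    uint32_t displacement;  /* step added on every access */
    uint32_t current;       /* feedback register holding the last address */
};

/* Produce the next buffer address and update the feedback register.  The
 * modulo wrap stands in for the comparator-and-reload hardware. */
static uint32_t next_address(struct circ_addr_gen *g)
{
    uint32_t len = g->end_addr - g->begin_addr + 1;
    uint32_t next = g->current + g->displacement;
    if (next > g->end_addr)                     /* comparator detects overrun */
        next = g->begin_addr + (next - g->begin_addr) % len;
    g->current = next;
    return next;
}

int main(void)
{
    struct circ_addr_gen g = { 0x100, 0x10F, 4, 0x100 };  /* 16-byte buffer */
    for (int i = 0; i < 6; i++)
        printf("0x%03x\n", (unsigned)next_address(&g));
    return 0;
}
```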
  • Patent number: 6145068
    Abstract: To improve the speed of transition to the zero-volt suspend state, system context is saved from volatile random access memory to non-volatile memory, such as a hard disk, using a compression algorithm which speeds the transfer of data to non-volatile memory by recognizing data pages having bytes of a single value. The system context in extended memory of RAM consists of a number of system context memory blocks, and between these memory blocks are memory holes containing information which does not require storage. Initially, the entirety of data in a buffer region of RAM is stored directly to disk. Then, successive pages from each system context memory block are transferred to the buffer, where the page size corresponds to the memory management unit page size. When testing locates a region of heterogeneous entries, then a heterogeneous-data flag, the length of the heterogeneous region, and the heterogeneous data region are transferred to the buffer.
    Type: Grant
    Filed: September 16, 1997
    Date of Patent: November 7, 2000
    Assignee: Phoenix Technologies Ltd.
    Inventor: Timothy Lewis
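A sketch of the page-scanning idea in 6145068 above: a page whose bytes all share one value is emitted as a tiny homogeneous record, while anything else is emitted as a heterogeneous flag, a length, and the raw data. The record layout and names are assumptions, not the patent's actual format:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096   /* MMU page size assumed by the sketch */

/* Scan one page of system context.  If every byte holds the same value, emit
 * a small "homogeneous" record (flag + value) instead of the whole page;
 * otherwise emit a heterogeneous flag, the region length, and the raw bytes. */
static size_t compress_page(const uint8_t *page, uint8_t *out)
{
    int homogeneous = 1;
    for (size_t i = 1; i < PAGE_SIZE; i++)
        if (page[i] != page[0]) { homogeneous = 0; break; }

    if (homogeneous) {
        out[0] = 0x01;              /* homogeneous-page flag */
        out[1] = page[0];           /* the single byte value */
        return 2;
    }
    out[0] = 0x00;                  /* heterogeneous-data flag */
    uint16_t len = PAGE_SIZE;
    memcpy(out + 1, &len, sizeof len);      /* length of the region */
    memcpy(out + 3, page, PAGE_SIZE);       /* raw data follows */
    return 3 + PAGE_SIZE;
}

int main(void)
{
    static uint8_t zero_page[PAGE_SIZE];              /* all zeros */
    static uint8_t mixed_page[PAGE_SIZE] = { 1, 2 };  /* not homogeneous */
    static uint8_t out[PAGE_SIZE + 3];
    printf("zero page  -> %zu bytes\n", compress_page(zero_page, out));
    printf("mixed page -> %zu bytes\n", compress_page(mixed_page, out));
    return 0;
}
```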
  • Patent number: 6138211
    Abstract: In a high-performance microprocessor that adopts a superscalar technique, necessarily uses a cache memory, a TLB, a BTB, and the like, and is implemented as 4-way set associative, there is provided an LRU memory capable of performing a pseudo replacement policy and supporting the multiple ports required for operating the various blocks included in the microprocessor. The LRU memory comprises an address decoding block for decoding an INDEX_ADDRESS to produce a READ_WORD and a WRITE_WORD in response to a first phase and a second phase of the CLOCK signal, respectively; an LRU storing block; a way hit decoding block for decoding a WAY_HIT to produce a MODIFY CONTROL signal in response to the second phase of the CLOCK signal; and a data modifying block for latching READ_DATA from the LRU storing block to produce DETECTED DATA and modifying it in response to the MODIFY CONTROL signal so as to produce a WRITE_…
    Type: Grant
    Filed: June 30, 1998
    Date of Patent: October 24, 2000
    Assignee: Hyundai Electronics Industries Co., Ltd.
    Inventors: Mun Weon Ahn, Hoai Sig Kang
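The "pseudo replacement policy" in 6138211 above is sketched here as the textbook tree pseudo-LRU for a 4-way set: three bits per set are updated on a way hit and followed to pick a victim on a miss. This is one plausible reading; the set count and names are illustrative:

```c
#include <stdio.h>

/* Tree pseudo-LRU state for one 4-way set: three bits, where bit 0 selects
 * the pair, bit 1 selects within ways {0,1}, and bit 2 within ways {2,3}. */
static unsigned plru[256];   /* one 3-bit state per cache set */

/* On a way hit, flip the tree bits so they point away from the hit way
 * (the data-modifying block's job, in the patent's terms). */
static void plru_touch(unsigned set, unsigned way)
{
    unsigned s = plru[set];
    s = (way < 2) ? (s | 1u) : (s & ~1u);               /* b0 points to other pair */
    if (way < 2) s = (way == 0) ? (s | 2u) : (s & ~2u); /* b1 points away in pair */
    else         s = (way == 2) ? (s | 4u) : (s & ~4u); /* b2 points away in pair */
    plru[set] = s;
}

/* On a miss, follow the tree bits to the approximately least recently used way. */
static unsigned plru_victim(unsigned set)
{
    unsigned s = plru[set];
    if ((s & 1u) == 0)                 /* pair {0,1} is older */
        return (s & 2u) ? 1u : 0u;
    return (s & 4u) ? 3u : 2u;         /* pair {2,3} is older */
}

int main(void)
{
    plru_touch(0, 2);
    plru_touch(0, 0);
    printf("victim for set 0: way %u\n", plru_victim(0));  /* the untouched pair */
    return 0;
}
```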
  • Patent number: 6138216
    Abstract: A method of managing memory in a microprocessor system comprising two or more processors (40, 42) is described. Each processor (40, 42) has a cache memory (44, 46), and the system has a system memory (48) divided into pages subdivided into blocks. The method is concerned with managing the system memory (48), identifying areas thereof as being "cacheable", "non-cacheable" or "free". Safeguards are provided to ensure that blocks of system memory (48) cannot be cached by two different processors (40, 42) simultaneously.
    Type: Grant
    Filed: January 21, 1998
    Date of Patent: October 24, 2000
    Assignee: nCipher Corporation Limited
    Inventor: Ian Nigel Harvey
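A C sketch of the safeguard described in 6138216 above: each block of system memory carries a cacheable/non-cacheable/free state plus an owner, and a request to cache a block held by another processor is refused. The states, names, and single-owner bookkeeping are illustrative assumptions:

```c
#include <stdio.h>

#define NUM_BLOCKS 64
#define NO_OWNER   (-1)

enum block_state { BLOCK_FREE, BLOCK_NON_CACHEABLE, BLOCK_CACHEABLE };

/* Per-block bookkeeping: its state and, when cacheable, which processor's
 * cache currently holds it.  Tracking a single owner is the safeguard that
 * keeps a block out of two caches at once. */
static struct {
    enum block_state state;
    int owner;               /* processor id, or NO_OWNER */
} blocks[NUM_BLOCKS];

/* A processor asks to cache a block.  The request is refused if the block is
 * non-cacheable or already cached by a different processor. */
static int acquire_cacheable(int block, int cpu)
{
    if (blocks[block].state == BLOCK_NON_CACHEABLE)
        return 0;
    if (blocks[block].state == BLOCK_CACHEABLE && blocks[block].owner != cpu)
        return 0;                       /* another processor holds it */
    blocks[block].state = BLOCK_CACHEABLE;
    blocks[block].owner = cpu;
    return 1;
}

/* The owning processor flushes the block from its cache, freeing it. */
static void release_block(int block, int cpu)
{
    if (blocks[block].state == BLOCK_CACHEABLE && blocks[block].owner == cpu) {
        blocks[block].state = BLOCK_FREE;
        blocks[block].owner = NO_OWNER;
    }
}

int main(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++)
        blocks[b].owner = NO_OWNER;
    printf("cpu0 acquires block 3: %d\n", acquire_cacheable(3, 0));
    printf("cpu1 acquires block 3: %d\n", acquire_cacheable(3, 1));  /* refused */
    release_block(3, 0);
    printf("cpu1 acquires block 3: %d\n", acquire_cacheable(3, 1));  /* allowed */
    return 0;
}
```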