Multiport Cache Patents (Class 711/131)
-
Patent number: 7620954
Abstract: Each processor in a distributed shared memory system has an associated memory and a coherence directory. The processor that controls a memory is the Home processor. Under certain conditions, another processor may obtain exclusive control of a data block by issuing a Load Lock instruction, and obtaining a writeable copy of the data block that is stored in the cache of the Owner processor. If the Owner processor does not complete operations on the writeable copy of the data prior to the time that the data block is displaced from the cache, it issues a Victim To Shared message, thereby indicating to the Home processor that it should remain a sharer of the data block. In the event that another processor seeks exclusive rights to the same data block, the Home processor issues an Invalidate message to the Owner processor.
Type: Grant
Filed: August 8, 2001
Date of Patent: November 17, 2009
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Matthew C. Mattina, Carl Ramey, Bongjin Jung, Judson Leonard
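The directory bookkeeping behind this protocol can be sketched in a few lines. This is an illustrative model only, not the patented implementation; the class and method names (`HomeDirectory`, `victim_to_shared`, `request_exclusive`) are invented for the example.

```python
class HomeDirectory:
    """Tracks, per data block, which processor owns it exclusively
    and which processors currently share it."""

    def __init__(self):
        self.owner = {}    # block -> exclusive owner, or None
        self.sharers = {}  # block -> set of sharer ids

    def load_lock(self, block, proc):
        # Grant exclusive, writeable control of the block.
        self.owner[block] = proc
        self.sharers.setdefault(block, set())

    def victim_to_shared(self, block, proc):
        # The owner displaced the block before finishing: it gives up
        # exclusivity but stays registered as a sharer.
        if self.owner.get(block) == proc:
            self.owner[block] = None
            self.sharers[block].add(proc)

    def request_exclusive(self, block, proc):
        # Another processor wants exclusive rights: every current
        # sharer (and any old owner) must receive an Invalidate.
        targets = set(self.sharers.get(block, set()))
        old = self.owner.get(block)
        if old is not None and old != proc:
            targets.add(old)
        targets.discard(proc)
        self.sharers[block] = set()
        self.owner[block] = proc
        return targets  # destinations of Invalidate messages
```

In this toy model, a Victim To Shared message keeps the former owner on the sharer list, so a later exclusive request from another processor still generates an Invalidate for it.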
-
Patent number: 7620780
Abstract: Dynamic cache architecture for a multi-processor array. The system includes a plurality of processors, with at least one of the processors configured as a parent processor, and at least one of the processors configured as a child processor. A data cache is coupled to the parent processor, and a dual-port memory is associated with each child processor as part of a unified memory architecture. The parent processor may then dynamically distribute sub-cache components to dual-port memories based upon a scatter-gather work unit decomposition pattern. A parent cache controller reads, in response to a memory request from a child processor and an address translation pattern from the parent processor, a set of data from non-contiguous addresses of the data cache according to the address translation pattern, and writes the set of data to contiguous addresses of the dual-port memory associated with the requesting child processor.
Type: Grant
Filed: January 23, 2007
Date of Patent: November 17, 2009
Assignee: XILINX, Inc.
Inventor: James B. Anderson
-
Patent number: 7617329
Abstract: A system includes a scalability port switch (SPS) and a plurality of nodes. The SPS has a plurality of ports, each port coupled to a node. Each port is connected to a scalability port protocol distributed (SPPD). A snoop filter in the SPS tracks which nodes may be using various memory addresses. A scalability port protocol central (SPPC) is responsible for processing messages to support coherent and non-coherent transactions in the system.
Type: Grant
Filed: December 30, 2002
Date of Patent: November 10, 2009
Assignee: Intel Corporation
Inventors: Tuan M. Quach, Lily P. Looi, Kai Cheng
-
Patent number: 7613886
Abstract: Methods and apparatus provide for receiving a request from an initiating device to initiate a data transfer into a local memory for execution of one or more programs therein, the local memory being operatively coupled to a first of a plurality of parallel processors capable of operative communication with a shared memory; facilitating the data transfer into the local memory; and producing a synchronization signal indicating that the data transfer into the local memory has been completed.
Type: Grant
Filed: February 8, 2005
Date of Patent: November 3, 2009
Assignee: Sony Computer Entertainment Inc.
Inventor: Takeshi Yamazaki
-
Patent number: 7613065
Abstract: In a multi-port memory device, a plurality of ports simultaneously access a plurality of banks through global data buses. A data conflict detector compares valid data signals input from the plurality of ports through the global data buses to the plurality of banks, and detects a data conflict caused when the valid data signals are simultaneously input to the same bank.
Type: Grant
Filed: September 28, 2006
Date of Patent: November 3, 2009
Assignee: Hynix Semiconductor, Inc.
Inventor: Jin-Il Chung
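The core check described here — flag any bank that receives more than one valid request in the same cycle — can be modelled in a few lines. This is a software sketch for illustration (the patent describes hardware); the function name and tuple layout are assumptions.

```python
def detect_conflicts(requests):
    """requests: list of (port, bank, valid) tuples for one cycle.
    Returns the set of banks targeted by more than one valid request."""
    per_bank = {}
    for port, bank, valid in requests:
        if valid:  # only asserted valid signals can conflict
            per_bank.setdefault(bank, []).append(port)
    return {bank for bank, ports in per_bank.items() if len(ports) > 1}
```

A request whose valid signal is deasserted never contributes to a conflict, mirroring the comparison of valid data signals in the abstract.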
-
Patent number: 7606807
Abstract: A system is provided to improve performance of a storage system. The system comprises a multi-tier buffer cache. The buffer cache may include a global cache to store resources for servicing requests issued from one or more processes at the same time, a free cache to receive resources from the global cache and to store the received resources as free resources, and a local cache to receive free resources from the free cache and to store resources that can be accessed by a single process at one time. The system may further include a buffer cache manager to manage transferring resources from the global cache to the free cache and from the free cache to the local cache.
Type: Grant
Filed: February 14, 2006
Date of Patent: October 20, 2009
Assignee: Network Appliance, Inc.
Inventors: Jason S. Sobel, Jonathan T. Wall
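The three-tier flow (global → free → local) can be sketched as a toy model. All class, attribute, and method names here are hypothetical, invented for illustration; the patent does not specify this interface.

```python
class TieredBufferCache:
    def __init__(self, resources):
        self.global_cache = list(resources)  # shared by all processes
        self.free_cache = []                 # unowned, ready to hand out
        self.local_caches = {}               # per-process, single owner

    def release_to_free(self, n=1):
        # Manager moves up to n resources from the global tier to the
        # free tier, where they wait unowned.
        for _ in range(min(n, len(self.global_cache))):
            self.free_cache.append(self.global_cache.pop())

    def acquire_local(self, pid):
        # A single process takes a free resource for exclusive use.
        if not self.free_cache:
            self.release_to_free()
        res = self.free_cache.pop()
        self.local_caches.setdefault(pid, []).append(res)
        return res
```

The point of the middle (free) tier in this sketch is that handing a resource to one process never requires touching another process's local cache.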
-
Patent number: 7603523
Abstract: A method and apparatus for supporting cache coherency in a multiprocessor computing environment having multiple processing units, each processing unit having one or more local cache memories associated and operatively connected therewith. The method comprises providing a snoop filter device associated with each processing unit, each snoop filter device having a plurality of dedicated input ports for receiving snoop requests from dedicated memory writing sources in the multiprocessor computing environment. Each of the memory writing sources is directly connected to the dedicated input ports of all other snoop filter devices associated with all other processing units in a point-to-point interconnect fashion.
Type: Grant
Filed: February 21, 2008
Date of Patent: October 13, 2009
Assignee: International Business Machines Corporation
Inventors: Matthias A. Blumrich, Dong Chen, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Dirk I. Hoenicke, Martin Ohmacht, Valentina Salapura, Pavlos M. Vranas
-
Publication number: 20090228659
Abstract: A processor and a computing system are provided. A processor includes a processor core, and a buffer memory to read word data from a memory, the read word data including first byte data read by the processor core from the memory, and to store the read word data, wherein the buffer memory determines whether second byte data requested by the processor core is stored in the buffer memory.
Type: Application
Filed: July 21, 2008
Publication date: September 10, 2009
Inventors: Sang Suk Lee, Suk Jin Kim, Yeon Gon Cho
-
Patent number: 7577015
Abstract: In general, in one aspect, the disclosure describes an apparatus that includes a memory device having a plurality of memory cells. An inverter is used to invert data and tag information destined for the memory device. A register is used to capture the inverted data and tag information. A write inverted value logic is used to determine when to enable writing the inverted data and tag information from the register to the memory device. When inverted data and tag information is written to a memory cell, the memory cell is invalidated.
Type: Grant
Filed: March 30, 2007
Date of Patent: August 18, 2009
Assignee: Intel Corporation
Inventors: Jaume Abella, Xavier Vera, Javier Carretero Casado, Jose-Alejandro Pineiro, Antonio Gonzalez
-
Patent number: 7571281
Abstract: In one embodiment, an apparatus includes an input port to receive a request to determine whether data units are stored in the cache, as well as an output port to generate look-ups for the pool of tags. The apparatus also includes a look-up filter coupled to the input and output ports, which operates to filter out superfluous look-ups for the data units, thereby forming filtered look-ups. Advantageously, the look-up filter can filter out superfluous look-ups to at least reduce the quantity of look-up operations associated with the request, thereby reducing stalling associated with multiple look-up operations. In a specific embodiment, the look-up filter can include a data unit grouping detector and a look-up suppressor.
Type: Grant
Filed: June 2, 2006
Date of Patent: August 4, 2009
Assignee: Nvidia Corporation
Inventor: Sameer M. Gauria
-
Patent number: 7562193
Abstract: The invention relates to a memory unit with at least two memory areas for storing data, first terminals for accessing data within the memory areas, and second terminals for accessing data within the memory areas. To provide multi-purpose access to the memory, the memory unit provides at least two access control means for selectively providing either sole addressing and accessing of data through one of the terminals, or individual addressing and accessing of data through each of the terminals, respectively.
Type: Grant
Filed: April 19, 2004
Date of Patent: July 14, 2009
Assignee: Nokia Corporation
Inventors: Matti Floman, Jani Klint
-
Patent number: 7562191
Abstract: Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way set layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way set layer number. If the layer number is equal to the way set layer number, the cache refill circuit repeats the above steps for additional ways until the block of memory is written to the cache.
Type: Grant
Filed: November 15, 2005
Date of Patent: July 14, 2009
Assignee: MIPS Technologies, Inc.
Inventor: Matthias Knoth
-
Publication number: 20090177843
Abstract: The present invention is directed to a system and method which employ two memory access paths: 1) a cache-access path in which block data is fetched from main memory for loading to a cache, and 2) a direct-access path in which individually-addressed data is fetched from main memory. The system may comprise one or more processor cores that utilize the cache-access path for accessing data. The system may further comprise at least one heterogeneous functional unit that is operable to utilize the direct-access path for accessing data. In certain embodiments, the one or more processor cores, cache, and the at least one heterogeneous functional unit may be included on a common semiconductor die (e.g., as part of an integrated circuit). Embodiments of the present invention enable improved system performance by selectively employing the cache-access path for certain instructions while selectively employing the direct-access path for other instructions.
Type: Application
Filed: January 4, 2008
Publication date: July 9, 2009
Applicant: Convey Computer
Inventors: Steven J. Wallach, Tony Brewer
-
Patent number: 7536512
Abstract: The eviction candidate sorting tool (ECST) is used with existing eviction algorithms that utilize a database for tracking objects stored in a cache. Rather than storing all the metadata associated with an object in a cache, the ECST extracts only certain attributes from the metadata, creating an “evict table” listing all the cached objects and the chosen attributes, or “classes.” The table can be sorted by class based on an eviction algorithm. An eviction mechanism can use the sorted table to identify candidates for eviction.
Type: Grant
Filed: September 14, 2006
Date of Patent: May 19, 2009
Assignee: International Business Machines Corporation
Inventors: Madhu Chetuparambil, Andrew C. Chow, Andrew Ivory, Nirmala Kodali
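The evict-table idea — keep only a few attributes per object, then sort on one to rank candidates — can be sketched directly. The field names (`last_access`, `size`) and function names below are invented for the example, not taken from the patent.

```python
def build_evict_table(objects, classes):
    """objects: dict of name -> full metadata dict.
    classes: the attribute names to extract into the slim table."""
    return [
        {"name": name, **{c: meta[c] for c in classes}}
        for name, meta in objects.items()
    ]

def eviction_candidates(table, sort_class, count):
    """Sort the slim table on one class and return the top candidates,
    e.g. sorting by last-access time evicts the oldest objects first."""
    ranked = sorted(table, key=lambda row: row[sort_class])
    return [row["name"] for row in ranked[:count]]
```

The saving in this sketch is that the sort touches only the extracted columns, never the full per-object metadata.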
-
Patent number: 7536499
Abstract: A memory access control device enabling freer access from a plurality of ports to a plurality of memories, and a processing system having the same, are provided. From among addresses generated at a read address generation unit and addresses input from an external bus, the address which is supplied to a local memory (LM) is selected in accordance with configuration information supplied by a configuration information storage unit. Addresses correspond to ports; their lower bits select the storage region inside the LM, and their upper bits select the LM to be accessed. The read data to be output to a port is selected from among the read data of a plurality of LMs in accordance with the upper bits of this address.
Type: Grant
Filed: May 21, 2004
Date of Patent: May 19, 2009
Assignee: Sony Corporation
Inventor: Ikuhiro Tamura
-
Patent number: 7536400
Abstract: Maintaining data used for performing “what-if” analysis is disclosed. The systems and methods of the invention define an efficient mechanism allowing a user to specify how base values from a database are to be changed. The changes can be held in a local delta cache which is only exposed to a single user, leaving the base data unchanged. The changes can also be maintained in a write-back partition, which results in the changes being exposed to all clients of the database. Values in the write-back partition can be selectively rolled back if required.
Type: Grant
Filed: September 6, 2005
Date of Patent: May 19, 2009
Assignee: Microsoft Corporation
Inventors: Mosha Pasumansky, Amir Netz
-
Patent number: 7529139
Abstract: Method and memory circuits capable of allowing M memory addresses of an N-port memory to be accessed concurrently, wherein N and M are both natural numbers and M is larger than N. Accordingly, a higher-order multi-port memory can be replaced by a lower-order multi-port or single-port memory. Consequently, a smaller chip area or a higher data access rate can be achieved.
Type: Grant
Filed: January 26, 2007
Date of Patent: May 5, 2009
Assignee: MediaTek, Inc.
Inventors: Yu-Wen Huang, Chih-Wei Hsu, Chih-Hui Kuo
-
Patent number: 7526612
Abstract: A multiport cache memory, and an access control system for the multiport cache memory, are provided; the cache memory reduces the probability of bank contention that can occur when a plurality of read operations are executed simultaneously.
Type: Grant
Filed: November 9, 2005
Date of Patent: April 28, 2009
Assignee: NEC Corporation
Inventor: Satoshi Nakazato
-
Patent number: 7519779
Abstract: Method and apparatus for reading the internal address space of an adapter in a system during a dump are described. The adapter includes a control port and a data port used as channels for exchanging control messages and dump data between the adapter and the system. The system starts the dump by sending to the data port a specification of a block of the adapter's internal address space. In response, the adapter sends dump data portions to a system buffer via the data port.
Type: Grant
Filed: August 26, 2002
Date of Patent: April 14, 2009
Assignee: International Business Machines Corporation
Inventor: Brian E. Bakke
-
Patent number: 7519770
Abstract: A disk array controller which includes a channel interface unit for connecting a host computer through a first type channel, a channel interface unit for connecting a host computer through a second type channel, a plurality of disk interface units each provided with an interface to a magnetic disk unit, a cache memory unit, and a shared memory unit. The number of access paths connected to the cache memory unit is less than the number of access paths connected to the shared memory unit.
Type: Grant
Filed: August 21, 2007
Date of Patent: April 14, 2009
Assignee: Hitachi, Ltd.
Inventors: Kazuhisa Fujimoto, Atsushi Tanaka, Akira Fujibayashi
-
Publication number: 20090083491
Abstract: A storage system may include storage, a main pipeline to carry data for the storage, and a store pipeline to carry data for the storage. The storage system may also include a controller to prioritize data storage requests for the storage based upon available interleaves and which pipeline is associated with the data storage requests.
Type: Application
Filed: September 26, 2007
Publication date: March 26, 2009
Applicant: International Business Machines Corporation
Inventors: Derrin M. Berger, Michael A. Blake, Garrett M. Drapala, Pak-kin Mak
-
Publication number: 20090019266
Abstract: With respect to memory access instructions contained in an internal representation program, an information processing apparatus generates a load cache instruction, a cache hit judgment instruction, and a cache miss instruction that is executed in correspondence with a result of a judgment process performed according to the cache hit judgment instruction. In a case where the internal representation program contains a plurality of memory access instructions that may access the same cache line in a cache memory, the information processing apparatus generates a combine instruction instructing that the judgment results of the judgment processes performed according to the cache hit judgment instruction should be combined into one judgment result. The information processing apparatus outputs an output program that contains these generated instructions.
Type: Application
Filed: February 26, 2008
Publication date: January 15, 2009
Applicant: Kabushiki Kaisha Toshiba
Inventor: Seiji Maeda
-
Publication number: 20090006760
Abstract: A design structure is provided for a dual-mode memory chip supporting a first operation mode, in which received data access commands contain chip select data to identify the chip addressed by the command and control logic in the memory chip determines whether the command is addressed to the chip, and a second operation mode, in which the received data access command addresses a set of multiple chips. Preferably, the first mode supports a daisy-chained configuration of memory chips. Preferably, the second mode supports a hierarchical interleaved memory subsystem, in which each addressable set of chips is configured as a tree, command and write data being propagated down the tree, with the number of chips increasing at each succeeding level of the tree.
Type: Application
Filed: March 21, 2008
Publication date: January 1, 2009
Applicant: International Business Machines Corporation
Inventors: Gerald K. Bartley, John M. Borkenhagen, Philip Raymond Germann
-
Patent number: 7467261
Abstract: A dual storage apparatus is provided that comprises first and second memories for respectively retaining a set of identical data, and a selector for selecting either of the two sets of data read from the first and the second memory based on a read control signal inputted into the selector. The apparatus has a request management unit which, when the read control signal has been inputted, attaches an identifier for identifying the read control signal to the inputted read control signal and outputs the signal and the identifier, and a plurality of memory control units for each of the first and the second memories. The dual storage apparatus detects a synchronization error by verifying coincidence of the identifiers attached by the request management unit, and controls the selector such that the data from a system from which no synchronization error has been detected is selected.
Type: Grant
Filed: September 29, 2005
Date of Patent: December 16, 2008
Assignee: Fujitsu Limited
Inventors: Toshikazu Ueki, Takaharu Ishizuka, Takao Matsui, Makoto Hataida, Yuka Hosokawa
-
Publication number: 20080276046
Abstract: A multi-port cache memory (200) comprising: a plurality of input ports (201, 203) for inputting a plurality of addresses, at least part of each address indexing a plurality of ways; a plurality of output ports (227, 229) for outputting data associated with each of said plurality of addresses; a plurality of memory blocks (219a, 219b, 219c) for storing said plurality of ways, each memory block comprising a single input port (217a, 217b, 217c, 217d) and storing said ways; means (209, 215, 223, 225) for selecting one of said plurality of ways such that data of said selected way is output on an associated output port (227, 229) of said cache memory (200); a predictor (211) for predicting which plurality of ways will be indexed by each of said plurality of addresses; and means (213a, 213b, 213c, 213d) for indexing said plurality of ways based on the predicted ways.
Type: Application
Filed: June 2, 2006
Publication date: November 6, 2008
Applicant: NXP B.V.
Inventors: Cornelis M. Moerman, Math Verstraelen
-
Patent number: 7447812
Abstract: Multi-queue first-in first-out (FIFO) memory devices include multi-port register files that provide write count and read count flow-through when the write and read queues are equivalent. According to some of these embodiments, a multi-queue FIFO memory device includes a write flag counter register file that is configured to support flow-through of write counter updates to at least one read port of the write flag counter register file. This flow-through occurs when an active write queue and an active read queue within the FIFO memory device are the same. A read flag counter register file is also provided, which supports flow-through of read counter updates to at least one read port of the read flag counter register file when the active write queue and the active read queue are the same.
Type: Grant
Filed: March 15, 2005
Date of Patent: November 4, 2008
Assignee: Integrated Device Technology, Inc.
Inventors: Jason Zhi-Cheng Mo, Prashant Shamarao, Jianghui Su
-
Publication number: 20080256297
Abstract: A device that includes multiple processors connected to multiple level-one cache units. The device also includes a multi-port high-level cache unit that includes a first modular interconnect, a second modular interconnect, and multiple high-level cache paths, wherein the multiple high-level cache paths comprise multiple concurrently accessible interleaved high-level cache units. Conveniently, the device also includes at least one non-cacheable path. A method for retrieving information from a cache includes concurrently receiving, by a first modular interconnect of a multiple-port high-level cache unit, requests to retrieve information. The method is characterized by providing information from at least two paths out of the multiple high-level cache paths if at least two high-level cache hits occur, and providing information via a second modular interconnect if a high-level cache miss occurs.
Type: Application
Filed: November 17, 2005
Publication date: October 16, 2008
Applicant: Freescale Semiconductor, Inc.
Inventors: Ron Bercovich, Odi Dahan, Norman Goldstein, Yehuda Nowogrodski
-
Patent number: 7421559
Abstract: A synchronous multi-port memory including a plurality of ports coupled with a memory array, each of the plurality of ports including a delay stage to delay a memory access while a memory access arbitration is performed. The synchronous multi-port memory may also include selection logic coupled with the plurality of ports and the memory array to arbitrate among a plurality of contending memory access requests, to select a prevailing memory access request, and to implement memory access controls.
Type: Grant
Filed: December 16, 2004
Date of Patent: September 2, 2008
Assignee: Cypress Semiconductor Corporation
Inventor: Rishi Yadav
-
Publication number: 20080209129
Abstract: A cache memory system and method for supporting multiple simultaneous store operations using a plurality of tag memories is provided. The cache data system further provides a plurality of multiple simultaneous cache store functions, along with a single cache load function that is simultaneous with the store functions. Embodiments create a cache memory wherein the cache write buffer does not operate as a bottleneck for data store operations into a cache memory system or device.
Type: Application
Filed: October 18, 2006
Publication date: August 28, 2008
Applicant: NXP B.V.
Inventors: Jan-Willem Van De Waerdt, Carlos Basto
-
Patent number: 7401186
Abstract: Method, system and computer program product for tracking changes in an L1 data cache directory. A method for tracking changes in an L1 data cache directory determines if data to be written to the L1 data cache is to be written to an address to be changed from an old address to a new address. If it is determined that the data to be written is to be written to an address to be changed, a determination is made whether the data to be written is associated with the old address or the new address. If it is determined that the data is to be written to the new address, the data is allowed to be written to the new address following a prescribed delay after the address to be changed is changed. The method is preferably implemented in a system that provides a Store Queue (STQU) design that includes a Content Addressable Memory (CAM)-based store address tracking mechanism with early and late write CAM ports. The method eliminates time windows and the need for an extra copy of the L1 data cache directory.
Type: Grant
Filed: February 9, 2005
Date of Patent: July 15, 2008
Assignee: International Business Machines Corporation
Inventors: Sheldon B. Levenstein, Anthony Saporito
-
Publication number: 20080168231
Abstract: A memory includes at least one write bit line and a plurality of memory cells. The at least one write bit line is configured to carry a write bit signal. The plurality of memory cells are arranged in a column and are configured to be selectively coupled to the at least one write bit line. The plurality of memory cells are configured to be selectively read or written in a first phase of a cycle and selectively read or written in a second phase of the cycle using the at least one write bit line.
Type: Application
Filed: January 4, 2007
Publication date: July 10, 2008
Inventor: Ravindraraj Ramaraju
-
Patent number: 7363430
Abstract: A system may include M cache entries, each of the M cache entries to transmit a signal indicating a read from or a write to the cache entry and comprising a data register and a memory address register, and K layers of decision cells, where K = log2(M). The K layers comprise: a first layer of M/2 decision cells to indicate the other one of the respective two of the M cache entries and to transmit a hit signal in response to the signal; a second layer of M/4 decision cells to enable the other one of the respective two of the M/2 decision cells of the first layer and to transmit a second hit signal in response to the signal; a (K−1)th layer of two decision cells to enable the other one of the respective two decision cells of the (K−2)th layer and to transmit a third hit signal in response to the second hit signal; and a Kth layer of a root decision cell to enable the other one of the respective two decision cells of the (K−1)th layer in response to the third hit signal.
Type: Grant
Filed: April 6, 2005
Date of Patent: April 22, 2008
Assignee: Intel Corporation
Inventors: Samie B. Samaan, Avinash Sodani
-
Patent number: 7360024
Abstract: A multi-port instruction/data integrated cache, which is provided between a parallel processor and a main memory and stores a part of the instructions and data held in the main memory, has a plurality of banks and a plurality of ports, the ports including an instruction port unit consisting of at least one instruction port used to access instructions from the parallel processor and a data port unit consisting of at least one data port used to access data from the parallel processor. Further, the data width which can be specified to a bank from the instruction port is set larger than the data width which can be specified to a bank from the data port.
Type: Grant
Filed: October 15, 2003
Date of Patent: April 15, 2008
Assignee: Semiconductor Technology Academic Research Center
Inventors: Tetsuo Hironaka, Hans Jürgen Mattausch, Tetsushi Koide, Tai Hirakawa, Koh Johguchi
-
Patent number: 7346739
Abstract: A first-in-first-out (FIFO) memory system and a method for providing the same are described. In one example, a dual-port memory circuit includes first storage locations for defining a plurality of FIFOs, second storage locations for storing status information for each of the FIFOs, a first port, and a second port. The first port includes a write data terminal for receiving write data and a write address terminal for receiving write addresses. Each of the write addresses includes a first portion for selecting one of the FIFOs and a second portion for selecting a storage location in the dual-port memory circuit. The second port includes a read data terminal for providing read data and a read address terminal for receiving read addresses. Each of the read addresses includes a first portion for selecting one of the FIFOs and a second portion for selecting a storage location in the dual-port memory circuit.
Type: Grant
Filed: November 19, 2004
Date of Patent: March 18, 2008
Assignee: Xilinx, Inc.
Inventors: Kurt M. Conover, John H. Linn, Anita L. Schreiber
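The address split described here — one portion of the address selects a FIFO, the other selects a slot within it — can be sketched with simple bit arithmetic. The bit widths below are assumptions chosen for the example; the patent does not fix them.

```python
FIFO_BITS = 2   # assumed: upper bits select one of 4 FIFOs
SLOT_BITS = 4   # assumed: lower bits select one of 16 slots per FIFO

def split_address(addr):
    """Decompose a port address into (fifo index, slot index)."""
    fifo = addr >> SLOT_BITS
    slot = addr & ((1 << SLOT_BITS) - 1)
    return fifo, slot

def make_address(fifo, slot):
    """Compose a port address from a FIFO index and a slot index."""
    return (fifo << SLOT_BITS) | slot
```

Because the split is pure bit slicing, the write port and the read port can decode their addresses independently, which is what lets one dual-port memory host several logical FIFOs.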
-
Patent number: 7340562
Abstract: A distributed data cache includes a number of cache memory units or register files, each having a number of cache lines. Data buses are connected with the cache memory units; each data bus is connected with a different cache line from each cache memory unit. A number of data address generators are connected with a memory unit and the data buses. The data address generators retrieve data values from the memory unit and communicate the data values to the data buses without latency, and are adapted to simultaneously communicate each of the data values to a different data bus. The cache memory units are adapted to simultaneously load data values from the data buses, with each data value loaded into a different cache line without latency.
Type: Grant
Filed: July 24, 2003
Date of Patent: March 4, 2008
Assignee: NVIDIA Corporation
Inventor: Amit Ramchandran
-
Patent number: 7337372
Abstract: Multi-hit errors in a processor cache are detected by a multi-hit detection circuit coupled to the hit lines of the cache. The multi-hit detection circuit compares pairs of hit signals on the hit lines to determine if any two hit signals both indicate a hit. If multiple hits are detected, an error flag indicating the occurrence of multiple hits is generated.
Type: Grant
Filed: August 11, 2003
Date of Patent: February 26, 2008
Assignee: Intel Corporation
Inventor: Kevin X. Zhang
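The pairwise comparison is easy to model: raise the error flag if any two hit lines are asserted at once. In hardware this is a small tree of AND/OR gates over each pair; the software sketch below is for illustration only, and the function name is an assumption.

```python
from itertools import combinations

def multi_hit(hit_lines):
    """hit_lines: list of booleans, one per cache way.
    Returns True if two or more hit signals are asserted together,
    which in a set-associative cache indicates a tag-match error."""
    return any(a and b for a, b in combinations(hit_lines, 2))
```

A single asserted hit line is the normal case and raises no flag; only the simultaneous assertion of a pair does.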
-
Publication number: 20080016282
Abstract: A cache memory system includes: a plurality of cache lines, each including a data section for storing data of main memory and a line classification section for storing identification information that indicates whether the data stored in the data section is for instruction processing or for data processing; a cache hit determination section for determining whether or not there is a cache hit by using the identification information stored in each of the cache lines; and a cache update section for updating one of the cache lines that has to be updated, according to the result of the determination.
Type: Application
Filed: June 27, 2007
Publication date: January 17, 2008
Inventor: Kazuhiko Sakamoto
-
Patent number: 7320053
Abstract: A cache memory system may be organized as a set of numbered banks. If two clients need to access the cache, a contention situation may be resolved by a contention resolution process. The contention resolution process may be based on relative priorities of the clients.
Type: Grant
Filed: October 22, 2004
Date of Patent: January 15, 2008
Assignee: Intel Corporation
Inventors: Prasoonkumar Surti, Brian Ruttenberg, Aditya Navale
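A minimal sketch of priority-based contention resolution: when two clients target the same numbered bank in the same cycle, the higher-priority client wins. The client names and the priority table below are hypothetical, invented for the example.

```python
def resolve(requests, priority):
    """requests: list of (client, bank) pairs for one cycle.
    priority: dict of client -> priority, higher wins.
    Returns dict of bank -> winning client."""
    winners = {}
    for client, bank in requests:
        current = winners.get(bank)
        if current is None or priority[client] > priority[current]:
            winners[bank] = client
    return winners
```

Losing clients would simply retry in a later cycle; that retry policy is outside this sketch.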
-
Patent number: 7318122
Abstract: A disk array controller which includes a channel interface unit for connecting a host computer through a first type channel, a channel interface unit for connecting a host computer through a second type channel, a plurality of disk interface units each provided with an interface to a magnetic disk unit, a cache memory unit, and a shared memory unit. The number of access paths connected to the cache memory unit is less than the number of access paths connected to the shared memory unit.
Type: Grant
Filed: October 2, 2006
Date of Patent: January 8, 2008
Assignee: Hitachi, Ltd.
Inventors: Kazuhisa Fujimoto, Atsushi Tanaka, Akira Fujibayashi
-
Patent number: 7307912Abstract: Systems and methods disclosed herein provide for variable data width memory. For example, in accordance with an embodiment of the present invention, a technique for doubling a width of a memory is disclosed, without having to increase a width of the internal data path or the number of input/output pads.Type: GrantFiled: October 25, 2004Date of Patent: December 11, 2007Assignee: Lattice Semiconductor CorporationInventors: Hemanshu T. Vernenker, Margaret C. Tait, Christopher Hume, Nhon Nguyen, Allen White, Tim Swensen, Sam Tsai, Steve Eplett
-
Patent number: 7234022Abstract: Various embodiments of systems and methods for performing accumulation operations on block operands are disclosed. In one embodiment, an apparatus may include a memory, a functional unit that performs an operation on block operands, and a cache accumulator. The cache accumulator is configured to provide a block operand to the functional unit and to store the block result generated by the functional unit. The cache accumulator is configured to provide the block operand to the functional unit in response to an instruction that uses an address in the memory to identify the block operand. Thus, the cache accumulator behaves as both a cache and an accumulator.Type: GrantFiled: December 19, 2001Date of Patent: June 19, 2007Assignee: Sun Microsystems, Inc.Inventor: Fay Chong, Jr.
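A minimal sketch of the dual cache/accumulator behavior, assuming a toy memory model: the structure serves a block operand for a memory address (filling on a miss) and captures the functional unit's block result back into the same line. All names and the element-wise addition default are illustrative assumptions.

```python
class CacheAccumulator:
    """Behaves as both a cache and an accumulator (simplified sketch)."""

    def __init__(self, backing):
        self.backing = backing   # models main memory: addr -> list of values
        self.lines = {}          # cached/accumulated blocks, keyed by address

    def operand(self, addr):
        """Provide the block operand identified by a memory address."""
        if addr not in self.lines:
            self.lines[addr] = list(self.backing[addr])  # fill on miss
        return self.lines[addr]

    def accumulate(self, addr, block, op=lambda a, b: a + b):
        """Apply op element-wise and store the block result in place."""
        acc = self.operand(addr)
        self.lines[addr] = [op(a, b) for a, b in zip(acc, block)]
        return self.lines[addr]
```

Repeated calls accumulate into the cached line without touching backing memory, which is what lets one structure play both roles.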
-
Patent number: 7219185Abstract: A processor having the capability to dispatch multiple parallel operations, including multiple load operations, accesses a cache which is divided into banks. Each bank supports a limited number of simultaneous read and write access operations. A bank prediction field is associated with each memory access operation. Memory access operations are selected for dispatch so that they are predicted to be non-conflicting. Preferably, the processor automatically maintains a bank predict value based on previous bank accesses, and a confirmation value indicating a degree of confidence in the bank prediction. The confirmation value is preferably an up-or-down counter which is incremented with each correct prediction and decremented with each incorrect prediction.Type: GrantFiled: April 22, 2004Date of Patent: May 15, 2007Assignee: International Business Machines CorporationInventor: David Arnold Luick
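The predict-and-confirm mechanism above can be sketched as a small state machine: a saturating up/down counter tracks confidence, and the predicted bank is only retrained once confidence bottoms out. The class and field names, the saturation limit, and the retrain-at-zero policy are illustrative assumptions.

```python
class BankPredictor:
    """Bank predictor with a saturating up/down confirmation counter."""

    def __init__(self, max_conf=3):
        self.predicted_bank = 0
        self.confirmation = 0      # degree of confidence in the prediction
        self.max_conf = max_conf

    def update(self, actual_bank):
        """Increment on a correct prediction, decrement on an incorrect one."""
        if actual_bank == self.predicted_bank:
            self.confirmation = min(self.confirmation + 1, self.max_conf)
        else:
            self.confirmation = max(self.confirmation - 1, 0)
            if self.confirmation == 0:
                self.predicted_bank = actual_bank  # retrain once confidence is gone
```

The dispatcher would consult `predicted_bank` to group load operations that are predicted not to collide on the same cache bank.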
-
Patent number: 7216206Abstract: A control apparatus of a storage unit having first and second communication ports for conducting communication with a computer; first and second processors that control the first and second communication ports, respectively; first and second storage devices that store first and second queues for commands sent from the computer to the first and second communication ports, respectively; and a first nonvolatile memory that the first processor accesses. The first and second processors execute the commands stored in the first and second queues, respectively, to control the communications with the computer. The apparatus comprises a unit causing the second processor to execute the commands stored in the first queue, and a unit changing data stored in the first memory while the second processor is executing those commands.Type: GrantFiled: August 22, 2006Date of Patent: May 8, 2007Assignee: Hitachi, Ltd.Inventors: Katsuhiro Uchiumi, Hiroshi Kuwabara, Yoshio Mitsuoka
-
Patent number: 7213104Abstract: A disk array controller which includes a channel interface unit for connecting a host computer through a first type channel, a channel interface unit for connecting a host computer through a second type channel, a plurality of disk interface units provided with an interface with a magnetic disk unit respectively, a cache memory unit, and a shared memory unit. The number of access paths connected to said cache memory unit is less than the number of access paths connected to the shared memory unit.Type: GrantFiled: November 18, 2004Date of Patent: May 1, 2007Assignee: Hitachi, Ltd.Inventors: Kazuhisa Fujimoto, Atsushi Tanaka, Akira Fujibayashi
-
Patent number: 7194581Abstract: A memory agent may include a first port and a second port, wherein the memory agent is capable of detecting the presence of another memory agent on the second port. A method may include performing a presence detect operation on a first port of a memory agent, and reporting the results of the presence detect operation through a second port of the memory agent.Type: GrantFiled: June 3, 2003Date of Patent: March 20, 2007Assignee: Intel CorporationInventor: Pete D. Vogt
-
Patent number: 7178000Abstract: A system for storing and retrieving data provided by the system on a system bus in a sequence at a predetermined system data rate. The system includes a system memory controller for enabling a system memory to store and retrieve the data at a rate twice the system data rate. Also provided is a trace buffer having a dual port random access memory. A trace buffer control system enables the data on the system bus, fed concurrently to a pair of data ports of the dual port random access memory, to be stored in the dual port random access memory at the predetermined system data rate, and enables such stored data to be retrieved from the dual port random access memory in the same sequence as the data was provided on the system data bus.Type: GrantFiled: March 18, 2004Date of Patent: February 13, 2007Assignee: EMC CorporationInventor: Krzysztof Dobecki
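Behaviorally, the same-sequence capture-and-retrieve property of the trace buffer amounts to a FIFO over the dual-port RAM; the sketch below models only that ordering property, abstracting away the dual ports and clocking. The class name and overwrite-when-full policy are assumptions.

```python
from collections import deque

class TraceBuffer:
    """Trace buffer modeled as a bounded FIFO: words captured from the
    system bus are read back in the same sequence they were stored."""

    def __init__(self, depth):
        self.ram = deque(maxlen=depth)  # stands in for the dual-port RAM

    def capture(self, word):
        self.ram.append(word)           # oldest entries overwritten when full

    def dump(self):
        return list(self.ram)           # same order as captured
```

A bounded, continuously overwritten buffer is the usual shape for trace capture, since only the most recent bus activity matters when debugging.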
-
Patent number: 7162591Abstract: Methods and apparatus are provided for closely coupling a dedicated memory port to a processor core while allowing external components access to the dedicated memory. A processor core such as a processor core on a programmable chip is provided with dedicated read access to a dual ported memory. Write access is arbitrated between processor core write access and read/write access by external components. A dedicated memory port is particularly beneficial in digital signal processing applications.Type: GrantFiled: January 6, 2004Date of Patent: January 9, 2007Assignee: Altera CorporationInventors: Tracy Miranda, Steven Perry
-
Patent number: 7152138Abstract: A system-on-a-chip is described herein. The system-on-a-chip includes a microprocessor, a non-volatile imperfect semiconductor memory device, and a memory controller. The memory controller is configured to transfer data between the microprocessor and the non-volatile imperfect semiconductor memory device.Type: GrantFiled: January 30, 2004Date of Patent: December 19, 2006Assignee: Hewlett-Packard Development Company, L.P.Inventors: Andrew M. Spencer, Tracy Ann Sauerwein
-
Patent number: 7142541Abstract: According to some embodiments, routing information for an information packet is determined in accordance with a destination address and a device address.Type: GrantFiled: August 9, 2002Date of Patent: November 28, 2006Assignee: Intel CorporationInventors: Alok Kumar, Raj Yavatkar
-
Patent number: 7124236Abstract: A microprocessor including a level two cache memory including asynchronously accessible cache blocks. The microprocessor includes an execution unit coupled to a cache memory subsystem which includes a plurality of storage blocks, each configured to store a plurality of data units. Each of the plurality of storage blocks may be accessed asynchronously. In addition, the cache subsystem includes a plurality of tag units which are coupled to the plurality of storage blocks. Each of the tag units may be configured to store a plurality of tags each including an address tag value which corresponds to a given unit of data stored within the plurality of storage blocks. Each of the plurality of tag units may be accessed synchronously.Type: GrantFiled: November 26, 2002Date of Patent: October 17, 2006Assignee: Advanced Micro Devices, Inc.Inventors: Teik-Chung Tan, Mitchell Alsup, Jerry D. Moench