Addressing Of Memory Level In Which Access To Desired Data Or Data Block Requires Associative Addressing Means, E.g., Cache, Etc. (EPO) Patents (Class 711/E12.017)

  • Patent number: 8255633
    Abstract: A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address.
    Type: Grant
    Filed: January 29, 2010
    Date of Patent: August 28, 2012
    Assignee: International Business Machines Corporation
    Inventors: Peter Boyle, Norman Christ, Alan Gara, Changhoan Kim, Robert Mawhinney, Martin Ohmacht, Krishnan Sugavanam
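    Sketch: a minimal C illustration of the matching step described in the abstract above; the engine layout, prefetch depth, and printf stand-in are assumptions of this sketch, not the patent's design.
    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PREFETCH_DEPTH 4  /* how far ahead of a match to prefetch (assumed) */

    typedef struct {
        const uint64_t *list;  /* a list: an arbitrary sequence of prior cache miss addresses */
        size_t len;
        size_t pos;            /* current list address */
    } list_prefetch_engine;

    /* Stand-in for issuing a hardware prefetch request. */
    static void issue_prefetch(uint64_t addr)
    {
        printf("prefetch 0x%llx\n", (unsigned long long)addr);
    }

    /* Receive a current cache miss address: check validity, compare it with the
     * list address, and prefetch data according to the list on a match. */
    static void on_cache_miss(list_prefetch_engine *e, uint64_t miss, bool valid)
    {
        if (!valid || e->pos >= e->len)
            return;
        if (miss == e->list[e->pos]) {
            for (size_t i = 1; i <= PREFETCH_DEPTH && e->pos + i < e->len; i++)
                issue_prefetch(e->list[e->pos + i]);
            e->pos++;  /* advance to the next list address */
        }
    }

    int main(void)
    {
        const uint64_t prior_misses[] = { 0x1000, 0x5040, 0x2380, 0x9100, 0x4440 };
        list_prefetch_engine e = { prior_misses, 5, 0 };
        on_cache_miss(&e, 0x1000, true);   /* match: prefetches the next entries */
        on_cache_miss(&e, 0x7777, true);   /* no match: nothing issued */
        return 0;
    }
    ```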
  • Publication number: 20120215959
    Abstract: Disclosed is a cache memory controlling method for reducing cache latency. The method includes sending a target address to a tag memory storing tag data and sending the target address to a second group data memory that has a latency larger than that of a first group data memory. The method further includes generating and outputting a cache signal that indicates whether the first group data memory includes target data and that indicates whether the second group data memory includes target data. The target address is sent to the second group data memory before the output of the cache signal. With an exemplary embodiment, cache latency is minimized or reduced, and the performance of a cache memory system is improved.
    Type: Application
    Filed: January 3, 2012
    Publication date: August 23, 2012
    Inventors: Seok-Il Kwon, Hoijin Lee
  • Publication number: 20120215979
    Abstract: A cache is provided, including a data array having a plurality of entries configured to store a plurality of different types of data, and a tag array having a plurality of entries and configured to store a tag of the data stored at a corresponding entry in the data array and further configured to store an identification of the type of data stored in the corresponding entry in the data array.
    Type: Application
    Filed: February 21, 2011
    Publication date: August 23, 2012
    Applicant: ADVANCED MICRO DEVICES, INC.
    Inventor: Douglas HUNT
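    Sketch: one plausible C layout for a tag entry that stores both the tag and the type of the cached data, per the abstract above; field widths and type codes are assumptions of this sketch.
    ```c
    #include <stdint.h>

    enum data_type { DT_INSTRUCTION, DT_OPERAND, DT_PAGE_TABLE, DT_PREFETCH };

    typedef struct {
        uint32_t tag   : 20;  /* tag of the data stored at the corresponding data-array entry */
        uint32_t valid : 1;
        uint32_t type  : 3;   /* identification of the type of data stored there */
    } tag_entry;

    typedef struct {
        tag_entry tags[256];      /* tag array */
        uint8_t   data[256][64];  /* data array holding different types of data */
    } typed_cache;

    /* A probe can require both a tag match and an expected data type. */
    static int lookup(const typed_cache *c, unsigned idx, uint32_t tag, enum data_type t)
    {
        const tag_entry *e = &c->tags[idx];
        return e->valid && e->tag == tag && e->type == (unsigned)t;
    }
    ```
    Keeping the type beside the tag lets one array hold heterogeneous data while a lookup filters by kind in the same probe.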
  • Publication number: 20120215981
    Abstract: A method of operating a storage system comprises detecting a cut in an external power supply, switching to a local power supply, preventing receipt of input/output commands, copying content of cache memory to a local storage device and marking the content of the cache memory that has been copied to the local storage device. When a resumption of the external power supply is detected, the method continues by charging the local power supply, copying the content of the local storage device to the cache memory, processing the content of the cache memory with respect to at least one storage volume and receiving input/output commands. When detecting a second cut in the external power supply, the system switches to the local power supply, prevents receipt of input/output commands, and copies to the local storage device only the content of the cache memory that is not marked as present.
    Type: Application
    Filed: May 1, 2012
    Publication date: August 23, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gordon D. HUTCHISON, Paul J. QUELCH
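    Sketch: the marking scheme from the abstract above in C, under the assumption that a per-line "saved" flag survives restore and is cleared only when a line changes, so a second power cut copies only the delta.
    ```c
    #include <stdbool.h>
    #include <stddef.h>

    #define CACHE_LINES 1024

    typedef struct {
        unsigned char data[64];
        bool saved;  /* marked: already copied to the local storage device */
    } cache_line;

    static cache_line cache[CACHE_LINES];

    /* Stand-in for writing one line to the battery-backed local storage device. */
    static void write_to_local_storage(size_t idx) { (void)idx; }

    /* On a cut in the external power supply: copy unmarked content only. */
    static void dump_cache_on_power_cut(void)
    {
        for (size_t i = 0; i < CACHE_LINES; i++) {
            if (!cache[i].saved) {
                write_to_local_storage(i);
                cache[i].saved = true;
            }
        }
    }

    /* After resumption the content is copied back and processed; a line loses
     * its mark only when it is modified again, keeping later dumps incremental. */
    static void on_cache_line_updated(size_t idx) { cache[idx].saved = false; }
    ```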
  • Publication number: 20120215980
Abstract: In one example, a method of restoring data backed up in a content addressed storage system may include retrieving a recipe and appended storage addresses from a first storage node of content addressed storage, where the recipe may include instructions for generating a data structure from two or more data pieces, and the two or more data pieces may be resident in locations identified by the appended storage addresses. The example method may further include populating a cache with the appended storage addresses for the two or more data pieces. The method may also include retrieving, and populating the cache with, the two or more data pieces without looking up a storage address for any of the two or more data pieces in an index, and restoring the data structure using the retrieved two or more data pieces in the cache.
    Type: Application
    Filed: April 30, 2012
    Publication date: August 23, 2012
    Applicant: EMC CORPORATION
    Inventors: Scott C. Auchmoody, Eric W. Olsen
  • Publication number: 20120215965
    Abstract: A nonvolatile memory stores therein a plurality of partitioned translation tables which are created by partitioning a logical-to-physical address translation table in a page unit. A RAM stores therein a logical-to-physical address translation table cache for storing at least the one or more partitioned translation tables, a translation-table management table for managing the partitioned translation tables, and a cache management table for managing the logical-to-physical address translation table cache. The translation-table management table includes a cache presence-or-absence flag and a cache entry number, the cache presence-or-absence flag being used for indicating that the partitioned translation tables are stored into the logical-to-physical address translation table cache, the cache entry number being used for indicating storage destinations of the partitioned translation tables in the logical-to-physical address translation table cache.
    Type: Application
    Filed: February 14, 2012
    Publication date: August 23, 2012
    Applicant: Hitachi, Ltd.
    Inventors: Ryoichi Inada, Ryo Fujita, Takuma Nishimura, Koji Matsuda
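    Sketch: the two management tables from the abstract above in C; partition sizes, counts, and the omitted eviction policy are assumptions of this sketch.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define PAGES_PER_PARTITION 1024  /* L2P entries per partitioned table (assumed) */
    #define NUM_PARTITIONS      4096
    #define CACHE_ENTRIES       64    /* partitions resident in RAM at once (assumed) */

    /* One partitioned translation table: a page-sized slice of the full table. */
    typedef struct { uint32_t phys[PAGES_PER_PARTITION]; } l2p_partition;

    /* Translation-table management table entry: cache presence-or-absence flag
     * plus the cache entry number giving the partition's storage destination. */
    typedef struct {
        bool     cached;
        uint16_t cache_entry;
    } tt_mgmt_entry;

    static tt_mgmt_entry tt_mgmt[NUM_PARTITIONS];
    static l2p_partition l2p_cache[CACHE_ENTRIES];  /* L2P translation table cache */

    static uint16_t load_partition_from_flash(uint32_t part)
    {
        (void)part;
        return 0;  /* stub: would read the partition from NAND into a victim slot */
    }

    /* Translate a logical page number to a physical address via the table cache. */
    static uint32_t translate(uint32_t lpn)
    {
        uint32_t part = lpn / PAGES_PER_PARTITION;
        if (!tt_mgmt[part].cached) {
            tt_mgmt[part].cache_entry = load_partition_from_flash(part);
            tt_mgmt[part].cached = true;
        }
        return l2p_cache[tt_mgmt[part].cache_entry].phys[lpn % PAGES_PER_PARTITION];
    }
    ```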
  • Publication number: 20120215987
    Abstract: A method for managing caches, including: broadcasting, by a first cache agent operatively connected to a first cache and using a first physical network, a first peer-to-peer (P2P) request for a memory address; issuing, by a second cache agent operatively connected to a second cache and using a second physical network, a first response to the first P2P request based on a type of the first P2P request and a state of a cacheline in the second cache corresponding to the memory address; issuing, by a third cache agent operatively connected to a third cache, a second response to the first P2P request; and upgrading, by the first cache agent and based on the first response and the second response, a state of a cacheline in the first cache corresponding to the memory address.
    Type: Application
    Filed: February 17, 2011
    Publication date: August 23, 2012
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventor: Paul N. Loewenstein
  • Publication number: 20120210041
    Abstract: An apparatus, system, and method are disclosed for caching data. A storage request module detects an input/output (“I/O”) request for a storage device cached by solid-state storage media of a cache. A direct mapping module references a single mapping structure to determine that the cache comprises data of the I/O request. The single mapping structure maps each logical block address of the storage device directly to a logical block address of the cache. The single mapping structure maintains a fully associative relationship between logical block addresses of the storage device and physical storage addresses on the solid-state storage media. A cache fulfillment module satisfies the I/O request using the cache in response to the direct mapping module determining that the cache comprises at least one data block of the I/O request.
    Type: Application
    Filed: August 12, 2011
    Publication date: August 16, 2012
    Applicant: FUSION-IO, INC.
    Inventors: David Flynn, David Atkisson, Joshua Aune
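    Sketch: the single mapping structure from the abstract above reduced to a flat array in C; the array stand-in, sizes, and names are assumptions of this sketch (the mapping to physical SSD locations is omitted here).
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define BACKING_LBAS (1u << 20)  /* size of the cached storage device (assumed) */
    #define NO_MAPPING   UINT32_MAX

    /* The single mapping structure: one entry per storage-device LBA giving the
     * cache address holding that block, or NO_MAPPING. One lookup answers both
     * "does the cache comprise this data?" and "where?", and any backing LBA may
     * map to any cache location (fully associative). */
    static uint32_t map[BACKING_LBAS];

    static void map_init(void)
    {
        for (uint32_t i = 0; i < BACKING_LBAS; i++)
            map[i] = NO_MAPPING;
    }

    /* Cache fulfillment: satisfy the I/O request from the cache on a hit. */
    static bool try_cache_fulfillment(uint32_t storage_lba, uint32_t *cache_addr)
    {
        if (map[storage_lba] == NO_MAPPING)
            return false;            /* miss: forward to the backing storage device */
        *cache_addr = map[storage_lba];
        return true;
    }
    ```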
  • Publication number: 20120210066
Abstract: A multi-level cache comprises a plurality of cache levels, each configured to cache I/O request data pertaining to I/O requests of a different respective type and/or granularity. The multi-level cache may comprise a file-level cache that is configured to cache I/O request data at a file-level of granularity. A file-level cache policy may comprise file selection criteria to distinguish cacheable files from non-cacheable files. The file-level cache may monitor I/O requests within a storage stack, and may service I/O requests from a cache device.
    Type: Application
    Filed: November 2, 2011
    Publication date: August 16, 2012
    Applicant: FUSION-IO, INC.
    Inventors: Vikram Joshi, Yang Luan, Michael F. Brown, Hrishikesh A. Vidwans
  • Publication number: 20120210047
Abstract: A method of managing a database system uses a swarm database system that communicates a request to read data to at least a subset of nodes. Each respective node in the subset of nodes checks an identifier to determine whether the requested read data is stored in that node, and provides the read data to the first node if it is.
    Type: Application
    Filed: December 16, 2011
    Publication date: August 16, 2012
    Inventors: Keith PETERS, Bryn Robert Dole, Michael Markson, Robert Michael Saliba, Rich Skrenta, Robert N. Truel, Gregory B. Lindahl
  • Publication number: 20120210068
    Abstract: A multi-level cache comprises a plurality of cache levels, each configured to cache I/O request data pertaining to I/O requests of a different respective type and/or granularity. A cache device manager may allocate cache storage space to each of the cache levels. Each cache level maintains respective cache metadata that associates I/O request data with respective cache address. The cache levels monitor I/O requests within a storage stack, apply selection criteria to identify cacheable I/O requests, and service cacheable I/O requests using the cache storage device.
    Type: Application
    Filed: November 2, 2011
    Publication date: August 16, 2012
    Applicant: FUSION-IO, INC.
    Inventors: Vikram Joshi, Yang Luan, Michael F. Brown, Hrishikesh A. Vidwans
  • Publication number: 20120210064
    Abstract: Various embodiments for managing data in a computing storage environment by a processor device are provided. In one such embodiment, by way of example only, an extender storage pool system is configured for at least one of a source and a target storage pool to expand an available storage capacity for the at least one of the source and the target storage pool. A most recent snapshot of the data is sent to the extender storage pool system. The most recent snapshot of the data is stored on the extender storage pool system as a last replicated snapshot of the data.
    Type: Application
    Filed: February 11, 2011
    Publication date: August 16, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Juan A. CORONADO, Christina A. LARA, Lisa R. MARTINEZ
  • Publication number: 20120210065
    Abstract: Techniques for managing memory in a multiprocessor architecture are presented. Each processor of the multiprocessor architecture includes its own local memory. When data is to be removed from a particular local memory or written to storage that data is transitioned to another local memory associated with a different processor of the multiprocessor architecture. If the data is then requested from the processor, which originally had the data, then the data is acquired from a local memory of the particular processor that received and now has the data.
    Type: Application
    Filed: February 14, 2011
    Publication date: August 16, 2012
    Inventor: Nikanth Karthikesan
  • Publication number: 20120210056
Abstract: A cache memory includes a CAM with an associativity of n (where n is a natural number) and an SRAM, and stores or reads out corresponding data when a tag address is specified by a CPU connected to the cache memory, the tag address being constituted by a first sub-tag address and a second sub-tag address. The cache memory classifies the data, according to the time at which a read request has been made, into at least a first generation which corresponds to a read request made at a recent time and a second generation which corresponds to a read request made at a time different from the recent time. The first sub-tag address is managed by the CAM. The second sub-tag address is managed by the SRAM. The cache memory allows a plurality of second sub-tag addresses to be associated with a same first sub-tag address.
    Type: Application
    Filed: October 19, 2010
    Publication date: August 16, 2012
    Applicant: THE UNIVERSITY OF ELECTRO-COMMUNICATIONS
    Inventors: Sho Okabe, Koki Abe
  • Publication number: 20120210067
Abstract: Provided is a mirroring device that needs no contention-control function dedicated to the restoring process and does not halt other access commands during restoration of mirroring. The mirroring device includes a pair of storage devices; a mirroring control unit that duplicates write data across the pair of storage devices when they are in a normal state and writes data to the available storage device in a reduced state; a cache unit that stores input and output data and rewrites target data to the pair of storage devices; and a mirroring recovery unit that, when recovering from the reduced state to the normal state, reads out data to the cache unit from the available storage device and restores duplication of the data by rewriting it to the pair of storage devices.
    Type: Application
    Filed: February 9, 2012
    Publication date: August 16, 2012
    Applicant: NEC Computertechno, Ltd.
    Inventor: Koji ABUMI
  • Patent number: 8244980
    Abstract: A method and apparatus for improving shared cache performance. In one embodiment, the present invention includes a cache having multiple ways. A locality tester measures a first locality of a first process and second locality of a second process. A first set of multiple ways stores the data used by the first process and a second set of multiple ways stores the data used by the second process, where the second set is a superset of the first set.
    Type: Grant
    Filed: June 21, 2006
    Date of Patent: August 14, 2012
    Assignee: Intel Corporation
    Inventor: Tryggve Fossum
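    Sketch: one way to realize in C the superset relationship the abstract above describes; the mask encoding and the 50% reuse threshold are invented for this sketch.
    ```c
    #include <stdint.h>

    #define NUM_WAYS 8

    /* Locality tester: counts a process's cache accesses and hits over a window. */
    typedef struct { unsigned hits, accesses; } locality_meter;

    /* A process may allocate only into ways whose mask bit is set; lookups still
     * probe all ways. A low-locality (streaming) process is confined to ways 0-1,
     * while a high-locality process may use all eight ways, a superset of the
     * first set, so streaming data cannot evict the whole shared cache. */
    static uint8_t way_mask_for(const locality_meter *m)
    {
        if (m->accesses == 0)
            return 0x03;                          /* unknown: assume low locality */
        unsigned reuse_pct = 100u * m->hits / m->accesses;
        return reuse_pct >= 50 ? 0xFF : 0x03;
    }
    ```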
  • Publication number: 20120203967
    Abstract: Processing within a multiprocessor computer system is facilitated by: deciding by a processor, pursuant to processing of a request to update a previous storage key to a new storage key, whether to purge the previous storage key from, or update the previous storage key in, local processor cache of the multiprocessor computer system. The deciding includes comparing a bit value(s) of one or more required components of the previous storage key to respective predefined allowed stale value(s) for the required component(s), and leaving the previous storage key in local processor cache if the bit value(s) of the required component(s) in the previous storage key equals the respective predefined allowed stale value(s) for the required component(s). By selectively leaving the previous storage key in local processor cache, interprocessor communication pursuant to processing of the request to update the previous storage key to the new storage key is minimized.
    Type: Application
    Filed: April 16, 2012
    Publication date: August 9, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Gary A. WOFFINDEN
  • Publication number: 20120203960
    Abstract: In some embodiments, a non-volatile cache memory may include a multi-level non-volatile cache memory configured to be located between a system memory and a mass storage device of an electronic system and a controller coupled to the multi-level non-volatile cache memory, wherein the controller is configured to control utilization of the multi-level non-volatile cache memory. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: April 19, 2012
    Publication date: August 9, 2012
    Inventors: R. Scott Tetrick, Dale Juenemann, Robert Brennan
  • Patent number: 8239630
Abstract: Optimizing the cache-resident area where cache residence control in units of LUs is employed in a storage apparatus that virtualizes capacity, by acquiring only a cache area of the same size as the physical capacity assigned to the LU. An LU that is a logical space resident in cache memory is configured as a set of pages obtained by dividing a pool volume, a physical space created using a plurality of storage devices, into a predetermined size. When an LU to be resident in the cache memory is created, a capacity corresponding to the size of the LU is not acquired in the cache memory up front; instead, each time a page is newly allocated, a cache capacity equal to the physical capacity allocated to the new page is acquired in the cache memory, and the new page is made resident in the cache memory.
    Type: Grant
    Filed: June 2, 2011
    Date of Patent: August 7, 2012
    Assignee: Hitachi, Ltd.
    Inventor: Hideyuki Koseki
  • Publication number: 20120198173
Abstract: According to one embodiment, a router manages routing of a packet transferred between a plurality of cores and at least one cache memory that the cores can access. The router includes an analyzer, a packet memory, and a controller. The analyzer determines whether the packet is a read-packet or a write-packet. The packet memory stores at least part of the write-packet issued by one of the cores. The controller stores cache data of the write-packet and a cache address in the packet memory when the analyzer determines that the packet is the write-packet. The cache address indicates an address in which the cache data is stored. The controller outputs the cache data stored in the packet memory to the core issuing a read-request as response data corresponding to the read-packet when the analyzer determines that the packet is the read-packet and the cache address corresponding to the read-request is stored in the packet memory.
    Type: Application
    Filed: March 21, 2011
    Publication date: August 2, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventor: Hui Xu
  • Publication number: 20120198158
    Abstract: A cache memory including: a plurality of parallel input ports configured to receive, in parallel, memory access requests wherein each parallel input port is operable to receive a memory access request for any one of a plurality of processing units; and a plurality of cache blocks wherein each cache block is configured to receive memory access requests from a unique one of the plurality of input ports such that there is a one-to-one mapping between the plurality of parallel input ports and the plurality of cache blocks and wherein each of the plurality of cache blocks is configured to serve a unique portion of an address space of the memory.
    Type: Application
    Filed: September 17, 2009
    Publication date: August 2, 2012
    Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
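    Sketch: the port-to-block routing from the abstract above in C, assuming a contiguous partition of the address space; interleaving on low-order bits would serve equally well.
    ```c
    #include <stdint.h>

    #define NUM_CACHE_BLOCKS 4  /* equals the number of parallel input ports */

    /* Each cache block serves a unique portion of the memory address space, and
     * each input port feeds exactly one block, so a processing unit sends its
     * request to the port owning the target address and ports never contend. */
    static unsigned block_for_address(uint32_t addr, uint32_t address_space_size)
    {
        uint32_t slice = address_space_size / NUM_CACHE_BLOCKS;
        return addr / slice;
    }
    ```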
  • Publication number: 20120198156
Abstract: A data processor is disclosed that definitively determines that an effective address being calculated and decoded will be associated with an address range that includes a memory local to a data processor unit, and that disables a cache access based upon a comparison between a portion of a base address and a corresponding portion of an effective address input operand. Access to the local memory can be accomplished through a first port of the local memory when it is definitively determined that the effective address will be associated with the address range. Access to the local memory cannot be accomplished through the first port of the local memory when it is not definitively determined that the effective address will be associated with the address range.
    Type: Application
    Filed: January 28, 2011
    Publication date: August 2, 2012
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventor: William C. Moyer
  • Publication number: 20120198157
    Abstract: A method for translating instructions for a processor. The method includes accessing a plurality of guest instructions that comprise multiple guest branch instructions, and assembling the plurality of guest instructions into a guest instruction block. The guest instruction block is converted into a corresponding native conversion block. The native conversion block is stored into a native cache. A mapping of the guest instruction block to corresponding native conversion block is stored in a conversion look aside buffer. Upon a subsequent request for a guest instruction, the conversion look aside buffer is indexed to determine whether a hit occurred, wherein the mapping indicates whether the guest instruction has a corresponding converted native instruction in the native cache. The converted native instruction is forwarded for execution in response to the hit.
    Type: Application
    Filed: January 27, 2012
    Publication date: August 2, 2012
    Applicant: SOFT MACHINES, INC.
    Inventor: Mohammad Abdallah
  • Publication number: 20120198122
    Abstract: A method for managing mappings of storage on a code cache for a processor. The method includes storing a plurality of guest address to native address mappings as entries in a conversion look aside buffer, wherein the entries indicate guest addresses that have corresponding converted native addresses stored within a code cache memory, and receiving a subsequent request for a guest address at the conversion look aside buffer. The conversion look aside buffer is indexed to determine whether there exists an entry that corresponds to the index, wherein the index comprises a tag and an offset that is used to identify the entry that corresponds to the index. Upon a hit on the tag, the corresponding entry is accessed to retrieve a pointer to the code cache memory corresponding block of converted native instructions. The corresponding block of converted native instructions are fetched from the code cache memory for execution.
    Type: Application
    Filed: January 27, 2012
    Publication date: August 2, 2012
    Applicant: SOFT MACHINES, INC.
    Inventor: Mohammad Abdallah
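    Sketch: a direct-mapped conversion look aside buffer lookup in C, following the tag-and-offset indexing described in the abstract above; the tag/offset split and entry layout are assumptions of this sketch.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define CLB_SETS 256

    typedef struct {
        bool     valid;
        uint32_t tag;           /* tag portion of the guest address */
        void    *native_block;  /* pointer into the code cache memory */
    } clb_entry;

    static clb_entry clb[CLB_SETS];

    /* Index with the offset bits, compare the tag bits; a hit yields the pointer
     * to the corresponding block of converted native instructions. */
    static void *clb_lookup(uint32_t guest_addr)
    {
        uint32_t index = guest_addr % CLB_SETS;
        uint32_t tag   = guest_addr / CLB_SETS;
        if (clb[index].valid && clb[index].tag == tag)
            return clb[index].native_block;  /* fetch from the code cache */
        return NULL;  /* miss: invoke the guest-to-native converter */
    }
    ```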
  • Publication number: 20120198159
Abstract: An information processing device of the invention includes a measurement section which detects changes in the uses of a built-in memory and an external memory, and a control section which monitors the measurement result from the measurement section, changes the configuration of the built-in memory, transfers the data stored in the built-in memory and the external memory, and changes the external memory area and the built-in memory area used by the CPU and other bus master devices. This makes it possible to detect changes in memory utilization efficiency that cannot be predicted by static analysis, and to maintain an optimal memory configuration.
    Type: Application
    Filed: October 13, 2010
    Publication date: August 2, 2012
    Applicant: PANASONIC CORPORATION
    Inventors: Kunio Fujikawa, Tomohide Uchimi
  • Publication number: 20120198169
    Abstract: Mechanisms are provided for dynamically rewriting branch instructions in a portion of code. The mechanisms execute a branch instruction in the portion of code. The mechanisms determine if a target instruction of the branch instruction, to which the branch instruction branches, is present in an instruction cache associated with the processor. Moreover, the mechanisms directly branch execution of the portion of code to the target instruction in the instruction cache, without intervention from an instruction cache runtime system, in response to a determination that the target instruction is present in the instruction cache. In addition, the mechanisms redirect execution of the portion of code to the instruction cache runtime system in response to a determination that the target instruction cannot be determined to be present in the instruction cache.
    Type: Application
    Filed: April 10, 2012
    Publication date: August 2, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tong Chen, Brian Flachs, Brad W. Michael, Mark R. Nutter, John K.P. O'Brien, Kathryn M. O'Brien, Tao Zhang
  • Patent number: 8234452
    Abstract: A device and a method for fetching instructions. The device includes a processor adapted to execute instructions; a high level memory unit adapted to store instructions; a direct memory access (DMA) controller that is controlled by the processor; an instruction cache that includes a first input port and a second input port; wherein the instruction cache is adapted to provide instructions to the processor in response to read requests that are generated by the processor and received via the first input port; wherein the instruction cache is further adapted to fetch instructions from a high level memory unit in response to read requests, generated by the DMA controller and received via the second input port.
    Type: Grant
    Filed: November 30, 2006
    Date of Patent: July 31, 2012
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Ron Bercovich, Odi Dahan, Norman Goldstein, Yuval Kfir
  • Publication number: 20120191910
    Abstract: A processing circuit includes a processing unit and a data buffer. When the processing unit receives a load instruction and determines that the load instruction has a load-use condition, the processing unit stores specific data into the data buffer, where the specific data is loaded by executing the load instruction.
    Type: Application
    Filed: January 9, 2012
    Publication date: July 26, 2012
    Inventors: Yen-Ju Lu, Chao-Wei Huang
  • Publication number: 20120191911
Abstract: A system and method for increasing cache size is provided. Generally, the system contains a memory and a processor. The processor is configured by the memory to perform the steps of: categorizing storage blocks within a storage device as within a first category of storage blocks if the storage blocks are available to the system for storing data when needed; categorizing storage blocks within the storage device as within a second category of storage blocks if the storage blocks contain application data therein; and categorizing storage blocks within the storage device as within a third category of storage blocks if the storage blocks are storing cached data and are available for storing application data if no first-category storage blocks are available to the system.
    Type: Application
    Filed: February 1, 2012
    Publication date: July 26, 2012
    Applicant: HOLA NETWORKS LTD.
    Inventors: Derry Shribman, Ofer Vilenski
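    Sketch: the three categories and the allocation preference from the abstract above in C; names and the linear scans are illustrative only.
    ```c
    /* First category: free blocks; second: blocks holding application data;
     * third: blocks holding cached data, reclaimable for application data when
     * no free blocks remain, so the cache grows into otherwise unused space. */
    typedef enum { BLK_FREE, BLK_APP_DATA, BLK_CACHE } block_category;

    #define NUM_STORAGE_BLOCKS 4096
    static block_category blocks[NUM_STORAGE_BLOCKS];

    static int alloc_app_block(void)
    {
        for (int i = 0; i < NUM_STORAGE_BLOCKS; i++)
            if (blocks[i] == BLK_FREE) { blocks[i] = BLK_APP_DATA; return i; }
        for (int i = 0; i < NUM_STORAGE_BLOCKS; i++)
            if (blocks[i] == BLK_CACHE) { blocks[i] = BLK_APP_DATA; return i; }
        return -1;  /* no space left for application data */
    }
    ```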
  • Publication number: 20120191982
    Abstract: Embodiments in accordance with the invention utilize the cryptographic transformation function of an SP processor to encrypt data at rest. The use of the primary processor-based cryptographic transformation function is preferable to use of an auxiliary cryptographic processor because the transformation occurs directly, and thus can be faster and more cost effective.
    Type: Application
    Filed: December 5, 2008
    Publication date: July 26, 2012
    Inventor: Timothy Evert LEVIN
  • Publication number: 20120185648
    Abstract: Exemplary method, system, and computer program embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap.
    Type: Application
    Filed: January 14, 2011
    Publication date: July 19, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. BENHASE, Lokesh M. GUPTA, Carol S. MELLGREN, Alfred E. SANCHEZ
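    Sketch: the cold-segment sweep from the abstract above in C; the activity counter, threshold, and stubbed compress/migrate steps are assumptions of this sketch.
    ```c
    #include <stdint.h>

    #define NUM_SEGMENTS   1024
    #define COLD_THRESHOLD 5  /* accesses per window below this = colder (assumed) */

    typedef struct { uint32_t activity; } segment;

    static segment  segments[NUM_SEGMENTS];
    static uint64_t compressed_bitmap[NUM_SEGMENTS / 64];  /* compression data segment bitmap */

    static void compress_segment(int i) { (void)i; }  /* stub */
    static void migrate_to_chunk(int i) { (void)i; }  /* stub: assigned chunk of storage */

    /* Identify colder segments, compress them, migrate them to the chunk of
     * storage, and maintain their status in the bitmap. */
    static void sweep(void)
    {
        for (int i = 0; i < NUM_SEGMENTS; i++) {
            if (segments[i].activity < COLD_THRESHOLD) {
                compress_segment(i);
                migrate_to_chunk(i);
                compressed_bitmap[i / 64] |= 1ull << (i % 64);
            }
            segments[i].activity = 0;  /* start a new observation window */
        }
    }
    ```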
  • Publication number: 20120185649
    Abstract: A method for optimizing a plurality of volume records stored in cache may include monitoring a volume including multiple data sets, wherein each data set is associated with a volume record, and each volume record is stored in a volume record data set. The method may include tracking read and write operations to each of the data sets over a period of time. The method may further include reorganizing the volume records in the volume record data set such that volume records for data sets with a larger number of read operations relative to write operations are grouped together, and volume records for data sets with a smaller number of read operations relative to write operation are grouped together. A corresponding apparatus and computer program product are also disclosed.
    Type: Application
    Filed: March 26, 2012
    Publication date: July 19, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Philip R. Chauvet, David C. Reed, Michael R. Scott, Max D. Smith
  • Publication number: 20120179877
    Abstract: The present invention employs three decoupled hardware mechanisms: read and write signatures, which summarize per-thread access sets; per-thread conflict summary tables, which identify the threads with which conflicts have occurred; and a lazy versioning mechanism, which maintains the speculative updates in the local cache and employs a thread-private buffer (in virtual memory) only in the rare event of an overflow. The conflict summary tables allow lazy conflict management to occur locally, with no global arbitration (they also support eager management). All three mechanisms are kept software-accessible, to enable virtualization and to support transactions of arbitrary length.
    Type: Application
    Filed: March 16, 2012
    Publication date: July 12, 2012
    Applicant: University of Rochester, Office of Technology Transfer
    Inventors: Arrvindh SHRIRAMAN, Sandhya DWARKADAS, Michael SCOTT
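    Sketch: read/write signatures modeled as Bloom-style filters in C; the hash and 64-bit width are placeholders for the patent's hardware registers.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    /* A signature summarizes a thread's access set; intersecting two signatures
     * tests conservatively for a conflict (false positives possible, false
     * negatives not). On a hit, the conflicting thread's ID would be recorded in
     * the per-thread conflict summary table for local, lazy management. */
    typedef struct { uint64_t read_sig, write_sig; } thread_sigs;

    static uint64_t sig_bit(uint64_t addr) { return 1ull << ((addr >> 6) % 64); }

    static void record_read(thread_sigs *t, uint64_t a)  { t->read_sig  |= sig_bit(a); }
    static void record_write(thread_sigs *t, uint64_t a) { t->write_sig |= sig_bit(a); }

    static bool may_conflict(const thread_sigs *a, const thread_sigs *b)
    {
        return (a->write_sig & (b->read_sig | b->write_sig)) ||
               (b->write_sig & a->read_sig);
    }
    ```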
  • Publication number: 20120179876
    Abstract: A method of processing store requests in a data processing system includes enqueuing a store request in a store queue of a cache memory of the data processing system. The store request identifies a target memory block by a target address and specifies store data. While the store request and a barrier request older than the store request are enqueued in the store queue, a read-claim machine of the cache memory is dispatched to acquire coherence ownership of target memory block of the store request. After coherence ownership of the target memory block is acquired and the barrier request has been retired from the store queue, a cache array of the cache memory is updated with the store data.
    Type: Application
    Filed: January 6, 2011
    Publication date: July 12, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guy L. Guthrie, William J. Starke, Derek E. Williams
  • Publication number: 20120179853
    Abstract: The present disclosure includes devices, systems, and methods for memory address translation. One or more embodiments include a memory array and a controller coupled to the array. The array includes a first table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a data segment stored in the array and a logical address. The controller includes a second table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the first table and a logical address. The controller also includes a third table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the second table and a logical address.
    Type: Application
    Filed: January 6, 2011
    Publication date: July 12, 2012
    Applicant: MICRON TECHNOLOGY, INC.
    Inventors: Troy A. Manning, Martin L. Culley, Troy D. Larsen
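    Sketch: the three chained tables from the abstract above and a walk through them in C; table sizes and the 8-bit index split are assumptions of this sketch.
    ```c
    #include <stdint.h>

    #define ENTRIES 256

    typedef struct { uint32_t phys[ENTRIES]; }       first_table_record;  /* in the memory array */
    typedef struct { uint32_t first_rec[ENTRIES]; }  second_table_record; /* in the controller */
    typedef struct { uint32_t second_rec[ENTRIES]; } third_table_t;       /* in the controller */

    static first_table_record  first_table[4];   /* record counts arbitrary here */
    static second_table_record second_table[4];
    static third_table_t       third_table;

    /* Each entry pairs a logical address with the physical address of the
     * next-level record, or, at the first level, of the data segment itself. */
    static uint32_t resolve(uint32_t logical)
    {
        uint32_t i3 = (logical >> 16) & 0xFF;
        uint32_t i2 = (logical >> 8)  & 0xFF;
        uint32_t i1 = logical & 0xFF;
        const second_table_record *s = &second_table[third_table.second_rec[i3]];
        const first_table_record  *f = &first_table[s->first_rec[i2]];
        return f->phys[i1];  /* physical address of the data segment */
    }
    ```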
  • Patent number: 8219755
    Abstract: In one embodiment, a cache comprises a tag memory and a comparator. The tag memory is configured to store tags of cache blocks stored in the cache, and is configured to output at least one tag responsive to an index corresponding to an input address. The comparator is coupled to receive the tag and a tag portion of the input address, and is configured to compare the tag to the tag portion to generate a hit/miss indication. The comparator comprises dynamic circuitry, and is coupled to receive a control signal which, when asserted, is defined to force a first result on the hit/miss indication independent of whether or not the tag portion matches the tag. The comparator also comprises circuitry coupled to receive the control signal and configured to inhibit a state change on an output of the dynamic circuitry during an evaluate phase of the dynamic circuitry to produce the first result responsive to an assertion of the control signal.
    Type: Grant
    Filed: August 1, 2011
    Date of Patent: July 10, 2012
    Assignee: Apple Inc.
    Inventor: Brian J. Campbell
  • Patent number: 8219754
Abstract: Improved thrashing-aware and self-configuring cache architectures for a DSP that reduce cache thrashing without increasing cache size or degrading cache-hit access time. In one example embodiment, this is accomplished by selectively caching only the instructions having a higher probability of recurrence, which considerably reduces cache thrashing.
    Type: Grant
    Filed: July 13, 2010
    Date of Patent: July 10, 2012
    Assignee: Analog Devices, Inc.
    Inventors: Tushar P. Ringe, Abhijit Giri
  • Patent number: 8219752
    Abstract: A system for caching data in a distributed data processing system allows for the caching of user-modifiable data (as well as other types of data) across one or multiple entities in a manner that prevents stale data from being improperly used.
    Type: Grant
    Filed: March 31, 2008
    Date of Patent: July 10, 2012
    Assignee: Amazon Technologies, Inc.
    Inventors: Jonathan A. Jenkins, Mark S. Baumback, Ryan J. Snodgrass
  • Publication number: 20120173823
    Abstract: In an embodiment of the invention, a method for data profiling incorporating an enterprise service bus (ESB) coupling the target and source systems following an extraction, transformation, and loading (ETL) process for a target system and a source system is provided. The method includes receiving baseline data profiling results obtained during ETL from a source application to a target application, caching the updates, determining current data profiling results within the ESB for cached updates, and triggering an action if a threshold disparity is detected upon the current data profiling results and the baseline data profiling results.
    Type: Application
    Filed: February 28, 2012
    Publication date: July 5, 2012
    Applicant: International Business Machines Corporation
    Inventors: Sebastian Nelke, Martin Oberhofer, Yannick Saillet, Jens Seifert
  • Publication number: 20120173790
    Abstract: Embodiments of the invention relate to a storage system cache with flash memory units organized in a RAID configuration. An aspect of the invention includes a storage system comprising a storage system cache with flash memory in a RAID configuration. The storage cache comprises flash memory units organized in an array configuration. Each of the flash memory units comprises flash memory devices and a flash unit controller. Each flash unit controller manages data access and data operations for its corresponding flash memory devices. The storage system further includes an array controller, coupled to the flash memory units, and that manages data access and data operations for the flash memory units and organizes data as full array stripes. The storage system further includes a primary storage device, which is coupled to the array controller, and stores data for the storage system.
    Type: Application
    Filed: December 29, 2010
    Publication date: July 5, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: STEVEN R. HETZLER, DANIEL F. SMITH
  • Publication number: 20120173818
    Abstract: A cache memory includes a data array that stores memory blocks, a directory of contents of the data array, and a cache controller that controls access to the data array. The cache controller includes an address conflict detection system having a set-associative array configured to store at least tags of memory addresses of in-flight memory access transactions. The address conflict detection system accesses the set-associative array to detect if a target address of an incoming memory access transaction conflicts with that of an in-flight memory access transaction and determines whether to allow the incoming transaction memory access transaction to proceed based upon the detection.
    Type: Application
    Filed: January 5, 2011
    Publication date: July 5, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Andrew K. Martin
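    Sketch: the in-flight conflict check from the abstract above in C; set/way sizes and the stall policy are assumptions of this sketch.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define SETS 64
    #define WAYS 4

    /* Set-associative array holding tags of in-flight memory access transactions. */
    typedef struct { bool valid; uint32_t tag; } inflight_entry;

    static inflight_entry inflight[SETS][WAYS];

    /* An incoming transaction proceeds only if its target address does not
     * conflict with an in-flight transaction; otherwise it stalls or queues. */
    static bool conflicts(uint32_t line_addr)
    {
        uint32_t set = line_addr % SETS, tag = line_addr / SETS;
        for (int w = 0; w < WAYS; w++)
            if (inflight[set][w].valid && inflight[set][w].tag == tag)
                return true;
        return false;
    }

    /* Record a transaction that was allowed to proceed (cleared on completion). */
    static bool track(uint32_t line_addr)
    {
        uint32_t set = line_addr % SETS, tag = line_addr / SETS;
        for (int w = 0; w < WAYS; w++)
            if (!inflight[set][w].valid) {
                inflight[set][w] = (inflight_entry){ true, tag };
                return true;
            }
        return false;  /* set full: the newcomer must also wait */
    }
    ```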
  • Publication number: 20120166721
Abstract: Provided are a semiconductor integrated circuit device, a method of controlling a semiconductor integrated circuit device, and a cache device capable of efficiently implementing power saving. The cache device includes a cache (200) enabling low-voltage operation and a small-area cache (300) of a type different from that of the cache (200), the two caches being independently supplied with source voltage. The cache (200) is operable at a voltage lower than the lower-limit voltage at which the cache (300) is operable. A cache control unit (400) switches between a first mode in which only the cache (200) operates and a second mode in which the cache (200) or the cache (300) operates; in the first mode, the cache (200) is supplied a voltage below the lower-limit voltage at which the cache (300) is operable, while power supply to the cache (300) is interrupted.
    Type: Application
    Filed: August 18, 2010
    Publication date: June 28, 2012
    Inventor: Hiroaki Inoue
  • Publication number: 20120166705
    Abstract: Virtual machines are managed by obtaining software hierarchy information of a current virtual machine to be installed. Then logical memory assigned to the current virtual machine is divided into a private part and a shared part based at least in part upon existing software hierarchy information of at least one virtual machine already installed and the software hierarchy information of the current virtual machine. Then, the shared part of the logical memory is mapped to shared segments of a physical memory, wherein the shared segments are used by at least one installed virtual machine.
    Type: Application
    Filed: December 20, 2011
    Publication date: June 28, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Jun Dai, Zhi Gan, Rui Bo Han, Xian Liu
  • Patent number: 8209493
    Abstract: Embodiments of the invention are generally directed to systems, methods, and apparatuses for improving power/performance tradeoffs associated with multi-core memory thermal throttling algorithms. In some embodiments, the priority of shared resource allocation is changed on one or more points in a system, while the system is in dynamic random access memory (DRAM) throttling mode. This may enable the forward progress of cache bound workloads while still throttling DRAM for memory bound workloads.
    Type: Grant
    Filed: March 26, 2008
    Date of Patent: June 26, 2012
    Assignee: Intel Corporation
    Inventor: Hemant G. Rotithor
  • Publication number: 20120159084
    Abstract: A method is provided for identifying a first portion of a computer program for speculative execution by a first processor element. At least one memory object is declared as being protected during the speculative execution. Thereafter, if a first signal is received indicating that the at least one protected memory object is to be accessed by a second processor element, then delivery of the first signal is delayed for a preselected duration of time to potentially allow the speculative execution to complete. The speculative execution of the first portion of the computer program may be aborted in response to receiving the delayed first signal before the speculative execution of the first portion of the computer program has been completed.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Inventors: MARTIN T. POHLACK, Michael P. Hohmuth, Stephan Diestelhorst, David S. Christie, JaeWoong Chung
  • Publication number: 20120159081
    Abstract: An access request that includes a combination of a file identifier and an offset value is received. If the page cache does not contain the page indexed by the combination, then the file system is accessed and the offset value is mapped to a disk location. The file system can access a block map to identify the location. A table (e.g., a shared location table) that includes entries (e.g., locations) for pages that are shared by multiple files is accessed. If the aforementioned disk location is in the table, then the requested page is in the page cache and it is not necessary to add the page to the page cache. Otherwise, the page is added to the page cache.
    Type: Application
    Filed: December 15, 2010
    Publication date: June 21, 2012
    Applicant: SYMANTEC CORPORATION
    Inventors: Mukund Agrawal, Shriram Wankhade
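    Sketch: the lookup order from the abstract above in C with stubbed page cache and block map; all function names and the hash-table organization are assumptions of this sketch.
    ```c
    #include <stdint.h>
    #include <stddef.h>

    typedef struct { uint64_t disk_loc; void *page; } shared_entry;

    #define SHARED_SLOTS 1024
    static shared_entry shared_table[SHARED_SLOTS];  /* shared location table */

    /* Stand-ins for the page cache and the file system's block map. */
    static void *page_cache_get(uint32_t file_id, uint64_t off) { (void)file_id; (void)off; return NULL; }
    static uint64_t block_map_resolve(uint32_t file_id, uint64_t off) { (void)file_id; return off / 4096; }
    static void *read_from_disk(uint64_t loc) { (void)loc; return NULL; }

    /* Pages shared by multiple files are tracked by disk location, so a miss on
     * (file, offset) may still be a hit via another file sharing the block. */
    static void *shared_lookup(uint64_t loc)
    {
        shared_entry *e = &shared_table[loc % SHARED_SLOTS];
        return (e->page && e->disk_loc == loc) ? e->page : NULL;
    }

    static void *get_page(uint32_t file_id, uint64_t offset)
    {
        void *p = page_cache_get(file_id, offset);  /* indexed by (file, offset) */
        if (p) return p;
        uint64_t loc = block_map_resolve(file_id, offset);
        p = shared_lookup(loc);              /* already in the page cache via a sharer */
        return p ? p : read_from_disk(loc);  /* true miss: add to the page cache */
    }
    ```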
  • Publication number: 20120159082
    Abstract: Methods and apparatuses are disclosed for direct access to cache memory. Embodiments include receiving, by a direct access manager that is coupled to a cache controller for a cache memory, a region scope zero command describing a region scope zero operation to be performed on the cache memory; in response to receiving the region scope zero command, generating a direct memory access region scope zero command, the direct memory access region scope zero command having an operation code and an identification of the physical addresses of the cache memory on which the operation is to be performed; sending the direct memory access region scope zero command to the cache controller for the cache memory; and performing, by the cache controller, the direct memory access region scope zero operation in dependence upon the operation code and the identification of the physical addresses of the cache memory.
    Type: Application
    Filed: December 16, 2010
    Publication date: June 21, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason A. Cox, Omer Heymann, Nadav Levison, Kevin C. Lin, Eric F. Robinson
  • Publication number: 20120159068
    Abstract: The storage system includes a disk controller for receiving write commands from a computer, and a plurality of disk devices in which data is written in accordance with the control of the disk controller. The size of the first block which constitutes the data unit handled in the execution of the input/output processing of the data in accordance with the write command by the disk controller is different from the size of the second block which constitutes the data unit handled in the execution of the input/output processing of data by the plurality of disk devices. The disk controller issues an instruction for the writing of data to the disk devices using a third block unit of a size corresponding to a common multiple of the size of the first block and the size of the second block.
    Type: Application
    Filed: February 28, 2012
    Publication date: June 21, 2012
    Inventors: Ikuya YAGISAWA, Naoto Matsunami
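    Sketch: computing the third block unit from the abstract above as a least common multiple in C; the 520/512-byte sizes are illustrative, not from the patent.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t gcd(uint64_t a, uint64_t b)
    {
        while (b) { uint64_t t = a % b; a = b; b = t; }
        return a;
    }

    static uint64_t lcm(uint64_t a, uint64_t b) { return a / gcd(a, b) * b; }

    int main(void)
    {
        uint64_t first_block  = 520;  /* controller's I/O unit, e.g. 512B + 8B protection info */
        uint64_t second_block = 512;  /* disk devices' I/O unit */
        /* Writing in units sized to a common multiple keeps every write aligned
         * for both sides, avoiding read-modify-write at either block boundary. */
        printf("third block unit: %llu bytes\n",
               (unsigned long long)lcm(first_block, second_block));  /* 33280 */
        return 0;
    }
    ```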
  • Publication number: 20120159124
    Abstract: A computer-implemented method and a system for computational acceleration of seismic data processing are described. The method includes defining a specific non-uniform memory access (NUMA) scheduling for a plurality of cores in a processor according to data to be processed; and running two or more threads through each of the plurality of cores.
    Type: Application
    Filed: December 15, 2010
    Publication date: June 21, 2012
    Inventors: Chaoshun Hu, Yue Wang, Tamas Nemeth
  • Publication number: 20120159085
    Abstract: A method for validating an eligibility for verification of a memory device within an embedded demand paged memory operating system environment is provided. The method includes receiving a request from an application being executed by a processor coupled to the memory device, the request to utilize at least one memory location. The method includes identifying, by the processor, at least one memory block corresponding to at least one memory location within the memory device, determining, by the processor, whether the at least one memory block is eligible for verification, and producing an eligibility result based on the determination by the processor. A system for validating an eligibility for verifying memory device integrity is also disclosed.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Inventors: Timothy Steven Potter, Donald Becker, Bruce Montgomery, JR., Dave Dopson