Partitioned Cache Patents (Class 711/129)
-
Patent number: 7877547
Abstract: A system, method and circuit for efficiently managing a cache storage device. A cache storage device may include a cache management module. The cache management module may be adapted to generate a management unit and to associate the management unit with new data that is to be written into the cache. The cache management module may be further adapted to assign two or more allocation units for each management unit, to store the new data in the cache. A cache management module may include a management unit module. The management unit module may be adapted to generate management units associated with predefined global cache management functions. The cache management module may further include an allocation unit module in communication with the management unit module. The allocation unit module may be adapted to assign allocation units for storing data written into the cache.
Type: Grant
Filed: May 6, 2005
Date of Patent: January 25, 2011
Assignee: International Business Machines Corporation
Inventors: Ofir Zohar, Yaron Revah, Haim Helman, Dror Cohen, Shemer Schwartz
-
Patent number: 7877548
Abstract: Buffer memories having hardware-controlled buffer space regions in which the hardware controls the dimensions of the various buffer space regions to meet the demands of a particular system. The hardware monitors the usage of the buffer data regions over time and subsequently and automatically adjusts the dimensions of the buffer space regions based on the utilization of those buffer regions.
Type: Grant
Filed: January 30, 2008
Date of Patent: January 25, 2011
Assignee: International Business Machines Corporation
Inventor: Robert A. Shearer
-
Patent number: 7873790
Abstract: The present invention concerns a storage method and system (1) comprising processing means (11) and storage resources (20, 100) containing firstly storage means (20) including at least one physical library (P201 to P20n) and secondly memory means (100) called a cache (100), in which the processing means (11) of the storage system (1), vis-à-vis the computer platforms (101 to 10n), emulate at least one virtual library (V201 to V20n) from at least one physical library (P201 to P20n) which the storage system has under its control, characterized in that the processing means (11) of the storage system (1) comprise a management module (30) responsible for emulation and for managing priorities over time for accesses to the storage resources (20, 100), using the results of calculations of at least one cache activity index per determined period of time and of at least one cache occupancy rate at a given time.
Type: Grant
Filed: October 3, 2007
Date of Patent: January 18, 2011
Assignee: Bull S.A.S.
Inventors: Jean-Louis Bouchou, Christian Dejon
-
Publication number: 20110010504
Abstract: In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
Type: Application
Filed: July 10, 2009
Publication date: January 13, 2011
Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
-
Publication number: 20100332761
Abstract: A mechanism is provided for an improved reconfigurable cache. The mechanism partitions a large cache into inclusive cache regions with equal-ratio sizes or other coarse size increments. The cache controller includes an address decoder for the large cache with a large routing structure, and an additional address decoder for the small cache with a smaller routing structure. The additional address decoder for the small cache reduces decode, array access, and data return latencies. When only a small cache is actively in use, the rest of the cache can be placed into a low-power mode to save power.
Type: Application
Filed: June 26, 2009
Publication date: December 30, 2010
Applicant: International Business Machines Corporation
Inventor: Jian Li
-
Publication number: 20100332762
Abstract: Methods and apparatus relating to directory cache allocation that is based on snoop response information are described. In one embodiment, an entry in a directory cache may be allocated for an address in response to a determination that another caching agent has a copy of the data corresponding to the address. Other embodiments are also disclosed.
Type: Application
Filed: June 30, 2009
Publication date: December 30, 2010
Inventors: Adrian C. Moga, Malcolm H. Mandviwalla, Stephen R. Van Doren
-
Patent number: 7861055
Abstract: Aspects of a method and system for an on-chip configurable data RAM for fast memory and pseudo-associative caches are provided. Memory banks of configurable data RAM integrated within a chip may be configured to operate as fast on-chip memory or as on-chip level 2 cache memory. The set associativity of the on-chip level 2 cache memory may be the same after configuring the memory banks as prior to the configuring. The configuring may occur during initialization of the memory banks, and may adjust the amount of on-chip level 2 cache. The memory banks configured to operate as on-chip level 2 cache memory or as fast on-chip memory may be dynamically enabled by a memory address.
Type: Grant
Filed: September 16, 2005
Date of Patent: December 28, 2010
Assignee: Broadcom Corporation
Inventor: Fong Pong
-
Patent number: 7856633
Abstract: A method of partitioning a memory resource, associated with a multi-threaded processor, includes defining the memory resource to include first and second portions that are dedicated to the first and second threads respectively. A third portion of the memory resource is then designated as being shared between the first and second threads. Upon receipt of an information item (e.g., a microinstruction associated with the first thread and to be stored in the memory resource), a history of Least Recently Used (LRU) portions is examined to identify a location in either the first or the third portion, but not the second portion, as being a least recently used portion. The second portion is excluded from this examination on account of being dedicated to the second thread. The information item is then stored within a location, within either the first or the third portion, identified as having been least recently used.
Type: Grant
Filed: March 24, 2000
Date of Patent: December 21, 2010
Assignee: Intel Corporation
Inventors: Chan W. Lee, Glenn Hinton, Robert Krick
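The victim-selection scheme this abstract describes can be modeled in a few lines of Python. The sketch below is illustrative only (slot counts, the tick-based LRU bookkeeping, and the class shape are all invented, not the patented hardware): each thread owns a dedicated region plus access to the shared region, and the LRU search for one thread never considers the other thread's dedicated slots.

```python
class PartitionedStore:
    """Toy model of a per-thread-partitioned LRU resource.

    With the defaults: slots 0-3 are dedicated to thread 0, slots 4-7
    to thread 1, and slots 8-11 are shared by both threads.
    """
    def __init__(self, dedicated=4, shared=4, threads=2):
        self.regions = {t: list(range(t * dedicated, (t + 1) * dedicated))
                        for t in range(threads)}
        base = threads * dedicated
        self.shared = list(range(base, base + shared))
        self.last_used = {}   # slot -> logical timestamp (-1 means never used)
        self.tick = 0

    def touch(self, slot):
        self.tick += 1
        self.last_used[slot] = self.tick

    def store(self, thread, item):
        # Victim search scans the requesting thread's dedicated slots plus
        # the shared slots; the other thread's dedicated slots are excluded,
        # exactly as the abstract's LRU examination excludes the second portion.
        candidates = self.regions[thread] + self.shared
        victim = min(candidates, key=lambda s: self.last_used.get(s, -1))
        self.touch(victim)
        return victim
```

A store issued by thread 0 can therefore land only in slots 0-3 or 8-11, never in thread 1's slots 4-7, no matter how stale those are.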
-
Patent number: 7856531
Abstract: A runtime code manipulation system is provided that supports code transformations on a program while it executes. The runtime code manipulation system uses code caching technology to provide efficient and comprehensive manipulation of an application running on an operating system and hardware. The code cache includes a system for automatically keeping the code cache at an appropriate size for the current working set of the running application.
Type: Grant
Filed: December 30, 2008
Date of Patent: December 21, 2010
Assignee: Massachusetts Institute of Technology
Inventors: Derek L. Bruening, Saman P. Amarasinghe
-
Publication number: 20100318744
Abstract: A method for allocating space in a cache based on media I/O speed is disclosed herein. In certain embodiments, such a method may include storing, in a read cache, cache entries associated with faster-responding storage devices and cache entries associated with slower-responding storage devices. The method may further include implementing an eviction policy in the read cache. This eviction policy may include demoting, from the read cache, the cache entries of faster-responding storage devices faster than the cache entries of slower-responding storage devices, all other variables being equal. In certain embodiments, the eviction policy may further include demoting, from the read cache, cache entries having a lower read-hit ratio faster than cache entries having a higher read-hit ratio, all other variables being equal. A corresponding computer program product and apparatus are also disclosed and claimed herein.
Type: Application
Filed: June 15, 2009
Publication date: December 16, 2010
Applicant: International Business Machines Corporation
Inventors: Michael T. Benhase, Lawrence Y. Chiu, Lokesh M. Gupta, Yu-Cheng Hsu
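The eviction ordering this abstract describes can be sketched as a scoring function: entries on faster-responding devices are cheap to re-fetch, so all else equal they are demoted first, and among comparable entries a lower read-hit ratio means less value in keeping the entry. The weighting below is invented for illustration; the patent application does not specify a formula.

```python
def pick_demotion_victim(entries):
    """Pick which read-cache entry to demote.

    entries: list of dicts with 'fast_device' (bool) and 'hit_ratio' (0..1).
    Returns the index of the entry with the least value in keeping it.
    """
    def keep_value(e):
        # Invented weighting: re-fetch cost dominates (slow devices are
        # expensive to miss on), and the read-hit ratio breaks ties.
        refetch_cost = 0.0 if e["fast_device"] else 1.0
        return refetch_cost + e["hit_ratio"]
    return min(range(len(entries)), key=lambda i: keep_value(entries[i]))
```

With equal hit ratios the fast-device entry is always chosen, and among fast-device entries the one with the lower hit ratio goes first, matching the two "all other variables being equal" clauses above.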
-
Publication number: 20100318742
Abstract: In a particular embodiment, a circuit device includes a translation look-aside buffer (TLB) configured to receive a virtual address and to translate the virtual address to a physical address of a cache having at least two partitions. The circuit device also includes a control logic circuit adapted to identify, based on a partition indicator, a partition replacement policy associated with the identified one of the at least two partitions. The control logic circuit controls replacement of data within the cache according to the identified partition replacement policy in response to a cache miss event.
Type: Application
Filed: June 11, 2009
Publication date: December 16, 2010
Applicant: Qualcomm Incorporated
Inventors: Erich James Plondke, Lucian Codrescu, Ajay Ingle
-
Patent number: 7853591
Abstract: A system protects database operations performed on a shared resource. The system may chunk memory to form a set of memory chunks which have memory blocks, at least some of the memory blocks including database objects. The system may configure at least one binary search tree using the memory chunks as nodes and buffer a set of pointers corresponding to the memory blocks. The system may further validate the buffered pointers and dereference validated buffered pointers.
Type: Grant
Filed: June 30, 2006
Date of Patent: December 14, 2010
Assignee: Juniper Networks, Inc.
Inventors: Xiaosong Yang, Lin Chen, Changming Liu
-
Publication number: 20100312967
Abstract: A cache control apparatus is provided in a computer system including an access source and a storage apparatus. This apparatus determines, based on I/O status information (information denoting the I/O status in accordance with I/O commands from the access source), whether or not the I/O performance from the access source has dropped. In a case where the result of this determination is affirmative, the cache control apparatus changes the cache utilization status specified from cache utilization status information (information denoting the cache utilization status related to a cache area) to a cache utilization status that improves I/O performance.
Type: Application
Filed: July 29, 2009
Publication date: December 9, 2010
Inventors: Yosuke Kasai, Manabu Obana, Akihiko Sakaguchi
-
Publication number: 20100302077
Abstract: The present invention reduces the number of writes to a main memory to increase the useful life of the main memory. To reduce the number of writes to the main memory, data to be written is written to a cache line in a lowest-level cache memory and in one or more higher-level cache memories. If the cache line in the lowest-level cache memory is full, the number of used cache lines in the lowest-level cache reaches a threshold, or there is a need for an empty entry in the lowest-level cache, a processor or a hardware unit compresses the content of the cache line and stores the compressed content in the main memory. The present invention also provides an LZB algorithm allowing decompression of data from an arbitrary location in a compressed data stream, with a bound on the number of characters which need to be processed before a character or string of interest is processed.
Type: Application
Filed: June 2, 2009
Publication date: December 2, 2010
Applicant: International Business Machines Corporation
Inventors: Bulent Abali, Mohammad Banikazemi, Dan E. Poff
-
Publication number: 20100293332
Abstract: In response to a request including a state object, which can indicate a state of an enumeration of a cache, the enumeration can be continued by using the state object to identify and send cache data. Also, an enumeration of cache units can be performed by traversing a data structure that includes object nodes, which correspond to cache units, and internal nodes. An enumeration state stack can indicate a current state of the enumeration, and can include state nodes that correspond to internal nodes in the data structure. Additionally, a cache index data structure can include a higher level table and a lower level table. The higher level table can have a leaf node pointing to the lower level table, and the lower level table can have a leaf node pointing to one of the cache units. Moreover, the lower level table can be associated with a tag.
Type: Application
Filed: May 21, 2009
Publication date: November 18, 2010
Applicant: Microsoft Corporation
Inventors: Muralidhar Krishnaprasad, Sudhir Mohan Jorwekar, Sharique Muhammed, Subramanian Muralidhar, Anil K. Nori
-
Publication number: 20100293334
Abstract: Version indicators within an existing range can be associated with a data partition in a distributed data store. A partition reconfiguration can be associated with one of multiple partitions in the data store, and a new version indicator that is outside the existing range can be assigned to the reconfigured partition. Additionally, a broadcast message can be sent to multiple nodes, which can include storage nodes and/or client nodes that are configured to communicate with storage nodes to access data in a distributed data store. The broadcast message can include updated location information for data in the data store. In addition, a response message can be sent to a requesting node of the multiple nodes in response to receiving from that node a message that requests updated location information for the data. The response message can include the requested updated location information.
Type: Application
Filed: May 15, 2009
Publication date: November 18, 2010
Applicant: Microsoft Corporation
Inventors: Lu Xun, Hua-Jun Zeng, Muralidhar Krishnaprasad, Radhakrishnan Srikanth, Ankur Agrawal, Balachandar Pavadaisamy
-
Patent number: 7836320
Abstract: A data processing apparatus and method are provided for performing power management. The data processing apparatus has a plurality of domains in which devices of the data processing apparatus can operate, and comprises at least one master device for performing operations and at least one slave device for use by such master devices when performing those operations. Each master device is arranged to issue a domain ID signal identifying the domain in which that master device is currently operating. Further, power control logic is provided for determining, based on the domain ID signal issued by the various master devices, whether any portion of a slave device is not currently useable, and if so to cause any such portion to enter a power saving state. This provides a particularly efficient technique for power management in such a data processing apparatus.
Type: Grant
Filed: July 7, 2006
Date of Patent: November 16, 2010
Assignee: ARM Limited
Inventor: Peter William Harris
-
Patent number: 7836256
Abstract: One embodiment of the present method and apparatus for application-specific dynamic cache placement includes grouping sets of data in a cache memory system into two or more virtual partitions and processing a load/store instruction in accordance with the virtual partitions, where the load/store instruction specifies at least one of the virtual partitions to which the load/store instruction is assigned.
Type: Grant
Filed: June 30, 2008
Date of Patent: November 16, 2010
Assignee: International Business Machines Corporation
Inventors: Krishnan Kunjunny Kailas, Rajiv Alazhath Ravindran, Zehra Sura
-
Patent number: 7831772
Abstract: A method for temporarily storing data objects in memory of a distributed system comprising a plurality of servers sharing access to data comprises steps of: reserving memory at each of the plurality of servers as a default data cache for storing data objects; in response to user input, allocating memory of at least one of the plurality of servers as a named cache reserved for storing a specified type of data object; in response to an operation at a particular server requesting a data object, determining whether the requested data object is of the specified type corresponding to the named cache at the particular server; if the data object is determined to be of the specified type corresponding to the named cache, storing the requested data object in the named cache at the particular server; and otherwise, using the default data cache for storing the requested data object.
Type: Grant
Filed: December 12, 2006
Date of Patent: November 9, 2010
Assignee: Sybase, Inc.
Inventors: Vaibhav A. Nalawade, Vadiraja P. Bhatt, KantiKiran K. Pasupuleti
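The routing decision in the method above is straightforward to model: a server keeps a default data cache plus named caches, each reserved for one object type, and a requested object lands in a named cache only when its type matches that reservation. The sketch below is a toy model under invented names (the `ServerCaches` class, the type-map shape, and the example cache names are all assumptions, not the Sybase implementation).

```python
class ServerCaches:
    """Toy model of one server's default data cache plus named caches."""
    def __init__(self, named_caches):
        # named_caches: {cache_name: object_type the cache is reserved for}
        self.default = {}
        self.named = {name: {} for name in named_caches}
        self.type_for = named_caches

    def store(self, key, obj, obj_type):
        # Route to a named cache only when the object type matches its
        # reservation; otherwise fall back to the default data cache.
        for name, reserved_type in self.type_for.items():
            if obj_type == reserved_type:
                self.named[name][key] = obj
                return name
        self.default[key] = obj
        return "default"
```

For example, a server configured with a named cache reserved for index pages would send index pages there and everything else to the default cache.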
-
Patent number: 7831634
Abstract: In one embodiment, a centralized cache configuration for the regions of cache of a system is described. As regions of cache may require different resources, they may need to be configured differently. The system for providing central cache configuration includes a centralized cache manager to transform information regarding applications, services, etc. into proper cache regions for each application, service, etc. In one embodiment, the size of these regions is based on a relative weight schema.
Type: Grant
Filed: April 29, 2005
Date of Patent: November 9, 2010
Assignee: SAP AG
Inventors: Petio G. Petev, Frank Kilian, Krasimir Semerdzhiev
-
Patent number: 7822925
Abstract: A semi-trace cache combines elements and features of an instruction cache and a trace cache. An ICache portion of the semi-trace cache is filled with instructions fetched from the next level of the memory hierarchy, while a TCache portion is filled with traces gleaned either from the actual stream of retired instructions or predicted before execution.
Type: Grant
Filed: August 28, 2008
Date of Patent: October 26, 2010
Assignee: Marvell International Ltd.
Inventor: Michael W. Morrow
-
Patent number: 7822926
Abstract: A data processor includes a cache memory having a plurality of cache rows each storing a cache line of data values, a memory management unit responsive to a page table entry to control access to a corresponding group of memory addresses forming a memory page, and a cache controller coupled to said cache memory and responsive to a cache miss to trigger a line fill operation to store data values into a cache row. The cache controller is responsive to a cache line size specifier associated with at least one page table entry to vary the number of data values within a cache line fetched in a line fill operation in dependence upon said cache line size specifier. Controlling cache line size on a page basis is more efficient than controlling cache line size on a cache row or virtual address basis.
Type: Grant
Filed: April 16, 2007
Date of Patent: October 26, 2010
Assignee: ARM Limited
Inventors: Daren Croxford, Peter James Aldworth
-
Publication number: 20100268889
Abstract: Techniques are generally described for creating a compiler-determined map for the allocation of memory space within a cache. An example computing system is disclosed having a multicore processor with a plurality of processor cores. At least one cache may be accessible to at least two of the plurality of processor cores. A compiler-determined map may separately allocate a memory space to threads of execution processed by the processor cores.
Type: Application
Filed: April 21, 2009
Publication date: October 21, 2010
Inventors: Thomas Martin Conte, Andrew Wolfe
-
Patent number: 7805572
Abstract: Embodiments of the present invention are directed to a scheme in which information as to the future behavior of particular software is used in order to optimize cache management and reduce cache pollution. Accordingly, a certain type of data can be defined as "short life data" by using knowledge of the expected behavior of particular software. Short life data can be a type of data which, according to the ordinary expected operation of the software, is not expected to be used by the software often in the future. Data blocks which are to be stored in the cache can be examined to determine if they are short life data blocks. If the data blocks are in fact short life data blocks, they can be stored only in a particular short life area of the cache.
Type: Grant
Filed: June 29, 2007
Date of Patent: September 28, 2010
Assignee: Emulex Design & Manufacturing Corporation
Inventors: Steven Gerard LeMire, Eddie Miller, Eric David Peel
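The pollution-control idea here is that blocks flagged as short-life are confined to a small dedicated region, so they can never evict long-lived data from the main region. A minimal sketch, assuming an application-supplied short-life flag and invented region sizes (none of this is the patented Emulex design):

```python
from collections import OrderedDict

class ShortLifeCache:
    """Two LRU regions: a main region and a small short-life region.

    Blocks flagged short-life are stored only in the short-life region,
    so bursts of short-life data cannot pollute the main region.
    """
    def __init__(self, main_size=8, short_size=2):
        self.main = OrderedDict()
        self.short = OrderedDict()
        self.main_size, self.short_size = main_size, short_size

    def put(self, key, block, is_short_life):
        region, cap = ((self.short, self.short_size) if is_short_life
                       else (self.main, self.main_size))
        region[key] = block
        region.move_to_end(key)            # mark most recently used
        if len(region) > cap:
            region.popitem(last=False)     # evict LRU within that region only
```

Even if short-life blocks arrive far faster than long-lived ones, they only ever displace each other inside the small region.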
-
Patent number: 7802072
Abstract: A data storage device comprises a memory that includes a plurality of physically partitioned memory areas, with a rewrite buffer area set within each partitioned memory area to be used for data rewrites, and a memory management unit that updates data recorded in each partitioned memory area by utilizing the corresponding rewrite buffer area set within that partitioned memory area.
Type: Grant
Filed: July 31, 2007
Date of Patent: September 21, 2010
Assignee: Felica Networks, Inc.
Inventor: Toshiharu Takemura
-
Patent number: 7802081
Abstract: Apparatus, systems, methods, and articles may operate to store one or more parameters associated with a pseudo-device in a device configuration table associated with a first partition within a multi-partition computing platform. An inter-partition bridge (IPB) may be exposed to an operating system executing within the first partition. The IPB may be adapted to couple the first partition to a second partition sequestered from the first partition. The IPB may be configured by the parameter(s) associated with the pseudo-device. Other embodiments may be described and claimed.
Type: Grant
Filed: September 30, 2005
Date of Patent: September 21, 2010
Assignee: Intel Corporation
Inventors: Thomas Schultz, Saul Lewites
-
Patent number: 7801163
Abstract: A method for allocating space among a plurality of queues in a buffer includes sorting all the queues of the buffer according to size, thereby establishing a sorted order of the queues. At least one group of the queues is selected, consisting of a given number of the queues in accordance with the sorted order. A portion of the space in the buffer is allocated to the group, responsive to the number of the queues in the group. A data packet is accepted into one of the queues in the group responsive to whether the data packet will cause the space occupied in the buffer by the queues in the group to exceed the allocated portion of the space.
Type: Grant
Filed: April 13, 2006
Date of Patent: September 21, 2010
Inventors: Yishay Mansour, Alexander Kesselman
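The admission test above can be sketched numerically. In this illustrative reading (a sketch, not the patented method: the abstract does not say from which end of the sorted order the group is drawn, so taking the largest queues is an assumption, as is the proportional share formula), the group of the largest queues gets a share of the buffer proportional to its queue count, and a packet destined for a queue in that group is accepted only if the group stays within its share.

```python
def accept_packet(queue_sizes, buffer_size, group_size, target_queue, packet_len):
    """Decide whether a packet may enter `target_queue`.

    queue_sizes: {queue_id: bytes currently occupied}.
    """
    # Sort queues by occupancy and take the `group_size` largest as the group.
    ranked = sorted(queue_sizes, key=queue_sizes.get, reverse=True)
    group = ranked[:group_size]
    # Assumed share rule: the group's allocation is proportional to its
    # queue count relative to the total number of queues.
    allocation = buffer_size * group_size / len(queue_sizes)
    if target_queue not in group:
        return True   # queues outside the constrained group are not limited here
    occupied = sum(queue_sizes[q] for q in group)
    return occupied + packet_len <= allocation
```

With four queues sharing a 100-byte buffer and a group of the two largest, the group's share is 50 bytes: a packet for a queue in an oversized group is rejected, while a packet for a small queue outside the group is accepted.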
-
Publication number: 20100235580
Abstract: A system and method are provided for managing cache memory in a computer system. A cache controller portions a cache memory into a plurality of partitions, where each partition includes a plurality of physical cache addresses. Then, the method accepts a memory access message from the processor. The memory access message includes an address in physical memory and a domain identification (ID). A determination is made whether the address in physical memory is cacheable. If cacheable, the domain ID is cross-referenced to a cache partition identified by partition bits. An index is derived from the physical memory address, and a partition index is created by combining the partition bits with the index. A processor is granted access (read or write) to an address in cache defined by the partition index.
Type: Application
Filed: April 6, 2009
Publication date: September 16, 2010
Inventor: Daniel Bouvier
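The address math in this abstract reduces to a few bit operations: derive a set index from the physical address, look up the partition bits for the domain ID, and concatenate the two. The sketch below fixes an illustrative geometry (64-byte lines, an 8-bit index, and a two-partition domain map are all assumptions; the publication does not specify widths).

```python
def partition_index(phys_addr, domain_id, partition_for_domain,
                    line_bytes=64, index_bits=8):
    """Build the partition index used to address the cache.

    partition_for_domain: {domain_id: partition bits for that domain}.
    """
    # Set index: drop the line-offset bits, keep the low index bits.
    index = (phys_addr // line_bytes) & ((1 << index_bits) - 1)
    # Cross-reference the domain ID to its partition bits...
    partition_bits = partition_for_domain[domain_id]
    # ...and prepend them above the index to form the partition index.
    return (partition_bits << index_bits) | index
```

Two requests to the same physical address from different domains thus land at different cache locations, which is what keeps the partitions isolated.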
-
Patent number: 7797492
Abstract: A method and apparatus for dedicating cache entries to certain streams for performance optimization are disclosed. The method according to the present techniques comprises partitioning a cache array into one or more special-purpose entries and one or more general-purpose entries, wherein special-purpose entries are only allocated for one or more streams having a particular stream ID.
Type: Grant
Filed: February 20, 2004
Date of Patent: September 14, 2010
Inventors: Anoop Mukker, Zohar Bogin, Tuong Trieu, Aditya Navale
-
Patent number: 7793048
Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment, a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
Type: Grant
Filed: September 9, 2008
Date of Patent: September 7, 2010
Assignee: International Business Machines Corporation
Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
-
Patent number: 7792879
Abstract: A method comprising calculating first and second allocation speeds respectively for at least a first space and a second space in a runtime environment's memory space; and partitioning the runtime environment's memory space in proportion to said first and second spaces' respective allocation speeds so that the first space is filled to a first threshold level approximately at the same time as the second space is filled to a second threshold level.
Type: Grant
Filed: March 11, 2008
Date of Patent: September 7, 2010
Assignee: Intel Corporation
Inventors: Ji Qi, Xiao-Feng Li
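The arithmetic behind this proportional partitioning is simple: if a space of size s filling at speed v reaches its threshold fraction th after time t = th * s / v, then equating the two fill times gives size_a / size_b = (v_a / th_a) / (v_b / th_b). A minimal sketch of that calculation (the function shape and default thresholds are illustrative, not the patented runtime):

```python
def partition_by_allocation_speed(total_bytes, speed_a, speed_b,
                                  threshold_a=1.0, threshold_b=1.0):
    """Split total_bytes so both spaces hit their thresholds simultaneously.

    Fill time for a space is threshold * size / speed; setting the two
    fill times equal yields sizes proportional to speed / threshold.
    """
    weight_a = speed_a / threshold_a
    weight_b = speed_b / threshold_b
    size_a = total_bytes * weight_a / (weight_a + weight_b)
    return size_a, total_bytes - size_a
```

For instance, a space allocating twice as fast as its sibling (with equal thresholds) gets two thirds of the memory, so both fill at the same moment.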
-
Patent number: 7774525
Abstract: Zoned initialization of a solid state drive is provided. A solid state memory device includes a controller for controlling storage and retrieval of data to and from the device. A set of solid state memory components is electrically coupled to the controller. The set is electrically divided into a first zone and a second zone, wherein the first zone is at least partially initialized independent from the second zone. An interface is coupled between the controller and the set of solid state memory components to facilitate transfer of data between the set of solid state memory components and the controller.
Type: Grant
Filed: March 13, 2007
Date of Patent: August 10, 2010
Assignee: Dell Products L.P.
Inventors: Munif M. Farhan, Thomas L. Pratt
-
Patent number: 7769952
Abstract: A storage system eliminates duplicated caching across plural disk cache partitions, which are obtained by dividing a disk cache. The storage system includes a non-volatile medium that stores data; a disk cache that temporarily stores data to be stored in the non-volatile medium; a control unit that controls input and output of data to and from the non-volatile medium; and a memory unit that stores information used by the control unit. The control unit divides the disk cache into at least one independent disk cache partition. The memory unit stores first information that describes the states of the respective memory areas in the disk cache, and second information that indicates the states of the respective memory areas in the disk cache used by the divided disk cache partitions. The second information includes information that identifies the first information corresponding to the respective memory areas in the disk cache.
Type: Grant
Filed: October 17, 2005
Date of Patent: August 3, 2010
Assignee: Hitachi, Ltd.
Inventors: Akiyoshi Hashimoto, Aki Tomita
-
Publication number: 20100191915
Abstract: A system for providing dynamic queue splitting to maximize throughput of queue entry processing while maintaining the order of queued operations on a per-destination basis. Multiple queues are dynamically created by splitting heavily loaded queues in two. As queues become dormant, they are re-combined. Queue splitting is initiated in response to a trigger condition, such as a queue exceeding a threshold length. When multiple queues are used, the queue in which to place a given operation is determined based on the destination for that operation. Each queue in the queue tree created by the disclosed system can store entries containing operations for multiple destinations, but the operations for a given destination are always stored within the same queue. The queue into which an operation is to be stored may be determined as a function of the name of the operation destination.
Type: Application
Filed: January 29, 2009
Publication date: July 29, 2010
Applicant: International Business Machines Corporation
Inventor: William A. Spencer
-
Patent number: 7764492
Abstract: A system comprises a first mechanical adapter configured to accept a plurality of non-volatile storage devices of a first size. The system also comprises a cage into which the first mechanical adapter is installable. At least one non-volatile storage device of a second size is installable in the cage without the first mechanical adapter.
Type: Grant
Filed: January 31, 2007
Date of Patent: July 27, 2010
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Jonathan D. Bassett, Ronald P. Dean, Tom J. Searby
-
Publication number: 20100179865
Abstract: Music can be broadcast from a radio station and recorded onto a cache of a personal electronic device, such as a portable digital music player. The recording can occur such that there is segmenting of music into different cache portions based upon classification. Instead of playing music from the radio station, music can be played from the cache to ensure high quality and desirable variety. Different rules can be used to govern which music is played as well as how music should be removed from the cache. In addition, targeted advertisements can be used that relate to the music in the cache as well as a user location.
Type: Application
Filed: January 9, 2009
Publication date: July 15, 2010
Applicant: Qualcomm Incorporated
Inventors: Patrik N. Lundqvist, Guilherme K. Hoefel, Robert S. Daley, Jack B. Steenstra
-
Publication number: 20100174863
Abstract: A system is described for providing scalable in-memory caching for a distributed database. The system may include a cache, an interface, a non-volatile memory and a processor. The cache may store a cached copy of data items stored in the non-volatile memory. The interface may communicate with devices and a replication server. The non-volatile memory may store the data items. The processor may receive an update to a data item from a device to be applied to the non-volatile memory. The processor may apply the update to the cache. The processor may generate an acknowledgement indicating that the update was applied to the non-volatile memory and may communicate the acknowledgment to the device. The processor may then communicate the update to a replication server. The processor may apply the update to the non-volatile memory upon receiving an indication that the update was stored by the replication server.
Type: Application
Filed: March 15, 2010
Publication date: July 8, 2010
Applicant: Yahoo! Inc.
Inventors: Brian F. Cooper, Adam Silberstein, Utkarsh Srivastava, Raghu Ramakrishnan, Rodrigo Fonseca
-
Patent number: 7752341
Abstract: A programmable controller includes a CPU unit, a communication unit and peripheral units connected together through an internal bus. The communication unit has a bus master function, including a cache memory for recording IO data stored in the memory of an input-output unit. When a message is received, it is judged whether the IO data stored in the memory of the input-output unit specified by this message have been updated or not. If the data are not updated, a response is created based on the IO data stored in the cache memory. If the data are updated, the input-output unit is accessed, updated IO data are obtained, and a response is created based on the obtained IO data.
Type: Grant
Filed: February 6, 2009
Date of Patent: July 6, 2010
Assignee: OMRON Corporation
Inventor: Shinichiro Kawaguchi
-
Patent number: 7747812Abstract: A method includes configuring a flash memory device including a first memory sector having a primary memory sector correspondence, a second memory sector having an alternate memory sector correspondence, and a third memory sector having a free memory sector correspondence, copying a portion of the primary memory sector to the free memory sector, erasing the primary memory sector, and changing a correspondence of each of the first memory sector, the second memory sector, and the third memory sector.Type: GrantFiled: December 22, 2005Date of Patent: June 29, 2010Assignee: Pitney Bowes Inc.Inventors: Wesley A. Kirschner, Gary S. Jacobson, John A. Hurd, G. Thomas Athens, Steven J. Pauly, Richard C. Day, Jr.
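The copy-erase-rotate cycle this abstract describes can be sketched as a small function. The exact new role assignments are not pinned down by the abstract, so the rotation below is one plausible choice, and all names are illustrative:

```python
def rotate_sectors(sectors, roles, keep):
    """Sketch of the three-sector flash update cycle:
    1. copy the still-valid portion of the primary sector to the free sector,
    2. erase the primary sector,
    3. give every sector a new correspondence (one plausible rotation).
    `sectors` maps sector name -> list of records; `roles` maps role -> name;
    `keep` is the set of records that constitute the copied portion."""
    primary = roles["primary"]
    alternate = roles["alternate"]
    free = roles["free"]
    sectors[free] = [r for r in sectors[primary] if r in keep]  # copy portion
    sectors[primary] = []                                       # erase primary
    return {"primary": free, "alternate": primary, "free": alternate}
```

This kind of rotation is what lets flash firmware update data in place despite the erase-before-write constraint of flash sectors.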
-
Patent number: 7747823Abstract: Cache management strategies are described for retrieving information from a storage medium, such as an optical disc, using a cache memory including multiple cache segments. A first group of cache segments can be devoted to handling the streaming transfer of a first type of information, and a second group of cache segments can be devoted to handling the bulk transfer of a second type of information. A host system can provide hinting information that identifies which group of cache segments that a particular read request targets. A circular wrap-around fill strategy can be used to iteratively supply new information to the cache segments upon cache hits by performing pre-fetching. Various eviction algorithms can be used to select a cache segment for flushing and refilling upon a cache miss, such as a least recently used (LRU) algorithm or a least frequently used (LFU) algorithm.Type: GrantFiled: February 11, 2008Date of Patent: June 29, 2010Assignee: Microsoft CorporationInventors: Brian L. Schmidt, Jonathan E. Lange, Timothy R. Osborne
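The eviction behavior named in this abstract (select a segment to flush and refill on a miss, for example least recently used) can be sketched in a few lines of Python. The class below is an illustration of the LRU variant only, with assumed names:

```python
class SegmentedCache:
    """Sketch: a fixed set of cache segments; on a miss, the least recently
    used segment is selected for flushing and refilled with the requested
    extent of the storage medium."""

    def __init__(self, n_segments):
        self.segments = [None] * n_segments   # each slot holds one extent id
        self.last_used = [0] * n_segments     # recency stamp per segment
        self.clock = 0

    def read(self, extent):
        """Return the index of the segment serving `extent`."""
        self.clock += 1
        if extent in self.segments:           # cache hit
            i = self.segments.index(extent)
        else:                                 # cache miss: evict the LRU segment
            i = min(range(len(self.segments)),
                    key=lambda j: self.last_used[j])
            self.segments[i] = extent         # flush and refill
        self.last_used[i] = self.clock
        return i
```

A least frequently used (LFU) variant would keep a hit counter per segment instead of a recency stamp and evict the minimum-count segment.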
-
Patent number: 7747839Abstract: A data processing apparatus and method are provided for handling instructions to be executed by processing circuitry. The processing circuitry has a plurality of processor states, each processor state having a different instruction set associated therewith. Pre-decoding circuitry receives the instructions fetched from the memory and performs a pre-decoding operation to generate corresponding pre-decoded instructions, with those pre-decoded instructions then being stored in a cache for access by the processing circuitry. The pre-decoding circuitry performs the pre-decoding operation assuming a speculative processor state, and the cache is arranged to store an indication of the speculative processor state in association with the pre-decoded instructions.Type: GrantFiled: January 23, 2008Date of Patent: June 29, 2010Assignee: ARM LimitedInventors: Peter Richard Greenhalgh, Andrew Christopher Rose
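The key idea here, caching pre-decoded instructions together with the processor state that was assumed while pre-decoding them, can be sketched as follows. This is an illustrative model, not the ARM implementation; the callback and names are assumptions:

```python
class PredecodeCache:
    """Sketch: instructions are pre-decoded under a speculative processor
    state and cached along with that state; a cached entry is only usable
    when its stored state matches the processor's current state."""

    def __init__(self, predecode):
        self.predecode = predecode   # (raw_bytes, state) -> pre-decoded form
        self.cache = {}              # addr -> (speculative_state, pre-decoded)

    def fill(self, addr, raw, speculative_state):
        decoded = self.predecode(raw, speculative_state)
        self.cache[addr] = (speculative_state, decoded)

    def fetch(self, addr, current_state):
        state, decoded = self.cache[addr]
        if state != current_state:   # wrong instruction set was assumed
            return None              # caller must re-pre-decode under the real state
        return decoded
```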
-
Patent number: 7743209Abstract: There is provided a storage system capable of handling a large amount of control data at low cost and with high performance. The storage system includes a cache memory for temporarily storing data read/written between a host computer and a disk array, a CPU for controlling data transfer, and a local memory for storing control data utilized by the CPU. The disk array has a first user data storing area for storing user data and a control data storing area for storing all control data. A control unit has a virtualization unit for allocating a memory space of the control data storing area to a virtual address accessible from the CPU, reading the control data specified by the virtual address to a physical address of the local memory, and transferring the control data to the CPU.Type: GrantFiled: December 1, 2006Date of Patent: June 22, 2010Assignee: Hitachi, Ltd.Inventors: Masanori Takada, Kentaro Shimada, Shuji Nakamura
-
Patent number: 7739454Abstract: The present invention partitions a cache region of a storage subsystem for each user and prevents interference between user-dedicated regions. A plurality of CLPR can be established within the storage subsystem. A CLPR is a user-dedicated region that can be used by partitioning the cache region of a cache memory. Management information required to manage the data stored in the cache memory is allocated to each CLPR in accordance with the attribute of the segment or slot. The clean queue and clean counter, which manage the segments in a clean state, are provided in each CLPR. The dirty queue and dirty counter are used jointly by all the CLPR. The free queue, classification queue, and BIND queue are applied jointly to all the CLPR, only the counters being provided in each CLPR.Type: GrantFiled: June 12, 2007Date of Patent: June 15, 2010Assignee: Hitachi, Ltd.Inventors: Sachiko Hoshino, Takashi Sakaguchi, Yasuyuki Nagasoe, Shoji Sugino
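The counter layout this abstract describes, per-partition clean queues but a single dirty queue shared by all partitions (with only the counters kept per partition), can be sketched in Python. The names below are illustrative, not from the patent:

```python
class PartitionedCache:
    """Sketch of the CLPR scheme: clean segments are queued and counted per
    user partition (CLPR), while one dirty queue is used jointly by all
    partitions, with only a dirty counter kept per CLPR."""

    def __init__(self, clprs):
        self.clean = {c: [] for c in clprs}        # clean queue per CLPR
        self.clean_count = {c: 0 for c in clprs}   # clean counter per CLPR
        self.dirty = []                            # one dirty queue for all CLPRs
        self.dirty_count = {c: 0 for c in clprs}   # dirty counter per CLPR

    def add_clean(self, clpr, segment):
        self.clean[clpr].append(segment)
        self.clean_count[clpr] += 1

    def mark_dirty(self, clpr, segment):
        self.dirty.append((clpr, segment))         # joint queue, tagged by owner
        self.dirty_count[clpr] += 1                # per-partition accounting only
```

Keeping the dirty queue joint simplifies destaging order across the whole subsystem, while per-CLPR clean queues are what prevent one user's workload from evicting another's clean data.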
-
Patent number: 7734873Abstract: A processor includes a cache hierarchy including a level-1 cache and a higher-level cache. The processor maps a portion of physical memory space to a portion of the higher-level cache, executes instructions, at least some of which comprise microcode, allows microcode to access the portion of the higher-level cache, and prevents instructions that do not comprise microcode from accessing the portion of the higher-level cache. The first portion of the physical memory space can be permanently allocated for use by microcode. The processor can move one or more cache lines of the first portion of the higher-level cache from the higher-level cache to a first portion of the level-1 cache, allow microcode to access the first portion of the first level-1 cache, and prevent instructions that do not comprise microcode from accessing the first portion of the first level-1 cache.Type: GrantFiled: May 29, 2007Date of Patent: June 8, 2010Assignee: Advanced Micro Devices, Inc.Inventors: Gary Lauterbach, Bruce R. Holloway, Michael Gerard Butler, Sean Lic
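The access rule at the heart of this abstract, a cache region that only microcode may touch, can be sketched as a simple gate. This is a toy model with assumed names, not AMD's hardware mechanism:

```python
class MicrocodeRegionCache:
    """Sketch: a portion of the cache is permanently reserved for microcode;
    accesses tagged as microcode may use it, all other instructions are
    prevented from touching it."""

    def __init__(self, reserved_lines):
        self.reserved = set(reserved_lines)   # lines mapped for microcode use
        self.data = {}

    def _check(self, line, is_microcode):
        if line in self.reserved and not is_microcode:
            raise PermissionError("non-microcode access to microcode region")

    def write(self, line, value, is_microcode):
        self._check(line, is_microcode)
        self.data[line] = value

    def access(self, line, is_microcode):
        self._check(line, is_microcode)
        return self.data.get(line)
```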
-
Patent number: 7730453Abstract: Methods for handling zero-length allocations are disclosed. An example of such a method may include returning a self-describing/diagnosing dynamic address that has all the properties required for a secure implementation. Another example may include returning a series of different addresses (instead of a single address per process) to improve supportability. Yet another example may include maintaining diagnostic information about the original allocation for ease of problem resolution.Type: GrantFiled: December 13, 2005Date of Patent: June 1, 2010Assignee: Microsoft CorporationInventor: Michael Luther Swafford
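The second and third examples in this abstract (distinct addresses per zero-length allocation, plus retained diagnostic information) can be sketched together. The class, token format, and caller strings below are illustrative assumptions:

```python
class ZeroLengthAllocator:
    """Sketch: instead of one shared sentinel address, each zero-length
    allocation gets a distinct token, and the allocator remembers where
    each one came from for later diagnosis."""

    def __init__(self):
        self.live = {}       # token -> diagnostic info about the original allocation
        self.next_id = 0

    def alloc(self, size, caller):
        if size == 0:
            token = ("zero-length", self.next_id)   # unique per allocation
            self.live[token] = caller               # keep origin for problem resolution
            self.next_id += 1
            return token
        return bytearray(size)                      # ordinary allocation

    def diagnose(self, token):
        return self.live.get(token, "not a zero-length allocation")
```

Returning a different token per call improves supportability because a misuse of one zero-length allocation can be traced back to its specific origin rather than to an anonymous shared sentinel.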
-
Publication number: 20100131715Abstract: An apparatus is provided for updating data within a business planning tool. The apparatus comprises a computer memory (22) arranged to store operational data in a plurality of line items (50), each line item (50) being arranged to represent operational data in data cells (52) occupying space in a plurality of dimensions (X, Y), and each line item (50) having data cells in a first dimension (Y) configured to represent the operational data in at least one hierarchy level, and having data cells in a second dimension (X) arranged to represent the respective operational data over at least one time period.Type: ApplicationFiled: November 19, 2009Publication date: May 27, 2010Inventor: Michael Peter Gould
-
Patent number: 7720957Abstract: Apparatus and storage media for auto-configuration of an internal network interface are disclosed. Embodiments may install an internal VLAN manager in a logically partitioned computer system along with network agents in each of the partitions in the logically partitioned system to facilitate configuring an internal communications network and the corresponding internal network interfaces in each participating partition. In particular, an administrator accesses internal VLAN manager, selects an internal VLAN ID, selects each of the participating partitions, and configures the communications network with global parameters and ranges. The internal VLAN manager then generates partition parameters and incorporates them into messages for each of the partitions selected to participate in the internal network.Type: GrantFiled: February 11, 2009Date of Patent: May 18, 2010Assignee: International Business Machines CorporationInventors: Charles S. Graham, Harvey G. Kiel, Chetan Mehta, Lee A. Sendelbach, Jaya Srikrishnan
-
Publication number: 20100122032Abstract: Embodiments of the present invention provide a system that selectively performs lookups for cache lines. During operation, the system maintains a lower-level cache and a higher-level cache in accordance with a set of rules that dictate conditions under which cache lines are held in the lower-level cache and the higher-level cache. The system next performs a lookup for cache line A in the lower-level cache. The system then discovers that the lookup for cache line A missed in the lower-level cache, but that cache line B is present in the lower-level cache. Next, in accordance with the set of rules, the system determines, without performing a lookup for cache line A in the higher-level cache, that cache line A is guaranteed not to be present and valid in the higher-level cache because cache line B is present in the lower-level cache.Type: ApplicationFiled: November 13, 2008Publication date: May 13, 2010Applicant: SUN MICROSYSTEMS, INC.Inventors: Robert E. Cypher, Haakan E. Zeffer
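The lookup-skipping decision this abstract describes reduces to a rule check, sketched below. The rule table format is an assumption for illustration; the patent does not specify how the rules are encoded:

```python
def needs_higher_level_lookup(line_a, lower_level_contents, excludes):
    """Sketch: if the cache-maintenance rules guarantee that holding line B
    in the lower-level cache rules out line A being present and valid in the
    higher-level cache, the higher-level lookup can be skipped.
    `excludes` maps a resident line to the set of lines it rules out."""
    for line_b in lower_level_contents:
        if line_a in excludes.get(line_b, set()):
            return False   # guaranteed absent from the higher-level cache
    return True            # no guarantee: must perform the higher-level lookup
```

Skipping guaranteed-miss lookups saves tag-array bandwidth and power in the higher-level cache, which is the point of maintaining the caches under such rules.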
-
Patent number: 7716422Abstract: Provided are a storage apparatus using a non-volatile memory as a cache and a method of operating the same, in which the non-volatile memory is used as the cache so as to preserve data even when electricity is interrupted. The storage apparatus using a non-volatile memory as a cache includes a main storage medium, the non-volatile memory being used as the cache of the main storage medium and having a stationary region and a non-stationary region divided according to whether data are fixed, and a block management unit managing blocks allocated in the non-volatile memory.Type: GrantFiled: November 20, 2006Date of Patent: May 11, 2010Assignee: Samsung Electronics Co., Ltd.Inventors: Dong-kun Shin, Sang-lyul Min, Shea-yun Lee, Jang-hwan Kim, Dong-hyun Song, Jeong-eun Kim
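The stationary/non-stationary split this abstract describes can be sketched as a block manager that routes each allocation to one of two regions. Names and the fixed-capacity model are assumptions for the sketch:

```python
class NVCache:
    """Sketch: the non-volatile cache is divided into a stationary region
    for fixed data and a non-stationary region for ordinary cached blocks;
    a block management unit assigns each allocated block to one region."""

    def __init__(self, stationary_blocks, other_blocks):
        self.regions = {"stationary": {}, "non-stationary": {}}
        self.capacity = {"stationary": stationary_blocks,
                         "non-stationary": other_blocks}

    def allocate(self, block_id, data, fixed):
        """Place a block according to whether its data are fixed."""
        region = "stationary" if fixed else "non-stationary"
        if len(self.regions[region]) >= self.capacity[region]:
            raise MemoryError(f"{region} region full")
        self.regions[region][block_id] = data
        return region
```

Because the cache itself is non-volatile, data in either region survives a power interruption; the split only governs which blocks are treated as fixed.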
-
Patent number: 7716720Abstract: The present invention is directed to a system for providing a trusted environment for untrusted computing systems. The system may include a HAC subsystem managing shared resources and a trusted bus switch for controlling a COTS processor to access the shared resources. The shared resources such as memory and several I/O resources reside on the trusted side of the trusted bus switch. Alternatively, the system may include a SCM as an add-on module to an untrusted host environment. Only authenticated applications including COTS OS execute on the SCM while untrusted applications execute on the untrusted host environment. The SCM may control secure resource access from the untrusted host through a plug-in module interface. All secure resources may be maintained on the trusted side of the plug-in module interface.Type: GrantFiled: June 17, 2005Date of Patent: May 11, 2010Assignee: Rockwell Collins, Inc.Inventors: James A. Marek, David S. Hardin, Raymond A. Kamin, III, Steven E. Koenck, Allen P. Mass