Addressing Of Memory Level In Which Access To Desired Data Or Data Block Requires Associative Addressing Means, E.g., Cache, Etc. (epo) Patents (Class 711/E12.017)

  • Publication number: 20110093648
    Abstract: According to one embodiment, a method for using flash memory in a storage cache comprises receiving data to be cached in flash memory of a storage cache, at least some of the received data being received from at least one of a host system and a storage medium, selecting a block of the flash memory for receiving the data, buffering the received data until sufficient data has been received to fill the block, and overwriting existing data in the selected block with the buffered data. According to another embodiment, a method comprises receiving data, at least some of the data being from a host system and/or a storage medium, and sequentially overwriting sequential blocks of the flash memory with the received data. Other devices and methods for working with flash memory in a storage cache according to various embodiments are included and described herein.
    Type: Application
    Filed: October 21, 2009
    Publication date: April 21, 2011
    Applicant: International Business Machines Corporation
    Inventors: Wendy A. Belluomini, Binny S. Gill, Michael A. Ko
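The write path in the abstract above, buffering received data until a full flash block can be overwritten, can be sketched as follows. This is a hypothetical illustration; the class and field names are not from the patent, and real flash management would track erase cycles and metadata.

```python
# Hypothetical sketch of the block-buffered flash-cache write described in
# the abstract above (names are illustrative, not from the patent).
class FlashCacheWriter:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.num_blocks = num_blocks
        self.buffer = bytearray()          # staging area for incoming data
        self.blocks = [b""] * num_blocks   # simulated flash blocks
        self.next_block = 0                # sequential overwrite pointer

    def write(self, data: bytes):
        """Buffer incoming cache data; flush whenever a full block is ready."""
        self.buffer.extend(data)
        while len(self.buffer) >= self.block_size:
            chunk = bytes(self.buffer[:self.block_size])
            del self.buffer[:self.block_size]
            # Overwrite the next block in sequence, wrapping around.
            self.blocks[self.next_block] = chunk
            self.next_block = (self.next_block + 1) % self.num_blocks
```

Buffering to full-block granularity avoids the costly read-modify-write cycles that partial-block updates would trigger on flash.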
  • Publication number: 20110087839
    Abstract: The apparatus, methods and systems for a smart address parser (hereinafter, “SAP”) described herein implement a text parser whereby users may enter a text string, such as manually via an input field. The SAP processes the input address string to extract address elements for storage, display, reporting, and/or use in a wide variety of back-end applications. In various embodiments and implementations, the SAP may facilitate: separation and identification of address components regardless of the order in which they are supplied in the input address string; supplementation of missing address information; correction and/or recognition of misspelled terms, abbreviations, alternate names, and/or the like variants of address elements; recognition of unique addresses based on minimal but sufficient input identifiers; and/or the like.
    Type: Application
    Filed: October 9, 2009
    Publication date: April 14, 2011
    Applicant: Verizon Patent and Licensing Inc.
    Inventors: Nityanand Sharma, Sutap Chatterjee
  • Publication number: 20110087844
    Abstract: This invention is related to content delivery systems and methods. In one aspect of the invention, a content provider controls a replacement process operating at an edge server. The edge server services content providers and has a data store for storing content associated with respective ones of the content providers. A content provider sets a replacement policy at the edge server that controls the movement of content associated with the content provider, into and out of the data store. In another aspect of the invention, a content delivery system includes a content server storing content files, an edge server having cache memory for storing content files, and a replacement policy module for managing content stored within the cache memory. The replacement policy module can store portions of the content files at the content server within the cache memory, as a function of a replacement policy set by a content owner.
    Type: Application
    Filed: December 20, 2010
    Publication date: April 14, 2011
    Applicant: EdgeCast Networks, Inc.
    Inventors: Lior Elazary, Alex Kazerani, Jay Sakata
  • Publication number: 20110087842
    Abstract: Retrieving content items based on a social distance between a user and content providers. The social distance is determined based on, for example, user interaction with the content providers. The content providers are ranked, for the user, based on the determined social distance. Prior to a request from the user, the content items are pre-fetched based on the ranked content providers and constraints such as storage space, bandwidth, and battery power level of a computing device of the user. In some embodiments, additional content items are retrieved, or retrieved content items are deleted, as a variable-size cache on the computing device fills or changes size.
    Type: Application
    Filed: October 12, 2009
    Publication date: April 14, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Bo Lu, Raghavendra Malpani, Joseph A. Porkka, Bartosz Henryk Paliswiat, Christian Bøgh Jensen, Sushil Kumar
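The ranking-and-prefetch flow in the abstract above can be sketched as a greedy fill under a storage budget. All names and the interaction-count scoring are assumptions for illustration; the patent leaves the social-distance metric open.

```python
# Illustrative sketch (names assumed, not from the patent): rank content
# providers by interaction count (more interaction = smaller social
# distance), then pre-fetch items greedily until a storage budget is hit.
def rank_providers(interactions):
    """Sort providers so the socially closest (most interacted-with) come first."""
    return sorted(interactions, key=lambda p: -interactions[p])

def prefetch(ranked, items_by_provider, item_size, budget):
    """Fetch items in provider-rank order until the storage budget is reached."""
    fetched, used = [], 0
    for provider in ranked:
        for item in items_by_provider.get(provider, []):
            if used + item_size > budget:
                return fetched
            fetched.append(item)
            used += item_size
    return fetched
```

The same budget check could be driven by bandwidth or battery level, the other constraints the abstract names.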
  • Publication number: 20110082980
    Abstract: A cache memory device and method for operating the same. One embodiment of the cache memory device includes an address decoder decoding a memory address and selecting a target cache line. A first cache array is configured to output a first cache entry associated with the target cache line, and a second cache array coupled to an alignment unit is configured to output a second cache entry associated with the alignment cache line. The alignment unit coupled to the address decoder selects either the target cache line or a neighbor cache line proximate the target cache line as an alignment cache line output. Selection of either the target cache line or a neighbor cache line is based on an alignment bit in the memory address. A tag array cache is split into even and odd cache lines tags, and provides one or two tags for every cache access.
    Type: Application
    Filed: October 2, 2009
    Publication date: April 7, 2011
    Applicant: International Business Machines Corporation
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Publication number: 20110082967
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, can perform data caching. In some implementations, a method and system include receiving information that includes a logical address, allocating a physical page in a non-volatile memory structure, mapping the logical address to a physical address of the physical page, and writing, based on the physical address, data to the non-volatile memory structure to cache information associated with the logical address. The logical address can include an identifier of a data storage device and a logical page number.
    Type: Application
    Filed: October 5, 2010
    Publication date: April 7, 2011
    Inventors: Shekhar S. Deshkar, Sandeep Karmarkar, Arvind Pruthi, Ram Kishore Johri
  • Publication number: 20110082845
    Abstract: A condensed version of a plurality of rules for one or more forms being used by a user is loaded from a database. The condensed version of the plurality of rules is stored in cache memory of a computing device. When an event occurs, a rules engine determines if a condensed version of the rule is stored in cache memory. If a rule is not applicable, the rules engine does not query the database. If a rule is applicable, the rules engine determines if an action should be taken for the event. If no action is to be taken, the database is not queried for the rule. If an action is to be taken, the database may be queried for information for the rule to allow performing of the action or if the action is included in the condensed version, the action is performed without querying the database.
    Type: Application
    Filed: October 1, 2009
    Publication date: April 7, 2011
    Applicant: Oracle International Corporation
    Inventors: Appla Jagadesh Padala, Shalabh Gupta
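The decision flow in the abstract above, querying the database only when a cached rule is applicable and its action is not already condensed, can be sketched like this. Function and field names are illustrative assumptions.

```python
# Hedged sketch of the condensed-rule lookup described above (names
# assumed). The database is consulted only when an applicable rule's
# action is missing from the condensed cache.
def handle_event(event, condensed_cache, query_db):
    rule = condensed_cache.get(event)
    if rule is None or not rule.get("applicable", False):
        return None                  # not applicable: no database query
    action = rule.get("action")
    if action is None:
        action = query_db(event)     # fetch full rule details only when needed
    return action
```

The common cases (rule absent, rule inapplicable, action already condensed) all complete without touching the database.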
  • Publication number: 20110082973
    Abstract: A method and system where a hardware platform such as a disk drive is formatted to the largest block length it is desired to read from or write to. Using commands, data can be accessed from the drive in any block length that is equal to or less than the formatted block length.
    Type: Application
    Filed: December 10, 2010
    Publication date: April 7, 2011
    Applicant: International Business Machines Corporation
    Inventors: Thomas R. Forrer, JR., Jason Eric Moore, Abel Enrique Zuzuarregui
  • Publication number: 20110082981
    Abstract: Data is processed using a first and second processing circuit (12) coupled to a background memory (10) via a first and second cache circuit (14, 14′) respectively. Each cache circuit (14, 14′) stores cache lines, state information defining states of the stored cache lines, and flag information for respective addressable locations within at least one stored cache line. The cache control circuit of the first cache circuit (14) is configured to selectively set the flag information for part of the addressable locations within the at least one stored cache line to a valid state when the first processing circuit (12) writes data to said part of the locations, without prior loading of the at least one stored cache line from the background memory (10). Data is copied from the at least one cache line into the second cache circuit (14′) from the first cache circuit (14) in combination with the flag information for the locations within the at least one cache line.
    Type: Application
    Filed: April 22, 2009
    Publication date: April 7, 2011
    Applicant: NXP B.V.
    Inventors: Jan Hoogerbrugge, Andrei Sergeevich Terechko
  • Publication number: 20110082908
    Abstract: A replication count of a data element of a node of a cache cluster is defined. The data element has a key-value pair where the node is selected based on a hash of the key and a size of the cache cluster. The data element is replicated to at least one other node of the cache cluster based on the replication count.
    Type: Application
    Filed: October 6, 2009
    Publication date: April 7, 2011
    Inventor: Bela Ban
  • Publication number: 20110078377
    Abstract: Social networking data is received at a dispersed storage processing unit, the social networking data associated with at least one of a plurality of user devices. Dispersed storage metadata associated with the social networking data is generated. A full record and at least one partial record are generated based on the social networking data and further based on the dispersed storage metadata. The full record is stored in a dispersed storage network. The partial record is pushed to at least one other of the plurality of user devices via the data network.
    Type: Application
    Filed: May 28, 2010
    Publication date: March 31, 2011
    Applicant: CLEVERSAFE, INC.
    Inventors: Gary W. Grube, Timothy W. Markison
  • Publication number: 20110078378
    Abstract: An information processing apparatus sequentially selects a function whose execution frequency is high as a selected function that is to be stored in an internal memory, in a source program having a hierarchy structure. The information processing apparatus allocates the selected function to a memory area of the internal memory, allocates a function that is not the selected function and is called from the selected function to an area close to the memory area of the internal memory, and generates an internal load module. The information processing apparatus allocates a remaining function to an external memory coupled to a processor and generates an external load module. Then, a program executed by the processor having the internal memory is generated. By allocating the function with a high execution frequency to the internal memory, it is possible to execute the program at high speed, which may improve performance of a system.
    Type: Application
    Filed: September 23, 2010
    Publication date: March 31, 2011
    Applicants: FUJITSU LIMITED, FUJITSU SEMICONDUCTOR LIMITED
    Inventors: Takahisa SUZUKI, Hiromasa YAMAUCHI, Hideo MIYAKE, Makiko ITO
  • Publication number: 20110078367
    Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
    Type: Application
    Filed: September 25, 2009
    Publication date: March 31, 2011
    Inventors: Alexander L. Minkin, Steven James Heinrich, RaJeshwaren Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
  • Publication number: 20110078375
    Abstract: A method and device for executing data access and storage using a host device, the method comprising providing a removable device for the host operable to effect communication between the host and a remote storage service, wherein the removable device is operable to cache data received from and sent to the storage service, the removable device further operable to effect communication between the host device and the storage service using a wireless communication module.
    Type: Application
    Filed: September 30, 2009
    Publication date: March 31, 2011
    Inventors: Keir Shepherd, Neil MacDougall, Ben Wynne, Lon Barfield
  • Publication number: 20110078379
    Abstract: An I/O processor determines whether or not the amount of dirty data on a cache memory exceeds a threshold value and, if the determination is that this threshold value has been exceeded, writes a portion of the dirty data of the cache memory to a storage device. If a power source monitoring and control unit detects a voltage abnormality of the supplied power, the power monitoring and control unit maintains supply of power using power from a battery, so that a processor receives supply of power from the battery and saves the dirty data stored on the cache memory to a non-volatile memory.
    Type: Application
    Filed: December 3, 2010
    Publication date: March 31, 2011
    Inventors: Junichi IIDA, Xiaoming Jiang
  • Publication number: 20110078376
    Abstract: A method and apparatus for obtaining location content from multiple networks is disclosed. The method may comprises: obtaining coarse location content at a wireless communication device (WCD) from a first network using a first protocol, wherein the coarse location content includes information defining locations of geographic coverage regions for one or more second networks which use a second protocol, obtaining WCD location information, determining from the WCD location information and the coarse location content if the WCD is within the geographic coverage region of a second network, accessing the determined second network using the second protocol, receiving from the accessed second network fine location content, and generating an integrated location content item by combining the coarse location content with the fine location content.
    Type: Application
    Filed: September 29, 2009
    Publication date: March 31, 2011
    Applicant: QUALCOMM Incorporated
    Inventors: MANOJ M. DESHPANDE, Michael M. Fan, Roger W. Martin
  • Publication number: 20110078369
    Abstract: Systems and methods for using a page table in an information handling system including a semiconductor storage device are disclosed. A page table in an information handling system may be provided. The information handling system may include a memory, and the memory may include a semiconductor storage device. NonDRAM tag data may be stored in the page table. The nonDRAM tag data may indicate one or more attributes of one or more pages in the semiconductor storage device.
    Type: Application
    Filed: September 30, 2009
    Publication date: March 31, 2011
    Inventors: William F. Sauber, Richard Schuckle, Thomas Pratt
  • Publication number: 20110072211
    Abstract: A method for providing state inheritance across command lists in a multi-threaded processing environment. The method includes receiving an application program that includes a plurality of parallel threads; generating a command list for each thread of the plurality of parallel threads; causing a first command list associated with a first thread of the plurality of parallel threads to be executed by a processing unit; and causing a second command list associated with a second thread of the plurality of parallel threads to be executed by the processing unit, where the second command list inherits from the first command list state associated with the processing unit.
    Type: Application
    Filed: August 9, 2010
    Publication date: March 24, 2011
    Inventors: Jerome F. DULUK, JR., Jesse David Hall, Henry Packard Moreton, Patrick R. Brown
  • Publication number: 20110072170
    Abstract: A bi-endian multiprocessor system having multiple processing elements, each of which includes a processor core, a local memory and a memory flow controller. The memory flow controller transfers data between the local memory and data sources external to the processing element. If the processing element and the data source implement data representations having the same endian-ness, each multi-word line of data is stored in the local memory in the same word order as in the data source. If the processing element and the data source implement data representations having different endian-ness, the words of each multi-word line of data are transposed when data is transferred between local memory and the data source. The processing element may incorporate circuitry to add doublewords, wherein the circuitry can alternately carry bits from a first word to a second word or vice versa, depending upon whether the words in lines of data are transposed.
    Type: Application
    Filed: September 21, 2009
    Publication date: March 24, 2011
    Inventors: Brian King Flachs, Brad William Michael, Nicolas Maeding, Shigeaki Iwasa, Seiji Maeda, Hiroo Hayashi
  • Publication number: 20110072197
    Abstract: Described embodiments provide a method of transferring, by a media controller, data associated with a host data transfer between a host device and a storage media. A buffer layer module of the media controller segments the host data transfer into one or more data transfer segments. Each data transfer segment corresponds to at least a portion of the data. The buffer layer module allocates a number of physical buffers to a virtual circular buffer for buffering the one or more data transfer segments. The buffer layer module transfers, by the virtual circular buffer, each of the data transfer segments between the host device and the storage media through the allocated physical buffers.
    Type: Application
    Filed: March 25, 2010
    Publication date: March 24, 2011
    Inventors: Timothy Lund, Carl Forhan, Michael Hicken
  • Publication number: 20110072217
    Abstract: A plurality of mid-tier databases form a single, consistent cache grid for data in one or more backend data sources, such as a database system. The mid-tier databases may be standard relational databases. Cache agents at each mid-tier database swap in data from the backend database as needed. Consistency in the cache grid is maintained by ownership locks. Cache agents prevent database operations that will modify cached data in a mid-tier database unless and until ownership of the cached data can be acquired for the mid-tier database. Cache groups define what backend data may be cached, as well as a general structure in which the backend data is to be cached. Metadata for cache groups is shared to ensure that data is cached in the same form throughout the entire grid. Ownership of cached data can then be tracked through a mapping of cached instances of data to particular mid-tier databases.
    Type: Application
    Filed: September 18, 2009
    Publication date: March 24, 2011
    Inventors: Chi Hoang, Tirthankar Lahiri, Marie-Anne Neimat, Chih-Ping Wang, John Miller, Dilys Thomas, Nagender Bandi, Susan Cheng
  • Publication number: 20110072187
    Abstract: Described embodiments provide a media controller that determines the size of a cache of data being transferred between a host device and one or more sectors of a storage device. The one or more sectors are segmented into a plurality of chunks, and each chunk corresponds to at least one sector. The contents of the cache are managed in a cache hash table. At startup of the media controller, a buffer layer module of the media controller initializes the cache in a buffer of the media controller. During operation of the media controller, the buffer layer module determines a number of chunks allocated to the cache. Based on the number of chunks allocated to the cache, the buffer layer module updates the size of the cache hash table.
    Type: Application
    Filed: March 12, 2010
    Publication date: March 24, 2011
    Inventors: Carl Forhan, Timothy Lund
  • Publication number: 20110066808
    Abstract: An apparatus, system, and method are disclosed for caching data on a solid-state storage device. The solid-state storage device maintains metadata pertaining to cache operations performed on the solid-state storage device, as well as storage operations of the solid-state storage device. The metadata indicates what data in the cache is valid, as well as information about what data in the nonvolatile cache has been stored in a backing store. A backup engine works through units in the nonvolatile cache device and backs up the valid data to the backing store. During grooming operations, the groomer determines whether the data is valid and whether the data is discardable. Data that is both valid and discardable may be removed during the grooming operation. The groomer may also determine whether the data is cold in determining whether to remove the data from the cache device. The cache device may present to clients a logical space that is the same size as the backing store.
    Type: Application
    Filed: September 8, 2010
    Publication date: March 17, 2011
    Applicant: FUSION-IO, INC.
    Inventors: David Flynn, John Strasser, Jonathan Thatcher, David Atkisson, Michael Zappe, Joshua Aune, Kevin B. Vigor
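The grooming decision in the abstract above, removing data only when it is valid, already backed up (discardable), and cold, can be sketched as a simple filter. The field and function names are illustrative, not the patent's.

```python
from dataclasses import dataclass

# Hedged sketch of the groomer's eviction test described in the abstract
# above; field names are assumptions, not from the patent.
@dataclass
class CacheUnit:
    valid: bool        # metadata marks the data as current
    backed_up: bool    # a copy already exists in the backing store
    cold: bool         # not recently accessed

def groom(units):
    """Return the units safe to remove: valid, discardable, and cold."""
    return [u for u in units if u.valid and u.backed_up and u.cold]
```

Valid-but-not-backed-up data is never evicted, which is what lets the backup engine and the groomer run independently without losing dirty data.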
  • Publication number: 20110066608
    Abstract: An architectural stack includes a rules proxy. The rules proxy may be between a web server and a HTTP proxy cache, and may be an HTTP proxy application. The rules proxy receives a user request to access a web page from the web server, captures user data (e.g., referrer data and/or session data) from the user request, applies a rule to the user data to assign the user to a user bucket, generates a web page with content using the assigned user bucket, and delivers the user-specific, generated web page to the user.
    Type: Application
    Filed: September 14, 2009
    Publication date: March 17, 2011
    Applicant: CBS INTERACTIVE, INC.
    Inventors: William W. Graham, JR., Peter J. Offringa
  • Publication number: 20110066812
    Abstract: The present invention is directed to a transfer request block (TRB) cache system and method. A cache is used to store plural TRBs, and a mapping table is utilized to store corresponding TRB addresses in a system memory. A cache controller pre-fetches the TRBs and stores them in the cache according to the content of the mapping table.
    Type: Application
    Filed: July 1, 2010
    Publication date: March 17, 2011
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: SHUANG-SHUANG QIN, JIIN LAI, ZHI-QIANG HUI, XIU-LI GUO
  • Publication number: 20110066807
    Abstract: Protecting computers against cache poisoning, including a cache-entity table configured to maintain a plurality of associations between a plurality of data caches and a plurality of entities, where each of the caches is associated with a different one of the entities, and a cache manager configured to receive data that is associated with any of the entities and store the received data in any of the caches that the cache-entity table indicates is associated with the entity, and receive a data request that is associated with any of the entities and retrieve the requested data from any of the caches that the cache-entity table indicates is associated with the requesting entity, where any of the cache-entity table and cache manager are implemented in either of computer hardware and computer software embodied in a computer-readable medium.
    Type: Application
    Filed: September 14, 2009
    Publication date: March 17, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Roee Hay, Adi Sharabani
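The isolation property in the abstract above, each entity reads only from the cache associated with it, can be sketched with one private cache per entity. Class and method names are illustrative assumptions.

```python
# Illustrative sketch (names assumed, not from the patent): a cache-entity
# table mapping each entity to its own private cache, so data stored by
# one entity can never satisfy another entity's request.
class EntityCacheManager:
    def __init__(self):
        self.cache_entity_table = {}   # entity -> its private cache dict

    def store(self, entity, key, value):
        """Store received data only in the requesting entity's cache."""
        self.cache_entity_table.setdefault(entity, {})[key] = value

    def retrieve(self, entity, key):
        """Consult only the cache associated with the requesting entity."""
        return self.cache_entity_table.get(entity, {}).get(key)
```

A poisoned entry planted by one entity is therefore invisible to every other entity, which is the defense the abstract describes.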
  • Publication number: 20110066785
    Abstract: A memory management system and method include and use a cache buffer (such as a table look-aside buffer, TLB), a memory mapping table, a scratchpad cache, and a memory controller. The cache buffer is configured to store a plurality of data structures. The memory mapping table is configured to store a plurality of addresses of the data structures. The scratchpad cache is configured to store the base address of the data structures. The memory controller is configured to control reading and writing in the cache buffer and the scratchpad cache. The components are operable together under control of the memory controller to facilitate effective searching of the data structures in the memory management system.
    Type: Application
    Filed: January 27, 2010
    Publication date: March 17, 2011
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: JIAN LI, JIIN LAI, SHAN-NA PANG, ZHI-QIANG HUI, DI DAI
  • Publication number: 20110066795
    Abstract: The present invention is directed to a stream context cache system, which primarily includes a cache and a mapping table. The cache stores plural stream contexts, and the mapping table stores associated stream context addresses in a system memory. Consequently, a host may, according to the content of the mapping table, directly retrieve the stream context that is pre-fetched and stored in the cache, rather than read the stream context from the system memory.
    Type: Application
    Filed: July 1, 2010
    Publication date: March 17, 2011
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: XIU-LI GUO, JIIN LAI, ZHI-QIANG HUI, SHUANG-SHUANG QIN
  • Publication number: 20110066830
    Abstract: Techniques for pre-filling a cache associated with a second core prior to migration of a thread from a first core to the second core are generally disclosed. The present disclosure contemplates that some computer systems may include a plurality of processor cores, and that some cores may have hardware capabilities different from other cores. In order to assign threads to appropriate cores, thread/core mapping may be utilized and, in some cases, a thread may be reassigned from one core to another core. In a probabilistic anticipation that a thread may be migrated from a first core to a second core, a cache associated with the second core may be pre-filled (e.g., may become filled with some data before the thread is rescheduled on the second core). Such a cache may be a local cache to the second core and/or an associated buffer cache, for example.
    Type: Application
    Filed: September 11, 2009
    Publication date: March 17, 2011
    Inventors: Andrew Wolfe, Thomas M. Conte
  • Publication number: 20110060879
    Abstract: A processing system is provided. The processing system includes a first processing unit coupled to a first memory and a second processing unit coupled to a second memory. The second memory comprises a coherent memory and a private memory that is private to the second processing unit.
    Type: Application
    Filed: September 9, 2010
    Publication date: March 10, 2011
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Philip J. ROGERS, Warren Fritz Kruger, Mark Hummel, Eric Demers
  • Publication number: 20110057939
    Abstract: Disclosed herein are systems, apparatuses, and methods for enabling efficient reads to a local memory of a processing unit. In an embodiment, a processing unit includes an interface and a buffer. The interface is configured to (i) send a request for a portion of data in a region of a local memory of an other processing unit and (ii) receive, responsive to the request, all the data from the region. The buffer is configured to store the data from the region of the local memory of the other processing unit.
    Type: Application
    Filed: March 8, 2010
    Publication date: March 10, 2011
    Applicants: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: David I.J. GLEN, Philip J. Rogers, Gordon F. Caruk, Gongxian Jeffrey Cheng, Mark Hummel, Stephen Patrick Thompson, Anthony Asaro
  • Patent number: 7904643
    Abstract: A content addressable memory (CAM) device, method, and method of generating entries for range matching are disclosed. A CAM device (800) according to one embodiment can include a pre-encoder (806) that encodes range bit values W into additional bits E. Additional bits E can indicate compression of range rules according to particular bit pairs. A CAM array (802) can include entries that store compressed range code values (RANGE) with corresponding additional bit values (ENC). Alternate embodiments can include pre-encoders that encode portions of range values (K1 to Ki) in a “one-hot” fashion. Corresponding CAM entries can include encoded value having sections that each represent increasingly finer divisions of a range space.
    Type: Grant
    Filed: March 26, 2010
    Date of Patent: March 8, 2011
    Assignee: Netlogic Microsystems, Inc.
    Inventor: Srinivasan Venkatachary
  • Publication number: 20110055489
    Abstract: Filters and methods for managing presence counter saturation are disclosed. The filters can be coupled to a collection of items and maintain information for determining a potential presence of an identified item in the collection of items. The filter includes a filter controller and one or more mapping functions. Each mapping function has a plurality of counters associated with the respective mapping function. When a membership status of an item in the collection of items changes, the filter receives a membership change notification including an identifier identifying the item. Each mapping function processes the identifier to identify a particular counter associated with the respective mapping function. If a particular counter has reached a predetermined value, a request including a reference to the particular counter is sent to the collection of items. The filter receives a response to the request and modifies the particular counter as a result of the response.
    Type: Application
    Filed: September 1, 2009
    Publication date: March 3, 2011
    Applicant: QUALCOMM INCORPORATED
    Inventors: Vimal K. Reddy, Mike W. Morrow
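The saturation handling in the abstract above, where a counter that reaches a predetermined value triggers a request back to the collection, can be sketched as below. The two modulo-based mapping functions and the `recount` callback are simplifying assumptions standing in for the patent's mapping functions and request/response exchange.

```python
# Simplified sketch of saturating presence counters (details assumed,
# not from the patent). Each mapping function owns an array of counters;
# `recount` stands in for the request sent to the collection of items.
class PresenceFilter:
    def __init__(self, size, max_count, recount):
        self.counters = [[0] * size, [0] * size]   # one array per mapping fn
        self.max_count = max_count
        self.recount = recount                     # queries the collection

    def _slots(self, item_id):
        # Two toy mapping functions over an integer identifier.
        return [(0, item_id % len(self.counters[0])),
                (1, (item_id * 7 + 3) % len(self.counters[1]))]

    def item_added(self, item_id):
        for fn, slot in self._slots(item_id):
            if self.counters[fn][slot] >= self.max_count:
                # Saturated: ask the collection for the true count.
                self.counters[fn][slot] = self.recount(fn, slot)
            else:
                self.counters[fn][slot] += 1

    def maybe_present(self, item_id):
        """False means definitely absent; True means potentially present."""
        return all(self.counters[fn][slot] > 0
                   for fn, slot in self._slots(item_id))
```

Refreshing a saturated counter from the collection keeps the filter's "potentially present" answers usable even with small counters.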
  • Publication number: 20110055484
    Abstract: Mechanisms are provided for tracking dependencies of threads in a multi-threaded computer program execution. The mechanisms detect a dependency of a first thread's execution on results of a second thread's execution in an execution flow of the multi-threaded computer program. The mechanisms further store, in a hardware thread dependency vector storage associated with the first thread's execution, an identifier of the dependency by setting at least one bit in the hardware thread dependency vector storage corresponding to the second thread. Moreover, the mechanisms schedule tasks performed by the multi-threaded computer program based on the hardware thread dependency vector storage to minimize squashing of threads.
    Type: Application
    Filed: September 3, 2009
    Publication date: March 3, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alexandre E. Eichenberger, John K.P. O'Brien, Kathryn M. O'Brien, Lakshminarayanan Renganarayana, Xiaotong Zhuang
  • Publication number: 20110055456
    Abstract: A method for giving a read command to a flash memory chip to read data to be accessed by a host system is provided. The method includes receiving a host read command; determining whether the received host read command follows a last host read command; if yes, giving a cache read command to read data from the flash memory chip; and if no, giving a general read command and the cache read command to read data from the flash memory chip. Accordingly, the method can effectively reduce time needed for executing the host read commands by using the cache read command to combine the host read commands which access continuous physical addresses and pre-read data stored in a next physical address.
    Type: Application
    Filed: September 28, 2009
    Publication date: March 3, 2011
    Inventor: Chih-Kang Yeh
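The command-selection logic in this abstract can be sketched as a simple continuation check: a host read that picks up where the previous one ended gets only a cache read, while a new stream needs a general read first. The page size and the `log` of issued commands are illustrative assumptions, not the patent's interface.

```python
PAGE = 2048  # hypothetical flash page size in bytes

class FlashReader:
    """Sketch: a general read opens a new stream; cache reads serve host reads
    that continue from where the last one ended (pre-read already done)."""
    def __init__(self):
        self.next_addr = None
        self.log = []  # commands issued, recorded for illustration

    def host_read(self, addr):
        if addr == self.next_addr:
            # Continuation: the chip has already pre-read this page.
            self.log.append(("cache_read", addr))
        else:
            # New stream: load the page, then read it out while pre-reading
            # the next page via a cache read command.
            self.log.append(("general_read", addr))
            self.log.append(("cache_read", addr))
        self.next_addr = addr + PAGE
```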
  • Publication number: 20110055481
    Abstract: An apparatus for controlling a cache memory includes: a data receiving unit to receive a sensor ID and data detected by the sensor; an attribute information acquiring unit to acquire attribute information corresponding to the sensor ID from an attribute information memory, the attribute information memory storing the attribute information of the sensor mapped to the sensor ID; a sensor information memory to store information on a storage period, the sensor information memory including a cache memory storing the attribute information; and a cache memory control unit to acquire the attribute information from the attribute information acquiring unit when the attribute information is not stored in the cache memory, and to store the acquired attribute information corresponding to the sensor ID in the cache memory during the storage period.
    Type: Application
    Filed: August 26, 2010
    Publication date: March 3, 2011
    Applicant: FUJITSU LIMITED
    Inventor: Masahiko MURAKAMI
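The control flow above is essentially a time-to-live cache in front of a slower attribute store. A minimal sketch, assuming a dictionary-backed store and an explicit `now` parameter for testability (both assumptions, not the patent's interface):

```python
import time

class SensorAttributeCache:
    """Sketch: cached attribute entries live for a storage period, after which
    the attribute information memory is consulted again."""
    def __init__(self, attribute_store, storage_period_s):
        self.store = attribute_store     # sensor_id -> attributes (backing memory)
        self.period = storage_period_s
        self.cache = {}                  # sensor_id -> (attributes, expiry time)
        self.misses = 0

    def get_attributes(self, sensor_id, now=None):
        now = time.monotonic() if now is None else now
        hit = self.cache.get(sensor_id)
        if hit is not None and now < hit[1]:
            return hit[0]                # served from the cache memory
        self.misses += 1
        attrs = self.store[sensor_id]    # attribute information acquiring unit
        self.cache[sensor_id] = (attrs, now + self.period)
        return attrs
```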
  • Publication number: 20110055827
    Abstract: A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement.
    Type: Application
    Filed: August 25, 2009
    Publication date: March 3, 2011
    Applicant: International Business Machines Corporation
    Inventors: Jiang Lin, Lixin Zhang
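The miss-path behavior described, restricting victim selection to a VM's partition, can be sketched for the vertical (way) case as follows. The table layout and LRU representation are illustrative assumptions.

```python
def select_victim(partition_table, virtual_id, lru_order):
    """Sketch: restrict the replacement victim to the ways a VM owns.

    partition_table maps virtual_id -> set of way indices (vertical partition);
    lru_order lists way indices in a set, least recently used first."""
    allowed = partition_table[virtual_id]
    for way in lru_order:
        if way in allowed:
            return way  # evict the LRU line among the VM's own ways
    raise RuntimeError("virtual ID owns no ways in this set")
```

On a miss, the shared cache looks up the requester's virtual ID in the partition table and evicts only within the ways that ID owns, so one VM's misses cannot displace another VM's lines.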
  • Publication number: 20110055457
    Abstract: A method for giving program commands to a flash memory chip is provided; the method is suitable for writing data from a host system into the flash memory chip. In the present method, a plurality of host write commands and data corresponding to the host write commands are received from the host system by using a native command queuing (NCQ) protocol, and cache program commands are given to the flash memory chip to write the data into the flash memory chip. Accordingly, the time for executing the host write commands is effectively shortened by writing the data through the cache program commands and the NCQ protocol.
    Type: Application
    Filed: October 15, 2009
    Publication date: March 3, 2011
    Applicant: PHISON ELECTRONICS CORP.
    Inventor: Chih-Kang Yeh
  • Publication number: 20110055526
    Abstract: There is provided a method and apparatus for accessing a memory according to a processor instruction. The apparatus includes: a stack offset extractor extracting an offset value from a stack pointer offset indicating a local variable in the processor instruction; a local stack storage including a plurality of items, each of which is formed of an activation bit indicating whether each item is activated, an offset storing an offset value of a stack pointer, and an element storing a local variable value of the stack pointer; an offset comparator comparing the extracted offset value with an offset value of each item and determining whether an item corresponding to the extracted offset value is present in the local stack storage; and a stack access controller controlling a processor to access the local stack storage or a cache memory according to a determining result of the offset comparator.
    Type: Application
    Filed: July 8, 2010
    Publication date: March 3, 2011
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young Su Kwon, Nak Woong Eum, Seong Mo Park
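The apparatus above keeps hot stack-pointer-relative locals in a small dedicated store and only falls through to the cache on a miss. A software sketch, with the activation bit implied by dictionary membership (an assumption for brevity):

```python
class LocalStackStorage:
    """Sketch: small store for stack-pointer-offset local variables,
    falling back to the data cache on an offset-comparator miss."""
    def __init__(self, cache):
        self.items = {}      # offset -> value (presence implies activation bit)
        self.cache = cache   # fallback memory, offset -> value

    def load(self, sp_offset):
        if sp_offset in self.items:        # offset comparator: hit
            return self.items[sp_offset]
        value = self.cache[sp_offset]      # stack access controller: go to cache
        self.items[sp_offset] = value      # fill the local stack storage
        return value

    def store(self, sp_offset, value):
        self.items[sp_offset] = value
```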
  • Publication number: 20110055530
    Abstract: A microprocessor includes a cache memory and a grabline instruction. The grabline instruction specifies a memory address that implicates a cache line of the memory. The grabline instruction instructs the microprocessor to initiate a zero-beat read-invalidate transaction on the bus to obtain ownership of the cache line. The microprocessor foregoes initiating the transaction on the bus when executing the grabline instruction if the microprocessor determines that a store to the cache line would cause an exception.
    Type: Application
    Filed: May 17, 2010
    Publication date: March 3, 2011
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: G. Glenn Henry, Colin Eddy, Rodney E. Hooker
  • Publication number: 20110055479
    Abstract: A thread (or other resource consumer) is compensated for contention for system resources in a computer system having at least one processor core, a last level cache (LLC), and a main memory. In one embodiment, at each descheduling event of the thread following an execution interval, an effective CPU time is determined. The execution interval is a period of time during which the thread is being executed on the central processing unit (CPU) between scheduling events. The effective CPU time is a portion of the execution interval that excludes delays caused by contention for microarchitectural resources, such as time spent repopulating lines from the LLC that were evicted by other threads. The thread may be compensated for microarchitectural contention by increasing its scheduling priority based on the effective CPU time.
    Type: Application
    Filed: August 28, 2009
    Publication date: March 3, 2011
    Applicant: VMware, Inc.
    Inventors: Richard West, Puneet Zaroo, Carl A. Waldspurger, Xiao Zhang
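The compensation idea reduces to a small calculation: effective CPU time is the execution interval minus contention delays, and the shortfall drives a priority boost. The boost formula below is illustrative only; the patent does not specify this particular form.

```python
def effective_cpu_time(interval_s, contention_delays_s):
    """Effective CPU time = execution interval minus time lost to
    microarchitectural contention (e.g. repopulating LLC lines evicted
    by other threads)."""
    return interval_s - sum(contention_delays_s)

def compensated_priority(base_priority, interval_s, contention_delays_s,
                         weight=1.0):
    """Sketch: raise scheduling priority in proportion to the fraction of the
    interval lost to contention (formula is an assumption, not the patent's)."""
    lost = interval_s - effective_cpu_time(interval_s, contention_delays_s)
    return base_priority + weight * (lost / interval_s)
```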
  • Publication number: 20110055483
    Abstract: A computer implemented method for use by a transaction program for managing memory access to a shared memory location for transaction data of a first thread, the shared memory location being accessible by the first thread and a second thread. A string of instructions to complete a transaction of the first thread are executed, beginning with one instruction of the string of instructions. It is determined whether the one instruction is part of an active atomic instruction group (AIG) of instructions associated with the transaction of the first thread. A cache structure and a transaction table which together provide for entries in an active mode for the AIG are located if the one instruction is part of an active AIG. The next instruction is executed under a normal execution mode in response to determining that the one instruction is not part of an active AIG.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Thomas J. Heller, JR.
  • Publication number: 20110055202
    Abstract: A predictive model is employed to schedule preemptive queries based on frequently utilized query paths in hierarchically structured data. The predictive model, which determines the queries likely to be executed by a user or organization, is generated and dynamically modified based on user or organization profiles, usage history, and similar factors. Queries are then executed according to a predefined schedule based on the predictive model, and the results are cached. Cached results are provided to a requesting user more rapidly, saving network and computing resources.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventor: Scott M. Heimendinger
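A bare-bones version of this scheme pre-executes the most frequent queries from usage history and serves later requests from cache. This sketch substitutes a plain frequency ranking for the patent's predictive model; the class and method names are hypothetical.

```python
from collections import Counter

class PredictiveQueryCache:
    """Sketch: pre-executes frequent queries on a schedule and caches results."""
    def __init__(self, execute, top_n=2):
        self.execute = execute     # query -> result (the expensive call)
        self.history = Counter()   # stand-in for the predictive model's input
        self.cache = {}
        self.top_n = top_n

    def record_usage(self, query):
        self.history[query] += 1

    def run_scheduled(self):
        # "Predictive model" here is simple frequency ranking.
        for query, _ in self.history.most_common(self.top_n):
            self.cache[query] = self.execute(query)

    def query(self, q):
        if q in self.cache:
            return self.cache[q]   # served from cache, no back-end work
        result = self.execute(q)
        self.record_usage(q)
        return result
```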
  • Publication number: 20110055516
    Abstract: An innovative realization of computer hardware, software and firmware comprising a multiprocessor system wherein at least one processor can be configured to have a fixed instruction set and one or more processors can be statically or dynamically configured to implement a plurality of processor states in a plurality of technologies. The processor states may be instructions sets for the processors. The technologies may include programmable logic arrays.
    Type: Application
    Filed: August 20, 2010
    Publication date: March 3, 2011
    Applicant: FTL Systems Technology Corporation
    Inventor: John C. Willis
  • Publication number: 20110047332
    Abstract: A storage system includes a storage device that stores data, a cache memory that caches the data, an information storage unit that stores data configuration information indicating a configuration of the data and state information indicating a cache state of the data in the cache memory, a candidate data selection unit, a first determination unit, and a data-to-be-written selection unit. The candidate data selection unit selects, according to the state information, candidate data from the data cached in the cache memory. The first determination unit determines, according to the data configuration information, whether data relating to the candidate data is cached in the cache memory. The data-to-be-written selection unit selects, according to the determination made by the first determination unit, data to be written into the storage device from the data cached in the cache memory.
    Type: Application
    Filed: August 23, 2010
    Publication date: February 24, 2011
    Applicant: FUJITSU LIMITED
    Inventor: Tooru KOBAYASHI
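The two-stage selection above, picking a candidate by cache state and then widening the write to related data that is also cached, can be sketched as one function. The state fields and the "related data" mapping are illustrative assumptions.

```python
def select_writeback(cache_state, data_config):
    """Sketch: pick candidate data by cache state (dirty, oldest first), then
    include related data that the configuration groups with it and that is
    also present in the cache."""
    # cache_state: data_id -> {"dirty": bool, "age": int}
    # data_config: data_id -> set of related data_ids (e.g. same stripe)
    dirty = [d for d, s in cache_state.items() if s["dirty"]]
    if not dirty:
        return []
    candidate = max(dirty, key=lambda d: cache_state[d]["age"])
    related = data_config.get(candidate, set())
    cached_related = [d for d in related if d in cache_state]
    return [candidate] + sorted(cached_related)
```

Writing the candidate together with its cached relatives lets one destage cover a whole configuration group instead of issuing scattered writes.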
  • Publication number: 20110047362
    Abstract: Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.
    Type: Application
    Filed: August 19, 2009
    Publication date: February 24, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alexandre E. Eichenberger, Alan Gara, Kathryn M. O'Brien, Martin Ohmacht, Xiaotong Zhuang
  • Publication number: 20110047352
    Abstract: A data processing system includes at least a first through third processing nodes coupled by an interconnect fabric. The first processing node includes a master, a plurality of snoopers capable of participating in interconnect operations, and a node interface that receives a request of the master and transmits the request of the master to the second processing unit with a nodal scope of transmission limited to the second processing node. The second processing node includes a node interface having a directory. The node interface of the second processing node permits the request to proceed with the nodal scope of transmission if the directory does not indicate that a target memory block of the request is cached other than in the second processing node and prevents the request from succeeding if the directory indicates that the target memory block of the request is cached other than in the second processing node.
    Type: Application
    Filed: August 21, 2009
    Publication date: February 24, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul A. Ganfield, Guy L. Guthrie, David J. Krolak, Michael S. Siegel, William J. Starke, Jeffrey A. Stuecheli, Derek E. Williams
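The node interface's directory check reduces to a set-membership test: a nodally-scoped request proceeds only if the target block is not cached outside the home node. A minimal sketch, with the directory modeled as a mapping from block to the set of nodes caching it (an assumption for illustration):

```python
def handle_scoped_request(directory, target_block, home_node):
    """Sketch: permit a nodal-scope request if the directory shows the target
    memory block is cached nowhere outside the home node; otherwise the
    request must be retried with a broader scope."""
    cached_elsewhere = directory.get(target_block, set()) - {home_node}
    return "proceed" if not cached_elsewhere else "retry_broader_scope"
```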
  • Publication number: 20110047314
    Abstract: A microprocessor breakpoint checks a load/store operation specifying a load/store virtual address of data whose first and second pieces are within first and second cache lines. A queue of entries each include first storage for an address associated with the operation and second storage for an indicator indicating whether there is a match between a page address portion of the virtual address and a page address portion of a breakpoint address. During a first pass through a load/store unit pipeline, the unit performs a first piece breakpoint check using the virtual address, populates the second storage indicator, and populates the first storage with a physical address translated from the virtual address. During the second pass, the unit performs a second piece breakpoint check using the indicator received from the second storage and an incremented version of a page offset portion of the load/store physical address received from the first storage.
    Type: Application
    Filed: October 28, 2009
    Publication date: February 24, 2011
    Applicant: VIA Technologies, Inc.
    Inventors: Bryan Wayne Pogor, Colin Eddy
  • Publication number: 20110047411
    Abstract: A data processing apparatus and method are provided for handling errors. The data processing apparatus comprises processing circuitry for performing data processing operations, a cache storage having a plurality of cache records for storing data values for access by the processing circuitry when performing the data processing operations, and a replicated address storage having a plurality of entries, each entry having a predetermined associated cache record within the cache storage and being arranged to replicate the address indication stored in the associated cache record. On detecting a cache record error when accessing a cache record of the cache storage, a record of a cache location avoid storage is allocated to store a cache record identifier for the accessed cache record.
    Type: Application
    Filed: August 20, 2009
    Publication date: February 24, 2011
    Applicant: ARM Limited
    Inventor: Damien Rene Gilbert Gille
  • Publication number: 20110047131
    Abstract: Before a portable device plays back media files, or during playback of one media file, a rule for generating a playlist is first selected, either through the portable device's user interface or by the device's default setting. A playlist including multiple media files is then generated according to an anchor file and the selected rule. A rule for how to play the media files in the playlist is selected (or falls back to the system's default setting) to decide the playing sequence of the media files in the playlist, starting from the anchor file.
    Type: Application
    Filed: September 15, 2009
    Publication date: February 24, 2011
    Inventor: Shih-Chia Huang
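The generate-then-rotate idea, build an ordered list under a rule, then start playback at the anchor file, can be sketched briefly. The rule names ("alphabetical", "same_folder") are illustrative, not the patent's.

```python
def generate_playlist(files, anchor, rule="alphabetical"):
    """Sketch: build a playlist from an anchor file under a selected rule,
    starting the playing sequence at the anchor."""
    if rule == "alphabetical":
        ordered = sorted(files)
    elif rule == "same_folder":
        folder = anchor.rsplit("/", 1)[0]
        ordered = sorted(f for f in files if f.startswith(folder + "/"))
    else:
        ordered = list(files)  # device default: keep the given order
    # Rotate so playback starts from the anchor file.
    i = ordered.index(anchor)
    return ordered[i:] + ordered[:i]
```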