Addressing Of Memory Level In Which Access To Desired Data Or Data Block Requires Associative Addressing Means, E.g., Cache, Etc. (epo) Patents (Class 711/E12.017)

  • Publication number: 20110208918
    Abstract: Methods and apparatus relating to hardware move elimination and/or next-page prefetching are described. In some embodiments, logic may provide hardware move elimination based on stored data. In an embodiment, a next-page prefetcher is disclosed. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 24, 2010
    Publication date: August 25, 2011
    Inventors: Shlomo Raikin, David J. Sager, Zeev Sperber, Evgeni Krimer, Ori Lempel, Stanislav Shwartsman, Adi Yoaz, Omer Golz
  • Publication number: 20110208919
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for quantifying a spatial distribution of accesses to storage systems and for determining spatial locality of references to storage addresses in the storage systems, are described. In one aspect, a method includes determining a measure of spatial distribution of accesses to a data storage system based on multiple distinct groups of accesses to the data storage system, and adjusting a caching policy used for the data storage system based on the determined measure of spatial distribution.
    Type: Application
    Filed: February 23, 2011
    Publication date: August 25, 2011
    Inventor: Arvind Pruthi
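
The abstract above (20110208919) describes measuring the spatial distribution of storage accesses and adjusting the caching policy accordingly. A minimal Python sketch of that idea follows; the locality metric, window size, threshold, and policy names are all illustrative, not taken from the patent:

```python
def spatial_locality(addresses, window=8):
    """Fraction of consecutive accesses that land within `window`
    blocks of the previous access -- a crude stand-in for a measure
    of spatial distribution."""
    if len(addresses) < 2:
        return 0.0
    near = sum(1 for a, b in zip(addresses, addresses[1:])
               if abs(b - a) <= window)
    return near / (len(addresses) - 1)

def choose_policy(addresses, threshold=0.5):
    # Tightly clustered accesses favour aggressive line prefetching;
    # scattered accesses favour plain demand caching.
    return "prefetch" if spatial_locality(addresses) >= threshold else "demand-only"
```

A sequential scan would score near 1.0 and select the prefetch-friendly policy, while widely scattered accesses would fall back to demand-only caching.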
  • Publication number: 20110208920
    Abstract: A configuration of cached information stored within a cache is determined. One or more character omission rules are determined by: identifying the one or more optimizable characters based on the configuration, where the one or more optimizable characters are characters in the stored cached information that do not have an effect on an interpretation of the stored cached information by a requester computer; and determining, based on the configuration, one or more conditions under which omission of the one or more optimizable characters from the stored cached information produces a valid result in view of the configuration. One or more character omission rules are applied to the stored cached information by removing from the stored cached information the one or more optimizable characters that meet the one or more conditions.
    Type: Application
    Filed: May 5, 2011
    Publication date: August 25, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michel Betancourt, Bijal D. Patel, Dipak M. Patel, Joseph Spano
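
Entry 20110208920 describes removing "optimizable" characters from cached content when doing so cannot change the requester's interpretation. A minimal sketch of one such rule, assuming a JSON payload where insignificant whitespace is safe to drop (the content types and rule are illustrative):

```python
import json

def apply_omission_rule(cached_text, content_type):
    """For a JSON payload, insignificant whitespace has no effect on
    the requester's interpretation, so it can be omitted from the
    cached copy; other configurations are left untouched."""
    if content_type == "application/json":
        return json.dumps(json.loads(cached_text), separators=(",", ":"))
    # No safe omission rule known for this configuration.
    return cached_text
```

The round-trip through the parser guarantees the omission produces a valid result, mirroring the "conditions under which omission ... produces a valid result" in the abstract.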
  • Publication number: 20110208914
    Abstract: There are provided a storage system, a storage control unit, and a method of operating the same. A storage system comprises a permanent storage subsystem comprising a first cache memory and a non-volatile storage medium, and a storage control unit operatively coupled to said subsystem and to a second cache memory operable to cache “dirty” data pending to be written to the permanent storage subsystem and to enable, responsive to at least one command by the storage control unit, destaging said “dirty” data or part thereof to the permanent storage subsystem.
    Type: Application
    Filed: February 22, 2011
    Publication date: August 25, 2011
    Applicant: INFINIDAT LTD.
    Inventors: Alex WINOKUR, Haim KOPYLOVITZ
  • Patent number: 8006039
    Abstract: A method for merging data including receiving a request from an input/output device to merge data, wherein a merge of the data includes a manipulation of the data, determining whether the data exists in a local cache memory that is in local communication with the input/output device, fetching the data to the local cache memory from a remote cache memory or a main memory if the data does not exist in the local cache memory, merging the data according to the request to obtain merged data, and storing the merged data in the local cache, wherein the merging of the data is performed without using a memory controller within a control flow or a data flow of the merging of the data. A corresponding system and computer program product are also provided.
    Type: Grant
    Filed: February 25, 2008
    Date of Patent: August 23, 2011
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. Dunn, Robert J. Sonnelitter, III, Gary E. Strait
  • Publication number: 20110202729
    Abstract: A disjoint instruction for accessing operands in memory, while executing in a processor of a plurality of processors, interrogates a state indicator settable by other processors to determine whether the disjoint instruction accessed the operands without an intervening store operation from another processor to the operands. A condition code is set based on the state indicator.
    Type: Application
    Filed: February 18, 2010
    Publication date: August 18, 2011
    Applicant: International Business Machines Corporation
    Inventors: Theodore J. Bohizic, Reid T. Copeland, Marcel Mitran
  • Publication number: 20110202724
    Abstract: Embodiments allow a smaller, simpler hardware implementation of an input/output memory management unit (IOMMU) having improved translation behavior that is independent of page table structures and formats. Embodiments also provide device-independent structures and methods of implementation, allowing greater generality of software (fewer specific software versions, in turn reducing development costs).
    Type: Application
    Filed: February 17, 2010
    Publication date: August 18, 2011
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Andrew G. KEGEL, Mark Hummel, Erich Boleyn
  • Publication number: 20110202831
    Abstract: Generating a report involves phases such as (a) database queries or other raw data accesses, (b) calculations such as data grouping, sorting, filtering, aggregation, (c) data presentation layout, (d) data formatting, and (e) rendering. When generating a modified version of a report, reusable interim results for phases (b), (c), and (d) are identified and retrieved from a cache instead of being recalculated. Newly calculated interim results are also cached for possible future use.
    Type: Application
    Filed: February 15, 2010
    Publication date: August 18, 2011
    Applicant: Microsoft Corporation
    Inventors: Robert Bruckner, Christopher Hays, Mason J. Warner, Nicoleta Cristache, Ian R. Roof
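
Entry 20110202831 describes caching reusable interim results of report-generation phases so a modified report need not recompute them. A small Python sketch of that memoization pattern, with hypothetical phase names and a tuple of rows standing in for real report data:

```python
class ReportCache:
    """Cache of interim results keyed by (phase, inputs); a modified
    report reuses any phase whose inputs are unchanged and caches
    newly calculated results for possible future use."""
    def __init__(self):
        self._store = {}
        self.hits = 0

    def interim(self, phase, inputs, compute):
        key = (phase, inputs)
        if key in self._store:
            self.hits += 1            # retrieved instead of recalculated
        else:
            self._store[key] = compute()
        return self._store[key]

cache = ReportCache()
rows = (3, 1, 2)
first = cache.interim("sort", rows, lambda: tuple(sorted(rows)))
second = cache.interim("sort", rows, lambda: tuple(sorted(rows)))
```

The second call with identical inputs is served from the cache, which is the behaviour the abstract describes for phases (b) through (d).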
  • Publication number: 20110202708
    Abstract: An I/O enclosure module is provided with one or more I/O enclosures having a plurality of slots for receiving electronic devices. A host adapter is connected to a first slot of the I/O enclosure module and is configured to connect a host to the I/O enclosure. A device adapter is connected to a second slot of the I/O enclosure module and is configured to connect a storage device to the I/O enclosure module. A flash cache is connected to a third slot of the I/O enclosure module and includes a flash-based memory configured to cache data associated with data requests handled through the I/O enclosure module. A primary processor complex manages data requests handled through the I/O enclosure module by communicating with the host adapter, device adapter, and flash cache.
    Type: Application
    Filed: February 17, 2010
    Publication date: August 18, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Michael T. Benhase, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Yu-Cheng Hsu, Xiaoyu Hu, Joseph S. Hyde, II, Roman A. Pletka, Alfred E. Sanchez
  • Publication number: 20110202725
    Abstract: A method and processor supporting architected instructions for tracking and determining set membership, such as by implementing Bloom filters. The apparatus includes storage arrays (e.g., registers) and an execution core configured to store an indication that a given value is a member of a set, including by executing an architected instruction having an operand specifying the given value, wherein executing comprises applying a hash function to the value to determine an index into one of the storage arrays and setting a bit of the storage array corresponding to the index. An architected query instruction is later executed to determine if a query value is not a member of the set, including by applying the hash function to the query value to determine an index into the storage array and determining whether a bit at the index of the storage array is set.
    Type: Application
    Filed: February 18, 2010
    Publication date: August 18, 2011
    Inventors: John R. Rose, Lawrence A. Spracklen, Zoran Radovic
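
Entry 20110202725 implements Bloom filters in hardware; the same add/query behaviour can be sketched in software. The sketch below hedges on details the abstract leaves open: SHA-256 stands in for the architected hash function, and the bit-array size and hash count are arbitrary:

```python
import hashlib

class BloomFilter:
    """Software sketch of the architected instructions: `add` hashes a
    value to bit indexes and sets those bits; `might_contain` can prove
    only non-membership, since a set bit may be a false positive."""
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = 0                       # bit array packed into an int

    def _indexes(self, value):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, value):
        for idx in self._indexes(value):
            self.array |= 1 << idx

    def might_contain(self, value):
        return all((self.array >> idx) & 1 for idx in self._indexes(value))
```

As in the abstract, a negative answer from the query is definitive, while a positive answer only means the value may be in the set.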
  • Publication number: 20110202793
    Abstract: A method performed by a domain name service client includes storing DNS entries in a local cache; sending a DNS query to another device to obtain an update to one of the DNS entries; determining whether a DNS response is received; and resetting a time-to-live (TTL) timer associated with the one of the DNS entries when the DNS response is not received.
    Type: Application
    Filed: February 12, 2010
    Publication date: August 18, 2011
    Applicant: Verizon Patent and Licensing, Inc.
    Inventor: Ce Xu
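
Entry 20110202793 keeps a stale DNS entry alive by resetting its TTL timer when a refresh query goes unanswered. A minimal sketch of that client-side behaviour, assuming a `send_query` callable that returns `None` on timeout (the names and TTL values are illustrative):

```python
import time

class DnsCache:
    """When a refresh query for a cached entry draws no DNS response,
    reset the entry's TTL timer instead of letting the entry expire."""
    def __init__(self):
        self._entries = {}               # name -> (address, expiry)

    def store(self, name, address, ttl):
        self._entries[name] = (address, time.monotonic() + ttl)

    def refresh(self, name, ttl, send_query):
        address, _ = self._entries[name]
        response = send_query(name)      # may return None on timeout
        if response is not None:
            address = response           # normal update path
        # In both cases the TTL timer is (re)set.
        self._entries[name] = (address, time.monotonic() + ttl)
        return address
```

This trades strict freshness for availability: the client keeps resolving a name even while its DNS server is unreachable.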
  • Publication number: 20110202730
    Abstract: An information processing apparatus according to the present invention is arranged in a client terminal connected via a network to a server storing data, wherein the information processing apparatus receives requests from one or a plurality of applications in the client terminal and controls transmission and reception of information to/from the server. The information processing apparatus includes an authentication information storage unit for storing authentication information of a user for accessing the server, and a request transmission unit for attaching the authentication information of the user of the client terminal to a request, based on the request given by the application of the client terminal, and transmitting the request to the server.
    Type: Application
    Filed: December 23, 2010
    Publication date: August 18, 2011
    Applicant: Sony Corporation
    Inventors: Shuhei SONODA, Tsutomu KAWACHI, Masayuki TAKADA
  • Publication number: 20110197017
    Abstract: High-endurance non-volatile memory devices (NVMD) are described. A high-endurance NVMD includes an I/O interface, a NVM controller, and a CPU, along with a volatile memory subsystem and at least one non-volatile memory (NVM) module. The volatile memory subsystem is configured as a data cache subsystem. The at least one NVM module is configured as data storage when the NVMD is adapted to a host computer system. The I/O interface is configured to receive incoming data from the host into the data cache subsystem and to send requested data from the data cache subsystem to the host. The at least one NVM module may comprise at least first and second types of NVM. The first type comprises SLC flash memory, while the second type comprises MLC flash. The first type of NVM is configured as a buffer between the data cache subsystem and the second type of NVM.
    Type: Application
    Filed: April 19, 2011
    Publication date: August 11, 2011
    Applicant: Super Talent Electronics, Inc.
    Inventors: I-Kang Yu, David Q. Chow, Charles C. Lee, Abraham Chih-Kang Ma, Ming-Shiang Shen
  • Publication number: 20110197028
    Abstract: Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory.
    Type: Application
    Filed: February 5, 2010
    Publication date: August 11, 2011
    Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
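
Entry 20110197028 translates a memory address into a partition-identifying portion and further access portions for a multi-channel cache. A minimal sketch of that bit-field translation follows; the field widths are illustrative assumptions, not values from the patent:

```python
def translate(address, partition_bits=2, channel_bits=1):
    """Split a main-memory address into a partition selector, a channel
    selector within that partition, and the remaining access portion
    applied to the selected channel."""
    partition = address & ((1 << partition_bits) - 1)
    channel = (address >> partition_bits) & ((1 << channel_bits) - 1)
    remainder = address >> (partition_bits + channel_bits)
    return partition, channel, remainder
```

With low-order bits selecting the partition, consecutive addresses spread across partitions and channels, which is the usual motivation for multi-channel interleaving.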
  • Publication number: 20110197029
    Abstract: A method and apparatus for accelerating a software transactional memory (STM) system is described herein. Annotation fields are associated with lines of a transactional memory. An annotation field associated with a line of the transactional memory is initialized to a first value upon starting a transaction. In response to encountering a read operation in the transaction, the annotation field is checked. If the annotation field includes the first value, the read is serviced from the line of the transactional memory without having to search an additional write space. A second or third value in the annotation field potentially indicates whether a read operation missed the transactional memory or a tentative value is stored in a write space. Additionally, an additional bit in the annotation field may be utilized to indicate whether previous read operations have been logged, allowing subsequent redundant read logging to be reduced.
    Type: Application
    Filed: April 26, 2011
    Publication date: August 11, 2011
    Inventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Quinn Jacobson
  • Publication number: 20110197052
    Abstract: Described is a technology by which a virtual hard disk is maintained between a far (e.g., remote) backing store and a near (e.g., local) backing store, which among other advantages facilitates fast booting of a machine coupled to the virtual hard disk. Read requests are serviced from the near backing store (e.g., a differencing layer) when the data is available thereon, or from the far backing store (e.g., a base layer) when not. The near backing store may be configured with a cache layer that corresponds to the base layer and a write differencing layer that stores writes, or a single differencing layer may be used for both caching read data and for storing write data. A background copy operation may be used to fill the cache until the far backing store data is no longer needed.
    Type: Application
    Filed: February 8, 2010
    Publication date: August 11, 2011
    Applicant: Microsoft Corporation
    Inventors: Dustin L. Green, Jacob K. Oshins, Michael L. Neil
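
Entry 20110197052 services virtual-disk reads from a near backing store when possible and from the far base layer otherwise, caching near on the way. A minimal dictionary-based sketch (block contents and the single combined near layer are illustrative simplifications):

```python
class LayeredDisk:
    """Read-path sketch: serve a block from the near layer when present,
    otherwise fetch it from the far base layer and cache it near;
    writes land only in the near differencing layer."""
    def __init__(self, far_blocks):
        self.far = far_blocks            # base layer (e.g. remote)
        self.near = {}                   # cache + write differencing layer
        self.far_reads = 0

    def read(self, block):
        if block not in self.near:
            self.far_reads += 1
            self.near[block] = self.far[block]
        return self.near[block]

    def write(self, block, data):
        self.near[block] = data
```

Once a block is cached (or written) near, the far store is never consulted for it again, which is what lets a background copy eventually make the far layer unnecessary.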
  • Publication number: 20110191321
    Abstract: Embodiments of the invention disclose an advertisement or segment of a webpage that displays suggested search queries as selectable links. Suggested queries may be based on content associated with the webpage, or the description of the webpage (such as a URL), or default suggestions. In one example, content of a page is crawled for terms that are mapped to suggested queries. Queries may be represented as textual links or multimedia images embedded in pages accessed over a network, and selection of a query may direct or enhance search engine traffic.
    Type: Application
    Filed: February 1, 2010
    Publication date: August 4, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: KRISHNA GADE, ANDREY YEGOROV, JOANNA CHAN, DANIEL C. FAIN, SANAZ AHARI, NITIN AGRAWAL
  • Publication number: 20110191540
    Abstract: Provided are a method, system, and computer program product for processing read and write requests in a storage controller. A host adaptor in the storage controller receives a write request from a host system for a storage address in a storage device. The host adaptor sends write information indicating the storage address updated by the write request to a device adaptor in the storage controller. The host adaptor writes the write data to a cache in the storage controller. The device adaptor adds the storage address indicated in the write information to a modified storage address list stored in the device adaptor, wherein the modified storage address list indicates modified data in the cache for storage addresses in the storage device.
    Type: Application
    Filed: February 3, 2010
    Publication date: August 4, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lawrence Y. Chiu, Yu-Cheng Hsu, Sangeetha Seshadri
  • Publication number: 20110191539
    Abstract: A data processing apparatus is provided, configured to carry out data processing operations on behalf of a main data processing apparatus, comprising a coprocessor core configured to perform the data processing operations and a reset controller configured to cause the coprocessor core to reset. The coprocessor core performs its data processing in dependence on current configuration data stored therein, the current configuration data being associated with a current processing session. The reset controller is configured to receive pending configuration data from the main data processing apparatus, the pending configuration data associated with a pending processing session, and to store the pending configuration data in a configuration data queue. The reset controller is configured, when the coprocessor core resets, to transfer the pending configuration data from the configuration data queue to be stored in the coprocessor core, replacing the current configuration data.
    Type: Application
    Filed: February 3, 2010
    Publication date: August 4, 2011
    Applicant: ARM Limited
    Inventors: Ola Hugosson, Erik Persson, Pontus Borg
  • Publication number: 20110191541
    Abstract: Techniques for distributed cache management are provided. A server having backend resources includes a global cache and a global cache agent. Individual clients each have client cache agents and client caches. When data items associated with the backend resources are added, modified, or deleted in the client caches, the client cache agents report the changes to the global cache agent. The global cache agent records the changes and notifies the other client cache agents to update the status of the changes within their client caches. When the changes are committed to the backend resource, each of the statuses in each of the caches is updated accordingly.
    Type: Application
    Filed: January 29, 2010
    Publication date: August 4, 2011
    Inventors: Lee Edward Lowry, Brent Thurgood, Stephen R. Carter
  • Publication number: 20110191544
    Abstract: A data cache wherein the contents of the cache are arranged and organised according to a hierarchy. When a member of a first level of the hierarchy is accessed, all contents of that member are copied to the cache. The cache may be arranged according to folders which contain data or blocks of data. A process for caching data using such an arrangement is also provided.
    Type: Application
    Filed: April 24, 2009
    Publication date: August 4, 2011
    Applicant: NOKIA CORPORATION
    Inventors: Harsha Sathyanarayana Naga, Neeraj Nayan
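
Entry 20110191544 caches whole hierarchy members at once: touching any item in a folder pulls the folder's entire contents into the cache. A minimal sketch, with a dictionary-of-dictionaries standing in for the backing store:

```python
class FolderCache:
    """Hierarchy-organised cache sketch: accessing any item of a
    first-level member (a folder) copies all of that member's
    contents into the cache."""
    def __init__(self, store):
        self.store = store               # folder -> {name: data}
        self.cache = {}

    def get(self, folder, name):
        if folder not in self.cache:
            # Copy *all* contents of the accessed member, not just `name`.
            self.cache[folder] = dict(self.store[folder])
        return self.cache[folder][name]
```

This is effectively prefetching at folder granularity: sibling items are cached on the assumption they will be accessed next.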
  • Patent number: 7991958
    Abstract: A method for providing DRM files using caching includes identifying DRM files to be displayed in a file list in response to a request, decoding a number of first DRM files from among the identified DRM files and caching the first DRM files in a first memory space, and reading the first DRM files in the first memory space in response to the request. Then, a system displays the first DRM files as a file list in a display area. The second DRM files from among the identified DRM files other than the first DRM files are not initially decoded, and file data related to the second DRM files are cached in a second memory space. DRM files from among the second DRM files are subsequently decoded in response to a subsequent command.
    Type: Grant
    Filed: January 30, 2009
    Date of Patent: August 2, 2011
    Assignee: Pantech Co., Ltd.
    Inventor: Ho Hee Lee
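
Patent 7991958 decodes only the first batch of DRM files eagerly and caches the rest undecoded, decoding them on demand. A minimal two-tier sketch; `decode` here is a stand-in for real DRM decoding and the file names are hypothetical:

```python
class DrmFileList:
    """The first `visible` files are decoded eagerly into the first
    memory space for the list display; the rest are cached undecoded
    in the second space and decoded only when opened."""
    def __init__(self, files, visible, decode):
        self.decode = decode
        self.decoded = {f: decode(f) for f in files[:visible]}   # first space
        self.pending = list(files[visible:])                     # second space

    def open(self, name):
        if name not in self.decoded:
            self.pending.remove(name)
            self.decoded[name] = self.decode(name)               # lazy decode
        return self.decoded[name]
```

Deferring the expensive decode step to a subsequent command keeps the initial file-list display fast, which is the stated point of the two memory spaces.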
  • Publication number: 20110182348
    Abstract: Frame data stored in an external memory is partitioned into a plurality of macroblocks, and a plurality of access units each comprising at least one macroblock are provided. A plurality of frames are fetched from the external memory by loading the plurality of access units in a predetermined sequence. A current data for decoding a macroblock of the first access unit and a reference data for decoding a macroblock of the second access unit are loaded from the first access unit, and respectively mapped to a first memory group and a second memory group of a circular cache according to the frame width.
    Type: Application
    Filed: January 26, 2010
    Publication date: July 28, 2011
    Inventors: Chien-Chang Lin, Chan-Shih Lin
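
Entry 20110182348 maps macroblock data into a circular cache according to the frame width. One way to sketch the wrap-around mapping: derive a linear macroblock index from the frame width and take it modulo the slot count (the slot count here is an illustrative assumption):

```python
def circular_slot(row, col, frame_width_mb, slots):
    """Map a macroblock coordinate to a circular-cache slot: the linear
    macroblock index, computed from the frame width in macroblocks,
    wraps modulo the number of cache slots."""
    return (row * frame_width_mb + col) % slots
```

New macroblocks thus overwrite the oldest slots in order, which is the defining behaviour of a circular cache.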
  • Publication number: 20110179108
    Abstract: Systems and methods for aggregating information and delivering user-specific content, having at least one system server computer with a plurality of data feeds operatively coupled thereto through which the system receives new content. The system has at least one content storage device and a global cache member coupled to a plurality of client computers through the server. The method includes storing content received from an external source on the content storage device and defining and storing at least one user account content delivery preference on a server computer. The method also includes receiving new content from the external source, storing the new content data to a system cache memory, and sorting the content and new content according to at least one user account preference stored on the server computer.
    Type: Application
    Filed: January 21, 2010
    Publication date: July 21, 2011
    Applicant: International Business Machines Corporation
    Inventors: Michael W. Sorenson, Timothy J. Finley
  • Publication number: 20110179308
    Abstract: A multiple-processor system 2 is provided where each processor 4-0, 4-1 can be dynamically switched between running in a locked mode where one processor 4-1 checks the operation of the other processor 4-0 and a split mode where each processor 4-0, 4-1 operates independently. Multiple auxiliary circuits 8-0, 8-1 provide auxiliary functions for the plurality of processors 4-0, 4-1. In the split mode, each auxiliary circuit 8-0, 8-1 separately provides auxiliary functions for a corresponding one of the processors 4-0, 4-1. To ensure coherency when each processor 4-0, 4-1 executes a common set of processing operations, in the locked mode a shared one of the auxiliary circuits 8-0 provides auxiliary functions for all of the processors 4-0, 4-1.
    Type: Application
    Filed: January 21, 2010
    Publication date: July 21, 2011
    Applicant: ARM Limited
    Inventors: Chiloda Ashan Senerath Pathirane, Antony John Penton
  • Publication number: 20110179226
    Abstract: The present invention provides a data processor capable of reducing power consumption at the time of execution of a spin wait loop for a spinlock. A CPU executes a weighted load instruction at the time of performing a spinlock process and outputs a spin wait request to a corresponding cache memory. When the spin wait request is received from the CPU, the cache memory temporarily stops outputting an acknowledge response to a read request from the CPU until a predetermined condition (snoop write hit, interrupt request, or lapse of predetermined time) is satisfied. Therefore, pipeline execution of the CPU is stalled and the operation of the CPU and the cache memory can be temporarily stopped, and power consumption at the time of executing a spin wait loop can be reduced.
    Type: Application
    Filed: January 19, 2011
    Publication date: July 21, 2011
    Inventor: Hirokazu TAKATA
  • Publication number: 20110179255
    Abstract: A processor 4 is provided with reset circuitry 48 which generates a reset signal to reset a plurality of state parameters. Partial reset circuitry 50 is additionally provided to reset a proper subset of this plurality of state parameters. The reset circuitry triggers a redirection of program flow. The partial reset circuitry permits a continuation of program flow. The partial reset circuitry may be used to place processors into a known state with a low latency before switching from a split mode of operation into a locked mode of operation.
    Type: Application
    Filed: January 21, 2010
    Publication date: July 21, 2011
    Applicant: ARM Limited
    Inventors: Chiloda Ashan Senerath Pathirane, Antony John Penton, Andrew Christopher Rose
  • Publication number: 20110179258
    Abstract: The described embodiments provide a system for executing instructions in a processor. In the described embodiments, upon detecting a return of input data for a deferred instruction while executing instructions in an execute-ahead mode, the processor determines whether a replay bit is set in a corresponding entry for the returned input data in a miss buffer. If the replay bit is set, the processor transitions to a deferred-execution mode to execute deferred instructions. Otherwise, the processor continues to execute instructions in the execute-ahead mode.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: SUN MICROSYSTEMS, INC.
    Inventors: Martin R. Karlsson, Sherman H. Yip, Shailender Chaudhry
  • Publication number: 20110173415
    Abstract: According to one embodiment, each of the routers includes: a cache mechanism that stores data transferred to the other routers or processor elements; and a unit that, when an access generated from one of the processor elements is transferred thereto and the target data of the access is stored in the cache mechanism, reads out the data from the cache mechanism and transmits the data to the processor element that is the request source.
    Type: Application
    Filed: September 2, 2010
    Publication date: July 14, 2011
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Jun Tanabe, Hiroyuki Usui
  • Publication number: 20110173391
    Abstract: A system and method to access data from a portion of a level two memory or from a level one memory is disclosed. In a particular embodiment, the system includes a level one cache and a level two memory. A first portion of the level two memory is coupled to an input port and is addressable in parallel with the level one cache.
    Type: Application
    Filed: January 14, 2010
    Publication date: July 14, 2011
    Applicant: QUALCOMM INCORPORATED
    Inventors: Suresh K. Venkumahanti, Christopher Edward Koob, Lucian Codrescu
  • Publication number: 20110173393
    Abstract: A cache memory according to the present invention includes: a first port for input of a command from the processor; a second port for input of a command from a master other than the processor; a hit determining unit which, when a command is input to said first port or said second port, determines whether or not data corresponding to an address specified by the command is stored in said cache memory; and a first control unit which performs a process for maintaining coherency of the data stored in the cache memory and corresponding to the address specified by the command and data stored in the main memory, and outputs the input command to the main memory as a command output from the master, when the command is input to the second port and said hit determining unit determines that the data is stored in said cache memory.
    Type: Application
    Filed: March 23, 2011
    Publication date: July 14, 2011
    Applicant: PANASONIC CORPORATION
    Inventor: Takanori ISONO
  • Publication number: 20110173390
    Abstract: There is provided a storage management system capable of utilizing division management with enhanced flexibility and of enhancing security of the entire system, by providing functions by program products in each division unit of a storage subsystem. The storage management system has a program-product management table stored in a shared memory in the storage subsystem and showing the presence or absence of the program products, which provide management functions of respective resources to respective SLPRs. At the time of executing the management functions by the program products in the SLPRs of users in accordance with instructions from the users, the program-product management table is referred to and execution of any management function having no program product is restricted.
    Type: Application
    Filed: March 28, 2011
    Publication date: July 14, 2011
    Inventors: Shuichi Yagi, Kozue Fujii, Tatsuya Murakami
  • Patent number: 7979639
    Abstract: A cache-resident area is optimized where cache residence control in units of LUs is applied to a storage apparatus that virtualizes capacity, by acquiring only a cache area of a size equal to the physical capacity assigned to the LU. An LU, as a logical space resident in cache memory, is configured by a set of pages acquired by dividing a pool volume, as a physical space created by using a plurality of storage devices, in a predetermined size. When the LU to be resident in the cache memory is created, a capacity corresponding to the size of the LU is not initially acquired in the cache memory; instead, a cache capacity equal to the physical capacity allocated to a new page is acquired in the cache memory each time a page is newly allocated, and the new page is made resident in the cache memory.
    Type: Grant
    Filed: December 10, 2008
    Date of Patent: July 12, 2011
    Assignee: Hitachi, Ltd.
    Inventor: Hideyuki Koseki
  • Publication number: 20110167243
    Abstract: Techniques and structures are disclosed for a processor supporting checkpointing to operate effectively in scouting mode while a maximum number of supported checkpoints are active. Operation in scouting mode may include using bypass logic and a set of register storage locations to store and/or forward in-flight instruction results that were calculated during scouting mode. These forwarded results may be used during scouting mode to calculate memory load addresses for yet other in-flight instructions, and the processor may accordingly cause data to be prefetched from these calculated memory load addresses. The set of register storage locations may comprise a working register file or an active portion of a multiported register file.
    Type: Application
    Filed: January 5, 2010
    Publication date: July 7, 2011
    Inventors: Sherman H. Yip, Paul Caprioli
  • Publication number: 20110167222
    Abstract: An unbounded transactional memory system which can process overflow data. The unbounded transactional memory system may include a host processor, a memory, and a memory processor. The host processor may include an execution unit to perform a transaction, and a cache to temporarily store data. The memory processor may store overflow data in overflow storage included in the memory in response to an overflow event in which the overflow data is generated in the cache during the transaction.
    Type: Application
    Filed: December 9, 2010
    Publication date: July 7, 2011
    Applicants: Samsung Electronics Co., Ltd., SNU R&DB Foundation
    Inventors: Jaejin LEE, Jong-Deok Choi, Seung-Mo Cho
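
Entry 20110167222 lets transactional data that overflows a bounded cache spill into overflow storage in main memory. A minimal sketch of that spill-on-full behaviour (the capacity and address values are illustrative):

```python
class TxCache:
    """Unbounded-transaction sketch: transactional writes fill a
    bounded cache; once it is full, further writes spill into
    overflow storage kept in main memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}
        self.overflow = {}

    def write(self, addr, value):
        if addr in self.cache or len(self.cache) < self.capacity:
            self.cache[addr] = value
        else:
            self.overflow[addr] = value          # overflow event
    def read(self, addr):
        # Reads check the cache first, then the overflow storage.
        return self.cache[addr] if addr in self.cache else self.overflow.get(addr)
```

Because reads transparently fall back to the overflow store, a transaction's footprint is no longer limited by the cache size, which is the "unbounded" property the abstract claims.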
  • Publication number: 20110167223
    Abstract: Memory access is accelerated by performing a burst read without any problems caused due to rewriting of data. A buffer memory device reads, in response to a read request from a processor, data from a main memory including cacheable and uncacheable areas. The buffer memory device includes an attribute obtaining unit which obtains the attribute of the area indicated by a read address included in the read request; an attribute determining unit which determines whether or not the attribute obtained by the attribute obtaining unit is burst-transferable; a data reading unit which performs a burst read of data including data held in the area indicated by the read address, when determined that the attribute obtained by the attribute obtaining unit is burst-transferable; and a buffer memory which holds the data burst read by the data reading unit.
    Type: Application
    Filed: March 17, 2011
    Publication date: July 7, 2011
    Applicant: PANASONIC CORPORATION
    Inventor: Takanori ISONO
  • Publication number: 20110161565
Abstract: A flash memory storage system having a flash memory controller and a flash memory chip is provided. The flash memory controller configures a second physical unit of the flash memory chip as a midway cache physical unit corresponding to a first physical unit and temporarily stores first data corresponding to a first host write command and second data corresponding to a second host write command in the midway cache physical unit, wherein the first and second data correspond to slow physical addresses of the first physical unit. Then, the flash memory controller synchronously copies the first and second data from the midway cache physical unit into the first physical unit, thereby shortening the time for writing data into the flash memory chip.
    Type: Application
    Filed: February 22, 2010
    Publication date: June 30, 2011
    Applicant: PHISON ELECTRONICS CORP.
    Inventors: Lai-Hock Chua, Kheng-Chong Tan
  • Publication number: 20110161630
Abstract: An apparatus and method is described herein for replacing faulty core components. General purpose hardware is provided to replace core pipeline components, such as execution units. In the embodiment of execution unit replacement, a proxy unit is provided, such that mapping logic is able to map instructions/operations, which correspond to faulty execution units, to the proxy unit. As a result, the proxy unit is able to receive the operations, send them to general purpose hardware for execution, and subsequently write back the execution results to a register file; it essentially replaces the defective execution unit, allowing a processor with defective units to be sold or to continue operation.
    Type: Application
    Filed: December 28, 2009
    Publication date: June 30, 2011
    Inventors: Steven E. Raasch, Michael D. Powell, Shubhendu S. Mukherjee, Arijit Biswas
  • Publication number: 20110161879
Abstract: A method of browsing images on an electronic device includes displaying an index. The index defines F items. Each item has an item number. A display shows G images with image numbers. All the image numbers are defined from 1 to N. The item number corresponds to the image number. The item with the item number F/2 is located at the center of the display. The item located on the left side of the display is considered as variable f. The index is moved along a first direction. The display shows the items with the item numbers from f to f+G−1. The item with the item number F/2 is moved to the center of the display if the item number f is smaller than a first value adjacent to 1 or bigger than a second value adjacent to F.
    Type: Application
    Filed: February 9, 2010
    Publication date: June 30, 2011
    Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventor: TENG-YU TSAI
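The windowing arithmetic in the abstract above can be sketched in a few lines. This is only an illustration: the function names and the recentering bounds `low`/`high` (the abstract's "first value" and "second value") are assumptions, not part of the patent.

```python
def visible_items(f, G):
    """Item numbers shown when the leftmost displayed item is f:
    f through f + G - 1, as described in the abstract."""
    return list(range(f, f + G))

def recenter(f, F, low, high):
    """Snap the view back to the middle item F/2 when f drifts outside
    the interval [low, high]; otherwise leave f unchanged."""
    return F // 2 if f < low or f > high else f
```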
  • Publication number: 20110161841
    Abstract: A request for a string of an application to be displayed during a Web browser or other client-based application session is received. The string is obtained and modified based on one or more pseudo localization settings associated with the session. The modified string, rather than the obtained string, is returned for display during the session.
    Type: Application
    Filed: December 29, 2009
    Publication date: June 30, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Ryan D. Parsell, Timothy John McCracken
  • Publication number: 20110161585
    Abstract: Methods and apparatus to efficiently process non-ownership load requests hitting modified line (M-line) in cache of a different processor are described. In one embodiment, a first agent changes the state of a first data and forwards it to a second, requesting agent who stores the first data in an alternative modified state. Other embodiments are also described.
    Type: Application
    Filed: December 26, 2009
    Publication date: June 30, 2011
    Inventors: SAILESH KOTTAPALLI, Jeffrey Baxter, James R. Vash, Bongjin Jung, Andrew Y. Sun
  • Publication number: 20110161549
    Abstract: A memory control device for controlling an access from a processing unit to a cache memory, the memory control device includes: an address estimation circuit for receiving a first read address of the cache memory from the processing unit and estimating a second read address on the basis of the first read address; an access start detection circuit for detecting an access start of accessing cache memory at the first read address and outputting an access start signal; a data control circuit for receiving read data from the cache memory and for outputting the read data to the processing unit; and a clock control circuit for controlling a read clock to be output to the processing unit in response to the access start signal, the processing unit receiving the read data from the data control circuit with the read clock.
    Type: Application
    Filed: December 10, 2010
    Publication date: June 30, 2011
    Applicant: FUJITSU SEMICONDUCTOR LIMITED
    Inventor: Akinori HASHIMOTO
  • Publication number: 20110161748
Abstract: Embodiments of the invention are generally directed to systems, methods, and apparatuses for hybrid memory. In one embodiment, a hybrid memory may include a package substrate. The hybrid memory may also include a hybrid memory buffer chip attached to the first side of the package substrate, with high speed input/output (HSIO) logic supporting an HSIO interface with a processor. The hybrid memory also includes packet processing logic to support a packet processing protocol on the HSIO interface. Additionally, the hybrid memory also has one or more memory tiles that are vertically stacked on the hybrid memory buffer.
    Type: Application
    Filed: December 31, 2009
    Publication date: June 30, 2011
    Inventors: Bryan Casper, Randy Mooney, Dave Dunning, Mozhgan Mansuri, James E. Jaussi
  • Publication number: 20110161600
    Abstract: A processor holds, in a plurality of respective cache lines, part of data held in a main memory unit. The processor also holds, in the plurality of respective cache lines, a tag address used to search for the data held in the cache lines and a flag indicating the validity of the data held in the cache lines. The processor executes a cache line fill instruction on a cache line corresponding to a specified address. Upon execution of the cache line fill instruction, the processor registers predetermined data in the cache line of the cache memory unit which has a tag address corresponding to the specified address and validates a flag in the cache line having the tag address corresponding to the specified address.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 30, 2011
    Applicant: Fujitsu Limited
    Inventors: Takahito Hirano, Iwao Yamazaki
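The cache line fill operation described in the abstract above (install predetermined data in the line for an address, set its tag, and validate its flag) can be mimicked with a toy direct-mapped cache. This is a software sketch of hardware behavior; all names, the line size, and the number of lines are assumptions.

```python
class DirectMappedCache:
    """Toy direct-mapped cache illustrating a 'cache line fill':
    register predetermined data in a line, set its tag, mark it valid."""

    LINE_SIZE = 64   # bytes per line (assumed)
    NUM_LINES = 8    # number of cache lines (assumed)

    def __init__(self):
        self.valid = [False] * self.NUM_LINES
        self.tags = [0] * self.NUM_LINES
        self.data = [bytes(self.LINE_SIZE)] * self.NUM_LINES

    def _index_tag(self, addr):
        line = addr // self.LINE_SIZE
        return line % self.NUM_LINES, line // self.NUM_LINES

    def line_fill(self, addr, fill=b"\x00"):
        """Install fill data in the line for `addr` and validate its flag."""
        idx, tag = self._index_tag(addr)
        self.data[idx] = fill * self.LINE_SIZE
        self.tags[idx] = tag
        self.valid[idx] = True

    def lookup(self, addr):
        """Hit only when the line is valid and its tag matches the address."""
        idx, tag = self._index_tag(addr)
        return self.valid[idx] and self.tags[idx] == tag
```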
  • Publication number: 20110161584
Abstract: A system and method for servicing an inquiry command from a host device requesting inquiry data about a sequential device on a storage area network. The inquiry data may be cached by circuitry coupled to the host device and the sequential device. The circuitry may reside in a router. In some embodiments, depending upon whether the sequential device is available to process the inquiry command, the circuitry may forward the inquiry command to the sequential device or process the inquiry command itself, utilizing a cached version of the inquiry data. The cached version may include information indicating that the sequential device is not available. In some embodiments, regardless of whether the sequential device is available, the circuitry may process the inquiry command and return the inquiry data from a cache memory.
    Type: Application
    Filed: March 7, 2011
    Publication date: June 30, 2011
    Inventors: Stephen G. Dale, Bradfred W. Culp
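The routing decision in the abstract above (forward the inquiry to the sequential device when it is available, otherwise answer from a cached copy marked to show the device is busy) can be sketched as follows. All names are hypothetical; the abstract does not specify an API.

```python
class InquiryRouter:
    """Sketch of router circuitry that caches a device's inquiry data."""

    def __init__(self, is_available, inquiry_fn):
        self.is_available = is_available      # () -> bool: device ready?
        self.inquiry_fn = inquiry_fn          # () -> dict of inquiry data
        self.cached_inquiry = None

    def handle_inquiry(self):
        if self.is_available():
            data = self.inquiry_fn()          # forward to the real device
            self.cached_inquiry = dict(data)  # refresh the cached copy
            return data
        if self.cached_inquiry is not None:
            # Device busy: answer from cache, flagging unavailability.
            return {**self.cached_inquiry, "device_busy": True}
        raise RuntimeError("device unavailable and no cached inquiry data")
```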
  • Publication number: 20110161598
    Abstract: Embodiments of the present invention provide a method, system and computer program product for dual timer fragment caching. In an embodiment of the invention, a dual timer fragment caching method can include establishing both a soft timeout and also a hard timeout for each fragment in a fragment cache. The method further can include managing the fragment cache by evicting fragments in the fragment cache subsequent to a lapsing of a corresponding hard timeout. The management of the fragment cache also can include responding to multiple requests by multiple requestors for a stale fragment in the fragment cache with a lapsed corresponding soft timeout by returning the stale fragment from the fragment cache to some of the requestors, by retrieving and returning a new form of the stale fragment to others of the requestors, and by replacing the stale fragment in the fragment cache with the new form of the stale fragment with a reset soft timeout and hard timeout.
    Type: Application
    Filed: December 31, 2009
    Publication date: June 30, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rohit D. Kelapure, Gautam Singh, Christian Steege, Filip R. Zawadiak
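The dual-timer policy in the abstract above can be sketched in a single-threaded model: a fragment past its hard timeout is evicted and refetched, while one past only its soft timeout is served stale to the current caller while a fresh copy (with both timeouts reset) replaces it for subsequent callers. The class and the injectable clock are assumptions for illustration; the patent's concurrent multi-requestor behavior is collapsed into sequential calls.

```python
import time

class DualTimerFragmentCache:
    """Fragment cache with per-entry soft and hard timeouts (sketch)."""

    def __init__(self, soft_ttl, hard_ttl, fetch_fn, clock=time.monotonic):
        self.soft_ttl = soft_ttl
        self.hard_ttl = hard_ttl
        self.fetch_fn = fetch_fn      # retrieves a new form of a fragment
        self.clock = clock
        self._entries = {}            # key -> (fragment, stored_at)

    def get(self, key):
        now = self.clock()
        entry = self._entries.get(key)
        if entry is not None:
            fragment, stored_at = entry
            age = now - stored_at
            if age >= self.hard_ttl:
                del self._entries[key]             # hard timeout: evict
            elif age >= self.soft_ttl:
                fresh = self.fetch_fn(key)         # retrieve new form
                self._entries[key] = (fresh, now)  # resets both timeouts
                return fragment                    # stale copy to this caller
            else:
                return fragment                    # still fresh
        fresh = self.fetch_fn(key)                 # miss (or just evicted)
        self._entries[key] = (fresh, now)
        return fresh
```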
  • Publication number: 20110161593
Abstract: A cache unit comprising a register file that selects an entry indicated by a cache index of n bits (n is a natural number) that is used to search for an instruction cache tag, using multiplexer groups having n stages respectively corresponding to the n bits of the cache index. Among the multiplexer groups having n stages, a multiplexer group in an mth stage has 2^(m−1) multiplex circuits. The multiplexer group in the mth stage uses a value of an mth bit (m is a natural number equal to or less than n) from the least significant bit in the cache index as a control signal. The multiplexer group in the mth stage switches all multiplex circuits included in the multiplexer group in the mth stage in accordance with the control signal.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 30, 2011
    Applicant: Fujitsu Limited
    Inventor: Iwao Yamazaki
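The selection performed by the multiplexer tree above can be mimicked in software as a loop over the index bits: stage n, nearest the entries, has 2^(n−1) two-input muxes all switched by bit n, down to the single output mux in stage 1 switched by the least significant bit. This is a toy model of hardware behavior, not anything specified by the patent.

```python
def mux_tree_select(entries, index, n):
    """Pick entries[index] from 2**n entries via an n-stage mux tree.
    Stage m has 2**(m-1) muxes, all controlled by the mth bit from
    the LSB of the index, mirroring the abstract's numbering."""
    level = list(entries)                  # 2**n candidates at the inputs
    for m in range(n, 0, -1):              # stage n (inputs) down to stage 1
        bit = (index >> (m - 1)) & 1       # mth bit from the LSB
        half = len(level) // 2
        # Every mux in this stage picks the same half of its input pair.
        level = [level[i + bit * half] for i in range(half)]
    return level[0]
```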
  • Publication number: 20110161596
    Abstract: Techniques are generally described for methods, systems, data processing devices and computer readable media related to multi-core parallel processing directory-based cache coherence. Example systems may include one multi-core processor or multiple multi-core processors. An example multi-core processor includes a plurality of processor cores, each of the processor cores having a respective cache. The system may further include a main memory coupled to each multi-core processor. A directory descriptor cache may be associated with the plurality of the processor cores, where the directory descriptor cache may be configured to store a plurality of directory descriptors. Each of the directory descriptors may provide an indication of the cache sharing status of a respective cache-line-sized row of the main memory.
    Type: Application
    Filed: December 28, 2009
    Publication date: June 30, 2011
    Inventor: Tom Conte
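A directory descriptor as described above tracks which cores' caches share a given cache-line-sized row of main memory. A minimal sketch, assuming a sharer bitmask per line (the encoding, method names, and invalidate-on-write policy are illustrative assumptions, not from the patent):

```python
class DirectoryDescriptorCache:
    """One descriptor per cached line: a bitmask of sharing cores."""

    def __init__(self, num_cores):
        self.num_cores = num_cores
        self.descriptors = {}          # line address -> sharer bitmask

    def record_fill(self, line_addr, core):
        """A core's cache filled this line: mark it as a sharer."""
        mask = self.descriptors.get(line_addr, 0)
        self.descriptors[line_addr] = mask | (1 << core)

    def sharers(self, line_addr):
        mask = self.descriptors.get(line_addr, 0)
        return [c for c in range(self.num_cores) if mask & (1 << c)]

    def invalidate_for_write(self, line_addr, writer):
        """On a write, every other sharer must drop its copy; return
        the list of cores that need an invalidation message."""
        others = [c for c in self.sharers(line_addr) if c != writer]
        self.descriptors[line_addr] = 1 << writer
        return others
```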
  • Patent number: 7970998
    Abstract: A cache memory of the present invention includes a second cache memory that is operated in parallel with a first cache memory, a judgment unit which, when a cache miss occurs in both of the first cache memory and the second cache memory, makes a true or false judgment relating to an attribute of data for which memory access resulted in the cache miss, and a controlling unit which stores memory data in the second cache memory when a judgment of true is made, and stores the memory data in the first cache memory when a judgment of false is made.
    Type: Grant
    Filed: March 17, 2006
    Date of Patent: June 28, 2011
    Assignee: Panasonic Corporation
    Inventors: Takao Yamamoto, Tetsuya Tanaka, Ryuta Nakanishi, Masaitsu Nakajima, Keisuke Kaneko, Hazuki Okabayashi
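The placement policy in the abstract above (on a miss in both parallel caches, a true/false judgment on an attribute of the accessed data decides which cache receives the fill) reduces to a simple dispatch. The predicate and the dict-based caches below are stand-ins for illustration; the actual attribute test is not specified in the abstract.

```python
def fill_on_double_miss(addr, data, first_cache, second_cache, judge):
    """After a miss in both caches, store the fetched memory data in the
    second cache when the judgment is true, else in the first cache."""
    if judge(addr):
        second_cache[addr] = data
    else:
        first_cache[addr] = data
    return data
```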
  • Publication number: 20110153938
    Abstract: The present invention is directed towards systems and methods for providing static proximity load balancing via a multi-core intermediary device. An intermediary device providing global server load balancing identifies a size of a location database comprising static proximity information. The intermediary device stores the location database to an external storage of the intermediary device responsive to determining the size of the location database is greater than a predetermined threshold. A first packet processing engine on the device receives a domain name service request for a first location, determines that proximity information for the first location is not stored in a first memory cache, transmits a request to a second packet processing engine for proximity information of the first location, and transmits a request to the external storage for proximity information of the first location responsive to the second packet processing engine not having the proximity information.
    Type: Application
    Filed: December 23, 2009
    Publication date: June 23, 2011
    Inventors: Sergey Verzunov, Anil Shetty, Josephine Suganthi