Addressing Of Memory Level In Which Access To Desired Data Or Data Block Requires Associative Addressing Means, E.g., Cache, Etc. (epo) Patents (Class 711/E12.017)

  • Publication number: 20110040725
    Abstract: A method of increasing processing performance by setting a suitable upper limit on a resource count for each processing request, according to an arrangement of hardware such as a storage device or to the contents of the processing request. A processing request acceptor accepts the processing request as a data query. An auxiliary storage device forms a storage area where a database is stored. A data operation executor analyzes the accepted processing request and executes the data operations on the basis of the analyzed result. A resource manager manages the respective data operations allocated to generated processes or threads. A buffer manager caches data of the data operations, upon their execution, from the auxiliary storage device to a memory, and determines whether or not the data targeted by the data operations are present in the cache.
    Type: Application
    Filed: February 10, 2010
    Publication date: February 17, 2011
    Inventor: Yuki SUGIMOTO
  • Publication number: 20110040938
    Abstract: Disclosed are an electronic apparatus and a method of controlling the same, the electronic apparatus comprising: a nonvolatile memory unit in which an application is stored; a volatile memory unit in which data based on execution of the application is stored; and a controller which, when the electronic apparatus is turned off, stops supplying power to the volatile memory unit if a remaining capacity of the volatile memory unit reaches a threshold value for initializing the volatile memory unit, and otherwise keeps power supplied to the volatile memory unit so that it retains the data based on the execution of the application even while the apparatus is off. With this, a memory leak that may occur when using an STR (suspend-to-RAM) mode can be effectively prevented.
    Type: Application
    Filed: March 5, 2010
    Publication date: February 17, 2011
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Nam-jae JEON, Seung-hoon LEE
  • Publication number: 20110035569
    Abstract: A superscalar pipelined microprocessor includes a register set defined by its instruction set architecture, a cache memory, execution units, and a load unit, coupled to the cache memory and distinct from the other execution units. The load unit comprises an ALU. The load unit receives an instruction that specifies a memory address of a source operand, an operation to be performed on the source operand to generate a result, and a destination register of the register set to which the result is to be stored. The load unit reads the source operand from the cache memory. The ALU performs the operation on the source operand to generate the result, rather than forwarding the source operand to any of the other execution units of the microprocessor to perform the operation on the source operand to generate the result. The load unit outputs the result for subsequent retirement to the destination register.
    Type: Application
    Filed: October 30, 2009
    Publication date: February 10, 2011
    Inventors: Gerard M. Col, Colin Eddy, Rodney E. Hooker
  • Publication number: 20110035552
    Abstract: A method of defining a dynamically adjustable user interface (“UI”) of a device is described. The method defines multiple UI elements for the UI, where each UI element includes multiple pixels. The method defines a display adjustment tool for receiving a single display adjustment parameter and in response adjusting the appearance of the UI by differentiating display adjustments to a first set of saturated pixels from the display adjustments to a second set of non-saturated pixels.
    Type: Application
    Filed: August 5, 2009
    Publication date: February 10, 2011
    Inventors: Patrick Heynen, Mike Stern, Andrew Bryant, Marian Goldeen, Bill Feth
  • Publication number: 20110035550
    Abstract: It is not uncommon for two or more wireless-enabled devices to spend most of their time in close proximity to one another. For example, a person may routinely carry a personal digital assistant (PDA) and a portable digital audio/video player, or a cellphone and a PDA, or a smartphone and a gaming device. When it is desirable to increase the memory storage capacity of a first such device, it may be possible to use memory on one or more of the other devices to temporarily store data from the first device.
    Type: Application
    Filed: October 22, 2010
    Publication date: February 10, 2011
    Applicant: Research In Motion Limited
    Inventor: Neil Adams
  • Publication number: 20110035551
    Abstract: A microprocessor includes an instruction decoder for decoding a repeat prefetch indirect instruction that includes address operands used to calculate an address of a first entry in a prefetch table having a plurality of entries, each including a prefetch address. The repeat prefetch indirect instruction also includes a count specifying a number of cache lines to be prefetched. The memory address of each of the cache lines is specified by the prefetch address in one of the entries in the prefetch table. A count register, initially loaded with the count specified in the prefetch instruction, stores a remaining count of the cache lines to be prefetched. Control logic fetches the prefetch addresses of the cache lines from the table into the microprocessor and prefetches the cache lines from the system memory into a cache memory of the microprocessor using the count register and the prefetch addresses fetched from the table.
    Type: Application
    Filed: October 15, 2009
    Publication date: February 10, 2011
    Inventors: Rodney E. Hooker, John Michael Greer
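The repeat-prefetch-indirect mechanism described above can be sketched as a small simulation: a table holds prefetch addresses, a count register tracks how many cache lines remain, and control logic walks the table prefetching each named line. All names and sizes here are illustrative, not from the patent.

```python
LINE_SIZE = 64  # illustrative cache-line size in bytes

class PrefetchSimulator:
    def __init__(self, memory):
        self.memory = memory   # maps line address -> line data
        self.cache = {}        # simulated cache: line address -> line data

    def repeat_prefetch_indirect(self, table, first_entry, count):
        """Walk `count` entries of the prefetch table starting at
        `first_entry`, prefetching the cache line named by each entry."""
        count_register = count          # loaded from the instruction's count
        index = first_entry
        while count_register > 0:
            addr = table[index]                  # fetch prefetch address
            line = addr - (addr % LINE_SIZE)     # align to cache line
            self.cache[line] = self.memory.get(line)  # prefetch into cache
            index += 1
            count_register -= 1                  # remaining-count register
        return count_register
```

The count register, rather than the table length, bounds the loop, mirroring the abstract's description of a remaining-count register loaded from the instruction.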
  • Publication number: 20110035570
    Abstract: A superscalar pipelined microprocessor includes a register set defined by an instruction set architecture of the microprocessor, execution units, and a store unit, coupled to the cache memory and distinct from the other execution units of the microprocessor. The store unit comprises an ALU. The store unit receives an instruction that specifies a source register of the register set and an operation to be performed on a source operand to generate a result. The store unit reads the source operand from the source register. The ALU performs the operation on the source operand to generate the result, rather than forwarding the source operand to any of the other execution units of the microprocessor to perform the operation on the source operand to generate the result. The store unit operatively writes the result to the cache memory.
    Type: Application
    Filed: October 30, 2009
    Publication date: February 10, 2011
    Inventors: Gerard M. Col, Colin Eddy, Rodney E. Hooker
  • Publication number: 20110029736
    Abstract: The storage controller of the present invention is able to reduce the amount of purge message communication and increase the processing performance of the storage controller. Each microprocessor creates and saves a purge message every time control information in the shared memory is updated. After a series of update processes is complete, the saved purge messages are transmitted to each microprocessor. An attribute corresponding to its characteristics is assigned to the control information, and cache control and purge control are executed depending on the attribute.
    Type: Application
    Filed: February 17, 2009
    Publication date: February 3, 2011
    Inventors: Kei Sato, Takeo Fujimoto, Osamu Sakaguchi
  • Publication number: 20110029737
    Abstract: In a method of synchronizing with a separated disk cache, the separated cache is configured to transfer cache data to a staging area of a storage device. An atomic commit operation is utilized to instruct the storage device to atomically commit the cache data to a mapping scheme of the storage device.
    Type: Application
    Filed: October 14, 2010
    Publication date: February 3, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Ruston Panabaker, Cenk Ergan, Michael R. Fortin
  • Publication number: 20110029730
    Abstract: A data processing system includes a storage system and caching storage controllers coupled to the storage system and to a storage network. The storage controllers operate in an active-active fashion to provide access to volumes of the storage system from any of the storage controllers in response to storage commands from the storage network. The storage controllers employ a distributed cache protocol in which (a) each volume is divided into successive chunks of contiguous blocks, and (b) either chunk ownership may be dynamically transferred among the storage controllers in response to the storage commands, or storage commands sent to a non-owning controller may be forwarded to the owning controller.
    Type: Application
    Filed: July 31, 2009
    Publication date: February 3, 2011
    Applicant: EMC CORPORATION
    Inventors: Colin D. Durocher, Roel van der Goot
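The chunk-ownership protocol above can be modeled in a few lines: a volume is divided into chunks of contiguous blocks, each chunk has one owning controller, and a command arriving at a non-owner is forwarded to the owner. Chunk size and the first-touch ownership policy here are assumptions for illustration.

```python
CHUNK_BLOCKS = 256  # illustrative chunk size in blocks

class Controller:
    def __init__(self, name):
        self.name = name
        self.handled = []   # blocks this controller has serviced

    def handle(self, block):
        self.handled.append(block)
        return self.name

class DistributedCache:
    """Toy model of the distributed cache protocol: each chunk of
    contiguous blocks is owned by one controller; a storage command sent
    to a non-owning controller is forwarded to the owner."""
    def __init__(self, controllers):
        self.controllers = controllers
        self.owner = {}     # chunk index -> owning controller

    def submit(self, receiving, block):
        chunk = block // CHUNK_BLOCKS
        # First touch claims the chunk (one possible dynamic-ownership rule).
        owner = self.owner.setdefault(chunk, receiving)
        return owner.handle(block)   # forwards when receiving != owner
```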
  • Publication number: 20110029735
    Abstract: A method for managing an embedded system is provided. The method includes selecting one of a first memory and a second memory according to at least one criterion, where the selected memory is a source from which the embedded system reads commands of a program, and an access speed of the first memory is different from that of the second memory; and controlling the embedded system to execute the program by utilizing the selected memory as the source.
    Type: Application
    Filed: July 28, 2009
    Publication date: February 3, 2011
    Inventors: Ying-Chieh Chiang, Wei-Hsien Lin
  • Publication number: 20110029712
    Abstract: A memory device includes an on-board cache system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The cache system operates in a manner that can be transparent to a memory controller to which the memory device is connected. Alternatively, the memory controller can control the operation of the cache system.
    Type: Application
    Filed: October 11, 2010
    Publication date: February 3, 2011
    Applicant: MICRON TECHNOLOGY, INC.
    Inventor: David Resnick
  • Patent number: 7882299
    Abstract: A method of programming a non-volatile memory array using an on-chip write cache is disclosed. Individual data packets received by the memory system are stored in cache memory. More than one data packet may be stored in this way and then programmed to a single page of the non-volatile array. This results in more efficient use of storage space in the non-volatile array.
    Type: Grant
    Filed: December 21, 2004
    Date of Patent: February 1, 2011
    Assignee: SanDisk Corporation
    Inventors: Kevin M. Conley, Yoram Cedar
  • Publication number: 20110022799
    Abstract: A method to speed up access to an external storage device for accessing to the external storage device comprises the steps of: (a) during startup of a computer, setting up part of a physical memory of the computer as a cache memory for use by the external storage device, in the form of a continuous physical memory area outside the physical memory area that is managed by an operating system of the computer; (b) upon detection of a request to write data to the external storage device, writing the data to the cache memory; and (c) sending the data written in the cache memory to the external storage device to be saved therein.
    Type: Application
    Filed: July 26, 2010
    Publication date: January 27, 2011
    Applicant: BUFFALO INC.
    Inventor: Noriaki SUGAHARA
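The write path in the abstract above — intercept writes destined for the external device into a reserved RAM cache, then send them on to the device — amounts to a simple write-back cache. A minimal sketch, with a dict standing in for the external storage device:

```python
class ExternalStorageCache:
    """Minimal write-back sketch of the scheme above: writes to the
    external storage device first land in a reserved physical-memory
    cache and are later sent on to the device to be saved."""
    def __init__(self, device):
        self.device = device   # stands in for the external drive
        self.cache = {}        # reserved RAM area outside OS management

    def write(self, block, data):
        self.cache[block] = data          # step (b): intercept the write

    def flush(self):
        for block, data in self.cache.items():
            self.device[block] = data     # step (c): save to the device
        self.cache.clear()
```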
  • Publication number: 20110022798
    Abstract: A method for caching terminology data, including steps of: receiving a terminology request; determining that the terminology request is related to at least one uncached terminology concept; retrieving a complete concept set of the terminology concept as a cache unit, wherein the complete concept set includes the terminology concept, all other terminology concepts which are directly correlated or indirectly correlated through a non-transitive relationship to the terminology concept, properties of each terminology concept, and the non-transitive relationship between each terminology concept; retrieving transitive relationship information for the complete concept set, the transitive relationship information at least including identifiers of terminology concepts which are correlated through the transitive relationship to each terminology concept in the complete concept set; and caching the cache unit and the transitive relationship information of the cache unit. A corresponding device caches terminology data.
    Type: Application
    Filed: June 28, 2010
    Publication date: January 27, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xue Qiao Hou, Gang Hu, Bo Li, Jing Li, Haifeng Liu, Sheng Ping Liu
  • Publication number: 20110022801
    Abstract: An apparatus, system, and method are disclosed for redundant write caching. The apparatus, system, and method are provided with a plurality of modules including a write request module, a first cache write module, a second cache write module, and a trim module. The write request module detects a write request to store data on a storage device. The first cache write module writes data of the write request to a first cache. The second cache write module writes the data to a second cache. The trim module trims the data from one of the first cache and the second cache in response to an indicator that the storage device stores the data. The data remains available in the other of the first cache and the second cache to service read requests.
    Type: Application
    Filed: July 30, 2010
    Publication date: January 27, 2011
    Inventor: David Flynn
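The redundant write-caching flow above can be sketched directly: data is written to two caches, and once the backing storage device confirms it holds the data, the copy is trimmed from one cache while reads are still served from the other. Class and method names are illustrative.

```python
class RedundantWriteCache:
    """Sketch of redundant write caching: two cache copies protect the
    data until the storage device commits it; the trim step then drops
    one copy, and the other remains available for reads."""
    def __init__(self):
        self.first = {}     # first cache
        self.second = {}    # second cache
        self.storage = {}   # backing storage device

    def write(self, key, data):
        self.first[key] = data    # first cache write module
        self.second[key] = data   # second cache write module

    def storage_committed(self, key):
        """Indicator that the storage device stores the data: trim it
        from one of the two caches."""
        self.storage[key] = self.first[key]
        del self.first[key]       # trim module

    def read(self, key):
        # Data remains available in the second cache after the trim.
        return self.second.get(key, self.storage.get(key))
```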
  • Publication number: 20110022818
    Abstract: An IOMMU for controlling requests by an I/O device to a system memory of a computer system includes control logic and a cache memory. The control logic may translate an address received in a request from the I/O device. If the request includes a transaction layer protocol (TLP) packet with a process address space identifier (PASID) prefix, the control logic may perform a two-level guest translation. Accordingly, the control logic may access a set of guest page tables to translate the address received in the request. A pointer in a last guest page table points to a first table in a set of nested page tables. The control logic may use the pointer in a last guest page table to access the set of nested page tables to obtain a system physical address (SPA) that corresponds to a physical page in the system memory. The cache memory stores completed translations.
    Type: Application
    Filed: July 24, 2009
    Publication date: January 27, 2011
    Inventors: Andrew G. Kegel, Mark D. Hummel, Stephen D. Glaser
  • Publication number: 20110022800
    Abstract: A method for selecting a cache way, the method includes: selecting an initially selected cache way out of multiple cache ways of a cache module for receiving a data unit; the method being characterized by including: searching, if the initially selected cache way is locked, for an unlocked cache way out of at least one group of cache ways that are located at predefined offsets from the initially selected cache way.
    Type: Application
    Filed: April 11, 2008
    Publication date: January 27, 2011
    Applicant: Freescale Semiconductor, Inc.
    Inventors: Rotem Porat, Moshe Anschel, Alon Eldar, Amit Gur, Shai Koren, Itay Peled
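The way-selection search above reduces to a short loop: take the initially selected way unless it is locked, otherwise probe ways at predefined offsets from it. The offset group used here is an assumption for illustration, not taken from the patent.

```python
def select_way(locked, initial, num_ways, offsets=(1, 2, 4)):
    """Pick a cache way to receive a data unit: use the initially
    selected way unless it is locked, in which case search the group of
    ways located at predefined offsets from it for an unlocked one."""
    if initial not in locked:
        return initial
    for off in offsets:
        candidate = (initial + off) % num_ways
        if candidate not in locked:
            return candidate
    return None   # every probed way is locked
```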
  • Publication number: 20110022774
    Abstract: According to a cache memory control method of an embodiment, the data write position in a segment of a cache memory is changed to an address to which lower bits of the logical block address of the write data are added as an offset. Then, even if writing is completed partway through the segment of the cache memory, the remaining regions of the segment are not wasted.
    Type: Application
    Filed: May 20, 2010
    Publication date: January 27, 2011
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kazuya TAKADA, Kenji YOSHIDA
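The addressing rule above can be shown concretely: the low bits of the logical block address pick the starting position inside the cache segment, so a partial write occupies the same slots a sequential neighbor would expect. The segment size is an illustrative assumption.

```python
SEGMENT_SECTORS = 16  # illustrative segment size in sectors

def place_write(segment, lba, data_sectors):
    """Write data into a cache segment at an offset taken from the low
    bits of the logical block address, per the scheme above, so the
    regions around a partial write remain usable."""
    offset = lba % SEGMENT_SECTORS   # low LBA bits as the in-segment offset
    for i, sector in enumerate(data_sectors):
        segment[offset + i] = sector
    return offset
```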
  • Publication number: 20110022654
    Abstract: A method for providing information to dispersed users based on an integrative cache on a communication network is provided. The method includes collecting information on the communication network, integrating the collected information and storing the collected information as a cache in a database, receiving an information request from the terminal device of the dispersed users, and determining in a symmetrical area of the communication network if the requested information exists as a cache in the database to control the path of the requested information.
    Type: Application
    Filed: October 9, 2009
    Publication date: January 27, 2011
    Applicant: ARA Networks Co. Ltd.
    Inventor: Jai Hyuk LEE
  • Patent number: 7873786
    Abstract: A compression device recognizes patterns of data, compresses the data, and sends the compressed data to a decompression device that identifies a cached version of the data to decompress the data. Both the compression device and the decompression device cache the data in packets they receive. Each device has a disk, on which each device writes the data in the same order. The compression device looks for repetitions of any block of data between multiple packets or datagrams that are transmitted across the network. The compression device encodes the repeated blocks of data by replacing them with a pointer to a location on disk. The decompression device receives the pointer and replaces the pointer with the contents of the data block that it reads from its disk.
    Type: Grant
    Filed: June 28, 2010
    Date of Patent: January 18, 2011
    Assignee: Juniper Networks, Inc.
    Inventors: Amit P. Singh, Balraj Singh, Vanco Burzevski
  • Publication number: 20110010500
    Abstract: Improved thrashing aware and self configuring cache architectures that reduce cache thrashing without increasing cache size or degrading cache hit access time, for a DSP. In one example embodiment, this is accomplished by selectively caching only the instructions having a higher probability of recurrence to considerably reduce cache thrashing.
    Type: Application
    Filed: July 13, 2010
    Publication date: January 13, 2011
    Inventors: Tushar P. Ringe, Abhijit Giri
  • Publication number: 20110010499
    Abstract: A storage system including a storage has: a first power supplier for supplying electric power; a second power supplier for supplying electric power when the first power supplier is not supplying power to the storage system; a cache memory for storing data sent from a host; a non-volatile memory for storing the data held in the cache memory; and a controller for writing the data held in the cache memory into the non-volatile memory while the second power supplier is supplying power to the storage system, and, when the first power supplier restores power to the storage system, for stopping the writing and deleting data stored in the non-volatile memory until the free space of the non-volatile memory is not less than the volume of the data held in the cache memory.
    Type: Application
    Filed: July 7, 2010
    Publication date: January 13, 2011
    Applicant: Fujitsu Limited
    Inventors: Nina Tsukamoto, Yuji Hanaoka, Terumasa Haneda, Atsushi Uchida, Yoko Kawano
  • Publication number: 20110010520
    Abstract: In an embodiment, a non-transparent memory unit is provided which includes a non-transparent memory and a control circuit. The control circuit may manage the non-transparent memory as a set of non-transparent memory blocks. Software executing on one or more processors may request a non-transparent memory block in which to process data. The control circuit may allocate a first block, and may return an address (or other indication) of the allocated block so that the software can access the block. The control circuit may also provide automatic data movement between the non-transparent memory and a main memory system to which the non-transparent memory unit is coupled. For example, the automatic data movement may include filling data from the main memory system to the allocated block, or flushing the data in the allocated block to the main memory system after the processing of the allocated block is complete.
    Type: Application
    Filed: July 10, 2009
    Publication date: January 13, 2011
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Publication number: 20110004887
    Abstract: A method of rendering magnified pointing indicia including the steps of monitoring application program interface messaging and intercepting a call for a unique system pointer identifier. A stored collection of predefined vector shapes is accessed and from that a predefined vector shape from the collection is selected which is correlated to the current system pointer identifier. A convergence point may be established for maximum pointing indicia magnification in addition to a user-selectable desktop magnification level. The vector shape is scaled in synchronization with the desktop magnification level up to the convergence point whereby the vector shape is no longer scaled up once the convergence point is reached. The scaled vector shape is rasterized and displayed to an end user operating a computer.
    Type: Application
    Filed: July 1, 2010
    Publication date: January 6, 2011
    Applicant: FREEDOM SCIENTIFIC, INC.
    Inventors: Anthony Bowman Stringer, Garald Lee Voorhees
  • Patent number: 7865668
    Abstract: A method, system, and computer program product for two-sided, dynamic cache injection control are provided. An I/O adapter generates an I/O transaction in response to receiving a request for the transaction. The transaction includes an ID field and a requested address. The adapter looks up the address in a cache translation table stored thereon, which includes mappings between addresses and corresponding address space identifiers (ASIDs). The adapter enters an ASID in the ID field when the requested address is present in the cache translation table. IDs corresponding to device identifiers, address ranges and pattern strings may also be entered. The adapter sends the transaction to one of an I/O hub and system chipset, which in turn, looks up the ASID in a table stored thereon and injects the requested address and corresponding data in a processor complex when the ASID is present in the table, indicating that the address space corresponding to the ASID is actively running on a processor in the complex.
    Type: Grant
    Filed: December 18, 2007
    Date of Patent: January 4, 2011
    Assignee: International Business Machines Corporation
    Inventors: Thomas A. Gregg, Rajaram B. Krishnamurthy
  • Publication number: 20100332755
    Abstract: An apparatus and method for improving synchronization between threads in a multi-core processor system are provided. An apparatus includes a memory, a first processor core, and a second processor core. The memory includes a shared ring buffer for storing data units, and stores a plurality of shared variables associated with accessing the shared ring buffer. The first processor core runs a first thread and has a first cache associated therewith. The first cache stores a first set of local variables associated with the first processor core. The first thread controls insertion of data items into the shared ring buffer using at least one of the shared variables and the first set of local variables. The second processor core runs a second thread and has a second cache associated therewith. The second cache stores a second set of local variables associated with the second processor core.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 30, 2010
    Inventors: Tian Bu, Girish Chandranmenon, Pak-Ching Lee
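The point of the per-core local variables above is that each thread keeps a cached copy of the other thread's index and re-reads the shared variable only when the cached value would block, cutting cross-core cache-line traffic. A single-threaded toy model (the refresh-on-apparent-full/empty policy is one common realization, assumed here, and `shared_reads` counts how rarely the shared indices are touched):

```python
class SharedRing:
    """Toy model of a cache-friendly shared ring buffer: each side keeps
    a locally cached copy of the other side's index, refreshing it from
    the shared variable only when the cached value would block."""
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0          # shared: next slot to fill (producer writes)
        self.tail = 0          # shared: next slot to drain (consumer writes)
        self.cached_tail = 0   # producer's local copy of tail
        self.cached_head = 0   # consumer's local copy of head
        self.shared_reads = 0  # refreshes of the shared indices

    def insert(self, item):
        nxt = (self.head + 1) % self.size
        if nxt == self.cached_tail:        # looks full: refresh local copy
            self.cached_tail = self.tail
            self.shared_reads += 1
            if nxt == self.cached_tail:
                return False               # genuinely full
        self.buf[self.head] = item
        self.head = nxt
        return True

    def extract(self):
        if self.tail == self.cached_head:  # looks empty: refresh local copy
            self.cached_head = self.head
            self.shared_reads += 1
            if self.tail == self.cached_head:
                return None                # genuinely empty
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        return item
```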
  • Publication number: 20100332850
    Abstract: A method (and structure) of enhancing efficiency in processing using a secure environment on a computer, includes, for each line of a cache, providing an associated object identification label field associated with the line of cache, the object identification label field storing a value that identifies an owner of data currently stored in the line of cache.
    Type: Application
    Filed: September 9, 2010
    Publication date: December 30, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Richard Harold Boivie
  • Publication number: 20100333098
    Abstract: Various techniques for dynamically allocating instruction tags and using those tags are disclosed. These techniques may apply to processors supporting out-of-order execution and to architectures that supports multiple threads. A group of instructions may be assigned a tag value from a pool of available tag values. A tag value may be usable to determine the program order of a group of instructions relative to other instructions in a thread. After the group of instructions has been (or is about to be) committed, the tag value may be freed so that it can be re-used on a second group of instructions. Tag values are dynamically allocated between threads; accordingly, a particular tag value or range of tag values is not dedicated to a particular thread.
    Type: Application
    Filed: June 30, 2009
    Publication date: December 30, 2010
    Inventors: Paul J. Jordan, Robert T. Golla, Jama I. Barreh
  • Publication number: 20100332578
    Abstract: A time-invariant method and apparatus for performing modular reduction that is protected against cache-based and branch-based attacks is provided. The modular reduction technique adds no performance penalty and is side-channel resistant. The side-channel resistance is provided through the use of lazy evaluation of carry bits, elimination of data-dependent branches and use of even cache accesses for all memory references.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 30, 2010
    Inventors: Vinodh Gopal, Gilbert M. Wolrich, Wajdi K. Feghali, James D. Guilford, Erdinc Ozturk, Martin G. Dixon
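The "elimination of data-dependent branches" above is typically realized with mask-based selection: compute the trial subtraction, derive an all-ones or all-zeros mask from the borrow, and select the result with bitwise logic instead of an `if`. A Python model of that one ingredient (real implementations operate on fixed-width machine words; the width here is illustrative):

```python
WORD_BITS = 64  # illustrative operand width

def cond_sub(x, m):
    """Branchless conditional subtract, in the spirit of the
    side-channel-hardened reduction above: select between x and x - m
    using a borrow-derived mask rather than a data-dependent branch.
    Assumes 0 <= x < 2*m and m < 2**WORD_BITS."""
    diff = x - m
    borrow = (diff >> WORD_BITS) & 1   # 1 if x < m, else 0
    mask = -borrow                     # all-ones if borrow, else zero
    return (x & mask) | (diff & ~mask)
```

Because the same instructions execute regardless of whether the subtraction is taken, an attacker observing branches or cache lines learns nothing about the operand values.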
  • Publication number: 20100332846
    Abstract: Method and apparatus for constructing an index that scales to a large number of records and provides a high transaction rate. New data structures and methods are provided to ensure that an indexing algorithm performs in a way that is natural (efficient) to the algorithm, while a non-uniform access memory device sees IO (input/output) traffic that is efficient for the memory device. One data structure, a translation table, is created that maps logical buckets as viewed by the indexing algorithm to physical buckets on the memory device. This mapping is such that write performance to non-uniform access SSD and flash devices is enhanced. Another data structure, an associative cache is used to collect buckets and write them out sequentially to the memory device as large sequential writes. Methods are used to populate the cache with buckets (of records) that are required by the indexing algorithm.
    Type: Application
    Filed: June 25, 2010
    Publication date: December 30, 2010
    Applicant: SimpliVity Corporation
    Inventors: Paul Bowden, Arthur J. Beaverson
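The two data structures named above fit together as follows: the indexing algorithm addresses logical buckets; a translation table remaps them to physical buckets on the device, so dirty buckets collected in the associative cache can be flushed as one large sequential write, which flash prefers. A minimal sketch (an append-only list stands in for the SSD):

```python
class FlashIndex:
    """Sketch of the translation-table idea: logical buckets, as seen by
    the indexing algorithm, are remapped to physical buckets so the
    memory device sees large sequential writes."""
    def __init__(self):
        self.translate = {}   # translation table: logical -> physical bucket
        self.cache = {}       # associative cache collecting dirty buckets
        self.device = []      # append-only log standing in for the SSD

    def put(self, logical, records):
        self.cache[logical] = records        # collect buckets in the cache

    def flush(self):
        start = len(self.device)
        for i, (logical, records) in enumerate(sorted(self.cache.items())):
            self.translate[logical] = start + i   # remap to the new location
            self.device.append(records)           # one sequential write
        self.cache.clear()

    def get(self, logical):
        if logical in self.cache:
            return self.cache[logical]
        return self.device[self.translate[logical]]
```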
  • Publication number: 20100332717
    Abstract: Provided is a method that, in the case of managing areas of a non-volatile memory of an information recording module by a file system, increases the speed of processing for writing file data and file system management information, and furthermore prevents a decrease in the rewriting lifetime of the non-volatile memory. The information recording module (2) is provided with a page cache control unit (217) that stores page cache information (224) in the non-volatile memory (22) of the information recording module (2) and performs control such that a specific physical block is used as a cache when writing small-sized data. Also, an access module (1) is provided with a page cache information setting unit (104) that sets information necessary for page cache control in the information recording module (2).
    Type: Application
    Filed: February 27, 2009
    Publication date: December 30, 2010
    Inventors: Takuji Maeda, Shigekazu Kogita, Shinji Inoue, Hiroki Etoh, Makoto Ochi, Masahiro Nakamura
  • Publication number: 20100332587
    Abstract: Systems and methods of the present invention provide for returning website content after being requested by a client. A static component may be requested, which may be updated, and a dynamic component may be requested. The combination of static and dynamic website content may be returned to the client.
    Type: Application
    Filed: June 30, 2009
    Publication date: December 30, 2010
    Applicant: THE GO DADDY GROUP, INC.
    Inventor: Greg Schwimer
  • Publication number: 20100332800
    Abstract: An instruction control device connects to a cache memory that stores data frequently used among data stored in a main memory. The instruction control device includes: a first free-space determining unit that determines whether there is free space in an instruction buffer; a second free-space determining unit that manages an instruction fetch request queue storing instruction fetch requests to be sent from the cache memory to the main memory, and determines whether a move-in buffer in the cache memory has free space for at least two entries if the first free-space determining unit determines that there is free space; and an instruction control unit that outputs an instruction prefetch request to the cache memory in accordance with an address boundary corresponding to the line size of the cache line, if the second free-space determining unit determines that the move-in buffer has free space.
    Type: Application
    Filed: June 29, 2010
    Publication date: December 30, 2010
    Applicant: Fujitsu Limited
    Inventor: Ryuichi Sunayama
  • Publication number: 20100332743
    Abstract: A system and a method for writing cache data and a system and a method for reading cache data are disclosed. The system for writing the cache data includes: an on-chip memory device, configured to cache received write requests and the write data associated with the write requests, and to sort the write requests; a request judging device, configured to extract the sorted write requests and the write data associated with the write requests according to write time sequence restriction information of an off-chip memory device; and an off-chip memory device controller, configured to write the write data extracted by the request judging device into the off-chip memory device. With a combination of the on-chip and off-chip memory devices, a large-capacity data storage space and high-speed read and write efficiency are achieved.
    Type: Application
    Filed: September 8, 2010
    Publication date: December 30, 2010
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Qin Zheng, Haiyan Luo, Hui Lu, Junliang Lin, Yunfeng Bian
  • Publication number: 20100332756
    Abstract: Methods and apparatus relating to processing out of order transactions for mirrored subsystems are described. In one embodiment, a device (that is mirroring data from another device) includes a cache to track out of order write operations prior to writing the data from the write operations to memory. A register may be used to track the state of the cache and cause acknowledgement of commitment of the data to memory once all cache entries, as recorded at a select point by the register, are emptied or otherwise invalidated. Other embodiments are also disclosed.
    Type: Application
    Filed: June 30, 2009
    Publication date: December 30, 2010
    Inventors: Mark A. Yarch, Pankaj Kumar, Hang T. Nguyen
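    The snapshot-and-drain acknowledgement scheme in this abstract can be sketched as below: a set models the cache of in-flight out-of-order writes, and a second set models the register snapshot taken at a select point. All names are illustrative, not from the patent.

```python
class MirrorTracker:
    """Track out-of-order mirrored writes; acknowledge commitment once
    every entry recorded in the snapshot has reached memory."""
    def __init__(self):
        self.in_flight = set()   # the 'cache' of pending writes
        self.snapshot = None     # the 'register' state at a select point

    def start_write(self, tag):
        self.in_flight.add(tag)

    def take_snapshot(self):
        self.snapshot = set(self.in_flight)

    def commit(self, tag):
        self.in_flight.discard(tag)
        if self.snapshot is not None:
            self.snapshot.discard(tag)

    def can_ack(self):
        # ack only when every snapshotted entry has been emptied/invalidated
        return self.snapshot is not None and not self.snapshot
```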
  • Publication number: 20100332700
    Abstract: Data storage controllers and data storage devices employing lossless or lossy data compression and decompression to provide accelerated data storage and retrieval bandwidth. In one embodiment of the invention, a composite disk controller provides data storage and retrieval acceleration using multiple caches for data pipelining and increased throughput. In another embodiment of the invention, the disk controller with acceleration is embedded in the storage device and utilized for data storage and retrieval acceleration.
    Type: Application
    Filed: January 15, 2010
    Publication date: December 30, 2010
    Applicant: Realtime Data LLC
    Inventor: James J. Fallon
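    The store-side compression idea described above trades CPU for storage and retrieval bandwidth. A minimal sketch, with `zlib` standing in for whichever lossless codec the controller would use:

```python
import zlib

class CompressedStore:
    """Compress data on write and decompress on read, so less data moves
    to and from the underlying storage device."""
    def __init__(self):
        self.blocks = {}

    def write(self, key, data: bytes):
        self.blocks[key] = zlib.compress(data)

    def read(self, key) -> bytes:
        return zlib.decompress(self.blocks[key])
```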
  • Publication number: 20100332753
    Abstract: Synchronizing threads on loss of memory access monitoring. Using a processor-level instruction included as part of an instruction set architecture for a processor, a read or write monitor (detecting writes, or reads and writes, respectively, by other agents) is set on a first set of one or more memory locations, and a read or write monitor is set on a second set of one or more different memory locations. A processor-level instruction is executed, which causes the processor to suspend executing instructions and optionally to enter a low power mode pending loss of a read or write monitor for the first or second set of one or more memory locations. A conflicting access is detected on the first or second set of one or more memory locations, or a timeout is detected. As a result, the method includes resuming execution of instructions.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 30, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: Jan Gray, David Callahan, Burton Jordan Smith, Gad Sheaffer, Ali-Reza Adl-Tabatabai, Bratin Saha
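    The monitored-wait behavior in this abstract can be modeled as a loop that ends either on monitor loss (a conflicting access) or on timeout. This is a toy simulation of the control flow only; real hardware would do this with processor instructions, and the access stream here is an assumption.

```python
class MonitorSync:
    """Wait until another agent touches a monitored location, or time out."""
    def __init__(self, monitored_addrs):
        self.monitored = set(monitored_addrs)

    def wait(self, accesses, timeout_steps):
        """Return ('conflict', addr) on monitor loss, or ('timeout', None)."""
        for step, addr in enumerate(accesses):
            if step >= timeout_steps:
                break
            if addr in self.monitored:   # conflicting access -> monitor lost
                return ('conflict', addr)
        return ('timeout', None)
```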
  • Publication number: 20100332807
    Abstract: Performing non-transactional escape actions within a hardware based transactional memory system. A method includes at a hardware thread on a processor beginning a hardware based transaction for the thread. Without committing or aborting the transaction, the method further includes suspending the hardware based transaction and performing one or more operations for the thread, non-transactionally and not affected by: transaction monitoring and buffering for the transaction, an abort for the transaction, or a commit for the transaction. After performing one or more operations for the thread, non-transactionally, the method further includes resuming the transaction and performing additional operations transactionally. After performing the additional operations, the method further includes either committing or aborting the transaction.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 30, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: Gad Sheaffer, Jan Gray, Martin Taillefer, Ali-Reza Adl-Tabatabai, Bratin Saha, Vadim Bassin, Robert Y. Geva, David Callahan
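    The suspend/resume escape action described above can be sketched as follows: writes made while the transaction is active are buffered until commit, while writes made during the suspended window bypass buffering entirely. This is a toy model of the control flow, not any real hardware transactional memory API.

```python
class Txn:
    """Hardware-transaction sketch with a non-transactional escape window."""
    def __init__(self, memory):
        self.memory = memory
        self.buffer = {}
        self.suspended = False

    def write(self, addr, value):
        if self.suspended:
            self.memory[addr] = value   # escape: not monitored or buffered
        else:
            self.buffer[addr] = value   # transactional: held until commit

    def suspend(self):
        self.suspended = True

    def resume(self):
        self.suspended = False

    def commit(self):
        self.memory.update(self.buffer)
        self.buffer.clear()
```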
  • Publication number: 20100332759
    Abstract: A program is obfuscated by reordering its instructions. Original instruction addresses are mapped to target addresses. A cache efficient obfuscated program is realized by restricting target addresses of a sequence of instructions to a limited set of the disjoint ranges (33a-d) of target addresses, which are at least half filled with instructions. Mapped address steps (34) are provided between the target addresses to which successive ones of the original instruction addresses are mapped. The address steps (34) include first address steps within at least a first one of the mutually disjoint ranges (33a-d). Between said first address steps, second address steps are provided within at least a second one of the mutually disjoint ranges (33a-d). Thus, a deviation from successive addresses for logically successive instructions is realized.
    Type: Application
    Filed: February 9, 2009
    Publication date: December 30, 2010
    Applicant: NXP B.V.
    Inventor: Marc Vauclair
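    The mapping idea can be illustrated with a toy policy: logically successive instructions alternate between disjoint target ranges, so they no longer sit at successive addresses, while each range is still filled densely. The alternation policy here is an illustrative choice, not the patent's.

```python
def obfuscate_addresses(num_instructions, ranges):
    """Map instruction index i -> target address, alternating between
    disjoint (start, end) ranges and stepping within each range."""
    mapping = {}
    cursors = [start for start, _ in ranges]
    for i in range(num_instructions):
        r = i % len(ranges)        # step between the disjoint ranges
        mapping[i] = cursors[r]
        cursors[r] += 1            # step within the chosen range
    return mapping
```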
  • Publication number: 20100332754
    Abstract: Systems and methods are provided for caching media data to thereby enhance media data read and/or write functionality and performance. A multimedia apparatus comprises a cache buffer configured to be coupled to a storage device, wherein the cache buffer stores multimedia data, including video and audio data, read from the storage device. A cache manager is coupled to the cache buffer, wherein the cache manager is configured to cause the storage device to enter into a reduced power consumption mode when the amount of data stored in the cache buffer reaches a first level.
    Type: Application
    Filed: August 31, 2010
    Publication date: December 30, 2010
    Applicant: COREL INC.
    Inventors: Yung-Hsiao Lai, Andy Chao Hung
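    The cache-manager behavior described above can be sketched as a fill/drain buffer with power thresholds. The threshold values and wake policy are assumptions for illustration.

```python
class MediaCache:
    """Buffer media chunks; put the storage device to sleep when the buffer
    is full enough, and wake it when the buffer runs low."""
    def __init__(self, sleep_level=8, wake_level=2):
        self.buffer = []
        self.sleep_level = sleep_level
        self.wake_level = wake_level
        self.device_sleeping = False

    def fill(self, chunk):
        self.buffer.append(chunk)
        if len(self.buffer) >= self.sleep_level:
            self.device_sleeping = True    # device enters low-power mode

    def consume(self):
        chunk = self.buffer.pop(0)
        if len(self.buffer) <= self.wake_level:
            self.device_sleeping = False   # wake device to refill the cache
        return chunk
```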
  • Patent number: 7861040
    Abstract: A memory apparatus including: a cache control section to control a cache memory for an auxiliary storage apparatus; a volatile memory; and a nonvolatile memory, wherein the cache memory for the auxiliary storage apparatus is configured to have a volatile cache memory provided in the volatile memory and a nonvolatile cache memory provided in the nonvolatile memory, and wherein the cache control section accesses the nonvolatile cache memory using a write back method.
    Type: Grant
    Filed: November 15, 2007
    Date of Patent: December 28, 2010
    Assignee: Konica Minolta Business Technologies, Inc.
    Inventors: Kenji Okuyama, Tomohiro Suzuki, Yuji Tamura, Tetsuya Ishikawa, Hiroyasu Nishimura, Tomoya Ogawa, Fumikage Uchida, Nao Moromizato, Munetoshi Eguchi
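    The write-back method named in this abstract defers updates to the backing store until eviction or an explicit flush. A minimal sketch (eviction policy and capacity are illustrative assumptions):

```python
class WriteBackCache:
    """Writes land in the cache and are marked dirty; the backing store is
    updated only when a dirty line is evicted or flushed."""
    def __init__(self, backing, capacity=2):
        self.backing = backing
        self.capacity = capacity
        self.lines = {}     # addr -> (value, dirty)

    def write(self, addr, value):
        if addr not in self.lines and len(self.lines) >= self.capacity:
            self._evict()
        self.lines[addr] = (value, True)   # dirty: backing not yet updated

    def _evict(self):
        addr, (value, dirty) = self.lines.popitem()
        if dirty:
            self.backing[addr] = value     # write back only on eviction

    def flush(self):
        while self.lines:
            self._evict()
```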
  • Publication number: 20100325360
    Abstract: In one embodiment, a multi-core processor includes a plurality of processor cores that each includes a cache and that uses a management target area allocated as a main memory in a memory area. The multi-core processor includes a state managing unit and a management-target-area increasing and decreasing unit. The state managing unit manages, for each small area included in the management target area, a first state in which the small area is not allocated to the processor core and a second state in which the small area is allocated to the processor core. The management-target-area increasing and decreasing unit increases and decreases the management target area by increasing and decreasing the small areas in the first state in the management target area.
    Type: Application
    Filed: June 11, 2010
    Publication date: December 23, 2010
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Yumi Yoshitake, Shunsuke Sasaki
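    The two-state area management above can be modeled with a small state table: each small area is either free (first state) or allocated to a core (second state), and the managed region grows or shrinks only through areas in the free state. Granularity and names are assumptions.

```python
class AreaManager:
    """Track small memory areas: None means free (first state), a core id
    means allocated (second state)."""
    def __init__(self):
        self.state = {}   # area_id -> None (free) or core_id

    def grow(self, area_id):
        self.state[area_id] = None          # add an area in the free state

    def allocate(self, area_id, core_id):
        assert self.state[area_id] is None
        self.state[area_id] = core_id

    def free(self, area_id):
        self.state[area_id] = None

    def shrink(self, area_id):
        assert self.state[area_id] is None  # only free areas can be removed
        del self.state[area_id]
```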
  • Publication number: 20100325357
    Abstract: The present invention is directed towards systems and methods for integrating cache managing and application firewall processing in a networked system. In various embodiments, an integrated cache/firewall system comprises an application firewall operating in conjunction with a cache managing system in operation on an intermediary device. In various embodiments, the application firewall processes a received HTTP response to a request by a networked entity serviced by the intermediary device. The application firewall generates metadata from the HTTP response and stores the metadata in cache with the HTTP response. When a subsequent request hits in the cache, the metadata is identified to a user session associated with the subsequent request. In various embodiments, the application firewall can modify a cache-control header of the received HTTP response, and can alter the cookie-setting header of the cached HTTP response.
    Type: Application
    Filed: June 22, 2009
    Publication date: December 23, 2010
    Inventors: Anoop Kandi Reddy, Craig Steven Anderson, Prakash Khemani
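    The miss/hit flow described above (firewall derives metadata on a miss, cached metadata is replayed on a hit) can be sketched as follows. The metadata fields and function names are illustrative assumptions, not Citrix's API.

```python
class CachingFirewall:
    """On a miss: fetch from origin, derive firewall metadata, cache both.
    On a hit: apply the cached metadata to the requesting user's session."""
    def __init__(self):
        self.cache = {}   # url -> (response_body, metadata)

    def handle(self, url, session, fetch_origin):
        if url not in self.cache:
            body = fetch_origin(url)
            metadata = {'length': len(body)}   # stand-in firewall metadata
            self.cache[url] = (body, metadata)
        body, metadata = self.cache[url]
        session.update(metadata)               # identify metadata to session
        return body
```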
  • Publication number: 20100325359
    Abstract: Embodiments for tracing dataflow for a computer program are described. The computer program includes machine instructions that are executable on a microprocessor. A decoding module can be configured to decode machine instructions obtained from a computer memory. In addition, a dataflow primitive engine can receive a decoded machine instruction from the decoding module and generate at least one dataflow primitive for the decoded machine instruction based on a dataflow primitive classification into which the decoded machine instruction is categorized by the dataflow primitive engine. A dataflow state table can be configured to track addressed data locations that are affected by dataflow. The dataflow primitives can be applied to the dataflow state table to update a dataflow status for the addressed data locations affected by the decoded machine instruction.
    Type: Application
    Filed: June 23, 2009
    Publication date: December 23, 2010
    Applicant: Microsoft Corporation
    Inventors: Nitin K. Goel, Mark Wodrich
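    The decode-classify-apply pipeline above resembles taint propagation. A toy sketch under heavy assumptions: each instruction is a `(dest, *sources)` tuple, the primitive propagates "affected by dataflow" status from sources to destination, and the location `'input'` starts tainted.

```python
def trace_dataflow(instructions):
    """Return the set of locations still affected by dataflow from 'input'
    after applying each instruction's primitive to the state table."""
    tainted = {'input'}
    for dest, *sources in instructions:
        if any(s in tainted for s in sources):
            tainted.add(dest)        # primitive: dataflow reaches dest
        else:
            tainted.discard(dest)    # dest overwritten with clean data
    return tainted
```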
  • Publication number: 20100325631
    Abstract: A method and apparatus for dual-target register allocation is described, intended to enable the efficient mapping/renaming of registers associated with instructions within a pipelined microprocessor architecture.
    Type: Application
    Filed: June 15, 2010
    Publication date: December 23, 2010
    Inventors: Rajesh Patel, James Dundas, Adi Yoaz
  • Publication number: 20100325345
    Abstract: To facilitate the management of a storage system that uses a flash memory as a storage area. A controller of the storage system provided with a flash memory chip manages a surplus capacity value of the flash memory chip, and transmits a value based on the surplus capacity value to a management server, on the basis of at least one of a definition of a parity group, a definition of an internal LU, and a definition of a logical unit. The management server displays a state of the storage system by using the received value based on the surplus capacity value.
    Type: Application
    Filed: August 24, 2009
    Publication date: December 23, 2010
    Inventors: Shotaro Ohno, Manabu Obana
  • Publication number: 20100325352
    Abstract: A hierarchically-structured computer mass storage system and method. The mass storage system includes a mass storage memory drive, control logic on the mass storage memory drive that includes a controller and one or more devices for executing a hierarchical storage management technique, a volatile memory cache configured to be accessed by the control logic, and first and second non-volatile storage arrays on the mass storage memory drive comprising, respectively, first and second non-volatile memory devices. The first and second non-volatile memory devices have properties including access times and write endurance, and at least one of the access time and the write endurance of the first non-volatile memory devices is faster or higher, respectively, than that of the second non-volatile memory devices. Desired data storage localities on the storage arrays are determined from access patterns, selectively utilizing the properties of the memory devices to match data storage requirements.
    Type: Application
    Filed: June 15, 2010
    Publication date: December 23, 2010
    Applicant: OCZ TECHNOLOGY GROUP, INC.
    Inventors: Franz Michael Schuette, William J. Allen
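    The tiered-placement decision above can be sketched with a toy policy: the most frequently accessed items go to the faster, higher-endurance array, the rest to the slower one. The frequency criterion is an illustrative stand-in for the access-pattern analysis described.

```python
def place_data(access_counts, fast_capacity):
    """Split items between the fast and slow non-volatile arrays by
    descending access frequency."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    fast = set(ranked[:fast_capacity])
    slow = set(ranked[fast_capacity:])
    return fast, slow
```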
  • Patent number: 7856523
    Abstract: A Random Access Memory (RAM) based Content Addressable Memory (CAM) architecture is disclosed. In an implementation, the CAM architecture includes a CAM data structure associated with a RAM to store one or more tags and associated data values. Each of the tags includes one or more bit fields which are utilized as an index for referencing a look-up table. One or more look-up tables may be realized for supporting memory operations facilitating efficient transfer modes available in the RAM.
    Type: Grant
    Filed: April 11, 2007
    Date of Patent: December 21, 2010
    Assignee: Microsoft Corporation
    Inventor: Ray A. Bittner, Jr.
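    The RAM-backed CAM structure in this abstract can be sketched as below: a bit field of the tag indexes a lookup table (a RAM row), and entries in that row are compared against the full tag. The field width is an illustrative assumption.

```python
class RamCam:
    """Content-addressable lookup built on an indexed table."""
    def __init__(self, index_bits=4):
        self.index_bits = index_bits
        self.table = {}   # index -> list of (tag, value)

    def insert(self, tag, value):
        idx = tag & ((1 << self.index_bits) - 1)   # tag bit field as index
        self.table.setdefault(idx, []).append((tag, value))

    def lookup(self, tag):
        idx = tag & ((1 << self.index_bits) - 1)
        for stored_tag, value in self.table.get(idx, []):
            if stored_tag == tag:                  # associative compare
                return value
        return None
```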
  • Publication number: 20100318632
    Abstract: A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.
    Type: Application
    Filed: June 16, 2009
    Publication date: December 16, 2010
    Applicant: Microsoft Corporation
    Inventors: Won Suk Yoo, Anil K. Ruia, Himanshu Patel, Ning Lin, Chittaranjan Pattekar
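    The chunk arithmetic described above (map a byte range to overlapped chunks, fetch only the missing ones) can be sketched as follows. The chunk size and function names are assumptions for illustration.

```python
CHUNK_SIZE = 256  # assumed chunk size in bytes

def chunks_for_range(start, end):
    """Chunk indices overlapped by the inclusive byte range [start, end]."""
    return list(range(start // CHUNK_SIZE, end // CHUNK_SIZE + 1))

def serve_range(cache, start, end, fetch_chunk):
    """Answer a byte-range request from cached chunks, requesting only the
    overlapped chunks not already cached; `fetch_chunk` stands in for the
    byte-range request to the origin server."""
    needed = chunks_for_range(start, end)
    for idx in needed:
        if idx not in cache:
            cache[idx] = fetch_chunk(idx)   # origin request for missing chunk
    return [cache[idx] for idx in needed]
```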