Cache Bypassing Patents (Class 711/138)
  • Patent number: 8626866
    Abstract: A network caching system has a multi-protocol caching filer coupled to an origin server to provide storage virtualization of data served by the filer in response to data access requests issued by multi-protocol clients over a computer network. The multi-protocol caching filer includes a file system configured to manage a sparse volume that “virtualizes” a storage space of the data to thereby provide a cache function that enables access to data by the multi-protocol clients. To that end, the caching filer further includes a multi-protocol engine configured to translate the multi-protocol client data access requests into generic file system primitive operations executable by both the caching filer and the origin server.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: January 7, 2014
    Assignee: NetApp, Inc.
    Inventors: Jason Ansel Lango, Robert M. English, Paul Christopher Eastham, Qinghua Zheng, Brian Mederic Quirion, Peter Griess, Matthew Benjamin Amdur, Kartik Ayyar, Robert Lieh-Yuan Tsai, David Grunwald, J. Chris Wagner, Emmanuel Ackaouy, Ashish Prakash
  • Patent number: 8612685
    Abstract: A processor having a cache memory provided therein controls use of the cache memory based on operation mode information, which changeably designates whether the cache memory is to be used, and on a cache-memory-use designation in an access instruction word of a program, at the time the running program accesses main storage memory.
    Type: Grant
    Filed: October 14, 2008
    Date of Patent: December 17, 2013
    Assignee: NEC Corporation
    Inventor: Shintaro Momose
  • Patent number: 8601213
    Abstract: A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated. A metric is calculated for the data block as a function of the residency time. The metric may further be calculated as a function of the data block size. One or more data blocks stored in cache memory are evaluated by comparing a respective metric of the one or more data blocks with the metric of the data block to be stored. A determination is then made to either store the data block on the disk drive or flush the one or more data blocks from the cache memory and store the data block in the cache memory. In this manner, the cache memory may be more efficiently utilized by storing smaller data blocks with shorter residency times and flushing larger data blocks with longer residency times from the cache memory.
    Type: Grant
    Filed: November 3, 2008
    Date of Patent: December 3, 2013
    Assignee: Teradata US, Inc.
    Inventors: Douglas Brown, John Mark Morris
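    Example: A minimal Python sketch of the residency-time policy described in the abstract above. The metric form (the product of residency time and block size), the flush loop, and the helper names are illustrative assumptions; the abstract leaves the exact function unspecified.
      def metric(block):
          # Larger blocks expected to stay resident longer score higher and are
          # therefore better flush candidates.
          return block["residency_time"] * block["size"]

      def admit(new_block, cache, capacity):
          """Decide whether to cache new_block or send it straight to disk."""
          new_m = metric(new_block)
          used = sum(b["size"] for b in cache)
          # Flush cached blocks whose metric exceeds the newcomer's until it fits.
          victims = sorted((b for b in cache if metric(b) > new_m),
                           key=metric, reverse=True)
          while victims and used + new_block["size"] > capacity:
              victim = victims.pop(0)
              cache.remove(victim)
              used -= victim["size"]
          if used + new_block["size"] <= capacity:
              cache.append(new_block)
              return "cached"
          return "disk"

      blocks = [{"size": 64, "residency_time": 50.0}, {"size": 8, "residency_time": 2.0}]
      print(admit({"size": 4, "residency_time": 1.0}, blocks, capacity=72))  # cached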
  • Patent number: 8595451
    Abstract: A method for caching data in a storage medium implementing tiered data structures may include storing a first portion of critical data in a first storage medium at the instruction of a storage control module. The first portion of critical data may be separated into data having different priority levels based upon at least one data utilization characteristic associated with a file system implemented by the storage control module. The method may also include storing a second portion of data in a second storage medium at the instruction of the storage control module. The second storage medium may have at least one performance, reliability, or security characteristic different from the first storage medium.
    Type: Grant
    Filed: November 4, 2010
    Date of Patent: November 26, 2013
    Assignee: LSI Corporation
    Inventors: Brian McKean, Mark Ish
  • Patent number: 8578097
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: November 5, 2013
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8578089
    Abstract: Implementations described and claimed herein provide a method and system for comparing a storage location related to a new write command on a storage device with storage locations of a predetermined number of write commands stored in a first table to determine frequency of write commands to the storage location. If the frequency is determined to be higher than a first threshold, the data related to the write command is stored in a write cache.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: November 5, 2013
    Assignee: Seagate Technology LLC
    Inventors: Ron Watts, Jack Lakey
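    Example: A minimal Python sketch of the hot-write detection described in the abstract above, assuming a fixed-size table of recent logical block addresses and a simple match count; the table size, threshold value, and names are illustrative.
      from collections import deque

      TABLE_SIZE = 16       # number of recent write commands kept in the first table
      FREQ_THRESHOLD = 3    # the "first threshold" from the abstract

      recent_writes = deque(maxlen=TABLE_SIZE)

      def handle_write(lba, data, write_cache, disk):
          # Frequency of earlier writes to the same location within the table window.
          freq = sum(1 for prev in recent_writes if prev == lba)
          recent_writes.append(lba)
          if freq > FREQ_THRESHOLD:
              write_cache[lba] = data      # hot location: keep in the write cache
          else:
              disk[lba] = data             # cold location: write through to media

      write_cache, disk = {}, {}
      for i in range(6):
          handle_write(100, "rev%d" % i, write_cache, disk)
      print(sorted(write_cache), sorted(disk))   # [100] [100]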
  • Patent number: 8578111
    Abstract: A device includes a data collector module, a policy module, and an optimizer module. The data collector module is to collect values for a plurality of device parameters. The policy module is to receive the values for the plurality of device parameters and update a policy table. The optimizer module is to receive the policy table from the policy module, determine, based on the policy table, whether to proceed with buffered input/output or un-buffered input/output for a read call, and instruct a read module of a backup application to proceed with either buffered input/output or un-buffered input/output for the read call.
    Type: Grant
    Filed: October 11, 2011
    Date of Patent: November 5, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Venkatesh Marisamy, Kanthimathi Vedaraman
  • Patent number: 8572327
    Abstract: A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
    Type: Grant
    Filed: August 19, 2011
    Date of Patent: October 29, 2013
    Assignee: Google Inc.
    Inventors: Timo Burkard, David Presotto
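    Example: A minimal Python sketch of the re-request-interval admission policy described in the abstract above. The class layout, the eviction of the entry with the longest interval, and the names are illustrative assumptions; the abstract only requires the comparison of durations.
      import time

      class IntervalCache:
          def __init__(self, capacity):
              self.capacity = capacity
              self.items = {}        # key -> (value, interval between its last two requests)
              self.last_seen = {}    # key -> time of the most recent request

          def request(self, key, fetch):
              now = time.time()
              prev = self.last_seen.get(key)
              self.last_seen[key] = now
              if key in self.items:
                  return self.items[key][0]
              value = fetch(key)
              if prev is None:
                  return value                 # never requested before: do not cache
              interval = now - prev
              if len(self.items) >= self.capacity:
                  # Admit only if some cached item waited longer between requests.
                  victim, (_, worst) = max(self.items.items(), key=lambda kv: kv[1][1])
                  if interval >= worst:
                      return value             # newcomer is no better; leave cache alone
                  del self.items[victim]
              self.items[key] = (value, interval)
              return value

      c = IntervalCache(capacity=2)
      c.request("a", str.upper)   # first request: served but not cached
      c.request("a", str.upper)   # second request: interval known, now cached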
  • Patent number: 8566607
    Abstract: In a first aspect, a first cryptography method is provided. The first method includes the steps of (1) in response to receiving a request to perform a first operation on data in a first memory cacheline, accessing data associated with the first memory cacheline; (2) performing cryptography on data of the first memory cacheline when necessary; and (3) speculatively accessing data associated with a second memory cacheline based on the first memory cacheline before receiving a request to perform an operation on data in the second memory cacheline. Numerous other aspects are provided.
    Type: Grant
    Filed: August 26, 2005
    Date of Patent: October 22, 2013
    Assignee: International Business Machines Corporation
    Inventors: William T. Flynn, David A. Shedivy
  • Patent number: 8566531
    Abstract: A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
    Type: Grant
    Filed: August 21, 2009
    Date of Patent: October 22, 2013
    Assignee: Google Inc.
    Inventors: Timo Burkard, David Presotto
  • Patent number: 8560779
    Abstract: A method and structure for processing an application program on a computer. In a memory of the computer executing the application, an in-memory cache structure is provided for normally temporarily storing data produced in the processing. An in-memory storage outside the in-memory cache structure is provided in the memory, for by-passing the in-memory cache structure for temporarily storing data under a predetermined condition. A sensor detects an amount of usage of the in-memory cache structure used to store data during the processing. When it is detected that the amount of usage exceeds a predetermined threshold, the processing is controlled so that the data produced in the processing is stored in the in-memory storage rather than in the in-memory cache structure.
    Type: Grant
    Filed: May 20, 2011
    Date of Patent: October 15, 2013
    Assignee: International Business Machines Corporation
    Inventors: Claris Castillo, Michael J. Spreitzer, Malgorzata Steinder
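    Example: A minimal Python sketch of the cache-overflow behaviour described in the abstract above, with a byte count standing in for the usage sensor; the threshold and container names are illustrative assumptions.
      import sys

      class BypassingStore:
          def __init__(self, threshold_bytes):
              self.threshold_bytes = threshold_bytes
              self.cache = {}      # the normal in-memory cache structure
              self.overflow = {}   # in-memory storage outside the cache structure

          def _usage(self):
              # The "sensor": how much memory the cache currently holds.
              return sum(sys.getsizeof(v) for v in self.cache.values())

          def put(self, key, value):
              if self._usage() > self.threshold_bytes:
                  self.overflow[key] = value   # bypass the cache under memory pressure
              else:
                  self.cache[key] = value

          def get(self, key):
              return self.cache.get(key, self.overflow.get(key))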
  • Publication number: 20130246712
    Abstract: Various embodiments of the invention concern methods and apparatuses for power and time efficient load handling. A compiler may identify producer loads, consumer reuse loads, consumer forwarded loads, and producer/consumer hybrid loads. Based on this identification, performance of the load may be efficiently directed to a load value buffer, store buffer, data cache, or elsewhere. Consequently, accesses to cache are reduced, through direct loading from load value buffers and store buffers, thereby efficiently processing the loads.
    Type: Application
    Filed: May 3, 2013
    Publication date: September 19, 2013
    Inventors: Wei Liu, Youfeng Wu, Christopher Wilkerson, Herbert Hum
  • Patent number: 8539144
    Abstract: A nonvolatile semiconductor memory device includes a memory cell array having a plurality of banks and a cache block corresponding to each of the plurality of banks. The cache block has a predetermined data storage capacity. A page buffer is included which corresponds to each of the plurality of banks. A programming circuit programs all of the plurality of banks except a last of said banks with page data. The page data is loaded through each page buffer and programmed into each cache block such that when page data for the last bank is loaded into the page buffer, the loaded page data and the page data programmed into the respective cache blocks are programmed into respective corresponding banks.
    Type: Grant
    Filed: July 30, 2012
    Date of Patent: September 17, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong-Hyuk Chae, Young-Ho Lim
  • Patent number: 8527711
    Abstract: A method to preview new cacheable content may include adding a skip-cache element to a request to preview the new cacheable content before replacing any existing content in a cache or caching the new content. The method may also include bypassing cache processing for the request in response to the request including the skip-cache element and the skip-cache element being defined in a cache policy.
    Type: Grant
    Filed: December 27, 2006
    Date of Patent: September 3, 2013
    Assignee: International Business Machines Corporation
    Inventors: Madhu K. Chetuparambil, Ching-Chi A. Chow, Darl Crick, Andrew J. Ivory, Nirmala Kodali
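    Example: A minimal Python sketch of the skip-cache preview flow described in the abstract above. The header name X-Skip-Cache and the policy dictionary are illustrative assumptions; the abstract only requires a skip-cache element that is defined in the cache policy.
      def serve(request, cache, cache_policy, generate):
          skip = (request.get("headers", {}).get("X-Skip-Cache") is not None
                  and cache_policy.get("skip_cache_enabled", False))
          url = request["url"]
          if not skip and url in cache:
              return cache[url]                 # normal cache hit
          response = generate(request)          # render the (possibly new) content
          if not skip:
              cache[url] = response             # only cache when not previewing
          return response

      cache, policy = {}, {"skip_cache_enabled": True}
      preview = {"url": "/page", "headers": {"X-Skip-Cache": "1"}}
      serve(preview, cache, policy, lambda r: "new draft")
      print(cache)   # {}: the existing cache content is not replaced by the preview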
  • Patent number: 8516577
    Abstract: In one embodiment, the present invention includes a method for identifying a termination sequence for an atomic memory operation executed by a first thread, associating a timer with the first thread, and preventing the first thread from execution of a memory cluster operation after completion of the atomic memory operation until a prevention window has passed. This method may be executed by regulation logic associated with a memory execution unit of a processor, in some embodiments. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 22, 2010
    Date of Patent: August 20, 2013
    Assignee: Intel Corporation
    Inventors: Michael S. Bair, David W. Burns, Robert S. Chappell, Prakash Math, Leslie A. Ong, Pankaj Raghuvanshi, Shlomo Raikin, Raanan Sade, Michael D. Tucknott, Igor Yanover
  • Publication number: 20130212335
    Abstract: A storage system is migrated without stopping service provision by a host computer. In the migration-source storage system, data in the cache memory is destaged, and data subsequently received from the host computer is written directly to a logical unit, bypassing the cache memory. In the migration-destination storage system, communication with the migration-source storage system is performed to set configuration information for the logical unit being migrated into a logical unit management table and to set the write mode for the cache memory to a cache-bypass mode. The migration-source storage system then blocks its path to the host computer. After receiving a report of the blocked path from the migration-source storage system, the migration-destination storage system opens a path between itself and the host computer.
    Type: Application
    Filed: February 10, 2012
    Publication date: August 15, 2013
    Inventors: Mika Teranishi, Hiroji Shibuya, Shunji Murayama, Toshio Kimura, Kazushige Nagamatsu
  • Patent number: 8504777
    Abstract: A method includes determining if a data processing instruction is a decorated access instruction with cache bypass, and determining if the data processing instruction generates a cache hit to a cache. When the data processing instruction is determined to be a decorated access instruction with cache bypass and the data processing instruction is determined to generate a cache hit, the method further includes invalidating a cache entry of the cache associated with the cache hit, and performing, by a memory controller of the memory, a decoration operation specified by the data processing instruction on a location in the memory designated by a target address of the data processing instruction, wherein performing the decorated access includes the memory controller reading a value from the location in memory, modifying the value to generate a modified value, and writing the modified value to the location.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: August 6, 2013
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
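    Example: A minimal Python sketch of a decorated access with cache bypass as described in the abstract above, assuming a dict-backed memory and an increment as the decoration operation; all names are illustrative.
      def decorated_access(addr, cache, memory, decoration=lambda v: v + 1):
          if addr in cache:
              del cache[addr]                  # cache hit: invalidate, do not update the cache
          value = memory.get(addr, 0)          # memory controller reads the target location
          memory[addr] = decoration(value)     # modifies the value and writes it back
          return memory[addr]

      memory, cache = {0x100: 7}, {0x100: 7}
      print(decorated_access(0x100, cache, memory), cache)   # 8 {}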
  • Patent number: 8499119
    Abstract: Aspects relate to systems and methods for providing the ability to customize content delivery. A device can cache multiple presentations. The device can establish a cache depth upon initiation of the subscription service. The device can provide an interface to select a cache depth. The cache depth can be the number of presentations the device will maintain on the device at a given time.
    Type: Grant
    Filed: April 6, 2009
    Date of Patent: July 30, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Sajith Balraj, An Mei Chen
  • Patent number: 8489820
    Abstract: A network storage server includes a main buffer cache to buffer writes requested by clients before committing them to primary persistent storage. The server further uses a secondary cache, implemented as low-cost, solid-state memory, such as flash memory, to store data evicted from the main buffer cache or data read from the primary persistent storage. To prevent bursts of writes to the secondary cache, data is copied from the main buffer cache to the secondary cache speculatively, before there is a need to evict data from the main buffer cache. Data can be copied to the secondary cache as soon as the data is marked as clean in the main buffer cache. Data can be written to secondary cache at a substantially constant rate, which can be at or close to the maximum write rate of the secondary cache.
    Type: Grant
    Filed: March 18, 2008
    Date of Patent: July 16, 2013
    Assignee: NetApp, Inc.
    Inventor: Daniel J. Ellard
  • Patent number: 8473682
    Abstract: According to one embodiment, a cache unit transfers data from a memory connected to the cache unit via a bus incompatible with critical word first (CWF) to an L1-cache having a first line size and connected to the cache unit via a bus compatible with CWF. The unit includes cache and un-cache controllers. The cache controller includes an L2-cache and a request converter. The L2-cache has a second line size greater than or equal to the first line size. The request converter converts a first refill request into a second refill request when a head address of a burst transfer of the first refill request is in the L2-cache. The un-cache controller transfers the second refill request to the memory, receives data to be processed corresponding to the second refill request from the memory, and transfers the received data to the L1-cache.
    Type: Grant
    Filed: November 24, 2010
    Date of Patent: June 25, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Soichiro Hosoda
  • Patent number: 8458755
    Abstract: Media content from a content provider is delivered, based on a predetermined set of constraints, to a local cache of a user device before the media is viewed. A client asset manager process resides in the user device, an asset list resides at the content provider site, and the media assets are located at a remote site.
    Type: Grant
    Filed: August 14, 2012
    Date of Patent: June 4, 2013
    Assignee: Disney Enterprises, Inc.
    Inventors: Scott F. Watson, Eric C. Haseltine, Eric Freeman, Elisabeth M. Freeman, Aaron P. LaBerge, Adam T. Fritz
  • Patent number: 8443160
    Abstract: With a computer system having a host computer and first and second storage apparatuses, the second storage apparatus virtualizes first logical units in the first storage apparatus and provides them as second logical units to the host computer, collects configuration information about each first logical unit, and sets each piece of the collected configuration information to each corresponding second logical unit. The host computer adds a path to the second logical units and deletes a path to the first logical units. The second storage apparatus copies data stored in the first logical units to a storage area provided by the second storage device and associates the storage area with the second logical units.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: May 14, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Hideo Saito, Yoshiaki Eguchi, Masayuki Yamamoto, Akira Yamamoto
  • Patent number: 8433854
    Abstract: In some embodiments, an electronic system may include a cache located between a mass storage and a system memory, and code stored on the electronic system to prevent storage of stream data in the cache and to send the stream data directly between the system memory and the mass storage based on a comparison of first metadata of a first request for first information and pre-boot stream information stored in a previous boot context. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 25, 2008
    Date of Patent: April 30, 2013
    Assignee: Intel Corporation
    Inventors: R. Scott Tetrick, Dale Juenemann, Jordan Howes, Jeanna Matthews, Steven Wells, Glenn Hinton, Oscar Pinto
  • Publication number: 20130103909
    Abstract: In one embodiment, a system comprises a memory and a memory controller that provides a cache access path to the memory and a bypass-cache access path to the memory, receives requests to read graph data from the memory on the bypass-cache access path and receives requests to read non-graph data from the memory on the cache access path. A method comprises receiving a request at a memory controller to read graph data from a memory on a bypass-cache access path, receiving a request at the memory controller to read non-graph data from the memory through a cache access path, and arbitrating, in the memory controller, among the requests.
    Type: Application
    Filed: October 25, 2011
    Publication date: April 25, 2013
    Applicant: Cavium, Inc.
    Inventors: Jeffrey Pangborn, Gregg A. Bouchard, Rajan Goyal, Richard E. Kessler, Aseem Maheshwari
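    Example: A minimal Python sketch of the dual-path memory controller described in the abstract above, with a simple preference-ordered pick standing in for the hardware arbitration; the class and queue names are illustrative assumptions.
      from collections import deque

      class MemoryController:
          def __init__(self, memory):
              self.memory = memory
              self.cache = {}            # line cache used only by the cached path
              self.cached_q = deque()    # cache access path (non-graph data)
              self.bypass_q = deque()    # bypass-cache access path (graph data)

          def submit(self, addr, is_graph):
              (self.bypass_q if is_graph else self.cached_q).append(addr)

          def arbitrate_one(self, prefer_bypass):
              """Serve one pending request, preferring the indicated path."""
              order = ([self.bypass_q, self.cached_q] if prefer_bypass
                       else [self.cached_q, self.bypass_q])
              for q in order:
                  if q:
                      addr = q.popleft()
                      if q is self.cached_q:
                          return self.cache.setdefault(addr, self.memory[addr])
                      return self.memory[addr]   # graph reads never touch the cache
              return None

      mc = MemoryController({0: "graph node", 1: "table row"})
      mc.submit(0, is_graph=True)
      mc.submit(1, is_graph=False)
      print(mc.arbitrate_one(prefer_bypass=True), mc.cache)   # graph node {}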
  • Patent number: 8424001
    Abstract: A cache image including only cache entries with valid durations of at least a configured deployment date for a virtual machine image is prepared via an application server for the virtual machine image. The virtual machine image is deployed to at least one other application server as a virtual machine with the cache image including only the cache entries with the valid durations of at least the configured deployment date for the virtual machine image.
    Type: Grant
    Filed: March 29, 2012
    Date of Patent: April 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Erik J. Burckart, Andrew J. Ivory, Todd E. Kaplinger, Aaron K. Shook
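    Example: A minimal Python sketch of the cache-image preparation described in the abstract above; the expires_at and deploy_at field names are illustrative assumptions.
      from datetime import datetime, timedelta

      def prepare_cache_image(cache_entries, deploy_at):
          """Keep only entries whose validity lasts at least until the deployment date."""
          return {k: v for k, v in cache_entries.items() if v["expires_at"] >= deploy_at}

      now = datetime.now()
      entries = {
          "config":  {"value": 1, "expires_at": now + timedelta(days=30)},
          "session": {"value": 2, "expires_at": now + timedelta(minutes=5)},
      }
      image = prepare_cache_image(entries, deploy_at=now + timedelta(days=1))
      print(sorted(image))   # ['config']: short-lived entries are left out of the image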
  • Patent number: 8407399
    Abstract: Methods, apparatus and computer medium for enforcing one or more cache management policies are disclosed herein. In some embodiments, a flash memory of a storage device includes a plurality of flash memory dies, each flash memory die including a respective cache storage area and a respective main storage area. A determination is made, for data that is received from an external host device, as to which main storage area the received data is addressed, thereby specifying one of the plurality of flash memory dies as a target die for the received data. Whenever the received data is written into a cache storage area before being written into a main storage area, the received data is written into the cache storage area of the specified target die.
    Type: Grant
    Filed: October 29, 2008
    Date of Patent: March 26, 2013
    Assignee: SanDisk IL Ltd.
    Inventors: Menahem Lasser, Itshak Afriat, Opher Lieber
  • Patent number: 8386718
    Abstract: According to embodiments described in the specification, a method and apparatus for managing memory in a mobile electronic device are provided. The method comprises: receiving a request to install an application; receiving at least one indication of data intended to be maintained in a shared cache; determining, based on the at least one indication, whether data corresponding to the intended data exists in the shared cache; upon a negative determination, writing the intended data to the shared cache; and repeating the receiving at least one indication, the determining and the writing for at least one additional application.
    Type: Grant
    Filed: November 18, 2009
    Date of Patent: February 26, 2013
    Assignee: Research In Motion Limited
    Inventor: Ankur Aggarwal
  • Patent number: 8381098
    Abstract: A method, computer program product, and system for webpage request handling is described. A method may comprise recording, in a memory, a change time for each of a plurality of elements of a website available from an origin server, each time a change to any one of the plurality of elements occurs. The method may further comprise updating a system-last-modified time of the website to a latest change time.
    Type: Grant
    Filed: March 29, 2010
    Date of Patent: February 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Mark Carl Hampton, Eric Martinez de Morentin, Kenneth Sabir
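    Example: A minimal Python sketch of the change-time bookkeeping described in the abstract above, with a freshness check added to show how the site-wide time might be used; the function and variable names are illustrative assumptions.
      import time

      changes = {}                 # element -> time of its latest change
      system_last_modified = 0.0   # site-wide last-modified time

      def record_change(element):
          global system_last_modified
          now = time.time()
          changes[element] = now                        # per-element change time
          system_last_modified = max(system_last_modified, now)

      def is_fresh(if_modified_since):
          # A client copy newer than every recorded change can be served from cache.
          return if_modified_since >= system_last_modified

      record_change("header.css")
      print(is_fresh(time.time() - 60))   # False: the site changed after the client's copy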
  • Patent number: 8369971
    Abstract: A media system is disclosed that uses preemptive recording of media files to reduce playback latency when media tracks are subsequently selected for playback during the recording process. The media system comprises a primary storage device capable of storing media files and a secondary storage device capable of reading digital media files from a removable storage medium. The system also includes a media player capable of playing media files stored on the primary storage device and a recorder that is connected to read digital media data from the secondary storage device. The recorder stores media files corresponding to the digital media data of the removable storage medium on the primary storage device.
    Type: Grant
    Filed: April 11, 2006
    Date of Patent: February 5, 2013
    Assignee: Harman International Industries, Incorporated
    Inventors: Nicholas Murrells, Mark Sears
  • Patent number: 8359574
    Abstract: A development application can provide an integrated development environment that interfaces with one or more data sources that will be used by the application under development. Sample data from the source(s) can be used to aid the coding process and/or testing the application under development. The development application can maintain a cache to support offline access of data from the source(s) to allow development to continue when a source cannot be accessed and/or when a developer wishes not to access a particular source. Code elements can be included in the application under development to cause the application under development to access the cached data based on settings in the development application. The added code elements can automatically be removed when the application is released.
    Type: Grant
    Filed: January 16, 2009
    Date of Patent: January 22, 2013
    Assignee: Adobe Systems Incorporated
    Inventors: Sunil Bannur, Mayank Kumar
  • Patent number: 8352680
    Abstract: A method and system for file-system based caching can be used to improve efficiency and security at network sites. In one set of embodiments, the delivery of content and the storing of content component(s) formed during generation of the content may be performed by different software components. Content that changes at a relatively high frequency or is likely to be regenerated between requests may not have some or all of its corresponding files cached. Additionally, extra white space may be removed before storing to reduce the file size. File mapping may be performed to ensure that a directory within the cache will have an optimal number of files. Security at the network site may be increased by using an internally generated filename that is not used or seen by the client computer. Many variations may be used in achieving any one or more of the advantages described herein.
    Type: Grant
    Filed: September 9, 2011
    Date of Patent: January 8, 2013
    Assignee: Open Text S.A.
    Inventors: Conleth S. O'Connell, Jr., Maxwell J. Berenson, N. Issac Rajkumar
  • Patent number: 8327078
    Abstract: A computer-implemented method for managing data transfer in a multi-level memory hierarchy includes receiving a fetch request for allocation of data in a higher level memory, determining whether a data bus between the higher level memory and a lower level memory is available, bypassing an intervening memory between the higher level memory and the lower level memory when it is determined that the data bus is available, and transferring the requested data directly from the higher level memory to the lower level memory.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: December 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
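    Example: A minimal Python sketch of the level-skipping transfer described in the abstract above, with a boolean flag standing in for the data-bus availability check; the names are illustrative assumptions.
      def fetch(addr, higher, intervening, lower, bus_available):
          data = higher[addr]
          if bus_available:
              lower[addr] = data           # direct transfer; the intervening level is skipped
          else:
              intervening[addr] = data     # staged through the intervening memory
              lower[addr] = intervening[addr]
          return data

      L3, L2, L1 = {0x40: b"line"}, {}, {}
      fetch(0x40, L3, L2, L1, bus_available=True)
      print(0x40 in L2, 0x40 in L1)   # False True: the middle level was bypassed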
  • Patent number: 8326895
    Abstract: A computer readable storage medium for associating a phase with an activation of a computer program that supports garbage collection includes: a plurality of stacks, each stack including at least one stack frame that includes an activation count; and a processor with logic for performing steps of: zeroing the activation count whenever the program creates a new stack frame and after garbage collection is performed; determining whether an interval has transpired during program execution; examining each stack frame's content and incrementing the activation count for each frame of the stacks once the interval has transpired; detecting the phase whose activation count is non-zero and associating the phase with the activation; and ensuring that when the phase ends, an action is immediately performed.
    Type: Grant
    Filed: May 22, 2010
    Date of Patent: December 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Stephen J Fink, David P. Grove
  • Patent number: 8321633
    Abstract: A memory card, connected to a host, includes a NAND flash memory and a memory controller. The NAND flash memory includes multiple pages, and each page includes multiple sectors. The memory controller receives sector data and a corresponding sector address from the host. The memory controller enables the sector data to be transferred to the NAND flash memory over a first data bus, via a buffer memory, when the sector address is an address for accessing a first sector in a selected page. The memory controller enables the sector data to be transferred to the NAND flash memory over a second data bus, bypassing the buffer memory, when the sector address is an address for accessing a sector other than the first sector in the selected page.
    Type: Grant
    Filed: August 3, 2007
    Date of Patent: November 27, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Kyong-Ae Kim
  • Publication number: 20120297145
    Abstract: A method and structure for processing an application program on a computer. In a memory of the computer executing the application, an in-memory cache structure is provided for normally temporarily storing data produced in the processing. An in-memory storage outside the in-memory cache structure is provided in the memory, for by-passing the in-memory cache structure for temporarily storing data under a predetermined condition. A sensor detects an amount of usage of the in-memory cache structure used to store data during the processing. When it is detected that the amount of usage exceeds a predetermined threshold, the processing is controlled so that the data produced in the processing is stored in the in-memory storage rather than in the in-memory cache structure.
    Type: Application
    Filed: May 20, 2011
    Publication date: November 22, 2012
    Inventors: Claris Castillo, Michael J. Spreitzer, Malgorzata Steinder
  • Patent number: 8316185
    Abstract: A cached memory system that can handle high-rate input data and ensure that an embedded DSP can meet real-time constraints is described. The cached memory system includes a cache memory located close to a processor core, an on-chip memory at the next higher memory level, and an external main memory at the topmost memory level. A cache controller handles paging of instructions and data between the cache memory and the on-chip memory for cache misses. A direct memory exchange (DME) controller handles user-controlled paging between the on-chip memory and the external memory. A user/programmer can arrange to have the instructions and data required by the processor core to be present in the on-chip memory well in advance of when they are actually needed by the processor core.
    Type: Grant
    Filed: June 3, 2010
    Date of Patent: November 20, 2012
    Assignee: QUALCOMM Incorporated
    Inventors: Gilbert Christopher Sih, Charles E. Sakamaki, De D. Hsu, Jian Wei, Richard Higgins
  • Patent number: 8312217
    Abstract: A method for storing data comprises the steps of: defining one or more intervals for one or more virtual disks, wherein each of the intervals has data; receiving a storage command in a cache, wherein the command has a logical address and a data block; determining a respective interval for the data block corresponding to the logical address of the data block; determining whether the data of the respective interval is to be written to a corresponding storage unit; and receiving a next storage command.
    Type: Grant
    Filed: December 30, 2009
    Date of Patent: November 13, 2012
    Assignee: Rasilient Systems, Inc.
    Inventors: Yee-Hsiang Sean Chang, Yiqiang Ding, Bo Leng
  • Patent number: 8301715
    Abstract: A host device is provided comprising an interface configured to communicate with a storage device having a public memory area and a private memory area, wherein the public memory area stores a virtual file that is associated with content stored in the private memory area. The host device also comprises a cache, a host application, and a server. The server is configured to receive a request for the virtual file from the host application, send a request to the storage device for the virtual file, receive the content associated with the virtual file from the private memory area of the storage device, wherein the content is received by bypassing the cache, generate a response to the request from the host application, the response including the content, and send the response to the host application. In one embodiment, the server is a hypertext transfer protocol (HTTP) server.
    Type: Grant
    Filed: June 29, 2010
    Date of Patent: October 30, 2012
    Assignee: SanDisk IL Ltd.
    Inventors: Eyal Ittah, Judah Gamliel Hahn, Yehuda Drori, Joseph Meza, In-Soo Yoon, Ofir Cooper
  • Patent number: 8301838
    Abstract: An approach is provided for providing an application-level cache. A caching application configures at least one memory of a mobile terminal into an application-level cache with a locked region and a floating region. The caching application then causes, at least in part, actions that result in caching, into each of the locked region and the floating region, of data items that are anticipated to be requested via an application of the mobile terminal.
    Type: Grant
    Filed: November 4, 2009
    Date of Patent: October 30, 2012
    Assignee: Nokia Corporation
    Inventors: Nikolai Grigoriev, Sylvain Legault
  • Patent number: 8301694
    Abstract: A host device is provided comprising an interface configured to communicate with a storage device having a public memory area and a private memory area, wherein the public memory area stores a virtual file that is associated with content stored in the private memory area. The host device also comprises a cache, a host application, and a server. The server is configured to receive a request for the virtual file from the host application, send a request to the storage device for the virtual file, receive the content associated with the virtual file from the private memory area of the storage device, wherein the content is received by bypassing the cache, generate a response to the request from the host application, the response including the content, and send the response to the host application. In one embodiment, the server is a hypertext transfer protocol (HTTP) server.
    Type: Grant
    Filed: June 9, 2010
    Date of Patent: October 30, 2012
    Assignee: SanDisk IL Ltd.
    Inventors: Eyal Ittah, Judah Gamliel Hahn, Yehuda Drori, Joseph Meza, In-Soo Yoon
  • Patent number: 8291173
    Abstract: A memory hub includes first and second link interfaces for coupling to respective data busses, a data path coupled to the first and second link interfaces and through which data is transferred between the first and second link interfaces, and further includes a write bypass circuit coupled to the data path to couple write data on the data path and temporarily store the write data to allow read data to be transferred through the data path while the write data is temporarily stored. A method for writing data to a memory location in a memory system is provided which includes accessing read data in the memory system, providing write data to the memory system, and coupling the write data to a register for temporary storage. The write data is recoupled to the memory bus and written to the memory location following provision of the read data.
    Type: Grant
    Filed: July 20, 2010
    Date of Patent: October 16, 2012
    Assignee: Micron Technology, Inc.
    Inventors: Douglas A. Larson, Jeffrey J. Cronin
  • Patent number: 8291169
    Abstract: A method of providing history based done logic includes receiving a cache line in an L2 cache; determining if the cache line has a history of access at least three times on a previous call into the L2 cache; providing the cache line directly to a processor if the history of access was less than the at least three times; and loading the cache line into an L1 cache if the history of access was the at least three times.
    Type: Grant
    Filed: May 28, 2009
    Date of Patent: October 16, 2012
    Assignee: International Business Machines Corporation
    Inventor: David A. Luick
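    Example: A minimal Python sketch of the history-based fill policy described in the abstract above, assuming a per-address access counter; the counter structure and threshold name are illustrative assumptions.
      REUSE_THRESHOLD = 3

      def l2_lookup(addr, l1, l2, history):
          value = l2[addr]
          if history.get(addr, 0) >= REUSE_THRESHOLD:
              l1[addr] = value           # proven reuse: install the line in the L1 cache
          history[addr] = history.get(addr, 0) + 1
          return value                   # otherwise the line goes directly to the processor

      l1, l2, history = {}, {0x80: "data"}, {}
      for _ in range(4):
          l2_lookup(0x80, l1, l2, history)
      print(0x80 in l1)   # True: the line earned its way into L1 on the fourth access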
  • Patent number: 8285971
    Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, an instruction sequencing unit that fetches instructions for execution by the at least one execution unit, and an address generation accelerator. The address generation accelerator, responsive to an initiation signal received from the instruction sequencing unit, computes and outputs first and second effective addresses of operands of an operation.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: October 9, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy
  • Publication number: 20120254550
    Abstract: An apparatus and method are described for implementing an exclusive lower level cache (LLC) policy within a computer processor. For example, one embodiment of a computer processor comprises: a mid-level cache circuit (MLC) for storing a first set of cache lines containing instructions and/or data; a lower level cache circuit (LLC) for storing a second set of cache lines of instructions and/or data; and an insertion circuit for implementing a policy for inserting or replacing cache lines within the LLC based on values of use recency and use frequency associated with the lines.
    Type: Application
    Filed: April 1, 2011
    Publication date: October 4, 2012
    Inventors: Jayesh Gaur, Mainak Chaudhuri, Sreenivas Subramoney
  • Patent number: 8281106
    Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, and an instruction sequencing unit that fetches instructions for execution by the execution unit. The processor further includes an operand data structure and an address generation accelerator. The operand data structure specifies a first relationship between addresses of sequential accesses within a first address region and a second relationship between addresses of sequential accesses within a second address region. The address generation accelerator computes a first address of a first memory access in the first address region by reference to the first relationship and a second address of a second memory access in the second address region by reference to the second relationship.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: October 2, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy
  • Patent number: 8272020
    Abstract: Media content from a content provider is delivered, based on a predetermined set of constraints, to a local cache of a user device before the media is viewed. A client asset manager process resides in the user device, an asset list resides at the content provider site, and the media assets are located at a remote site.
    Type: Grant
    Filed: July 30, 2003
    Date of Patent: September 18, 2012
    Assignee: Disney Enterprises, Inc.
    Inventors: Scott F. Watson, Eric C. Haseltine, Eric Freeman, Elisabeth M. Freeman, Aaron P. LaBerge, Adam T. Fritz
  • Patent number: 8271738
    Abstract: In a multiprocessor environment, by executing cache-inhibited reads or writes to registers, a scan communication is used to rapidly access registers inside and outside a chip originating the command. Cumbersome locking of the memory location may be thus avoided. Setting of busy latches at the outset virtually eliminates the chance of collisions, and status bits are set to inform the requesting core processor that a command is done and free of error, if that is the case.
    Type: Grant
    Filed: May 2, 2008
    Date of Patent: September 18, 2012
    Assignee: International Business Machines Corporation
    Inventors: James Stephen Fields, Jr., Michael Stephen Floyd, Paul Frank Lecocq, Larry Scott Leitner, Kevin Franklin Reick
  • Patent number: 8250309
    Abstract: A data processor comprising: a control register operable to store a cache control value; and data accessing logic responsive to a data access instruction and to said cache control value to look for data to be accessed in a cache if said cache control value has a predetermined value and not to look for said data to be accessed in said cache if said cache control value does not have said predetermined value.
    Type: Grant
    Filed: January 28, 2005
    Date of Patent: August 21, 2012
    Assignee: ARM Limited
    Inventors: Patrick Gerard McGlew, Andrew Burdass
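    Example: A minimal Python sketch of the control-register gate described in the abstract above, with the predetermined value modelled as CACHE_ENABLE; the names are illustrative assumptions.
      CACHE_ENABLE = 0x1

      def load(addr, cache_control, cache, memory):
          if cache_control == CACHE_ENABLE and addr in cache:
              return cache[addr]               # the cache is looked up only when enabled
          value = memory[addr]
          if cache_control == CACHE_ENABLE:
              cache[addr] = value              # fills also honour the control value
          return value

      memory, cache = {0x10: 42}, {}
      load(0x10, 0x0, cache, memory)           # control value not set: cache is not consulted
      print(cache)                             # {}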
  • Patent number: 8243313
    Abstract: A method is disclosed. The method includes identifying a received object to be cached, calculating a time to rasterize the object, determining if the rasterize time is greater than a time to reuse a rasterized image of the object, caching the object if the reuse time is greater than the rasterize time, and caching the rasterized image of the object if the rasterize time is greater than the reuse time.
    Type: Grant
    Filed: May 26, 2009
    Date of Patent: August 14, 2012
    Assignee: InfoPrint Solutions Company LLC
    Inventors: John Varga, Dennis Carney
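    Example: A minimal Python sketch of the cost comparison described in the abstract above: cache whichever form of the object is cheaper to bring back. The timing values and names are illustrative assumptions.
      def choose_cache_form(obj_id, rasterize_time, reuse_time,
                            object_cache, raster_cache, source, rasterize):
          if rasterize_time > reuse_time:
              raster_cache[obj_id] = rasterize(source)   # rasterising is the expensive step
          else:
              object_cache[obj_id] = source              # re-rasterising is cheap; keep the source

      object_cache, raster_cache = {}, {}
      choose_cache_form("logo", rasterize_time=0.9, reuse_time=0.2,
                        object_cache=object_cache, raster_cache=raster_cache,
                        source="<vector>", rasterize=lambda s: "bitmap(" + s + ")")
      print(sorted(raster_cache), sorted(object_cache))   # ['logo'] []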
  • Patent number: 8234440
    Abstract: A nonvolatile semiconductor memory device includes a memory cell array having a plurality of banks and a cache block corresponding to each of the plurality of banks. The cache block has a predetermined data storage capacity. A page buffer is included which corresponds to each of the plurality of banks. A programming circuit programs all of the plurality of banks except a last of said banks with page data. The page data is loaded through each page buffer and programmed into each cache block such that when page data for the last bank is loaded into the page buffer, the loaded page data and the page data programmed into the respective cache blocks are programmed into respective corresponding banks.
    Type: Grant
    Filed: September 22, 2011
    Date of Patent: July 31, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong-Hyuk Chae, Young-Ho Lim