Cache Flushing Patents (Class 711/135)
  • Patent number: 9830081
    Abstract: A method and system for synchronizing caches after reboot are described. In a cached environment, a host server stores a cache counter associated with the cache, which can be stored in the cache itself or in another permanent storage device. When data blocks are written to the cache, metadata for each data block is also written to the cache. This metadata includes a block counter based on a value of the cache counter. After a number of data operations are performed in the cache, the value of the cache counter is updated. Then, each data block is selectively updated based on a comparison of the value of the cache counter with a value of the block counter in the metadata for the corresponding data block.
    Type: Grant
    Filed: January 16, 2015
    Date of Patent: November 28, 2017
    Assignee: NetApp, Inc.
    Inventors: Somasundaram Krishnasamy, Brian McKean, Yanling Qi
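The counter scheme this abstract describes can be sketched roughly as follows. This is only an illustration of the idea, not NetApp's implementation; all names and the reconciliation policy are invented:

```python
class CounterCache:
    """Toy model of counter-based cache reconciliation after reboot."""

    def __init__(self):
        self.cache_counter = 0   # persisted with the cache (or elsewhere)
        self.blocks = {}         # block_id -> (data, block_counter)

    def write(self, block_id, data):
        # Metadata for each block records the cache counter value
        # that was current when the block was written.
        self.blocks[block_id] = (data, self.cache_counter)

    def advance_counter(self):
        # Called after a number of data operations have been performed.
        self.cache_counter += 1

    def reconcile(self):
        # Selectively update (here: drop) blocks whose block counter
        # no longer matches the current cache counter.
        stale = [b for b, (_, c) in self.blocks.items()
                 if c < self.cache_counter]
        for b in stale:
            del self.blocks[b]
        return stale
```

After a reboot, comparing each block's recorded counter against the persisted cache counter distinguishes blocks written before the last sync point from those written after it.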
  • Patent number: 9817607
    Abstract: Described are techniques for processing read and write requests in a system having a NUMA (non-uniform memory access) configuration. Such techniques may include receiving, at a front end adapter of the system, a write request to write first data to a first storage device; storing a first copy of the first data in first memory local to a first domain; copying, using a first inter-storage processor communication connection, the first data from the first memory to a third memory of a third domain, thereby creating a second copy of the first data in the third memory; and determining, in accordance with a first heuristic and first criteria, whether to use the first copy of the first data stored in the first memory or the second copy of the first data stored in the third memory as the source when writing the first data to the first storage device.
    Type: Grant
    Filed: June 20, 2014
    Date of Patent: November 14, 2017
    Assignee: EMC IP Holding Company LLC
    Inventor: David W. Harvey
  • Patent number: 9798631
    Abstract: This document relates to data storage techniques. One example can buffer write commands and cause the write commands to be committed to storage in flush epoch order. Another example can maintain a persistent log of write commands that are arranged in the persistent log in flush epoch order. Both examples may provide a prefix consistent state in the event of a crash.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: October 24, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James W. Mickens, Amar Phanishayee, Vijaychidambaram Velayudhan Pillai
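The flush-epoch ordering described above can be sketched in a few lines. This is a toy model, not Microsoft's implementation; the epoch numbering and commit policy are assumptions made for illustration:

```python
class EpochBuffer:
    """Toy write buffer that commits writes in flush-epoch order."""

    def __init__(self, storage):
        self.storage = storage   # dict standing in for durable storage
        self.epoch = 0
        self.pending = []        # (epoch, key, value) in arrival order

    def write(self, key, value):
        self.pending.append((self.epoch, key, value))

    def flush(self):
        # A flush closes the current epoch; later writes belong
        # to a higher epoch.
        self.epoch += 1

    def commit_through(self, epoch):
        # Commit all buffered writes with epoch <= the given epoch,
        # in order, so durable state is always a prefix of history.
        keep = []
        for e, k, v in self.pending:
            if e <= epoch:
                self.storage[k] = v
            else:
                keep.append((e, k, v))
        self.pending = keep
```

Because whole epochs are committed in order, a crash leaves storage reflecting all writes up to some flush and none after it, which is the prefix-consistent state the abstract refers to.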
  • Patent number: 9792212
    Abstract: In accordance with embodiments disclosed herein, systems and methods are provided for a virtual shared cache mechanism. A processing device includes a plurality of clusters allocated into a virtual private shared cache. Each of the clusters includes a plurality of cores and a plurality of cache slices co-located within the plurality of cores. The processing device also includes a virtual shared cache including the plurality of clusters such that the cache data in the plurality of cache slices is shared among the plurality of clusters.
    Type: Grant
    Filed: September 12, 2014
    Date of Patent: October 17, 2017
    Assignee: Intel Corporation
    Inventors: Yen-Cheng Liu, Aamer Jaleel, Bongjin Jung, Zeshan A. Chishti, Adrian C. Moga, Eric Delano, Ren Wang
  • Patent number: 9767041
    Abstract: Apparatus, systems, and methods to manage memory operations are described. In one example, a controller comprises logic to receive a first transaction to operate on a first data element in the cache memory, perform a lookup operation for the first data element in the volatile memory and, in response to a failed lookup operation, generate a cache scrub hint, forward the cache scrub hint to a cache scrub engine, and identify one or more cache lines to scrub based at least in part on the cache scrub hint. Other examples are also disclosed and claimed.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: September 19, 2017
    Assignee: Intel Corporation
    Inventors: Aravindh V. Anantaraman, Zvika Greenfield, Israel Diamand, Anant V. Nori, Pradeep Ramachandran, Nir Misgav
  • Patent number: 9769184
    Abstract: A method and system for discovering inappropriate and/or illegitimate use of Web page content, comprising: monitoring access to a first Web page by a user; comparing information from the first Web page to information from a second known legitimate Web page; and determining whether the first Web page is legitimate based on the compared information.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: September 19, 2017
    Assignee: LOOKINGGLASS CYBER SOLUTIONS
    Inventors: Steve Smith, Vlad Serban, Andy Walker, Greg Ogorek
  • Patent number: 9727453
    Abstract: A memory system or flash card may include an algorithm or process for managing the handling of large tables in memory. A delta may be used for each table to accumulate updates. There may be a plurality of deltas for a multi-level delta structure. In one example, the first level delta is stored in random access memory (RAM), while the other level deltas are stored in the flash memory. Multiple-level deltas may reduce the number of flash writes and reduce the number and amount of each flush to the actual table in flash. The use of multi-level deltas may improve performance by more efficiently writing to the table in flash.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: August 8, 2017
    Assignee: SanDisk Technologies LLC
    Inventor: Opher Lieber
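The multi-level delta idea can be illustrated with a two-level toy, where a small RAM delta spills into a larger flash-resident delta, and only the second level's overflow triggers an expensive merge into the actual table. The limits and structure here are invented for illustration, not SanDisk's design:

```python
class DeltaTable:
    """Toy two-level delta: small RAM delta, larger flash delta, base table."""

    def __init__(self, ram_limit=2, flash_limit=4):
        self.ram_delta = {}      # level 1: held in RAM
        self.flash_delta = {}    # level 2: held in flash
        self.table = {}          # the large table in flash
        self.ram_limit = ram_limit
        self.flash_limit = flash_limit
        self.table_flushes = 0   # count of expensive full-table merges

    def update(self, key, value):
        self.ram_delta[key] = value
        if len(self.ram_delta) > self.ram_limit:
            # Spill the RAM delta into the flash-resident delta.
            self.flash_delta.update(self.ram_delta)
            self.ram_delta.clear()
            if len(self.flash_delta) > self.flash_limit:
                # Only now pay for merging into the actual table.
                self.table.update(self.flash_delta)
                self.flash_delta.clear()
                self.table_flushes += 1

    def read(self, key):
        # Newest level wins on lookup.
        for level in (self.ram_delta, self.flash_delta, self.table):
            if key in level:
                return level[key]
        return None
```

Seven updates with these limits reach the base table through a single merge, which is the amortization the abstract claims: many small updates, few large flushes.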
  • Patent number: 9720693
    Abstract: A processor core in an instruction block-based microarchitecture includes a control unit that allocates instructions into an instruction window in bulk by fetching blocks of instructions and associated resources including control bits and operands at once. Such bulk allocation supports increased efficiency in processor core operations by enabling consistent management and policy implementation across all the instructions in the block during execution. For example, when an instruction block branches back on itself, it may be reused in a refresh process rather than being re-fetched from the instruction cache. As all of the resources for that instruction block are in one place, the instructions can remain in place and only valid bits need to be cleared. Bulk allocation also facilitates operand sharing by instructions in a block and explicit messaging among instructions.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: August 1, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Douglas C. Burger, Aaron Smith, Jan Gray
  • Patent number: 9658962
    Abstract: Methods and systems for improved control of traffic generated by a processor are described. In an embodiment, when a device generates a pre-fetch request for a piece of data or an instruction from a memory hierarchy, the device includes a pre-fetch identifier in the request. This identifier flags the request as a pre-fetch request rather than a non-pre-fetch request, such as a time-critical request. Based on this identifier, the memory hierarchy can then issue an abort response at times of high traffic which suppresses the pre-fetch traffic, as the pre-fetch traffic is not fulfilled by the memory hierarchy. On receipt of an abort response, the device deletes at least a part of any record of the pre-fetch request and if the data/instruction is later required, a new request is issued at a higher priority than the original pre-fetch request.
    Type: Grant
    Filed: January 13, 2014
    Date of Patent: May 23, 2017
    Assignee: Imagination Technologies Limited
    Inventor: Jason Meredith
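The abort-on-congestion protocol for pre-fetches can be sketched as a pair of toy classes; the threshold, response strings, and method names are all invented for illustration:

```python
class MemoryHierarchy:
    """Toy memory side that aborts pre-fetch requests under high traffic."""

    def __init__(self, busy_threshold=2):
        self.busy_threshold = busy_threshold
        self.inflight = 0

    def request(self, addr, is_prefetch):
        # The pre-fetch identifier lets the hierarchy suppress
        # pre-fetch traffic at times of high traffic.
        if is_prefetch and self.inflight >= self.busy_threshold:
            return "abort"
        self.inflight += 1
        return "data@%s" % addr


class Device:
    """Toy requester that drops aborted pre-fetch records."""

    def __init__(self, memory):
        self.memory = memory
        self.prefetch_records = {}   # addr -> pending pre-fetch record

    def prefetch(self, addr):
        resp = self.memory.request(addr, is_prefetch=True)
        if resp == "abort":
            # Delete the record; pre-fetches are best-effort.
            self.prefetch_records.pop(addr, None)
            return None
        self.prefetch_records[addr] = resp
        return resp

    def demand_fetch(self, addr):
        # Data now actually required: issue a non-pre-fetch request,
        # which the hierarchy will not abort.
        return self.memory.request(addr, is_prefetch=False)
```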
  • Patent number: 9652338
    Abstract: A method for determining a delay in a dynamic, event driven, checkpoint interval. In one embodiment, the method includes the steps of determining the number of network bits to be transferred; determining the target bit transfer rate; calculating the next cycle delay as the number of bits to be transferred divided by the target bit transfer rate. In another aspect, the invention relates to a method for delaying a checkpoint interval. In one embodiment, the method includes the steps of monitoring the transfer of a prior batch of network data and delaying a subsequent checkpoint until the transfer of a prior batch of network data has reached a certain predetermined level of completion. In another embodiment, the predetermined level of completion is 100%.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: May 16, 2017
    Assignee: Stratus Technologies Bermuda Ltd.
    Inventors: Thomas D. Bissett, Paul A. Leveille, Srinivasu Chinta
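The two calculations this abstract names are simple enough to state directly; the function names and the completion-fraction parameter are invented for illustration:

```python
def next_cycle_delay(bits_to_transfer: int, target_bits_per_sec: float) -> float:
    """Next cycle delay = bits to be transferred / target bit transfer rate."""
    if target_bits_per_sec <= 0:
        raise ValueError("target rate must be positive")
    return bits_to_transfer / target_bits_per_sec


def may_checkpoint(bits_sent: int, batch_bits: int,
                   required_completion: float = 1.0) -> bool:
    """Allow the next checkpoint only once the prior batch of network
    data has reached the required fraction of completion (1.0 = 100%)."""
    if batch_bits == 0:
        return True
    return bits_sent / batch_bits >= required_completion
```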
  • Patent number: 9626296
    Abstract: Method and apparatus for tracking a prefetch list of a list prefetcher associated with a computer program in the event the list prefetcher cannot track the computer program. During a first execution of a computer program, the computer program outputs checkpoint indications. Also during the first execution of the computer program, a list prefetcher builds a prefetch list for subsequent executions of the computer program. As the computer program executes for the first time, the list prefetcher associates each checkpoint indication with a location in the building prefetch list. Upon subsequent executions of the computer program, if the list prefetcher cannot track the prefetch list to the computer program, the list prefetcher waits until the computer program outputs the next checkpoint indication. The list prefetcher is then able to jump to the location of the prefetch list associated with the checkpoint indication.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: April 18, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Thomas M. Gooding
  • Patent number: 9588900
    Abstract: Techniques described herein generally include methods and systems related to cooperatively caching data in a chip multiprocessor. Cooperatively caching of data in the chip multiprocessor is managed based on an eviction rate of data blocks from private caches associated with each individual processor core in the chip multiprocessor. The eviction rate of data blocks from each private cache in the cooperative caching system is monitored and used to determine an aggregate eviction rate for all private caches. When the aggregate eviction rate exceeds a predetermined value, for example the threshold beyond which network flooding can occur, the cooperative caching system for the chip multiprocessor is disabled, thereby avoiding network flooding of the chip multiprocessor.
    Type: Grant
    Filed: July 25, 2012
    Date of Patent: March 7, 2017
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Ezekiel Kruglick
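The aggregate-eviction-rate control loop described above reduces to a small amount of bookkeeping. This sketch invents the reporting interface and threshold handling; it is not the patented mechanism itself:

```python
class CooperativeCacheControl:
    """Toy controller that disables cooperative caching when the
    aggregate private-cache eviction rate could flood the network."""

    def __init__(self, flood_threshold):
        self.flood_threshold = flood_threshold
        self.eviction_rates = {}   # core_id -> evictions per interval
        self.enabled = True

    def report(self, core_id, evictions_per_interval):
        # Each private cache reports its monitored eviction rate.
        self.eviction_rates[core_id] = evictions_per_interval
        aggregate = sum(self.eviction_rates.values())
        # Disable (or re-enable) cooperative caching based on whether
        # the aggregate rate exceeds the flooding threshold.
        self.enabled = aggregate <= self.flood_threshold
        return self.enabled
```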
  • Patent number: 9558120
    Abstract: Techniques and mechanisms to provide a cache of cache tags used in determining an access to cached data. In an embodiment, a tag storage stores a first set including tags associated with respective data locations of a cache memory, and a cache of cache tags stores a subset of the tags stored by the tag storage. Any storage of tags of the first set to the cache of cache tags includes storage of those tags to only a first portion of the cache of cache tags; in response to any determination that a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored to that first portion. In another embodiment, a replacement table is maintained for use in determining, based on an indicated level of activity for a set of the cache of cache tags, whether the set is to be selected for eviction and replacement of cached tags.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: January 31, 2017
    Assignee: Intel Corporation
    Inventors: Dyer Rolan, Nevin Hyuseinova, Blas A. Cuesta, Qiong Cai
  • Patent number: 9513886
    Abstract: A compiler tool-chain may automatically compile an application to execute on a limited local memory (LLM) multi-core processor by including automated heap management transparently to the application. Management of the heap in the LLM for the application may include identifying access attempts to a program variable, transferring the program variable to the LLM, when not already present in the LLM, and returning a local address for the program variable to the application. The application then accesses the program variable using the local address transparently without knowledge about data in the LLM. Thus, the application may execute on a LLM multi-core processor as if the LLM multi-core processor has an unlimited heap space.
    Type: Grant
    Filed: January 28, 2014
    Date of Patent: December 6, 2016
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Ke Bai, Aviral Shrivastava
  • Patent number: 9501402
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Grant
    Filed: February 19, 2014
    Date of Patent: November 22, 2016
    Assignee: INTEL CORPORATION
    Inventor: Sanjeev N. Trika
  • Patent number: 9465807
    Abstract: A method and computer program product for managing a file cache with a filesystem cache manager is disclosed. The method may include installing the filesystem cache manager for the file cache by a mount command. The filesystem cache manager may include a specified time interval and a first cache elimination instruction. The method may further include starting a first timer upon the installation of the filesystem cache manager. The method may further include running the first cache elimination instruction when the first timer reaches the specified time interval.
    Type: Grant
    Filed: October 18, 2013
    Date of Patent: October 11, 2016
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
  • Patent number: 9460101
    Abstract: A method and computer program product for managing a file cache with a filesystem cache manager is disclosed. The method may include installing the filesystem cache manager for the file cache by a mount command. The filesystem cache manager may include a specified time interval and a first cache elimination instruction. The method may further include starting a first timer upon the installation of the filesystem cache manager. The method may further include running the first cache elimination instruction when the first timer reaches the specified time interval.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: October 4, 2016
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
  • Patent number: 9431076
    Abstract: A memory system, a semiconductor memory device and methods of operating the same may perform a read operation on the basis of flag data stored in a flag register, without reading the flag data stored in a memory array, when performing the read operation, so that a time taken for the read operation may be reduced.
    Type: Grant
    Filed: November 8, 2013
    Date of Patent: August 30, 2016
    Assignee: SK Hynix Inc.
    Inventor: Jee Yul Kim
  • Patent number: 9430389
    Abstract: A method performed by a processor is described. The method includes executing an instruction. The instruction has an address as an operand. The executing of the instruction includes sending a signal to cache coherence protocol logic of the processor. In response to the signal, the cache coherence protocol logic issues a request for ownership of a cache line at the address. The cache line is not in a cache of the processor. The request for ownership also indicates that the cache line is not to be sent to the processor.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: August 30, 2016
    Assignee: Intel Corporation
    Inventors: Jesus Corbal, Lisa K. Wu, George Z. Chrysos, Andrew T. Forsyth, Ramacharan Sundararaman
  • Patent number: 9424183
    Abstract: An operating method of a data storage device includes receiving a write request, determining whether it is possible to perform a first write operation of simultaneously writing a plurality of bits in each of memory cells coupled to one word line of a nonvolatile memory apparatus, and performing a garbage collection operation for the nonvolatile memory apparatus, according to a determination result, and generating first merged data.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: August 23, 2016
    Assignee: SK Hynix Inc.
    Inventor: Yu Mi Kim
  • Patent number: 9418019
    Abstract: An embodiment includes a system, comprising: a cache configured to store a plurality of cache lines, each cache line associated with a priority state from among N priority states; and a controller coupled to the cache and configured to: search the cache lines for a cache line with a lowest priority state of the priority states to use as a victim cache line; if the cache line with the lowest priority state is not found, reduce the priority state of at least one of the cache lines; and select a random cache line of the cache lines as the victim cache line if, after performing each of the searching of the cache lines and the reducing of the priority state of at least one cache line K times, the cache line with the lowest priority state is not found. N is an integer greater than or equal to 3; and K is an integer greater than or equal to 1 and less than or equal to N−2.
    Type: Grant
    Filed: May 2, 2014
    Date of Patent: August 16, 2016
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kevin Lepak, Tarun Nakra, Khang Nguyen, Murali Chinnakonda, Edwin Silvera
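The search-age-retry victim selection above can be modeled directly; this sketch ages every line when no lowest-priority line is found, which is one permissible reading of "reduce the priority state of at least one of the cache lines":

```python
import random

def pick_victim(priorities, n_states, k_rounds, rng=random):
    """Pick a victim line index given per-line priority states (0 = lowest).

    Search for a line at the lowest state; if none is found, reduce
    (age) the priorities and retry, up to k_rounds extra times; then
    fall back to a random line. `priorities` is mutated in place.
    """
    assert all(0 <= p < n_states for p in priorities)
    for _ in range(k_rounds + 1):
        for i, p in enumerate(priorities):
            if p == 0:
                return i
        # No line at the lowest state: age all lines by one state.
        for i in range(len(priorities)):
            priorities[i] = max(0, priorities[i] - 1)
    return rng.randrange(len(priorities))
```

With K bounded by N−2, a cache whose lines all sit at high priority states can exhaust the search-and-age rounds, which is exactly when the random fallback fires.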
  • Patent number: 9414200
    Abstract: A system, a method, a device, and a computer program product for transmission of data between a user device and a server. A first data received from the user device and a second data received from the server are processed. A determination is made whether to store at least a portion of the second data in at least one memory. The stored portion of the second data is provided to the user device in response to receiving the first data.
    Type: Grant
    Filed: March 25, 2014
    Date of Patent: August 9, 2016
    Assignee: AltioStar Networks, Inc.
    Inventors: Kuntal Chowdhury, Ashraf M. Dahod
  • Patent number: 9398094
    Abstract: When a checkpoint comes, the control section selects some of a plurality of small areas which are transfer targets in the memory as small areas to be transferred to the outside of the own computer through the save area (indirect transfer small areas), and selects the others as small areas to be transferred to the outside of the own computer not through the save area (direct transfer small areas). Within a period in which updating from the own computer to the memory is suspended, the control section copies stored data in the small areas selected as the indirect transfer small areas from the memory to the save area with use of the copy section, and in parallel to the copying, transfers stored data in the small areas selected as the direct transfer small areas from the memory to the outside of the own computer with use of the communication section.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: July 19, 2016
    Assignee: NEC CORPORATION
    Inventors: Risako Uchida, Shinji Abe
  • Patent number: 9372810
    Abstract: A method is provided for collaborative caching between a server cache (104) of a server computer (102) and an array cache (112) of a storage array (110) coupled to the server computer. The method includes collecting instrumentation data on the server cache and the array cache of the storage array and, based on the instrumentation data, adjusting the operation of at least one of the server cache and the array cache.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: June 21, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Douglas L Voigt
  • Patent number: 9355041
    Abstract: One embodiment of the present invention is a memory subsystem that includes a sliding window tracker that tracks memory accesses associated with a sliding window of memory page groups. When the sliding window tracker detects an access operation associated with a memory page group within the sliding window, the sliding window tracker sets a reference bit that is associated with the memory page group and is included in a reference vector that represents accesses to the memory page groups within the sliding window. Based on the values of the reference bits, the sliding window tracker causes a memory page in a memory page group that has fallen into disuse to be migrated from a first memory to a second memory. Because the sliding window tracker tunes the memory pages that are resident in the first memory to reflect memory access patterns, the overall performance of the memory subsystem is improved.
    Type: Grant
    Filed: December 12, 2013
    Date of Patent: May 31, 2016
    Assignee: NVIDIA Corporation
    Inventors: John Mashey, Cameron Buschardt, James Leroy Deming, Jerome F. Duluk, Jr., Brian Fahs
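The reference-vector bookkeeping above can be sketched as follows; the class and method names are invented, and real hardware would use a bit vector rather than a dictionary:

```python
class SlidingWindowTracker:
    """Toy reference-vector tracker over a window of memory page groups."""

    def __init__(self, window_groups):
        self.window = list(window_groups)          # group ids in the window
        self.ref_bits = {g: False for g in self.window}

    def access(self, group):
        # An access to a group inside the window sets its reference bit.
        if group in self.ref_bits:
            self.ref_bits[group] = True

    def groups_in_disuse(self):
        # Groups whose reference bit stayed clear are candidates for
        # migrating their pages from the first (fast) memory to the
        # second (slower) memory.
        return [g for g in self.window if not self.ref_bits[g]]

    def new_interval(self):
        # Clear all bits when the tracking interval rolls over.
        for g in self.ref_bits:
            self.ref_bits[g] = False
```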
  • Patent number: 9286215
    Abstract: According to an aspect of the embodiment, a cache controller sets, when power supply capacity information is acquired at an update period, a size of a permitted area in which the writing of dirty data is permitted and a size of an inhibited area in which the writing of the dirty data is inhibited in a cache memory, according to the power supply capacity information. The cache controller stores the dirty data or read data read out from a disk array in the permitted area, or stores only the read data in the inhibited area.
    Type: Grant
    Filed: January 25, 2011
    Date of Patent: March 15, 2016
    Assignee: FUJITSU LIMITED
    Inventor: Kentarou Yuasa
  • Patent number: 9280489
    Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue.
    Type: Grant
    Filed: March 21, 2013
    Date of Patent: March 8, 2016
    Assignee: SAP SE
    Inventor: Ivan Schreter
  • Patent number: 9280845
    Abstract: The present disclosure provides systems and methods for multi-path rendering on tile based architectures including executing, with a graphics processing unit (GPU), a query pass, executing, with the GPU, a condition true pass based on the query pass without executing a flush operation, executing, with the GPU, a condition false pass based on the query pass without executing a flush operation, and responsive to executing the condition true pass and the condition false pass, executing, with the GPU, a flush operation.
    Type: Grant
    Filed: January 14, 2014
    Date of Patent: March 8, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Murat Balci, Christopher Paul Frascati, Avinash Seetharamaiah
  • Patent number: 9274865
    Abstract: A method, apparatus, and data storage device for implementing enhanced buffer management for storage devices. An amount of emergency power for the storage device is used to determine the time period between an emergency power loss and the actual shutdown of the electronics. The time the storage device needs to store write cache data to non-volatile storage is used to identify a write cache threshold, the amount of write cache data that can be safely written from the write cache to non-volatile memory after an emergency power loss. This write cache threshold is then used for selected buffer management techniques that provide enhanced storage device performance, including enhanced SSD or HDD performance.
    Type: Grant
    Filed: August 1, 2013
    Date of Patent: March 1, 2016
    Assignee: HGST Netherlands B.V.
    Inventor: Jeffrey L. Furlong
  • Patent number: 9268600
    Abstract: A transactional memory (TM) includes a selectable bank of hardware algorithm prework engines, a selectable bank of hardware lookup engines, and a memory unit. The memory unit stores result values (RVs), instructions, and lookup data operands. The transactional memory receives a lookup command across a bus from one of a plurality of processors. The lookup command includes a source identification value, data, a table number value, and a table set value. In response to the lookup command, the transactional memory selects one hardware algorithm prework engine and one hardware lookup engine to perform the lookup operation. The selected hardware algorithm prework engine modifies data included in the lookup command. The selected hardware lookup engine performs a lookup operation using the modified data and lookup operands provided by the memory unit. In response to performing the lookup operation, the transactional memory returns a result value and optionally an instruction.
    Type: Grant
    Filed: August 20, 2013
    Date of Patent: February 23, 2016
    Assignee: Netronome Systems, Inc.
    Inventor: Gavin J. Stark
  • Patent number: 9262283
    Abstract: A method for reading a kernel log upon a kernel panic in an operating system is applicable to a computing device including a processing unit and a storage unit, coupled to the processing unit, for storing the kernel and including a log backup partition and a user data partition. The method includes the computing device performing the operating system by the kernel; the computing device generating a kernel log upon performing the operating system, and writing the kernel log into the log backup partition; and upon a kernel panic occurring and then the processing unit being reset, the computing device performing a kernel initialization procedure including reading and then writing the kernel log in the log backup partition into the user data partition, wherein the kernel log in the log backup partition includes information of a process of operating the kernel before the processing unit is reset.
    Type: Grant
    Filed: November 5, 2013
    Date of Patent: February 16, 2016
    Assignees: Inventec Appliances (Pudong) Corporation, INVENTEC APPLIANCES CORP., Inventec Appliances (Jiangning) Corporation
    Inventors: Haoliang Zhou, Yexin Chen, Yongcai Bian
  • Patent number: 9229858
    Abstract: An example method of managing memory for an application includes identifying a plurality of regions of a heap storing one or more objects of a first type and one or more objects of a second type. Each object of the second type stores a memory address of an object of the first type. The method also includes selecting a set of target collection regions of the heap. The method includes in a concurrent marking phase, marking one or more reachable objects of the first type as live data. The method further includes for each region of the plurality maintaining a calculation of live data in the respective region. The method also includes traversing the objects of the first type marked in the concurrent marking phase and evacuating a set of traversed objects from a target collection region to a destination region of the heap.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: January 5, 2016
    Assignee: Red Hat, Inc.
    Inventor: Christine H. Flood
  • Patent number: 9229864
    Abstract: Flushing dirty metadata from the cache memory of a plurality of file systems, without either letting the caches reach their maximum capacity or using so much of the total system IO process bandwidth that host system IO process requests are unreasonably delayed, may include determining the length of the interval between sync operations for each individual one of the plurality of file systems, and how to divide a system-wide maximum sync process IO operation bandwidth fairly between various ones of the plurality of file systems. A computer dynamically measures overall system operation rates and calculates an available portion of a current calculated sync operation bandwidth for each file system. The computer also measures file system operation rates and determines how long the period between sync operations should be in each file system.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: January 5, 2016
    Assignee: EMC Corporation
    Inventors: Kumar Kanteti, William Davenport, Philippe Armangau
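One simple way to divide a system-wide sync bandwidth "fairly" in the sense described above is proportionally to each file system's dirty metadata. This policy and the function signature are assumptions for illustration, not EMC's actual algorithm:

```python
def plan_sync(fs_dirty_bytes, total_sync_bandwidth):
    """Split a system-wide sync bandwidth across file systems in
    proportion to their dirty metadata, and derive per-FS intervals.

    Returns {fs: (share_bytes_per_sec, seconds_to_drain_dirty_data)}.
    """
    total_dirty = sum(fs_dirty_bytes.values())
    plan = {}
    for fs, dirty in fs_dirty_bytes.items():
        share = (total_sync_bandwidth * dirty / total_dirty
                 if total_dirty else 0.0)
        # Time to sync this FS's dirty data at its allotted share.
        interval = dirty / share if share else float("inf")
        plan[fs] = (share, interval)
    return plan
```

Note that with a purely proportional split, every file system drains its dirty data in the same time (total dirty bytes / total bandwidth); a real implementation would also fold in the measured per-file-system operation rates the abstract mentions.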
  • Patent number: 9176886
    Abstract: Embodiments of the present invention relate to the filling of cache memory for cache memory initialization. In one embodiment, cache architecture dependent data is loaded into cacheable memory. The flow of initialization execution is transferred to the cache architecture dependent data in response to a trigger that indicates that an initialization of cache memory has been initiated. Each line contained in cache memory is filled using the cache architecture dependent data. The flow of initialization execution is returned back to the place in the initialization process from which it was transferred when the filling of cache memory is completed.
    Type: Grant
    Filed: October 30, 2006
    Date of Patent: November 3, 2015
    Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventor: Craig A. Vanzante
  • Patent number: 9165667
    Abstract: An electronic device with a solid state drive and associated control method are provided. The electronic device includes: a host; a power supply component, for providing electric power to the host and the solid state drive; and the solid state drive including a control unit electrically connected to the host through a bus, a cache memory electrically connected to the control unit, and a flash memory electrically connected to the control unit. When the remaining power of the power supply component decreases to a threshold value, the host controls the solid state drive to enter a data secure mode and disables the cache memory; and when the remaining power of the power supply component is above the threshold value, the host controls the solid state drive to enter a high performance mode and enables the cache memory.
    Type: Grant
    Filed: February 24, 2014
    Date of Patent: October 20, 2015
    Assignee: LITE-ON TECHNOLOGY CORPORATION
    Inventors: Jen-Yu Hsu, Kuang-Jung Chang, Chia-Hua Liu, Chao-Ton Yang, Sin-Yu Lin
  • Patent number: 9152589
    Abstract: A method and apparatus are disclosed for providing a DMA process. Accordingly, a DMA process is initiated for moving data from contiguous first locations to contiguous second locations and to a third location or third locations. Within the DMA process the data from each of the contiguous first locations is retrieved and stored in a corresponding one of the contiguous second locations and in the third location or corresponding one of the third locations. The DMA process is performed without retrieving the same data a second time prior to storing it within the corresponding one of the contiguous second locations and in the third location or corresponding one of the third locations.
    Type: Grant
    Filed: April 3, 2014
    Date of Patent: October 6, 2015
    Inventors: Michael Bowler, Neil Hamilton
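The key point of this abstract, each word is read once and stored to two destinations, can be modeled over a flat memory array. Function and parameter names are illustrative assumptions, not the patent's terminology.

```python
def dma_move(memory, src, dst_contig, dst_third, length):
    """Sketch: move `length` words starting at `src`, storing each word
    into the contiguous second locations (`dst_contig`) and the
    corresponding third locations (`dst_third`) with a single read per word."""
    for i in range(length):
        word = memory[src + i]          # single retrieval per word
        memory[dst_contig + i] = word   # contiguous second location
        memory[dst_third + i] = word    # corresponding third location
```

A real DMA engine would do this in hardware with address generators per destination; the sketch just shows the one-read, two-write dataflow.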
  • Patent number: 9087006
    Abstract: Storage system(s) for storing data in physical storage in a recurring manner, method(s) of operating thereof, and corresponding computer program product(s).
    Type: Grant
    Filed: May 29, 2012
    Date of Patent: July 21, 2015
    Assignee: INFINIDAT LTD.
    Inventors: Yechiel Yochai, Michael Dorfman, Efraim Zeidner
  • Patent number: 9081685
    Abstract: A data processing apparatus has data processing circuitry for performing data processing operations on data, and a hierarchical cache structure for storing at least a subset of the data for access by the data processing circuitry. The hierarchical cache structure has first and second level caches, and data evicted from the first level cache is routed to the second level cache under the control of second level cache access control circuitry. Cache maintenance circuitry performs a cache maintenance operation in both the first level cache and the second level cache. The access control circuitry is responsive to maintenance indication data to modify the eviction handling operation performed in response to the evicted data, so as to cause the required cache maintenance for the second level cache to be incorporated within the eviction handling operation.
    Type: Grant
    Filed: January 15, 2013
    Date of Patent: July 14, 2015
    Assignee: ARM Limited
    Inventors: Gilles Eric Grandou, Philippe Jean-Pierre Raphalen, Andrea Mascheroni
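The idea of folding the second-level maintenance into the eviction path can be sketched as follows. The representation of cache lines, the `"invalidate"` maintenance operation, and the function shape are all assumptions for illustration.

```python
def handle_eviction(line, l2_cache, pending_maintenance):
    """Sketch: when a line is evicted from L1, incorporate the required
    L2 maintenance into the eviction handling instead of issuing a
    separate L2 maintenance operation afterwards."""
    if pending_maintenance == "invalidate":
        # Maintenance says this address must not survive in L2: drop it
        # rather than installing the evicted line.
        l2_cache.pop(line["addr"], None)
    else:
        # Normal eviction: route the evicted line into L2.
        l2_cache[line["addr"]] = line["data"]
```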
  • Patent number: 9081686
    Abstract: A management technique for input/output operations (I/O) leverages a hypervisor's position as an intermediary between virtual machines (VMs) and storage devices servicing the VMs to facilitate improvements in overall I/O performance for the VMs. According to this new I/O management technique, the hypervisor sends write requests from VMs destined for storage devices to an I/O staging device that provides higher I/O performance than the storage devices, for caching in the I/O staging device in a write-back mode. Once the I/O staging device has received and acknowledged the write request, the hypervisor immediately provides an acknowledgement to the requesting VM. Later on and asynchronously with respect to the write requests from the VMs, the hypervisor reads the write data from the I/O staging device and sends it over to the storage devices for storage therein.
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: July 14, 2015
    Assignee: VMware, Inc.
    Inventor: Daniel James Beveridge
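The write-back staging pattern in this abstract, acknowledge once the fast device holds the data, drain to backing storage later, can be sketched with two dictionaries standing in for the staging device and the storage device. All names here are illustrative assumptions.

```python
class StagedWriteBack:
    """Sketch: writes are acknowledged once the fast staging device holds
    them; a later, asynchronous drain pass moves them to backing storage."""

    def __init__(self):
        self.staging = {}   # fast I/O staging device (write-back cache)
        self.backing = {}   # slower storage device

    def write(self, block, data):
        self.staging[block] = data   # staged; ack can be returned immediately
        return "ack"

    def drain(self):
        # Asynchronous pass: copy staged writes out to backing storage.
        for block, data in self.staging.items():
            self.backing[block] = data
        self.staging.clear()
```

The latency win comes from `write` returning as soon as the staging device acknowledges, while `drain` runs off the critical path.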
  • Patent number: 9047194
    Abstract: Technologies generally described herein relate to cache directories in multi-core processors. Various examples may include methods, systems, and devices. A first tile may receive a request to transfer a thread from the first tile to a second tile. An instruction may be sent from the first tile to map a virtual cache identifier to identifiers of caches of the first and second tiles. The thread may be transferred from the first tile to the second tile. Thereafter, a request may be generated for a data block. After a determination that the data block is not stored in the second tile's cache, and that the virtual cache identifier is mapped to the first and second cache identifiers, a request may be sent for the data block to the first tile.
    Type: Grant
    Filed: July 18, 2012
    Date of Patent: June 2, 2015
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Yan Solihin
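The miss-handling decision in this abstract can be sketched as a lookup: on a local miss, check whether the virtual cache identifier still maps to the previous tile and, if so, direct the request there. Representing the mapping as a set of cache identifiers is an assumption made for illustration.

```python
def locate_block(block, local_cache, vci_map, local_id, remote_id):
    """Sketch: after a thread migrates, a miss in the local (second) tile's
    cache checks whether the virtual cache identifier still maps to the
    first tile's cache, and requests the block from there if so."""
    if block in local_cache:
        return local_id                      # hit in the local cache
    if local_id in vci_map and remote_id in vci_map:
        return remote_id                     # ask the first tile for the block
    return None                              # fall through to the directory/memory
```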
  • Publication number: 20150149704
    Abstract: For each data-changing transaction created as part of a write operation initiated for one or more tables in a main-memory-based DBMS, a transaction log entry can be written to a private log buffer corresponding to the transaction. All transaction log entries in the private log buffer can be flushed to a global log buffer upon completion of the transaction to which the private log buffer corresponds.
    Type: Application
    Filed: April 3, 2014
    Publication date: May 28, 2015
    Inventors: Juchang Lee, Beomsoo Kim, Kyu Hwan Kim, Jaeyun Noh, Sang Kyun Cha
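The private-buffer-then-flush scheme can be sketched with a per-transaction buffer that is appended to the global log only at commit, avoiding contention on the global log during the transaction. Class and method names are illustrative assumptions.

```python
class TxnLogger:
    """Sketch: per-transaction private log buffers, flushed to a shared
    global log buffer when the transaction completes."""

    def __init__(self):
        self.global_log = []
        self.private = {}  # transaction id -> private log buffer

    def log(self, txn, entry):
        # During the transaction, entries go only to the private buffer,
        # so no synchronization on the global log is needed here.
        self.private.setdefault(txn, []).append(entry)

    def commit(self, txn):
        # On completion, flush the whole private buffer in one step.
        self.global_log.extend(self.private.pop(txn, []))
```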
  • Patent number: 9043550
    Abstract: A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.
    Type: Grant
    Filed: November 6, 2013
    Date of Patent: May 26, 2015
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. Benhase, Lokesh M. Gupta
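The allocation decision can be sketched as bounding the new scan's task control blocks (TCBs) by what running scans already hold. The specific policy below (cap per scan, remainder of a fixed pool) is an assumption for illustration, not the patent's actual rule.

```python
def tcbs_for_new_scan(total_tcbs, allocated_to_running_scans, max_per_scan):
    """Sketch: allocate TCBs for a new discard scan, bounded by how many
    TCBs the discard scans already in progress are holding."""
    available = total_tcbs - allocated_to_running_scans
    return max(0, min(max_per_scan, available))
```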
  • Publication number: 20150143055
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line, and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory, and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, periodically check the image modification flags, and write only the memory address of the flagged cache rows in the defined log. The processor unit is further arranged to monitor the free space available in the defined log and to trigger an interrupt if the free space available falls below a specific amount.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 21, 2015
    Inventors: Guy L. Guthrie, Naresh Nayar, Geraint North, William J. Starke, Albert J. Van Norstrand, JR.
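The periodic flag scan described here (record flagged addresses, clear the flags, watch the log's free space) can be sketched directly. The dictionary row format and the quarter-capacity low-space threshold are assumptions for illustration.

```python
def scan_flags(rows, log, log_capacity):
    """Sketch: one periodic pass over the cache rows. Records the memory
    address of each modified row in the log, clears the flags, and reports
    whether the log's free space has fallen below a threshold."""
    for row in rows:
        if row["modified"]:
            log.append(row["addr"])     # only the address goes in the log
            row["modified"] = False     # clear the image modification flag
    # Illustrative threshold: warn when less than a quarter of the log is free.
    low_space = (log_capacity - len(log)) < log_capacity // 4
    return low_space
```

In the patent's scheme, `low_space == True` would correspond to triggering the interrupt so the hypervisor can drain the log.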
  • Patent number: 9037803
    Abstract: In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool is determined. A random number between 0 and 1 is generated for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool.
    Type: Grant
    Filed: March 6, 2013
    Date of Patent: May 19, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sam S. Lightstone, Adam J. Storm
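The randomized persist-or-evict rule is simple enough to sketch directly. The injectable `rng` parameter is an addition of this sketch (to make the behavior testable); the patent only describes drawing a random number per page.

```python
import random

def select_evictions(pages, persist_fraction, rng=random.random):
    """Sketch: for each cached page, draw a number in [0, 1); persist the
    page if the draw is below the fraction, otherwise mark it as an
    eviction candidate."""
    persisted, candidates = [], []
    for page in pages:
        if rng() < persist_fraction:
            persisted.append(page)
        else:
            candidates.append(page)
    return persisted, candidates
```

Statistically, each page survives with probability equal to the fraction, so roughly that share of the table stays resident without tracking any per-page access history.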
  • Publication number: 20150134913
    Abstract: A method for cleaning files stored in a mobile terminal is disclosed. The mobile terminal receives a file cleaning instruction from a user. In response to the file cleaning instruction, the mobile terminal identifies cache files based on the cache files' associated information and past user activities on the cache files and groups the identified cache files and their associated information into multiple cache file categories. At least one of the multiple cache file categories is located in an extended storage device of the mobile terminal (e.g., an SD card). Next, the mobile terminal displays information of the multiple cache file categories on the display, each cache file category having an associated file cleaning option, and cleans at least one of the multiple cache file categories from the mobile terminal in accordance with a user choice of the corresponding file cleaning option.
    Type: Application
    Filed: November 7, 2014
    Publication date: May 14, 2015
    Inventors: Ruimin Huang, Ming Xu
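The group-then-clean flow can be sketched with a grouping key standing in for the "associated information" the patent uses to categorize cache files. Both function names and the key function are illustrative assumptions.

```python
def group_cache_files(files, category_of):
    """Sketch: group identified cache files into categories
    (e.g., by owning app or storage location)."""
    groups = {}
    for f in files:
        groups.setdefault(category_of(f), []).append(f)
    return groups

def clean_categories(files, category_of, chosen):
    """Sketch: remove every file in the user-chosen categories,
    returning the files that remain."""
    groups = group_cache_files(files, category_of)
    return [f for cat, fs in groups.items() if cat not in chosen for f in fs]
```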
  • Patent number: 9032160
    Abstract: In a first embodiment, a method and computer program product for use in a storage system comprising quiescing IO commands at the sites of an active/active storage system, the active/active storage system having at least two storage sites communicatively coupled via a virtualization layer, creating a change set, unquiescing IO commands by the virtualization layers, transferring data of a change set to the other sites of the active/active storage system by the virtualization layer, and flushing the data by the virtualization layer. In a second embodiment, a method and computer program product for use in a storage system comprising fracturing a cluster of an active/active storage system, wherein the cluster includes at least two sites; stopping IO on a first site of the cluster; and rolling to a point in time on the first site.
    Type: Grant
    Filed: December 29, 2011
    Date of Patent: May 12, 2015
    Assignee: EMC Corporation
    Inventors: Assaf Natanzon, Saar Cohen, Steven R. Bromling
  • Patent number: 9032157
    Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache comprising a plurality of cache rows (1210), each cache row comprising a cache line (1214) and an image modification flag (1217) indicating a modification of said cache line caused by the running of the virtual machine; and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit comprises a replication manager adapted to define a log (200) in the memory prior to running the virtual machine in said first operating mode; and said cache further includes a cache controller (122) adapted to periodically check said image modification flags; write only the memory address of the flagged cache lines in the defined log and subsequently clear the image modification flags.
    Type: Grant
    Filed: December 11, 2012
    Date of Patent: May 12, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Geraint North, William J. Starke, Phillip G. Williams
  • Patent number: 9032158
    Abstract: A method of identifying a cache line of a cache memory (180) for replacement is disclosed. Each cache line in the cache memory has a stored sequence number and a stored transaction data stream identifying label. A request (e.g., 400) associated with a label identifying a transaction data stream is received. The label corresponds to the stored transaction data stream identifying label of the cache line. The stored sequence number of the cache line is compared with a response sequence number. The response sequence number is associated with the stored transaction data stream identifying label of the cache line. The cache line is identified for replacement based on the comparison.
    Type: Grant
    Filed: April 26, 2011
    Date of Patent: May 12, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventor: David Charles Ross
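The comparison step can be sketched as a filter over the cache lines of one transaction data stream. The dictionary line format and the specific "not newer than the response sequence number" policy are assumptions made for illustration; the abstract states only that the comparison drives the replacement choice.

```python
def find_replaceable(cache_lines, label, response_seq):
    """Sketch: identify for replacement the lines of the given transaction
    data stream whose stored sequence number does not exceed the stream's
    response sequence number."""
    return [line for line in cache_lines
            if line["label"] == label and line["seq"] <= response_seq]
```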
  • Patent number: 9021209
    Abstract: A processing node tracks the probe activity level associated with its cache. The processing node and/or processing system further predicts an idle duration. If the probe activity level increases above a threshold probe activity level, and the idle duration prediction is above a threshold idle duration, the processing node flushes its cache to prevent probes to the cache. If the probe activity level is above the threshold probe activity level but the predicted idle duration is too short, the performance state of the processing node is increased above its current performance state to provide enhanced performance capability in responding to the probe requests.
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: April 28, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander Branover, Maurice B. Steinman
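The two-threshold decision in this abstract maps onto a small policy function. The parameter names and the three string outcomes are illustrative assumptions; the patent describes the decision, not this interface.

```python
def probe_policy(probe_rate, predicted_idle, rate_threshold, idle_threshold):
    """Sketch: decide between flushing the cache (so probes need not wake
    the node) and boosting its performance state to answer probes faster."""
    if probe_rate > rate_threshold:
        if predicted_idle > idle_threshold:
            # Long idle expected: flushing pays off because the node can
            # stay asleep without servicing probes.
            return "flush_cache"
        # Idle too short to amortize a flush: answer probes faster instead.
        return "boost_performance_state"
    return "no_action"
```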
  • Patent number: 9021208
    Abstract: An information processing device includes a memory and a processor coupled to the memory, wherein the processor executes a process comprising: selecting, from the memory, data included in the same file as deletion-target data when deleting data cached in the memory; and deleting both the deletion-target data and the selected data from the memory.
    Type: Grant
    Filed: January 25, 2013
    Date of Patent: April 28, 2015
    Assignee: Fujitsu Limited
    Inventors: Akira Ochi, Yasuo Koike, Toshiyuki Maeda, Tomonori Furuta, Fumiaki Itou, Tadahiro Miyaji, Kazuhisa Fujita
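The same-file eviction described in this last abstract can be sketched as widening a single-key deletion to every cached entry of the owning file. The cache shape and the `file_of` key function are assumptions for illustration.

```python
def delete_with_siblings(cache, target_key, file_of):
    """Sketch: when evicting `target_key` from the cache, also drop every
    cached entry belonging to the same file, so no stale fragments of the
    file remain cached."""
    target_file = file_of(target_key)
    for key in [k for k in cache if file_of(k) == target_file]:
        del cache[key]
```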