Cache Flushing Patents (Class 711/135)
-
Patent number: 10942852
Abstract: Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage devices, for data processing and storage. One of the methods includes: maintaining a plurality of tiers of storage devices and one or more tiers of caches by a storage system for storing blockchain data, the plurality of tiers of storage devices including at least a higher-tier storage device and a lower-tier storage device; determining that a blockchain data object in a data log file stored in a lower-tier storage device is an active data object, wherein the blockchain data object is block data, transaction data, or state data; and writing the blockchain data object into a cache of the one or more tiers of caches.
Type: Grant
Filed: December 12, 2019
Date of Patent: March 9, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventor: Shikun Tian
-
Patent number: 10936436
Abstract: A computer-implemented method includes the following. A start time for a backup of data in a system is received. At the start time, a snapshot of the data in the system is captured. When an environment of the system is a database environment, the capturing includes setting a status of a database to backup mode and freezing data files in the database while permitting applications to run. When the environment of the system is a non-database environment, and when a file system type is general parallel file system (GPFS), the capturing includes caching new input/output operations to a cache and starting a timer (counter) for flushing the cache. At a specified time, the snapshot is copied to media, the status of the database is set to normal mode, and copying notifications are provided to users.
Type: Grant
Filed: August 7, 2018
Date of Patent: March 2, 2021
Assignee: Saudi Arabian Oil Company
Inventor: Ahmed Saad Alsalim
-
Patent number: 10930354
Abstract: Devices and techniques for enhanced flush transfer efficiency via flush prediction in a storage device are described herein. User data from a user data write can be stored in a buffer. The size of the user data stored in the buffer can be smaller than a write width for a storage device subject to the write. This size difference results in buffer free space. A flush trigger can be predicted. Additional data can be marshaled in response to the prediction of the flush trigger. The size of the additional data is less than or equal to the buffer free space. The additional data can be stored in the buffer free space. The contents of the buffer can be written to the storage device in response to the flush trigger.
Type: Grant
Filed: February 24, 2020
Date of Patent: February 23, 2021
Assignee: Micron Technology, Inc.
Inventor: David Aaron Palmer
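The buffer-padding idea in this abstract can be sketched in a few lines of Python. This is a minimal illustrative model, not Micron's implementation; the class name, the `marshal_background` callback, and the zero-padding fallback in `flush` are all assumptions.

```python
class FlushPredictingBuffer:
    """Toy model of flush prediction: when a flush trigger is predicted,
    background data is marshaled into the buffer's free space so the
    eventual device write uses the full write width."""

    def __init__(self, write_width, device_write):
        self.write_width = write_width      # bytes per device write
        self.device_write = device_write    # callable taking one full-width payload
        self.buffer = bytearray()

    def write_user_data(self, data):
        self.buffer += data

    def free_space(self):
        return self.write_width - len(self.buffer)

    def on_predicted_flush(self, marshal_background):
        # Marshal additional data no larger than the current free space.
        free = self.free_space()
        extra = marshal_background(free)
        assert len(extra) <= free
        self.buffer += extra

    def flush(self):
        # Zero-pad if prediction under-filled the buffer, then write once.
        payload = bytes(self.buffer.ljust(self.write_width, b"\0"))
        self.device_write(payload)
        self.buffer.clear()
```

The point of the pattern is that the extra data rides along in space that would otherwise be wasted by a partial-width write.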
-
Patent number: 10909012
Abstract: A system for managing software-defined persistent memory includes a CPU, a PCIe switch, one or more random access memory modules, and one or more NVMe SSDs. The PCIe switch is configured to communicate with one or more host devices. The CPU and the PCIe switch are configured to generate, for each host device, a persistent memory controller data structure that has configuration data to enable the CPU and the PCIe switch to emulate a persistent memory controller when interacting with the host device. The CPU and the PCIe switch are configured to receive instructions from the one or more host devices and persistently store write data in one or more NVMe SSDs or retrieve read data from the one or more NVMe SSDs based on the instructions from the one or more host devices, and use at least a portion of the RAM as cache memory to temporarily store at least one of the read data from the one or more NVMe SSDs or the write data intended to be persistently stored in the one or more NVMe SSDs.
Type: Grant
Filed: November 12, 2018
Date of Patent: February 2, 2021
Assignee: H3 Platform, Inc.
Inventor: Yuan-Chih Yang
-
Patent number: 10904405
Abstract: The present invention provides an image processing apparatus comprising detecting alteration of any application held in the image processing apparatus; determining, in a case where alteration has been detected, whether or not the use of the image processing apparatus needs to be restricted based on the application in which alteration has been detected; and displaying, in a display unit and as a result of the determination, in a case where the use of the image processing apparatus needs to be restricted, a message indicating that alteration of the application has been detected, and restricting the use of the image processing apparatus, and in a case where the use of the image processing apparatus need not be restricted, displaying, in the display unit, a message indicating that alteration of the application has been detected.
Type: Grant
Filed: July 1, 2019
Date of Patent: January 26, 2021
Assignee: Canon Kabushiki Kaisha
Inventors: Atsushi Ikeda, Takeshi Kogure, Hiroaki Koike, Naoto Sasagawa
-
Patent number: 10896132
Abstract: Provided computer-implemented methods for prioritizing cache objects for deletion may include (1) tracking, at a computing device, a respective time an externally-accessed object spends in an external cache, (2) queuing, when the externally-accessed object is purged from the external cache, the externally-accessed object in a first queue, (3) queuing, when an internally-accessed object is released, the internally-accessed object in a second queue, (4) prioritizing objects within the first queue, based on a cache-defined internal age factor and on respective times the objects spend in the external cache and respective times the objects spend in an internal cache, (5) prioritizing objects within the second queue based on respective times the objects spend in the internal cache, (6) selecting an oldest object having a longest time in any of the first queue and the second queue, and (7) deleting the oldest object. Various other methods, systems, and computer-readable media are disclosed.
Type: Grant
Filed: May 16, 2018
Date of Patent: January 19, 2021
Assignee: Veritas Technologies LLC
Inventors: Jitendra Patidar, Anindya Banerjee
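The two-queue selection step can be modeled in Python. The exact aging formula is not given in the abstract, so the linear combination below (external time scaled by the age factor, plus internal time) and all field names are illustrative assumptions; only the overall shape — two queues with different age metrics, oldest object wins — comes from the text.

```python
def pick_oldest(first_queue, second_queue, age_factor):
    """Return the name of the oldest object across both queues.
    first_queue: objects purged from the external cache, aged by a
    cache-defined age factor on external time plus internal time.
    second_queue: released internally-accessed objects, aged by
    internal time alone."""
    candidates = []
    for obj in first_queue:
        age = obj["external_time"] * age_factor + obj["internal_time"]
        candidates.append((age, obj["name"]))
    for obj in second_queue:
        candidates.append((obj["internal_time"], obj["name"]))
    # The object with the greatest effective age is deleted first.
    return max(candidates)[1]
```

Raising the age factor shifts deletion pressure toward objects that lingered in the external cache.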
-
Patent number: 10884739
Abstract: Systems and methods for load canceling in a processor that is connected to an external interconnect fabric are disclosed. As a part of a method for load canceling in a processor that is connected to an external bus, and responsive to a flush request and a corresponding cancellation of pending speculative loads from a load queue, a type of one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor, is converted from load to prefetch. Data corresponding to one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is accessed and returned to cache as prefetch data. The prefetch data is retired in a cache location of the processor.
Type: Grant
Filed: May 24, 2018
Date of Patent: January 5, 2021
Assignee: INTEL CORPORATION
Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
-
Patent number: 10885004
Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
Type: Grant
Filed: June 19, 2018
Date of Patent: January 5, 2021
Assignee: Intel Corporation
Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur, Benjamin Graniello
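The gating condition described here is simple enough to state directly in code. The following Python sketch tracks outstanding writes for a group of cache lines and permits a flush to persistent memory only once the group is quiescent; the class and method names are assumptions for illustration.

```python
class GroupFlushGate:
    """Hold back a group of cache lines from persistent memory until
    every write targeting a line in the group has completed."""

    def __init__(self, group_lines):
        self.pending = set(group_lines)   # lines with incomplete writes
        self.completed = set()

    def write_complete(self, line):
        self.pending.discard(line)
        self.completed.add(line)

    def may_flush(self):
        # Flushing is allowed only when no line in the group has an
        # outstanding write, so partially-written groups never persist.
        return not self.pending
```

The benefit is atomic-looking persistence at group granularity: either all of the group's writes reach persistent memory, or none do.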
-
Patent number: 10866738
Abstract: An example device in accordance with an aspect of the present disclosure includes a plurality of memory segments corresponding to at least one memory channel of a computing system that is to receive a memory module. A performance attribute of an Advanced Configuration and Power Interface (ACPI) table is set to indicate performance of at least one of the plurality of memory segments, and is usable for memory allocation by an operating system memory manager.
Type: Grant
Filed: January 14, 2019
Date of Patent: December 15, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Vincent Nguyen, Thierry Fevrier, David Engler
-
Patent number: 10846221
Abstract: A memory system includes a write buffer and a controller. The controller, when, among write data, first write data which are grouped into a transaction are inputted to the write buffer, receives total size information of a transaction for completion of commit of the first write data. The controller checks, at a time of performing an actual flush operation for the write buffer, in the case where it is determined that commit-uncompleted first write data are included in the write buffer, a size of a space left in the write buffer by simulating a flush operation with the commit-uncompleted first write data excluded from the simulated flush operation, compares the checked size of the space left in the write buffer with the total size information, and determines whether to include the commit-uncompleted first write data in the actual flush operation depending on a comparison result.
Type: Grant
Filed: April 16, 2019
Date of Patent: November 24, 2020
Assignee: SK hynix Inc.
Inventor: Hae-Gi Choi
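The simulated-flush check can be sketched in Python. The abstract does not spell out the decision rule, so the rule below — flush the uncommitted data early only if the transaction's remaining data could never fit in the space left after a simulated flush — is a guess at one plausible reading; the entry format and function name are likewise assumptions.

```python
def must_flush_uncommitted(buffer_entries, total_txn_size, buffer_capacity):
    """Decide whether commit-uncompleted transaction data should be
    included in the actual flush. Each entry: {"committed": bool,
    "size": int}. total_txn_size is the declared total size of the
    transaction, buffer_capacity the write buffer size in bytes."""
    uncommitted = [e for e in buffer_entries if not e["committed"]]
    buffered_uncommitted = sum(e["size"] for e in uncommitted)
    # Simulated flush: committed entries leave the buffer, the
    # commit-uncompleted first write data stays behind.
    space_left = buffer_capacity - buffered_uncommitted
    # Data of the transaction not yet in the buffer.
    remaining = total_txn_size - buffered_uncommitted
    # If the rest of the transaction cannot fit in the remaining space,
    # holding the uncommitted data would deadlock the buffer.
    return remaining > space_left
```

Under this reading, the total-size information exists precisely so the controller can detect transactions too large to ever commit inside the buffer.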
-
Patent number: 10838722
Abstract: A processor includes a global register to store a value of an interrupted block count. A processor core, communicably coupled to the global register, may, upon execution of an instruction to flush blocks of a cache that are associated with a security domain: flush the blocks of the cache sequentially according to a flush loop of the cache; and in response to detection of a system interrupt: store a value of a current cache block count to the global register as the interrupted block count; and stop execution of the instruction to pause the flush of the blocks of the cache. After handling of the interrupt, the instruction may be called again to restart the flush of the cache.
Type: Grant
Filed: December 20, 2018
Date of Patent: November 17, 2020
Assignee: Intel Corporation
Inventors: Gideon Gerzon, Dror Caspi, Arie Aharon, Ido Ouziel
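The resumable flush loop maps naturally onto a small Python model, with a dictionary standing in for the global register. This is a behavioral sketch of the pause/resume protocol only — the function signature, the `interrupt_check` probe, and the block representation are assumptions.

```python
def flush_domain_blocks(cache_blocks, domain, register, interrupt_check):
    """Sequentially flush blocks belonging to `domain`. On interrupt,
    save the current block count in the 'global register' and stop;
    a subsequent call resumes from the saved count. Returns True when
    the flush ran to completion, False when it was paused."""
    start = register.get("interrupted_block", 0)
    for i in range(start, len(cache_blocks)):
        if interrupt_check():
            register["interrupted_block"] = i   # remember where we stopped
            return False                        # instruction paused
        if cache_blocks[i]["domain"] == domain:
            cache_blocks[i]["flushed"] = True
    register.pop("interrupted_block", None)     # flush complete, clear state
    return True
```

Because progress lives in the register rather than in the loop, re-issuing the instruction after interrupt handling picks up exactly where the flush left off.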
-
Patent number: 10838727
Abstract: A processing device is provided which includes memory and at least one processor. The memory includes main memory and cache memory in communication with the main memory via a link. The at least one processor is configured to receive a request for a cache line and read the cache line from main memory. The at least one processor is also configured to compress the cache line according to a compression algorithm and, when the compressed cache line includes at least one byte predicted not to be accessed, drop the at least one byte from the compressed cache line based on whether the compression algorithm is determined to successfully compress the cache line according to a compression parameter.
Type: Grant
Filed: December 14, 2018
Date of Patent: November 17, 2020
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Shomit N. Das, Kishore Punniyamurthy, Matthew Tomei, Bradford M. Beckmann
-
Patent number: 10831593
Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The operation and the fault memory address translation are flushed from the hardware accelerator including augmenting the operation with an entity identifier. The switchboard forwards the operation with the fault memory address translation and the entity identifier from the hardware accelerator to a second buffer. The operating system repairs the fault memory address translation. The operating system sends the operation to the processing core utilizing an effective address based on the entity identifier. The switchboard, supported by the processing core, forwards the operation with the repaired memory address translation to a first buffer and the hardware accelerator executes the operation with the repaired address.
Type: Grant
Filed: January 3, 2019
Date of Patent: November 10, 2020
Assignee: International Business Machines Corporation
Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
-
Patent number: 10831667
Abstract: Various aspects are described herein. In some aspects, the disclosure provides techniques for accessing tag information in a memory line. The techniques include determining an operation to perform on at least one memory line of a memory. The techniques further include performing the operation by accessing only a portion of the at least one memory line, wherein only the portion of the at least one memory line comprises one or more flag bits that are independently accessible from remaining bits of the at least one memory line.
Type: Grant
Filed: October 29, 2018
Date of Patent: November 10, 2020
Assignee: Qualcomm Incorporated
Inventors: Bharat Kumar Rangarajan, Chulmin Jung, Rakesh Misra
-
Patent number: 10782969
Abstract: A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode a vector cache line write back instruction. The vector cache line write back instruction is to indicate a source packed memory indices operand that is to include a plurality of memory indices. The processor also includes a cache coherency system coupled with the packed data registers and the decode unit. The cache coherency system, in response to the vector cache line write back instruction, is to cause any dirty cache lines, in any caches in a coherency domain, which are to have stored therein data for any of a plurality of memory addresses that are to be indicated by any of the memory indices of the source packed memory indices operand, to be written back toward one or more memories. Other processors, methods, and systems are also disclosed.
Type: Grant
Filed: May 14, 2018
Date of Patent: September 22, 2020
Assignee: Intel Corporation
Inventors: Kshitij A. Doshi, Thomas Willhalm
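The semantics of the vector write-back — for each index in the packed operand, write back any dirty line anywhere in the coherency domain that holds that address — can be stated as a short Python model. This is a functional sketch only (real hardware does this in the coherency fabric, not a loop); the dict-based cache representation and function name are assumptions.

```python
def vector_cache_line_write_back(memory_indices, caches, write_back):
    """For each memory index in the source packed operand, write back
    toward memory any dirty cache line, in any cache of the coherency
    domain, holding data for that address, and mark it clean.
    caches: list of {address: {"dirty": bool, "data": ...}} mappings.
    write_back: callable(address, data) modeling the memory write."""
    for idx in memory_indices:
        for cache in caches:
            line = cache.get(idx)
            if line is not None and line["dirty"]:
                write_back(idx, line["data"])
                line["dirty"] = False   # line stays cached, now clean
```

Unlike a flush, the lines remain resident after write-back; only their dirty state changes.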
-
Patent number: 10761759
Abstract: Preventing duplicate entries of identical data in a storage device, including: receiving a write request to write data to the storage device; calculating one or more signatures for the data associated with the write request; determining whether any of the calculated signatures match a calculated signature contained in a recently read signature buffer, each entry in the recently read signature buffer associating a calculated signature for data that has been read with an address of a storage location within the storage device where the data is stored; and responsive to determining that one of the calculated signatures matches a calculated signature contained in the recently read signature buffer, determining whether the data associated with the calculated signature is a duplicate of data stored at a particular address that is associated with the calculated signature contained in the recently read signature buffer.
Type: Grant
Filed: January 27, 2017
Date of Patent: September 1, 2020
Assignee: Pure Storage, Inc.
Inventors: Ronald S. Karr, Ethan L. Miller
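The signature-buffer lookup at the heart of this scheme can be sketched in Python with a plain dictionary standing in for the recently read signature buffer. The choice of SHA-256 and the dict structure are illustrative assumptions, not Pure Storage's actual signature scheme; a real system would still byte-compare the candidate data before deduplicating, since any lookup hit is only a hint.

```python
import hashlib

def signature(data):
    """Content signature for a block of data (SHA-256 chosen for the sketch)."""
    return hashlib.sha256(data).hexdigest()

def find_duplicate_candidate(write_data, recent_read_signatures):
    """Return the address of a storage location that may already hold
    identical data, or None. recent_read_signatures maps the signature
    of recently read data to the address where that data is stored."""
    return recent_read_signatures.get(signature(write_data))
```

Keying the buffer on recently *read* data is the clever part: data a host just read back is data it is likely to rewrite elsewhere.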
-
Patent number: 10754778
Abstract: Methods and systems for improved control of traffic generated by a processor are described. In an embodiment, when a device generates a pre-fetch request for a piece of data or an instruction from a memory hierarchy, the device includes a pre-fetch identifier in the request. This identifier flags the request as a pre-fetch request rather than a non-pre-fetch request, such as a time-critical request. Based on this identifier, the memory hierarchy can then issue an abort response at times of high traffic which suppresses the pre-fetch traffic, as the pre-fetch traffic is not fulfilled by the memory hierarchy. On receipt of an abort response, the device deletes at least a part of any record of the pre-fetch request and if the data/instruction is later required, a new request is issued at a higher priority than the original pre-fetch request.
Type: Grant
Filed: April 17, 2017
Date of Patent: August 25, 2020
Assignee: MIPS Tech, LLC
Inventor: Jason Meredith
-
Patent number: 10757034
Abstract: A queue flushing method includes scanning, by a queue flushing processor, a flushing status of a valid queue from a queue information table to determine a target queue whose flushing status is "to be flushed" in the queue information table, where the queue information table records the flushing status of the valid queue; modifying, by the queue flushing processor, the flushing status of the target queue to "start flushing"; and flushing, by the queue flushing processor, the target queue, where the flushing status of the target queue is modified to "flushing complete" after the target queue is flushed.
Type: Grant
Filed: February 28, 2018
Date of Patent: August 25, 2020
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Qin Zheng, Renjie Qu
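The three-state lifecycle the abstract describes — "to be flushed" → "start flushing" → "flushing complete" — is a small state machine, sketched below in Python. The table-as-dict layout and the `drain` callback are assumptions for illustration.

```python
TO_BE_FLUSHED = "to be flushed"
START_FLUSHING = "start flushing"
FLUSHING_COMPLETE = "flushing complete"

def flush_pending_queues(queue_info_table, drain):
    """Scan the queue information table for target queues whose status
    is 'to be flushed', mark each 'start flushing', flush it via the
    hypothetical `drain` callback, then mark it 'flushing complete'."""
    for queue_id, status in queue_info_table.items():
        if status == TO_BE_FLUSHED:
            queue_info_table[queue_id] = START_FLUSHING
            drain(queue_id)
            queue_info_table[queue_id] = FLUSHING_COMPLETE
```

Recording the intermediate "start flushing" state means an observer (or a restarted flushing processor) can distinguish queues mid-flush from queues never started.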
-
Patent number: 10757213
Abstract: A method for pushing data in a Content-Centric Networking (CCN) network comprises receiving a message from a source at an input interface device of a node device in the CCN network, the node device executing a CCN protocol. A determination is made that the received message is an interest-notification message. The received message is identified as the interest-notification message by including a Type field, a Content Name field, and a Cacheable Object field. The Type field indicates that the received message is pushing data within the CCN network. The Content Name field associates a hierarchical name to the data being pushed within the CCN network. The Cacheable Object field includes a cacheable object representing the data being pushed within the CCN network. The cacheable object is extracted in response to the received message being an interest-notification message. The cacheable object is placed in a cache at the node device.
Type: Grant
Filed: August 14, 2015
Date of Patent: August 25, 2020
Assignee: Futurewei Technologies, Inc.
Inventors: Ravishankar Ravindran, Guo-qiang Wang
-
Patent number: 10740241
Abstract: Embodiments of the present disclosure relate to a method and apparatus for managing cache. The method comprises determining a cache flush time period of the cache for a lower-layer storage device associated with the cache. The method further comprises: in response to a length of the cache flush time period being longer than a threshold length of time, in response to receiving a write request, determining whether data associated with the write request has been stored into the cache. The method further comprises: in response to a miss of the data in the cache, storing the write request and the data in the cache without returning a write completion message for the write request.
Type: Grant
Filed: June 1, 2018
Date of Patent: August 11, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Ruiyong Jia, Xinlei Xu, Lifeng Yang, Xiongcheng Li, Jian Gao
-
Patent number: 10713165
Abstract: A high-bandwidth adaptive cache reduces unnecessary cache accesses by providing a laundry counter indicating whether a given adaptive cache region has any dirty frames to allow write-back without a preparatory adaptive cache read. An optional laundry list allows the preparatory adaptive cache read to also be avoided if the tags of the data being written back match all tags of the data in the adaptive cache that is dirty.
Type: Grant
Filed: February 12, 2018
Date of Patent: July 14, 2020
Assignee: WISCONSIN ALUMNI RESEARCH FOUNDATION
Inventors: Jason Lowe-Power, David A. Wood, Mark D. Hill
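The laundry-counter idea — a per-region count of dirty frames that lets a clean region be written back without first reading the cache — can be sketched in a few lines of Python. The class interface is an assumption; the paper's optional laundry list (tag tracking) is omitted from this sketch.

```python
class LaundryCounters:
    """One dirty-frame counter per adaptive-cache region. A write-back
    targeting a region whose counter is zero can skip the preparatory
    adaptive-cache read entirely."""

    def __init__(self, num_regions):
        self.dirty_count = [0] * num_regions

    def mark_dirty(self, region):
        self.dirty_count[region] += 1

    def mark_clean(self, region):
        self.dirty_count[region] -= 1

    def needs_preparatory_read(self, region):
        # Only regions holding at least one dirty frame must be read
        # before write-back; clean regions can be overwritten directly.
        return self.dirty_count[region] > 0
```

The counters are tiny compared to the cache itself, which is what makes the check cheap enough to sit on the write-back fast path.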
-
Patent number: 10705969
Abstract: A dedupable cache is disclosed. The dedupable cache may include cache memory including both a dedupable read cache and a non-dedupable write buffer. The dedupable cache may also include a deduplication engine to manage reads from and writes to the dedupable read cache, and may return a write status signal indicating whether a write to the dedupable read cache was successful or not. The dedupable cache may also include a cache controller, which may include: a cache hit/miss check to determine whether an address in a request may be found in the dedupable read cache; a hit block to manage data accesses when the requested data may be found in the dedupable read cache; a miss block to manage data accesses when the requested data is not found in the dedupable read cache; and a history storage to store information about accesses to the data in the dedupable read cache.
Type: Grant
Filed: March 23, 2018
Date of Patent: July 7, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Mu Tien Chang, Andrew Chang, Dongyan Jiang, Hongzhong Zheng
-
Patent number: 10705962
Abstract: Embodiments of this disclosure provide a mechanism to use a portion of an inactive processing element's private cache as an extended last-level cache storage space to adaptively adjust the size of the shared cache. In one embodiment, a processing device is provided. The processing device comprises a cache controller to identify a cache line to evict from a shared cache. An inactive processing core is selected by the cache controller from a plurality of processing cores associated with the shared cache. Then, a private cache of the inactive processing core is notified of an identifier of a cache line associated with the shared cache. Thereupon, the cache line is evicted from the shared cache and installed in the private cache.
Type: Grant
Filed: December 21, 2017
Date of Patent: July 7, 2020
Assignee: Intel Corporation
Inventors: Carl J. Beckmann, Robert G. Blankenship, Chyi-Chang Miao, Chitra Natarajan, Anthony-Trung D. Nguyen
-
Patent number: 10691594
Abstract: The present disclosure is directed to systems and methods that include cache operation storage circuitry that selectively enables/disables the Cache Line Flush (CLFLUSH) operation. The cache operation storage circuitry may also selectively replace the CLFLUSH operation with one or more replacement operations that provide similar functionality but beneficially and advantageously prevent an attacker from placing processor cache circuitry in a known state during a timing-based side-channel attack such as Spectre or Meltdown. The cache operation storage circuitry includes model specific registers (MSRs) that contain information used to determine whether to enable/disable CLFLUSH functionality. The cache operation storage circuitry may include model specific registers (MSRs) that contain information used to select appropriate replacement operations such as Cache Line Demote (CLDEMOTE) and/or Cache Line Write Back (CLWB) to selectively replace CLFLUSH operations.
Type: Grant
Filed: June 29, 2018
Date of Patent: June 23, 2020
Assignee: Intel Corporation
Inventors: Vadim Sukhomlinov, Kshitij Doshi
-
Patent number: 10678576
Abstract: A technique for managing data storage for virtual machines in a data storage system includes receiving, from a virtual machine administrative program, a request to operate a virtual machine disk (VMD) at a different service level from one at which the data storage system is currently operating the VMD. In response to receiving the request, the data storage system migrates the VMD from a first set of storage extents providing a first service level to a second set of storage extents providing a second service level.
Type: Grant
Filed: June 30, 2015
Date of Patent: June 9, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Alan L. Taylor, Anil K. Koluguri, William C. Whitney, Arun Joseph, William S. Burney, Somchai Pitchayanonnetr
-
Patent number: 10659551
Abstract: Disclosed is a method and apparatus for performing steps to cause encoded information to be stored at a client device during a first network session between a server and the client device. To cause encoded information to be stored at a client device, the server first determines a set of network resource requests that encode the information. These network resource requests may include requests for one or more specific URLs and/or requests for one or more files. The server then causes the client device to initiate the network resource requests. The server may cause this initiation by, for example, redirecting the client device to the network resources. The client device initiating the network resource requests causes data representative of the network resource requests to be stored at the client device.
Type: Grant
Filed: December 4, 2014
Date of Patent: May 19, 2020
Assignee: RavenWhite Security, Inc.
Inventors: Bjorn Markus Jakobsson, Ari Juels
-
Patent number: 10628321
Abstract: Various embodiments include methods and devices for implementing progressive flush of a cache memory of a computing device. Various embodiments may include determining an activity state of a region of the cache memory, issuing a start cache memory flush command in response to determining that the activity state of the region is idle, flushing the region in response to the start cache memory flush command, determining that the activity state of the region is active, issuing an abort cache memory flush command in response to determining that the activity state of the region is active, and aborting flushing the region in response to the abort cache memory flush command.
Type: Grant
Filed: February 28, 2018
Date of Patent: April 21, 2020
Assignee: Qualcomm Incorporated
Inventors: Andrew Torchalski, Edwin Jose, Joshua Stubbs
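Progressive flushing — flush while idle, abort when the region turns active — can be modeled with a chunked loop that re-samples the activity state between chunks. The chunk size, return convention, and `is_idle` probe below are assumptions made for the sketch.

```python
def progressive_flush(region_lines, is_idle, chunk=2):
    """Flush a cache region's lines a chunk at a time, re-checking the
    region's activity state before each chunk. If the region becomes
    active, abort and report progress so a later pass can resume.
    Returns (lines_flushed, completed)."""
    done = 0
    while done < len(region_lines):
        if not is_idle():
            return done, False   # abort cache memory flush: region active
        for line in region_lines[done:done + chunk]:
            line["flushed"] = True
        done += chunk
    return len(region_lines), True
```

Aborting mid-region keeps the flush from competing with live traffic; the work already done is not wasted, since flushed lines stay clean until rewritten.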
-
Patent number: 10621093
Abstract: A heterogeneous computing system includes a first processor and a second processor that are heterogeneous. The second processor is configured to sequentially execute a plurality of kernels offloaded from the first processor. A coherency controller is configured to classify each of the plurality of kernels into one of a first group and a second group, based on attributes of instructions included in each of the plurality of kernels before the plurality of kernels are executed and is further configured to reclassify one of the plurality of kernels from the second group to the first group based on a transaction generated between the first processor and the second processor during execution of the one of the plurality of kernels.
Type: Grant
Filed: August 10, 2018
Date of Patent: April 14, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventor: Hyunjun Jang
-
Patent number: 10599356
Abstract: A method and apparatus for utilizing virtual machines to pool memory from disparate server systems that may have disparate types of memory are described. The method may include establishing communication between a pool virtual machine and two or more publisher virtual machines. The method may also include aggregating, by the pool virtual machine, portions of memory from each of two or more publisher servers to generate a pool of memory, and providing an application with access to the pool of memory, through the pool virtual machine.
Type: Grant
Filed: February 10, 2015
Date of Patent: March 24, 2020
Assignee: HIVEIO INC.
Inventors: Chetan Venkatesh, Jin Liu, Qian Zhang, Pu Paul Zhang
-
Patent number: 10599353
Abstract: This application sets forth techniques for managing the allocation of storage space within a storage device that is communicably coupled to a computing device. Requests are received from a plurality of applications executing on the computing device, in which each request specifies a respective amount of storage space to be reserved within the storage device. Detection is performed for the availability of a minimum amount of free space that corresponds to an optimal amount of space for executing at least one application of the plurality of applications. A respective priority ranking for each application is identified based on historical data gathered for the applications. Based on the priority rankings, a subset of requests from the plurality of requests is established. For each request of the subset, at least a portion of the respective amount of space specified by the request is reserved while maintaining the minimum amount of free space.
Type: Grant
Filed: October 4, 2017
Date of Patent: March 24, 2020
Assignee: Apple Inc.
Inventors: Mark A. Pauley, Cameron S. Birse, Kazuhisa Yanagihara, Susan M. Grady, Timothy P. Hannon
-
Patent number: 10592323
Abstract: A storage system maintains a cache and a non-volatile storage. An error recovery component queries a cache component to determine whether modified customer data exists in a memory preserve cache. In response to determining that the modified customer data exists in the memory preserve cache, and in response to a failure beyond a threshold number of times of initial microcode load (IML) attempts to recover the modified customer data, an error notification is transmitted for manual intervention to avoid loss of the modified customer data.
Type: Grant
Filed: October 23, 2017
Date of Patent: March 17, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos
-
Systems and methods for monitoring data synchronization progress in a multi-hop data recovery system
Patent number: 10592160
Abstract: The disclosed computer-implemented method for monitoring data synchronization progress in a multi-hop data recovery system may include (i) calculating a number of data blocks to be synchronized, (ii) setting each element of a synchronization data structure to dirty, (iii) determining a dirty bytes counter, (iv) transmitting a portion of the data blocks to be synchronized, (v) receiving an acknowledgement corresponding to the transmitted portion of the data blocks, (vi) setting a set of elements within the synchronization data structure corresponding to the transmitted portion of the data blocks to clean, (vii) determining a pending dirty bytes counter that indicates a current number of elements within the synchronization data structure that are set to dirty, and (viii) transmitting the dirty bytes counter and the pending dirty bytes counter. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: November 9, 2018
Date of Patent: March 17, 2020
Assignee: Veritas Technologies LLC
Inventors: Anish Vaidya, Sunil Hasbe, Om Prakash Agarwal, Rushikesh Patil, Ashit Kumar, Venkata Sreenivasa Rao Nagineni
-
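The dirty-tracking steps (ii), (vi), and (vii) above amount to a bitmap with a pending-dirty count, sketched below in Python. Representing the synchronization data structure as a list of booleans and the function names are assumptions for illustration.

```python
def start_sync(num_blocks):
    """Step (ii): set every element of the synchronization data
    structure to dirty at the start of synchronization."""
    return [True] * num_blocks

def acknowledge(bitmap, block_indices):
    """Steps (v)-(vi): on acknowledgement of transmitted blocks, set
    the corresponding elements to clean."""
    for i in block_indices:
        bitmap[i] = False

def pending_dirty(bitmap):
    """Step (vii): count of elements still set to dirty, i.e. blocks
    not yet acknowledged across the multi-hop chain."""
    return sum(bitmap)
```

Reporting both the original dirty count and the pending count lets an operator compute synchronization progress as a simple ratio.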
Patent number: 10585744
Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator pulls an operation from a first buffer and adjusts a receive credit value in a first window context operatively coupled to the hypervisor. The receive credit value is to limit a first quantity of one or more first tasks in the first buffer. The hardware accelerator determines at least one memory address translation related to the operation having a fault. The switchboard forwards the operation with the fault memory address translation from the hardware accelerator to a second buffer. The operation and the fault memory address translation are flushed from the hardware accelerator, and the operating system repairs the fault memory address translation.
Type: Grant
Filed: November 2, 2017
Date of Patent: March 10, 2020
Assignee: International Business Machines Corporation
Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
-
Patent number: 10582549
Abstract: A wireless hotspot operation method used in a wireless hotspot device includes the steps described below. A remote device is wirelessly connected to the wireless hotspot device and whether a physical address of the remote device exists in an address table is determined, wherein the address table includes a plurality of entries each configured to record a connected device physical address of a connected device. When the physical address does not exist in the address table, whether a capacity of the address table exceeds a threshold value is determined. When the capacity exceeds the threshold value, one of the connection parameters of the connected device satisfying a removing criterion is determined. The connected device physical address of the connected device with the connection parameter satisfying the removing criterion is removed from the address table. The physical address of the remote device is added to the address table.
Type: Grant
Filed: October 30, 2018
Date of Patent: March 3, 2020
Assignee: PEGATRON CORPORATION
Inventor: Ying-Chieh Mou
-
Patent number: 10579523
Abstract: The subject matter described herein relates to a file system with adaptive flushing for an electronic device. The file system keeps data in memory much longer, and its policy for flushing the in-memory write cache to storage is application-aware and adaptive. More specifically, which parts of the cached data are ready for flushing can be determined according to the access characteristics of an application. In addition, when to flush can be selected flexibly, at least partly based on user input interactions with an application of the electronic device or with the electronic device. Further, a multi-priority scheduling mechanism for scheduling data units that are ready to be flushed can be employed, which ensures fairness among applications and further improves flushing performance.
Type: Grant
Filed: August 15, 2014
Date of Patent: March 3, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jinglei Ren, Chieh-Jan Mike Liang, Thomas Moscibroda
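The multi-priority scheduling mechanism mentioned above can be sketched as follows: drain higher-priority levels first, and round-robin across applications within each level for fairness. The priority convention (lower number = higher priority) and the tuple layout are illustrative assumptions, not the patent's actual design:

```python
from collections import defaultdict, deque

def schedule_flush(ready_units):
    """Order data units for flushing: higher-priority levels first,
    round-robin across applications inside each level so that no
    single application starves the others.

    Each unit is (priority, app, data); lower priority number means
    more urgent (an illustrative convention).
    """
    by_level = defaultdict(lambda: defaultdict(deque))
    for prio, app, data in ready_units:
        by_level[prio][app].append(data)
    order = []
    for prio in sorted(by_level):
        queues = by_level[prio]
        while any(queues.values()):
            # One round-robin pass over the applications at this level.
            for app in sorted(queues):
                if queues[app]:
                    order.append(queues[app].popleft())
    return order
```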
-
Patent number: 10573391
Abstract: Devices and techniques for enhanced flush transfer efficiency via flush prediction in a storage device are described herein. User data from a user data write can be stored in a buffer. The size of the user data stored in the buffer can be smaller than a write width for a storage device subject to the write. This size difference results in buffer free space. A flush trigger can be predicted. Additional data can be marshalled in response to the prediction of the flush trigger. The size of the additional data is less than or equal to the buffer free space. The additional data can be stored in the buffer free space. The contents of the buffer can be written to the storage device in response to the flush trigger.
Type: Grant
Filed: December 3, 2018
Date of Patent: February 25, 2020
Assignee: Micron Technology, Inc.
Inventor: David Aaron Palmer
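The buffer-filling step can be sketched as below: when a flush trigger is predicted, additional data is marshalled into the buffer's free space so the full write width is used. The write width, the source of the additional data, and all names are illustrative assumptions:

```python
WRITE_WIDTH = 8  # storage write width in data units (illustrative)

def prepare_flush(user_data, extra_source):
    """On a predicted flush trigger, marshal additional data into the
    buffer free space left by user data smaller than the write width.

    `extra_source` supplies candidate additional data (e.g. deferred
    metadata writes -- an assumption; the abstract does not say what
    the additional data is). Never take more than the free space.
    """
    buffer = list(user_data)
    free = WRITE_WIDTH - len(buffer)
    buffer.extend(extra_source[:free])
    return buffer
```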
-
Patent number: 10572337
Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The operation and the fault memory address translation are flushed from the hardware accelerator, which includes augmenting the operation with an entity identifier. The switchboard forwards the operation with the fault memory address translation and the entity identifier from the hardware accelerator to a second buffer. The operating system repairs the fault memory address translation. The operating system sends the operation to the processing core utilizing an effective address based on the entity identifier. The switchboard, supported by the processing core, forwards the operation with the repaired memory address translation to a first buffer, and the hardware accelerator executes the operation with the repaired address.
Type: Grant
Filed: May 1, 2017
Date of Patent: February 25, 2020
Assignee: International Business Machines Corporation
Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
-
Patent number: 10558569
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to control a cache. An example method includes monitoring cache lines in a cache, the cache lines storing recently written data to the cache, the recently written data corresponding to main memory, comparing a total quantity of the cache lines to a threshold that is less than a cache line storage capacity of the cache, and causing a write back of at least one of the cache lines to the main memory when a store event causes the total quantity of the cache lines to satisfy the threshold.
Type: Grant
Filed: October 31, 2013
Date of Patent: February 11, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Hans Boehm, Dhruva Chakrabarti
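The mechanism above can be sketched as a cache that counts dirty (recently written) lines and writes one back when a store pushes the count to a threshold below full capacity. The choice of which line to write back (the oldest dirty line) is an illustrative assumption:

```python
class WriteBackCache:
    """Count dirty lines; when a store makes the count satisfy a
    threshold smaller than the cache's capacity, write a line back
    to main memory. Eviction of the oldest dirty line is an
    assumption -- the abstract only requires writing back at least
    one line."""

    def __init__(self, capacity, threshold, memory):
        assert threshold < capacity  # threshold is below full capacity
        self.threshold = threshold
        self.memory = memory  # dict modelling main memory: addr -> value
        self.dirty = {}       # insertion-ordered dirty lines

    def store(self, addr, value):
        self.dirty[addr] = value
        if len(self.dirty) >= self.threshold:
            # Write back the oldest dirty line to main memory.
            old_addr = next(iter(self.dirty))
            self.memory[old_addr] = self.dirty.pop(old_addr)
```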
-
Patent number: 10552153
Abstract: A method and apparatus for efficient range-based memory write back are described herein. One embodiment of an apparatus includes a system memory, a plurality of hardware processor cores each of which includes a first cache, decoder circuitry to decode an instruction having fields for a first memory address and a range indicator, and execution circuitry to execute the decoded instruction. Together, the first memory address and the range indicator define a contiguous region in the system memory that includes one or more cache lines. An execution of the decoded instruction causes any instances of the one or more cache lines in the first cache to be invalidated. Additionally, any invalidated instances of the one or more cache lines that are dirty are to be stored to system memory.
Type: Grant
Filed: March 31, 2017
Date of Patent: February 4, 2020
Assignee: Intel Corporation
Inventors: Ren Wang, Chunhui Zhang, Qixiong J. Bian, Bret L. Toll, Jason W. Brandt
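The instruction's effect can be modelled as below: the base address and range indicator define a contiguous region; every cached line in the region is invalidated, and dirty lines are stored to system memory first. The line size and data layout are illustrative, not the patent's encoding:

```python
LINE_SIZE = 64  # cache line size in bytes (illustrative)

def range_writeback_invalidate(cache, memory, base, length):
    """Model the described instruction: invalidate every line of the
    contiguous region [base, base+length) present in the cache, and
    write dirty lines to system memory before discarding them.

    `cache` maps line-aligned addresses to (value, dirty) pairs;
    `memory` is a dict modelling system memory.
    """
    first_line = base - (base % LINE_SIZE)
    for line in range(first_line, base + length, LINE_SIZE):
        if line in cache:
            value, dirty = cache.pop(line)  # invalidate the instance
            if dirty:
                memory[line] = value        # dirty data reaches memory
```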
-
Patent number: 10552344
Abstract: A secure enclave circuit stores an enclave page cache map to track contents of a secure enclave in system memory that stores secure data containing a page having a virtual address. An execution unit is to, in response to a request to evict the page from the secure enclave: block creation of translations of the virtual address; record one or more hardware threads currently accessing the secure data in the secure enclave; send an inter-processor interrupt to one or more cores associated with the one or more hardware threads, to cause the one or more hardware threads to exit the secure enclave and to flush translation lookaside buffers of the one or more cores; and in response to detection of a page fault associated with the virtual address for the page in the secure enclave, unblock the creation of translations of the virtual address.
Type: Grant
Filed: December 26, 2017
Date of Patent: February 4, 2020
Assignee: Intel Corporation
Inventors: Carlos V. Rozas, Ittai Anati, Francis X. McKeen, Krystof Zmudzinski, Ilya Alexandrovich, Somnath Chakrabarti, Dror Caspi, Meltem Ozsoy
-
Patent number: 10545816
Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator pulls an operation from a first buffer and adjusts a receive credit value in a first window context operatively coupled to the hypervisor. The receive credit value limits a first quantity of one or more first tasks in the first buffer. The hardware accelerator determines at least one memory address translation related to the operation having a fault. The switchboard forwards the operation with the fault memory address translation from the hardware accelerator to a second buffer. The operation and the fault memory address translation are flushed from the hardware accelerator, and the operating system repairs the fault memory address translation.
Type: Grant
Filed: May 1, 2017
Date of Patent: January 28, 2020
Assignee: International Business Machines Corporation
Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
-
Patent number: 10530883
Abstract: Systems, methods, and software for operating a content delivery system to purge cached content are provided herein. In one example, purge messages are transferred for delivery to content delivery nodes (CDNs) in the content delivery system. The CDNs receive the messages, purge content associated with the messages, and compile purge summaries based on the messages. The CDNs further periodically transfer the purge summaries to one another to compare the messages received, and gather purge information for purge messages that may have been inadvertently missed by the CDNs.
Type: Grant
Filed: March 26, 2014
Date of Patent: January 7, 2020
Assignee: Fastly Inc.
Inventors: Bruce Spang, Tyler B. McMullen
-
Patent number: 10528418
Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The switchboard forwards the operation with the fault memory address translation from the hardware accelerator to a second buffer. The operation and the fault memory address translation are flushed from the hardware accelerator, and the operating system repairs the fault memory address translation. The switchboard forwards the operation with the repaired memory address translation from the second buffer to a first buffer, and the hardware accelerator executes the operation with the repaired address.
Type: Grant
Filed: October 30, 2017
Date of Patent: January 7, 2020
Assignee: International Business Machines Corporation
Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
-
Patent number: 10521236
Abstract: In an embodiment, a processor includes a branch prediction circuit and a plurality of processing engines. The branch prediction circuit is to: detect a coherence operation associated with a first memory address; identify a first branch instruction associated with the first memory address; and predict a direction for the identified branch instruction based on the detected coherence operation. Other embodiments are described and claimed.
Type: Grant
Filed: March 29, 2018
Date of Patent: December 31, 2019
Assignee: Intel Corporation
Inventors: Christopher Wilkerson, Binh Pham, Patrick Lu, Jared Warner Stark, IV
-
Patent number: 10514865
Abstract: Techniques for managing concurrent I/Os in a file system may include receiving a sequence of conflicting I/O lists of write data stored in a cache, the sequence specifying a sequential order in which the I/O lists are to be flushed to a file stored on non-volatile storage; determining a first I/O list of the sequence having a conflict with a second I/O list of the sequence, wherein the conflict between the first I/O list and the second I/O list is a first common block written to by both the first and second I/O lists; and performing first processing that modifies the first I/O list and the second I/O list to remove the conflict.
Type: Grant
Filed: April 24, 2018
Date of Patent: December 24, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Ivan Bassov, Hao Fang
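The conflict-removal step can be sketched as below. The resolution shown (dropping the common block from the earlier list, since the later write supersedes it) is an assumption; the abstract only states that both lists are modified to remove the conflict:

```python
def remove_conflict(first_list, second_list):
    """Given two I/O lists (dicts: block number -> data) that must be
    flushed in sequence, drop blocks written by both from the earlier
    list so the two lists no longer conflict and can be flushed
    concurrently. The later list's data wins, preserving the outcome
    of the original sequential order.

    This particular resolution is an illustrative assumption.
    Returns the set of blocks that conflicted.
    """
    common = set(first_list) & set(second_list)
    for block in common:
        del first_list[block]
    return common
```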
-
Patent number: 10509725
Abstract: Techniques are provided for performing a flush operation in a non-coherent cache. In response to determining to perform a flush operation, a cache unit flushes certain data items. The flush operation may be performed in response to a lapse of a particular amount of time, such as a number of cycles, or an explicit flush instruction that does not indicate any cache entry or data item. The cache unit may store change data that indicates which entry stores a data item that has been modified but not yet been flushed. The change data may be used to identify the entries that need to be flushed. In one technique, a dirty cache entry that is associated with one or more relatively recent changes is not flushed during a flush operation.
Type: Grant
Filed: March 8, 2013
Date of Patent: December 17, 2019
Assignee: Oracle International Corporation
Inventors: Sungpack Hong, Hassan Chafi, Eric Sedlar
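A minimal sketch of the flush behaviour described above: change data identifies the dirty entries, and a dirty entry with a relatively recent change is skipped during the flush. The entry layout, the age parameter, and all names are illustrative assumptions:

```python
def flush(cache, memory, now, min_age):
    """Flush dirty entries from a non-coherent cache to backing memory,
    skipping any dirty entry changed too recently (within `min_age`
    of `now`) -- it is likely to be modified again soon.

    `cache` maps keys to (value, dirty, last_change_time) tuples,
    serving as the change data; `memory` is a dict for backing store.
    """
    for key, (value, dirty, changed) in list(cache.items()):
        if dirty and now - changed >= min_age:
            memory[key] = value
            cache[key] = (value, False, changed)  # entry is now clean
```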
-
Patent number: 10503694
Abstract: Deleting items based on user interaction is disclosed, including: determining a set of items to delete from a device; generating a representation associated with the set of items; presenting the representation associated with the set of items at the device; detecting a user input operation associated with modifying the presentation of the representation associated with the set of items at the device; and deleting at least a portion of the set of items based at least in part on the modified presentation of the representation associated with the set of items.
Type: Grant
Filed: June 19, 2017
Date of Patent: December 10, 2019
Assignee: Alibaba Group Holding Limited
Inventor: Aiqing Chen
-
Patent number: 10496544
Abstract: In one embodiment, aggregated write back in a direct mapped two level memory in accordance with the present description aggregates a dirty block or other subunit of data being evicted from a near memory of a two level memory system with other spatially co-located dirty subunits of data in a sector or other unit of data for write back to a far memory of the two level memory system. In one embodiment, dirty spatially co-located subunits are scrubbed and aggregated with one or more spatially co-located dirty subunits being evicted. In one embodiment, a write combining buffer is utilized to aggregate spatially co-located dirty subunits prior to being transferred to a far memory write buffer in a write back operation. Other aspects are described herein.
Type: Grant
Filed: December 29, 2016
Date of Patent: December 3, 2019
Assignee: Intel Corporation
Inventors: Zhe Wang, Christopher B. Wilkerson, Zeshan A. Chishti
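The aggregation step can be sketched as below: when a dirty subunit is evicted from near memory, the other dirty subunits of the same sector are scrubbed (marked clean) and combined with it into a single far-memory write. The sector geometry and data layout are illustrative assumptions:

```python
SECTOR_SUBUNITS = 4  # subunits per sector (illustrative)

def aggregate_eviction(near_memory, evict_addr):
    """Build a single aggregated far-memory write for the eviction of
    one dirty subunit plus its spatially co-located dirty subunits.

    `near_memory` maps subunit addresses to (value, dirty) pairs.
    Returns the aggregated write (addr -> value), ready for the far
    memory write buffer.
    """
    sector = evict_addr // SECTOR_SUBUNITS
    writeback = {}
    for addr in range(sector * SECTOR_SUBUNITS,
                      (sector + 1) * SECTOR_SUBUNITS):
        entry = near_memory.get(addr)
        if entry and entry[1]:                     # co-located dirty subunit
            writeback[addr] = entry[0]
            near_memory[addr] = (entry[0], False)  # scrubbed: now clean
    del near_memory[evict_addr]                    # evicted subunit leaves
    return writeback
```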
-
Patent number: 10489298
Abstract: An apparatus for assisting a flush of a cache is described herein. The apparatus comprises a processing element. The processing element is to probe a cache line at an offset address and write the cache line at the offset address to a non-volatile memory in response to a flush instruction at a first address.
Type: Grant
Filed: July 28, 2015
Date of Patent: November 26, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Derek Alan Sherlock, Shawn Walker
-
Patent number: 10489237
Abstract: In one example, a processor may include a processor core with a central processing unit as well as a processor cache separate from the processor core. The processor may also include flushing circuitry. The flushing circuitry may identify a power loss event for the processor. In response, the flushing circuitry may selectively power the processor by providing power to the processor cache but not to the processor core. The flushing circuitry may further flush data content of the processor cache to a non-volatile memory separate from the processor.
Type: Grant
Filed: December 19, 2014
Date of Patent: November 26, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: David Engler, Mark Kapoor, Patrick Raymond