Cache Flushing Patents (Class 711/135)
  • Patent number: 10482028
    Abstract: A mechanism is described for facilitating optimization of cache associated with graphics processors at computing devices. A method of embodiments, as described herein, includes introducing coloring bits to contents of a cache associated with a processor including a graphics processor, wherein the coloring bits represent a signal identifying one or more caches available for use, while avoiding explicit invalidations and flushes.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: November 19, 2019
    Assignee: INTEL CORPORATION
    Inventors: Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
  • Patent number: 10482033
    Abstract: A memory controller includes a dirty group detector configured to, in response to receiving a request for writing data to a memory, modify addresses of a cache group related to a physical address of the memory, increase counters corresponding to the modified addresses of the cache group, and detect whether the cache group is in a dirty state based on the counters; and a dirty list manager configured to manage the cache group in the dirty state and a dirty list including dirty bits according to a result of the detecting; wherein the dirty bits indicate whether a cache set included in the cache group is in the dirty state.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: November 19, 2019
    Assignees: SAMSUNG ELECTRONICS CO., LTD, SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Sangheon Lee, Dongwoo Lee, Kiyoung Choi, Soojung Ryu
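    A minimal Python sketch of the counter-based dirty-group tracking this entry describes; the group span, dirty threshold, and class names are illustrative assumptions, not taken from the patent.
      class DirtyGroupDetector:
          """Per-group write counters; a group whose counter reaches the
          threshold is added to the dirty list (a hedged simplification)."""

          def __init__(self, group_bytes=4096, dirty_threshold=1):
              self.group_bytes = group_bytes          # address span mapped to one cache group (assumed)
              self.dirty_threshold = dirty_threshold  # writes needed before the group counts as dirty (assumed)
              self.counters = {}                      # group id -> write counter
              self.dirty_list = set()                 # groups currently in the dirty state

          def on_write(self, physical_address):
              # A write request modifies the addresses of the cache group that
              # covers this physical address; bump that group's counter.
              group = physical_address // self.group_bytes
              self.counters[group] = self.counters.get(group, 0) + 1
              if self.counters[group] >= self.dirty_threshold:
                  self.dirty_list.add(group)          # group detected as dirty

          def on_writeback(self, group):
              # After the group's cache sets are written back, clear its state.
              self.counters[group] = 0
              self.dirty_list.discard(group)

      detector = DirtyGroupDetector()
      detector.on_write(0x12345)
      print(detector.dirty_list)   # {18}: the group covering address 0x12345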
  • Patent number: 10466932
    Abstract: A technique for managing data storage in a data storage system is disclosed. Data blocks are written to a data storage system cache, with pluralities of the data blocks being organized into cache macroblocks having a fixed size. Access requests for the data blocks are processed, wherein processing includes generating block access statistics. Using the access statistics, data blocks stored in the cache macroblocks having access times that overlap are identified. Data blocks identified as having overlapping access times are rearranged into one or more overlap cache macroblocks.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: November 5, 2019
    Assignee: EMC IP Holding Company LLC
    Inventor: Alexey Valentinovich Romanovskiy
  • Patent number: 10452294
    Abstract: In one or more embodiments, one or more systems, methods, and/or processes may configure an input/output memory management unit; may receive, from a device associated with an information handling system, a request for an allocation of storage of a memory medium of the information handling system; may allocate the storage of the memory medium without an interaction with a processor of the information handling system and without an interaction with an operating system executed by the processor; may add a table entry, associated with the allocation of storage of the memory medium, to a page table; may provide update information to the operating system; may provide a success response to the device; may store first data from the device; and may provide second data to the device.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: October 22, 2019
    Assignee: Dell Products L.P.
    Inventors: Shyamkumar Thiyagarajan Iyer, Vadhiraj Sankaranarayanan
  • Patent number: 10452539
    Abstract: Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for performing actions during simulation of an application interacting with a hybrid memory system, actions including providing a first range of virtual addresses corresponding to a first type of memory in the hybrid memory system, and a second range of virtual addresses corresponding to a second type of memory in the hybrid memory system, receiving a data packet that is to be stored in the hybrid memory system, determining a virtual address assigned to the data packet, the virtual address being provided in cache block metadata associated with the data packet, and storing the data packet in one of the first type of memory and the second type of memory in the hybrid memory system based on the virtual address, the first range of virtual addresses, and the second range of virtual addresses.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: October 22, 2019
    Assignee: SAP SE
    Inventor: Ahmad Hassan
  • Patent number: 10452431
    Abstract: A memory system may include: a memory device; and a controller, wherein the controller includes: a receiving unit suitable for receiving a plurality of tasks from a host; and a task processing unit suitable for re-arranging the plurality of the tasks based on the number of the plurality of the tasks and a priority order, and performing the re-arranged tasks.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: October 22, 2019
    Assignee: SK hynix Inc.
    Inventors: An-Ho Choi, Jun-Seop Chung
  • Patent number: 10445236
    Abstract: A thread on a processor core executes one or more instructions to write file data for a file into a persistent memory save area. The instructions to write the file data have the effect of storing the file data for the file in the cache associated with the processor core. The thread running on the processor core flushes the file data from the cache to the persistent memory save area while retaining the file data in the cache. The thread running on the processor core copies the file data from the cache for the processor core into a persistent copy of the file that is stored in persistent memory.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: October 15, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventor: Thomas Boyle
  • Patent number: 10430083
    Abstract: Disclosed is a method of operating a memory system which executes a plurality of commands including a write command and a trim command. The memory system includes a memory device, which includes a plurality of blocks. The method further includes performing garbage collection for generating a free block, calculating a workload level in performing the garbage collection, and changing a command schedule based on the workload level.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: October 1, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Byung-gook Kim, Young-bong Kim, Seung-hwan Ha, Sang-hoon Yoo
  • Patent number: 10423540
    Abstract: Provided are an apparatus, system, and method to determine a cache line in a first memory device to be evicted for an incoming cache line in a second memory device. An incoming cache line is read from the second memory device. A plurality of cache lines in the first memory device are processed to determine an eviction cache line of the plurality of cache lines in the first memory device having a least number of bits that differ from corresponding bits in the incoming cache line. Bits from the incoming cache line that are different from the bits in the eviction cache line are written to the eviction cache line in the first memory device.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: September 24, 2019
    Assignee: INTEL CORPORATION
    Inventors: Helia Naeimi, Qi Zeng
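    The entry above selects an eviction victim by bit similarity to the incoming line. The sketch below is a plain Python rendering of that idea, assuming cache lines are modeled as equal-width integers; it is illustrative only.
      def count_differing_bits(a: int, b: int) -> int:
          """Number of bit positions in which two cache lines differ (Hamming distance)."""
          return bin(a ^ b).count("1")

      def pick_eviction_line(candidate_lines: list[int], incoming_line: int) -> int:
          """Return the index of the candidate whose contents differ from the
          incoming line in the fewest bits, so the fewest bits must be rewritten."""
          return min(range(len(candidate_lines)),
                     key=lambda i: count_differing_bits(candidate_lines[i], incoming_line))

      # Example: line 2 differs from the incoming line in only one bit, so it is chosen.
      lines = [0b1111_0000, 0b1010_1010, 0b1100_1101]
      incoming = 0b1100_1111
      victim = pick_eviction_line(lines, incoming)
      print(victim, bin(lines[victim] ^ incoming))   # 2 0b10 -> only the differing bits are written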
  • Patent number: 10417138
    Abstract: Provided are a computer program product, system, and method for considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage. One of a plurality of densities for one of a plurality of groups of tracks is incremented in response to determining at least one of that the group is not ready to destage and that one of the tracks in the group in the cache transitions to being ready to destage. A determination is made of a group frequency indicating a frequency at which tracks in the group are modified. At least one of the density and the group frequency is used for each of the groups to determine whether to destage the group. The tracks in the group in the cache are destaged to the storage in response to determining to destage the group.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: September 17, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Lokesh M. Gupta
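    A hedged sketch of the selection rule the abstract above outlines: each group of tracks carries a density and a modification frequency, and a group is destaged when a combined score crosses a threshold. The weights, scoring formula, and threshold are assumptions for illustration only.
      def should_destage(density: int, group_frequency: int,
                         density_weight: float = 1.0,
                         frequency_weight: float = 0.5,
                         threshold: float = 10.0) -> bool:
          """Combine a group's density (how full of modified tracks it is) with its
          modification frequency; dense, rarely re-modified groups destage first."""
          score = density_weight * density - frequency_weight * group_frequency
          return score >= threshold

      # A dense group that is rarely re-modified is a good destage candidate.
      print(should_destage(density=16, group_frequency=2))   # True
      print(should_destage(density=4, group_frequency=8))    # False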
  • Patent number: 10417046
    Abstract: The present disclosure provides a display method for operating systems, a display device for operating systems, and a multi-system terminal. The display method includes: running multiple operating systems simultaneously; and displaying each of the multiple operating systems in a preset display mode. The multiple operating systems running simultaneously are displayed on one or more display screens, and restarting the terminal is avoided when switching between operating systems; thus, a user can operate the multiple operating systems simultaneously or separately, which facilitates the user's operation and improves the user's experience.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: September 17, 2019
    Assignee: Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd.
    Inventors: Chiqiang Wu, Zhengyi Huang
  • Patent number: 10387330
    Abstract: Apparatuses, systems, methods, and program products are disclosed for cache replacement. An apparatus includes a cache memory structure, a processor, and memory that stores code executable by the processor. The code is executable by the processor to receive a value to be stored in the cache memory structure, identify, in response to determining that the received value is not currently stored in an entry of the cache memory structure, a least recently used (“LRU”) set of entries of the cache memory structure where the received value can be stored, and select a least frequently used (“LFU”) entry of the identified LRU set of entries for storing the received value.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: August 20, 2019
    Assignee: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD
    Inventor: Daniel J. Colglazier
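    A small sketch of the two-stage replacement policy described above, assuming each entry records a last-access time and an access count; the data layout and names are illustrative, not from the patent.
      from dataclasses import dataclass

      @dataclass
      class Entry:
          value: object
          last_used: int      # logical timestamp of last access
          use_count: int      # number of accesses (frequency)

      def select_victim(entries: list[Entry]) -> Entry:
          """Find the least-recently-used set of entries, then pick the
          least-frequently-used entry within that set."""
          oldest = min(e.last_used for e in entries)
          lru_set = [e for e in entries if e.last_used == oldest]
          return min(lru_set, key=lambda e: e.use_count)

      entries = [Entry("a", last_used=5, use_count=9),
                 Entry("b", last_used=2, use_count=7),
                 Entry("c", last_used=2, use_count=3)]
      print(select_victim(entries).value)   # "c": in the oldest set and least frequently used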
  • Patent number: 10339053
    Abstract: Examples disclosed herein relate to variable cache flushing. In some examples disclosed herein, a storage controller may detect a cache flush failure and, in response, may execute a first reattempt of the cache flush after a first time period has elapsed. The storage controller may adjust durations of time periods between subsequent reattempts of the cache flush based on various factors.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: July 2, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Daniel J. Mazina, Matt Gates, David C. Burden
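    The abstract above describes reattempting a failed cache flush after adjustable delays. The sketch below shows one plausible exponential-backoff reading of that idea; the backoff factor, cap, and flush callback are assumptions rather than the patented method.
      import time

      def flush_with_backoff(flush_fn, first_delay=0.1, factor=2.0,
                             max_delay=5.0, max_attempts=6) -> bool:
          """Retry a failing cache flush, lengthening the wait between reattempts."""
          delay = first_delay
          for _ in range(max_attempts):
              if flush_fn():                 # flush_fn returns True on success
                  return True
              time.sleep(delay)              # wait before the next reattempt
              delay = min(delay * factor, max_delay)
          return False

      # Toy flush that succeeds on its third attempt.
      attempts = {"n": 0}
      def flaky_flush():
          attempts["n"] += 1
          return attempts["n"] >= 3

      print(flush_with_backoff(flaky_flush))   # True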
  • Patent number: 10339049
    Abstract: A garbage collection facility is provided for memory management within a computer. The facility implements, in part, grouping of infrequently accessed data units in a designated transient memory area, and includes designating an area of the memory as a transient memory area and an area as a conventional memory area, and counting, for each data unit in the transient or conventional memory areas, a number of accesses to the data unit. The counting provides a respective access count for each data unit. For each data unit in the transient memory area or the conventional memory area, a determination is made whether the respective access count is below a transient threshold ascertained to separate frequently accessed data units and infrequently used data units. Data units with respective access counts below the transient threshold are grouped together as transient data units within the transient memory area.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: July 2, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giles R. Frazier, Michael K. Gschwind, Christian Jacobi, Younes Manton, Anthony Saporito, Chung-Lung K. Shum
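    A minimal sketch of the grouping step described above: data units whose access counts fall below a transient threshold are placed together in the transient area. The threshold value and the dictionary representation of data units are illustrative assumptions.
      def partition_by_access_count(access_counts: dict[str, int],
                                    transient_threshold: int):
          """Split data units into transient (infrequently accessed) and
          conventional (frequently accessed) groups."""
          transient = {u for u, c in access_counts.items() if c < transient_threshold}
          conventional = set(access_counts) - transient
          return transient, conventional

      counts = {"obj_a": 1, "obj_b": 40, "obj_c": 2, "obj_d": 15}
      transient, conventional = partition_by_access_count(counts, transient_threshold=5)
      print(sorted(transient))      # ['obj_a', 'obj_c']
      print(sorted(conventional))   # ['obj_b', 'obj_d']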
  • Patent number: 10338870
    Abstract: Provided is an electronic device capable of effectively using a wireless tag. A second communication-control unit controls communication with a wireless tag. The second system-control unit reads management information that manages the data write area of the wireless tag via the second communication-control unit. If there is an empty area in the management information, the second system-control unit writes setup information, which has status information indicating the setup state, to the empty area.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: July 2, 2019
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Takeshi Hamakawa
  • Patent number: 10331364
    Abstract: A system configuration containing a host, a solid state drive (“SSD”), and a controller able to perform hybrid-mode non-volatile memory (“NVM”) access is disclosed. Upon receiving a command with a logical block address (“LBA”) for accessing information stored in NVM, a secondary flash translation layer (“FTL”) index table is loaded to a first cache, and entries in a third cache are searched to determine validity associated with the stored FTL table. When the entries in the third cache are invalid, the FTL index table in the second cache is searched to identify valid FTL table entries. If the second cache contains an invalid FTL index table, a new FTL index table is loaded from NVM to the second cache. The process subsequently loads into the third cache at least a portion of the FTL table indexed by the FTL index table.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: June 25, 2019
    Assignee: CNEX Labs, Inc.
    Inventor: Yiren Ronnie Huang
  • Patent number: 10325217
    Abstract: An application analysis computer obtains reports from user terminals identifying operational states of instances of an application being processed by the user terminals. Sequences of the operational states that the instances of the application have transitioned through while being processed by the user terminals are identified. Common operational states that occur in a plurality of the sequences are identified. For each of the common operational states, a frequency of occurrence of the common operational state is determined. For each state transition between the common operational states in the sequences, a frequency of occurrence of the state transition is determined. State predictive metrics are generated based on the frequencies of occurrence of the common operational states and the frequencies of occurrence of the state transitions. The state predictive metrics are communicated, such as to an application server to control access to the application by user terminals.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: June 18, 2019
    Assignee: CA, Inc.
    Inventors: Satnam Singh, Sanjay Vyas, Rajendra Arcot Gopalakrishna, Rammohan Varadarajan
  • Patent number: 10311193
    Abstract: A method for changing a signal value of an FPGA at runtime, including the steps of loading an FPGA hardware configuration with at least one signal value onto the FPGA, running the FPGA hardware configuration on the FPGA, setting the signal value for transfer to the FPGA, determining writeback data from the signal value, writing the writeback data as status data to a configuration memory of the FPGA, and transferring the status data from the configuration memory to the functional level of the FPGA. A method is also provided for performing an FPGA build, including the steps of creating an FPGA hardware configuration with a plurality of signal values, arranging signal values in adjacent areas of the FPGA hardware configuration, ascertaining memory locations of a configuration memory for status data of the plurality of signal values on the basis of the FPGA hardware configuration, and creating a list containing signal values.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: June 4, 2019
    Assignee: dSPACE digital signal processing and control engineering GmbH
    Inventors: Heiko Kalte, Lukas Funke
  • Patent number: 10289479
    Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The switchboard forwards the operation with the fault memory address translation from the hardware accelerator to a second buffer. The operation and the fault memory address translation are flushed from the hardware accelerator, and the operating system repairs the fault memory address translation. The switchboard forwards the operation with the repaired memory address translation from the second buffer to a first buffer and the hardware accelerator executes the operation with the repaired address.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: May 14, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
  • Patent number: 10289558
    Abstract: Embodiments of the present disclosure perform procedures that manipulate a memory system's local cache line eviction policy so that critical “dirty” cache lines are evicted from last level caches as late as possible. Embodiments can selectively handle cache lines in a manner that can renew the liveliness of “dirty” cache lines so that a local “least recently used” (LRU) eviction policy treats them as though they were recently accessed rather than evicting them. Embodiments perform read operations and manipulate the age or “active” status of cache lines by performing procedures which modify “dirty” cache lines to make them appear active to the processor. Embodiments of the present disclosure can also invalidate “clean” cache lines so that “dirty” lines automatically stay in the cache.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: May 14, 2019
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Md Kamruzzaman
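    A hedged sketch of the policy manipulation the abstract above describes, modeled on a simple LRU cache: touching (re-reading) a dirty line moves it back to the most-recently-used position so the LRU policy will not evict it. The ordered-dictionary cache model is purely for illustration.
      from collections import OrderedDict

      class LRUCache:
          def __init__(self, capacity: int):
              self.capacity = capacity
              self.lines = OrderedDict()   # address -> (data, dirty flag), LRU first

          def touch(self, addr):
              # Re-reading a line renews its recency so LRU treats it as fresh.
              self.lines.move_to_end(addr)

          def keep_dirty_lines_alive(self):
              # Periodically touch every dirty line so it stays resident.
              for addr, (_, dirty) in list(self.lines.items()):
                  if dirty:
                      self.touch(addr)

          def insert(self, addr, data, dirty=False):
              if addr in self.lines:
                  self.lines.move_to_end(addr)
              elif len(self.lines) >= self.capacity:
                  self.lines.popitem(last=False)   # evict the least recently used line
              self.lines[addr] = (data, dirty)

      cache = LRUCache(capacity=2)
      cache.insert(0x10, b"critical", dirty=True)
      cache.insert(0x20, b"clean")
      cache.keep_dirty_lines_alive()        # dirty line becomes most recently used
      cache.insert(0x30, b"new")            # 0x20 is evicted instead of 0x10
      print(hex(min(cache.lines)))          # 0x10 still resident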
  • Patent number: 10282295
    Abstract: A method includes monitoring, at a cache coherence directory, states of cachelines stored in a cache hierarchy of a data processing system using a plurality of entries of the cache coherence directory. Each entry of the cache coherence directory is associated with a corresponding cache page of a plurality of cache pages, with each cache page representing a corresponding set of contiguous cachelines. The method further includes selectively evicting cachelines from a first cache of the cache hierarchy based on cacheline utilization densities of cache pages represented by the corresponding entries of the plurality of entries of the cache coherence directory.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: May 7, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: William L. Walker, Michael W. Boyer, Yasuko Eckert, Gabriel H. Loh
  • Patent number: 10268599
    Abstract: For a cache in which a plurality of frequently accessed data segments are temporarily stored, reference count information of the plurality of data segments, in conjunction with least recently used (LRU) information, is used to determine a length of time to retain the plurality of data segments in the cache according to a predetermined weight, where notwithstanding the LRU information, those of the plurality of data segments having higher reference counts are retained longer than those having lower reference counts.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: April 23, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Joseph S. Hyde, II, Subhojit Roy
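    A simple sketch of the weighted retention idea above: an entry's effective age is its LRU age reduced in proportion to its reference count, so highly referenced segments survive longer than pure LRU would allow. The weight and the scoring formula are assumed for illustration.
      def retention_score(lru_age: float, reference_count: int, weight: float = 2.0) -> float:
          """Higher score means evict sooner; reference counts offset LRU age so
          frequently referenced segments are retained longer."""
          return lru_age - weight * reference_count

      segments = {"seg1": (100.0, 1), "seg2": (120.0, 30), "seg3": (90.0, 2)}
      victim = max(segments, key=lambda s: retention_score(*segments[s]))
      print(victim)   # 'seg1': old and rarely referenced, evicted before the older but popular seg2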
  • Patent number: 10261916
    Abstract: The described embodiments include a computing device with two or more translation lookaside buffers (TLB). During operation, the computing device updates an entry in the TLB based on a virtual address to physical address translation and metadata from a page table entry that were acquired during a page table walk. The computing device then computes, based on a lease length expression, a lease length for the entry in the TLB. Next, the computing device sets, for the entry in the TLB, a lease value to the lease length, wherein the lease value represents a time until a lease for the entry in the TLB expires, wherein the entry in the TLB is invalid when the associated lease has expired. The computing device then uses the lease value to control operations that are allowed to be performed using information from the entry in the TLB.
    Type: Grant
    Filed: November 25, 2016
    Date of Patent: April 16, 2019
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Amro Awad, Sergey Blagodurov, Arkaprava Basu, Mark H. Oskin, Gabriel H. Loh, Andrew G. Kegel, David S. Christie, Kevin J. McGrath
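    A minimal sketch of lease-based TLB entry validity as described above: each entry stores an expiry time computed from a lease length, and lookups treat expired entries as invalid. The wall-clock time source and the lease computation are illustrative assumptions.
      import time

      class LeasedTLB:
          def __init__(self):
              self.entries = {}   # virtual page -> (physical page, lease expiry time)

          def install(self, vpage, ppage, lease_length_s: float):
              # Lease value = time until the translation may no longer be trusted.
              self.entries[vpage] = (ppage, time.monotonic() + lease_length_s)

          def translate(self, vpage):
              entry = self.entries.get(vpage)
              if entry is None:
                  return None                      # miss: walk the page table
              ppage, expiry = entry
              if time.monotonic() >= expiry:
                  del self.entries[vpage]          # lease expired: entry is invalid
                  return None
              return ppage

      tlb = LeasedTLB()
      tlb.install(vpage=0x42, ppage=0x9000, lease_length_s=0.05)
      print(tlb.translate(0x42))   # 36864 (0x9000) while the lease is live
      time.sleep(0.06)
      print(tlb.translate(0x42))   # None after the lease expires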
  • Patent number: 10262138
    Abstract: An attacker who gains control of a computer system using malicious software (malware) may be able to do anything to the data on the system. One type of malware, sometimes referred to as ransomware, can encrypt the contents of a hard drive or other data repository, preventing those contents from being accessed by their rightful owners. A ransomware attack can be greatly disruptive to an individual or business, and result in loss of data and loss of computer system uptime, impacting overall computing productivity. By detecting that ransomware is operating on a computer (e.g. by correlating between the original data and content in different cache layers), the negative effects of the ransomware may be mitigated or avoided.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: April 16, 2019
    Assignee: PAYPAL, INC.
    Inventor: Shlomi Boutnaru
  • Patent number: 10241922
    Abstract: Provided is a processor including a plurality of devices. The processor includes a source processing device configured to identify data to request from another device, and a destination processing device configured to, in response to a request for the identified data from the source processing device using credit-based flow control, transmit the identified data to the source processing device using the credit-based flow control. The source processing device includes a credit buffer used for the credit-based flow control, the credit buffer being allocable to include a cache region configured to cache the transmitted identified data received by the source processing device.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: March 26, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Woong Seo
  • Patent number: 10229047
    Abstract: A method and apparatus of wear leveling control for storage class memory are disclosed. According to the present invention, whether current data to be written to a nonvolatile memory corresponds to a write cache hit is determined. If the current data to be written corresponds to the write cache hit, the current data are written to a write cache as well as to a designated location in the nonvolatile memory different from a destined location in the nonvolatile memory. If the current data to be written corresponds to a write cache miss, the current data are written to the destined location in the nonvolatile memory. If the current data to be written corresponds to the write cache miss and the write cache is not full, the current data is also written to the write cache. In another embodiment, the wear leveling control technique also includes address rotation process to achieve long-term wear leveling as well.
    Type: Grant
    Filed: August 6, 2016
    Date of Patent: March 12, 2019
    Assignee: Wolley INC.
    Inventor: Chuen-Shen Bernard Shung
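    A sketch of the hit/miss write path the abstract above lays out, assuming a dictionary-backed write cache and a byte array standing in for the nonvolatile memory; the "designated location" remapping shown here is a hypothetical stand-in for the wear-leveling step, and the sizes are arbitrary.
      class WearLevelingController:
          def __init__(self, nvm_size=1024, cache_capacity=16):
              self.nvm = bytearray(nvm_size)
              self.write_cache = {}                # address -> data (simplified)
              self.cache_capacity = cache_capacity

          def _designated_location(self, addr):
              # Hypothetical remapping: spread repeated writes away from the hot address.
              return (addr + len(self.nvm) // 2) % len(self.nvm)

          def write(self, addr, value):
              if addr in self.write_cache:
                  # Write cache hit: update the cache and a designated alternate
                  # NVM location instead of hammering the destined location.
                  self.write_cache[addr] = value
                  self.nvm[self._designated_location(addr)] = value
              else:
                  # Write cache miss: write the destined NVM location directly,
                  # and also cache it if there is room.
                  self.nvm[addr] = value
                  if len(self.write_cache) < self.cache_capacity:
                      self.write_cache[addr] = value

      ctrl = WearLevelingController()
      ctrl.write(10, 0xAA)   # miss: destined location written, entry cached
      ctrl.write(10, 0xBB)   # hit: designated location absorbs the repeated write
      print(ctrl.nvm[10], ctrl.nvm[522])   # 170 187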
  • Patent number: 10223283
    Abstract: Embodiments relate to enhancing a refresh PCI translation (RPCIT) instruction to refresh a translation lookaside buffer (TLB). A computer processor determines a request to purge a translation for a single frame of the TLB in response to executing an enhanced RPCIT instruction. The enhanced RPCIT instruction is configured to selectively perform one of a single-frame TLB refresh operation or a range-bounded TLB refresh operation. The computer processor determines an absolute storage frame based on a translation of a PCI virtual address in response to the request to purge a translation for a single frame of the TLB. The computer processor further performs the single-frame TLB refresh operation to purge the translation for the single frame.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: March 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David F. Craddock, Thomas A. Gregg, Dan F. Greiner, Damian L. Osisek
  • Patent number: 10216568
    Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The operation and the fault memory address translation are flushed from the hardware accelerator including augmenting the operation with an entity identifier. The switchboard forwards the operation with the fault memory address translation and the entity identifier from the hardware accelerator to a second buffer. The operating system repairs the fault memory address translation. The operating system sends the operation to the processing core utilizing an effective address based on the entity identifier. The switchboard, supported by the processing core, forwards the operation with the repaired memory address translation to a first buffer and the hardware accelerator executes the operation with the repaired address.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
  • Patent number: 10210089
    Abstract: A method and apparatus are provided for controlling data flow by storing variable length encoded information bits in a circular buffer in a write operation to a virtual write address comprising a first wrap bit value appended by a current write address within the buffer address range and generating an interrupt alarm if the virtual write address crosses a virtual alarm address comprising a second wrap bit value appended by an alarm address within the buffer address range, where the first and second wrap bit values each toggle between first and second values every time the current write address or alarm address, respectively, wraps around the circular buffer, thereby synchronizing data flow in the circular buffer and/or preventing buffer overflow.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: February 19, 2019
    Assignee: NXP USA, Inc.
    Inventors: Stephan M. Herrmann, Ritesh Agrawal, Aman Arora, Jeetendra Kumar Gupta, Snehlata Gutgutia, Deboleena Minz Sakalley
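    A Python sketch of the wrap-bit scheme the abstract above describes: the virtual write address is the wrap bit placed above the in-range offset, and an interrupt is signaled when a write crosses the virtual alarm address. The buffer size, alarm placement, and the simplified single-wrap crossing test are assumptions.
      class WrapBitCircularBuffer:
          """Write pointer and alarm each carry a wrap bit so they remain comparable
          across buffer wrap-around (offsets alone would be ambiguous)."""

          def __init__(self, size=256, alarm_offset=128):
              self.size = size
              self.write_offset, self.write_wrap = 0, 0
              self.alarm_offset, self.alarm_wrap = alarm_offset, 0

          def _virtual(self, wrap, offset):
              # Virtual address = wrap bit appended above the in-range offset.
              return wrap * self.size + offset

          def write(self, nbits):
              before = self._virtual(self.write_wrap, self.write_offset)
              self.write_offset += nbits
              if self.write_offset >= self.size:           # write pointer wrapped: toggle wrap bit
                  self.write_offset -= self.size
                  self.write_wrap ^= 1
              after = self._virtual(self.write_wrap, self.write_offset)
              alarm = self._virtual(self.alarm_wrap, self.alarm_offset)
              if before < alarm <= after:                  # write crossed the alarm address
                  print("interrupt: alarm address crossed")

      buf = WrapBitCircularBuffer()
      buf.write(100)   # offset 100, no alarm
      buf.write(50)    # offset 150, crosses the alarm at 128 -> interrupt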
  • Patent number: 10198789
    Abstract: Techniques for allowing cache access returns out of order are disclosed. A return ordering queue exists for each of several cache access types and stores outstanding cache accesses in the order in which those accesses were made. When a cache access request for a particular type is at the head of the return ordering queue for that type and the cache access is available for return to the wavefront that made that access, the cache system returns the cache access to the wavefront. Thus, cache accesses can be returned out of order with respect to cache accesses of different types. Allowing out-of-order returns can help to improve latency, for example in the situation where a relatively low-latency access type (e.g., a read) is issued after a relatively high-latency access type (e.g., a texture sampler operation).
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: February 5, 2019
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Daniel Schneider, Fataneh Ghodrat
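    A small sketch of the per-type return ordering queues described above: a cache access may return once it is complete and at the head of its own type's queue, so different types return out of order with respect to one another. The queue structure and identifiers are illustrative assumptions.
      from collections import deque

      class ReturnOrderingQueues:
          """One FIFO per cache-access type; returns are in order within a type
          but may be out of order across types."""

          def __init__(self, access_types):
              self.queues = {t: deque() for t in access_types}
              self.completed = set()

          def issue(self, access_type, access_id):
              self.queues[access_type].append(access_id)

          def complete(self, access_id):
              self.completed.add(access_id)

          def drain_returns(self):
              returned = []
              for q in self.queues.values():
                  while q and q[0] in self.completed:
                      returned.append(q.popleft())
              return returned

      roq = ReturnOrderingQueues(["read", "texture_sample"])
      roq.issue("texture_sample", "t0")   # high-latency access issued first
      roq.issue("read", "r0")             # low-latency access issued later
      roq.complete("r0")
      print(roq.drain_returns())          # ['r0'] returns before the texture sample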
  • Patent number: 10176096
    Abstract: Providing scalable dynamic random access memory (DRAM) cache management using DRAM cache indicator caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in high-bandwidth memory. The DRAM cache management circuit comprises a DRAM cache indicator cache, which stores master table entries that are read from a master table in a system memory DRAM and that contain DRAM cache indicators. The DRAM cache indicators enable the DRAM cache management circuit to determine whether a memory line in the system memory DRAM is cached in the DRAM cache of high-bandwidth memory, and, if so, in which way of the DRAM cache the memory line is stored. Based on the DRAM cache indicator cache, the DRAM cache management circuit may determine whether to employ the DRAM cache and/or the system memory DRAM to perform a memory access operation in an optimal manner.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: January 8, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
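    A hedged sketch of the lookup the abstract above describes: the indicator cache holds master-table entries that say whether a memory line is in the DRAM cache and, if so, in which way, letting the controller route the access without probing. The return strings and entry layout are illustrative assumptions.
      class DramCacheIndicatorCache:
          """Caches master-table entries so the controller can tell, without probing
          the DRAM cache, whether a memory line is cached and in which way."""

          def __init__(self):
              self.entries = {}    # memory line address -> way index, or None if not cached

          def load_master_entry(self, line_addr, way_or_none):
              self.entries[line_addr] = way_or_none

          def route_access(self, line_addr):
              if line_addr not in self.entries:
                  return "read master table in system DRAM"      # indicator not cached
              way = self.entries[line_addr]
              if way is None:
                  return "access system memory DRAM directly"    # line not in DRAM cache
              return f"access DRAM cache, way {way}"

      dcic = DramCacheIndicatorCache()
      dcic.load_master_entry(0x1000, 3)
      dcic.load_master_entry(0x2000, None)
      print(dcic.route_access(0x1000))   # access DRAM cache, way 3
      print(dcic.route_access(0x2000))   # access system memory DRAM directly
      print(dcic.route_access(0x3000))   # read master table in system DRAM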
  • Patent number: 10162873
    Abstract: In a process for migrating a virtual machine's storage from a source disk to a destination disk, during a steady state (i.e., wherein the contents of the virtual machine stored on the source disk and the destination disk are equal), a virtual machine monitor receives a set of write requests from a guest operating system (“guest”) of the virtual machine, provides confirmation of the completion of the set of writes to the source disk, and asynchronously replicates the set of write requests to the destination disk. Upon receipt of a flush request from the guest, the virtual machine monitor confirms completion of the flushing of the destination disk following replication of the write requests to the destination disk. Upon receipt of a switch request from a virtual machine manager, the virtual machine monitor switches the virtual machine to the destination disk and issues subsequent write requests to the destination disk.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: December 25, 2018
    Assignee: Red Hat, Inc.
    Inventor: Paolo Bonzini
  • Patent number: 10120811
    Abstract: Provided are a computer program product, system, and method for considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage. One of a plurality of densities for one of a plurality of groups of tracks is incremented in response to determining at least one of that the group is not ready to destage and that one of the tracks in the group in the cache transitions to being ready to destage. A determination is made of a group frequency indicating a frequency at which tracks in the group are modified. At least one of the density and the group frequency is used for each of the groups to determine whether to destage the group. The tracks in the group in the cache are destaged to the storage in response to determining to destage the group.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: November 6, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Lokesh M. Gupta
  • Patent number: 10108568
    Abstract: A master for transmitting data to a slave via a bus segment by segment is provided. The master includes a finite state machine (FSM) configured to receive and analyze dirty bits for first data segments to be included in a current segment among the data and to output a first selection signal and location information related to the current segment according to an analysis result and a first multiplexer configured to determine whether to output the current segment as a dirty data segment to the bus based on the first selection signal.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: October 23, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Moon Gyung Kim
  • Patent number: 10095410
    Abstract: According to one embodiment, a memory system includes a nonvolatile memory, and a controller configured to control the nonvolatile memory. The controller includes an access controller configured to control access to the nonvolatile memory, based on a first request which is issued from a host, and a processor configured to execute a background process for the nonvolatile memory, based on a second request which is issued from the host before the first request is issued.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: October 9, 2018
    Assignee: Toshiba Memory Corporation
    Inventors: Hiroyuki Nemoto, Kazuya Kitsunai, Yoshihisa Kojima, Katsuhiko Ueki
  • Patent number: 10078588
    Abstract: The described embodiments include a computing device with two or more translation lookaside buffers (TLB) that performs operations for handling entries in the TLBs. During operation, the computing device maintains lease values for entries in the TLBs, the lease values representing times until leases for the entries expire, wherein a given entry in the TLB is invalid when the associated lease has expired. The computing device uses the lease value to control operations that are allowed to be performed using information from the entries in the TLBs. In addition, the computing device maintains, in a page table, longest lease values for page table entries indicating when corresponding longest leases for entries in TLBs expire. The longest lease values are used to determine when and if a TLB shootdown is to be performed.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: September 18, 2018
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Arkaprava Basu, Mark H. Oskin, Gabriel H. Loh, Andrew G. Kegel, David S. Christie, Kevin J. McGrath
  • Patent number: 10055355
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: August 21, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-Kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Patent number: 10049045
    Abstract: Techniques described herein generally include methods and systems related to cooperatively caching data in a chip multiprocessor. Cooperatively caching of data in the chip multiprocessor is managed based on an eviction rate of data blocks from private caches associated with each individual processor core in the chip multiprocessor. The eviction rate of data blocks from each private cache in the cooperative caching system is monitored and used to determine an aggregate eviction rate for all private caches. When the aggregate eviction rate exceeds a predetermined value, for example the threshold beyond which network flooding can occur, the cooperative caching system for the chip multiprocessor is disabled, thereby avoiding network flooding of the chip multiprocessor.
    Type: Grant
    Filed: March 5, 2017
    Date of Patent: August 14, 2018
    Assignee: Empire Technology Development LLC
    Inventor: Ezekiel Kruglick
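    A small sketch of the control loop implied by the abstract above: per-core private-cache eviction rates are summed, and cooperative caching is disabled when the aggregate exceeds a threshold. The threshold value and sampling model are assumptions.
      class CooperativeCachingGovernor:
          def __init__(self, flood_threshold_evictions_per_s: float):
              self.flood_threshold = flood_threshold_evictions_per_s
              self.enabled = True

          def update(self, per_core_eviction_rates: list[float]):
              """Aggregate the private-cache eviction rates and disable cooperative
              caching once the total risks flooding the on-chip network."""
              aggregate = sum(per_core_eviction_rates)
              self.enabled = aggregate <= self.flood_threshold
              return self.enabled

      gov = CooperativeCachingGovernor(flood_threshold_evictions_per_s=1_000_000)
      print(gov.update([200_000, 150_000, 120_000, 90_000]))     # True: stays enabled
      print(gov.update([400_000, 380_000, 350_000, 300_000]))    # False: disabled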
  • Patent number: 10042560
    Abstract: According to a write data request processing method and a storage array provided in the embodiments of the present invention, a controller is connected to a cache device via a switching device, an input/output manager is connected to the controller via the switching device, and the input/output manager is connected to a cache device via the switching device. The controller obtains a cache address from the cache device for to-be-written data according to the write data request, the controller sends an identifier of the cache device and the cache address to the input/output manager via the switching device, and the input/output manager writes the to-be-written data to the cache address via the switching device.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: August 7, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wei Zhang, Xianhong Lu, Mingchang Wei, Chenyi Zhang
  • Patent number: 10013300
    Abstract: A method for the on-board diagnosis of a control unit which includes a hypervisor and at least one guest system operated under the hypervisor. In the method, the guest system receives a diagnosis inquiry at an individual diagnosis address of the guest system from a diagnostic tool with the aid of a communication infrastructure. The guest system makes a self-diagnosis. The guest system receives a hypervisor diagnosis from the hypervisor. The guest system transmits, at its diagnosis address, the self-diagnosis or the hypervisor diagnosis to the diagnostic tool as a function of the diagnosis inquiry.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: July 3, 2018
    Assignee: ROBERT BOSCH GMBH
    Inventors: Gunnar Piel, Gary Morgan
  • Patent number: 9983893
    Abstract: When a guest of a virtual machine attempts to access an address that causes an exit from the guest to the hypervisor of a host, the hypervisor receives an indication of an exit by a guest to the hypervisor. The received address is associated with a memory-mapped input-output (MMIO) instruction. The hypervisor determines, based on the received indication, that the exit is associated with the memory-mapped input-output (MMIO) instruction. The hypervisor identifies the address that caused the exit as a fast access address. The hypervisor identifies one or more memory locations associated with the fast access address, where the one or more memory locations store information associated with the MMIO instruction. The hypervisor identifies the MMIO instruction based on the stored information. The hypervisor executes the MMIO instruction on behalf of the guest.
    Type: Grant
    Filed: October 1, 2013
    Date of Patent: May 29, 2018
    Assignee: Red Hat Israel, Ltd.
    Inventors: Michael Tsirkin, Gleb Natapov
  • Patent number: 9952867
    Abstract: A processor core in an instruction block-based microarchitecture utilizes instruction blocks having headers that include an index to a size table that may be expressed using one of memory, register, logic, or code stream. A control unit in the processor core determines how many instructions to fetch for a current instruction block for mapping into an instruction window based on the block size that is indicated from the size table. As instruction block sizes are often unevenly distributed for a given program, utilization of the size table enables more flexibility in matching instruction blocks to the sizes of available slots in the instruction window as compared to arrangements in which instruction blocks have a fixed sized or are sized with less granularity. Such flexibility may enable denser instruction packing which increases overall processing efficiency by reducing the number of nops (no operations, such as null functions) in a given instruction block.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: April 24, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Douglas C. Burger, Aaron Smith, Jan Gray
  • Patent number: 9940027
    Abstract: Disclosed is a method of recording variable size data. A first processor receives, from a second processor, a read parameter including information on a read address value of data which has been read by the second processor and is stored in an external memory, compares the read address value acquired from the received read parameter and a record address value for data previously recorded in the external memory by the first processor, and determines whether or not the first processor is to transmit data to the second processor on the basis of the comparison result.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: April 10, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Doo-hyun Kim, Sang-jo Lee, Do-hyung Kim
  • Patent number: 9933836
    Abstract: A method for adjusting a frequency of a processor is disclosed herein. In one embodiment, the method includes inhibiting one or more processor cores from exiting an idle state. The method further includes determining a number of processor cores requesting exit from the idle state and a number of non-idle processor cores. The method also includes selecting a maximum frequency for the inhibited and non-idle processor cores based on the number of inhibited processor cores requesting exit from the idle state and the number of non-idle processor cores. The method includes setting the maximum frequency for both the inhibited and the non-idle processor cores, and then uninhibiting the processor cores requesting exit from the idle state.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: April 3, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Malcolm S. Allen-Ware, Charles R. Lefurgy, Karthick Rajamani, Todd J. Rosedahl, Guillermo J. Silva, Gregory S. Still, Victor Zyuban
  • Patent number: 9934152
    Abstract: Systems and techniques relating to hardware alias detection and management in caches are described. A cache controller can receive a cache request that specifies a virtual address, which includes a virtual page number (VPN) and a page offset; access, concurrently, one or more primary tags in a slot of the cache corresponding to a primary cache index that is based on a portion of the page offset and a portion of the VPN and one or more secondary tags in one or more slots corresponding to one or more secondary cache indices that are based on the portion of the page offset and one or more variations of the portion of the VPN; and determine whether there are any primary or secondary matching ways. The controller can write store data to a primary matching way if it exists and perform an alias management operation if any secondary matching ways exist.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: April 3, 2018
    Assignee: Marvell International Ltd.
    Inventors: Richard Bryant, R. Frank O'Bleness, Sujat Jamil, Kim Schuttenberg
  • Patent number: 9916257
    Abstract: Methods and apparatus are disclosed for efficient TLB (translation look-aside buffer) shoot-downs for heterogeneous devices sharing virtual memory in a multi-core system. Embodiments of an apparatus for efficient TLB shoot-downs may include a TLB to store virtual address translation entries, and a memory management unit, coupled with the TLB, to maintain PASID (process address space identifier) state entries corresponding to the virtual address translation entries. The PASID state entries may include an active reference state and a lazy-invalidation state. The memory management unit may perform atomic modification of PASID state entries responsive to receiving PASID state update requests from devices in the multi-core system and read the lazy-invalidation state of the PASID state entries. The memory management unit may send PASID state update responses to the devices to synchronize TLB entries prior to activation responsive to the respective lazy-invalidation state.
    Type: Grant
    Filed: July 26, 2011
    Date of Patent: March 13, 2018
    Assignee: Intel Corporation
    Inventors: Rajesh M. Sankaran, Altug Koker, Philip R. Lantz, Asit K. Mallick, James B. Crossland, Aditya Navale, Gilbert Neiger, Andrew V. Anderson
  • Patent number: 9910777
    Abstract: A system and method facilitate processing atomic storage requests. The method includes receiving, from a storage client, an atomic storage request for a first storage device that is incapable of processing atomic write operations. The method also includes processing the atomic storage request at a translation interface. The method also includes storing the atomic storage request in one or more storage operations in a second storage device capable of processing the atomic storage request.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 6, 2018
    Assignee: SanDisk Technologies LLC
    Inventors: David Flynn, Nisha Talagala
  • Patent number: 9886391
    Abstract: Embodiments relate to enhancing a refresh PCI translation (RPCIT) instruction to refresh a translation lookaside buffer (TLB). A computer processor determines a request to purge a translation for a single frame of the TLB in response to executing an enhanced RPCIT instruction. The enhanced RPCIT instruction is configured to selectively perform one of a single-frame TLB refresh operation or a range-bounded TLB refresh operation. The computer processor determines an absolute storage frame based on a translation of a PCI virtual address in response to the request to purge a translation for a single frame of the TLB. The computer processor further performs the single-frame TLB refresh operation to purge the translation for the single frame.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: February 6, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David F. Craddock, Thomas A. Gregg, Dan F. Greiner, Damian L. Osisek
  • Patent number: 9847134
    Abstract: A data storage device includes a flash memory, a voltage detection device, and a controller. The flash memory is arranged to store data. The voltage detection device is arranged to detect a supply voltage received by the data storage device. The controller is configured to receive write commands from a host, and perform a prohibition mode when the supply voltage is outside a predetermined range, wherein the write command is arranged to enable the controller to write the flash memory, and the controller is further configured to disable all of the write commands received from the host in the prohibition mode.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: December 19, 2017
    Assignee: Silicon Motion, Inc.
    Inventor: Yi-Hua Pao
  • Patent number: 9830263
    Abstract: A computer-executable method, system, and computer program product for managing a data storage system using a distributed write-through cache, wherein the data storage system comprises a first node, a second node, and a data storage array, wherein the first node includes a first cache and the second node includes a second cache, the computer-executable method, system, and computer program product comprising providing cache coherency on the data storage system by synchronizing the second cache with the first cache based on I/O requests received at the first node.
    Type: Grant
    Filed: June 30, 2014
    Date of Patent: November 28, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Orly Devor, Lior Zilpa, Michael Deift, Eli Ginot, Philip Derbeko