Patents Examined by Tuan Thai
-
Patent number: 10025503
Abstract: Methods for dynamically optimizing platform resource allocation of a logically-partitioned data processing system. Processor and memory resources are allocated to logical partitions of the data processing system. After allocating the processor and memory resources to the plurality of logical partitions, local and non-local memory accesses are monitored for the logical partitions. Based at least in part on the local and non-local memory accesses, a determination is made whether to reallocate the processor and memory resources of the logical partitions. Responsive to determining to reallocate the processor and memory resources, the processor and memory resources are dynamically reallocated to the logical partitions of the data processing system.
Type: Grant
Filed: August 22, 2016
Date of Patent: July 17, 2018
Assignee: International Business Machines Corporation
Inventors: Anjan Kumar Guttahalli Krishna, Edward C. Prosser
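The reallocation decision described above can be sketched as a monitor that flags partitions with a high share of non-local (remote-node) memory accesses. This is a minimal Python sketch; the `should_reallocate` name, the access-count representation, and the 25% threshold are illustrative assumptions, not details taken from the patent.

```python
def should_reallocate(partitions, nonlocal_threshold=0.25):
    """Return the logical partitions whose fraction of non-local memory
    accesses exceeds a threshold, i.e. candidates for having their
    processor and memory resources dynamically reallocated.
    `partitions` maps partition name -> (local_accesses, nonlocal_accesses).
    """
    flagged = []
    for name, (local, remote) in partitions.items():
        total = local + remote
        # Skip idle partitions; flag those dominated by remote accesses.
        if total and remote / total > nonlocal_threshold:
            flagged.append(name)
    return flagged
```

A platform manager would run such a check periodically and feed the flagged partitions into its placement logic.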
-
Patent number: 10025536
Abstract: A memory system and method for simplifying scheduling on a flash interface module and reducing latencies in a multi-die environment are provided. In one embodiment, a memory die is provided comprising a memory array, an interface, at least one register, and circuitry. The circuitry is configured to receive, via the interface, a pause command from a controller in communication with the memory die; and in response to receiving the pause command: pause a data transfer between the memory die and the controller; and while the data transfer is paused and until a resume command is received, maintain state(s) of the at least one register irrespective of inputs received via the interface that would otherwise change the state(s) of the at least one register. Other embodiments are provided.
Type: Grant
Filed: February 10, 2016
Date of Patent: July 17, 2018
Assignee: SanDisk Technologies LLC
Inventors: Abhijeet Manohar, Hua-Ling Cynthia Hsu, Daniel E. Tuers
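The pause semantics above, register state frozen while paused regardless of interface inputs, amount to a small state machine. The following Python sketch is a simplified model: the command names and the single `register` field are hypothetical stand-ins for the die's real interface.

```python
class MemoryDie:
    """Model of a die honoring pause/resume: while paused, inputs that
    would normally change register state are ignored until resume."""

    def __init__(self):
        self.register = 0
        self.paused = False

    def command(self, cmd, value=None):
        if cmd == "pause":
            self.paused = True       # freeze register state
        elif cmd == "resume":
            self.paused = False      # accept state changes again
        elif cmd == "write" and not self.paused:
            self.register = value    # silently dropped while paused
```

This lets a controller suspend one die's transfer, service another die, and resume without re-establishing state.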
-
Patent number: 10025532
Abstract: A storage device utilizing read look ahead (RLA) may utilize auxiliary or spare latches as an RLA cache for storing pre-fetched data. The RLA may predict the next commands and perform a speculative read to the flash using the latches for RLA storage. The auxiliary/spare latches may be present on a plane or die of non-volatile memory and may be different from the transfer data latch (XDL) that transfers data between the memory and the host. When the XDL is backed up, sense commands may still be performed, and the data is stored in the auxiliary latches before being transferred with the XDL.
Type: Grant
Filed: October 30, 2015
Date of Patent: July 17, 2018
Assignee: SanDisk Technologies LLC
Inventors: Abhijeet Manohar, Daniel E. Tuers, Noga Deshe, Vered Kelner, Gadi Vishne, Nurit Appel, Judah Gamliel Hahn
-
Patent number: 10025510
Abstract: A technique for use in managing data storage in a data storage system is disclosed. First and second data storage commands (DSCs) are received from a storage driver stack. It is determined whether the first DSC and the second DSC are related aspects of a combined storage command and, if so, a pairing structure is established to pair the first DSC and the second DSC together. The combined storage command is fulfilled by fulfilling both the first DSC and the second DSC with reference to the pairing structure.
Type: Grant
Filed: June 30, 2016
Date of Patent: July 17, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Milind Koli, Timothy C. Ng, James Mark Holt, David Haase, Vedashree Anantha Raman
-
Patent number: 10025685
Abstract: A memory subsystem manages memory I/O impedance compensation by having the memory device monitor the need for impedance compensation. Instead of the memory controller regularly sending a signal to have the memory device update the impedance compensation when a change is not needed, the memory device can indicate when it is ready to perform an impedance compensation change. The memory controller can send an impedance compensation signal to the memory device in response to a compensation flag set by the memory device or in response to determining that a sensor value has changed in excess of a threshold.
Type: Grant
Filed: March 27, 2015
Date of Patent: July 17, 2018
Assignee: Intel Corporation
Inventors: James A McCall, Kuljit S Bains
-
Patent number: 10019369
Abstract: Apparatuses and methods for a cache memory are described. In an example method, a transaction history associated with a cache block is referenced, and requested information is read from memory. Additional information is read from memory based on the transaction history, wherein the requested information and the additional information are read together from memory. The requested information is cached in a segment of a cache line of the cache block, and the additional information is cached in another segment of the cache line. In another example, the transaction history is also updated to reflect the caching of the requested information and the additional information. In another example, read masks associated with the cache tag are referenced for the transaction history, the read masks identifying segments of a cache line previously accessed.
Type: Grant
Filed: February 17, 2017
Date of Patent: July 10, 2018
Assignee: Micron Technology, Inc.
Inventors: David Roberts, J. Thomas Pawlowski
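The read-mask idea can be illustrated simply: the mask records which segments of a line were touched on a previous visit, and the next fill reads the requested segment plus every masked segment in one memory transaction. A Python sketch under assumed names (`segments_to_fetch`, an 8-segment line); the real mechanism lives in cache-controller hardware.

```python
def segments_to_fetch(requested_segment, read_mask, line_segments=8):
    """Decide which cache-line segments to read from memory: the
    requested segment plus any segment the read mask says was
    accessed on a previous transaction with this line."""
    fetch = {requested_segment}
    for seg in range(line_segments):
        if read_mask & (1 << seg):      # bit set -> previously accessed
            fetch.add(seg)
    return sorted(fetch)
```

Fetching history-predicted segments together with the demanded one turns likely future misses into hits without issuing extra memory transactions.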
-
Patent number: 10019161
Abstract: A system and method that allow out-of-order fetching of host non-volatile memory commands can improve and maximize memory device performance. The memory device can examine the non-volatile memory command headers available in the non-volatile memory command queue to select one or more non-volatile memory commands to be fetched in an optimum order and executed according to currently available resources in the memory device. The memory device can optimize performance of the non-volatile memory commands by re-ordering the host commands fetched from the host memory.
Type: Grant
Filed: October 30, 2015
Date of Patent: July 10, 2018
Assignee: SanDisk Technologies LLC
Inventors: Tal Sharifie, Shay Benisty, Amir Turjeman
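The reordering step above can be sketched as: scan the queued headers and fetch first the commands whose resource needs fit what the device currently has free. The `needs`/`available` fields are hypothetical simplifications; real NVMe-style command headers carry considerably more state.

```python
def fetch_order(queue, available):
    """Reorder queued command headers so that commands whose resource
    needs are satisfiable with currently available device resources
    are fetched first; the rest keep their relative order."""
    ready = [cmd for cmd in queue if cmd["needs"] <= available]
    waiting = [cmd for cmd in queue if cmd["needs"] > available]
    return ready + waiting
```

A command needing a large buffer no longer blocks small commands queued behind it.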
-
Patent number: 10019171
Abstract: Systems and methods for decoupling host commands in a non-volatile memory system are disclosed. In one implementation, a non-volatile memory system includes a non-volatile memory and a controller in communication with the non-volatile memory. The controller is configured to translate a first command that is formatted according to a communication protocol to a second command that is formatted generically, store the first command in an expected queue, and store the second command in the expected queue with a command priority. The controller is further configured to execute the second command based on the command priority, translate a result of the executed second command into a format according to the communication protocol, and transmit the result of the executed second command in the format according to the communication protocol to a host system dependent upon a position of the first command in the expected queue.
Type: Grant
Filed: April 1, 2016
Date of Patent: July 10, 2018
Assignee: SanDisk Technologies LLC
Inventor: Yiftach Tzori
-
Patent number: 10019359
Abstract: Described are techniques for processing I/O operations. A read operation is received to read first data from a first location. It is determined whether the read operation is a read miss and whether non-location metadata for the first location is stored in cache. Responsive to determining that the read operation is a read miss and that the non-location metadata for the first location is not stored in cache, first processing is performed that includes concurrently issuing a first read request to read the first data from physical storage and a second read request to read the non-location metadata for the first location from physical storage.
Type: Grant
Filed: May 9, 2017
Date of Patent: July 10, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Andrew Chanler, Michael Scharland, Gabriel BenHanokh, Arieh Don
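The concurrent-issue path can be sketched with two futures: on a read miss where the metadata is also uncached, both reads are in flight at once rather than back to back. The helper names (`read_disk`, `read_md`) and the dict-based caches are assumptions for illustration, not the patent's actual interfaces.

```python
from concurrent.futures import ThreadPoolExecutor

def read_with_metadata(loc, data_cache, md_cache, read_disk, read_md):
    """Service a read: on a miss where non-location metadata for `loc`
    is also uncached, issue the data read and the metadata read
    concurrently instead of serializing the two storage accesses."""
    if loc in data_cache:
        return data_cache[loc]          # read hit: no storage I/O
    if loc in md_cache:
        return read_disk(loc)           # miss, but metadata is cached
    with ThreadPoolExecutor(max_workers=2) as pool:
        data = pool.submit(read_disk, loc)   # both requests in flight
        md = pool.submit(read_md, loc)
        md_cache[loc] = md.result()
        return data.result()
```

Overlapping the two accesses hides the metadata read's latency behind the data read.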
-
Patent number: 10019194
Abstract: Described embodiments provide systems and methods for operating a storage system. One or more production volumes of the storage system are selected for continuous replication. A number, N, is selected that is associated with the number of damaged volumes the storage system can sustain while maintaining data consistency. Write transactions from a host to an associated one of the selected production volumes are intercepted. The intercepted write transactions are sent to the associated production volume and to a plurality of copy volumes. When acknowledgments of the write transaction have been received from N copy volumes, the write transaction is acknowledged to the host.
Type: Grant
Filed: September 23, 2016
Date of Patent: July 10, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Leehod Baruch, Assaf Natanzon, Jehuda Shemer, Amit Lieberman, Ron Bigman
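The acknowledgment rule above fits in a few lines: the intercepted write goes to the production volume and every copy volume, and the host is acknowledged once N copy acknowledgments arrive. Modeling volumes as callables is an illustrative simplification; a real splitter sends and collects acks asynchronously.

```python
def replicate_write(write, production, copies, n):
    """Send an intercepted write to the production volume and to all
    copy volumes; return True (ack the host) only once at least N
    copy volumes have acknowledged the write."""
    production(write)
    acks = sum(1 for copy in copies if copy(write))
    return acks >= n
```

Choosing N trades acknowledgment latency against how many damaged volumes the system can sustain while staying consistent.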
-
Patent number: 10013184
Abstract: A system may comprise a storage device on which counters are stored. A counter may be associated with an identifier. A computing node of the system may receive a request to modify the counter. In response to the request, a read signature may be stored, comprising a hash of the identifier and a tolerance of the counter to change. A write signature may also be stored in response to the request, comprising a hash of the identifier and a magnitude of the requested modification. A conflict may be detected by comparing the sum of the magnitudes of requested changes to the tolerance of the read operation.
Type: Grant
Filed: June 30, 2016
Date of Patent: July 3, 2018
Assignee: Amazon Technologies, Inc.
Inventors: John Michael Morkel, Timothy Daniel Cole, Christopher Richard Jacques de Kadt, Allan Henry Vermeulen
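The conflict test is concrete enough to sketch: each read records a tolerance under the counter's hashed identifier, each write records a magnitude, and a conflict exists when the summed magnitudes exceed the tolerance. SHA-256 and the `CounterLog` class are assumptions; the abstract specifies only "a hash of the identifier".

```python
import hashlib

def _sig(counter_id: str) -> str:
    # Hash of the counter identifier (stands in for the signatures).
    return hashlib.sha256(counter_id.encode()).hexdigest()

class CounterLog:
    def __init__(self):
        self.reads = {}    # signature -> tolerance declared by a reader
        self.writes = {}   # signature -> magnitudes of requested changes

    def record_read(self, counter_id, tolerance):
        self.reads[_sig(counter_id)] = tolerance

    def record_write(self, counter_id, magnitude):
        self.writes.setdefault(_sig(counter_id), []).append(magnitude)

    def has_conflict(self, counter_id):
        # Conflict when the summed magnitudes of requested changes
        # exceed the tolerance declared by the read operation.
        sig = _sig(counter_id)
        total = sum(self.writes.get(sig, []))
        return total > self.reads.get(sig, float("inf"))
```

Comparing summed magnitudes against a tolerance lets many small commutative updates proceed without conflicting, unlike plain read-write conflict detection.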
-
Patent number: 10001951
Abstract: Managing input/output (‘I/O’) queues in a data storage system, including: receiving, by a host that is coupled to a plurality of storage devices via a storage network, a plurality of I/O operations to be serviced by a target storage device; determining, for each of a plurality of paths between the host and the target storage device, a data transfer maximum associated with the path; determining, for one or more of the plurality of paths, a cumulative amount of data to be transferred by I/O operations pending on the path; and selecting a target path for transmitting one or more of the plurality of I/O operations to the target storage device in dependence upon the cumulative amount of data to be transferred by I/O operations pending on the path and the data transfer maximum associated with the path.
Type: Grant
Filed: August 4, 2017
Date of Patent: June 19, 2018
Assignee: Pure Storage, Inc.
Inventors: Ronald Karr, John Mansperger
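The selection rule can be sketched as: exclude any path whose pending bytes plus the new I/O would exceed its data-transfer maximum, then pick the least-loaded survivor. The tie-breaking and the data layout are assumptions for illustration.

```python
def select_target_path(paths, io_size):
    """Pick a path whose pending bytes plus the new I/O stay under its
    data-transfer maximum; among those, prefer the least-loaded path.
    `paths` maps path name -> (pending_bytes, transfer_max)."""
    candidates = [
        (pending, name)
        for name, (pending, transfer_max) in paths.items()
        if pending + io_size <= transfer_max
    ]
    if not candidates:
        return None   # no path can accept the I/O right now
    return min(candidates)[1]
```

Weighing pending bytes rather than pending operation counts keeps one large transfer from being treated the same as one small one.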
-
Patent number: 10002076
Abstract: A method includes generating least-recently-used location information for a shared set-associative multi-access cache and next-to least-recently-used location information for the shared set-associative multi-access cache. The method includes concurrently accessing a shared set-associative multi-access cache in response to a first memory request from a first memory requestor and a second memory request from a second memory requestor based on the least-recently-used location information and the next-to least-recently-used location information. The method may include updating the least-recently-used location information and the next-to least-recently-used location information in response to concurrent access to the shared set-associative multi-access cache according to the first memory request and the second memory request.
Type: Grant
Filed: September 29, 2015
Date of Patent: June 19, 2018
Assignee: NXP USA, Inc.
Inventor: Daniel M. McCarthy
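Keeping both the LRU and next-to-LRU locations lets two requestors that miss in the same set fill distinct ways in the same cycle without colliding. A Python list model of per-set recency follows; real hardware encodes this in LRU bits rather than a list, so this is a sketch only.

```python
class SetLRU:
    """Recency order for one set of a shared multi-access cache."""

    def __init__(self, ways):
        self.order = list(ways)   # index 0 = most recently used

    def lru(self):
        return self.order[-1]     # least-recently-used way

    def next_to_lru(self):
        return self.order[-2]     # next-to least-recently-used way

    def concurrent_fill(self):
        # Two requestors miss at once: one takes the LRU way, the
        # other the next-to-LRU way; both become most recently used.
        a, b = self.lru(), self.next_to_lru()
        self.order.remove(a)
        self.order.remove(b)
        self.order = [a, b] + self.order
        return a, b
```

Tracking only LRU would force the second requestor to stall or evict the line the first requestor just filled.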
-
Patent number: 9996270
Abstract: Various embodiments for managing data by a processor in a multi-tiered computing storage environment. Input/Output (I/O) statistics are examined from each of cache and device drivers in the computing storage environment. Based on the I/O statistics, a ranking mechanism is applied to differentiate data between at least a cache rank and a Solid State Drive (SSD) rank. The ranking mechanism migrates data between the cache rank and SSD rank such that storage space in the cache rank is reserved for those of the plurality of data workload types having a greater adverse effect on a storage performance characteristic if stored in the SSD rank than if those workload types were stored in the cache rank.
Type: Grant
Filed: July 8, 2014
Date of Patent: June 12, 2018
Assignee: International Business Machines Corporation
Inventors: Yong Guo, Bruce McNutt, Jie Tian, Yan Xu
-
Patent number: 9996370
Abstract: Disclosed are examples of memory allocation and reallocation for virtual machines operating in a shared memory configuration, including creating a swap file for at least one virtual machine. One example method may include allocating guest physical memory to the swap file to permit the at least one virtual machine to access host physical memory previously occupied by the guest physical memory. The example method may also include determining whether the amount of available host physical memory is below a minimum acceptable threshold and, if so, freeing at least one page of host physical memory, intercepting a memory access attempt performed by the at least one virtual machine, and allocating host physical memory to the virtual machine responsive to the memory access attempt.
Type: Grant
Filed: April 11, 2013
Date of Patent: June 12, 2018
Assignee: Open Invention Network LLC
Inventors: Farid Khafizov, Andrey Mokhov
-
Patent number: 9983819
Abstract: Systems and methods for initializing a memory system are provided. One system includes a processor and a memory including a storage volume coupled to the processor. The storage volume includes a first bitmap for tracking an initialization process for the storage volume and a second bitmap for tracking a copying process for the storage volume. A method includes performing, via the processor, an initialization process for the storage volume and tracking, via the processor utilizing the first bitmap, the initialization process. The method further includes performing, via the processor, a copying process for the storage volume prior to completing the initialization process and tracking, via the processor utilizing the second bitmap, the copying process. Also provided are computer storage mediums including computer program code for performing the above method.
Type: Grant
Filed: January 7, 2016
Date of Patent: May 29, 2018
Assignee: International Business Machines Corporation
Inventors: Ellen J. Grusy, Brian D. Hatfield, Kurt A. Lovrien, Richard A. Ripberger, Matthew Sanchez
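The two-bitmap arrangement can be modeled region by region: one bitmap marks initialized regions, the other copied regions, and copying may start before initialization has finished. The initialize-on-demand rule when copying an uninitialized region is an assumption about how the two processes interleave, not a detail from the patent.

```python
class VolumeInit:
    """Track initialization and copying of a storage volume with two
    bitmaps, one bit per region (a simplified model)."""

    def __init__(self, regions):
        self.init_map = [False] * regions   # first bitmap: initialized?
        self.copy_map = [False] * regions   # second bitmap: copied?

    def initialize(self, region):
        self.init_map[region] = True

    def copy(self, region):
        # Copying may begin before volume-wide initialization completes;
        # each region is simply initialized before it is copied.
        if not self.init_map[region]:
            self.initialize(region)
        self.copy_map[region] = True

    def done(self):
        return all(self.init_map) and all(self.copy_map)
```

Separate bitmaps let the two background processes make independent progress and survive restarts without redoing finished regions.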
-
Patent number: 9977748
Abstract: Apparatuses, methods, systems, and program products are disclosed for managing storage of digital content. An eligibility module determines one or more content elements that are eligible for compression. A content element is determined to be eligible for compression based on one or more characteristics of the content element. A rate module determines a compression rate for each of the one or more content elements. The compression rate comprises an amount of compression to be applied to a content element. The amount of compression to be applied to the content element is determined as a function of one or more characteristics of the content element. A compression module compresses each of the one or more eligible content elements according to the determined compression rate.
Type: Grant
Filed: June 19, 2015
Date of Patent: May 22, 2018
Assignee: Lenovo (Singapore) PTE. LTD.
Inventors: Joaquin F. Luna, Marco Alberto Gonzalez, Scott Wentao Li, Grigori Zaitsev
-
Patent number: 9971709
Abstract: Described are techniques for migrating data. A source data storage system includes a source device and a target data storage system includes a target device. A passive path and an active path are provided for a host to access data of a logical device. The host recognizes the passive path and the active path as paths to the logical device. The passive path is between the host and the source data storage system. The active path is between the host and the target data storage system and used in connection with proxying at least some requests directed to the logical device received from the host through the target data storage system while migrating data for the logical device from the source device to the target device. Migrating is performed to migrate data for the logical device from the source device to the target device. Migrating is controlled by a migration module executing on the target data storage system that copies data from the source device to the target device.
Type: Grant
Filed: September 29, 2015
Date of Patent: May 15, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Matthew Long, Roy E. Clark, Dennis Duprey, David Harvey, Walter A. O'Brien, III
-
Patent number: 9971629
Abstract: A computer-implemented method includes, in a transactional memory environment, identifying a transaction and identifying one or more cache lines. The cache lines are allocated to the transaction. A cache line record is stored. The cache line record includes a reference to the one or more cache lines. An indication is received. The indication denotes a request to demote the one or more cache lines. The cache line record is retrieved, and the one or more cache lines are released. A corresponding computer program product and computer system are also disclosed.
Type: Grant
Filed: May 12, 2017
Date of Patent: May 15, 2018
Assignee: International Business Machines Corporation
Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Chung-Lung K. Shum, Timothy J. Slegel
-
Patent number: 9971704
Abstract: Methods, apparatus and design structures are provided for improving resource utilization by data compression accelerators. An exemplary apparatus for compressing data comprises a plurality of hardware data compression accelerators and a hash table shared by the plurality of hardware data compression accelerators. Each of the plurality of hardware data compression accelerators optionally comprises a first-in-first-out buffer that stores one or more input phrases. The hash table optionally records a location in the first-in-first-out buffers where a previous instance of an input phrase is stored. The plurality of hardware data compression accelerators can simultaneously access the hash table. For example, the hash table optionally comprises a plurality of input ports for simultaneous access of the hash table by the plurality of hardware data compression accelerators. A design structure for a data compression accelerator system is also disclosed.
Type: Grant
Filed: March 27, 2015
Date of Patent: May 15, 2018
Assignee: International Business Machines Corporation
Inventors: Bulent Abali, Bartholomew Blaner, Balaram Sinharoy