Patents Examined by Tuan Thai
  • Patent number: 9824030
    Abstract: Provided are a computer program product, system, and method for adjusting active cache size based on cache usage. An active cache in at least one memory device caches tracks in a storage during computer system operations. An inactive cache in the at least one memory device is not available to cache tracks in the storage during the computer system operations. During caching operations in the active cache, information is gathered on cache hits to the active cache and cache hits that would occur if the inactive cache was available to cache data during the computer system operations. The gathered information is used to determine whether to configure a portion of the inactive cache as part of the active cache for use during the computer system operations.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: November 21, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Will A. Wright
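    Illustrative sketch (Python): the abstract above can be read as an active LRU cache paired with a shadow list that stands in for the inactive cache, so that shadow hits estimate what enabling more cache would gain. The SplitCache name, the structure, and the activation threshold are assumptions for illustration, not the patented implementation.
      from collections import OrderedDict

      class SplitCache:
          """Active LRU cache plus a shadow list standing in for the inactive portion."""
          def __init__(self, active_size, inactive_size):
              self.active_size, self.inactive_size = active_size, inactive_size
              self.active = OrderedDict()   # tracks actually cached
              self.shadow = OrderedDict()   # recently evicted tracks the inactive cache could have held
              self.hits = self.shadow_hits = 0

          def access(self, track):
              if track in self.active:                      # real cache hit
                  self.active.move_to_end(track)
                  self.hits += 1
                  return
              if track in self.shadow:                      # hit only a larger cache would have seen
                  self.shadow_hits += 1
                  del self.shadow[track]
              self.active[track] = True                     # miss: stage the track into the active cache
              if len(self.active) > self.active_size:
                  evicted, _ = self.active.popitem(last=False)
                  self.shadow[evicted] = True               # shadow remembers recent evictions, LRU-bounded
                  if len(self.shadow) > self.inactive_size:
                      self.shadow.popitem(last=False)

          def worth_growing(self, min_extra_hit_share=0.05):
              """Gathered information: enable part of the inactive cache when shadow
              hits are a meaningful share of all hits (threshold is an assumption)."""
              total = self.hits + self.shadow_hits
              return total > 0 and self.shadow_hits / total >= min_extra_hit_share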
  • Patent number: 9824740
    Abstract: Described are dynamic memory systems that perform overlapping refresh and data-access (read or write) transactions that minimize the impact of the refresh transaction on memory performance. The memory systems support independent and simultaneous activate and precharge operations directed to different banks. Two sets of address registers enable the system to simultaneously specify different banks for refresh and data-access transactions.
    Type: Grant
    Filed: June 11, 2009
    Date of Patent: November 21, 2017
    Assignee: Rambus Inc.
    Inventors: Frederick A. Ware, Richard E. Perego
  • Patent number: 9824020
    Abstract: Systems and methods for managing memory in a dynamic translation computer system are provided. Embodiments may include receiving an instruction packet and processing the instruction packet. The instruction packet may include one or more instructions for obtaining a block of virtual memory for use in an emulated operating environment from a slab of virtual memory in a host environment, maintaining a mapping between the block of virtual memory and physical memory when the block is returned to the host environment, and filling the block of virtual memory with zeros and a pattern based, at least in part, on a detected fill type.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: November 21, 2017
    Assignee: Unisys Corporation
    Inventors: Michael Rieschl, James Merten, Brian Garrett, Steven Bernardy
  • Patent number: 9823876
    Abstract: Apparatus and method for managing multi-device storage systems. In some embodiments, a distributed data set is stored across a group of storage devices. Data from a selected storage device in the group are reconstructed and stored in a spare location. Host access requests associated with the data are serviced from the spare location along a first data path while the data from the spare location are concurrently transferred along a different, second data path to a replacement storage device maintained in an offline condition using a progressive (iterative) copyback process. The replacement storage device is thereafter transitioned to an online condition responsive to the transfer of the data to the replacement storage device.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: November 21, 2017
    Assignee: Seagate Technology LLC
    Inventors: Lloyd A. Poston, Gregory Prestas, Eugene M. Taranta, II
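    Illustrative sketch (Python): a minimal reading of the progressive (iterative) copyback described above, where each pass re-copies only the blocks the host dirtied on the spare during the previous pass. The function shape and the dirty-block bookkeeping are assumptions, not the patented design.
      def progressive_copyback(spare, replacement, dirty_blocks):
          """Iterative copyback: copy everything once, then keep re-copying only the
          blocks the host rewrote on the spare, until nothing is pending; at that
          point the replacement can be transitioned online."""
          pending = set(range(len(spare)))              # first pass covers the whole reconstructed set
          while pending:
              for block in pending:
                  replacement[block] = spare[block]     # second data path: spare -> offline replacement
              pending, dirty_blocks[:] = set(dirty_blocks), []   # next pass: only freshly dirtied blocks
          return replacement

      # Toy usage: host I/O keeps landing on the spare (first data path) while copyback runs.
      spare = list(range(8))
      dirty = []                                        # block numbers the host rewrites during the copy
      synced = progressive_copyback(spare, [None] * 8, dirty)
      assert synced == spare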
  • Patent number: 9824026
    Abstract: An apparatus and method are described for managing a virtual graphics processor unit (GPU). For example, one embodiment of an apparatus comprises: a dynamic addressing module to map portions of an address space required by the virtual machine to matching free address spaces of a host if such matching free address spaces are available, and to select non-matching address spaces for those portions of the address space required by the virtual machine which cannot be matched with free address spaces of the host; a balloon module to perform address space ballooning (ASB) techniques for those portions of the address space required by the virtual machine which have been mapped to matching address spaces of the host; and address remapping logic to perform address remapping techniques for those portions of the address space required by the virtual machine which have not been mapped to matching address spaces of the host.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: November 21, 2017
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian
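    Illustrative sketch (Python): one way to picture the split the abstract above describes, where guest ranges that coincide with free host ranges are left in place for ballooning and the remainder are remapped. Function and parameter names are hypothetical.
      def plan_guest_address_space(guest_ranges, host_free_ranges):
          """Partition the guest's required ranges: ranges that match a free host range
          keep their addresses (handled by address space ballooning), the rest are
          remapped to some other free host range. Ranges are (start, length) tuples."""
          host_free = set(host_free_ranges)
          ballooned, remapped = [], {}
          for rng in guest_ranges:
              if rng in host_free:          # matching free host range: no translation needed
                  ballooned.append(rng)
                  host_free.discard(rng)
              else:                         # no match: remap to an arbitrary remaining free range
                  remapped[rng] = host_free.pop()
          return ballooned, remapped

      balloon, remap = plan_guest_address_space(
          guest_ranges=[(0x0000, 0x1000), (0x2000, 0x1000)],
          host_free_ranges=[(0x0000, 0x1000), (0x8000, 0x1000)])
      # balloon == [(0, 4096)], remap == {(8192, 4096): (32768, 4096)}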
  • Patent number: 9817761
    Abstract: A method for optimization of host sequential reads based on volume of data includes, at a mass data storage device, pre-fetching a first volume of predicted data associated with an identified read data stream from a data store into a buffer memory different from the data store. A request for data from the read data stream is received from a host. In response, the requested data is provided to the host from the buffer memory. While providing the requested data to the host from the buffer memory, it is determined whether a threshold volume of data has been provided to the host from the buffer memory. If so, a second volume of predicted data associated with the identified read data stream is pre-fetched from the data store into the buffer memory. If not, additional predicted data is not pre-fetched from the data store.
    Type: Grant
    Filed: January 6, 2012
    Date of Patent: November 14, 2017
    Assignee: SanDisk Technologies LLC
    Inventors: Koren Ben-Shemesh, Yan Nosovitsky
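    Illustrative sketch (Python): a toy read-ahead buffer along the lines of the abstract above, which triggers the next pre-fetch only once a threshold volume has been served to the host. Chunk and threshold sizes are arbitrary assumptions.
      class ReadAheadBuffer:
          """Serve sequential reads from a prefetch buffer; trigger the next prefetch
          only after a threshold volume has been handed to the host."""
          def __init__(self, data_store, chunk, threshold):
              self.store, self.chunk, self.threshold = data_store, chunk, threshold
              self.buffer = bytearray()
              self.next_offset = 0
              self.served_since_prefetch = 0
              self._prefetch()

          def _prefetch(self):
              self.buffer += self.store[self.next_offset:self.next_offset + self.chunk]
              self.next_offset += self.chunk
              self.served_since_prefetch = 0

          def read(self, nbytes):
              out, self.buffer = bytes(self.buffer[:nbytes]), self.buffer[nbytes:]
              self.served_since_prefetch += len(out)
              if self.served_since_prefetch >= self.threshold:   # enough consumed: fetch next volume
                  self._prefetch()
              return out

      stream = ReadAheadBuffer(data_store=bytes(range(256)) * 64, chunk=4096, threshold=2048)
      first = stream.read(1024)   # served from the buffer, no new prefetch yet
      second = stream.read(1024)  # crosses the threshold, so the next chunk is prefetched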
  • Patent number: 9811284
    Abstract: A method for data storage includes preparing first data having a first size for storage in a memory device that stores data having a nominal size larger than the first size, by programming a group of memory cells to multiple predefined levels using a one-pass program-and-verify scheme. The first data is combined with dummy data to produce first combined data having the nominal size, and is sent to the memory device for storage in the group. The dummy data is chosen to limit the levels to which the memory cells in the group are programmed to a partial subset of the predefined levels. In response to identifying second data to be stored in the group, the second data is combined with the first data to obtain second combined data having the nominal size, and is sent to the memory device for storage, in place, in the group.
    Type: Grant
    Filed: December 20, 2015
    Date of Patent: November 7, 2017
    Assignee: APPLE INC.
    Inventors: Charan Srinivasan, Eyal Gurgi
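    Illustrative sketch (Python): a toy two-bit-per-cell mapping that shows the idea of padding first data with dummy bits so that only a low subset of levels is programmed, then merging real second data so every cell only moves to an equal or higher level and can be written in place. The specific level mapping is an assumption for illustration, not Apple's encoding.
      # Toy 2-bit mapping from (first_bit, second_bit) to one of four cell levels.
      LEVEL = {(1, 1): 0, (0, 1): 1, (0, 0): 2, (1, 0): 3}

      def pad_first_data(first_bits):
          """Pick dummy second bits so every cell lands in the low subset {0, 1}:
          the group can be programmed in one pass and still has headroom."""
          return [LEVEL[(b, 1)] for b in first_bits]          # dummy bit fixed to 1 -> levels 0 or 1

      def merge_second_data(first_bits, second_bits):
          """Later, combine the real second bits with the stored first bits; every
          cell only moves to an equal or higher level, so it can be written in place."""
          return [LEVEL[(f, s)] for f, s in zip(first_bits, second_bits)]

      first = [1, 0, 0, 1]
      assert max(pad_first_data(first)) <= 1                  # only the partial subset of levels used
      assert all(b >= a for a, b in zip(pad_first_data(first), merge_second_data(first, [0, 1, 0, 1])))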
  • Patent number: 9811464
    Abstract: In one embodiment of the invention, a processor comprises an upper level cache and at least one processor core. The at least one processor core includes one or more registers and a plurality of instruction processing stages: a decode unit to decode an instruction requiring an input of a plurality of data elements, wherein a size of each of the plurality of data elements is less than a cache line size of the processor; an execution unit to load the plurality of data elements into the one or more registers of the processor without loading, into the upper level cache, either the plurality of data elements or data elements spatially adjacent to them.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: November 7, 2017
    Assignee: INTEL CORPORATION
    Inventors: Ruchira Sasanka, Elmoustapha Ould-Ahmed-Vall
  • Patent number: 9811281
    Abstract: A memory management service occupies a configurable portion of an overall memory system in a disaggregate compute environment. The service provides optimized data organization capabilities over the pool of real memory accessible to the system. The service enables various types of data stores to be implemented in hardware, including at a data structure level. Storage capacity conservation is enabled through the creation and management of high-performance, re-usable data structure implementations across the memory pool, and then using analytics (e.g., multi-tenant similarity and duplicate detection) to determine when data organizations should be used. The service also may re-align memory to different data structures that may be more efficient given data usage and distribution patterns. The service also advantageously manages automated backups efficiently.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: November 7, 2017
    Assignee: International Business Machines Corporation
    Inventors: John Alan Bivens, Koushik K. Das, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
  • Patent number: 9811474
    Abstract: Provided are a computer program product, system, and method for determining cache performance using a ghost cache list. Tracks in the cache are indicated in a cache list. A track demoted from the cache is indicated in a ghost cache list in response to demoting the track in the cache. The demoted track is not indicated in the cache list. During caching operations, information is gathered on a number of cache hits comprising accesses to tracks indicated in the cache list and a number of ghost cache hits comprising accesses to tracks indicated in the ghost cache list. The gathered information on the cache hits and the ghost cache hits is used to generate information on cache performance improvements that would occur if the cache was increased in size to cache tracks in the ghost cache list.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: November 7, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Juan A. Yanes
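    Illustrative sketch (Python): the arithmetic step implied by the abstract above, turning counted cache hits and ghost-cache hits into current and projected hit ratios. The numbers are made up for the example.
      def projected_hit_ratio(cache_hits, ghost_hits, total_accesses):
          """Estimate the hit ratio now versus the hit ratio if the cache grew enough
          to also hold the tracks on the ghost list (demoted-but-remembered tracks)."""
          current = cache_hits / total_accesses
          enlarged = (cache_hits + ghost_hits) / total_accesses   # ghost hits would become real hits
          return current, enlarged

      now, if_larger = projected_hit_ratio(cache_hits=7_000, ghost_hits=1_500, total_accesses=10_000)
      # now == 0.70, if_larger == 0.85: growing the cache is predicted to add 15 points of hit ratio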
  • Patent number: 9811261
    Abstract: A processing device determines configuration data associated with a device. The processing device analyzes the configuration data with respect to storage usage data collected over a previous time period. The processing device determines a maximum amount of storage space of a storage component for the device that is predicted to be written to in a future time period. The processing device determines a free space buffer threshold for a free space buffer of the storage component to be greater than the maximum amount of storage space that is predicted to be written to in the future time period.
    Type: Grant
    Filed: September 13, 2015
    Date of Patent: November 7, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Ishwar VenkataManikanda Ramani, Michael Wendling, Mridula Karumuru, James Robert Wright
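    Illustrative sketch (Python): one plausible way to derive the free space buffer threshold described above from collected usage data; the worst-day projection and the headroom factor are assumptions, not Amazon's method.
      def free_space_buffer_threshold(daily_writes_bytes, horizon_days, headroom=1.25):
          """Size the free-space buffer from past usage: take the worst observed daily
          write volume, project it over the future period, and add headroom so the
          threshold exceeds the predicted maximum. 'headroom' is an assumed safety factor."""
          predicted_max = max(daily_writes_bytes) * horizon_days
          return int(predicted_max * headroom)

      history = [3_000_000_000, 4_500_000_000, 2_200_000_000]   # bytes written per day, last 3 days
      threshold = free_space_buffer_threshold(history, horizon_days=7)
      # threshold is about 39.4 GB: keep at least this much free so the next week's writes never stall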
  • Patent number: 9804931
    Abstract: A memory system enabling memory mirroring in which primary and backup data are stored in single write operations. The memory system utilizes a memory channel including one or more latency groups, with each latency group encompassing a number of memory modules that have the same signal timing to the controller. A primary copy and a backup copy of a data element can be written to two memory modules in the same latency group of the channel and in a single write operation. The buses of the channel may have the same trace length to each of the memory modules within a latency group.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: October 31, 2017
    Assignee: Rambus Inc.
    Inventors: Steven Woo, David Secker, Ravindranath Kollipara
  • Patent number: 9804779
    Abstract: Attributing consumed storage capacity among entities storing data in a storage array includes: identifying a data object stored in the storage array and shared by a plurality of entities, where the data object occupies an amount of storage capacity of the storage array; and attributing to each entity a fractional portion of the amount of storage capacity occupied by the data object.
    Type: Grant
    Filed: January 30, 2017
    Date of Patent: October 31, 2017
    Assignee: Pure Storage, Inc.
    Inventors: Jianting Cao, Martin Harriman, John Hayes, Cary Sandvig
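    Illustrative sketch (Python): fractional attribution of a shared object's footprint as described above, here using an even split among sharers, which is one simple choice the abstract leaves open.
      def attribute_capacity(objects):
          """Split each shared object's footprint across the entities that share it,
          so per-entity usage sums back to the real consumed capacity."""
          usage = {}
          for size_bytes, sharers in objects:
              share = size_bytes / len(sharers)             # fractional portion per sharing entity
              for entity in sharers:
                  usage[entity] = usage.get(entity, 0) + share
          return usage

      usage = attribute_capacity([
          (12_000, ["vol-a", "vol-b", "vol-c"]),            # deduplicated block shared three ways
          (8_000,  ["vol-a"]),                              # block unique to vol-a
      ])
      # usage == {'vol-a': 12000.0, 'vol-b': 4000.0, 'vol-c': 4000.0}; totals match 20000 bytes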
  • Patent number: 9804975
    Abstract: An apparatus having processing circuitry configured to execute applications involving access to memory may include a CPU and a cache controller. The CPU may be configured to access cache memory for execution of an application. The cache controller may be configured to provide an interface between the CPU and the cache memory. The cache controller may include a bitmask to enable the cache controller to employ a two-level data structure to identify memory exploits using hardware. The two-level data structure may include a page level protection mechanism, and a sub-page level protection mechanism.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: October 31, 2017
    Assignee: The Johns Hopkins University
    Inventor: James M. Stevens
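    Illustrative sketch (Python): a software stand-in for the two-level (page plus sub-page) bitmask lookup described above; the hardware would do this with mask registers, and the page and sub-block sizes here are assumptions.
      PAGE_SIZE, SUBBLOCK = 4096, 64          # illustrative sizes, not taken from the patent

      class TwoLevelGuard:
          """Page-level bit says 'something in this page is protected'; only then is
          the per-page sub-block bitmask consulted, keeping the common case to one check."""
          def __init__(self):
              self.page_bits = set()          # pages containing any protected sub-block
              self.sub_bits = {}              # page -> set of protected sub-block indices

          def protect(self, addr, length):
              for a in range(addr, addr + length, SUBBLOCK):
                  page, sub = a // PAGE_SIZE, (a % PAGE_SIZE) // SUBBLOCK
                  self.page_bits.add(page)
                  self.sub_bits.setdefault(page, set()).add(sub)

          def is_violation(self, addr):
              page = addr // PAGE_SIZE
              if page not in self.page_bits:  # fast path: page-level mask clears most accesses
                  return False
              return (addr % PAGE_SIZE) // SUBBLOCK in self.sub_bits[page]

      guard = TwoLevelGuard()
      guard.protect(0x1000, 64)               # mark one 64-byte sub-block as off-limits
      assert guard.is_violation(0x1010) and not guard.is_violation(0x1080)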
  • Patent number: 9798465
    Abstract: Described are techniques for performing data storage management operations. A graphical user interface display includes multiple areas, each associated with a tiering preference. The graphical user interface includes multiple user interface elements representing a plurality of logical devices. Each user interface element denotes a logical device located in one of the plurality of areas to thereby indicate any of a tiering preference and a tiering requirement for the logical device. First processing is performed to modify a tiering preference for a first logical device, where the first processing includes selecting the first logical device by selecting a first user interface element representing the first logical device, and moving the first user interface element from a first of the areas, denoting a first tiering preference, to a second of the areas, denoting a second tiering preference.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: October 24, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Donald E. Labaj, Kendra Marchant, Rhon Porter
  • Patent number: 9798489
    Abstract: An administrator provisions a virtual disk in a remote storage platform and defines policies for that virtual disk. A virtual machine writes to and reads from the storage platform using any storage protocol. Virtual disk data within a failed storage pool is migrated to different storage pools while still respecting the policies of each virtual disk. Snapshot and revert commands are given for a virtual disk at a particular point in time and overhead is minimal. A virtual disk is cloned utilizing snapshot information and no data need be copied. Any number of Zookeeper clusters are executing in a coordinated fashion within the storage platform, thus increasing overall throughput. A timestamp is generated that guarantees a monotonically increasing counter, even upon a crash of a virtual machine. Any virtual disk has a “hybrid cloud aware” policy in which one replica of the virtual disk is stored in a public cloud.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: October 24, 2017
    Assignee: HEDVIG, INC.
    Inventors: Avinash Lakshman, Srinivas Lakshman
  • Patent number: 9798472
    Abstract: A System, Computer Program Product, and Computer-executable method for managing cache de-staging on a data storage system, wherein the data storage system provides a Logical Unit (LU), the System, Computer Program Product, and Computer-executable method including dividing the LU into two or more extents, analyzing each of the two or more extents, creating a cache de-staging policy based on the analysis, and managing cache de-staging of the LU based on the cache de-staging policy.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: October 24, 2017
    Assignee: EMC CORPORATION
    Inventors: Assaf Natanzon, Eitan Bachmat, Mark Abashkin
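    Illustrative sketch (Python): one possible extent-level analysis feeding a de-staging policy, as the abstract above outlines; the hot/cold split and the lazy/eager labels are assumptions, since the abstract does not specify the policy itself.
      def build_destage_policy(extent_stats, fast_ratio=0.25):
          """Rank extents by observed write activity and de-stage the busiest ones less
          aggressively (they are likely to be overwritten again), the cold ones sooner.
          'extent_stats' maps extent id -> writes observed in the analysis window."""
          ranked = sorted(extent_stats, key=extent_stats.get, reverse=True)
          hot = set(ranked[:max(1, int(len(ranked) * fast_ratio))])
          return {ext: ("lazy" if ext in hot else "eager") for ext in ranked}

      policy = build_destage_policy({0: 900, 1: 40, 2: 15, 3: 700})
      # {0: 'lazy', 3: 'eager', 1: 'eager', 2: 'eager'}: only the hottest extent is de-staged lazily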
  • Patent number: 9798665
    Abstract: A method that may include determining, for each user of a group of users, a time difference between an event of a first type that is related to a storage of a user data unit of the user within a cache of a storage system and an eviction of the user data unit from the cache, in response to (a) a service-level agreement (SLA) associated with the user and to (b) multiple data hit ratios associated with multiple different values of a time difference between events of the first type and evictions, from the cache, of multiple user data units of the user; and evicting from the cache, based upon the determination, one or more user data units associated with one or more users of the group.
    Type: Grant
    Filed: December 20, 2015
    Date of Patent: October 24, 2017
    Assignee: INFINIDAT LTD.
    Inventor: Yechiel Yochai
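    Illustrative sketch (Python): a toy version of the per-user decision described above, picking the shortest residency time whose measured hit ratio still meets the user's SLA and evicting older data units. Input shapes and values are invented for the example.
      def pick_evictions(cache_entries, hit_ratio_curves, sla_targets, now):
          """For each user, choose the shortest residency time whose measured hit ratio
          still meets the user's SLA target, then evict that user's data units that
          have been cached longer than that."""
          residency = {}
          for user, curve in hit_ratio_curves.items():        # curve: residency seconds -> hit ratio
              ok = [t for t, ratio in sorted(curve.items()) if ratio >= sla_targets[user]]
              residency[user] = ok[0] if ok else max(curve)   # fall back to the longest measured time
          return [unit for unit, (user, stored_at) in cache_entries.items()
                  if now - stored_at > residency[user]]

      evict = pick_evictions(
          cache_entries={"u1:blk7": ("u1", 100.0), "u2:blk3": ("u2", 100.0)},
          hit_ratio_curves={"u1": {30: 0.6, 120: 0.9}, "u2": {30: 0.85, 120: 0.95}},
          sla_targets={"u1": 0.9, "u2": 0.8},
          now=160.0)
      # evict == ['u2:blk3']: u2's SLA is already met at 30 s residency, so its 60-second-old
      # unit goes; u1 needs 120 s of residency to meet its SLA, so its unit stays.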
  • Patent number: 9798471
    Abstract: Embodiments of the present invention relate to a method and apparatus for improving performance of a de-clustered disk array by: collecting statistics on the number and types of active input/output (I/O) requests for each of the plurality of physical disks; dividing the plurality of physical disks into at least a first schedule group and a second schedule group based on the collected number and types of active I/O requests for each physical disk over a predetermined time period, the first schedule group having a first schedule priority and the second schedule group having a second schedule priority higher than the first schedule priority; and selecting, in decreasing order of schedule priority, a physical disk to schedule from one of the resulting schedule groups, thereby preventing too many I/O requests from concentrating on a few physical disks and improving the overall performance of the de-clustered RAID.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: October 24, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Alan Zhongjie Wu, Colin Yong Zou, Chris Zirui Liu, Fei Wang, Zhengli Yi
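    Illustrative sketch (Python): grouping disks by active I/O count and scheduling from the higher-priority (quieter) group first, as the abstract above describes; the busy threshold and the two-group split are assumptions.
      def schedule_disk(active_io, busy_threshold=8):
          """Group disks by how many active I/O requests they carry over the window and
          schedule from the lightly loaded group first, so requests do not pile onto a
          few members of the de-clustered array."""
          quiet = [d for d, n in active_io.items() if n < busy_threshold]    # higher priority group
          busy = [d for d, n in active_io.items() if n >= busy_threshold]    # lower priority group
          for group in (quiet, busy):                                        # decreasing priority
              if group:
                  return min(group, key=active_io.get)                       # least-loaded disk in group
          raise ValueError("no disks available")

      disk = schedule_disk({"d0": 12, "d1": 3, "d2": 9, "d3": 5})
      # picks 'd1': it sits in the quiet group and has the fewest active requests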
  • Patent number: 9792048
    Abstract: Systems and methods are disclosed for identifying disk drives and processing data access requests. A disk drive may be identified as an Advanced Host Controller Interface (AHCI) drive, a Non-Volatile Memory Express (NVME) drive, and/or an ATA packet interface (ATAPI) drive. Data access requests for the disk drive may be translated to NVME commands, AHCI commands, or ATAPI commands, based on whether the drive is identified as a NVME drive, an AHCI drive, and/or an ATAPI drive.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: October 17, 2017
    Assignee: Western Digital Technologies, Inc.
    Inventor: John E. Maroney
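    Illustrative sketch (Python): a dispatch function in the spirit of the abstract above, translating a generic block request into an NVME-, AHCI-, or ATAPI-style command depending on how the drive was identified. The command labels are simplified placeholders, not real register encodings.
      def translate_request(drive_type, op, lba, nblocks):
          """Dispatch a generic block request to the command set matching the
          identified drive type."""
          if drive_type == "nvme":
              return {"opcode": "nvme_read" if op == "read" else "nvme_write",
                      "slba": lba, "nlb": nblocks}
          if drive_type == "ahci":
              return {"fis": "ahci_read_dma" if op == "read" else "ahci_write_dma",
                      "lba": lba, "count": nblocks}
          if drive_type == "atapi":
              return {"packet": "atapi_read10" if op == "read" else "atapi_write10",
                      "lba": lba, "length": nblocks}
          raise ValueError(f"unknown drive type: {drive_type}")

      cmd = translate_request("nvme", "read", lba=2048, nblocks=16)
      # {'opcode': 'nvme_read', 'slba': 2048, 'nlb': 16}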