Patents Examined by Reginald Bragdon
  • Patent number: 9740628
    Abstract: A method includes identifying, by a processor, a first page table entry (PTE) of a page table for translating virtual addresses to main storage addresses, the page table comprising a second page table entry contiguous with the first page table entry, determining with the processor whether the first PTE may be joined with the second PTE, the determining based on the respective pages of main storage being contiguous, and setting a marker in the page table for indicating that the main storage pages identified by the first and second PTEs are contiguous.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: August 22, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Anthony J. Bybell, Michael K. Gschwind
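
Patent 9740628 above describes joining adjacent page table entries when they map contiguous main storage. A minimal C sketch of that contiguity check follows; the 64-bit PTE layout, the marker bit, and the 4 KiB page size are illustrative assumptions, not details from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE    4096ULL                 /* assumed 4 KiB pages           */
#define PTE_CONTIG   (1ULL << 52)            /* assumed "contiguous" marker   */
#define PTE_PFN_MASK 0x000FFFFFFFFFF000ULL   /* assumed frame-address field   */

typedef uint64_t pte_t;

/* Main storage address mapped by a PTE. */
static uint64_t pte_phys(pte_t pte) { return pte & PTE_PFN_MASK; }

/* Two adjacent PTEs may be joined when their main storage pages are
 * contiguous, i.e. the second page starts right after the first.      */
static bool ptes_joinable(pte_t first, pte_t second)
{
    return pte_phys(second) == pte_phys(first) + PAGE_SIZE;
}

/* Set the marker in the first PTE of a joinable pair. */
static void mark_contiguous(pte_t *first, pte_t second)
{
    if (ptes_joinable(*first, second))
        *first |= PTE_CONTIG;
}

int main(void)
{
    pte_t table[2] = { 0x0000000040000000ULL, 0x0000000040001000ULL };

    mark_contiguous(&table[0], table[1]);
    printf("marker set: %s\n", (table[0] & PTE_CONTIG) ? "yes" : "no");
    return 0;
}
```
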
  • Patent number: 9740569
    Abstract: Head start population of an image backup. In one example embodiment, a method for head start population of an image backup may include tracking blocks that are modified in a source storage between a first point in time and a second point in time, head start copying a first portion of the modified blocks into the image backup prior to the second point in time and ceasing the tracking of the first portion of the modified blocks as being modified, activating a snapshot on the source storage at the second point in time where the snapshot represents a state of the source storage at the second point in time, and copying, subsequent to the second point in time, from the snapshot and into the image backup, a second portion of the modified blocks that were not yet copied into the image backup by the second point in time.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: August 22, 2017
    Assignee: STORAGECRAFT TECHNOLOGY CORPORATION
    Inventor: Nathan S. Bushman
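
A rough C sketch of the head-start flow in patent 9740569 above: copy some already-modified blocks before the snapshot point, stop tracking them, then finish the rest from the snapshot. The bitmap tracking and the in-memory "snapshot" are stand-ins, not StorageCraft's implementation.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NUM_BLOCKS 16

static bool modified[NUM_BLOCKS];   /* blocks changed since the first point in time */
static int  source[NUM_BLOCKS];     /* live source storage (toy data)               */
static int  snapshot[NUM_BLOCKS];   /* point-in-time copy taken at the second time  */
static int  backup[NUM_BLOCKS];     /* the image backup being populated             */

/* Head start: before the second point in time, copy some already-modified
 * blocks straight from the source and cease tracking them as modified.    */
static void head_start_copy(int limit)
{
    for (int b = 0; b < NUM_BLOCKS && limit > 0; b++) {
        if (modified[b]) {
            backup[b]   = source[b];
            modified[b] = false;   /* no longer tracked as modified */
            limit--;
        }
    }
}

/* At the second point in time: activate a snapshot, then copy whatever is
 * still marked modified from the snapshot (not the live source).          */
static void finish_from_snapshot(void)
{
    memcpy(snapshot, source, sizeof snapshot);   /* "activate" the snapshot */
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (modified[b])
            backup[b] = snapshot[b];
}

int main(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++) { source[b] = b; modified[b] = (b % 3 == 0); }
    head_start_copy(3);      /* first portion, copied before the snapshot  */
    finish_from_snapshot();  /* second portion, copied from the snapshot   */
    printf("backup[0]=%d backup[3]=%d\n", backup[0], backup[3]);
    return 0;
}
```
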
  • Patent number: 9734062
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the cache-lines comprises a plurality of sub-cache lines. Each of the plurality of cache-lines and each of the plurality of sub-cache lines is associated with meta-data indicating one or more of a dirty state and an invalid state. The controller is connected to the memory and configured to (i) recognize sub-cache line boundaries and (ii) process the I/O requests in multiples of a size of said sub-cache lines to minimize cache-fills.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: August 15, 2017
    Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
    Inventors: Saugata Das Purkayastha, Luca Bert, Horia Simionescu, Kishore Kaniyar Sampathkumar, Mark Ish
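
For patent 9734062 above, a small C sketch of per-sub-cache-line dirty metadata and of rounding an I/O request to sub-cache-line multiples. The 64 KiB cache-line and 4 KiB sub-line sizes are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_SIZE    65536u   /* assumed 64 KiB cache-line     */
#define SUB_LINE_SIZE       4096u   /* assumed 4 KiB sub-cache-line  */
#define SUB_LINES_PER_LINE (CACHE_LINE_SIZE / SUB_LINE_SIZE)   /* 16 */

/* Per-cache-line metadata: one dirty bit and one valid bit per sub-line. */
struct cache_line_meta {
    uint16_t dirty_bits;   /* bit i set => sub-line i is dirty    */
    uint16_t valid_bits;   /* bit i clear => sub-line i invalid   */
};

/* Round an I/O request out to sub-cache-line boundaries so that fills and
 * flushes happen in multiples of the sub-line size.                       */
static void align_to_sublines(uint32_t offset, uint32_t length,
                              uint32_t *first_sub, uint32_t *sub_count)
{
    uint32_t start = offset / SUB_LINE_SIZE;
    uint32_t end   = (offset + length + SUB_LINE_SIZE - 1) / SUB_LINE_SIZE;

    *first_sub = start;
    *sub_count = end - start;
}

int main(void)
{
    struct cache_line_meta meta = { 0, 0 };
    uint32_t first, count;

    align_to_sublines(6000, 3000, &first, &count);   /* touches sub-lines 1..2 */
    for (uint32_t i = 0; i < count; i++)
        meta.dirty_bits |= (uint16_t)(1u << (first + i));

    printf("sub-lines %u..%u dirty, mask 0x%04x\n",
           (unsigned)first, (unsigned)(first + count - 1),
           (unsigned)meta.dirty_bits);
    return 0;
}
```
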
  • Patent number: 9734079
    Abstract: Hybrid multi-level memory architecture technologies are described. A System on Chip (SOC) includes multiple functional units and a multi-level memory controller (MLMC) coupled to the functional units. The MLMC is coupled to a hybrid multi-level memory architecture including a first-level dynamic random access memory (DRAM) (near memory) that is located on-package of the SOC and a second-level DRAM (far memory) that is located off-package of the SOC. The MLMC presents the first-level DRAM and the second-level DRAM as a contiguous addressable memory space and provides the first-level DRAM to software as additional memory capacity to a memory capacity of the second-level DRAM. The first-level DRAM does not store a copy of contents of the second-level DRAM.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: August 15, 2017
    Assignee: Intel Corporation
    Inventors: Dannie G. Feekes, Shlomo Raikin, Blaise Fanning, Joydeep Ray, Julius Mandelblat, Ariel Berkovits, Eran Shifer, Zvika Greenfield, Evgeny Bolotin
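
A C sketch of the address routing implied by patent 9734079 above: near and far DRAM are exposed as one contiguous space, and an address is serviced by exactly one of them (no copy is kept in near memory). The capacities and the simple below/above split are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed capacities: near (on-package) DRAM is mapped below far
 * (off-package) DRAM, so the two appear as one contiguous space and the
 * near memory adds capacity rather than caching the far memory.         */
#define NEAR_SIZE (1ULL << 30)   /* 1 GiB near memory */
#define FAR_SIZE  (8ULL << 30)   /* 8 GiB far memory  */

enum mem_level { NEAR_MEMORY, FAR_MEMORY };

struct route { enum mem_level level; uint64_t local_addr; };

/* MLMC-style routing: the address alone decides which DRAM services the
 * access; contents are never duplicated between the two levels.         */
static struct route route_address(uint64_t sys_addr)
{
    struct route r;
    if (sys_addr < NEAR_SIZE) {
        r.level = NEAR_MEMORY;
        r.local_addr = sys_addr;
    } else {
        r.level = FAR_MEMORY;
        r.local_addr = sys_addr - NEAR_SIZE;
    }
    return r;
}

int main(void)
{
    uint64_t total = NEAR_SIZE + FAR_SIZE;   /* capacity seen by software */
    struct route r = route_address(NEAR_SIZE + 4096);

    printf("total capacity: %llu GiB, sample access -> %s memory\n",
           (unsigned long long)(total >> 30),
           r.level == NEAR_MEMORY ? "near" : "far");
    return 0;
}
```
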
  • Patent number: 9734056
    Abstract: A cache structure for implementing reconfigurable-system configuration information storage comprises: layered configuration information cache units, for caching configuration information that may be used by one or more reconfigurable arrays within a period of time; an off-chip memory interface module, for establishing communication; and a configuration management unit, for managing the reconfiguration process of the reconfigurable arrays and mapping each subtask of an algorithm application to a particular reconfigurable array, so that the reconfigurable array loads the corresponding configuration information on the basis of the mapped subtask to complete its function reconfiguration. This increases the utilization efficiency of configuration information caches.
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: August 15, 2017
    Assignee: Southeast University
    Inventors: Longxing Shi, Jun Yang, Peng Cao, Bo Liu, Jinjiang Yang, Leibo Liu, Shouyi Yin, Shaojun Wei
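
A C sketch loosely following patent 9734056 above: a configuration management unit maps each subtask to a reconfigurable array and loads its configuration from an on-chip configuration cache, falling back to off-chip memory on a miss. The cache organization, replacement, and mapping rule are all illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_ARRAYS  4   /* reconfigurable arrays on the chip (assumed) */
#define CACHE_SLOTS 8   /* configuration-cache capacity (assumed)      */

/* On-chip configuration information cache: the configuration id held in
 * each slot (-1 = empty).                                               */
static int config_cache[CACHE_SLOTS] = { -1, -1, -1, -1, -1, -1, -1, -1 };
static int next_victim;

/* Look the configuration up in the cache; on a miss, fetch it through the
 * off-chip memory interface (simulated here) and install it.             */
static bool load_configuration(int config_id)
{
    for (int s = 0; s < CACHE_SLOTS; s++)
        if (config_cache[s] == config_id)
            return true;                       /* cache hit             */

    config_cache[next_victim] = config_id;     /* fetched from off-chip */
    next_victim = (next_victim + 1) % CACHE_SLOTS;
    return false;
}

/* Configuration management unit: map each subtask to an array and load the
 * corresponding configuration so that array can reconfigure its function. */
static void schedule_subtask(int subtask_id)
{
    int array = subtask_id % NUM_ARRAYS;       /* assumed mapping rule  */
    bool hit  = load_configuration(subtask_id);
    printf("subtask %d -> array %d (config %s)\n",
           subtask_id, array, hit ? "from cache" : "from off-chip memory");
}

int main(void)
{
    for (int t = 0; t < 6; t++) schedule_subtask(t % 3);
    return 0;
}
```
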
  • Patent number: 9734014
    Abstract: A method for generating virtual dispersed storage network (DSN) addresses includes dispersed storage error encoding a data segment of a data object to produce a set of encoded data slices of a plurality of sets of encoded data slices. The method further includes generating, for each encoded data slice of the set of encoded data slices, a virtual DSN address having a slice name that includes a vault identifier, a slice index, a data object identifier, and a data segment identifier. The method further includes obtaining a mapping of a vault to a set of storage units of the DSN, wherein the mapping indicates how the set of encoded data slices are to be stored. The method further includes outputting the set of encoded data slices to the set of storage units in accordance with the mapping.
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: August 15, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Greg Dhuse, Andrew Baptist
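
For patent 9734014 above, a C sketch of composing a slice name from vault, slice index, object, and segment identifiers and of mapping slices to storage units. The field widths and the modulo mapping stand in for the vault-to-storage-unit mapping; they are not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define SLICE_WIDTH 5   /* assumed number of slices (and storage units) per set */

/* Components of a virtual DSN address (slice name); widths are illustrative. */
struct slice_name {
    uint16_t vault_id;
    uint8_t  slice_index;   /* position of the slice within its set */
    uint32_t object_id;
    uint32_t segment_id;
};

/* Assumed vault-to-storage-unit mapping: slice index i of every set stored
 * for this vault goes to storage unit i.                                   */
static int storage_unit_for(const struct slice_name *sn)
{
    return sn->slice_index % SLICE_WIDTH;
}

int main(void)
{
    /* One set of encoded data slices for segment 7 of object 42. */
    for (uint8_t i = 0; i < SLICE_WIDTH; i++) {
        struct slice_name sn = { .vault_id = 1, .slice_index = i,
                                 .object_id = 42, .segment_id = 7 };
        printf("slice %u of segment %u -> storage unit %d\n",
               (unsigned)sn.slice_index, (unsigned)sn.segment_id,
               storage_unit_for(&sn));
    }
    return 0;
}
```
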
  • Patent number: 9733864
    Abstract: An erase method of a nonvolatile memory device is provided which includes receiving an erase request; selecting an erase mode of a memory block corresponding to the erase request, based on an access condition of the nonvolatile memory device managed by a memory controller; and controlling the nonvolatile memory device to erase the memory block according to the selected erase mode. The erase mode includes a fast erase mode of which an erase time for the memory block is shorter than a reference time and a slow erase mode of which an erase time for the memory block is longer than the reference time.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: August 15, 2017
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB Foundation
    Inventors: Sangkwon Moon, Jihong Kim, Jaeyong Jeong, Kyung Ho Kim
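
A C sketch of the mode selection in patent 9733864 above: the controller picks a fast or slow erase for the target block based on an access condition it tracks. The particular condition (queue depth and free-block count) and the thresholds are assumptions.

```c
#include <stdio.h>

enum erase_mode { FAST_ERASE, SLOW_ERASE };

/* Assumed access condition tracked by the memory controller: how busy the
 * device is and how many erased blocks remain available.                  */
struct access_condition {
    int queued_requests;   /* pending host I/O            */
    int free_blocks;       /* erased blocks ready for use */
};

/* Pick the erase mode for the block being erased.  The policy is
 * illustrative: erase fast (shorter than the reference time) when the
 * device is under pressure, erase slowly (gentler on the cells) otherwise. */
static enum erase_mode select_erase_mode(const struct access_condition *ac)
{
    if (ac->queued_requests > 8 || ac->free_blocks < 4)
        return FAST_ERASE;
    return SLOW_ERASE;
}

int main(void)
{
    struct access_condition busy = { .queued_requests = 20, .free_blocks = 2 };
    struct access_condition idle = { .queued_requests = 0,  .free_blocks = 64 };

    printf("busy device -> %s erase\n",
           select_erase_mode(&busy) == FAST_ERASE ? "fast" : "slow");
    printf("idle device -> %s erase\n",
           select_erase_mode(&idle) == FAST_ERASE ? "fast" : "slow");
    return 0;
}
```
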
  • Patent number: 9734052
    Abstract: The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined if the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: August 15, 2017
    Assignee: International Business Machines Corporation
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
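
A C sketch of the load-diversion idea in patent 9734052 above: a load whose target lies in the block under collection is diverted to a routine that moves the object out of that block, after which the load continues. The region check and move routine are stand-ins for the hardware/runtime mechanism.

```c
#include <stdio.h>

/* Memory block of user address space currently being garbage collected,
 * and a space to move surviving objects into (both illustrative).        */
static int gc_block[1024];
static int survivor_space[1024];
static unsigned survivor_used;

static int in_gc_block(const int *p)
{
    return p >= gc_block && p < gc_block + 1024;
}

/* Diversion path: move the object out of the block under collection and
 * return its new location.                                               */
static int *move_out_of_gc_block(int *obj)
{
    int *dst = &survivor_space[survivor_used++];
    *dst = *obj;
    return dst;
}

/* Stand-in for the diverted load instruction: check whether collection is
 * running at the loaded address, divert if so, then continue the load.   */
static int checked_load(int **slot)
{
    if (in_gc_block(*slot))
        *slot = move_out_of_gc_block(*slot);
    return **slot;
}

int main(void)
{
    int *obj = &gc_block[0];   /* object lives inside the collected block */
    *obj = 99;

    int value = checked_load(&obj);
    printf("loaded %d; object %s the collected block\n",
           value, in_gc_block(obj) ? "is still in" : "was moved out of");
    return 0;
}
```
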
  • Patent number: 9734059
    Abstract: A method of way prediction for a data cache having a plurality of ways is provided. Responsive to an instruction to access a stack data block, the method accesses identifying information associated with a plurality of most recently accessed ways of a data cache to determine whether the stack data block resides in one of the plurality of most recently accessed ways of the data cache, wherein the identifying information is accessed from a subset of an array of identifying information corresponding to the plurality of most recently accessed ways; and when the stack data block resides in one of the plurality of most recently accessed ways of the data cache, the method accesses the stack data block from the data cache.
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: August 15, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lena E. Olson, Yasuko Eckert, Vilas K. Sridharan, James M. O'Connor, Mark D. Hill, Srilatha Manne
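
For patent 9734059 above, a C sketch of way prediction for stack accesses: only the tags of the most recently accessed ways are consulted, and a miss in that subset falls back to a full lookup. The associativity and MRU depth are assumed.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS 8   /* assumed associativity          */
#define MRU_WAYS 2   /* assumed size of the MRU subset */

/* One cache set: a tag per way plus the identities of the most recently
 * accessed ways (the subset of identifying information that is checked). */
struct cache_set {
    uint64_t tags[NUM_WAYS];
    bool     valid[NUM_WAYS];
    int      mru[MRU_WAYS];
};

/* For a stack access, look only at the MRU ways; return the hit way or -1
 * so the caller can fall back to a full lookup of all ways.               */
static int predict_stack_way(const struct cache_set *set, uint64_t tag)
{
    for (int i = 0; i < MRU_WAYS; i++) {
        int way = set->mru[i];
        if (set->valid[way] && set->tags[way] == tag)
            return way;
    }
    return -1;
}

int main(void)
{
    struct cache_set set = { .tags = { 0 }, .valid = { false }, .mru = { 3, 5 } };
    set.tags[5]  = 0xABCD;
    set.valid[5] = true;

    printf("stack block found in way %d\n", predict_stack_way(&set, 0xABCD));
    return 0;
}
```
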
  • Patent number: 9727335
    Abstract: Embodiments of the present invention provide systems and methods for clearing specified blocks of main storage. In one embodiment, an EADM start subchannel is executed. Execution of the EADM start subchannel may include a SAP receiving an ADM request block, which specifies a main-storage-clearing operation command. The address and size of a block of main memory to be cleared by the SAP are specified in an MSB designated by the ADM request block.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: August 8, 2017
    Assignee: International Business Machines Corporation
    Inventors: Anthony F. Coneski, Beth A. Glendening, Dan F. Greiner, Peter G. Sutton, Scott B. Tuttle, Elpida Tzortzatos
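
A generic C sketch of the request flow in patent 9727335 above. The structures merely stand in for the ADM request block and MSB; the real z/Architecture layouts and field names are not reproduced here.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for an MSB: where to clear and how much. */
struct clear_spec {
    void  *addr;   /* start of the main-storage block to clear */
    size_t size;   /* size of the block in bytes               */
};

/* Illustrative stand-in for an ADM request block carrying a
 * main-storage-clearing operation command.                    */
#define CLEAR_MAIN_STORAGE 1

struct clear_request {
    int               command;
    struct clear_spec spec;
};

/* What the assist processor would do on receiving the request block:
 * clear the specified block of main storage.                         */
static void service_request(const struct clear_request *req)
{
    if (req->command == CLEAR_MAIN_STORAGE)
        memset(req->spec.addr, 0, req->spec.size);
}

int main(void)
{
    static char block[4096] = { 1, 2, 3 };
    struct clear_request req = { CLEAR_MAIN_STORAGE, { block, sizeof block } };

    service_request(&req);
    printf("first byte after clearing: %d\n", block[0]);
    return 0;
}
```
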
  • Patent number: 9727469
    Abstract: According to one aspect of the present disclosure, a method and technique for performance-driven cache line memory access is disclosed. The method includes: receiving, by a memory controller of a data processing system, a request for a cache line; dividing the request into a plurality of cache subline requests, wherein at least one of the cache subline requests comprises a high priority data request and at least one of the cache subline requests comprises a low priority data request; servicing the high priority data request; and delaying servicing of the low priority data request until a low priority condition has been satisfied.
    Type: Grant
    Filed: February 15, 2013
    Date of Patent: August 8, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert H. Bell, Jr., Men-Chow Chiang, Hong L. Hua, Mysore S. Srinivas
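
A C sketch of the split-and-prioritize scheme in patent 9727469 above: the cache-line request is divided into sub-line requests, the one holding the critical word is serviced immediately, and the rest wait for a low-priority condition. The sizes and the "idle bus" condition are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE    128u   /* assumed cache line size    */
#define SUBLINE_SIZE  32u   /* assumed cache subline size */
#define NUM_SUBLINES (LINE_SIZE / SUBLINE_SIZE)

struct subline_req {
    uint64_t addr;
    bool     high_priority;   /* contains the critical word? */
    bool     serviced;
};

/* Split a cache line request; the subline holding the critical word is the
 * high-priority request, the remainder are low-priority.                   */
static void split_request(uint64_t line_addr, uint64_t critical_addr,
                          struct subline_req reqs[NUM_SUBLINES])
{
    for (unsigned i = 0; i < NUM_SUBLINES; i++) {
        reqs[i].addr = line_addr + i * SUBLINE_SIZE;
        reqs[i].high_priority =
            critical_addr >= reqs[i].addr &&
            critical_addr <  reqs[i].addr + SUBLINE_SIZE;
        reqs[i].serviced = false;
    }
}

/* Service policy: high priority immediately, low priority only once the
 * (assumed) low-priority condition, e.g. an idle memory bus, is met.     */
static void service(struct subline_req reqs[NUM_SUBLINES], bool bus_idle)
{
    for (unsigned i = 0; i < NUM_SUBLINES; i++)
        if (reqs[i].high_priority || bus_idle)
            reqs[i].serviced = true;
}

int main(void)
{
    struct subline_req reqs[NUM_SUBLINES];
    split_request(0x1000, 0x1050, reqs);   /* critical word in subline 2 */

    service(reqs, false);                  /* bus busy: only subline 2   */
    for (unsigned i = 0; i < NUM_SUBLINES; i++)
        printf("subline %u: %s\n", i, reqs[i].serviced ? "serviced" : "deferred");
    return 0;
}
```
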
  • Patent number: 9727255
    Abstract: A storage apparatus includes a semiconductor storage device, and a storage controller coupled to the semiconductor storage device, and which stores data to a logical storage area provided by the semiconductor storage device. The semiconductor storage device includes one or more non-volatile semiconductor storage media, and a medium controller coupled to the semiconductor storage media. The medium controller compresses data stored in the logical storage area, and stores the compressed data in the semiconductor storage medium. The size of a logical address space of the logical storage area is larger than a total of the sizes of physical address spaces of the semiconductor storage media.
    Type: Grant
    Filed: July 19, 2013
    Date of Patent: August 8, 2017
    Assignee: Hitachi, Ltd.
    Inventor: Takaki Matsushita
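
For patent 9727255 above, a C sketch of a logical-to-physical map in which compressed pages let the logical address space exceed the physical capacity. Compression itself is represented only by a stored compressed length; the sizes are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define LOGICAL_PAGES 8        /* pages exposed through the logical space   */
#define PAGE_SIZE     4096u
#define PHYS_CAPACITY 16384u   /* only 4 pages' worth of media (assumed)    */

/* Per-logical-page mapping kept by the medium controller: where the
 * compressed form of the page lives and how many bytes it occupies.       */
struct l2p_entry {
    uint32_t phys_off;
    uint32_t comp_len;   /* 0 means the page has never been written */
};

static struct l2p_entry map[LOGICAL_PAGES];
static uint32_t phys_used;

/* Store a logical page whose compressed size is comp_len bytes. */
static int write_page(uint32_t lpage, uint32_t comp_len)
{
    if (phys_used + comp_len > PHYS_CAPACITY)
        return -1;                        /* physical media is full */
    map[lpage].phys_off = phys_used;
    map[lpage].comp_len = comp_len;
    phys_used += comp_len;
    return 0;
}

int main(void)
{
    /* The logical space (8 pages) is twice the physical capacity (4 pages),
     * which works out as long as pages compress well enough on average.    */
    for (uint32_t p = 0; p < LOGICAL_PAGES; p++)
        if (write_page(p, 1800) != 0)
            printf("page %u could not be stored\n", (unsigned)p);

    printf("logical space %u bytes, physical used %u of %u bytes\n",
           (unsigned)(LOGICAL_PAGES * PAGE_SIZE),
           (unsigned)phys_used, (unsigned)PHYS_CAPACITY);
    return 0;
}
```
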
  • Patent number: 9727464
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations of a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: August 8, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Garrett Michael Drapala, William J Lewis, Pak-kin Mak, Robert J Sonnelitter, III
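
A C sketch of the suppression decision in patent 9727464 above, under an assumed state encoding: a local request for a line held exclusively within the node skips inter-node coherency, and a global request for a line not cached at the node skips the local operations. This is a simplified reading of the abstract, not IBM's protocol.

```c
#include <stdbool.h>
#include <stdio.h>

enum line_state { INVALID, SHARED_LOCAL, EXCLUSIVE_IN_NODE, SHARED_GLOBAL };

/* Decide which coherency traffic a node can suppress, based on whether the
 * request is local or global and on the line's state at the node.  The
 * state names and the exact rules are an illustrative simplification.     */
static void coherency_actions(bool global_request, enum line_state state,
                              bool *do_local_ops, bool *do_global_ops)
{
    *do_local_ops  = true;
    *do_global_ops = true;

    if (!global_request && state == EXCLUSIVE_IN_NODE)
        *do_global_ops = false;   /* line owned here: skip inter-node ops */

    if (global_request && state == INVALID)
        *do_local_ops = false;    /* nothing cached here: skip local ops  */
}

int main(void)
{
    bool local_ops, global_ops;

    coherency_actions(false, EXCLUSIVE_IN_NODE, &local_ops, &global_ops);
    printf("local request, exclusive line: local=%d global=%d\n", local_ops, global_ops);

    coherency_actions(true, INVALID, &local_ops, &global_ops);
    printf("global request, invalid line:  local=%d global=%d\n", local_ops, global_ops);
    return 0;
}
```
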
  • Patent number: 9720833
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations of a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: August 1, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Garrett Michael Drapala, William J Lewis, Pak-kin Mak, Robert J Sonnelitter, III
  • Patent number: 9715455
    Abstract: An apparatus having an interface and a circuit is shown. The interface is configured to receive a request to access a memory. The request includes a hint. The circuit is configured to select a current one of a plurality of cache policies based on the hint. The current cache policy includes a number of current parameters ranging from some to all of a plurality of management parameters of a cache device. The circuit is further configured to cache data of the request based on the current cache policy.
    Type: Grant
    Filed: May 20, 2014
    Date of Patent: July 25, 2017
    Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
    Inventors: Saugata Das Purkayastha, Kishore Kaniyar Sampathkumar
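
For patent 9715455 above, a C sketch of selecting a cache policy from a hint carried in the request, where the policy overrides only some of the cache device's management parameters. The hint values and the two parameters shown are illustrative.

```c
#include <stdio.h>

/* Assumed hint values carried in the I/O request. */
enum io_hint { HINT_NONE, HINT_SEQUENTIAL, HINT_METADATA };

/* A cache policy sets only some of the cache device's management
 * parameters; unset fields (-1) keep their current values.         */
struct cache_policy {
    int write_back;     /* 1 = write-back, 0 = write-through, -1 = keep */
    int prefetch_depth; /* lines to prefetch, -1 = keep                 */
};

static struct cache_policy select_policy(enum io_hint hint)
{
    switch (hint) {
    case HINT_SEQUENTIAL: return (struct cache_policy){ -1, 8 };
    case HINT_METADATA:   return (struct cache_policy){  1, -1 };
    default:              return (struct cache_policy){ -1, -1 };
    }
}

/* Apply the selected policy on top of the current parameters, then cache
 * the request's data under the resulting (current) policy.               */
static void apply(struct cache_policy *current, struct cache_policy p)
{
    if (p.write_back     >= 0) current->write_back     = p.write_back;
    if (p.prefetch_depth >= 0) current->prefetch_depth = p.prefetch_depth;
}

int main(void)
{
    struct cache_policy current = { 0, 1 };      /* device defaults */
    apply(&current, select_policy(HINT_SEQUENTIAL));
    printf("write_back=%d prefetch_depth=%d\n",
           current.write_back, current.prefetch_depth);
    return 0;
}
```
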
  • Patent number: 9715448
    Abstract: A method for collection instance resizing. The method may include identifying at least one collection object within a collection framework of a virtual machine. The method may also include determining the at least one identified collection object satisfies at least one preconfigured criteria. The method may further include determining a garbage collection cycle count associated with the at least one identified collection object exceeds a preconfigured threshold. The method may also include determining an occupancy ratio associated with the at least one identified collection object is less than a preconfigured shrink threshold. The method may further include restructuring the at least one identified collection object based on the at least one identified collection object satisfying the at least one preconfigured criteria, the garbage collection cycle count exceeding the preconfigured threshold, and the occupancy ratio being less than the preconfigured shrink threshold.
    Type: Grant
    Filed: January 10, 2017
    Date of Patent: July 25, 2017
    Assignee: International Business Machines Corporation
    Inventors: Guru C. Ganta, Gireesh Punathil
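
A C sketch of the shrink decision in patent 9715448 above: a collection is restructured only when it meets the criteria, has exceeded a threshold garbage-collection cycle count, and its occupancy ratio is below the shrink threshold. The thresholds and the restructuring rule are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Thresholds are illustrative stand-ins for the preconfigured values. */
#define GC_CYCLE_THRESHOLD 3
#define SHRINK_THRESHOLD   0.25
#define MIN_CAPACITY       16   /* "preconfigured criteria": worth shrinking */

struct collection {
    int capacity;            /* allocated slots                  */
    int size;                /* slots in use                     */
    int gc_cycles_survived;  /* GC cycles seen at this occupancy */
};

static bool should_shrink(const struct collection *c)
{
    double occupancy = (double)c->size / (double)c->capacity;
    return c->capacity > MIN_CAPACITY                   /* criteria     */
        && c->gc_cycles_survived > GC_CYCLE_THRESHOLD   /* cycle count  */
        && occupancy < SHRINK_THRESHOLD;                /* occupancy    */
}

/* Restructure: shrink the backing storage down toward the live size. */
static void shrink(struct collection *c)
{
    c->capacity = c->size * 2 > MIN_CAPACITY ? c->size * 2 : MIN_CAPACITY;
}

int main(void)
{
    struct collection c = { .capacity = 1024, .size = 40, .gc_cycles_survived = 5 };

    if (should_shrink(&c))
        shrink(&c);
    printf("capacity after check: %d\n", c.capacity);
    return 0;
}
```
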
  • Patent number: 9710193
    Abstract: A method of detecting a rewritable non-volatile memory module is provided. The method includes setting an output voltage of a write protect pin of a memory interface to a first logic level, issuing a read status command, and receiving a first status message. The method further includes determining whether corresponding bit data in the first status message conforms to a status corresponding to the first logic level; and if so, identifying that the rewritable non-volatile memory module is connected to the memory interface.
    Type: Grant
    Filed: December 25, 2013
    Date of Patent: July 18, 2017
    Assignee: PHISON ELECTRONICS CORP.
    Inventor: Chien-Hua Chu
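
For patent 9710193 above, a C sketch of the detection handshake: drive the write protect pin to a logic level, issue a read status command, and check that the corresponding status bit conforms to that level. The register helpers and the bit position are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hardware-access helpers; a real driver would touch the
 * memory interface's registers here.                                  */
static int simulated_wp_level;

static void set_write_protect_pin(int level) { simulated_wp_level = level; }

static uint8_t read_status(void)
{
    /* A connected module mirrors the write protect pin level into status
     * bit 7 (bit position assumed for illustration).                     */
    return (uint8_t)(simulated_wp_level ? 0x80 : 0x00);
}

/* Detection: set the pin to a first logic level, read status, and check
 * that the corresponding bit matches that level.                        */
static bool module_connected(void)
{
    set_write_protect_pin(1);         /* first logic level             */
    uint8_t status = read_status();   /* issue the read status command */
    return (status & 0x80) != 0;      /* bit conforms to the level?    */
}

int main(void)
{
    printf("rewritable non-volatile memory module %s\n",
           module_connected() ? "detected" : "not detected");
    return 0;
}
```
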
  • Patent number: 9704574
    Abstract: Aspects of the disclosure provide an apparatus that includes a key generator, a first memory, a second memory, and a controller. The key generator is configured to generate a first search key, and one or more second search keys in response to a pattern. The first memory is configured to compare the first search key to a plurality of entries populated in the first memory, and determine an index of a matching entry to the first search key. The second memory is configured to respectively retrieve one or more exact match indexes of the one or more second search keys from one or more exact match pattern groups populated in the second memory. The controller is configured to select a search result for the pattern from among the index output from the first memory and the one or more exact match indexes output from the second memory.
    Type: Grant
    Filed: July 15, 2014
    Date of Patent: July 11, 2017
    Assignee: Marvell International Ltd.
    Inventor: Michael Shamis
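
A C sketch of the two-memory lookup in patent 9704574 above: one key is matched against the first memory's entries while the pattern is also checked for an exact match in the second memory, and the controller selects between the two results. Plain string tables stand in for the two memories.

```c
#include <stdio.h>
#include <string.h>

#define TABLE1_ENTRIES 4   /* entries populated in the first memory     */
#define TABLE2_ENTRIES 4   /* exact-match patterns in the second memory */

static const char *table1[TABLE1_ENTRIES] = { "10.0", "10.1", "192.168", "172.16" };
static const char *table2[TABLE2_ENTRIES] = { "10.1.2.3", "10.1.9.9", "172.16.0.1", "8.8.8.8" };

/* First memory: return the index of the entry matching the first search
 * key (a prefix of the pattern here), or -1.                             */
static int lookup_first(const char *key)
{
    for (int i = 0; i < TABLE1_ENTRIES; i++)
        if (strncmp(key, table1[i], strlen(table1[i])) == 0)
            return i;
    return -1;
}

/* Second memory: exact-match index of the second search key, or -1. */
static int lookup_exact(const char *key)
{
    for (int i = 0; i < TABLE2_ENTRIES; i++)
        if (strcmp(key, table2[i]) == 0)
            return i;
    return -1;
}

int main(void)
{
    const char *pattern = "10.1.2.3";

    int idx   = lookup_first(pattern);   /* first search key = the pattern */
    int exact = lookup_exact(pattern);   /* second search key, exact match */

    /* Controller: prefer the exact-match result when one exists. */
    printf("selected result: %s index %d\n",
           exact >= 0 ? "exact-match" : "first-memory",
           exact >= 0 ? exact : idx);
    return 0;
}
```
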
  • Patent number: 9697115
    Abstract: Embodiments herein relate to segmenting and pinning a first non-volatile memory to store cache information. In an embodiment, the first non-volatile memory is divided into a plurality of segments. Then, a first type of software of a plurality of types of software is pinned to a first segment of the plurality of segments. The first pinned segment stores the cache information associated with the first type of software.
    Type: Grant
    Filed: October 26, 2011
    Date of Patent: July 4, 2017
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Fred Charles Thomas, III, Walter A Gaspard, Chi W So
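
For patent 9697115 above, a C sketch of pinning software types to segments of a non-volatile memory so that each type's cache information lands in its own segment. The segment count, software types, and fallback rule are assumptions.

```c
#include <stdio.h>

#define NUM_SEGMENTS 4   /* assumed number of segments in the first NVM */

/* Types of software whose cache information is stored (illustrative). */
enum sw_type { SW_OS, SW_HYPERVISOR, SW_DATABASE, SW_OTHER };

/* Pin table: segment index assigned to each software type, -1 = unpinned. */
static int pinned_segment[4] = { -1, -1, -1, -1 };

static void pin(enum sw_type type, int segment)
{
    pinned_segment[type] = segment;
}

/* Route a cache write for a given software type to its pinned segment. */
static int segment_for(enum sw_type type)
{
    return pinned_segment[type] >= 0 ? pinned_segment[type]
                                     : NUM_SEGMENTS - 1;   /* shared fallback */
}

int main(void)
{
    pin(SW_OS, 0);           /* first type of software -> first segment */
    pin(SW_DATABASE, 1);

    printf("OS cache data       -> segment %d\n", segment_for(SW_OS));
    printf("database cache data -> segment %d\n", segment_for(SW_DATABASE));
    printf("other cache data    -> segment %d\n", segment_for(SW_OTHER));
    return 0;
}
```
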
  • Patent number: 9697124
    Abstract: A dynamic cache extension in a multi-cluster heterogeneous processor architecture is described. One embodiment is a system comprising a first processor cluster having a first level two (L2) cache and a second processor cluster having a second L2 cache. The system further comprises a controller in communication with the first and second L2 caches. The controller receives a processor workload input and a cache workload input from the first processor cluster. Based on the processor workload input and the cache workload input, the controller determines whether a current task associated with the first processor cluster is limited by a size threshold of the first L2 cache or a performance threshold of the first processor cluster. If the current task is limited by the size threshold of the first L2 cache, the controller uses at least a portion of the second L2 cache as an extension of the first L2 cache.
    Type: Grant
    Filed: January 13, 2015
    Date of Patent: July 4, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Hee Jun Park, Krishna Vsssr Vanka, Sravan Kumar Ambapuram, Shirish Kumar Agarwal, Ashvinkumar Namjoshi, Harshad Bhutada
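
A C sketch of the extension decision in patent 9697124 above: when the first cluster's task is limited by cache size (high miss rate) rather than by processor performance, part of the second cluster's L2 is lent to it. The thresholds and the "ways" granularity are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Inputs the controller receives from the first processor cluster. */
struct workload {
    double cpu_utilization;   /* processor workload input (0..1) */
    double l2_miss_rate;      /* cache workload input     (0..1) */
};

/* Assumed stand-ins for the size/performance thresholds. */
#define PERF_THRESHOLD 0.90
#define MISS_THRESHOLD 0.30

/* Returns how many ways of the second cluster's L2 to lend to the first
 * cluster: the task is cache-size limited when misses are high while the
 * processor itself still has headroom.                                   */
static int ways_to_extend(const struct workload *w, int second_l2_ways)
{
    bool size_limited = w->l2_miss_rate > MISS_THRESHOLD &&
                        w->cpu_utilization < PERF_THRESHOLD;
    return size_limited ? second_l2_ways / 2 : 0;
}

int main(void)
{
    struct workload w = { .cpu_utilization = 0.55, .l2_miss_rate = 0.42 };

    printf("ways borrowed from second cluster L2: %d\n", ways_to_extend(&w, 16));
    return 0;
}
```
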