Patents by Inventor Xiao-Yu Hu

Xiao-Yu Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8719494
    Abstract: For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes.
    Type: Grant
    Filed: March 6, 2013
    Date of Patent: May 6, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Roman A. Pletka
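    Illustrative sketch (not taken from the patent): the abstract above splits a promoted whole segment so that requested pages enter the MRU end of the higher-level cache's demotion queue while unrequested pages sit pinned at the LRU end until the lower-level write finishes. A minimal Python sketch of that policy follows; the names (CacheEntry, DemotionQueue, promote_whole_segment) and the synchronous write stand-in are assumptions for illustration only.

      from collections import deque
      from dataclasses import dataclass

      @dataclass
      class CacheEntry:
          page_id: int
          pinned: bool = False          # pinned entries may not be demoted

      class DemotionQueue:
          """Demotion queue of the higher cache level: left = LRU end, right = MRU end."""
          def __init__(self):
              self.q = deque()

          def insert_mru(self, entry):  # requested data goes to the MRU end
              self.q.append(entry)

          def insert_lru(self, entry):  # unrequested data goes to the LRU end
              self.q.appendleft(entry)

          def demote_one(self):
              """Demote the least recently used entry that is not pinned."""
              for i, e in enumerate(self.q):
                  if not e.pinned:
                      del self.q[i]
                      return e
              return None               # entries near the LRU end are still pinned

      def promote_whole_segment(segment_pages, requested_ids, higher_cache, write_to_lower):
          """Promote a whole segment, splitting requested and unrequested pages."""
          unrequested = []
          for pid in segment_pages:
              entry = CacheEntry(pid)
              if pid in requested_ids:
                  higher_cache.insert_mru(entry)
              else:
                  entry.pinned = True            # hold until the lower-level write completes
                  higher_cache.insert_lru(entry)
                  unrequested.append(entry)
          write_to_lower(segment_pages)          # stand-in for the write of the whole segment
          for e in unrequested:                  # write complete: unrequested pages may now age out
              e.pinned = False

    Pinning is modeled as a flag that demote_one() skips, so unrequested pages cannot leave the higher-level cache before the lower-level copy exists.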
  • Patent number: 8688897
    Abstract: Provided are a system, method, and computer program product for managing cache memory to cache data units in at least one storage device. A cache controller is coupled to at least two flash bricks, each comprising a flash memory. Metadata indicates a mapping of the data units to the flash bricks caching the data units, wherein the metadata is used to determine the flash bricks on which the cache controller caches received data units. The metadata is updated to indicate the flash brick having the flash memory on which data units are cached.
    Type: Grant
    Filed: April 5, 2011
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Evangelos S. Eleftheriou, Robert Haas, Xiao-Yu Hu, Roman A. Pletka
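    Illustrative sketch: the abstract above keeps metadata that maps each cached data unit to the flash brick holding it, and updates that metadata whenever a unit is cached. The Python below is a hedged approximation; the FlashBrickCacheController name, the hash-based placement policy, and the write(id, data)/read(id) brick interface are assumptions, not details from the patent.

      class FlashBrickCacheController:
          """Track which flash brick caches each data unit via a metadata map."""
          def __init__(self, bricks):
              self.bricks = bricks              # objects assumed to expose write(id, data) / read(id)
              self.metadata = {}                # data unit id -> index of the caching brick

          def _select_brick(self, unit_id):
              # Illustrative placement policy; the abstract does not fix one.
              return hash(unit_id) % len(self.bricks)

          def cache_unit(self, unit_id, data):
              brick = self.metadata.get(unit_id, self._select_brick(unit_id))
              self.bricks[brick].write(unit_id, data)
              self.metadata[unit_id] = brick    # record the brick now caching the unit

          def lookup(self, unit_id):
              brick = self.metadata.get(unit_id)
              return None if brick is None else self.bricks[brick].read(unit_id)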
  • Patent number: 8688914
    Abstract: For efficient track destage in secondary storage, temporal bits are employed together with sequential bits to control the timing of destaging a track in primary storage. When a track is destaged, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage, and the temporal bits are allowed to age on the secondary storage. (A rough sketch of this transfer-and-aging scheme follows this entry.)
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Karl A. Nielsen, Roman A. Pletka
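    Illustrative sketch: as noted in the abstract above, the temporal and sequential bits travel with a track when it is destaged from primary to secondary storage, and the temporal bits then age on the secondary storage. The Python below is a hedged rendering; the TrackState fields and the dictionary-based storage stand-ins are assumptions.

      from dataclasses import dataclass

      @dataclass
      class TrackState:
          track_id: int
          temporal_bits: int      # recency counter used to time the destage
          sequential: bool        # set when the track belongs to a sequential stream

      def destage_to_secondary(track, primary, secondary):
          """Transfer the temporal and sequential bits along with the track data."""
          data = primary.pop(track.track_id)
          secondary[track.track_id] = {"data": data,
                                       "temporal_bits": track.temporal_bits,
                                       "sequential": track.sequential}

      def age_secondary(secondary):
          """Let the temporal bits age (decay) on the secondary storage."""
          for record in secondary.values():
              if record["temporal_bits"] > 0:
                  record["temporal_bits"] -= 1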
  • Patent number: 8688913
    Abstract: For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes.
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Roman A. Pletka
  • Patent number: 8688900
    Abstract: Provided is a method for managing cache memory to cache data units in at least one storage device. A cache controller is coupled to at least two flash bricks, each comprising a flash memory. Metadata indicates a mapping of the data units to the flash bricks caching the data units, wherein the metadata is used to determine the flash bricks on which the cache controller caches received data units. The metadata is updated to indicate the flash brick having the flash memory on which data units are cached.
    Type: Grant
    Filed: February 4, 2013
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Evangelos S. Eleftheriou, Robert Haas, Xiao-Yu Hu, Roman A. Pletka
  • Patent number: 8681990
    Abstract: A system, method, apparatus, and computer-readable medium for managing renewal of a dynamic set of data items in a data item management system, where each data item has an associated renewal deadline. A renewal schedule allocates to each data item a renewal interval for renewal of the data item. On addition of a new data item, if a potential renewal interval having the duration required for renewal of that item and ending at its renewal deadline does not overlap a time period in the schedule during which the system is busy, the renewal schedule is automatically updated by allocating the potential renewal interval to the new data item. If the potential renewal interval does overlap a busy period, the renewal schedule is automatically updated by selecting an earlier renewal interval for at least one data item in the set. (A simplified scheduling sketch follows this entry.)
    Type: Grant
    Filed: March 26, 2009
    Date of Patent: March 25, 2014
    Assignee: International Business Machines Corporation
    Inventors: Christian Cachin, Patrick Droz, Robert Haas, Xiao-Yu Hu, Ilias Iliadis, René A. Pawlitzek
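    Illustrative sketch: per the abstract above (see the note there), a new item's renewal is anchored just before its deadline unless that interval collides with a busy period, in which case at least one existing item is moved to an earlier interval. The Python below is a deliberately simplified, non-authoritative rendering; the interval handling is crude and cascading conflicts among shifted items are ignored.

      def overlaps(a, b):
          """True if half-open intervals a = (s1, e1) and b = (s2, e2) overlap."""
          return a[0] < b[1] and b[0] < a[1]

      class RenewalSchedule:
          def __init__(self):
              self.slots = []      # list of (start, end, item_id)

          def add_item(self, item_id, duration, deadline):
              candidate = (deadline - duration, deadline)
              conflicts = [s for s in self.slots if overlaps(candidate, (s[0], s[1]))]
              for start, end, other in conflicts:
                  # Busy period overlaps: select an earlier interval for the conflicting item.
                  shift = end - candidate[0]
                  self.slots.remove((start, end, other))
                  self.slots.append((start - shift, end - shift, other))
              # (Cascading conflicts among shifted items are ignored in this sketch.)
              self.slots.append((candidate[0], candidate[1], item_id))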
  • Patent number: 8667219
    Abstract: A method for optimizing locations of physical data accessed by one or more client applications interacting with a storage system, with the storage system comprising at least two redundancy groups having physical memory spaces and data bands. Each of the data bands corresponds to physical data stored on several of the physical memory spaces. A virtualized logical address space includes client data addresses utilizable by the one or more client applications. A storage controller is configured to map the client data addresses onto the data bands, such that a mapping is obtained, wherein the one or more client applications can access physical data corresponding to the data bands.
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: March 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Evangelos S. Eleftheriou, Robert Galbraith, Adrian C. Gerhard, Robert Haas, Xiao-Yu Hu, Murali N. Iyer, Ioannis Koltsidas, Timothy J. Larson, Steven P. Norgaard, Roman Pletka
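    Illustrative sketch: the abstract above virtualizes client addresses by mapping them onto data bands that live on redundancy groups, so that physical data can be relocated by changing only the band mapping. The Python below is a hedged approximation; the StorageController name, the round-robin initial placement, and the (group, band, offset) return shape are assumptions.

      class StorageController:
          """Map client (logical) addresses onto data bands placed on redundancy groups."""
          def __init__(self, band_size, num_groups):
              self.band_size = band_size
              self.num_groups = num_groups
              self.band_map = {}                 # band index -> (group index, band within group)

          def map_address(self, client_addr):
              band, offset = divmod(client_addr, self.band_size)
              if band not in self.band_map:
                  # Illustrative initial placement: round-robin bands over the groups.
                  self.band_map[band] = (band % self.num_groups, band // self.num_groups)
              group, group_band = self.band_map[band]
              return group, group_band, offset

          def migrate_band(self, band, new_group, new_group_band):
              """Optimizing a band's physical location only requires remapping it."""
              self.band_map[band] = (new_group, new_group_band)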
  • Patent number: 8661196
    Abstract: A method for optimizing locations of physical data accessed by one or more client applications interacting with a storage system, with the storage system comprising at least two redundancy groups having physical memory spaces and data bands. Each of the data bands corresponds to physical data stored on several of the physical memory spaces. A virtualized logical address space includes client data addresses utilizable by the one or more client applications. A storage controller is configured to map the client data addresses onto the data bands, such that a mapping is obtained, wherein the one or more client applications can access physical data corresponding to the data bands.
    Type: Grant
    Filed: August 15, 2011
    Date of Patent: February 25, 2014
    Assignee: International Business Machines Corporation
    Inventors: Evangelos S. Eleftheriou, Robert Galbraith, Adrian C. Gerhard, Robert Haas, Xiao-Yu Hu, Murali N. Iyer, Ioannis Koltsidas, Timothy J. Larson, Steven P. Norgaard, Roman Pletka
  • Publication number: 20140032817
    Abstract: A method for garbage collection in a solid state drive (SSD) includes determining whether the SSD is idle by a garbage collection module of the SSD; based on determining that the SSD is idle, determining a victim block from a plurality of memory blocks of the SSD; determining a number of valid pages in the victim block; comparing the determined number of valid pages in the victim block to a valid page threshold; and based on the number of valid pages in the victim block being less than the valid page threshold, issuing a garbage collection request for the victim block.
    Type: Application
    Filed: July 27, 2012
    Publication date: January 30, 2014
    Applicant: International Business Machines Corporation
    Inventors: Werner Bux, Robert Haas, Xiao-Yu Hu, Ilias Iliadis
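    Illustrative sketch: the abstract above checks for idleness, picks a victim block, and issues a garbage-collection request only when the victim's valid-page count is below a threshold. A minimal Python version follows; choosing the block with the fewest valid pages is an illustrative victim policy, not something the abstract prescribes.

      from dataclasses import dataclass

      @dataclass
      class Block:
          block_id: int
          valid_pages: int

      def select_gc_victim(blocks, valid_page_threshold, drive_idle):
          """Return a block to garbage-collect, or None.

          Runs only when the drive is idle, picks the block with the fewest
          valid pages as the victim (illustrative choice), and requests
          collection only if that count is below the valid-page threshold."""
          if not drive_idle or not blocks:
              return None
          victim = min(blocks, key=lambda b: b.valid_pages)
          return victim if victim.valid_pages < valid_page_threshold else None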
  • Publication number: 20130346538
    Abstract: A method for managing cache memories includes providing a computerized system including a shared data storage system (CS) configured to interact with several local servers that serve applications using respective cache memories, and access data stored in the shared data storage system; providing cache data information from each of the local servers to the shared data storage system, the cache data information comprising cache hit data representative of cache hits of each of the local servers, and cache miss data representative of cache misses of each of the local servers; aggregating, at the shared data storage system, at least part of the cache hit and miss data received and providing the aggregated cache data information to one or more of the local servers; and at the local servers, updating respective one or more cache memories used to serve respective one or more applications based on the aggregated cache data information.
    Type: Application
    Filed: June 18, 2013
    Publication date: December 26, 2013
    Inventors: Stephen L. Blinick, Lawrence Y. Chiu, Evangelos S. Eleftheriou, Robert Haas, Yu-Cheng Hsu, Xiao-Yu Hu, Ioannis Koltsidas, Paul H. Muench, Roman Pletka
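    Illustrative sketch: the abstract above has each local server report cache hit/miss data to the shared data storage system, which aggregates it and hands it back so servers can retune their caches. The Python below is a hedged approximation; the per-extent granularity, the SharedStorageStats name, and the keep-the-hottest retuning policy are assumptions.

      from collections import defaultdict

      class SharedStorageStats:
          """Aggregate per-extent cache hit/miss counts reported by local servers."""
          def __init__(self):
              self.hits = defaultdict(int)
              self.misses = defaultdict(int)

          def report(self, server_stats):
              # server_stats: {extent_id: (hits, misses)} from one local server
              for extent, (h, m) in server_stats.items():
                  self.hits[extent] += h
                  self.misses[extent] += m

          def aggregated(self):
              extents = set(self.hits) | set(self.misses)
              return {e: (self.hits.get(e, 0), self.misses.get(e, 0)) for e in extents}

      def retune_local_cache(aggregated, capacity):
          """Pick the extents a local server should keep cached
          (illustrative policy: keep the extents with the most system-wide hits)."""
          hottest = sorted(aggregated, key=lambda e: aggregated[e][0], reverse=True)
          return set(hottest[:capacity])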
  • Publication number: 20130232295
    Abstract: Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for a page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache. (A brief sketch of this promotion policy follows this entry.)
    Type: Application
    Filed: May 8, 2012
    Publication date: September 5, 2013
    Applicant: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
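    Illustrative sketch: per the abstract above (and the note there), a page promoted from the second cache is placed at the LRU end of the first cache's LRU list when its reference count is positive, and at the MRU end otherwise. A compact Python rendering using an OrderedDict as the LRU list follows; that data-structure choice is an assumption.

      from collections import OrderedDict

      def promote_page(page_id, second_cache_refcount, first_cache_lru):
          """Insert a page promoted from the second cache into the first cache's
          LRU list; first_cache_lru is an OrderedDict whose first key is the LRU
          end and whose last key is the MRU end."""
          first_cache_lru[page_id] = None
          if second_cache_refcount.get(page_id, 0) > 0:
              first_cache_lru.move_to_end(page_id, last=False)   # LRU end: ages out sooner
          else:
              first_cache_lru.move_to_end(page_id, last=True)    # MRU end: kept longer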
  • Publication number: 20130232294
    Abstract: Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for a page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache.
    Type: Application
    Filed: March 5, 2012
    Publication date: September 5, 2013
    Applicant: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
  • Patent number: 8495281
    Abstract: A method for intra-block wear leveling within solid-state memory subjected to wear, having a plurality of memory cells includes the step of writing to at least certain ones of the plurality of memory cells, in a non-uniform manner, such as to balance the wear of the at least certain ones of the plurality of memory cells within the solid-state memory, at intra-block level. For example, if a behavior of at least some of the plurality of memory cells is not characterized, then the method may comprise characterizing a behavior of at least some of the plurality of memory cells and writing to at least certain ones of the plurality of memory cells, based on the characterized behavior, and in a non-uniform manner.
    Type: Grant
    Filed: December 4, 2009
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ilias Iliadis, Theodoros A. Antonakopoulos, Roman Pletka, Xiao-Yu Hu, Roy D. Cideciyan
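    Illustrative sketch: the abstract above writes non-uniformly within a block, guided by a characterization of cell behavior, to balance wear at the intra-block level. The Python below is a hedged approximation in which the characterization is reduced to a per-cell wear estimate; the function names and the prefer-least-worn policy are assumptions.

      def choose_cells_for_write(wear_estimates, num_needed):
          """Pick cells to write, preferring those with the lowest characterized wear.

          wear_estimates: dict of cell index -> estimated wear for one block."""
          least_worn = sorted(wear_estimates, key=wear_estimates.get)
          return least_worn[:num_needed]

      def record_write(wear_estimates, written_cells, wear_increment=1.0):
          """Update the characterization after writing the chosen cells."""
          for index in written_cells:
              wear_estimates[index] = wear_estimates.get(index, 0.0) + wear_increment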
  • Patent number: 8495471
    Abstract: Systems and methods are provided that confront the problem of failed storage integrated circuits (ICs) in a solid state drive (SSD) by using a fault-tolerant architecture along with one error correction code (ECC) mechanism for random/burst error corrections and an L-fold interleaving mechanism. The systems and methods described herein keep the SSD operational when one or more integrated circuits fail and allow the recovery of previously stored data from failed integrated circuits and allow random/burst errors to be corrected in other operational integrated circuits. These systems and methods replace the failed integrated circuits with fully functional/operational integrated circuits treated herein as spare integrated circuits. Furthermore, these systems and methods improve I/O performance in terms of maximum achievable read/write data rate.
    Type: Grant
    Filed: November 30, 2009
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Theodore A. Antonakopoulos, Roy D. Cideciyan, Evangelos S. Eleftheriou, Robert Haas, Xiao-Yu Hu, Ilias Iliadis
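    Illustrative sketch: the abstract above combines an ECC mechanism with L-fold interleaving and spare integrated circuits so the SSD stays operational when ICs fail. The Python below only sketches the interleaving and spare-substitution bookkeeping; the actual ECC encoding and decoding is assumed to exist elsewhere, and the function names are illustrative.

      def interleave(codeword_symbols, num_ics):
          """Spread consecutive codeword symbols over the ICs so that a single
          failed IC erases only roughly 1/L of each codeword, which the ECC can
          then recover."""
          lanes = [[] for _ in range(num_ics)]
          for i, symbol in enumerate(codeword_symbols):
              lanes[i % num_ics].append(symbol)
          return lanes

      def replace_failed_ic(ic_map, failed_ic, spare_ic):
          """Substitute a spare IC for a failed one in the placement map."""
          return [spare_ic if ic == failed_ic else ic for ic in ic_map]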
  • Publication number: 20130166827
    Abstract: The invention is directed to a method for wear-leveling cells, pages, sub-pages, or blocks of a memory such as a flash memory, the method comprising: receiving (S10) a chunk of data to be written to a cell, page, sub-page, or block of the memory; counting (S40), in the received chunk of data, the number of times a given type of binary data ('0' or '1') is to be written; and distributing (S50) the writing of the received chunk of data amongst cells, pages, sub-pages, or blocks of the memory so as to wear-level the memory with respect to the number of the given type of binary data ('0' or '1') counted in the chunk of data to be written. (A short bit-counting sketch follows this entry.)
    Type: Application
    Filed: June 6, 2011
    Publication date: June 27, 2013
    Applicant: International Business Machines Corporation
    Inventors: Roy D. Cideciyan, Evangelos S. Eleftheriou, Robert Haas, Xiao-Yu Hu, Ilias Iliadis, Roman Pletka
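    Illustrative sketch: per the abstract above (see the note there), the chunk's count of a given bit value drives where it is written so that pages wear evenly with respect to that bit value. A small Python rendering follows; tracking a running per-page count of programmed '1' bits is an assumption about how the distribution step could be realized.

      def count_ones(chunk: bytes) -> int:
          """Count how many '1' bits the chunk would program."""
          return sum(bin(byte).count("1") for byte in chunk)

      def pick_target_page(chunk, ones_written_per_page):
          """Send the chunk to the page that has so far absorbed the fewest '1'
          bits, balancing wear with respect to the counted bit values.

          ones_written_per_page: dict of page index -> running count of '1' bits."""
          target = min(ones_written_per_page, key=ones_written_per_page.get)
          ones_written_per_page[target] += count_ones(chunk)
          return target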
  • Patent number: 8458568
    Abstract: A method for writing data to a memory array includes receiving a write request including data from a processor, compressing the data, assigning a page strength to the compressed data, the page strength defined by a compression ratio used to compress the data, generating a parity data block associated with the compressed data, and saving the compressed data and the parity data block in a page of the memory array, the page of the memory array having a page strength corresponding to the assigned page strength of the compressed data.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: June 4, 2013
    Assignee: International Business Machines Corporation
    Inventors: Roy D. Cideciyan, Xiao-Yu Hu
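    Illustrative sketch: the abstract above compresses incoming data, derives a page strength from the compression ratio, generates a parity block, and stores both on a page of matching strength. The Python below is a hedged approximation; zlib compression, the XOR parity block, the integer strength bucketing, and the free_pages_by_strength structure are all assumptions.

      import zlib

      def xor_parity(payload: bytes, stride: int = 16) -> bytes:
          """Toy parity block: byte-wise XOR over stride-sized sub-blocks."""
          parity = bytearray(stride)
          for i, byte in enumerate(payload):
              parity[i % stride] ^= byte
          return bytes(parity)

      def write_compressed(data: bytes, free_pages_by_strength):
          """Compress the data, derive a page strength from the compression ratio,
          attach a parity block, and place the result on a page of matching strength.

          free_pages_by_strength: dict of strength level -> list of free page objects
          assumed to expose write(payload)."""
          compressed = zlib.compress(data)
          ratio = len(data) / max(len(compressed), 1)
          strength = max(1, min(int(ratio), max(free_pages_by_strength)))   # coarse bucketing
          page = free_pages_by_strength[strength].pop()
          page.write(compressed + xor_parity(compressed))
          return page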
  • Publication number: 20130124794
    Abstract: The present idea provides high read and write performance from/to a solid state memory device. The main memory of the controller is not blocked by a complete address mapping table covering the entire memory device. Instead, such a table is stored in the memory device itself, and only selected portions of the address mapping information are buffered in main memory, in a read cache and a write cache. Separating the read cache from the write cache allows an address mapping entry to be evicted from the read cache without updating the related flash memory page that stores the entry in the flash memory device. By this design, the read cache may advantageously be stored in DRAM, even without power-down protection, while the write cache may preferably be implemented in nonvolatile or other fail-safe memory. This leads to a reduction of the overall provisioning of nonvolatile or fail-safe memory and to improved scalability and performance. (A small mapping-cache sketch follows this entry.)
    Type: Application
    Filed: July 25, 2011
    Publication date: May 16, 2013
    Applicant: International Business Machines Corporation
    Inventors: Werner Bux, Robert Haas, Xiao-Yu Hu, Roman Pletka
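    Illustrative sketch: per the abstract above (and the note there), mapping entries are buffered in a read cache that can drop entries freely and a write cache whose dirty entries must be flushed to the on-flash table. The Python below approximates that split; the MappingCache name, the OrderedDict LRU read cache, and the dict-like flash_table interface are assumptions.

      from collections import OrderedDict

      class MappingCache:
          """Read cache: clean entries, evictable at will (e.g. kept in DRAM).
          Write cache: dirty entries, conceptually in fail-safe memory, flushed
          to the on-flash mapping table."""
          def __init__(self, read_capacity, flash_table):
              self.read_cache = OrderedDict()   # LBA -> PBA, clean
              self.write_cache = {}             # LBA -> PBA, dirty
              self.read_capacity = read_capacity
              self.flash_table = flash_table    # authoritative mapping on the device

          def lookup(self, lba):
              if lba in self.write_cache:
                  return self.write_cache[lba]
              if lba in self.read_cache:
                  self.read_cache.move_to_end(lba)
                  return self.read_cache[lba]
              pba = self.flash_table[lba]       # fetch the mapping entry from flash
              self.read_cache[lba] = pba
              if len(self.read_cache) > self.read_capacity:
                  self.read_cache.popitem(last=False)   # evict; no flash update needed
              return pba

          def update(self, lba, pba):
              self.write_cache[lba] = pba       # buffered until the next flush
              self.read_cache.pop(lba, None)    # keep the read cache from serving stale data

          def flush(self):
              self.flash_table.update(self.write_cache)
              self.write_cache.clear()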
  • Publication number: 20130111131
    Abstract: The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
    Type: Application
    Filed: April 26, 2012
    Publication date: May 2, 2013
    Applicant: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
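    Illustrative sketch: the abstract above admits candidate data into the secondary cache only when its heat metric reaches an adaptive threshold, which moves down when recently inserted data earns more hits than recently evicted data and up in the opposite case. A minimal Python rendering follows; the unit step size and the hit-count reference metrics are assumptions.

      def adjust_threshold(threshold, hits_recently_inserted, hits_recently_evicted):
          """Compare the hit-based reference metric of data most recently inserted
          into the secondary cache against that of data most recently evicted."""
          if hits_recently_inserted > hits_recently_evicted:
              return max(0, threshold - 1)    # insertions are paying off: admit more
          if hits_recently_inserted < hits_recently_evicted:
              return threshold + 1            # evicted data was hotter: admit less
          return threshold

      def admit(candidate_heat, threshold):
          """Admit candidate data only if its heat metric reaches the threshold."""
          return candidate_heat >= threshold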
  • Publication number: 20130111106
    Abstract: Exemplary method, system, and computer program product embodiments for efficient track destage in secondary storage are provided. In one embodiment, by way of example only, temporal bits are employed together with sequential bits to control the timing of destaging a track in primary storage; when a track is destaged, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage, and the temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages.
    Type: Application
    Filed: November 1, 2011
    Publication date: May 2, 2013
    Applicant: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Karl A. Nielsen, Roman A. Pletka
  • Publication number: 20130111134
    Abstract: Various embodiments for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor are provided. In one such embodiment, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. Additional system and computer program product embodiments are disclosed and provide related advantages.
    Type: Application
    Filed: November 1, 2011
    Publication date: May 2, 2013
    Applicant: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Roman A. Pletka