Patents Examined by Hewy Li
  • Patent number: 9383925
    Abstract: A page compression strategy classifies uncompressed pages selected for compression. Similarly classified pages are compressed and bound into a single logical page. For logical pages having pages with more than one classification, a weighting factor is determined for the logical page.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: July 5, 2016
    Assignee: International Business Machines Corporation
    Inventors: Suma M. B. Bhat, Chetan L. Gaonkar, Vamshi K. Thatikonda
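A minimal Python sketch of the classify-then-bind idea in 9383925's abstract: pages are classified (here by an invented compressibility estimate), similarly classified pages are bound into logical pages, and a weighting factor is computed for logical pages that end up holding more than one classification. The classifier thresholds, the binding size, and the dominant-class weighting are assumptions, not the patented method.

```python
from collections import Counter

def classify(page: bytes) -> str:
    """Toy classifier: the share of distinct byte values stands in for compressibility."""
    distinct_ratio = len(set(page)) / 256
    if distinct_ratio < 0.25:
        return "high"
    return "medium" if distinct_ratio < 0.75 else "low"

def bind_into_logical_pages(pages, pages_per_logical=4):
    """Group similarly classified pages, then bind each group into logical pages."""
    groups = {}
    for page in pages:
        groups.setdefault(classify(page), []).append(page)
    logical_pages, leftovers = [], []
    for cls, members in groups.items():
        full = len(members) // pages_per_logical * pages_per_logical
        for i in range(0, full, pages_per_logical):
            logical_pages.append({"pages": members[i:i + pages_per_logical],
                                  "classes": Counter({cls: pages_per_logical})})
        leftovers.extend((cls, page) for page in members[full:])
    # Leftover pages from different classes are bound together, producing the
    # mixed logical pages for which a weighting factor is then computed.
    for i in range(0, len(leftovers), pages_per_logical):
        chunk = leftovers[i:i + pages_per_logical]
        logical_pages.append({"pages": [page for _, page in chunk],
                              "classes": Counter(cls for cls, _ in chunk)})
    return logical_pages

def weighting_factor(logical_page):
    """Weight a mixed logical page by the share of its dominant classification."""
    classes = logical_page["classes"]
    if len(classes) <= 1:
        return 1.0
    return max(classes.values()) / sum(classes.values())

pages = [bytes([i % 4] * 4096) for i in range(6)] + [bytes(range(256)) * 16]
for lp in bind_into_logical_pages(pages):
    print(dict(lp["classes"]), weighting_factor(lp))
```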
  • Patent number: 9361408
    Abstract: According to one embodiment, a memory system including a key-value store containing key-value data as a pair of a key and a value corresponding to the key, includes an interface, a memory block, an address acquisition circuit and a controller. The interface receives a data write/read request or a request based on the key-value store. The memory block has a data area for storing data and a metadata table containing the key-value data. The address acquisition circuit acquires an address in response to input of the key. The controller executes the data write/read request for the memory block, and outputs the acquired address to the memory block and executes the request based on the key-value store. The controller outputs the value corresponding to the key via the interface.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: June 7, 2016
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takao Marukame, Atsuhiro Kinoshita, Kosuke Tatsumura
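A simplified model of the memory system in 9361408's abstract: one memory block holds a data area plus a metadata table of key-value data, address acquisition is reduced to a sequential allocator, and a controller serves both plain address-based read/write requests and key-value requests. All names and the allocation scheme are illustrative assumptions.

```python
class MemoryBlock:
    def __init__(self, size=1024):
        self.data = bytearray(size)   # data area
        self.metadata = {}            # metadata table: key -> (address, length)

class Controller:
    def __init__(self, block: MemoryBlock):
        self.block = block
        self.next_free = 0

    def write(self, address: int, payload: bytes):
        self.block.data[address:address + len(payload)] = payload

    def read(self, address: int, length: int) -> bytes:
        return bytes(self.block.data[address:address + length])

    def put(self, key: str, value: bytes):
        # "Address acquisition": allocate sequentially and record the mapping
        # in the metadata table so the key can later yield an address.
        address = self.next_free
        self.next_free += len(value)
        self.write(address, value)
        self.block.metadata[key] = (address, len(value))

    def get(self, key: str) -> bytes:
        address, length = self.block.metadata[key]
        return self.read(address, length)

ctrl = Controller(MemoryBlock())
ctrl.put("sensor/42", b"\x01\x02\x03")     # key-value request
assert ctrl.get("sensor/42") == b"\x01\x02\x03"
assert ctrl.read(0, 3) == b"\x01\x02\x03"  # ordinary addressed read of the same bytes
```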
  • Patent number: 9330027
    Abstract: A system employs a white list of authorized transactions to control access to system registers. In an embodiment, the white list is loaded into filter registers during system boot. Routing logic monitors a logical interconnect fabric of the system for register access requests. The routing logic parses source and destination information from a request to index the white list. If the white list includes an entry corresponding to the processing entity indicated in the source information and the register indicated in the destination information, the routing logic will permit the requested access.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 3, 2016
    Assignee: Intel Corporation
    Inventors: Julien Carreno, Derek Harnett, Gordon J. Walsh
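A hedged sketch of the white-list check in 9330027's abstract, with invented names: the white list is a set of (source, destination register) pairs loaded once at "boot" into a stand-in for the filter registers, and the routing logic permits a register access only if the request's source/destination pair appears in the list.

```python
class RoutingLogic:
    def __init__(self):
        self.whitelist = set()   # stand-in for the filter registers

    def load_whitelist(self, entries):
        # Performed once during system boot in the abstract's scheme.
        self.whitelist = set(entries)

    def permit(self, source_id: int, register_addr: int) -> bool:
        # Permit only if (processing entity, register) is an authorized transaction.
        return (source_id, register_addr) in self.whitelist

router = RoutingLogic()
router.load_whitelist([(0x1, 0xF000), (0x2, 0xF004)])
assert router.permit(0x1, 0xF000)       # authorized transaction
assert not router.permit(0x3, 0xF000)   # unknown source is rejected
```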
  • Patent number: 9317426
    Abstract: A method provides write once read many (WORM) semantics for at least some addresses of a storage drive that is otherwise manufactured for multiple writes to individual addresses. In at least one embodiment, a WORM area is defined by a START_LBA and an END_LBA, and the method uses a HWM_LBA to determine whether an LBA in the WORM area has been written to previously and to prevent previously written LBAs in the WORM area from being rewritten. In at least one embodiment in which there are multiple WORM areas, each WORM area has its own respective START_LBA, END_LBA and HWM_LBA.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: April 19, 2016
    Assignee: GreenTec-USA, Inc.
    Inventors: Stephen E. Petruzzo, Richard E. Detore
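A minimal sketch of the WORM-area bookkeeping named in 9317426's abstract (START_LBA, END_LBA, HWM_LBA), assuming writes inside an area are issued in ascending LBA order so a single high-water mark suffices; anything beyond the abstract is a guess.

```python
class WormArea:
    def __init__(self, start_lba: int, end_lba: int):
        self.start_lba = start_lba
        self.end_lba = end_lba
        self.hwm_lba = start_lba - 1   # high-water mark: highest LBA written so far

    def may_write(self, lba: int) -> bool:
        inside = self.start_lba <= lba <= self.end_lba
        if not inside:
            return True                # outside the WORM area: normal rewritable LBA
        return lba > self.hwm_lba      # inside: only never-written LBAs are allowed

    def record_write(self, lba: int):
        if self.start_lba <= lba <= self.end_lba:
            self.hwm_lba = max(self.hwm_lba, lba)

area = WormArea(start_lba=100, end_lba=199)
assert area.may_write(150)
area.record_write(150)
assert not area.may_write(150)   # rewriting a WORM LBA is refused
assert not area.may_write(120)   # at or below the high-water mark is also refused
assert area.may_write(151)       # the next unwritten LBA is still allowed
```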
  • Patent number: 9286101
    Abstract: A processing device executing an operating system, such as a guest operating system, generates a bitmap in which bits represent the statuses of memory pages available to the operating system. The processing device frees a memory page. The processing device then sets a bit in the bitmap to indicate that the memory page is unused after the memory page is freed.
    Type: Grant
    Filed: July 28, 2011
    Date of Patent: March 15, 2016
    Assignee: Red Hat, Inc.
    Inventor: Henri Han van Riel
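An illustrative sketch of the free-page bitmap in 9286101's abstract: each bit stands for one memory page available to the operating system, and freeing a page sets its bit to mark the page unused. The byte layout and method names are assumptions.

```python
class FreePageBitmap:
    def __init__(self, num_pages: int):
        self.bits = bytearray((num_pages + 7) // 8)   # one bit per page

    def mark_unused(self, page: int):
        # Called after the operating system frees the page.
        self.bits[page // 8] |= 1 << (page % 8)

    def mark_used(self, page: int):
        self.bits[page // 8] &= 0xFF ^ (1 << (page % 8))

    def is_unused(self, page: int) -> bool:
        return bool(self.bits[page // 8] & (1 << (page % 8)))

bitmap = FreePageBitmap(num_pages=1024)
bitmap.mark_unused(7)
assert bitmap.is_unused(7) and not bitmap.is_unused(8)
```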
  • Patent number: 9280486
    Abstract: A host selects a memory page that has been allocated to a guest for eviction. The host may be a host machine that hosts a plurality of virtual machines. The host accesses a bitmap maintained by the guest to determine a state of a bit in the bitmap associated with the memory page. The host determines whether content of the memory page is to be preserved based on the state of the bit. In response to determining that the content of the memory page is not to be preserved, the host discards the content of the memory page.
    Type: Grant
    Filed: July 28, 2011
    Date of Patent: March 8, 2016
    Assignee: Red Hat, Inc.
    Inventor: Henri Han van Riel
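A companion sketch for 9280486: the host picks a guest page for eviction, consults the guest-maintained bitmap (reduced here to a plain set of unused page numbers), and discards the content instead of preserving it when the page is marked unused. The dictionaries standing in for guest memory and the backing store are invented.

```python
def evict_page(page, unused_pages, page_contents, backing_store):
    """Discard or preserve a page's content when the host evicts it."""
    if page in unused_pages:
        page_contents.pop(page, None)                  # guest marked it unused: just drop it
    else:
        backing_store[page] = page_contents.pop(page)  # content must be preserved first

unused_pages = {7}
page_contents = {7: b"stale", 8: b"live"}
backing_store = {}
evict_page(7, unused_pages, page_contents, backing_store)
evict_page(8, unused_pages, page_contents, backing_store)
assert 7 not in backing_store and backing_store[8] == b"live"
```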
  • Patent number: 9280467
    Abstract: A method and system dynamically determine how much of the total IO bandwidth may be used for flushing dirty metadata from the cache to main memory without increasing host memory access latency. The number of IO processes is increased by adding a fixed number of IO processes at short intervals while host latency is measured. If the host latency is acceptable, the number of IO processes is increased again by the same number, and this repeats until the host latency reaches a limit. When the limit has been reached, the number of IO processes is reduced by a multiplicative factor, and the additive process repeats from the reduced number of IO processes. The number of IO processes used for flushing dirty metadata may therefore resemble a series of saw teeth, rising gradually and declining rapidly in response to the number of host IO processes needed.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 8, 2016
    Assignee: EMC Corporation
    Inventors: Kumar Kanteti, William Davenport, Philippe Armangau
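A sketch of the additive-increase/multiplicative-decrease behavior 9280467's abstract describes for flush IO processes; the step size, decrease factor, latency limit, and the latency samples below are placeholders.

```python
def adjust_flush_processes(current, host_latency_ms, latency_limit_ms,
                           additive_step=2, decrease_factor=2, minimum=1):
    if host_latency_ms < latency_limit_ms:
        return current + additive_step                     # latency acceptable: add flushers
    return max(minimum, current // decrease_factor)        # limit reached: back off sharply

# Hypothetical latency measurements produce the saw-tooth pattern from the abstract.
procs = 1
for latency in [2, 3, 4, 9, 3, 4, 10]:                     # ms
    procs = adjust_flush_processes(procs, latency, latency_limit_ms=8)
    print(procs)                                           # 3, 5, 7, 3, 5, 7, 3
```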
  • Patent number: 9268623
    Abstract: Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving, from a remote address cache associated with a second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address.
    Type: Grant
    Filed: December 18, 2012
    Date of Patent: February 23, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, James E. Carey, Philip J. Sanders, Brian E. Smith
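A hedged sketch of the compare-and-swap handling in 9268623's abstract: a header carries an SVD key, an expected ("first") SVD address, and an updated address; the optimizer compares the expected address with the one held in the remote address cache and reports whether they match. Performing the swap on a successful compare is an assumption suggested by the operation's name, and all field names are invented.

```python
def handle_compare_and_swap(header, remote_address_cache):
    key = header["svd_key"]
    expected = header["first_svd_address"]
    updated = header["updated_first_svd_address"]
    matches = remote_address_cache.get(key) == expected
    if matches:
        remote_address_cache[key] = updated   # swap only on a successful compare
    return matches                            # result transmitted back to the requester

cache = {"var_a": 0x1000}
assert handle_compare_and_swap(
    {"svd_key": "var_a", "first_svd_address": 0x1000,
     "updated_first_svd_address": 0x2000}, cache)
assert cache["var_a"] == 0x2000
```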
  • Patent number: 9262500
    Abstract: According to one embodiment, a memory system including a key-value store containing key-value data as a pair of a key and a value corresponding to the key, includes a first memory, a control circuit and a second memory. The first memory is configured to contain a data area for storing data, and a table area containing the key-value data. The control circuit is configured to perform write and read to the first memory by addressing, and execute a request based on the key-value store. The second memory is configured to store the key-value data in accordance with an instruction from the control circuit. The control circuit performs a set operation by using the key-value data stored in the first memory, and the key-value data stored in the second memory.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: February 16, 2016
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsuhiro Kinoshita, Takao Marukame, Kosuke Tatsumura
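An illustrative reading of the set operation in 9262500's abstract: key-value data held in a "first memory" and a "second memory" are combined with ordinary set algebra over their keys. The use of intersection and union here is only an example of such an operation.

```python
first_memory_kv = {"apple": b"\x01", "pear": b"\x02", "plum": b"\x03"}
second_memory_kv = {"pear": b"\x02", "plum": b"\x07", "fig": b"\x04"}

common_keys = first_memory_kv.keys() & second_memory_kv.keys()   # set intersection
all_keys = first_memory_kv.keys() | second_memory_kv.keys()      # set union

print(sorted(common_keys))   # ['pear', 'plum']
print(sorted(all_keys))      # ['apple', 'fig', 'pear', 'plum']
```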
  • Patent number: 9262243
    Abstract: Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving, from a remote address cache associated with a second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address.
    Type: Grant
    Filed: February 13, 2013
    Date of Patent: February 16, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, James E. Carey, Philip J. Sanders, Brian E. Smith
  • Patent number: 9256458
    Abstract: Methods, parallel computers, and computer program products for conditionally updating shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a broadcast reduction operation header. The broadcast reduction operation header includes an SVD key and a first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving, from a remote address cache associated with a second task, a second SVD address indicating a location within a memory partition associated with the first SVD, in response to receiving the broadcast reduction operation header. Embodiments also include the runtime optimizer determining that the first SVD address does not match the second SVD address and updating the remote address cache with the first SVD address.
    Type: Grant
    Filed: December 18, 2012
    Date of Patent: February 9, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, James E. Carey, Philip J. Sanders, Brian E. Smith
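A minimal sketch of the conditional update in 9256458's abstract: when the first SVD address carried by a broadcast reduction header differs from the address cached for that SVD key, the remote address cache entry is refreshed. Field names are invented for illustration.

```python
def handle_broadcast_reduction(header, remote_address_cache):
    key = header["svd_key"]
    first_address = header["first_svd_address"]
    if remote_address_cache.get(key) != first_address:
        remote_address_cache[key] = first_address   # stale or missing: update it
        return True                                 # cache was updated
    return False                                    # already current

cache = {"var_b": 0x4000}
assert handle_broadcast_reduction(
    {"svd_key": "var_b", "first_svd_address": 0x8000}, cache)
assert cache["var_b"] == 0x8000
```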
  • Patent number: 9250950
    Abstract: Methods, parallel computers, and computer program products for conditionally updating shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a broadcast reduction operation header. The broadcast reduction operation header includes an SVD key and a first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving, from a remote address cache associated with a second task, a second SVD address indicating a location within a memory partition associated with the first SVD, in response to receiving the broadcast reduction operation header. Embodiments also include the runtime optimizer determining that the first SVD address does not match the second SVD address and updating the remote address cache with the first SVD address.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: February 2, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, James E. Carey, Philip J. Sanders, Brian E. Smith
  • Patent number: 9229864
    Abstract: Flushing dirty metadata from cache memory in a plurality of file systems, without either letting the caches reach their maximum capacity or using so much of the total system IO process bandwidth that host system IO process requests are unreasonably delayed, may include determining the length of the interval between sync operations for each individual one of the plurality of file systems, and how to divide a system-wide maximum sync process IO operation bandwidth fairly among the plurality of file systems. A computer dynamically measures overall system operation rates and calculates an available portion of a current calculated sync operation bandwidth for each file system. The computer also measures file system operation rates and determines how long the period between sync operations in each file system should be.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: January 5, 2016
    Assignee: EMC Corporation
    Inventors: Kumar Kanteti, William Davenport, Philippe Armangau
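A hedged sketch of the bandwidth-sharing idea in 9229864's abstract: a system-wide sync bandwidth cap is split across file systems, and each file system's interval between sync operations follows from its share and its dirty-cache backlog. The proportional split and the interval formula are illustrative choices, not the patented calculation.

```python
def plan_syncs(max_sync_bandwidth_mb_s, filesystems):
    # Split the cap in proportion to how fast each file system dirties metadata,
    # then derive how long that share needs to drain the current backlog.
    total_dirty_rate = sum(fs["dirty_rate_mb_s"] for fs in filesystems) or 1.0
    plans = []
    for fs in filesystems:
        share = max_sync_bandwidth_mb_s * fs["dirty_rate_mb_s"] / total_dirty_rate
        interval_s = fs["dirty_cache_mb"] / share if share else float("inf")
        plans.append({"name": fs["name"], "bandwidth_mb_s": share,
                      "sync_interval_s": interval_s})
    return plans

for plan in plan_syncs(100.0, [
        {"name": "fs0", "dirty_rate_mb_s": 30.0, "dirty_cache_mb": 600.0},
        {"name": "fs1", "dirty_rate_mb_s": 10.0, "dirty_cache_mb": 100.0}]):
    print(plan)   # fs0 gets 75 MB/s and syncs every 8 s; fs1 gets 25 MB/s, every 4 s
```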
  • Patent number: 9229901
    Abstract: A distributed storage system including memory hosts and at least one curator in communication with the memory hosts. Each memory host has memory, and the curator manages striping of data across the memory hosts. In response to a memory access request by a client in communication with the memory hosts and the curator, the curator provides the client a file descriptor mapping data stripes and data stripe replications of a file on the memory hosts for remote direct memory access of the file on the memory hosts.
    Type: Grant
    Filed: June 8, 2012
    Date of Patent: January 5, 2016
    Assignee: Google Inc.
    Inventors: Kyle Nesbit, Andrew Everett Phelps
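A simplified model of the curator's role in 9229901's abstract: in response to a client request it returns a file descriptor mapping each data stripe (and its replicas) to memory hosts, which the client would then use for remote direct memory access. The round-robin placement and field names are invented.

```python
def build_file_descriptor(file_name, file_size, stripe_size, hosts, replicas=2):
    num_stripes = -(-file_size // stripe_size)        # ceiling division
    stripe_map = []
    for stripe in range(num_stripes):
        # Place each stripe and its replicas on distinct hosts, round-robin.
        placed = [hosts[(stripe + r) % len(hosts)] for r in range(replicas)]
        stripe_map.append({"stripe": stripe, "hosts": placed})
    return {"file": file_name, "stripe_size": stripe_size, "stripes": stripe_map}

fd = build_file_descriptor("logs/db.dat", file_size=10_000_000,
                           stripe_size=4_000_000,
                           hosts=["host-a", "host-b", "host-c"])
print(fd["stripes"])   # each stripe lists the hosts holding it and its replica
```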
  • Patent number: 9213789
    Abstract: A method of generating optimized memory instances using a memory compiler is disclosed. Data pertinent to describing a memory to be designed are provided, and front-end and back-end models are built to supply a library. Design criteria are received via a user interface. The design of the memory is optimized among speed, power and area according to the provided library and the received design criteria, thereby generating memory instances.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: December 15, 2015
    Assignee: M31 TECHNOLOGY CORPORATION
    Inventors: Nan-Chun Lien, Hsiao-Ping Lin, Wei-Chiang Shih, Yu-Chun Lin, Yu-Wei Yeh
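A purely illustrative sketch of "optimizing among speed, power and area" from 9213789's abstract: candidate memory instances from a library are scored against user design criteria with a weighted cost, and the best-scoring instance is selected for generation. The library entries, weights, and cost function are all made up.

```python
def pick_instance(library, weights):
    def cost(inst):
        # Lower is better: weighted blend of access time, power, and area.
        return (weights["speed"] * inst["access_ns"]
                + weights["power"] * inst["power_mw"]
                + weights["area"] * inst["area_um2"] / 1000)
    return min(library, key=cost)

library = [
    {"name": "fast", "access_ns": 0.8, "power_mw": 5.0, "area_um2": 12000},
    {"name": "dense", "access_ns": 1.6, "power_mw": 3.0, "area_um2": 7000},
    {"name": "low_power", "access_ns": 1.2, "power_mw": 1.5, "area_um2": 9000},
]
print(pick_instance(library, {"speed": 1.0, "power": 2.0, "area": 0.5}))
```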
  • Patent number: 9195400
    Abstract: Techniques for improved snapshot data management for modeling and migration planning associated with data storage systems and datacenters. For example, a method comprises the following steps. A plurality of types of representation of states of a system are generated, data from the system is imported into a first type of representation, and a second type of representation is updated, via the first type of representation, with the imported data, wherein modeling can be performed in the second type of representation but not in the first type of representation.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 24, 2015
    Assignee: EMC Corporation
    Inventors: Scott Ostapovicz, Karen Murphy, Fergal Gunn, Michael Schwartz, David Bowden
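A hedged sketch of the two-representation flow in 9195400's abstract: imported system data lands in a first, import-facing representation, which then updates a second representation where modeling is performed. The class names and the toy capacity-summing "model" are assumptions.

```python
class ImportRepresentation:
    def __init__(self):
        self.raw_states = []

    def import_data(self, records):
        self.raw_states.extend(records)

class ModelingRepresentation:
    def __init__(self):
        self.states = {}

    def update_from(self, import_rep: ImportRepresentation):
        for record in import_rep.raw_states:
            self.states[record["id"]] = record

    def model_capacity(self):
        # Modeling (here, a trivial capacity roll-up) happens only in this representation.
        return sum(r["used_gb"] for r in self.states.values())

imp = ImportRepresentation()
imp.import_data([{"id": "array-1", "used_gb": 120}, {"id": "array-2", "used_gb": 80}])
model = ModelingRepresentation()
model.update_from(imp)
print(model.model_capacity())   # 200
```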
  • Patent number: 9189379
    Abstract: The disclosure is directed to a system for managing data samples utilizing a time division multiplexing controller to allocate time slots for accessing a sample memory according to one or more modes of operation. The time division multiplexing controller is configured to allocate slots for concurrent access by a sample controller, a plurality of detectors, and a noise predictive calibrator when a normal mode is enabled. The time division multiplexing controller is further configured to allocate slots excluding at least one of the sample controller, the plurality of detectors, and the noise predictive calibrator from accessing the sample memory when a retry mode is enabled. In some embodiments, the time division multiplexing controller is further configured to allocate time slots for one or more clients other than the sample controller, the plurality of detectors, and the noise predictive calibrator.
    Type: Grant
    Filed: February 6, 2013
    Date of Patent: November 17, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Herjen Wang, Ngok Ying Chu, Johnson Yen, Lei Chen
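An illustrative sketch of the mode-dependent slot allocation in 9189379's abstract: in normal mode the sample controller, the detectors, and the noise predictive calibrator all receive slots; in retry mode at least one of them (here the calibrator) is excluded. The round-robin assignment and client names are invented simplifications.

```python
def allocate_slots(num_slots, mode, detectors=("det0", "det1"), extra_clients=()):
    if mode == "normal":
        clients = ["sample_controller", *detectors, "np_calibrator", *extra_clients]
    elif mode == "retry":
        clients = ["sample_controller", *detectors, *extra_clients]   # calibrator excluded
    else:
        raise ValueError("unknown mode")
    # Hand out sample-memory access slots round-robin among the eligible clients.
    return {slot: clients[slot % len(clients)] for slot in range(num_slots)}

print(allocate_slots(8, "normal"))
print(allocate_slots(8, "retry"))
```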
  • Patent number: 9176669
    Abstract: An algorithm for mapping memory and a method for using a high performance computing (“HPC”) system are disclosed. The algorithm takes into account the number of physical nodes in the HPC system, and the amount of memory in each node. Some of the nodes in the HPC system also include input/output (“I/O”) devices like graphics cards and non-volatile storage interfaces that have on-board memory; the algorithm also accounts for the number of such nodes and the amount of I/O memory they each contain. The algorithm maximizes certain parameters in priority order, including the number of mapped nodes, the number of mapped I/O nodes, the amount of mapped I/O memory, and the total amount of mapped memory.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 3, 2015
    Assignee: Silicon Graphics International Corp.
    Inventors: Brian Justin Johnson, Michael John Habeck
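A hedged sketch of "maximizing parameters in priority order" from 9176669's abstract: candidate mappings are compared lexicographically on (mapped nodes, mapped I/O nodes, mapped I/O memory, total mapped memory), so a higher-priority parameter is never traded for a lower one. How candidate mappings are generated is outside the abstract and omitted here.

```python
def best_mapping(candidates):
    # Python tuples compare lexicographically, which gives the priority ordering.
    return max(candidates, key=lambda m: (m["mapped_nodes"],
                                          m["mapped_io_nodes"],
                                          m["mapped_io_memory_gb"],
                                          m["total_mapped_memory_gb"]))

candidates = [
    {"mapped_nodes": 64, "mapped_io_nodes": 4, "mapped_io_memory_gb": 32,
     "total_mapped_memory_gb": 4096},
    {"mapped_nodes": 64, "mapped_io_nodes": 6, "mapped_io_memory_gb": 24,
     "total_mapped_memory_gb": 4000},
]
print(best_mapping(candidates)["mapped_io_nodes"])   # 6: more I/O nodes wins the tie
```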
  • Patent number: 9170891
    Abstract: A snapshot of a volume is taken by proactively uploading scheduled snapshot data before the scheduled snapshot time has arrived. A volume snapshot schedule of once a day may be set up with a service provider over a speed-limited network connection. Using a determined upload speed of the network connection and a list of changes to the volume since a prior snapshot, a snapshot system may determine an appropriate time to start uploading volume data so that the snapshot may be completed at or after the scheduled snapshot time. By using the list of changes and the available bandwidth of the network connection, the snapshot may be completed earlier than if the upload had been started at the scheduled snapshot time, and the available bandwidth of the network connection may be used more efficiently.
    Type: Grant
    Filed: September 10, 2012
    Date of Patent: October 27, 2015
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
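A minimal sketch of the start-time arithmetic implied by 9170891's abstract: given the amount of changed data and the measured upload speed, compute the latest start time at which the already-changed data can be uploaded by the scheduled snapshot time. Treating any later changes as a small post-schedule tail is an assumption.

```python
def proactive_start_time(scheduled_time_s, changed_bytes, upload_bytes_per_s):
    # Latest moment at which uploading the currently changed blocks can still
    # finish by the scheduled snapshot time; blocks that change after this
    # point are uploaded after the schedule, which is still far sooner than
    # starting the whole transfer at the scheduled time.
    return scheduled_time_s - changed_bytes / upload_bytes_per_s

# 20 GiB of changed blocks over a 10 MiB/s link: start roughly 2048 s early.
start = proactive_start_time(scheduled_time_s=86_400,
                             changed_bytes=20 * 1024**3,
                             upload_bytes_per_s=10 * 1024**2)
print(round(86_400 - start))   # seconds of lead time before the scheduled snapshot
```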
  • Patent number: 9164702
    Abstract: A distributed cache system including a data storage portion, a data control portion, and a cache logic portion in communication with the data storage and data control portions. The data storage portion includes memory hosts, each having non-transitory memory and a network interface controller in communication with the memory for servicing remote direct memory access requests. The data control portion includes a curator in communication with the memory hosts. The curator manages striping of data across the memory hosts. The cache logic portion executes at least one memory access request to implement a cache operation. In response to each memory access request, the curator provides the cache logic portion a file descriptor mapping data stripes and data stripe replications of a file on the memory hosts for remote direct memory access of the file on the memory hosts through the corresponding network interface controllers.
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: October 20, 2015
    Assignee: Google Inc.
    Inventors: Kyle Nesbit, Scott Fredrick Diehl
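A hedged sketch of the cache path in 9164702's abstract: the cache logic asks a curator for a file descriptor mapping a file's stripes to memory hosts, then reads the wanted stripe directly from a host (a dictionary lookup stands in for an RDMA read through the host's network interface controller). Everything here is an invented simplification.

```python
class Curator:
    def __init__(self, stripe_map):
        self.stripe_map = stripe_map          # file -> {stripe index: [hosts]}

    def file_descriptor(self, file_name):
        return self.stripe_map[file_name]

def cached_read(curator, hosts, file_name, stripe, cache):
    if (file_name, stripe) in cache:
        return cache[(file_name, stripe)]     # cache hit: no remote access needed
    fd = curator.file_descriptor(file_name)
    host = fd[stripe][0]                      # first replica holding the stripe
    data = hosts[host][(file_name, stripe)]   # stand-in for an RDMA read
    cache[(file_name, stripe)] = data
    return data

hosts = {"host-a": {("blob", 0): b"stripe-0 bytes"}}
curator = Curator({"blob": {0: ["host-a"]}})
cache = {}
print(cached_read(curator, hosts, "blob", 0, cache))
print(cached_read(curator, hosts, "blob", 0, cache))   # second call is a cache hit
```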