Patents Examined by Jared I Rutz
  • Patent number: 10228866
    Abstract: In general, techniques are described for enabling performance tuning of a storage device. A storage device comprising one or more processors and a memory may perform the tuning techniques. The one or more processors may be configured to receive a command stream including one or more commands to access the storage device. The memory may be configured to store the command stream. The one or more processors may be further configured to insert a delay into the command stream to generate a performance tuned command stream, and access the storage device in accordance with the performance tuned command stream.
    Type: Grant
    Filed: January 19, 2015
    Date of Patent: March 12, 2019
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Darin E. Gerhart
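
    A minimal sketch of the delay-insertion idea in the abstract of patent 10228866 above. The class, function, and fixed-delay policy below are hypothetical illustrations, not taken from the patent.

      # Sketch: insert artificial delays into a command stream so that the
      # storage device is exercised at a tuned (throttled) rate.
      # All names and the delay policy are hypothetical.
      import time
      from dataclasses import dataclass

      @dataclass
      class Command:
          op: str        # e.g. "read" or "write"
          lba: int       # logical block address

      def tune_stream(commands, delay_s=0.001):
          """Yield commands, sleeping between them to form a 'performance
          tuned' command stream (here: a fixed inter-command delay)."""
          for cmd in commands:
              yield cmd
              time.sleep(delay_s)   # the inserted delay

      # Usage: iterate the tuned stream and issue each command to the device.
      for cmd in tune_stream([Command("read", 0), Command("write", 8)]):
          print("issuing", cmd)
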
  • Patent number: 10223291
    Abstract: A computing device comprises: a memory; a processor; an interpreter; and a Memory Management Unit. The interpreter is for controlling the processor to execute a program comprising at least one first instruction in a format that is not native to the processor and at least one second instruction in machine code that is native to the processor. The Memory Management Unit is adapted to control access by the processor to the memory and possibly also to peripherals when the at least one second instruction is executed.
    Type: Grant
    Filed: May 15, 2010
    Date of Patent: March 5, 2019
    Assignee: NXP B.V.
    Inventors: Ernst Haselsteiner, Christian Kirchstaetter
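
    One way to picture the interpreter/MMU interplay described for patent 10223291: non-native instructions are interpreted directly, while native machine-code fragments run only under an MMU-style access check. The instruction format, the check, and all names below are assumptions made for this sketch.

      # Hypothetical sketch: an interpreter that executes non-native opcodes
      # itself and gates native-code execution behind an MMU-like access check.
      ALLOWED_NATIVE_RANGES = [(0x1000, 0x2000)]   # memory the native code may touch

      def mmu_check(addresses):
          """Raise if native code would touch memory outside the allowed ranges."""
          for a in addresses:
              if not any(lo <= a < hi for lo, hi in ALLOWED_NATIVE_RANGES):
                  raise MemoryError(f"access violation at {hex(a)}")

      def run(program):
          for instr in program:
              if instr["kind"] == "interpreted":        # not native to the processor
                  print("interpreting", instr["opcode"])
              else:                                      # native machine code
                  mmu_check(instr["touches"])            # MMU controls its accesses
                  print("running native block", instr["opcode"])

      run([{"kind": "interpreted", "opcode": "ADD"},
           {"kind": "native", "opcode": "blob0", "touches": [0x1004]}])
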
  • Patent number: 10223268
    Abstract: A computer system includes transactional memory to implement a nested transaction. The computer system generates a plurality of speculative identification numbers (IDs), identifies at least one of a software thread executed by a hardware processor and a memory operation performed in accordance with an application code. The computer system assigns at least one speculative cache version to a requested transaction based on a corresponding software thread. The speculative ID of the corresponding software thread identifies the speculative cache version. The computer system also identifies a nested transaction in the memory unit, assigns a cache version to the nested transaction, detects a conflict with the nested transaction, determines a conflicted nesting level of the nested transaction, and determines a cache version corresponding to the conflicted nesting level. The computer system also invalidates the cache version corresponding to the conflicted nesting level.
    Type: Grant
    Filed: February 23, 2016
    Date of Patent: March 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Karl Gschwind, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
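
    A toy model of the bookkeeping described for patent 10223268 above: each nesting level of a transaction gets its own speculative cache version, and a conflict invalidates the version at the conflicted nesting level. The class and the conflict handling are invented for illustration and greatly simplify the hardware mechanism.

      # Hypothetical sketch: one speculative cache version per nesting level;
      # a conflict invalidates the version at the conflicted level (and deeper).
      class NestedTransaction:
          def __init__(self):
              self.versions = {}              # nesting level -> speculative writes

          def begin(self, level):
              self.versions[level] = {}       # assign a fresh cache version

          def write(self, level, addr, value):
              self.versions[level][addr] = value

          def on_conflict(self, addr):
              # find the deepest level whose version holds the conflicting address
              for level in sorted(self.versions, reverse=True):
                  if addr in self.versions[level]:
                      # invalidate that level's cache version (and anything deeper)
                      for l in [l for l in self.versions if l >= level]:
                          del self.versions[l]
                      return level
              return None

      tx = NestedTransaction()
      tx.begin(0); tx.write(0, 0x10, "a")
      tx.begin(1); tx.write(1, 0x20, "b")
      print("conflicted level:", tx.on_conflict(0x20))   # -> 1; level 0 survives
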
  • Patent number: 10210100
    Abstract: A system and method are disclosed for an event lock storage device. The storage device includes a user partition and an event partition (which may be associated with an event). The storage device receives data from a host device, and stores the data in the user partition. In response to receiving an indication of an event, the storage device may designate the data as part of the event partition. The event partition may include a set of access rules that is different from the user partition, such as more restrictive rules for modification or deletion of a file containing the data.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: February 19, 2019
    Assignee: SANDISK TECHNOLOGIES LLC
    Inventors: Filip Verhaeghe, Bsa Chung, Samuel Yu, Michael Lavrentiev
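
    A small sketch of the event-partition behaviour in the abstract of patent 10210100: data lands in a user partition, and an event designates it as part of an event partition with stricter modification and deletion rules. All names and the permission rule below are invented for illustration.

      # Hypothetical sketch: files written to a user partition become part of an
      # event partition (with stricter rules) once an event is signalled.
      class EventLockStore:
          def __init__(self):
              self.user_files = {}     # name -> data, freely modifiable
              self.event_files = {}    # name -> data, delete/modify restricted

          def write(self, name, data):
              self.user_files[name] = data

          def on_event(self, names):
              """Designate the listed files as part of the event partition."""
              for n in names:
                  if n in self.user_files:
                      self.event_files[n] = self.user_files.pop(n)

          def delete(self, name):
              if name in self.event_files:
                  raise PermissionError("event-locked file; deletion restricted")
              self.user_files.pop(name, None)

      store = EventLockStore()
      store.write("dashcam_0001.mp4", b"...")
      store.on_event(["dashcam_0001.mp4"])   # e.g. an event was detected
      try:
          store.delete("dashcam_0001.mp4")
      except PermissionError as e:
          print(e)
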
  • Patent number: 10210007
    Abstract: Techniques are disclosed for performing input/output (I/O) requests to two or more physical adapters in parallel. An address for at least a first page associated with a virtual I/O request is mapped to an entry in a virtual translation control entry (TCE) table. A plurality of physical adapters required to service the virtual I/O request are identified. Upon determining, in each of the identified physical adapters, that an entry in the respective physical TCE table corresponding to the physical adapter is available, for each of the identified physical adapters, the entry in the virtual TCE table is mapped to an entry in the respective physical TCE table corresponding to the physical adapter, and a physical I/O request corresponding to each physical TCE table entry is issued to the respective physical adapter.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: February 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Andrew T. Koch, Kyle A. Lucke, Nicholas J. Rogness, Steven E. Royer
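
    A rough sketch of the mapping step described for patent 10210007 above: the virtual TCE entry is mapped, and the physical I/O requests issued, only once every required physical adapter has a free entry in its own TCE table. The data structures and names are hypothetical.

      # Hypothetical sketch: map a virtual TCE entry onto one physical TCE entry
      # per adapter, then issue the physical I/O requests (conceptually in parallel).
      def service_virtual_io(virtual_tce, adapters, page_addr):
          # 1. every adapter needed for the request must have a free physical entry
          free_slots = {}
          for ad in adapters:
              slot = next((i for i, e in enumerate(ad["tce"]) if e is None), None)
              if slot is None:
                  return False                     # cannot map yet; retry later
              free_slots[ad["name"]] = slot

          # 2. map the virtual entry to each adapter's physical entry and issue I/O
          virtual_tce["page"] = page_addr
          for ad in adapters:
              ad["tce"][free_slots[ad["name"]]] = page_addr
              print(f"issue physical I/O on {ad['name']} slot {free_slots[ad['name']]}")
          return True

      adapters = [{"name": "hba0", "tce": [None, None]},
                  {"name": "hba1", "tce": ["in-use", None]}]
      service_virtual_io({"page": None}, adapters, page_addr=0x7F00)
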
  • Patent number: 10198358
    Abstract: Apparatuses, computer readable mediums, and methods of processor unit testing using cache resident testing are disclosed. The method may include loading a test program in a cache on a chip comprising one or more processor units. The method may include the one or more processor units executing the test program to generate one or more results. The method may include redirecting a first memory reference to the cache, wherein the first memory reference is generated during the execution of the test program. The method may include determining whether the one or more generated results match one or more test results. The method may include redirecting a memory request to a memory location resident in the cache if the memory request includes a memory location not resident in the cache. The method may include redirecting a memory request to the cache if the memory request is not directed to the cache.
    Type: Grant
    Filed: April 2, 2014
    Date of Patent: February 5, 2019
    Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
    Inventors: Angel E. Socarras, Kostantinos Danny Christidis, Curtis Alan Gilgan, Alexander Fuad Ashkar
  • Patent number: 10200472
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for improved coordination between sender and receiver nodes in a one-sided memory access to a PGAS in a distributed computing environment. The system may include a transceiver module configured to receive a message over a network, the message comprising a data portion and a data size indicator, and an offset handler module configured to calculate a destination address from a base address of a memory buffer and an offset counter. The transceiver module may further be configured to write the data portion to the memory buffer at the destination address, and the offset handler module may further be configured to update the offset counter based on the data size indicator.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: February 5, 2019
    Assignee: Intel Corporation
    Inventors: Mario Flajslik, James Dinan
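
    The offset-handler arithmetic in the abstract of patent 10200472 reduces to a small amount of bookkeeping, sketched below with invented names: the destination is the buffer base plus a running offset counter, which then advances by the received message's data size.

      # Hypothetical sketch: receiver-side placement for one-sided PGAS messages.
      # destination = base address of buffer + current offset counter;
      # the counter then advances by the size of the data just written.
      class OffsetHandler:
          def __init__(self, buffer_size):
              self.buffer = bytearray(buffer_size)   # the receive memory buffer
              self.offset = 0                        # running offset counter

          def on_message(self, data, data_size):
              dest = self.offset                     # destination (offset from base)
              self.buffer[dest:dest + data_size] = data
              self.offset += data_size               # update counter from size field
              return dest

      rx = OffsetHandler(buffer_size=64)
      print(rx.on_message(b"hello", 5))   # -> 0
      print(rx.on_message(b"world", 5))   # -> 5
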
  • Patent number: 10191775
    Abstract: The present invention discloses a method for optimizing the throughput of hardware accelerators (HWAs) in a computerized abstraction system by utilizing the maximal data input bandwidth to the said HWAs. The method comprises the following steps: dynamically obtaining the quantities and properties of HWAs and storage units within the computerized abstraction system; dynamically allocating cache memory space per each of the HWAs, according to the said obtained quantities and properties, to minimize the time required for reading data from storage instances to the said HWA; and dynamically allocating spoolers per each of the HWAs, according to the said obtained quantities and properties, to buffer the input data and ensure a continuous flow of input data at the target HWA's maximal input bandwidth.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: January 29, 2019
    Assignee: SQREAM TECHNOLOGIES LTD.
    Inventors: Ori Brostovsky, Omid Vahdaty, Eli Klatis, Tal Zelig, Jake Wheat, Razi Shoshani
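
    A sketch of the allocation idea in the abstract of patent 10191775: given the discovered accelerators and their input bandwidths, divide the available cache and spooler resources so each accelerator can be fed at its maximal input bandwidth. The proportional split below is an assumption made for illustration, not the patented allocation policy.

      # Hypothetical sketch: split cache space and spooler threads across hardware
      # accelerators (HWAs) in proportion to each HWA's maximal input bandwidth.
      def allocate_resources(hwas, total_cache_mb, total_spoolers):
          total_bw = sum(h["max_input_gbps"] for h in hwas)
          plan = {}
          for h in hwas:
              share = h["max_input_gbps"] / total_bw
              plan[h["name"]] = {
                  "cache_mb": int(total_cache_mb * share),            # buffers reads from storage
                  "spoolers": max(1, round(total_spoolers * share)),  # keep input data flowing
              }
          return plan

      hwas = [{"name": "gpu0", "max_input_gbps": 12},
              {"name": "gpu1", "max_input_gbps": 4}]
      print(allocate_resources(hwas, total_cache_mb=2048, total_spoolers=8))
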
  • Patent number: 10180901
    Abstract: Aspects of the present disclosure disclose systems and methods for managing space in storage devices. In various aspects, the disclosure is directed to providing a more efficient method for managing free space in the storage system, and related apparatus and methods. In particular, the system provides for freeing blocks of memory that are no longer being used, based on information stored in a file system. More specifically, the system allows for reclaiming large segments of free blocks at one time by providing the storage devices with information on aggregated blocks that are being freed.
    Type: Grant
    Filed: February 18, 2013
    Date of Patent: January 15, 2019
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventor: Eric Carl Taylor
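
    A simplified sketch of the idea in the abstract of patent 10180901: instead of notifying the storage device of freed blocks one at a time, the file system aggregates contiguous freed blocks and reclaims them as large segments (much like a batched TRIM/UNMAP). The aggregation helper below is an illustration, not the patented method.

      # Hypothetical sketch: coalesce freed block numbers into contiguous segments
      # so the storage device can reclaim large ranges in one operation.
      def aggregate_free_blocks(freed_blocks):
          """Turn e.g. [7, 8, 9, 20, 21, 40] into [(7, 3), (20, 2), (40, 1)]."""
          segments = []
          for b in sorted(freed_blocks):
              if segments and b == segments[-1][0] + segments[-1][1]:
                  segments[-1] = (segments[-1][0], segments[-1][1] + 1)
              else:
                  segments.append((b, 1))
          return segments

      for start, length in aggregate_free_blocks([7, 8, 9, 20, 21, 40]):
          print(f"reclaim {length} blocks starting at {start}")   # one request per segment
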
  • Patent number: 10157002
    Abstract: A method begins by a processing module determining a priority access level of an encoded data slice stored on a memory device. The method continues with the processing module determining an end-of-life memory level for the memory device. The method continues with the processing module determining whether to migrate the encoded data slice from the memory device based on the priority access level and the end-of-life memory level. The method continues with the processing module identifying another memory device. The method continues with the processing module facilitating migration of the encoded data slice to the other memory device.
    Type: Grant
    Filed: August 5, 2011
    Date of Patent: December 18, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gary W. Grube, Jason K. Resch, Timothy W. Markison, Ilya Volvovski, Manish Motwani
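
    The decision flow in the abstract of patent 10157002 can be pictured as a small policy function. The level encodings and thresholds below are invented for the sketch and are not taken from the patent.

      # Hypothetical sketch: decide whether to migrate an encoded data slice based
      # on its priority access level and the memory device's end-of-life level.
      def should_migrate(priority_access_level, end_of_life_level):
          """Migrate high-priority slices off devices that are close to end of life.
          Levels and thresholds here are invented for illustration."""
          wear_critical = end_of_life_level >= 0.8      # device is nearly worn out
          slice_important = priority_access_level >= 2  # slice is frequently needed
          return wear_critical and slice_important

      def migrate(slice_id, source, candidates):
          target = next(d for d in candidates if d["end_of_life"] < 0.5)  # pick a healthier device
          print(f"migrating slice {slice_id}: {source} -> {target['name']}")

      if should_migrate(priority_access_level=3, end_of_life_level=0.9):
          migrate("slice-42", "mem0", [{"name": "mem7", "end_of_life": 0.1}])
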
  • Patent number: 10140031
    Abstract: A Flash Translation Layer (FTL) structure including mapping information for storing data is disclosed. The FTL structure includes a plurality of hierarchical data groups including a zeroth-layer host data group, and first-layer to nth-layer metadata groups, and zeroth to nth logs configured in a hierarchical structure in correspondence with the respective hierarchical data groups, for processing data of the corresponding data groups. A kth log (0 ≤ k ≤ n) provides an interface to volatile memory resources dividedly allocated to the kth log, an interface to non-volatile memory resources dividedly allocated to the kth log, and an interface to at least one of the (k−1)th and (k+1)th logs.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: November 27, 2018
    Assignee: FADU Inc.
    Inventors: Yoon Jae Seong, Eyee Hyun Nam, Hongseok Kim, Jin-yong Choi, Sunggab Lee, Kijun Kim
  • Patent number: 10114749
    Abstract: A cache memory system is provided. The cache memory system includes multiple upper level caches and a current level cache. Each upper level cache includes multiple cache lines. The current level cache includes an exclusive tag random access memory (Exclusive Tag RAM) and an inclusive tag random access memory (Inclusive Tag RAM). The Exclusive Tag RAM is configured to preferentially store an index address of a cache line that is in each upper level cache and whose status is unique dirty (UD). The Inclusive Tag RAM is configured to store an index address of a cache line that is in each upper level cache and whose status is unique clean (UC), shared clean (SC), or shared dirty (SD).
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: October 30, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Zhenxi Tu, Jing Xia
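
    A compact model of the tag-placement rule described in the abstract of patent 10114749: upper-level lines in the unique dirty (UD) state are tracked in the exclusive tag RAM, while UC, SC and SD lines are tracked in the inclusive tag RAM. The sets and function below are an illustration only.

      # Hypothetical sketch: route an upper-level cache line's index address into
      # the exclusive or inclusive tag RAM based on its coherence state.
      EXCLUSIVE_STATES = {"UD"}                 # unique dirty
      INCLUSIVE_STATES = {"UC", "SC", "SD"}     # unique clean, shared clean, shared dirty

      exclusive_tag_ram = set()
      inclusive_tag_ram = set()

      def record_upper_level_line(index_addr, state):
          if state in EXCLUSIVE_STATES:
              exclusive_tag_ram.add(index_addr)     # preferentially tracked here
          elif state in INCLUSIVE_STATES:
              inclusive_tag_ram.add(index_addr)
          else:
              raise ValueError(f"unknown coherence state {state}")

      record_upper_level_line(0x1A0, "UD")
      record_upper_level_line(0x2B0, "SC")
      print(sorted(exclusive_tag_ram), sorted(inclusive_tag_ram))
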
  • Patent number: 10095621
    Abstract: A method for coordinating cache and memory reservation in a computerized system includes identifying at least one running application, recognizing the at least one application as a latency-critical application, monitoring information associated with a current cache access rate and a required memory bandwidth of the at least one application, allocating a cache partition whose size corresponds to the cache access rate and the required memory bandwidth of the at least one application, defining a threshold value including a number of cache misses per time unit, determining a reduction of cache misses per time unit, retaining the cache partition in response to the reduction of cache misses per time unit being above the threshold value, assigning a priority for scheduling memory requests including a medium priority level, and assigning a memory channel to the at least one application to avoid memory channel contention.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: October 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Robert Birke, Yiyu Chen, Navaneeth Rameshan, Martin Schmatz
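
    The retention test in the abstract of patent 10095621 boils down to comparing the observed reduction in cache misses against a threshold; the check below is a hypothetical illustration of that single step, with invented numbers.

      # Hypothetical sketch: keep a latency-critical app's cache partition only if
      # the reduction in cache misses per time unit exceeds the defined threshold.
      def evaluate_partition(misses_before, misses_after, threshold):
          reduction = misses_before - misses_after          # misses avoided per time unit
          return "retain partition" if reduction > threshold else "reclaim partition"

      print(evaluate_partition(misses_before=5000, misses_after=1200, threshold=1000))
      print(evaluate_partition(misses_before=5000, misses_after=4800, threshold=1000))
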
  • Patent number: 10078462
    Abstract: Methods and systems for providing a security function, such as random number generation, fingerprinting and data hiding, using a Flash memory. The methods and systems do not require carefully designed special-purpose circuits and can be implemented in any Flash memory device. The fingerprinting methods and systems do not require a long read time to generate a fingerprint, and the data hiding is decoupled from the Flash memory content.
    Type: Grant
    Filed: May 17, 2013
    Date of Patent: September 18, 2018
    Assignee: CORNELL UNIVERSITY
    Inventors: Yinglei Wang, Wing-kei Yu, Edwin C. Kan, Gookwon E. Suh
  • Patent number: 10073656
    Abstract: An I/O manager may be configured to service I/O requests pertaining to ephemeral data of a virtual machine using a storage device that is separate from and/or independent of a primary storage resource to which the I/O request is directed. Ephemeral data may be removed from ephemeral storage in response to a removal condition and/or trigger, such as a virtual machine reboot. The I/O manager may manage transfers of ephemeral virtual machine data in response to virtual machines migrating between host computing devices. The I/O manager may be further configured to cache virtual machine data, and/or manage shared file data that is common to two or more virtual machines operating on a host computing device.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: September 11, 2018
    Assignee: SANDISK TECHNOLOGIES LLC
    Inventors: Jerene Zhe Yang, Yang Luan, Brent Lim Tze Hao, Vikram Joshi, Michael Brown, Prashanth Radhakrishnan, David Flynn, Bhavesh Mehta
  • Patent number: 10067681
    Abstract: A memory chip, a memory system, and a method of accessing the memory chip. The memory chip includes a substrate, a first storage unit, and a second storage unit. The first storage unit includes a plurality of first memory cells and may have a first storage capacity of 2^n. The plurality of first memory cells may be configured to activate in response to a first selection signal. The second storage unit includes a plurality of second memory cells and may have a second storage capacity of 2^(n+1). The plurality of second memory cells may be configured to activate in response to a second selection signal.
    Type: Grant
    Filed: March 22, 2012
    Date of Patent: September 4, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chul-sung Park, Joo-sun Choi
  • Patent number: 10055161
    Abstract: In one aspect, a method includes splitting empty RAID stripes into sub-stripes and storing pages into the sub-stripes based on a compressibility score. In another aspect, a method includes reading pages from 1-stripes, storing compressed data in a temporary location, reading multiple stripes, determining a compressibility score for each stripe, and filling stripes based on the compressibility score. In a further aspect, a method includes scanning a dirty queue in a system cache, compressing pages ready for destaging, combining compressed pages into one aggregated page, writing the one aggregated page to one stripe, and storing pages with the same compressibility score in a stripe.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: August 21, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: David Meiri, Anton Kucherov, Vladimir Shveidel
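
    A toy version of the destaging path in the abstract of patent 10055161: pages from the cache's dirty queue are compressed, given a compressibility score, and grouped so that pages with the same score fill the same stripe. The scoring and grouping below are illustrative assumptions, not the patented algorithm.

      # Hypothetical sketch: compress dirty pages, bucket them by a coarse
      # compressibility score, and fill each stripe with same-score pages.
      import zlib
      from collections import defaultdict

      def compressibility_score(page):
          """Coarse score: how many quarters of the page the compressed form needs."""
          ratio = len(zlib.compress(page)) / len(page)
          return max(1, min(4, round(ratio * 4)))

      def destage(dirty_queue, pages_per_stripe=4):
          buckets = defaultdict(list)
          for page in dirty_queue:
              buckets[compressibility_score(page)].append(zlib.compress(page))
          stripes = []
          for score, pages in buckets.items():
              for i in range(0, len(pages), pages_per_stripe):
                  stripes.append((score, pages[i:i + pages_per_stripe]))  # one stripe
          return stripes

      dirty = [b"A" * 4096, b"B" * 4096, bytes(range(256)) * 16]
      for score, stripe in destage(dirty):
          print(f"stripe with score {score}: {len(stripe)} page(s)")
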
  • Patent number: 10048866
    Abstract: A storage control apparatus includes a plurality of MBFs for managing pieces of data stored in a storage by storage region, caches some of the MBFs on a RAM, and determines the presence or absence of redundancy on the basis of the MBFs on the RAM alone. The storage control apparatus performs redundancy elimination on the pieces of data already stored in the storage on the basis of how the MBFs are used, such that the contents of a hash log for an MBF higher in frequency of use are maintained.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: August 14, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Yoshihiro Tsuchiya, Takashi Watanabe
  • Patent number: 10013218
    Abstract: Methods, apparatus and computer program products implement embodiments of the present invention that include storing one or more data volumes to a small computer system interface storage device, and receiving a request to map a given data volume to a host computer. One or more attributes of the given data volume are identified, and using the identified one or more attributes, a unique logical unit number (LUN) for the given data volume is generated. The given data volume is mapped to the host computer via the unique LUN. In some embodiments, the generated LUN includes one of the one or more attributes. In additional embodiments, the generated LUN includes a result of a hash function using the one or more attributes. In storage virtualization environments, the data volume may include secondary logical units, and mapping the given data volume to the host may include binding the SLU to the host.
    Type: Grant
    Filed: January 22, 2014
    Date of Patent: July 3, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Daniel I. Goodman, Ran Harel, Oren S. Li-On, Rivka M. Matosevich, Orit Nissan-Messing, Yossi Siles, Eliyahu Weissbrem
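
    The LUN-generation step in the abstract of patent 10013218 can be illustrated with a hash over the volume's attributes; the attribute choice, the hash, and the 16-bit truncation below are assumptions made for this sketch.

      # Hypothetical sketch: derive a logical unit number (LUN) for a data volume
      # from a hash of its attributes, then map the volume to the host under it.
      import hashlib

      def generate_lun(volume_attrs):
          """Hash the volume's attributes and fold the digest into a small LUN."""
          key = "|".join(f"{k}={volume_attrs[k]}" for k in sorted(volume_attrs))
          digest = hashlib.sha256(key.encode()).digest()
          return int.from_bytes(digest[:2], "big")        # 16-bit LUN, illustrative

      host_mappings = {}

      def map_volume_to_host(volume_attrs, host):
          lun = generate_lun(volume_attrs)
          host_mappings[(host, lun)] = volume_attrs["name"]
          return lun

      lun = map_volume_to_host({"name": "vol7", "pool": "gold", "size_gb": 100}, "hostA")
      print(f"vol7 mapped to hostA as LUN {lun}")
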
  • Patent number: 9990289
    Abstract: A processing system having a multilevel cache hierarchy employs techniques for repurposing dead cache blocks so as to use otherwise wasted space in a cache hierarchy employing a write-back scheme. For a cache line containing invalid data with a valid tag, the valid tag is maintained for cache coherence purposes or otherwise, resulting in a valid tag for a dead cache block. A cache controller repurposes the dead cache block by storing any of a variety of new data at the dead cache block, while storing the new tag in a tag entry of a dead block tag way with an identifier indicating the location of the new data.
    Type: Grant
    Filed: September 19, 2014
    Date of Patent: June 5, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Derek R. Hower, Shuai Che
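
    A schematic model of the dead-block reuse described in the abstract of patent 9990289: a line whose data is invalid but whose tag must stay valid for coherence is repurposed to hold new data, with a new tag stored alongside an identifier pointing at where that data now lives. The line layout and names below are invented for the sketch.

      # Hypothetical sketch: reuse a "dead" cache block (invalid data, valid tag kept
      # for coherence) to store new data, recording the new tag plus a locator.
      class CacheLine:
          def __init__(self, tag):
              self.tag = tag            # kept valid for coherence even when data dies
              self.data_valid = False
              self.data = None
              self.repurposed = None    # (new tag, identifier of the new data's location)

      def repurpose_dead_block(line, new_tag, new_data, way_id):
          if line.data_valid:
              raise ValueError("block is live; cannot repurpose")
          line.data = new_data                       # otherwise-wasted space now used
          line.repurposed = (new_tag, way_id)        # identifier locates the new data
          return line

      dead = CacheLine(tag=0x3F)        # data invalidated earlier, tag still valid
      repurpose_dead_block(dead, new_tag=0x91, new_data=b"victim line", way_id=2)
      print(dead.tag, dead.repurposed)
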