Patents Examined by Jae U Yu
  • Patent number: 11151058
    Abstract: Provided are a computer program product, system, and method for staging data from storage to a fast cache tier of a multi-tier cache in a non-adaptive sector caching mode in which data staged in response to a read request is limited to track sectors required to satisfy the read request. Data is also staged from storage to a slow cache tier of the multi-tier cache in a selected adaptive caching mode of a plurality of adaptive caching modes available for staging data of tracks. Adaptive caching modes are selected for the slow cache tier as a function of historical access ratios. Prestage requests for the slow cache tier are enqueued in one of a plurality of prestage request queues of various priority levels as a function of the selected adaptive caching mode and historical access ratios. Other aspects and advantages are provided, depending upon the particular application.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh Mohan Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
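The staging policy in 11151058 lends itself to a small sketch. Below is a minimal Python illustration of selecting an adaptive caching mode for the slow tier from historical access ratios and routing prestage requests into priority queues; the mode names, thresholds, and three-level queue layout are illustrative assumptions, not details from the patent.

```python
from collections import deque
from enum import Enum

class CachingMode(Enum):
    SECTOR = "sector"            # stage only the requested sectors
    PARTIAL_TRACK = "partial"    # stage from the requested sector to end of track
    FULL_TRACK = "full"          # stage the entire track

def select_adaptive_mode(sequential_ratio: float, whole_track_ratio: float) -> CachingMode:
    """Pick an adaptive caching mode for the slow tier from historical access
    ratios.  The 0.8 / 0.5 thresholds are placeholders."""
    if whole_track_ratio > 0.8:
        return CachingMode.FULL_TRACK
    if sequential_ratio > 0.5:
        return CachingMode.PARTIAL_TRACK
    return CachingMode.SECTOR

# One prestage request queue per priority level (highest first).
prestage_queues = {"high": deque(), "medium": deque(), "low": deque()}

def enqueue_prestage(track_id: int, mode: CachingMode, sequential_ratio: float) -> None:
    """Queue a slow-tier prestage request at a priority derived from the
    selected mode and the historical access ratios."""
    if mode is CachingMode.FULL_TRACK:
        prestage_queues["high"].append((track_id, mode))
    elif mode is CachingMode.PARTIAL_TRACK and sequential_ratio > 0.5:
        prestage_queues["medium"].append((track_id, mode))
    else:
        prestage_queues["low"].append((track_id, mode))
```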
  • Patent number: 11133061
    Abstract: An example method includes determining a time between writes in place to a particular memory cell, incrementing a disturb count corresponding to a neighboring memory cell by a particular count increment that is based on the time between the writes to the particular memory cell, and determining whether to check a write disturb status of the neighboring memory cell based on the incremented disturb count.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: September 28, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Edward C. McGlaughlin, Samuel E. Bradshaw
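A minimal sketch of the time-weighted disturb counting described in 11133061: the increment applied to a neighboring cell grows when writes in place to the aggressor cell arrive close together, and the accumulated count decides when a write-disturb check is warranted. The time window, increment values, and threshold are placeholders.

```python
import time

# Illustrative parameters, not taken from the patent.
DISTURB_CHECK_THRESHOLD = 10_000
FAST_WRITE_WINDOW_S = 0.001      # writes closer together disturb neighbors more

last_write_time = {}             # cell address -> timestamp of last write in place
disturb_count = {}               # cell address -> accumulated disturb count

def record_write_in_place(cell: int, neighbors: list[int]) -> list[int]:
    """Increment neighbor disturb counts by an amount that depends on the time
    between consecutive writes to `cell`; return the neighbors whose
    write-disturb status should be checked."""
    now = time.monotonic()
    delta = now - last_write_time.get(cell, 0.0)
    last_write_time[cell] = now

    # Shorter gaps between writes get a larger increment (values are placeholders).
    increment = 4 if delta < FAST_WRITE_WINDOW_S else 1

    to_check = []
    for n in neighbors:
        disturb_count[n] = disturb_count.get(n, 0) + increment
        if disturb_count[n] >= DISTURB_CHECK_THRESHOLD:
            to_check.append(n)
    return to_check
```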
  • Patent number: 11126556
    Abstract: Memory prefetching in a processor comprises: identifying, in response to memory access instructions, a pattern of addresses; and determining, based on the pattern of addresses, an address to prefetch. Determining the address to prefetch comprises: determining, using the pattern of addresses, an index into a history table; retrieving, from the history table and using the index, an offset value, wherein the offset value is not the address to prefetch; and determining the address to prefetch using the offset value and at least one address of the pattern of addresses. The method further comprises prefetching the address to prefetch.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: September 21, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventor: Shubhendu Sekhar Mukherjee
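The history-table lookup in 11126556 stores an offset rather than a full prefetch address. A rough sketch, assuming a delta-hash index function and a single-entry-per-index table (both our choices, not the patent's):

```python
HISTORY_TABLE_SIZE = 256
history_table = [0] * HISTORY_TABLE_SIZE   # each entry holds an offset, not an address

def pattern_index(addresses: list[int]) -> int:
    """Fold a recent pattern of demand addresses into a history-table index.
    Hashing the deltas between accesses is one plausible choice."""
    deltas = [b - a for a, b in zip(addresses, addresses[1:])]
    return hash(tuple(deltas)) % HISTORY_TABLE_SIZE

def train(addresses: list[int], next_address: int) -> None:
    """Remember the offset from the last observed address to the address that
    actually followed this pattern."""
    history_table[pattern_index(addresses)] = next_address - addresses[-1]

def predict(addresses: list[int]) -> int | None:
    """Return an address to prefetch: the stored offset applied to the most
    recent address in the pattern, or None when nothing has been learned."""
    offset = history_table[pattern_index(addresses)]
    return addresses[-1] + offset if offset else None
```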
  • Patent number: 11126753
    Abstract: A processor chip including a memory controller, application processor and a communication processor, where the memory controller is configured to define an area of memory as secure memory, and allow only an access request with a security attribute to access the secure memory. The application processor is configured to invoke a secure application in a trusted execution environment, and write an instruction request for a secure element into the secure memory using the secure application. The communication processor is configured to read the instruction request from the secure memory in the trusted execution environment, and send the instruction request to the secure element. The application processor and the communication processor need to be in the trusted execution environment when accessing the secure memory, and access the secure memory only using the secure application.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: September 21, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Li Zhu, Zhihua Lu
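A compact sketch of the memory-controller check implied by 11126753: only access requests carrying the security attribute may touch the address window defined as secure memory. The window bounds and request fields are illustrative.

```python
from dataclasses import dataclass

SECURE_BASE = 0x8000_0000      # illustrative secure-memory window
SECURE_SIZE = 0x0010_0000

@dataclass
class AccessRequest:
    address: int
    size: int
    secure: bool               # set only for requests issued from the trusted execution environment

def memory_controller_allows(req: AccessRequest) -> bool:
    """Allow a request into the secure window only if it carries the security
    attribute; accesses outside the window are unaffected."""
    in_secure_window = SECURE_BASE <= req.address < SECURE_BASE + SECURE_SIZE
    return req.secure or not in_secure_window
```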
  • Patent number: 11119921
    Abstract: State machine generation for a multi-buffer electronic system can include receiving, using a processor, a user input specifying a reader policy and a number of a plurality of buffers used by a reader and a writer of the multi-buffer electronic system. A state machine can be generated as a data structure. The state machine has a plurality of states determined based on the number of the plurality of buffers and the reader policy. The state machine allocates different buffers of the plurality of buffers to the reader in temporally accurate order over time. Each state can specify an allocation from the plurality of buffers to the reader and the writer. A state machine description including one or more program code components can be generated, where the one or more program components may be used in an implementation of the reader and an implementation of the writer.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: September 14, 2021
    Assignee: Xilinx, Inc.
    Inventor: Uday M. Hegde
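The generated state machine in 11119921 ultimately encodes which buffer the reader and the writer hold in each state. The sketch below models the underlying allocation policy at runtime for an in-order reader policy; a real generator would enumerate these transitions offline into a state table, and the class and method names are our own.

```python
from collections import deque

class MultiBufferAllocator:
    """Runtime model of the buffer allocation a generated state machine would
    encode: the writer always receives a free buffer, and completed buffers are
    handed to the reader in the order they were written (temporally accurate)."""

    def __init__(self, num_buffers: int):
        self.free = deque(range(num_buffers))   # buffers nobody currently holds
        self.ready = deque()                    # written but not yet read, oldest first

    def acquire_for_writer(self) -> int:
        return self.free.popleft()              # raises IndexError if no buffer is free

    def writer_done(self, buf: int) -> None:
        self.ready.append(buf)

    def acquire_for_reader(self) -> int:
        return self.ready.popleft()             # in-order reader policy: oldest first

    def reader_done(self, buf: int) -> None:
        self.free.append(buf)
```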
  • Patent number: 11113193
Abstract: Techniques for implementing an apparatus, which includes a memory system that provides data storage via multiple hierarchical memory levels, are provided. The memory system includes a cache that implements a first memory level and a memory array that implements a second memory level higher than the first memory level. Additionally, the memory system includes one or more memory controllers that determine a predicted data access pattern expected to occur during an upcoming control horizon, based at least in part on first context of first data to be stored in the memory system, second context of second data previously stored in the memory system, or both, and control in which one or more of the multiple hierarchical memory levels implemented in the memory system to store the first data, the second data, or both, based at least in part on the predicted data access pattern.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: September 7, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Anton Korzh
  • Patent number: 11113000
Abstract: In various embodiments, a memory pool application implements composite arrays via a memory pool that includes a first slab and a second slab. First, the memory pool application assigns the first slab and the second slab to a composite array. The memory pool application then modifies a final data word included in the first slab to store a first portion of a specified value and a leading data word included in the second slab to store a second portion of the specified value. The memory pool application copies the leading data word to a duplicate data word included in the first slab. Subsequently, the memory pool application performs an unaligned read operation on the first slab based on a specified offset to retrieve a first word stored in memory and extracts the specified value from the first word based on the specified offset and a specified number of bits.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: September 7, 2021
    Assignee: NETFLIX, INC.
    Inventor: John Andrew Koszewnik
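A sketch of the duplicate-word trick in 11113000: because slab 1 carries a copy of slab 2's leading data word, an unaligned read that straddles the slab boundary can be served from slab 1 alone. The 64-bit word width, slab layout, and class and method names are illustrative assumptions.

```python
WORD_BITS = 64

class TwoSlabArray:
    """Two-slab composite array: slab 1 holds `words_per_slab` data words plus one
    extra 'duplicate' word mirroring slab 2's leading data word."""

    def __init__(self, words_per_slab: int):
        self.n = words_per_slab
        self.slab1 = [0] * (words_per_slab + 1)   # last entry is the duplicate word
        self.slab2 = [0] * words_per_slab

    def write_value(self, value: int, bit_offset: int, num_bits: int) -> None:
        """Write a value whose bits may straddle the slab boundary; bit_offset is
        measured from the start of slab 1's data words."""
        lo_word, shift = divmod(bit_offset, WORD_BITS)
        lo_bits = min(num_bits, WORD_BITS - shift)
        self.slab1[lo_word] |= (value & ((1 << lo_bits) - 1)) << shift
        if lo_bits < num_bits:                    # value spills into the next word
            if lo_word == self.n - 1:             # ...which lives in slab 2
                self.slab2[0] |= value >> lo_bits
                self.slab1[self.n] = self.slab2[0]  # keep the duplicate in sync
            else:
                self.slab1[lo_word + 1] |= value >> lo_bits

    def read_value(self, bit_offset: int, num_bits: int) -> int:
        """Unaligned read served entirely from slab 1: the duplicate word stands in
        for slab 2's leading word when the value crosses the boundary."""
        word, shift = divmod(bit_offset, WORD_BITS)
        combined = self.slab1[word] | (self.slab1[word + 1] << WORD_BITS)
        return (combined >> shift) & ((1 << num_bits) - 1)
```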
  • Patent number: 11106584
    Abstract: A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes a L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and a L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 31, 2021
Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria
  • Patent number: 11106368
Abstract: A solid state drive and a method for accessing metadata are provided. The solid state drive includes first and second memories of different kinds and a memory controller which controls the first and second memories. The memory controller receives a metadata access request from a host and includes a condition checker which determines conditions of the first and second memories in response to the metadata access request and selects at least one of the memories based on those conditions, and the memory controller accesses the memory selected by the condition checker.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: August 31, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-Duk Yu, Jin-Young Kim, Yu-Hun Jun
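A minimal reading of the condition checker in 11106368, with busy state and queue depth standing in for whatever conditions the controller actually inspects (the abstract does not say which):

```python
from dataclasses import dataclass

@dataclass
class MemoryState:
    name: str
    busy: bool          # e.g. currently servicing another operation
    queue_depth: int    # outstanding commands

def condition_checker(first: MemoryState, second: MemoryState) -> MemoryState:
    """Pick which of the two different-kind memories should service a metadata
    access, based on their current conditions.  The busy/queue-depth criteria
    are illustrative placeholders."""
    if first.busy and not second.busy:
        return second
    if second.busy and not first.busy:
        return first
    return first if first.queue_depth <= second.queue_depth else second
```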
  • Patent number: 11099997
    Abstract: In a data prefetching method, a storage device obtains a first sequence stream length and a first access count of a target logical block after execution of a first data access request is completed. When a second data access request is received, the storage device modifies the first sequence stream length to a second sequence stream length and modifies the first access count to a second access count. The storage device further calculates a sequence degree of the target logical block based on the second sequence stream length and the second access count, and performs a data prefetching operation when the sequence degree of the target logical block exceeds a first prefetch threshold.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: August 24, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Chunhua Tan, Weiqiang Jia, Ding Li, Wenqiang Yang, Liyu Wang, Pengli Ji
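The prefetch decision in 11099997 reduces to maintaining two counters per logical block and comparing a derived sequence degree to a threshold. In the sketch below, the sequence degree is taken as stream length per access, which is one plausible reading of the abstract rather than the patented formula; the threshold value is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class LogicalBlockStats:
    sequence_stream_length: int = 0   # contiguous data seen so far for this block
    access_count: int = 0             # accesses to this logical block

PREFETCH_THRESHOLD = 4.0              # illustrative first prefetch threshold

def on_access(stats: LogicalBlockStats, request_len: int, is_sequential: bool) -> bool:
    """Update the per-logical-block counters for a new access, compute the
    sequence degree, and report whether a prefetch operation should be issued."""
    stats.access_count += 1
    if is_sequential:
        stats.sequence_stream_length += request_len
    else:
        stats.sequence_stream_length = request_len

    sequence_degree = stats.sequence_stream_length / stats.access_count
    return sequence_degree > PREFETCH_THRESHOLD
```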
  • Patent number: 11100004
    Abstract: A processor uses the same virtual address space for heterogeneous processing units of the processor. The processor employs different sets of page tables for different types of processing units, such as a CPU and a GPU, wherein a memory management unit uses each set of page tables to translate virtual addresses of the virtual address space to corresponding physical addresses of memory modules associated with the processor. As data is migrated between memory modules, the physical addresses in the page tables can be updated to reflect the physical location of the data for each processing unit.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: August 24, 2021
    Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
    Inventors: Gongxian Jeffrey Cheng, Mark Fowler, Philip J. Rogers, Benjamin T. Sander, Anthony Asaro, Mike Mantor, Raja Koduri
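A sketch of the shared-virtual-address-space arrangement in 11100004: one page table per processing-unit type, all keyed by the same virtual page numbers, with migration updating the physical mappings. Updating every unit's table to the same physical page is a simplification of the scheme; the page size and dict-based tables are illustrative.

```python
PAGE_SIZE = 4096

# One page table per processing-unit type, all translating the same virtual
# address space.
page_tables = {
    "cpu": {},   # virtual page number -> physical page number
    "gpu": {},
}

def translate(unit: str, vaddr: int) -> int:
    """MMU-style lookup: the same virtual address maps through the page table of
    whichever processing unit issued the access."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_tables[unit][vpn] * PAGE_SIZE + offset

def migrate_page(vpn: int, new_ppn: int) -> None:
    """After data migrates between memory modules, update each unit's page table
    so it reflects the data's current physical location."""
    for table in page_tables.values():
        if vpn in table:
            table[vpn] = new_ppn
```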
  • Patent number: 11086543
Abstract: Data to write to a tape in a linear tape file system (LTFS) is received. The data is written to the tape. A forced termination of the write is received. Responsive to receiving the forced termination, a preserve metadata command is issued.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: August 10, 2021
    Assignee: International Business Machines Corporation
    Inventor: Atsushi Abe
  • Patent number: 11086570
    Abstract: A storage device, a controller and a method for operating a controller are disclosed. The controller includes a descriptor storage circuit configured to store at least one descriptor corresponding to a command received from a host in one descriptor queue among N (N is a natural number) number of descriptor queues; a descriptor queue selection circuit configured to select one descriptor queue among the descriptor queues, as a target descriptor queue; and a descriptor execution control circuit configured to determine whether an operation indicated by a first descriptor stored in the target descriptor queue is executable, based on information on an available power budget and information on a power consumption amount for the first descriptor, and, when the operation indicated by the first descriptor is executable, control whether to execute an operation indicated by a second descriptor stored together with the first descriptor in the target descriptor queue.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: August 10, 2021
    Assignee: SK hynix Inc.
    Inventor: Ga-Young Lee
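A sketch of the power-budget gate in 11086570: a descriptor at the head of the target descriptor queue runs only if its power consumption fits the available budget, and a blocked descriptor also holds back the descriptors queued behind it. The power figures and queue layout are illustrative.

```python
from collections import deque

# Illustrative power figures (arbitrary units); a real controller would take
# these from the device's power model.
AVAILABLE_POWER_BUDGET = 100

def try_execute_from_queue(target_queue: deque, available_budget: int) -> int:
    """Pop and execute descriptors from the target descriptor queue while the
    power each one needs fits in the remaining budget; stop at the first one
    that does not fit.  Each descriptor is (operation, power_needed)."""
    while target_queue:
        operation, power_needed = target_queue[0]
        if power_needed > available_budget:
            break                      # not executable: also hold the descriptors behind it
        target_queue.popleft()
        available_budget -= power_needed
        print(f"executing {operation} (remaining budget {available_budget})")
    return available_budget

# Example: N descriptor queues, one of which has been selected as the target.
descriptor_queues = [deque() for _ in range(4)]
descriptor_queues[0].extend([("program", 40), ("read", 10), ("erase", 70)])
try_execute_from_queue(descriptor_queues[0], AVAILABLE_POWER_BUDGET)
```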
  • Patent number: 11074014
    Abstract: An apparatus includes a data storage medium having a plurality of tracks. The apparatus also includes a write history buffer configured to store a history of prior write commands to the plurality of tracks. The apparatus further includes a controller communicatively coupled to the write history buffer. The controller is configured to receive a new write command directed to a first portion of a first track of the plurality of tracks on the data storage medium. The controller is further configured to determine whether to update ATI contribution measures from the first track based on the history of write commands to the first track.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: July 27, 2021
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Jian Qiang, Mark A. Gaertner, Kay Hee Tang, Chee Hou Peng
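One plausible way to act on the write history buffer of 11074014, sketched below: skip recomputing a track's ATI (adjacent-track interference) contribution when the history already shows many recent writes to that same track. The history depth, limit, and this particular policy are our assumptions.

```python
from collections import deque

HISTORY_DEPTH = 64        # illustrative write-history buffer depth
RECENT_WRITE_LIMIT = 4    # illustrative policy parameter

write_history: deque[int] = deque(maxlen=HISTORY_DEPTH)   # track numbers of prior writes

def should_update_ati(track: int) -> bool:
    """Decide, from the write history buffer, whether a new write command to
    `track` should trigger an update of the ATI contribution measures from
    that track."""
    recent_writes_to_track = sum(1 for t in write_history if t == track)
    write_history.append(track)
    return recent_writes_to_track < RECENT_WRITE_LIMIT
```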
  • Patent number: 11074183
Abstract: A method and apparatus for read wearing control for storage class memory (SCM) are disclosed. The read wearing control apparatus, located between a host and the SCM subsystem, comprises a read data cache, an address cache and an SCM controller. The address cache stores pointers pointing to data stored in logging area(s) located in the SCM. For a read request, the read wearing control determines whether the read request is a read data cache hit, an address cache hit or neither (i.e., read data cache miss and address cache miss). For the read data cache hit, the requested data is returned from the read data cache. For the address cache hit, the requested data is returned from the logging area(s) and the read data becomes a candidate to be placed in the read data cache. For read data cache and address cache misses, the requested data is returned from the SCM.
    Type: Grant
    Filed: December 28, 2019
    Date of Patent: July 27, 2021
    Assignee: Wolley Inc.
    Inventors: Yu-Ming Chang, Tai-Chun Kuo, Chuen-Shen Bernard Shung
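The three-way read path of 11074183 in sketch form, with plain dictionaries standing in for the read data cache, address cache, SCM logging area, and SCM itself; inserting straight into the read data cache on an address-cache hit simplifies the "candidate" behaviour described in the abstract.

```python
def handle_read(address: int,
                read_data_cache: dict,
                address_cache: dict,
                logging_area: dict,
                scm: dict):
    """Serve from the read data cache on a hit; on an address-cache hit follow
    the stored pointer into the SCM logging area (and place the data in the
    read data cache); otherwise read the SCM directly."""
    if address in read_data_cache:                      # read data cache hit
        return read_data_cache[address], "data-cache hit"

    if address in address_cache:                        # address cache hit
        data = logging_area[address_cache[address]]     # pointer into the logging area
        read_data_cache[address] = data                 # candidate for the read data cache
        return data, "address-cache hit"

    return scm[address], "miss"                         # both caches missed
```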
  • Patent number: 11074186
    Abstract: A computer-implemented method according to one embodiment includes managing a cache in a tiered data storage system. The cache is configured to be powered by a temporary power source during a power loss event. The managing includes determining an amount of time that the temporary power source is capable of powering the cache before the temporary power source is depleted, and maintaining a dynamic cache size. The maintaining includes dynamically selecting the cache size based on the amount of time that the temporary power source is capable of powering the cache before the temporary power source is depleted, and based on a latency of destaging extents of data in the cache.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: July 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ganesh Govindrao Chaudhari, Kushal Patel, Sachin Chandrakant Punadikar, Sarvesh S. Patel
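The cache-sizing rule in 11074186 can be read as: never cache more than can be destaged within the temporary power source's hold-up time. A sketch under that reading, with illustrative numbers (the actual formula and its inputs are not given in the abstract):

```python
def dynamic_cache_size(holdup_time_s: float,
                       destage_latency_s_per_extent: float,
                       extent_size_bytes: int,
                       hardware_limit_bytes: int) -> int:
    """Size the cache so that every dirty extent in it can be destaged within
    the time the temporary power source can keep the cache powered."""
    destageable_extents = int(holdup_time_s / destage_latency_s_per_extent)
    return min(destageable_extents * extent_size_bytes, hardware_limit_bytes)

# Example: 30 s of battery hold-up, 5 ms to destage one 16 MiB extent, 64 GiB of cache memory.
size = dynamic_cache_size(30.0, 0.005, 16 * 2**20, 64 * 2**30)
```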
  • Patent number: 11068173
Abstract: Disclosed are an apparatus for, and a method of, writing software objects into a rewritable nonvolatile memory of an electronic control unit of an internal combustion engine, wherein the method comprises: receiving an access request from a memory writing device, generating a seed code, transmitting the seed code to the memory writing device, generating a first key code on the basis of the seed code and a first identification code, generating a second key code on the basis of the seed code and a second identification code, receiving a reference key code from the memory writing device, comparing the reference key code with the first key code and/or with the second key code, and enabling the memory writing device to write software objects into the rewritable nonvolatile memory if the reference key code corresponds to the first key code or to the second key code.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: July 20, 2021
    Assignee: Lombardini S.R.L.
    Inventors: Felice Di Iorio, Federico Costa, Roberto Massaro
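The seed/key exchange in 11068173 is a classic challenge-response. The sketch below uses HMAC-SHA256 as a stand-in key-derivation function and made-up identification codes; the patent only requires that each key code be derived from the seed and an identification code.

```python
import hashlib
import hmac
import os

# Placeholder identification codes; the real ones would be provisioned secrets.
FIRST_ID_CODE = b"id-code-1"
SECOND_ID_CODE = b"id-code-2"

def generate_seed() -> bytes:
    return os.urandom(16)

def key_from(seed: bytes, id_code: bytes) -> bytes:
    """Stand-in key-derivation function: key code = HMAC(id code, seed)."""
    return hmac.new(id_code, seed, hashlib.sha256).digest()

def writer_may_flash(seed: bytes, reference_key: bytes) -> bool:
    """ECU side: enable the memory writing device if the reference key it sent
    matches either key code derived from the seed."""
    first_key = key_from(seed, FIRST_ID_CODE)
    second_key = key_from(seed, SECOND_ID_CODE)
    return hmac.compare_digest(reference_key, first_key) or \
           hmac.compare_digest(reference_key, second_key)

# Flow: the ECU sends the seed to the writing device, which derives its own key
# from the seed and its identification code and returns it as the reference key.
seed = generate_seed()
reference = key_from(seed, FIRST_ID_CODE)      # a legitimate writing device
assert writer_may_flash(seed, reference)
```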
  • Patent number: 11061614
    Abstract: An electronic apparatus includes a storage device having a plurality of memory blocks including a first memory block; and a controller configured to control the storage device to perform a read operation for the first memory block in response to a read request of a host. The controller controls the storage device to perform a refresh operation for the first memory block based on whether there is a difference value between a current pass read voltage and a previous pass read voltage which were applied to the first memory block when performing the read operation, and whether there is a difference between a current erase/write count and a previous erase/write count for the first memory block.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: July 13, 2021
    Assignee: SK hynix Inc.
Inventor: Chul Sung Kang
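One way the two differences in 11061614 might combine, sketched below: a shifted pass read voltage with an unchanged erase/write count suggests the block has drifted rather than been rewritten, so it is refreshed. This combination rule is our interpretation, not the claimed logic.

```python
from dataclasses import dataclass

@dataclass
class BlockHistory:
    previous_pass_read_voltage: float
    previous_erase_write_count: int

def needs_refresh(history: BlockHistory,
                  current_pass_read_voltage: float,
                  current_erase_write_count: int) -> bool:
    """Refresh the block when the pass read voltage applied for the read differs
    from the previous one while the erase/write count has not changed."""
    voltage_changed = current_pass_read_voltage != history.previous_pass_read_voltage
    count_changed = current_erase_write_count != history.previous_erase_write_count
    return voltage_changed and not count_changed
```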
  • Patent number: 11061828
    Abstract: A computer-implemented method, according to one approach, includes: receiving an I/O request which includes supplemental information pertaining to an anticipated workload of the I/O request. The supplemental information is used to determine whether to satisfy the I/O request using a primary cache. In response to determining to satisfy the I/O request using the primary cache, the I/O request is initiated using the primary cache, and performance characteristics experienced by the primary cache while satisfying the I/O request are evaluated. The supplemental information and the performance characteristics are further used to determine whether to satisfy a remainder of the I/O request using the secondary cache. In response to determining to satisfy a remainder of the I/O request using the secondary cache, the I/O request is demoted from the primary cache to the secondary cache, and a remainder of the I/O request is satisfied using the secondary cache.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: July 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: Beth Ann Peterson, Chung Man Fung, Lokesh Mohan Gupta, Kyler A. Anderson
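A sketch of the decision flow in 11061828: the supplemental workload hints pick the cache that first takes the I/O request, and observed performance then decides whether the remainder is demoted to the secondary cache. The hint fields, capacities, and latency limit are illustrative.

```python
from dataclasses import dataclass

@dataclass
class IORequest:
    data: bytes
    expected_size: int        # supplemental information: anticipated workload size
    sequential: bool          # supplemental information: anticipated access pattern

PRIMARY_CAPACITY = 1 << 20    # illustrative capacity and latency threshold
LATENCY_LIMIT_S = 0.002

def service(req: IORequest, primary: dict, secondary: dict, key: str,
            measured_latency_s: float) -> str:
    """Use the supplemental hints to decide whether the primary cache takes the
    request, then use the hints plus observed performance to decide whether the
    remainder is demoted to the secondary cache."""
    use_primary = req.expected_size <= PRIMARY_CAPACITY and not req.sequential
    if not use_primary:
        secondary[key] = req.data
        return "secondary"

    primary[key] = req.data                       # initiate the request in the primary cache
    if measured_latency_s > LATENCY_LIMIT_S:      # primary struggles with this workload
        secondary[key] = primary.pop(key)         # demote; remainder satisfied by secondary
        return "demoted-to-secondary"
    return "primary"
```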
  • Patent number: 11054993
    Abstract: An apparatus is described. The apparatus includes peer-to-peer intelligence to be integrated into a mass storage system having a cache and a backing store. The peer-to-peer intelligence is to move data between the cache and backing store without the data passing through main memory.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: July 6, 2021
    Assignee: Intel Corporation
    Inventors: Knut S. Grimsrud, Sanjeev N. Trika