Patents Examined by Ilwoo Park
  • Patent number: 12135658
    Abstract: A bus architecture is disclosed that provides for transaction queue reallocation on the modules communicating using the bus. A module can implement a transaction request queue by virtue of digital electronic circuitry, e.g., hardware or software or a combination of both. Some bus clogging issues that affect conventional systems can be circumvented by combining an out-of-order system bus protocol with a transaction request replay mechanism. Modules can evict less urgent transactions from transaction request queues to make room to insert more urgent transactions. Master modules can dynamically update a quality of service (QoS) value for a transaction while the transaction is still pending.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: November 5, 2024
    Assignee: ATMEL CORPORATION
    Inventors: Franck Lunadier, Vincent Debout
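A minimal sketch of the queue-reallocation idea in the entry above, assuming a simplified software model (the Transaction and TransactionQueue names and the eviction policy are illustrative, not taken from the patent): a full queue admits a more urgent request by evicting the least urgent pending one, which its master would later replay, and a pending transaction's QoS value can still be updated.
```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Transaction:
    txn_id: int
    qos: int  # higher value = more urgent

class TransactionQueue:
    """Simplified model of a module's transaction request queue."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pending: List[Transaction] = []

    def try_insert(self, txn: Transaction) -> Optional[Transaction]:
        """Insert txn; return a transaction the caller must replay later, if any."""
        if len(self.pending) < self.capacity:
            self.pending.append(txn)
            return None
        victim = min(self.pending, key=lambda t: t.qos)
        if victim.qos < txn.qos:
            self.pending.remove(victim)     # evict the less urgent transaction
            self.pending.append(txn)
            return victim                   # its master replays it later
        return txn                          # nothing less urgent to evict

    def update_qos(self, txn_id: int, new_qos: int) -> None:
        """Dynamically update the QoS of a transaction that is still pending."""
        for t in self.pending:
            if t.txn_id == txn_id:
                t.qos = new_qos
```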
  • Patent number: 12131031
    Abstract: Systems and methods for automated tuning of Quality of Service (QoS) settings of volumes in a distributed storage system are provided. According to one embodiment, one or more characteristics of a workload of a client to which a storage node of multiple storage nodes of the distributed storage system is exposed are monitored. After a determination has been made that a characteristic meets or exceeds a threshold, (i) information regarding multiple QoS settings assigned to a volume of the storage node utilized by the client is obtained, (ii) a new value of a burst IOPS setting of the multiple QoS settings is calculated by increasing a current value of the burst IOPS setting by a factor dependent upon a first and a second QoS setting of the multiple QoS settings, and (iii) the new value of the burst IOPS setting is assigned to the volume for the client.
    Type: Grant
    Filed: July 3, 2023
    Date of Patent: October 29, 2024
    Assignee: NetApp, Inc.
    Inventors: Austino Longo, Tyler W. Cady
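The burst-IOPS retuning step described above can be sketched as follows; the concrete scaling factor and the choice of min/max IOPS as the "first" and "second" QoS settings are assumptions for illustration only.
```python
from dataclasses import dataclass, replace

@dataclass
class QosSettings:
    min_iops: int     # assumed to be the "first" QoS setting
    max_iops: int     # assumed to be the "second" QoS setting
    burst_iops: int

def maybe_retune_burst(observed: float, threshold: float,
                       qos: QosSettings) -> QosSettings:
    """Raise burst IOPS once a monitored workload characteristic meets or
    exceeds its threshold; otherwise leave the volume's settings alone."""
    if observed < threshold:
        return qos
    # Illustrative factor derived from the two other QoS settings.
    factor = 1.0 + (qos.max_iops / max(qos.min_iops, 1)) / 100.0
    return replace(qos, burst_iops=int(qos.burst_iops * factor))
```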
  • Patent number: 12124401
    Abstract: A data communication apparatus comprises a line driver configured to couple the data communication apparatus to a 1-wire serial bus; and a controller configured to: transmit a plurality of synchronization pulses over the 1-wire serial bus after a sequence start condition (SSC) has been transmitted over the 1-wire serial bus, the plurality of synchronization pulses being configured to synchronize one or more receiving devices coupled to the 1-wire serial bus to an untransmitted transmit clock signal; initiate an interrupt handling procedure when the plurality of synchronization pulses is encoded with a first value; and initiate a read transaction or a write transaction with at least one of the one or more receiving devices coupled to the 1-wire serial bus when the plurality of synchronization pulses is encoded with a second value.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: October 22, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Lalan Jee Mishra, Umesh Srikantiah, Francesco Gatta, Richard Dominic Wietfeldt
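A sketch of the receiver-side dispatch implied by the abstract above; the pulse encodings and the device interface are assumptions, and clock synchronization itself is not modeled.
```python
INTERRUPT_CODE = 0b0   # "first value": start interrupt handling
DATA_XFER_CODE = 0b1   # "second value": start a read or write transaction

def on_sync_pulses(encoded_value: int, device) -> None:
    """Dispatch after the synchronization pulses that follow a sequence
    start condition (SSC) on the 1-wire serial bus."""
    if encoded_value == INTERRUPT_CODE:
        device.handle_interrupt()
    elif encoded_value == DATA_XFER_CODE:
        device.run_read_or_write_transaction()
    else:
        raise ValueError("unrecognized synchronization-pulse encoding")
```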
  • Patent number: 12111764
    Abstract: Systems, apparatuses, and methods related to memory systems and operation are described. A memory system may be coupled to a processor, which includes a memory controller. The memory controller may determine whether targeting of first data and second data by the processor to perform an operation results in processor-side cache misses. When targeting of the first data and the second data results in processor-side cache misses, the memory controller may determine a single memory access request that requests return of both the first data and the second data and instruct the processor to output the single memory access request to a memory system via one or more data buses coupled between the processor and the memory system to enable processing circuitry implemented in the processor to perform the operation based at least in part on the first data and the second data when returned from the memory system.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: October 8, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Harold Robert George Trout
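The dual-miss coalescing above can be sketched roughly as below; the request structure and cache interface are assumptions.
```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MemoryAccessRequest:
    addresses: Tuple[int, int]   # both operands returned by one request

def coalesce_on_double_miss(addr1: int, addr2: int,
                            cache) -> Optional[MemoryAccessRequest]:
    """Build a single memory access request only when both operands miss
    in the processor-side cache."""
    if not cache.contains(addr1) and not cache.contains(addr2):
        return MemoryAccessRequest(addresses=(addr1, addr2))
    return None   # at least one operand is cached; handle normally (not shown)
```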
  • Patent number: 12105575
    Abstract: Example implementations relate to executing a workload in a computing system including processing devices, memory devices, and a circuit switch. An example includes identifying first and second instruction-level portions to be consecutively executed by the computing system; determining a first subset of processing devices and a first subset of memory devices to be used to execute the first instruction-level portion; controlling the circuit switch to interconnect the first subset of processing devices and the first subset of memory devices during execution of the first instruction-level portion; determining a second subset of the processing devices and a second subset of the memory devices to be used to execute the second instruction-level portion; and controlling the circuit switch to interconnect the second subset of processing devices and the second subset of memory devices during execution of the second instruction-level portion.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: October 1, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Terrel Morris
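A sketch of the per-portion circuit-switch reconfiguration described above, assuming hypothetical portion and switch interfaces (required_processing_devices, interconnect, release).
```python
def run_workload(portions, circuit_switch) -> None:
    """Execute instruction-level portions consecutively, rewiring the circuit
    switch to the subsets each portion needs just before it runs."""
    for portion in portions:
        cpus = portion.required_processing_devices()
        mems = portion.required_memory_devices()
        circuit_switch.interconnect(cpus, mems)   # dedicated paths for this portion
        portion.execute(cpus, mems)
        circuit_switch.release(cpus, mems)        # free paths for the next portion
```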
  • Patent number: 12099453
    Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: September 24, 2024
    Assignee: NVIDIA Corporation
    Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
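The array partitioning step can be illustrated with the sketch below; it assumes a square tile grid and a matrix whose dimensions divide evenly, and the mapping of a (row, column) key to a processing tile's local memory block is only notional.
```python
import numpy as np

def partition_for_tiles(matrix: np.ndarray, tiles_per_side: int) -> dict:
    """Split a dense matrix into sub-arrays, one per processing tile; each
    sub-array would live in that tile's local (vertically aligned) memory block."""
    rows, cols = matrix.shape
    tr, tc = rows // tiles_per_side, cols // tiles_per_side
    program_tiles = {}
    for i in range(tiles_per_side):
        for j in range(tiles_per_side):
            # (i, j) names the processing tile that executes this program tile.
            program_tiles[(i, j)] = matrix[i * tr:(i + 1) * tr,
                                           j * tc:(j + 1) * tc]
    return program_tiles
```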
  • Patent number: 12079154
    Abstract: A storage engine has a pair of compute nodes, each compute node having a separate PCIe root complex and attached memory. The PCIe root complexes are interconnected by multiple Non-Transparent Bridge (NTB) links. The NTB resources are unequally shared, such that host IO devices are required to use a first subset of the NTB links to implement memory access operations on the memory of the peer compute node, whereas storage software memory access operations are able to be implemented on all of the NTB links. An NTB link arbitration system arbitrates usage of the first and second subsets of NTB links by the storage software, to distribute subsets of the storage software memory access operations on peer memory to the first and second subsets of NTB links, while causing all host IO device memory access operations on peer memory to be implemented on the first set of NTB links.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: September 3, 2024
    Assignee: Dell Products, L.P.
    Inventors: Jonathan Krasner, Ro Monserrat, Jerome Cartmell, Thomas Mackintosh
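A sketch of the unequal NTB link sharing described above; the round-robin policy and the way the two subsets are supplied are assumptions.
```python
import itertools

class NtbArbiter:
    """Host IO devices may only use the first subset of NTB links; storage
    software accesses are spread across both subsets."""

    def __init__(self, first_subset, second_subset):
        self.host_links = list(first_subset)
        self.all_links = list(first_subset) + list(second_subset)
        self._host_rr = itertools.cycle(self.host_links)
        self._sw_rr = itertools.cycle(self.all_links)

    def pick_link(self, requester: str):
        if requester == "host_io_device":
            return next(self._host_rr)   # confined to the first subset
        return next(self._sw_rr)         # storage software may use any link
```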
  • Patent number: 12073078
    Abstract: Some aspects disclosed herein are directed to, for example, a system and method of providing flexible surge volume management to applications when performance capacity is available. The system and method may comprise determining when a data surge is occurring and, in response, determining available performance capacity and automatically allocating the available performance capacity to storage group applications performing data operations.
    Type: Grant
    Filed: June 3, 2022
    Date of Patent: August 27, 2024
    Assignee: Bank of America Corporation
    Inventor: Bijoy Shroff
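A rough sketch of the surge-handling flow; the surge test, the capacity arithmetic, and the storage-group interface are all illustrative assumptions.
```python
def manage_surge(current_iops: float, baseline_iops: float,
                 total_capacity: float, used_capacity: float,
                 storage_groups) -> None:
    """When a data surge is detected, hand the available performance capacity
    to the storage group applications performing data operations."""
    if current_iops <= 2.0 * baseline_iops:     # illustrative surge detection
        return
    available = max(total_capacity - used_capacity, 0.0)
    active = [g for g in storage_groups if g.is_performing_data_operations()]
    if not active or available == 0.0:
        return
    for group in active:
        group.allocate_extra_capacity(available / len(active))
```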
  • Patent number: 12067270
    Abstract: Systems, methods, and apparatus for memory device security and row hammer mitigation are described. A control mechanism may be implemented in a front-end and/or a back-end of a memory sub-system to refresh rows of the memory. A row activation command having a row address may be received at control circuitry of a memory sub-system, and a first count of a row counter corresponding to the row address, stored in a content addressable memory (CAM) of the memory sub-system, may be incremented. Control circuitry may determine whether the first count is greater than a row hammer threshold (RHT) minus a second count of a CAM decrease counter (CDC); the second count may be incremented each time the CAM is full. A refresh command to the row address may be issued when a determination is made that the first count is greater than the RHT minus the second count.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: August 20, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Yang Lu, Sujeet Ayyapureddi, Edmund J. Gieske, Cagdas Dirik, Ameen D. Akel, Elliott C. Cooper-Balis, Amitava Majumdar, Robert M. Walker, Danilo Caraccio
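The threshold check in the abstract above (first count > RHT minus second count) is sketched below; the CAM-full handling is simplified to incrementing the CAM decrease counter and skipping tracking, which is an assumption rather than the patented behavior.
```python
class RowHammerMitigator:
    def __init__(self, rht: int, cam_capacity: int):
        self.rht = rht                # row hammer threshold (RHT)
        self.cam_capacity = cam_capacity
        self.cam = {}                 # row address -> activation count
        self.cdc = 0                  # CAM decrease counter (CDC)

    def on_row_activate(self, row_address: int, issue_refresh) -> None:
        if row_address not in self.cam and len(self.cam) >= self.cam_capacity:
            self.cdc += 1             # CAM is full: tighten the effective threshold
            return
        self.cam[row_address] = self.cam.get(row_address, 0) + 1
        if self.cam[row_address] > self.rht - self.cdc:
            issue_refresh(row_address)         # refresh the hammered row
            self.cam.pop(row_address, None)    # restart tracking for that row
```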
  • Patent number: 12067258
    Abstract: A memory device includes: a memory cell array including a first memory cell group including memory cells located within a first physical distance from a reference node and a second memory cell group including memory cells located beyond the first physical distance from the reference node; a peripheral circuit configured to perform a program operation of applying program voltages increasing gradually to memory cells included in the memory cell array through word lines; and control logic configured to determine a time at which a first program permission voltage is applied to the first memory cell group and determine a magnitude of the first program permission voltage on the basis of a magnitude of the program voltages in response to a gradual increase in the program voltages. The control logic is further configured to control the peripheral circuit to apply the first program permission voltage to the first memory cell group through bit lines.
    Type: Grant
    Filed: October 10, 2022
    Date of Patent: August 20, 2024
    Assignee: SK hynix Inc.
    Inventors: Hyun Seob Shin, Dong Hun Kwak
  • Patent number: 12067256
    Abstract: A technique is configured to provide various data protection schemes, such as replication and erasure coding, for data blocks of volumes served by storage nodes of a cluster configured to perform deduplication of the data blocks. Additionally, the technique is configured to ensure that each deduplicated data block complies with data redundancy guarantees of the data protection schemes, while improving storage space of the storage nodes. In order to satisfy the data integrity guarantees while improving available storage space, the storage nodes perform periodic garbage collection for data blocks to optimize storage in accordance with currently applicable data protection schemes.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: August 20, 2024
    Assignee: NetApp, Inc.
    Inventors: Christopher Clark Corey, Daniel David McCarthy, Sneheet Kumar Mishra, Austino Nicholas Longo
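A very rough sketch of a garbage-collection pass consistent with the description above; the metadata model (referenced block IDs, copy counts, a per-scheme required_copies value) is assumed for illustration.
```python
def garbage_collect(blocks, volumes) -> None:
    """Free unreferenced deduplicated blocks and keep each surviving block
    compliant with its strictest currently applicable protection scheme."""
    for block in blocks:
        referrers = [v for v in volumes if block.block_id in v.referenced_block_ids]
        if not referrers:
            block.free()                       # no volume needs this block
            continue
        required = max(v.protection_scheme.required_copies for v in referrers)
        while block.copy_count() < required:
            block.add_copy()                   # restore redundancy guarantees
        while block.copy_count() > required:
            block.remove_copy()                # reclaim surplus storage space
```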
  • Patent number: 12061554
    Abstract: Systems, apparatuses, and methods related to memory systems and operation are described. A memory system may be communicatively coupled to a processor via one or more data buses. Additionally, the memory system may include one or more memory devices that store data to be used by processing circuitry implemented in the processor to perform an operation. Furthermore, the memory system may include a memory controller that receives a memory access request that requests return of the data via the one or more data buses and, in response, determines a storage location of the data in the one or more memory devices based at least in part on the memory access request and instructs the memory system to store the data directly into a processor-side cache integrated with the processing circuitry to enable the processing circuitry implemented in the processor to perform the operation based on the data.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: August 13, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Harold Robert George Trout
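The direct-to-cache fill path above is sketched here; the address interleaving rule and the device/cache interfaces are assumptions.
```python
def resolve_location(address: int, num_devices: int):
    # Illustrative interleaving: device index from the address, offset from the rest.
    return address % num_devices, address // num_devices

def service_access_request(request, memory_devices, processor_cache):
    """Resolve where the requested data lives and store it directly into the
    processor-side cache integrated with the processing circuitry."""
    device_idx, offset = resolve_location(request.address, len(memory_devices))
    data = memory_devices[device_idx].read(offset, request.length)
    processor_cache.fill(request.address, data)   # bypass intermediate buffering
    return data
```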
  • Patent number: 12061796
    Abstract: A storage device includes a memory device including a first memory region, a second memory region, and a third memory region, the first memory region having a lowest bit-density relative to the second memory region and the third memory region, the second memory region having a medium bit-density relative to the first memory region and the third memory region, and the third memory region having a highest bit-density relative to the first memory region and the second memory region; and a controller configured to control the memory device. The controller is configured to distribute data received from a host to the first to third memory regions based on attributes of the data, to determine a current state based on a data distribution amount for each of the first to third memory regions and a respective size of each of the first to third memory regions, and to perform an action of increasing or decreasing a size of the second memory region under the current state based on a reinforcement learning result for mitigating
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: August 13, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Gyeongmin Nam, Chanha Kim, Seungryong Jang
  • Patent number: 12061809
    Abstract: Host access to a system DS1 can be configured for a logical device L1 so that L1 is exposed to the host over path P1 from DS1. Prior to configuring host access to L1 on another system DS2, configuration information of DS1 can be updated to include a fully populated uniform host configuration for the host with respect to L1. The fully populated uniform host configuration can identify P1 as well as path P2 between DS2 and the host. Even though P2 may not be established so that L1 is not yet exposed to the host over P2, DS1 can use the information included in the fully populated uniform host configuration to report information to the host regarding path state information for P1 and P2. The host can directly query DS2 regarding P2 in order to determine current up-to-date information regarding the path state of P2 with respect to L1.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: August 13, 2024
    Assignee: Dell Products L.P.
    Inventors: Dave J. Lindner, Mrinalini Chavan
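A sketch of how DS1 might report per-path state from the fully populated uniform host configuration; the ALUA-style state names and the shape of the configuration object are assumptions.
```python
def report_path_states(uniform_config_paths, established_paths) -> dict:
    """Report a state for every path in the uniform host configuration, even
    those (like P2) that are not yet established on the other system."""
    states = {}
    for path in uniform_config_paths:          # e.g. ["P1", "P2"]
        if path in established_paths:
            states[path] = "active"            # the device is exposed over it
        else:
            states[path] = "unavailable"       # host queries the owning system
    return states
```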
  • Patent number: 12056044
    Abstract: A system, method, and apparatus are provided to facilitate data structures for a datatype engine and provide inline compaction. The system receives, by a network interface card (NIC), a command to read data from a host memory, wherein the command indicates a datatype. The system generates a plurality of read requests comprising offsets from a base address and corresponding lengths based on the datatype. The system issues the plurality of read requests to the host memory to obtain the data from the host memory. The system obtains a byte-mask descriptor corresponding to the datatype. The system performs, based on the obtained data and the byte-mask descriptor, on-the-fly compaction of the obtained data, thereby allowing the NIC to return a requested subset of the obtained data.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: August 6, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Keith D. Underwood, Robert L. Alverson, Christopher Michael Brueggen
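The datatype-driven read generation and byte-mask compaction above are sketched below; the descriptor layouts are assumptions (offset/length pairs for the datatype, one mask byte per data byte).
```python
from typing import List, Tuple

def build_read_requests(base_addr: int,
                        datatype: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Turn a datatype (offset, length pairs) into absolute read requests."""
    return [(base_addr + offset, length) for offset, length in datatype]

def compact(data: bytes, byte_mask: bytes) -> bytes:
    """Keep only the bytes selected by the mask, modeling the NIC's
    on-the-fly compaction of the obtained data."""
    return bytes(b for b, keep in zip(data, byte_mask) if keep)
```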
  • Patent number: 12050777
    Abstract: A processing device in a memory sub-system determines whether a media endurance metric associated with a memory block of a memory device satisfies one or more conditions. In response to the one or more conditions being satisfied, one or more read margin levels corresponding to a page type associated with the memory device are determined. A machine learning model is applied to the one or more read margin levels to generate a margin prediction value based on the page type and a wordline group associated with the memory device. Based on the margin prediction value, the memory device is assigned to a selected bin of a set of bins. A media scan operation is executed on the memory device in accordance with a scan frequency associated with the selected bin.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: July 30, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Li-Te Chang, Murong Lang, Charles See Yeung Kwong, Vamsi Pavan Rayaprolu, Seungjune Jeon, Zhenming Zhou
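The margin-to-bin mapping above is sketched here; the bin edges, scan intervals, and the model interface are illustrative assumptions.
```python
from typing import Optional, Sequence, Tuple

BIN_EDGES = (0.2, 0.5, 0.8)              # illustrative margin thresholds
SCAN_INTERVALS_HOURS = (6, 24, 72, 168)  # one scan interval per bin

def assign_scan_bin(endurance_metric: float, endurance_condition: float,
                    read_margins: Sequence[float], page_type: str,
                    wordline_group: int, model) -> Optional[Tuple[int, int]]:
    """Return (bin index, scan interval) once the endurance condition is met."""
    if endurance_metric < endurance_condition:
        return None                       # condition not satisfied; no change
    margin_pred = model.predict(read_margins, page_type, wordline_group)
    bin_index = sum(margin_pred > edge for edge in BIN_EDGES)
    return bin_index, SCAN_INTERVALS_HOURS[bin_index]
```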
  • Patent number: 12045472
    Abstract: A storage device includes a storage controller, which is configured to receive, from a host, a command generated by a first virtual machine, and a non-volatile memory device, which is configured to store first data for the command. The command includes one of a retain command, which is generated to command the storage controller to retain the first data in the non-volatile memory device, or an erase command, which is generated to command the storage controller to erase the first data from the non-volatile memory device, when access between the first virtual machine and the storage controller is at least temporarily interrupted.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: July 23, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hee Seok Eun, Ji Soo Kim
  • Patent number: 12032842
    Abstract: An apparatus in one embodiment includes at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to obtain in a host device information characterizing local-remote designations of respective first and second storage systems, one of which is designated as local and one of which is designated as remote, and to adjust path selection in a multi-path layer of the host device based at least in part on the obtained information characterizing the local-remote designations of the respective first and second storage systems. In some embodiments, a given logical storage device is accessible to the multi-path layer of the host device via a first set of paths to the first storage system and a second set of paths to the second storage system, and adjusting path selection in the multi-path layer comprises adjusting weights assigned to respective ones of the paths.
    Type: Grant
    Filed: October 10, 2022
    Date of Patent: July 9, 2024
    Assignee: Dell Products L.P.
    Inventors: Rimpesh Patel, Amit Pundalik Anchi, Vinay G. Rao
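A sketch of weight adjustment in a multi-path layer based on local/remote designations; the weight values and the path object shape are assumptions.
```python
def adjust_path_weights(paths, designations,
                        local_weight: int = 4, remote_weight: int = 1):
    """paths: objects with .storage_system and .weight attributes;
    designations: mapping of storage system -> 'local' or 'remote'."""
    for path in paths:
        if designations.get(path.storage_system) == "local":
            path.weight = local_weight    # prefer paths to the local array
        else:
            path.weight = remote_weight   # keep remote paths as a fallback
    return paths
```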
  • Patent number: 12014059
    Abstract: A storage device, a method of operating the storage device, and a method of operating a host device are provided. The storage device includes a nonvolatile memory (NVM) and a storage controller controlling the nonvolatile memory. The storage controller is configured to receive a command from a host device giving instructions to sanitize data with the use of a cryptographic erase. The storage controller is also configured to, in response to a request from the host device, transmit to the host device a first verification value indicative of whether a first media encryption key (MEK) stored in the NVM has been deleted and a second verification value indicative of whether a second MEK, which is different from the first MEK, has been generated and stored in the NVM.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: June 18, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyung-Jin Lee, Ji Soo Kim, Kyung-Woo Noh, Young Hyun Ji
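A sketch of the cryptographic erase and its two verification values; the NVM key-store interface (read_mek, delete_mek, store_mek) is assumed for illustration.
```python
import os

def cryptographic_erase(nvm):
    """Delete the current media encryption key (MEK), generate a different one,
    and return the two verification values described above."""
    old_mek = nvm.read_mek()
    nvm.delete_mek()
    first_verification = nvm.read_mek() is None          # old MEK deleted?
    new_mek = os.urandom(32)                             # replacement MEK
    nvm.store_mek(new_mek)
    second_verification = (nvm.read_mek() == new_mek) and (new_mek != old_mek)
    return first_verification, second_verification
```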
  • Patent number: 12001681
    Abstract: This application provides a storage device, a distributed storage system, and a data processing method, and belongs to the field of storage technologies. In this application, an AI apparatus is disposed inside a storage device, so that the storage device has an AI computing capability. In addition, the storage device further includes a processor and a hard disk, and therefore further has a service data storage capability. Therefore, convergence of storage and AI computing power is implemented. An AI parameter and service data are transmitted inside the storage device through a high-speed interconnect network without a need of being forwarded through an external network. Therefore, a path for transmitting the service data and the AI parameter is greatly shortened, and the service data can be loaded nearby, thereby accelerating loading.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: June 4, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jinzhong Liu, Hongdong Zhang