Patents Examined by Edward Waddy, Jr.
-
Patent number: 12287964
Abstract: A system and method for managing queues for persistent storage. In some embodiments, the method includes opening, by a first thread running in a host, a first storage object; and creating, by the host, in a memory of the host, a first block device queue, the first block device queue being dedicated to the first storage object.
Type: Grant
Filed: September 9, 2022
Date of Patent: April 29, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sudarsun Kannan, Yujie Ren, Rekha Pitchumani
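As a rough illustration of the queue-per-object idea in the abstract above, the Python sketch below models a host that creates one dedicated in-memory queue when a thread opens a storage object; the QueueManager class, its methods, and the request format are hypothetical names for illustration, not the patented implementation.

```python
from collections import deque
import threading

class QueueManager:
    """Host-side manager that gives each opened storage object its own
    dedicated block-device queue (illustrative sketch only)."""

    def __init__(self):
        self._queues = {}            # storage object name -> dedicated queue
        self._lock = threading.Lock()

    def open_object(self, name: str) -> deque:
        # Opening a storage object creates a queue in host memory that is
        # used exclusively for requests targeting that object.
        with self._lock:
            if name not in self._queues:
                self._queues[name] = deque()
            return self._queues[name]

    def submit(self, name: str, request: dict) -> None:
        # Requests for an object go only to that object's dedicated queue.
        self._queues[name].append(request)

# Example: a thread opens an object and submits a write to its own queue.
mgr = QueueManager()
q = mgr.open_object("obj-0")
mgr.submit("obj-0", {"op": "write", "lba": 128, "len": 8})
print(len(q))  # -> 1
```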
-
Patent number: 12282425
Abstract: Virtual memory pooling, including identifying GPUs of respective IHSs, wherein each of the GPUs is associated with a respective internal memory allocation; partitioning, for each GPU, the internal memory allocation associated with the GPU into a first memory allocation and a second memory allocation; allocating, for each GPU, the first memory allocation of the internal memory allocation associated with the GPU as accessible only by the associated GPU; pooling, for each GPU, the second memory allocation of the internal memory allocation associated with the GPU to define a virtual memory pool, the virtual memory pool accessible by each GPU; and processing, at a first GPU, a computational task, including: accessing the first memory allocation associated with the first GPU; and determining that processing of the computational task exceeds a capacity of the first memory allocation of the first GPU and, in response, requesting access to the virtual memory pool.
Type: Grant
Filed: July 12, 2023
Date of Patent: April 22, 2025
Assignee: Dell Products L.P.
Inventors: Ankit Singh, Deepaganesh Paulraj
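The sketch below only models the memory accounting implied by the abstract: each GPU's internal memory is split into a private partition and a pooled partition, and a task that exceeds the private partition requests capacity from the shared pool. The class names, the 50/50 split, and the gigabyte figures are assumptions made for the example.

```python
class GPU:
    def __init__(self, name, internal_mem_gb, pooled_fraction=0.5):
        self.name = name
        # Partition internal memory into a private part and a pooled part.
        self.private_gb = internal_mem_gb * (1 - pooled_fraction)
        self.pooled_contribution_gb = internal_mem_gb * pooled_fraction

class VirtualMemoryPool:
    """Shared pool built from each GPU's pooled partition (sketch)."""
    def __init__(self, gpus):
        self.capacity_gb = sum(g.pooled_contribution_gb for g in gpus)

def run_task(gpu, pool, task_gb):
    # Use the GPU's private allocation first; if the task exceeds it,
    # request the remainder from the shared virtual memory pool.
    if task_gb <= gpu.private_gb:
        return f"{gpu.name}: fits in private {gpu.private_gb:.0f} GB"
    overflow = task_gb - gpu.private_gb
    if overflow <= pool.capacity_gb:
        return f"{gpu.name}: uses {overflow:.0f} GB from the shared pool"
    return f"{gpu.name}: task does not fit"

gpus = [GPU("gpu0", 16), GPU("gpu1", 16)]
pool = VirtualMemoryPool(gpus)
print(run_task(gpus[0], pool, task_gb=12))  # spills 4 GB into the pool
```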
-
Patent number: 12282434
Abstract: The disclosed technology relates to determining physical zone data within a zoned namespace solid state drive (SSD), associated with logical zone data included in a first received input-output operation, based on a mapping data structure within a namespace of the zoned namespace SSD. A second input-output operation specific to the determined physical zone data is generated, wherein the second input-output operation and the received input-output operation are of the same type. The generated second input-output operation is completed using the determined physical zone data within the zoned namespace SSD.
Type: Grant
Filed: October 16, 2023
Date of Patent: April 22, 2025
Assignee: NETAPP, INC.
Inventors: Abhijeet Prakash Gole, Rohit Shankar Singh, Douglas P. Doucette, Ratnesh Gupta, Sourav Sen, Prathamesh Deshpande
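A minimal sketch of the logical-to-physical zone translation described above, assuming a simple dictionary as the mapping data structure and a (type, zone, offset) tuple as the I/O operation; both are illustrative stand-ins rather than the actual on-drive structures.

```python
# logical zone -> physical zone, per namespace (illustrative mapping only)
zone_map = {0: 7, 1: 3, 2: 12}

def translate_io(io):
    """Take a (type, logical_zone, offset) request and produce a second
    request of the same type against the mapped physical zone."""
    op_type, logical_zone, offset = io
    physical_zone = zone_map[logical_zone]      # mapping data structure lookup
    return (op_type, physical_zone, offset)     # same type, physical zone data

first_io = ("write", 1, 4096)
second_io = translate_io(first_io)
print(second_io)   # ('write', 3, 4096)
```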
-
Patent number: 12277055
Abstract: Systems and methods for address mapping for a memory system are described. A system address that includes a first set of bits may be received. The first set of bits may be partitioned into at least a second set of bits and a third set of bits. A fourth set of bits may be determined based on the second set of bits. A memory address may be determined by using the third set of bits and the fourth set of bits.
Type: Grant
Filed: February 18, 2021
Date of Patent: April 15, 2025
Assignee: Synopsys, Inc.
Inventors: Jun Zhu, Toshinao Matsumura, Gokhan Gultoprak
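The toy function below walks through the described flow: partition a system address into two bit fields, derive a new field from one of them, and assemble the memory address from the derived field and the remaining field. The field widths and the XOR-fold used to derive the fourth set are assumptions for illustration; the patent does not specify this particular derivation.

```python
def map_address(system_addr: int, low_bits: int = 12, high_bits: int = 8) -> int:
    # Third set of bits: the low field of the system address.
    third = system_addr & ((1 << low_bits) - 1)
    # Second set of bits: the next field above it.
    second = (system_addr >> low_bits) & ((1 << high_bits) - 1)
    # Fourth set derived from the second set; the XOR-fold is an assumption.
    fourth = (second ^ (second >> 3)) & ((1 << high_bits) - 1)
    # Memory address built from the fourth and third sets.
    return (fourth << low_bits) | third

print(hex(map_address(0x0A1234)))   # -> 0xb5234 with these example widths
```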
-
Patent number: 12265487
Abstract: The present disclosure discloses a method and circuit for accessing a write data path of an on-chip storage control unit. The method includes: transmitting, by the write data path interface, the write address and the write data to an address conversion unit; transmitting, by the address conversion unit, a target address and the write data to a plurality of storage control units, and determining, by the address conversion unit, a target storage control unit; obtaining, by the address conversion unit, a feedback signal of the target storage control unit, and transmitting, by the address conversion unit, the feedback signal to the target controller; and storing the write data.
Type: Grant
Filed: August 4, 2023
Date of Patent: April 1, 2025
Assignee: SUNLUNE (SINGAPORE) PTE. LTD.
Inventors: Yusheng Zhang, Peijia Tian, Kai Cai, Fuquan Wang
-
Patent number: 12259821
Abstract: There is provided an apparatus comprising input circuitry that receives requests comprising input addresses in an input domain. Output circuitry provides output addresses. The output addresses comprise secure physical addresses to secure storage circuitry and non-secure physical addresses to non-secure storage circuitry. Lookup circuitry stores a plurality of mappings comprising at least one mapping between the input addresses and the secure physical addresses, and at least one mapping between the input addresses and the non-secure physical addresses.
Type: Grant
Filed: January 29, 2020
Date of Patent: March 25, 2025
Assignee: Arm Limited
Inventors: Simon John Craske, Jacob Eapen
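A small sketch of lookup circuitry that maps an input address either into a secure physical address range or a non-secure one, depending on which stored mapping it falls in. The Mapping record, the address ranges, and the range-based match are illustrative assumptions.

```python
from collections import namedtuple

Mapping = namedtuple("Mapping", ["input_base", "size", "output_base", "secure"])

mappings = [
    Mapping(0x0000_0000, 0x1000, 0x8000_0000, secure=True),    # -> secure storage
    Mapping(0x0000_1000, 0x1000, 0x4000_0000, secure=False),   # -> non-secure storage
]

def translate(input_addr: int):
    # Find the stored mapping covering the input address and produce the
    # corresponding secure or non-secure physical address.
    for m in mappings:
        if m.input_base <= input_addr < m.input_base + m.size:
            phys = m.output_base + (input_addr - m.input_base)
            return phys, ("secure" if m.secure else "non-secure")
    raise ValueError("no mapping for input address")

phys, domain = translate(0x0000_0800)
print(hex(phys), domain)   # 0x80000800 secure
```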
-
Patent number: 12260084
Abstract: A storage device is disclosed. The storage device may include storage for data. A host interface logic may receive a dataset and a logical address from a host. A stream assignment logic may assign a stream identifier (ID) to a compressed dataset based on a compression characteristic of the compressed dataset. The stream ID may be one of at least two stream IDs; the compressed dataset may be determined based on the dataset. A logical-to-physical translation layer may map the logical address to a physical address in the storage. A controller may store the compressed dataset at the physical address using the stream ID.
Type: Grant
Filed: August 29, 2022
Date of Patent: March 25, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jingpei Yang, Jing Yang, Rekha Pitchumani
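To make the stream-assignment idea concrete, the sketch below derives a compression characteristic (a zlib compression ratio) from a dataset and picks one of two stream IDs from it; the zlib choice, the 0.5 threshold, and the two-stream setup are assumptions, not the claimed policy.

```python
import os
import zlib

STREAM_HOT, STREAM_COLD = 0, 1   # two stream IDs (assumed labels)

def assign_stream(dataset: bytes) -> int:
    # Compression ratio acts as the compression characteristic here.
    compressed = zlib.compress(dataset)
    ratio = len(compressed) / len(dataset)
    return STREAM_HOT if ratio < 0.5 else STREAM_COLD

print(assign_stream(b"A" * 4096))        # highly compressible -> stream 0
print(assign_stream(os.urandom(4096)))   # random data compresses poorly -> stream 1
```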
-
Patent number: 12248400
Abstract: A computer-implemented method for allocating memory bandwidth of multiple CPU cores in a server includes: receiving an access request to a last level cache (LLC) shared by the multiple CPU cores in the server, the access request being sent from a core with a private cache holding copies of frequently accessed data from a memory; determining whether the access request is an LLC hit or an LLC miss; and controlling a memory bandwidth controller based on the determination. The memory bandwidth controller performs a memory bandwidth throttling to control a request rate between the private cache and the last level cache. The LLC hit of the access request causes the memory bandwidth throttling initiated by the memory bandwidth controller to be disabled, and the LLC miss of the access request causes the memory bandwidth throttling initiated by the memory bandwidth controller to be enabled.
Type: Grant
Filed: August 16, 2023
Date of Patent: March 11, 2025
Assignee: Alibaba (China) Co., Ltd.
Inventors: Lide Duan, Bowen Huang, Qichen Zhang, Shengcheng Wang, Yen-Kuang Chen, Hongzhong Zheng
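The following sketch captures the control rule in the abstract: an LLC hit disables memory-bandwidth throttling for the requesting core's path and an LLC miss enables it. The set-based LLC model and the BandwidthController class are placeholders for the hardware described.

```python
class BandwidthController:
    def __init__(self):
        self.throttling = False

    def set_throttling(self, enabled: bool):
        # Would limit the request rate between the private cache and the LLC.
        self.throttling = enabled

llc = {0x1000, 0x2000}              # addresses currently resident in the LLC
ctrl = BandwidthController()

def access(addr: int) -> str:
    hit = addr in llc
    # Hit -> throttling disabled; miss -> throttling enabled.
    ctrl.set_throttling(not hit)
    return "hit" if hit else "miss"

print(access(0x1000), ctrl.throttling)   # hit False
print(access(0x3000), ctrl.throttling)   # miss True
```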
-
Patent number: 12248404
Abstract: Provided are a ZNS-standard-based storage device providing data compression and a method thereof. A provided method for processing an Input/Output (IO) command includes: providing, by a host to a storage device, a first command for writing data to a first storage zone; allocating, by the storage device according to a Write Pointer (WP) of the first storage zone, a first logical address index to the data to be written by the first command, wherein the first logical address index and a first size of the data to be written by the first command define a first logical address space; compressing the data to be written by the first command to obtain compressed data; storing the compressed data; recording an address for storing the compressed data in association with the first logical address index; providing the first logical address index to the host; and recording, by the host, a first host logical address (HLBA) accessed by the first command in association with the first logical address index.
Type: Grant
Filed: December 29, 2021
Date of Patent: March 11, 2025
Assignee: BEIJING MEMBLAZE TECHNOLOGY CO., LTD
Inventor: Rong Yuan
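A condensed model of the described write path, assuming a per-zone write pointer that hands out the logical address index, zlib as a stand-in for the drive's compressor, and plain dictionaries for the index-to-media and host-side HLBA mappings.

```python
import zlib

class Zone:
    def __init__(self):
        self.write_pointer = 0          # next free logical index in the zone
        self.index_to_media = {}        # logical address index -> media location
        self.media = []                 # stands in for the physical media

    def write(self, data: bytes) -> int:
        index = self.write_pointer      # first logical address index from the WP
        self.write_pointer += len(data) # advance WP by the uncompressed size
        compressed = zlib.compress(data)
        self.media.append(compressed)   # store the compressed data
        # Record the compressed data's address against the logical index.
        self.index_to_media[index] = len(self.media) - 1
        return index                    # index is returned to the host

host_map = {}                           # host logical address (HLBA) -> index
zone = Zone()
host_map[0x500] = zone.write(b"hello zns" * 100)
print(host_map, zone.index_to_media)
```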
-
Patent number: 12242385
Abstract: Methods, systems, and devices for virtual addresses for a memory system are described. In some examples, a virtual address space may be shared across a plurality of memory devices that are included in one or more domains. The memory devices may be able to communicate with each other directly. For example, a first memory device may be configured to generate a data packet that includes an identifier and an address that is included in the shared virtual address space. The data packet may be transmitted to a second memory device based on the identifier, and the second memory device may access a physical address based on the address.
Type: Grant
Filed: January 10, 2022
Date of Patent: March 4, 2025
Assignee: Micron Technology, Inc.
Inventors: Bryan Hornung, Tony M. Brewer
-
Patent number: 12235756
Abstract: Near-memory compute elements perform memory operations and temporarily store at least a portion of address information for the memory operations in local storage. A broadcast memory command is then issued to the near-memory compute elements that causes the near-memory compute elements to perform a subsequent memory operation using their respective address information stored in the local storage. This allows a single broadcast memory command to be used to perform memory operations across multiple memory elements, such as DRAM banks, using bank-specific address information. In one implementation, the approach is used to process workloads with irregular updates to memory while consuming less command bus bandwidth than conventional approaches. Implementations include using conditional flags to selectively designate address information in local storage that is to be processed with the broadcast memory command.
Type: Grant
Filed: December 21, 2021
Date of Patent: February 25, 2025
Assignee: Advanced Micro Devices, Inc.
Inventors: Shaizeen Aga, Johnathan Alsop, Nuwan Jayasena
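The toy model below shows the broadcast pattern: each near-memory compute element stashes its own bank-local address and a conditional flag, and a single broadcast command makes every flagged element apply an update at its stashed address. The increment operation, bank size, and class layout are illustrative assumptions.

```python
class NearMemoryElement:
    def __init__(self, bank_size: int = 8):
        self.bank = [0] * bank_size     # stands in for one DRAM bank
        self.saved_addr = None          # bank-specific address information
        self.flag = False               # conditional flag: participate or not

    def stash(self, addr: int):
        # Temporarily store address information in local storage.
        self.saved_addr = addr
        self.flag = True

    def on_broadcast(self):
        # Only elements whose flag is set act on the broadcast command.
        if self.flag:
            self.bank[self.saved_addr] += 1

elements = [NearMemoryElement() for _ in range(4)]
elements[0].stash(2)
elements[2].stash(5)
for e in elements:                      # one broadcast command to all banks
    e.on_broadcast()
print(elements[0].bank, elements[2].bank)
```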
-
Patent number: 12229054
Abstract: Aspects presented herein relate to methods and devices for graphics processing units, including an apparatus. The apparatus may calculate a first average memory latency for the first configuration of the cache. Further, the apparatus may adjust the first configuration of the cache to a second configuration of the cache. The apparatus may calculate a second average memory latency for the second configuration of the cache. Further, the apparatus may adjust the second configuration to a third configuration of the cache. The apparatus may calculate a third average memory latency for the third configuration of the cache. The apparatus may output an indication of a lowest average memory latency of the first average memory latency, the second average memory latency, or the third average memory latency. Also, the apparatus may set, based on the lowest average memory latency, the cache to the first configuration, the second configuration, or the third configuration.
Type: Grant
Filed: March 31, 2023
Date of Patent: February 18, 2025
Assignee: QUALCOMM Incorporated
Inventor: Suryanarayana Murthy Durbhakula
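A trivial sketch of the search loop the abstract describes: measure an average memory latency under each cache configuration and set the cache to the configuration with the lowest average. The latency samples are made-up placeholder values.

```python
def average_latency(samples):
    return sum(samples) / len(samples)

# Hypothetical measured latencies (ns) under three cache configurations.
measurements = {
    "config_1": [120, 118, 125],
    "config_2": [101, 99, 104],
    "config_3": [110, 112, 108],
}

averages = {cfg: average_latency(s) for cfg, s in measurements.items()}
best = min(averages, key=averages.get)
print(averages)
print("set cache to", best)   # configuration with the lowest average latency
```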
-
Patent number: 12229051
Abstract: Memory modules and associated devices and methods are provided using a memory copy function between a cache memory and a main memory that may be implemented in hardware. Address translation may additionally be provided.
Type: Grant
Filed: December 29, 2022
Date of Patent: February 18, 2025
Assignee: Intel Germany GmbH & Co. KG
Inventors: Ritesh Banerjee, Jiaxiang Shi, Ingo Volkening
-
Patent number: 12216588
Abstract: A memory module may include J memory chips configured to input/output data in response to each of a plurality of translated address signals; and an address remapping circuit configured to generate a plurality of preliminary translated address signals by adding first correction values to a target address signal provided from an exterior of the memory module, and to generate the plurality of translated address signals by shifting all bits of each of the plurality of preliminary translated address signals so that K bits included in a bit string of each of the plurality of preliminary translated address signals are moved to other positions of each bit string.
Type: Grant
Filed: December 6, 2021
Date of Patent: February 4, 2025
Assignee: SK hynix Inc.
Inventors: Sung Woo Hyun, Hyeong Tak Ji, Myoung Seo Kim, Jae Hoon Kim, Eui Cheol Lim
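An approximate model of the remapping: add a per-chip correction value to the incoming target address to get a preliminary translated address, then shift (here, rotate) the bit string so that K bits move to other positions. The 16-bit width, correction values, and rotation amount are assumptions for the example.

```python
ADDR_BITS = 16

def rotate_left(value: int, k: int, width: int = ADDR_BITS) -> int:
    # Shift all bits so that k bits wrap around to new positions.
    mask = (1 << width) - 1
    return ((value << k) | (value >> (width - k))) & mask

def remap(target_addr: int, corrections, k: int = 3):
    translated = []
    for corr in corrections:
        # Preliminary translated address: target address plus a correction value.
        preliminary = (target_addr + corr) & ((1 << ADDR_BITS) - 1)
        # Translated address: all bits shifted, moving k bits elsewhere.
        translated.append(rotate_left(preliminary, k))
    return translated

print([hex(a) for a in remap(0x1234, corrections=[0, 5, 9, 12])])
```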
-
Patent number: 12204441
Abstract: A method includes receiving, via a communication link and at a device of an integrated circuit system, a cache line comprising a destination address, determining, via the device, a type of memory or storage associated with the destination address, the type of memory or storage comprising persistent or non-persistent, and tagging the cache line with metadata in a manner indicating the type of memory or storage associated with the destination address.
Type: Grant
Filed: December 24, 2020
Date of Patent: January 21, 2025
Assignee: Altera Corporation
Inventors: Sharath Raghava, Nagabhushan Chitlur, Harsha Gupta
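A small sketch of the tagging step, assuming a fixed address window stands in for persistent memory: the destination address is classified as persistent or non-persistent and the cache line's metadata is tagged accordingly. The range values and the dictionary representation of a cache line are illustrative.

```python
# Assumed persistent-memory window; real systems would learn this from
# platform configuration rather than a hard-coded range.
PERSISTENT_RANGES = [(0x1_0000_0000, 0x2_0000_0000)]

def classify(dest_addr: int) -> str:
    for lo, hi in PERSISTENT_RANGES:
        if lo <= dest_addr < hi:
            return "persistent"
    return "non-persistent"

def tag_cache_line(line: dict) -> dict:
    # Tag the cache line with metadata indicating the memory/storage type.
    line["metadata"] = {"memory_type": classify(line["dest_addr"])}
    return line

print(tag_cache_line({"dest_addr": 0x1_8000_0000, "data": b"\x00" * 64}))
```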
-
Patent number: 12204455
Abstract: A method includes synthesizing hardware description language (HDL) code into a netlist comprising first, second, and third components. The method further includes allocating addresses to each component of the netlist. The addresses allocated to each component include assigned addresses and unassigned addresses. An internal address space for a chip is formed based on the allocated addresses. The internal address space includes the assigned addresses followed by the unassigned addresses for the first component, concatenated to the assigned addresses followed by the unassigned addresses for the second component, concatenated to the assigned addresses followed by the unassigned addresses for the third component. An external address space for components outside of the chip is generated that includes only the assigned addresses of the first component concatenated to the assigned addresses of the second component concatenated to the assigned addresses of the third component. Internal addresses are translated to external addresses and vice versa.
Type: Grant
Filed: February 22, 2023
Date of Patent: January 21, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Saurabh Shrivastava, Shrikant Sundaram, Guy T. Hutchison
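The sketch below models the two address spaces and one direction of the translation: the internal space lays out each component's assigned addresses padded by its unassigned ones, while the external space concatenates only the assigned addresses, so translating an internal address means subtracting the holes. The component sizes are arbitrary example values.

```python
components = [
    {"assigned": 6, "unassigned": 2},   # first component
    {"assigned": 4, "unassigned": 4},   # second component
    {"assigned": 5, "unassigned": 3},   # third component
]

def internal_to_external(internal_addr: int):
    int_base = ext_base = 0
    for c in components:
        span = c["assigned"] + c["unassigned"]   # internal layout keeps both
        if internal_addr < int_base + span:
            offset = internal_addr - int_base
            if offset >= c["assigned"]:
                return None                      # unassigned: not externally visible
            return ext_base + offset             # external layout keeps assigned only
        int_base += span
        ext_base += c["assigned"]
    return None

print(internal_to_external(9))    # second component, offset 1 -> external 7
print(internal_to_external(7))    # unassigned hole in the first component -> None
```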
-
Patent number: 12197509
Abstract: An algorithmic TCAM-based ternary lookup method is provided. The method stores entries for ternary lookup in several sub-tables. All entries in each sub-table have a sub-table key that includes the same common portion of the entry. No two sub-tables are associated with the same sub-table key. The method stores the keys in a sub-table key table in TCAM. Each key has a different priority. The method stores the entries for each sub-table in random access memory. Each entry in a sub-table has a different priority. The method receives a search request to perform a ternary lookup for an input data item. A ternary lookup into the sub-table key table stored in TCAM is performed to retrieve a sub-table index. The method performs a ternary lookup across the entries of the sub-table associated with the retrieved index to identify the highest-priority matched entry for the input data item.
Type: Grant
Filed: December 23, 2022
Date of Patent: January 14, 2025
Assignee: Barefoot Networks, Inc.
Inventors: Patrick Bosshart, Michael G. Ferrara, Jay E. S. Peterson
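A software model of the two-stage lookup: a ternary match over the sub-table keys (the part that would sit in TCAM) returns a sub-table index, and a second ternary match over that sub-table's entries (held in ordinary memory) returns the highest-priority hit. The patterns, masks, and priority encoding are example values.

```python
def ternary_match(value: int, pattern: int, mask: int) -> bool:
    # A bit position is compared only where the mask bit is 1.
    return (value & mask) == (pattern & mask)

# Stage 1: sub-table keys (pattern, mask, priority, sub-table index).
subtable_keys = [
    (0b1010_0000, 0b1111_0000, 1, 0),   # common prefix 1010 -> sub-table 0
    (0b1100_0000, 0b1111_0000, 2, 1),   # common prefix 1100 -> sub-table 1
]

# Stage 2: per-sub-table entries (pattern, mask, priority, result).
subtables = {
    0: [(0b1010_0001, 0xFF, 1, "entry A"), (0b1010_0000, 0xF0, 2, "entry B")],
    1: [(0b1100_1111, 0xFF, 1, "entry C")],
}

def lookup(key: int):
    # Ternary lookup over the sub-table key table to get a sub-table index.
    hits = [(p, idx) for pat, msk, p, idx in subtable_keys
            if ternary_match(key, pat, msk)]
    if not hits:
        return None
    _, idx = min(hits)                   # lowest number = highest priority
    # Ternary lookup across that sub-table's entries for the best match.
    matches = [(p, res) for pat, msk, p, res in subtables[idx]
               if ternary_match(key, pat, msk)]
    return min(matches)[1] if matches else None

print(lookup(0b1010_0001))   # -> entry A (higher priority than entry B)
```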
-
Patent number: 12197326
Abstract: A decoding device may determine a candidate data unit among a plurality of data units included in one data chunk, in parallel with an operation of decoding a target data unit among the plurality of data units. The decoding device may determine whether to decode the candidate data unit, and may decode the candidate data unit according to whether to decode the candidate data unit, after executing decoding on the target data unit.
Type: Grant
Filed: January 11, 2023
Date of Patent: January 14, 2025
Assignee: SK hynix Inc.
Inventors: Dae Sung Kim, Bi Woong Chung
-
Patent number: 12197327
Abstract: A decoding device may determine a candidate data unit among a plurality of data units included in one data chunk, in parallel with an operation of decoding a target data unit among the plurality of data units. The decoding device may determine whether to decode the candidate data unit, and may decode the candidate data unit according to whether to decode the candidate data unit, after executing decoding on the target data unit.
Type: Grant
Filed: June 21, 2023
Date of Patent: January 14, 2025
Assignee: SK hynix Inc.
Inventors: Dae Sung Kim, Bi Woong Chung
-
Patent number: 12189538
Abstract: The invention introduces a method for performing operations on namespaces of a flash memory device, by a processing unit of a storage device, at least including the steps of: receiving a cross-namespace data-movement command from a host, requesting to move user data of a first logical address of a first namespace to a second logical address of a second namespace; cutting first physical address information corresponding to the first logical address from a first logical-physical mapping table corresponding to the first namespace; and storing the first physical address information in an entry corresponding to the second logical address of a second logical-physical mapping table corresponding to the second namespace.
Type: Grant
Filed: March 9, 2022
Date of Patent: January 7, 2025
Assignee: SILICON MOTION, INC.
Inventor: Sheng-Liu Lin
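A compact sketch of the cross-namespace move: rather than copying user data, the physical address information is cut from the source namespace's logical-physical mapping table and stored at the destination logical address in the target namespace's table. The dictionary-based tables and the address values are illustrative.

```python
l2p = {
    "ns1": {0x10: 0xABCD},   # namespace 1: LBA 0x10 -> physical page 0xABCD
    "ns2": {},               # namespace 2: initially empty
}

def cross_namespace_move(src_ns, src_lba, dst_ns, dst_lba):
    # Cut the physical address information from the source mapping table...
    phys = l2p[src_ns].pop(src_lba)
    # ...and store it in the destination namespace's mapping table entry.
    l2p[dst_ns][dst_lba] = phys

cross_namespace_move("ns1", 0x10, "ns2", 0x20)
print(l2p)   # {'ns1': {}, 'ns2': {32: 43981}}
```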