Patents Examined by Titus Wong
  • Patent number: 12360930
    Abstract: Examples of computing systems that include I/O device(s) that respect an existing hardware resource partitioning in a modern computing platform are provided. A computing system includes at least one CPU having multiple cores and one or more CPU caches. The computing system also includes a main memory having locations, where each location maps to a set in the one or more CPU caches. A first subset of locations is partitioned for thread(s) of a first application and assigned to non-contiguous memory locations of the main memory. The computing system further includes an I/O device, separate from the CPU, that is configured to store I/O data in a second subset of locations different from the first subset. The second subset of locations consists of non-contiguous memory locations of the main memory that are separated in address space according to a predefined pattern.
    Type: Grant
    Filed: November 20, 2023
    Date of Patent: July 15, 2025
    Assignee: Honeywell International Inc.
    Inventors: Pavel Zaykov, Larry James Miller
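    A minimal sketch of the cache-set partitioning idea this entry describes, assuming a simple set-indexed cache model in which an address's set is derived from its line address; the line size, set count, and the split of sets between the application and the I/O device are illustrative assumptions, not values from the patent.
```python
# Illustrative cache-set ("coloring") partitioning: addresses are grouped by the
# cache set they map to, and the application and the I/O device are given
# disjoint groups of sets. All parameters are hypothetical.
LINE_SIZE = 64          # bytes per cache line (assumed)
NUM_SETS = 1024         # sets in the shared CPU cache (assumed)

def cache_set(addr):
    """Set index for an address in a simple set-indexed cache."""
    return (addr // LINE_SIZE) % NUM_SETS

def partition_addresses(addresses, app_sets, io_sets):
    """Split candidate addresses into app-owned and I/O-owned subsets.

    The resulting subsets are non-contiguous in the address space but never
    collide in the cache, because they map to disjoint set ranges.
    """
    app, io = [], []
    for a in addresses:
        s = cache_set(a)
        if s in app_sets:
            app.append(a)
        elif s in io_sets:
            io.append(a)
    return app, io

if __name__ == "__main__":
    candidates = range(0, 1 << 22, LINE_SIZE)        # 4 MiB of line-aligned addresses
    app_sets = set(range(0, 512))                    # first half of the sets -> application
    io_sets = set(range(512, 1024))                  # second half -> I/O device
    app_addrs, io_addrs = partition_addresses(candidates, app_sets, io_sets)
    assert not (set(map(cache_set, app_addrs)) & set(map(cache_set, io_addrs)))
    print(len(app_addrs), len(io_addrs))
```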
  • Patent number: 12335339
    Abstract: A system for prefetching a target address is applied to a server. The system includes: an Application Programming Interface (API) module, a threshold module, a control module and a first engine module, wherein the API module, the threshold module, the control module and the first engine module are all arranged in a first server; the API module acquires a Remote Direct Memory Access (RDMA) instruction in the first server; a threshold of the first engine module is set in the threshold module, and when a size of RDMA data corresponding to the RDMA instruction exceeds the threshold, the threshold module sends a thread increasing instruction to the control module; and the control module controls, according to the thread increasing instruction sent by the threshold module, a network card of the first server to increase the number of threads of the first engine module.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: June 17, 2025
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Ye Ren, Weisong Guo, Xiang Chen
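    The threshold-and-thread-scaling control flow in this entry can be summarized roughly as below; the class names, the threshold value, and the way threads are "increased" are simplified stand-ins for the NIC-level mechanism the abstract describes, not the patented implementation.
```python
# Rough sketch of threshold-driven thread scaling for an RDMA engine.
# Names and numbers are illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class EngineModule:
    threads: int = 1

class ThresholdModule:
    def __init__(self, threshold_bytes):
        self.threshold_bytes = threshold_bytes

    def check(self, rdma_size_bytes):
        """True when the RDMA transfer is large enough to warrant more threads."""
        return rdma_size_bytes > self.threshold_bytes

class ControlModule:
    def __init__(self, engine, max_threads=8):
        self.engine = engine
        self.max_threads = max_threads

    def increase_threads(self):
        """Stand-in for telling the network card to add a worker thread to the engine."""
        self.engine.threads = min(self.engine.threads + 1, self.max_threads)

def handle_rdma_instruction(size_bytes, threshold, control):
    if threshold.check(size_bytes):
        control.increase_threads()

if __name__ == "__main__":
    engine = EngineModule()
    control = ControlModule(engine)
    threshold = ThresholdModule(threshold_bytes=1 << 20)   # 1 MiB, arbitrary
    for size in (4096, 2 << 20, 8 << 20):
        handle_rdma_instruction(size, threshold, control)
    print(engine.threads)   # grew once for each of the two large transfers
```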
  • Patent number: 12326816
    Abstract: Techniques for offload device address translation fetching are disclosed. In the illustrative embodiment, a processor of a compute device sends a translation fetch descriptor to an offload device before sending a corresponding work descriptor to the offload device. The offload device can request translations for virtual memory addresses and cache the corresponding physical addresses for later use. While the offload device is fetching virtual address translations, the compute device can perform other tasks before sending the corresponding work descriptor, including operations that modify the contents of the memory addresses whose translations are being cached. Even if the offload device does not cache the translations, the fetching can warm up the cache in a translation lookaside buffer. Such an approach can reduce the latency overhead that the offload device would otherwise incur in sending the memory address translation requests required to execute the work descriptor.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: June 10, 2025
    Assignee: Intel Corporation
    Inventors: Saurabh Gayen, Philip R. Lantz, Dhananjay A. Joshi, Rupin H. Vakharwala, Rajesh M. Sankaran, Narayan Ranganathan, Sanjay Kumar
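    A toy model of the fetch-then-work flow in this entry: a translation-fetch descriptor warms a small translation cache on the offload device before the work descriptor arrives. The page size, the `translate` callable, and the dictionary-based cache are assumptions made for illustration.
```python
# Toy offload device that caches virtual-to-physical translations ahead of time.
PAGE_SIZE = 4096   # assumed page size

class OffloadDevice:
    def __init__(self, translate):
        self.translate = translate      # callable standing in for the IOMMU/translation path
        self.tlb = {}                   # virtual page -> physical page

    def handle_translation_fetch(self, vaddr, length):
        """Pre-translate every page the future work descriptor will touch."""
        first = vaddr // PAGE_SIZE
        last = (vaddr + length - 1) // PAGE_SIZE
        for vpage in range(first, last + 1):
            self.tlb[vpage] = self.translate(vpage)

    def handle_work(self, vaddr, length):
        """Execute the work descriptor using cached translations when possible."""
        physical_pages = []
        first = vaddr // PAGE_SIZE
        last = (vaddr + length - 1) // PAGE_SIZE
        for vpage in range(first, last + 1):
            ppage = self.tlb.get(vpage)
            if ppage is None:                    # miss: pay the translation latency now
                ppage = self.translate(vpage)
                self.tlb[vpage] = ppage
            physical_pages.append(ppage)
        return physical_pages

if __name__ == "__main__":
    dev = OffloadDevice(translate=lambda vpage: vpage + 0x1000)  # fake mapping
    dev.handle_translation_fetch(vaddr=0x200000, length=16384)   # sent early by the CPU
    print(dev.handle_work(vaddr=0x200000, length=16384))         # all translation hits
```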
  • Patent number: 12326806
    Abstract: Intelligent memory brokering for multiple process instances, such as relational databases (e.g., SQL servers), reclaims memory based on value, thereby minimizing cost across instances. An exemplary solution includes: based at least on a trigger event, determining a memory profile for each of a plurality of process instances at a computing node; determining an aggregate memory profile, the aggregate memory profile indicating a memory unit cost for each of a plurality of memory units; determining a count of memory units to be reclaimed; identifying, based at least on the aggregate memory profile and the count of memory units to be reclaimed, a count of memory units to be reclaimed within each process instance so that the total cost of reclaiming the determined count is minimized; and communicating, to each process instance having identified memory units to be reclaimed, a count of memory units to be reclaimed within the process instance.
    Type: Grant
    Filed: June 19, 2023
    Date of Patent: June 10, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Manoj Syamala, Vivek Narasayya, Junfeng Dong, Ajay Kalhan, Shize Xu, Changsong Li, Pankaj Arora, Jiaqi Liu, John M. Oslake, Arnd Christian König
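    In its simplest reading, the cost-minimizing selection in this entry reduces to picking the globally cheapest memory units across all instances. The sketch below assumes each instance reports a flat list of per-unit costs, which is a simplification of the "memory profile" the abstract describes.
```python
# Greedy broker: reclaim the k globally cheapest memory units across instances.
# Profiles and costs are invented for illustration.
import heapq
from collections import Counter

def plan_reclamation(profiles, units_to_reclaim):
    """profiles maps instance name -> cost of each reclaimable memory unit.

    Returns how many units each instance should give back so that the total
    cost of reclaiming `units_to_reclaim` units is minimized.
    """
    all_units = ((cost, instance) for instance, costs in profiles.items() for cost in costs)
    cheapest = heapq.nsmallest(units_to_reclaim, all_units)
    return Counter(instance for _, instance in cheapest)

if __name__ == "__main__":
    profiles = {
        "sql_instance_a": [0.1, 0.4, 0.9, 2.0],   # mostly cold pages: cheap to give up
        "sql_instance_b": [1.5, 1.8, 2.2],        # hot working set: expensive to give up
    }
    print(plan_reclamation(profiles, units_to_reclaim=3))
    # Counter({'sql_instance_a': 3}): the broker leaves the hot instance alone
```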
  • Patent number: 12282438
    Abstract: The technology disclosed herein pertains to a system and method for scaling storage using peer-to-peer NVMe communication, the method including selecting one of a plurality of NVMe devices as the principal device to control communication with a host via a PCI switch, designating the remainder of the plurality of NVMe devices as subordinate devices, and controlling the communication between the host and the subordinate devices using a PCI P2P DMA between the principal device and the subordinate devices.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: April 22, 2025
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Marc Timothy Jones, Jason Wayne Kinsey, Benjamin James Scott, Robert William Dixon
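    A schematic model of the principal/subordinate arrangement in this entry: the host addresses only the principal device, which forwards work for other devices over a peer-to-peer path. The striping rule and class names are illustrative only, not the patented routing scheme.
```python
# Schematic principal/subordinate NVMe routing behind a PCI switch.
# The forwarding rule and all names are illustrative.
class NvmeDevice:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

class PrincipalDevice(NvmeDevice):
    """The one device the host addresses; it spreads data over its peers."""
    def __init__(self, name, subordinates):
        super().__init__(name)
        self.subordinates = subordinates

    def host_write(self, lba, data):
        # Simple striping rule: every Nth LBA stays on the principal,
        # the rest are forwarded to subordinates over the peer-to-peer path.
        targets = [self] + self.subordinates
        target = targets[lba % len(targets)]
        target.write(lba, data)          # stand-in for a PCI P2P DMA transfer
        return target.name

if __name__ == "__main__":
    subs = [NvmeDevice("sub0"), NvmeDevice("sub1")]
    principal = PrincipalDevice("principal", subs)
    for lba in range(6):
        print(lba, "->", principal.host_write(lba, b"block"))
```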
  • Patent number: 12277058
    Abstract: Disclosed are systems and methods that determine whether instances of data (e.g., forward activations, backward derivatives of activations) that are used to train deep neural networks are to be stored on-chip or off-chip. The disclosed systems and methods are also used to prune the data (discard or delete selected instances of data). A system includes a hierarchical arrangement of on-chip and off-chip memories, and also includes a hierarchical arrangement of data selector devices that are used to decide whether to discard data and where in the system the data is to be discarded.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: April 15, 2025
    Assignee: Alibaba Group Holding Limited
    Inventors: Minghai Qin, Chunsheng Liu, Zhibin Xiao, Tianchan Guan, Yuan Gao
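    A simplified data-selector rule in the spirit of this entry: each activation tensor is kept on-chip, spilled off-chip, or pruned, depending on whether the backward pass will reuse it and on how much on-chip memory remains. The budget and criteria are placeholders, not the patent's actual policy.
```python
# Placeholder policy for deciding where (or whether) to keep training activations.
from dataclasses import dataclass

ON_CHIP_BUDGET_BYTES = 8 << 20   # 8 MiB of on-chip memory, arbitrary

@dataclass
class Activation:
    name: str
    size_bytes: int
    needed_for_backward: bool      # will the backward pass reuse this tensor?

def place(activations):
    """Return {tensor name: 'on_chip' | 'off_chip' | 'pruned'}."""
    decisions, used = {}, 0
    for act in sorted(activations, key=lambda a: a.size_bytes):
        if not act.needed_for_backward:
            decisions[act.name] = "pruned"          # discard: nothing downstream reads it
        elif used + act.size_bytes <= ON_CHIP_BUDGET_BYTES:
            decisions[act.name] = "on_chip"
            used += act.size_bytes
        else:
            decisions[act.name] = "off_chip"        # spill to off-chip memory
    return decisions

if __name__ == "__main__":
    acts = [
        Activation("conv1", 2 << 20, True),
        Activation("relu1", 2 << 20, False),
        Activation("conv2", 12 << 20, True),
    ]
    print(place(acts))
```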
  • Patent number: 12271752
    Abstract: A method of managing CPU cores in a data storage apparatus (DSA) configured to perform both host I/O (Input/Output) processing and background storage processing is provided. The method includes (a) selectively classifying background storage tasks into either a first classification for longer-running tasks or a second classification for shorter-running tasks; (b) selecting CPU cores that are running fewer than a threshold number of first-classification background tasks to process host I/O requests; and (c) processing the host I/O requests on their selected CPU cores. An apparatus, system, and computer program product for performing a similar method are also provided.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: April 8, 2025
    Assignee: EMC IP Holding Company LLC
    Inventors: John Gillono, Philippe Armangau, Vamsi K. Vankamamidi, Ashok Tamilarasan
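    The core-selection rule in this entry sketches directly: classify background tasks as long- or short-running, then route host I/O only to cores running fewer than a threshold of long-running tasks. The threshold, task names, and data layout below are illustrative assumptions.
```python
# Selecting CPU cores for host I/O based on how many long-running
# background tasks each core is already executing. Numbers are illustrative.
LONG_TASK_THRESHOLD = 2   # cores at or above this count are skipped for host I/O

LONG_RUNNING = {"garbage_collect", "rebuild"}        # first classification (assumed names)
SHORT_RUNNING = {"flush", "dedupe_chunk"}            # second classification (assumed names)

def classify(task_name):
    return "long" if task_name in LONG_RUNNING else "short"

def eligible_cores(core_tasks):
    """core_tasks maps core id -> names of background tasks running on it."""
    eligible = []
    for core, tasks in core_tasks.items():
        long_count = sum(1 for t in tasks if classify(t) == "long")
        if long_count < LONG_TASK_THRESHOLD:
            eligible.append(core)
    return eligible

if __name__ == "__main__":
    cores = {
        0: ["garbage_collect", "rebuild"],   # busy with long tasks: skipped
        1: ["flush"],
        2: [],
    }
    print(eligible_cores(cores))   # [1, 2]
```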
  • Patent number: 12271326
    Abstract: A data flow-based neural network multi-engine synchronous calculation system includes: a plurality of calculation engines, each including a plurality of calculation modules and at least one cache module located at different layers, where each calculation module is configured to calculate the input calculation graph provided by the cache module or calculation module of the previous layer so as to obtain an output calculation graph; and at least one synchronization module, each being configured to monitor the amount of input-calculation-graph data stored by the cache module on the same layer in each calculation engine and, when the data amount reaches a preset value corresponding to each cache module, control each cache module on that layer to output the stored input calculation graph to the calculation module on the next layer.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: April 8, 2025
    Assignee: Shenzhen Corerain Technologies Co., Ltd.
    Inventors: Li Jiao, Yuanchao Li, Kuen Hung Tsoi, Xinyu Niu
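    A simplified view of the synchronization rule in this entry: a monitor watches how much of a layer's input has accumulated in each engine's cache and releases it downstream only once every engine has buffered a preset amount. The data representation and class names are stand-ins.
```python
# Synchronizing several calculation engines at a layer boundary: the monitor
# releases a layer's cached inputs only when every engine has buffered enough.
class LayerCache:
    def __init__(self):
        self.buffered = 0      # amount of input-calculation-graph data buffered

    def add(self, amount):
        self.buffered += amount

    def drain(self):
        released, self.buffered = self.buffered, 0
        return released

class SyncModule:
    def __init__(self, caches, preset):
        self.caches = caches   # one cache per engine, all at the same layer
        self.preset = preset

    def maybe_release(self):
        """Release to the next layer only when all engines hit the preset amount."""
        if all(c.buffered >= self.preset for c in self.caches):
            return [c.drain() for c in self.caches]
        return None

if __name__ == "__main__":
    caches = [LayerCache() for _ in range(3)]
    sync = SyncModule(caches, preset=4)
    caches[0].add(4); caches[1].add(4)
    print(sync.maybe_release())     # None: the third engine is still behind
    caches[2].add(4)
    print(sync.maybe_release())     # [4, 4, 4]: all engines advance together
```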
  • Patent number: 12267387
    Abstract: A system comprises control circuitry that is operable to assign a first of a plurality of computing devices to serve file system requests destined for any of a first plurality of network addresses; assign a second of the computing devices to serve file system requests destined for any of a second plurality of network addresses; maintain statistics regarding file system requests sent to each of the first plurality of network addresses and the second plurality of network addresses; and reassign, based on the statistics, the first of the computing devices to serve file system requests destined for a selected one of the second plurality of network addresses.
    Type: Grant
    Filed: July 27, 2023
    Date of Patent: April 1, 2025
    Assignee: Weka.IO Ltd.
    Inventors: Maor Ben Dayan, Omri Palmon, Liran Zvibel
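    One way to read the reassignment logic in this entry: track request counts per network address and move the hottest address owned by the most loaded node to the least loaded node. The rebalancing rule below is a guess at a reasonable policy, not the patented one.
```python
# Statistics-driven reassignment of network addresses between file-system nodes.
# The rebalancing heuristic is illustrative.
def total_load(node_addresses, request_counts):
    return sum(request_counts.get(a, 0) for a in node_addresses)

def rebalance(assignment, request_counts):
    """assignment: node name -> set of addresses it serves. Mutated in place."""
    nodes = sorted(assignment, key=lambda n: total_load(assignment[n], request_counts))
    lightest, heaviest = nodes[0], nodes[-1]
    if lightest == heaviest or not assignment[heaviest]:
        return
    # Move the busiest address off the most loaded node onto the least loaded one.
    hottest = max(assignment[heaviest], key=lambda a: request_counts.get(a, 0))
    assignment[heaviest].remove(hottest)
    assignment[lightest].add(hottest)

if __name__ == "__main__":
    assignment = {"node_a": {"10.0.0.1", "10.0.0.2"}, "node_b": {"10.0.0.3", "10.0.0.4"}}
    counts = {"10.0.0.1": 5, "10.0.0.2": 7, "10.0.0.3": 900, "10.0.0.4": 40}
    rebalance(assignment, counts)
    print(assignment)   # node_a now also serves 10.0.0.3
```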
  • Patent number: 12259817
    Abstract: A smart storage device is provided. The smart storage device includes a smart interface connected to a host device. An accelerator circuit is connected to the smart interface through a data bus conforming to a CXL.cache protocol and a CXL.mem protocol. The accelerator circuit is configured to perform acceleration computation in response to a computation command of the host device. A storage controller is connected to the smart interface through a data bus conforming to a CXL.io protocol. The storage controller is configured to control a data access operation for a storage device in response to a data access command of the host device. The accelerator circuit can directly access the storage device through an internal bus connected directly to the storage controller.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: March 25, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyeok Jun Choe, Youn Ho Jeon, Young Geon Yoo, Hyo-Deok Shin, I Poom Jeong
  • Patent number: 12260092
    Abstract: As described herein, an apparatus may include a memory that includes a first portion, a second portion, and a third portion. The apparatus may also include a memory controller that includes a first logical-to-physical table stored in a buffer memory. The memory controller may determine that the first portion is accessed sequential to the second portion and may adjust the first logical-to-physical table to cause a memory transaction performed by the memory controller to access the third portion as opposed to the first portion.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: March 25, 2025
    Assignee: Micron Technology, Inc.
    Inventor: Rajesh N. Gupta
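    A much-simplified model of the table adjustment in this entry: when the controller notices one logical region being accessed immediately after another, it rewrites the logical-to-physical entry so future transactions land in a different physical portion. The names of the portions and the choice of the third portion are arbitrary here.
```python
# Simplified logical-to-physical (L2P) adjustment: detect a sequential access
# pair and redirect the mapping to a different physical portion. Details are invented.
class MemoryController:
    def __init__(self):
        self.l2p = {"first": "portion_1", "second": "portion_2"}   # logical -> physical
        self.last_access = None

    def access(self, logical):
        physical = self.l2p[logical]
        # Detect "first portion accessed sequential to second" and remap for next time.
        if self.last_access == "second" and logical == "first":
            self.l2p["first"] = "portion_3"     # future transactions use the third portion
        self.last_access = logical
        return physical

if __name__ == "__main__":
    ctrl = MemoryController()
    print(ctrl.access("second"))   # portion_2
    print(ctrl.access("first"))    # portion_1 (the remap takes effect afterwards)
    print(ctrl.access("first"))    # portion_3
```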
  • Patent number: 12242403
    Abstract: A system is presented that includes two data processing systems that are coupled via a network, each data processing system including a reconfigurable processor with a reconfigurable processor memory, a host that is coupled to the reconfigurable processor and that includes a host processor and a host memory that is coupled to the host processor, and a network interface controller (NIC) that is operatively coupled to the reconfigurable processor and to the host processor. The reconfigurable processor of one of the data processing systems is configured to implement a virtual function that uses a virtual address for a memory access operation. An application programming interface (API) in the host processor translates the virtual address into a physical address, and the NIC uses the physical address to initiate a direct memory access operation at the reconfigurable processor memory or the host memory of the other data processing system.
    Type: Grant
    Filed: March 14, 2023
    Date of Patent: March 4, 2025
    Assignee: SambaNova Systems, Inc.
    Inventors: Conrad Alexander Turlik, Sudhakar Dindukurti, Anand Misra, Arjun Sabnis, Milad Sharif, Ravinder Kumar, Joshua Earle Polzin, Arnav Goel, Steven Dai
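    A bare-bones rendering of the translation step this entry describes: a host-side API resolves the virtual address the virtual function used, and the NIC is handed the physical address to start the remote direct memory access. The page table, page size, and NIC interface are stand-ins, not the vendor's API.
```python
# Host-side translation of a virtual function's virtual address before the NIC
# starts a remote DMA. The page table and NIC interface are stand-ins.
PAGE_SIZE = 4096   # assumed page size

class HostApi:
    def __init__(self, page_table):
        self.page_table = page_table    # virtual page -> physical page

    def virtual_to_physical(self, vaddr):
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        return self.page_table[vpage] * PAGE_SIZE + offset

class Nic:
    def start_dma(self, physical_addr, length, remote_system):
        # Stand-in for issuing the actual direct memory access over the network.
        return f"DMA {length} bytes at 0x{physical_addr:x} -> {remote_system}"

if __name__ == "__main__":
    api = HostApi(page_table={0x200: 0x7f3})
    nic = Nic()
    vaddr = 0x200 * PAGE_SIZE + 0x40        # address used by the virtual function
    paddr = api.virtual_to_physical(vaddr)
    print(nic.start_dma(paddr, 4096, "data_processing_system_2"))
```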
  • Patent number: 12236125
    Abstract: Methods, systems, and devices for performance monitoring for a memory system are described. A memory system may use a set of counters to determine state information for the memory system. The memory system may also use a set of timers to determine latency information for the memory system. In response to a request for performance information, the memory system may transmit state information, latency information, or both to a host system.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: February 25, 2025
    Assignee: Micron Technology, Inc.
    Inventor: David Andrew Roberts
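    The counter/timer bookkeeping in this entry maps naturally onto a small monitor object: counters accumulate state information, timers accumulate latency, and a request from the host returns both. The specific metrics below are invented examples.
```python
# Minimal performance monitor: counters for state, timers for latency.
# The specific metrics are invented examples.
import time

class PerformanceMonitor:
    def __init__(self):
        self.counters = {"reads": 0, "writes": 0}
        self.latency_total = {"reads": 0.0, "writes": 0.0}

    def record(self, op, duration_s):
        self.counters[op] += 1
        self.latency_total[op] += duration_s

    def report(self):
        """What the memory system would return to the host on request."""
        latency = {}
        for op, count in self.counters.items():
            latency[op] = self.latency_total[op] / count if count else 0.0
        return {"state": dict(self.counters), "average_latency_s": latency}

if __name__ == "__main__":
    mon = PerformanceMonitor()
    for _ in range(3):
        start = time.perf_counter()
        sum(range(10000))                       # stand-in for servicing a read
        mon.record("reads", time.perf_counter() - start)
    print(mon.report())
```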
  • Patent number: 12229061
    Abstract: Disclosed is an electronic device which includes a plurality of memory devices, a memory controller, a first signal line that makes electrical connection between the memory controller and a first branch point, a second signal line that makes electrical connection between the first branch point and a second branch point, a third signal line that makes electrical connection between the first branch point and a third branch point, a fourth signal line that electrically connects the second branch point and the first memory device, a fifth signal line that electrically connects the second branch point and the second memory device, a sixth signal line that electrically connects the third branch point and the third memory device, and a stub that includes a first end electrically connected with at least one of the plurality of signal lines, and a second end being left open-circuit.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: February 18, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kwangsoo Park, Jae-Sang Yun, Su-Jin Kim, Jiwoon Park
  • Patent number: 12229065
    Abstract: A DMA system includes two or more DMA engines that facilitate transfers of data through a shared memory. The DMA engines may operate independently of each other and with different throughputs. A data flow control module controls data flow through the shared memory by tracking status information of data blocks in the shared memory. The data flow control module updates the status information in response to read and write operations to indicate whether each block includes valid data that has not yet been read or if the block has been read and is available for writing. The data flow control module shares the status information with the DMA engines via a side-channel interface to enable the DMA engines to determine which block to write to or read from.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: February 18, 2025
    Assignee: Cryptography Research, Inc.
    Inventors: Winthrop John Wu, Samatha Gummalla, Bryan Jason Wang
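    The block-status protocol in this entry is easy to model: a flow-control table marks each shared-memory block as free to write or as holding unread valid data, and the two engines consult it before writing or reading. The block count and method names are assumptions for the sketch.
```python
# Shared-memory flow control between two independent DMA engines.
# Block count and interface names are assumptions for the sketch.
EMPTY, VALID = "empty", "valid"     # block is free to write / holds unread data

class FlowControl:
    def __init__(self, num_blocks):
        self.status = [EMPTY] * num_blocks

    def next_writable(self):
        """Block index the producing engine may write, or None if memory is full."""
        return next((i for i, s in enumerate(self.status) if s == EMPTY), None)

    def next_readable(self):
        """Block index the consuming engine may read, or None if nothing is pending."""
        return next((i for i, s in enumerate(self.status) if s == VALID), None)

    def mark_written(self, i):
        self.status[i] = VALID

    def mark_read(self, i):
        self.status[i] = EMPTY      # the block can be reused by the writer

if __name__ == "__main__":
    fc = FlowControl(num_blocks=4)
    w = fc.next_writable(); fc.mark_written(w)      # producer engine writes block 0
    r = fc.next_readable(); fc.mark_read(r)         # consumer engine drains it
    print(fc.status)                                # back to all-empty
```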
  • Patent number: 12210904
    Abstract: A method for more efficiently storing genomic data includes designating multiple different data storage techniques for storing genomic data generated by a genomic pipeline. The method further identifies a file, made up of multiple blocks, generated by the genomic pipeline. The method determines which data storage technique is best suited to store each block of the file. In doing so, the method may consider the type of the file, the stage of the genomic pipeline that generated the file, the access frequency for blocks of the file, the most accessed blocks of the file, and the like. The method stores each block using the data storage technique determined to be best suited after completion of a designated stage of the genomic pipeline, such that blocks of the file are stored using several different data storage techniques. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: January 28, 2025
    Assignee: International Business Machines Corporation
    Inventors: Sasikanth Eda, Sandeep R. Patil, William W. Owen, Kumaran Rajaram
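    A compact stand-in for the per-block decision this entry describes: each block of a pipeline output file is assigned a storage technique based on file type, pipeline stage, and how often the block is read. The technique names and thresholds here are placeholders, not the patented criteria.
```python
# Placeholder per-block storage-technique selection for genomic pipeline output.
def choose_technique(file_type, pipeline_stage, block_reads_per_day):
    """Return one of several (hypothetical) storage techniques for a block."""
    if block_reads_per_day > 100:
        return "replicated_ssd"          # hot block: keep fast copies
    if file_type == "vcf" and pipeline_stage == "variant_calling":
        return "compressed_ssd"          # frequently revisited final results
    if block_reads_per_day == 0:
        return "cold_archive"            # never read since it was produced
    return "erasure_coded_hdd"

def plan_file(file_type, pipeline_stage, per_block_reads):
    return [choose_technique(file_type, pipeline_stage, r) for r in per_block_reads]

if __name__ == "__main__":
    # One file, four blocks, with very different access frequencies.
    print(plan_file("bam", "alignment", per_block_reads=[500, 3, 0, 0]))
```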
  • Patent number: 12204780
    Abstract: Described apparatuses and methods relate to self-refresh arbitration. In a memory system with multiple memory components, an arbiter is configured to manage the occurrence of self-refresh operations. In aspects, the arbiter can receive one or more self-refresh request signals from at least one memory controller for authorization to command one or more memory components to enter a self-refresh mode. Upon receiving the one or more self-refresh request signals, the arbiter, based on a predetermined configuration, can transmit one or more self-refresh enable signals to the at least one memory controller with authorization to command the one or more memory components to enter the self-refresh mode. The configuration can ensure that fewer than all memory components simultaneously enter the self-refresh mode. In so doing, memory components can perform self-refresh operations without exceeding an instantaneous power threshold. The arbiter can be included in, for instance, a Compute Express Link™ (CXL™) memory module.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: January 21, 2025
    Assignee: Micron Technology, Inc.
    Inventors: Mark Kalei Hadrick, Yu-Sheng Hsu, John Christopher Sancon, Kang-Yong Kim, Yang Lu
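    The arbitration constraint in this entry (never let every memory component self-refresh at once, so the instantaneous power threshold is not exceeded) is sketched below as a simple grant counter; the limit on concurrent self-refreshes is an assumed configuration value.
```python
# Self-refresh arbiter: grant entry into self-refresh to only some of the
# components at a time. The concurrency limit is an assumed configuration value.
class SelfRefreshArbiter:
    def __init__(self, num_components, max_concurrent):
        assert max_concurrent < num_components     # "fewer than all" at once
        self.max_concurrent = max_concurrent
        self.in_self_refresh = set()

    def request(self, component_id):
        """A memory controller asks permission; returns True if the enable is granted."""
        if len(self.in_self_refresh) < self.max_concurrent:
            self.in_self_refresh.add(component_id)
            return True
        return False                               # deferred; the controller retries later

    def exit_self_refresh(self, component_id):
        self.in_self_refresh.discard(component_id)

if __name__ == "__main__":
    arbiter = SelfRefreshArbiter(num_components=4, max_concurrent=2)
    print([arbiter.request(c) for c in range(4)])   # [True, True, False, False]
    arbiter.exit_self_refresh(0)
    print(arbiter.request(3))                       # True once a slot frees up
```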
  • Patent number: 12197373
    Abstract: Software and hardware for monitoring, testing, and developing communication and electronic systems on spacecraft. Several systems are provided to monitor, test, and control a next generation SpaceCube, including a RadHard Monitor (RHM), a Mini ASTM Board, an FMC+ ASTM Board, a Mini Evaluation Board, a MEZZ, and an Automated Test Suite. The RHM comprises FPGA IP and hardware configured to monitor COTS components. The Mini ASTM Board connects a Mini Processor Card to an ASTM for electrical testing. The FMC+ ASTM Board connects FMC and FMC+ test cards to the ASTM for electrical testing. A Mini Evaluation Board supplies all necessary power to the Mini Processor Card. The MEZZ is a multi-use GSE test board that is compatible with several different development platforms in the SpaceCube, allowing developers to develop and test their software. The Automated Test Suite provides functional testing of an assembled SpaceCube board.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: January 14, 2025
    Assignee: United States of America as represented by the Administrator of NASA
    Inventors: Alessandro Geist, Travis Wise, Cody Brewer, Nicholas Franconi, Christopher Wilson, Jonathan Boblitt, Robin Ripley, Alan Gibson, Manuel Buenfil
  • Patent number: 12189551
    Abstract: The present disclosure relates to a computing system. The computing system may include a memory system including a plurality of memory devices configured to store raw data and a near data processor (NDP) configured to receive the raw data at a first bandwidth from the plurality of memory devices and generate intermediate data by performing a first operation on the raw data, and a host device coupled to the memory system at a second bandwidth and configured to determine a resource to perform a second operation on the intermediate data based on a bandwidth ratio and a data size ratio.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: January 7, 2025
    Assignee: SK hynix Inc.
    Inventor: Joon Seop Sim
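    One plausible reading of the placement rule in this entry: the host compares the ratio of the two bandwidths with the ratio of raw to intermediate data sizes and runs the second operation wherever data movement is cheaper. The decision rule below is an interpretation for illustration, not the formula claimed by the patent.
```python
# Interpretive sketch of placing the second operation at the host or at the
# near-data processor (NDP); the decision rule is an assumption.
def choose_resource(raw_bytes, intermediate_bytes, ndp_bandwidth, host_bandwidth):
    """Pick where to run the second operation on the intermediate data.

    bandwidth_ratio: how much faster the NDP's path to memory is than the host link.
    data_size_ratio: how much the first operation shrank the data.
    """
    bandwidth_ratio = ndp_bandwidth / host_bandwidth
    data_size_ratio = raw_bytes / intermediate_bytes
    # If the first operation shrank the data more than the host link is slower,
    # shipping the intermediate data to the host is cheap enough to do there.
    return "host" if data_size_ratio >= bandwidth_ratio else "ndp"

if __name__ == "__main__":
    # 64 GB of raw data reduced to 1 GB of intermediate data; the NDP sees memory
    # at 400 GB/s while the host link runs at 50 GB/s (all numbers illustrative).
    print(choose_resource(64e9, 1e9, ndp_bandwidth=400e9, host_bandwidth=50e9))  # host
```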
  • Patent number: 12193080
    Abstract: A method of handing over a mobile communication device from a first access network to a second access network (in one implementation) is as follows. In response to a protocol data unit session update request, a first computing device (e.g., an AMF) receives an identifier of a protocol data unit session and network slice information regarding a network slice to be used by the mobile communication device to communicate on the second access network using the protocol data unit session. The first computing device uses the network slice information to select a network slice instance and to select a second computing device (e.g., another AMF) within the network slice instance. The first computing device transmits, to the second computing device, a relocation request that includes the network slice information and the identifier of the protocol data unit session.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: January 7, 2025
    Assignee: ZTE Corporation
    Inventors: Jinguo Zhu, Zhendong Li