Direct Memory Access (DMA) Patents (Class 710/22)
  • Patent number: 12373367
    Abstract: Disclosed are apparatuses, systems, and techniques that improve efficiency and decrease latency of remote direct memory access (RDMA) operations. The techniques include but are not limited to unified RDMA operations that are recognizable by various communicating devices, such as network controllers and target memory devices, as requests to establish, set, and/or update arrival indicators in the target memory devices responsive to arrival of one or more portions of the data being communicated.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: July 29, 2025
    Assignee: Mellanox Technologies, Ltd.
    Inventors: Daniel Marcovitch, Roman Nudelman, Noam Bloch
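The core idea — a unified RDMA operation that both deposits a portion of the data and updates an arrival indicator at the target — can be illustrated with a toy software model. All names here are hypothetical; this is a sketch of the concept, not Mellanox's implementation.

```python
# Toy model of an RDMA target that sets an arrival indicator as each
# portion of the data lands (illustrative only, not the patented design).
class TargetMemory:
    def __init__(self, total_portions):
        self.buffer = {}
        self.arrival_indicator = 0          # portions that have arrived
        self.total_portions = total_portions

    def rdma_write_portion(self, index, payload):
        """Unified operation: write the payload AND update the indicator."""
        self.buffer[index] = payload
        self.arrival_indicator += 1

    def all_arrived(self):
        return self.arrival_indicator == self.total_portions

target = TargetMemory(total_portions=3)
for i, chunk in enumerate([b"aa", b"bb", b"cc"]):
    target.rdma_write_portion(i, chunk)
```

Because the write and the indicator update travel as one recognizable operation, a consumer can poll `arrival_indicator` instead of waiting for a separate completion message — the latency win the abstract describes.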
  • Patent number: 12367540
    Abstract: An apparatus to facilitate processing in a multi-tile device is disclosed. The apparatus comprises a plurality of processing tiles, each including a memory device and a plurality of processing resources, coupled to the device memory, and a memory management unit to manage the memory devices in each of the plurality of tiles to perform allocation of memory resources among the memory devices for execution by the plurality of processing resources.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: July 22, 2025
    Assignee: Intel Corporation
    Inventors: Michal Mrozek, Bartosz Dunajski, Ben Ashbaugh, Brandon Fliflet
  • Patent number: 12360833
    Abstract: A system and method for producing and transmitting reduced data is disclosed. In some embodiments, the system comprises an ACF and reduction processors. The reduction processors are configured to perform a data reduction process. The ACF is configured to obtain access to input data from multiple flows, the input data identified by scatter-gather list entries (SGLEs) included in input scatter-gather lists (SGLs), and move a portion of the input data from each flow of the multiple flows to a respective reduction processor of the multiple reduction processors, such that each reduction processor receives a respective portion of the input data from each flow. The ACF is further configured to obtain access to reduced data produced from the input data using the data reduction process performed by the multiple reduction processors and move the reduced data to one or more destinations, where the reduced data is identified by an output SGL.
    Type: Grant
    Filed: October 1, 2024
    Date of Patent: July 15, 2025
    Assignee: Enfabrica Corporation
    Inventors: Shrijeet Mukherjee, Thomas Norrie, Shimon Muller
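The fan-out pattern — each reduction processor receives one portion from every flow and reduces across them — can be sketched in a few lines. The function names and the elementwise-sum reduction are illustrative assumptions; the real ACF moves SGL-described buffers in hardware.

```python
# Sketch of the flow -> reduction-processor fan-out described above
# (hypothetical names; reduction shown as an elementwise sum).
def distribute_and_reduce(flows, num_reducers, reduce_fn=sum):
    # Each flow contributes one contiguous portion to each reducer.
    portions = [[] for _ in range(num_reducers)]
    for flow in flows:
        step = len(flow) // num_reducers
        for r in range(num_reducers):
            portions[r].append(flow[r * step:(r + 1) * step])
    # Each reducer combines its per-flow portions elementwise.
    reduced = []
    for r in range(num_reducers):
        reduced.extend(reduce_fn(col) for col in zip(*portions[r]))
    return reduced

flows = [[1, 2, 3, 4], [10, 20, 30, 40]]
out = distribute_and_reduce(flows, num_reducers=2)
```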
  • Patent number: 12341678
    Abstract: Methods and systems for selective direct access to a processing unit of a network device are described. A network interface of the network device receives packets of a flow. The network interface determines based on attributes of the packets that the packets are to be directly sent to the processing unit. In response to determining that the packets are to be directly sent to the processing unit, they are directly sent to the processing unit for processing.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: June 24, 2025
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Amir Roozbeh, Alireza Farshin, Dejan Kostic, Gerald Q Maguire, Jr.
  • Patent number: 12333003
    Abstract: An information processing device, includes: a metadata generator generating, based on an update request of firmware, first metadata including identification of the firmware; a time manager; a validity period determiner determining a first validity period for the first metadata based on time acquired from the time manager; a counter counting up a value per unit time; an acquirer acquiring a first counter value of the counter for the first metadata; a storage storing entries in which second metadata including identification of firmware, a second validity period of the second metadata, and a second counter value of the counter having been acquired for the second metadata are associated; and a determiner detecting the second metadata including the same identification as the first metadata, acquiring the second validity period and the second counter value from the entry including the detected second metadata, and detecting falsification of the first validity period.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: June 17, 2025
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Ryuiti Koike, Yurie Shinke, Shinya Takumi, Jun Kanai
  • Patent number: 12333184
    Abstract: There is provided a recording apparatus that records data to a memory card, comprising: a control unit configured to repeatedly transmit a data recording instruction that includes information designating recording target data and a recording destination to the memory card which manages a memory for data recording in the memory card as a plurality of recording areas and which can execute data recording with a guaranteed minimum recording speed in units of recording areas, wherein when transmitting a first data recording instruction that designates a start portion of a first recording area as a recording destination, the control unit requests the memory card to execute data recording with the guaranteed minimum recording speed with respect to the first recording area by including a speed guarantee request in the first data recording instruction.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: June 17, 2025
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yasuhiro Shiraishi, Akio Fujii, Hiroshi Noda, Ryo Akamatsu, Tsutomu Ando, Hitoshi Kimura, Daisuke Nakajima, Toshifumi Nishiura, Naoki Yamagata, Yoshihisa Ishikawa
  • Patent number: 12332814
    Abstract: The present disclosure provides methods and apparatuses for processing data. In some embodiments, the method includes operating a device by transmitting, to a host, first data using a plurality of direct memory access (DMA) channels regardless of an order of each DMA channel of the plurality of DMA channels. The method further includes obtaining an optimal number of the DMA channels based on a process capacity of a receiver buffer of the host. The method further includes transmitting, by the device to the host, second data based on the optimal number of the DMA channels.
    Type: Grant
    Filed: June 29, 2023
    Date of Patent: June 17, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Hanju Lee
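The second step — deriving a channel count from the receiver buffer's capacity — admits a simple illustrative heuristic. The policy below (capacity divided by per-channel load, capped by hardware) is an assumption for demonstration; the abstract does not specify the actual formula.

```python
# Hypothetical heuristic for choosing a DMA channel count from the
# receiver buffer's processing capacity (not Samsung's actual policy).
def optimal_dma_channels(buffer_capacity, per_channel_load, max_channels):
    # Use as many channels as the receiver can drain, capped by hardware,
    # and never fewer than one.
    return max(1, min(max_channels, buffer_capacity // per_channel_load))

n = optimal_dma_channels(buffer_capacity=4096, per_channel_load=512,
                         max_channels=16)
```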
  • Patent number: 12293093
    Abstract: A method includes: receiving control data at a first data selector of a plurality of data selectors, in which the control data comprises (i) a configuration registry address specifying a location in a configuration state registry and (ii) configuration data specifying a circuit configuration state of a circuit element of a computational circuit; transferring the control data, from the first data selector, to an entry in a trigger table registry; responsive to a first trigger event occurring, transferring the configuration data to the location in the configuration state registry specified by the configuration registry address; and updating a state of the circuit element based on the configuration data.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: May 6, 2025
    Assignee: Google LLC
    Inventors: Michial Allen Gunter, Reiner Pope, Brian Foley, Charles Henry Leichner, IV
  • Patent number: 12287745
    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.
    Type: Grant
    Filed: August 2, 2023
    Date of Patent: April 29, 2025
    Assignee: Google LLC
    Inventors: Mark William Gottscho, Matthew William Ashcraft, Thomas Norrie, Oliver Edward Bowen
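The address arithmetic behind multi-striding is concrete enough to sketch: each dimension carries a step index and a stride offset, and an element's address is the base plus the sum of index-times-stride over all dimensions. The patent computes m such addresses per cycle in hardware; this serial generator shows only the arithmetic.

```python
# Multi-dimensional strided address generation (serial sketch of what the
# DMA request generator does m-at-a-time in hardware).
def tensor_addresses(base, dims, strides):
    """Yield the byte address of every element of a tensor with the
    given per-dimension sizes and stride offsets."""
    idx = [0] * len(dims)
    while True:
        yield base + sum(i * s for i, s in zip(idx, strides))
        # Advance the innermost step index, carrying outward on wrap.
        for d in reversed(range(len(dims))):
            idx[d] += 1
            if idx[d] < dims[d]:
                break
            idx[d] = 0
        else:
            return  # every dimension wrapped: traversal complete

# 2x3 tensor at 0x1000: row stride 16 bytes, element stride 4 bytes
addrs = list(tensor_addresses(0x1000, dims=[2, 3], strides=[16, 4]))
```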
  • Patent number: 12279152
    Abstract: Systems, methods, and circuitries are provided for using an application Layer 2 buffer for reordering out of sequence (OOS) packets when possible to reduce an amount of memory allocated to a baseband (BB) Layer 2 (L2) buffer. In one example, a baseband circuitry of a user equipment (UE), includes BB memory, configured as a BB L2 buffer and one or more BB processors. The BB processors are configured to receive an OOS packet from a physical layer; and in response to an APP L2 buffer status indicating at least a first amount of memory is available, send the OOS packet to APP circuitry for storing in an APP L2 buffer.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: April 15, 2025
    Assignee: Apple Inc.
    Inventors: Abhishek Anand Konda, Bobby Jose, Vijay Venkataraman
  • Patent number: 12253950
    Abstract: A processing apparatus, a method and a system for executing data processing on a plurality of channels are disclosed. The processing apparatus for executing data processing on a plurality of channels includes: a channel information acquiring circuit, configured to acquire channel information of the plurality of channels; a storing circuit, including a plurality of storage regions corresponding to the plurality of channels, in which the storage regions are configured to store data information for the plurality of channels; a data reading control circuit, configured to read target data information corresponding to the channel information from a target storage region among the plurality of storage regions of the storing circuit, according to the channel information; and a cache circuit, configured to pre-store the target data information read from the target storage region of the storing circuit, by the data reading control circuit, to wait for use in the data processing.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: March 18, 2025
    Assignee: BEIJING ESWIN COMPUTING TECHNOLOGY CO., LTD.
    Inventor: Shilin Luo
  • Patent number: 12248416
    Abstract: A network adapter includes a network interface, a bus interface, a hardware-implemented data-path and a programmable Data-Plane Accelerator (DPA). The network interface is to communicate with a network. The bus interface is to communicate with an external device over a peripheral bus. The hardware-implemented data-path includes a plurality of packet-processing engines to process data units exchanged between the network and the external device. The DPA is to expose on the peripheral bus a User-Defined Peripheral-bus Device (UDPD), to run user-programmable logic that implements the UDPD, and to process transactions issued from the external device to the UDPD by reusing one or more of the packet-processing engines of the data-path.
    Type: Grant
    Filed: May 6, 2024
    Date of Patent: March 11, 2025
    Assignee: Mellanox Technologies, Ltd.
    Inventors: Daniel Marcovitch, Eliav Bar-Ilan, Ran Avraham Koren, Liran Liss, Oren Duer, Shahaf Shuler
  • Patent number: 12248786
    Abstract: Controlling a data processing (DP) array includes creating a replica of a register address space of the DP array based on the design and the DP array. A sequence of instructions, including write instructions and read instructions, is received. The write instructions correspond to buffer descriptors specifying runtime data movements for a design for a DP array. The write instructions are converted into transaction instructions and the read instructions are converted into wait instructions based on the replica of the register address space. The transaction instructions and the wait instructions are included in an instruction buffer. The instruction buffer is provided to a microcontroller configured to execute the transaction instructions and the wait instructions to implement the runtime data movements for the design as implemented in the DP array. In another aspect, the instruction buffer is stored in a file for subsequent execution by the microcontroller.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: March 11, 2025
    Assignee: Xilinx, Inc.
    Inventors: Xiao Teng, Tejus Siddagangaiah, Bryan Lozano, Ehsan Ghasemi, Rajeev Patwari, Elliott Delaye, Jorn Tuyls, Aaron Ng, Sanket Pandit, Pramod Peethambaran, Satyaprakash Pareek
  • Patent number: 12229068
    Abstract: Technology related to broadcast packet direct memory access (DMA) operations is disclosed. When a network interface controller (NIC) connected to a host computer receives a broadcast packet, it can transmit a request to an agent process running on the host computer for a plurality of destination buffers. In some embodiments, the request to the agent comprises all or part of the packet, or metadata about the packet. In such embodiments, the agent can use the contents of the request to identify services that should receive the packet. Alternatively, the NIC can identify the destination services and can transmit identifiers for the destination services to the agent. The agent can transmit requests for memory buffers to the services and can receive memory location identifiers in response. The agent can transmit the identifiers to the NIC, which can perform multiple DMA operations to write the broadcast packet to the identified memory locations.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: February 18, 2025
    Assignee: F5, Inc.
    Inventors: Hao Cai, Timothy S. Michels, Daniel J. McDermott, David Ryan
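The agent-mediated broadcast flow reduces to two steps: gather one destination buffer per service, then have the NIC perform one DMA write per buffer. The class and method names below are hypothetical; the sketch only shows the data movement pattern.

```python
# Toy model of broadcast-packet DMA: an agent collects one buffer per
# destination service, then the NIC "DMAs" the packet into each buffer
# (hypothetical names; real DMA writes are done by NIC hardware).
class Service:
    def __init__(self):
        self.rx = bytearray(64)      # receive buffer lent to the NIC

    def lend_buffer(self):
        return self.rx

def broadcast_dma(packet, services):
    buffers = [s.lend_buffer() for s in services]  # agent gathers buffers
    for buf in buffers:                            # NIC performs N writes
        buf[:len(packet)] = packet
    return len(buffers)

svcs = [Service() for _ in range(3)]
n = broadcast_dma(b"hello", svcs)
```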
  • Patent number: 12216806
    Abstract: Memory devices, systems including memory devices, and methods of operating memory devices are described, in which self-lock security may be implemented to control access to a fuse array (or other secure features) of the memory devices based on a predefined event associated with the memory device operation. The predefined event may include an operating parameter of the memory device, one or more commands directed to the memory device, or both. The memory device may monitor the predefined event and determine that the predefined event satisfies a threshold. The threshold may be related to a time elapsed since the predefined event has occurred or a certain pattern in the one or more commands. Subsequently, the memory device may disable a circuit configured to access the fuse array based on the determination such that an access to the fuse array is no longer allowed.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: February 4, 2025
    Assignee: Micron Technology, Inc.
    Inventors: Nathaniel J. Meier, Brenton P. Van Leeuwen
  • Patent number: 12204897
    Abstract: Apparatuses, systems, and techniques to perform computational operations in response to one or more compute uniform device architecture (CUDA) programs. In at least one embodiment, one or more computational operations are to cause one or more other computational operations to wait until a portion of matrix multiply-accumulate (MMA) operations have been performed.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: January 21, 2025
    Assignee: NVIDIA CORPORATION
    Inventors: Harold Carter Edwards, Kyrylo Perelygin, Maciej Tyrlik, Gokul Ramaswamy Hirisave Chandra Shekhara, Balaji Krishna Yugandhar Atukuri, Rishkul Kulkarni, Konstantinos Kyriakopoulos, Edward H. Gornish, David Allan Berson, Bageshri Sathe, James Player, Aman Arora, Alan Kaatz, Andrew Kerr, Haicheng Wu, Cris Cecka, Vijay Thakkar, Sean Treichler, Jack H. Choquette, Aditya Avinash Atluri, Apoorv Parle, Ronny Meir Krashinsky, Cody Addison, Girish Bhaskarrao Bharambe
  • Patent number: 12204592
    Abstract: A method for use in a computing device, the method comprising: detecting a first command to copy data to a remote system, the first command including a scatter-gather list (SGL) that identifies the data that is desired to be copied, the SGL including a plurality of entries that identify a plurality of memory regions, each of the plurality of entries identifying a different one of the plurality of memory regions; generating metadata for the SGL, for each of the data blocks in the plurality of memory regions, wherein the metadata identifies at least one of: (i) a location where a respective integrity field for the data block is stored, and (ii) a respective integrity operation that is required to be performed on the data block; registering the plurality of entries and the metadata under a key; generating a command capsule that includes the key; and transmitting the command capsule.
    Type: Grant
    Filed: June 2, 2023
    Date of Patent: January 21, 2025
    Assignee: Dell Products L.P.
    Inventor: Jinxian Xing
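The metadata-generation and registration steps can be modeled with per-block integrity fields. Using CRC32 as the integrity field, and a plain dict as the key registry, are assumptions for illustration; the abstract leaves both the integrity algorithm and the registration mechanism abstract.

```python
import zlib

# Sketch: attach per-block integrity metadata to each scatter-gather
# entry and register blocks plus metadata under a key (hypothetical
# names; CRC32 stands in for whatever integrity field is used).
def build_sgl_metadata(blocks):
    return [{"length": len(b), "crc32": zlib.crc32(b)} for b in blocks]

registry = {}

def register(key, blocks):
    registry[key] = (blocks, build_sgl_metadata(blocks))

def verify(key):
    blocks, meta = registry[key]
    return all(zlib.crc32(b) == m["crc32"] for b, m in zip(blocks, meta))

register(0xBEEF, [b"block-a", b"block-b"])
ok = verify(0xBEEF)
```

A command capsule would then carry only the key, letting the remote side look up both the data regions and their integrity requirements.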
  • Patent number: 12197970
    Abstract: Embodiments of a multi-processor array are disclosed that may include a plurality of processors, local memories, configurable communication elements, and direct memory access (DMA) engines, and a DMA controller. Each processor may be coupled to one of the local memories, and the plurality of processors, local memories, and configurable communication elements may be coupled together in an interspersed arrangement. The DMA controller may be configured to control the operation of the plurality of DMA engines.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: January 14, 2025
    Assignee: HyperX Logic, Inc.
    Inventors: Carl S. Dobbs, Michael R. Trocino, Keith M. Bindloss
  • Patent number: 12189553
    Abstract: A method includes transmitting first data with a first priority through a first dedicated interface on a transmit side of a PCIe system. The method also includes transmitting second data with a second priority through a second dedicated interface on the transmit side of the PCIe system. The method includes transmitting the first data and the second data to a receive side of the PCIe system using two or more virtual channels over a PCIe link, where the first data uses a first virtual channel and the second data uses a second virtual channel.
    Type: Grant
    Filed: September 25, 2023
    Date of Patent: January 7, 2025
    Assignee: Texas Instruments Incorporated
    Inventors: Chunhua Hu, Sanand Prasad
  • Patent number: 12189475
    Abstract: A storage device includes a non-volatile memory device that includes memory blocks each including one or more memory cells, a combo integrated circuit (IC) that includes a temperature sensor and a memory, and a controller that is connected with the combo IC through first channels and controls the non-volatile memory device to write or read data in or from selected memory cells. When the controller determines that a first event occurs based on temperature data read from the combo IC, the controller records first event data in the memory of the combo IC. In a first operation mode, the combo IC outputs the first event data to the controller through the first channels. In a second operation mode, under control of an external host, the combo IC outputs the first event data to the external host through second channels different from the first channels.
    Type: Grant
    Filed: May 10, 2023
    Date of Patent: January 7, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jihong Kim, Yongkoo Jeong, Jooyoung Kim
  • Patent number: 12182201
    Abstract: A graph data storage method for a non-uniform memory access architecture (NUMA) processing system is provided. The processing system includes at least one computing device, each computing device corresponding to multiple memories, and each memory corresponding to multiple processors. The method includes: performing three-level partitioning on graph data to obtain multiple third-level partitions based on a communication mode among computing device(s), memories, and processors; and separately storing graph data of the multiple third-level partitions in NUMA nodes corresponding to the processors. A graph data storage system and an electronic device are further provided.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: December 31, 2024
    Assignee: ZHEJIANG TMALL TECHNOLOGY CO., LTD.
    Inventors: Wenfei Fan, Wenyuan Yu, Jingbo Xu, Xiaojian Luo
  • Patent number: 12164436
    Abstract: Apparatus comprises address processing circuitry to detect information relating to an input memory address provided by address information tables; the address processing circuitry being configured to select an address information table at a given table level according to earlier information entry in an address information table; and the address processing circuitry being configured to select an information entry in the selected address information table according to an offset component, the offset component being defined so that contiguous instances of that portion of the input memory address indicate contiguously addressed information entries; the address processing circuitry comprising detector circuitry to detect whether indicator data is set to indicate whether a group of one or more contiguously addressed information entries in the selected address information table provide at least one base address indicating a location within a contiguously addressed region comprising multiple address information table
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: December 10, 2024
    Assignee: Arm Limited
    Inventor: Andrew Brookfield Swaine
  • Patent number: 12131163
    Abstract: A processor may implement self-relative memory addressing by providing load and store instructions that include self-relative addressing modes. A memory address may contain a self-relative pointer, where the memory address stores a memory offset that, when added to the memory address, defines another memory address. The self-relative addressing mode may also support invalid memory addresses using a reserved offset value, where a load instruction providing the self-relative addressing mode may return a NULL value or generate an exception when determining that the stored offset value is equal to the reserved offset value and where a store instruction providing the self-relative addressing mode may store the reserved offset value when determining that the pointer is an invalid or NULL memory address.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: October 29, 2024
    Assignee: Oracle International Corporation
    Inventor: Mario Wolczko
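Self-relative pointer semantics are easy to model: a cell stores an offset, a load adds that offset to the cell's own address, and a reserved offset value encodes an invalid pointer. The sketch below uses a dict as memory and 0 as the reserved value — both illustrative assumptions, not Oracle's ISA encoding.

```python
# Self-relative pointers in a toy byte-addressable memory: the stored
# value is an offset from the cell's own address, with a reserved
# offset meaning NULL (sketch of the idea, not the actual ISA).
RESERVED_NULL = 0            # reserved offset value for invalid pointers
memory = {}                  # address -> stored offset

def store_self_relative(addr, target):
    memory[addr] = RESERVED_NULL if target is None else target - addr

def load_self_relative(addr):
    off = memory[addr]
    if off == RESERVED_NULL:
        return None          # an ISA might instead raise an exception
    return addr + off

store_self_relative(0x100, 0x180)   # pointer at 0x100 -> 0x180
store_self_relative(0x200, None)    # invalid pointer at 0x200
```

A consequence of the reserved value, visible even in this toy: offsets equal to the reserved value are unrepresentable as valid pointers, which is why the hardware checks for it on both load and store.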
  • Patent number: 12117947
    Abstract: The present disclosure relates to information processing methods, physical machines, and peripheral component interconnect express (PCIE) devices. In one example method, a PCIE device receives, in a live migration process of a to-be-migrated virtual machine (VM), a packet corresponding to the to-be-migrated VM, where the to-be-migrated VM is one of a plurality of VMs. The PCIE device determines a direct memory access (DMA) address based on the packet. The PCIE device sends the DMA address to a physical function (PF) driver.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: October 15, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Changchun Ouyang, Shui Cao, Zihao Xiang
  • Patent number: 12112395
    Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one exemplary implementation, an address allocation process comprises: establishing space for managed pointers across a plurality of memories, including allocating one of the managed pointers with a first portion of memory associated with a first one of a plurality of processors; and performing a process of automatically managing accesses to the managed pointers across the plurality of processors and corresponding memories. The automated management can include ensuring consistent information associated with the managed pointers is copied from the first portion of memory to a second portion of memory associated with a second one of the plurality of processors based upon initiation of an access to the managed pointers from the second one of the plurality of processors.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Stephen Jones, Vivek Kini, Piotr Jaroszynski, Mark Hairgrove, David Fontaine, Cameron Buschardt, Lucien Dunning, John Hubbard
  • Patent number: 12093209
    Abstract: Technologies for batching remote descriptors of serialized objects in streaming pipelines are described. One method of a first computing device generates a streaming batch of remote descriptors. Each remote descriptor uniquely identifies a contiguous block of a serialized object. The first computing device sends at least one of the remote descriptors to a second computing device before the streaming batch is completed. At least some contents of a contiguous block are obtained for storage at a second memory associated with the second computing device before the streaming batch is completed.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: September 17, 2024
    Assignee: Nvidia Corporation
    Inventors: Ryan Olson, Michael Demoret, Bartley Richardson
  • Patent number: 12079514
    Abstract: Methods, systems, and devices for improved performance in a fragmented memory system are described. The memory system may detect conditions associated with a random access parameter stored at the memory system to assess a level of data fragmentation. The memory system may determine that a random access parameter, such as a data fragmentation parameter, a size of information associated with an access command, a depth of a command queue, a delay duration, or a quantity of commands satisfies a threshold. If one or more of the random access parameters satisfies the threshold, the memory system may transmit a request for the host system to increase an associated clock frequency. The host system may increase the number of commands sent to the memory system in a duration of time. That is, the host system may compensate for a slow-down due to data storage fragmentation by increasing the command processing rate.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: September 3, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Vanaja Urrinkala, Sharath Chandra Ambula
  • Patent number: 12066955
    Abstract: Systems and methods for transferring data are disclosed herein. In an embodiment, a method of transferring data includes reading a plurality of bytes from a first memory, discarding first bytes of the plurality of bytes, realigning second bytes of the plurality of bytes, and storing the realigned second bytes in a second memory.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: August 20, 2024
    Assignee: HUGHES NETWORK SYSTEMS, LLC
    Inventors: Aneeshwar Danda, Robert H. Lager, Sahithi Vemuri
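The read-discard-realign sequence maps directly onto slice operations. This is a software sketch of the data movement only; the function name and the byte-granular interface are illustrative, and the patented mechanism operates on hardware buffers.

```python
# Sketch of the read/discard/realign transfer: read a burst from the
# source, drop the leading bytes, and pack the rest at the start of
# the destination (hypothetical interface).
def realign_copy(src, discard, dst):
    kept = src[discard:]          # discard first bytes of the burst
    dst[:len(kept)] = kept        # store realigned bytes at offset 0
    return len(kept)

src = bytes(range(16))            # first memory: 0x00..0x0f
dst = bytearray(16)               # second memory
n = realign_copy(src, discard=4, dst=dst)
```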
  • Patent number: 12066960
    Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffer, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: August 20, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
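The end-to-end SDMA flow — one message fanned out so that paired agents each move a slice of the transfer — can be sketched as a partitioned copy. The even chunking and the loop structure are assumptions; the abstract does not say how the data fabric divides the transfer among agents.

```python
# Toy model of the SDMA flow: the transfer is split across agent pairs,
# each "first agent" reading its slice of the source and its paired
# "second agent" writing the matching slice of the destination.
def sdma_transfer(src, dst, num_agents):
    size = len(src)
    chunk = (size + num_agents - 1) // num_agents  # ceil division
    for a in range(num_agents):
        lo, hi = a * chunk, min((a + 1) * chunk, size)
        dst[lo:hi] = src[lo:hi]   # read by agent a, written by its pair
    return num_agents

src = bytearray(range(10))
dst = bytearray(10)
sdma_transfer(src, dst, num_agents=3)
```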
  • Patent number: 12047296
    Abstract: Remote Direct Memory Access (RDMA) over Internet Protocol and/or Ethernet has gained attention for datacenters. However, the sheer scale of the required RDMA networks presents a challenge. Optical infrastructures employing wavelength division multiplexing within a datacenter environment have also gained attention through the wide, low-cost bandwidth they offer and their easy expansion within this environment. However, latency, rather than bandwidth between devices, is the significant issue for many applications. Accordingly, the inventors have established a design methodology in which the network prioritizes latency over bandwidth, and bandwidth utilization and management offer reduced latency for such applications. The inventors exploit loss-tolerant RDMA architectures with quota-based traffic control, message-level load balancing, and a global view of virtual connections over commodity switches with simple priority queues.
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: July 23, 2024
    Inventor: Yunqu Liu
  • Patent number: 12026005
    Abstract: Embodiments are described for a data processing tool configured to cease operations of a plurality of database readers when detecting a congestion condition in the data processing tool. In some embodiments, the data processing tool comprises a memory, one or more processors, and a plurality of database readers. The one or more processors, coupled to the memory and the plurality of database readers are configured to determine a congestion condition in at least one data pipeline of a plurality of data pipelines of the data processing tool. Each data pipeline of the plurality of data pipelines connects a database reader and a transformer of the data processing tool, a transformer and a database writer of the data processing tool, or two transformers of the data processing tool. The one or more processors are further configured to refrain from reading data from one or more databases responsive to the congestion condition.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: July 2, 2024
    Assignee: SAP SE
    Inventors: Reinhard Sudmeier, Sreenivasulu Gelle, Alexander Ocher
  • Patent number: 12026098
    Abstract: Techniques are disclosed relating to updating page pools in the context of cached page pool descriptors. In some embodiments, a processor is configured to assign a set of processing work to a first page pool of memory pages. Page manager circuitry may cache page pool descriptor entries in cache circuitry, where a given page pool descriptor entry indicates a set of pages assigned to a page pool. In response to a determination to grow the first page pool, the processor may communicate a grow list to the page manager circuitry, that identifies a set of memory blocks from the memory to be added to the first page pool. The page manager circuitry may then update a cached page pool descriptor entry for the first page pool to indicate the added memory blocks and generate a signal to inform the processor that the cached page pool descriptor entry is updated.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: July 2, 2024
    Assignee: Apple Inc.
    Inventors: Arjun Thottappilly, David A. Gotwalt, Frank W. Liljeros
  • Patent number: 12019539
    Abstract: Exemplary methods, apparatuses, and systems including an adaptive configuration manager for controlling configurations of memory devices. The adaptive configuration manager receives a plurality of payloads from a host. The adaptive configuration manager identifies a profile of the host from a plurality of pre-determined host profiles. The adaptive configuration manager identifies a distribution of the plurality of memory access requests, the distribution including a set of sequential payloads and a set of random payloads. The adaptive configuration manager generates a memory access command using the profile of the host including a distribution of random and sequential access. The adaptive configuration manager executes the memory access command using the profile and a payload of the plurality of payloads.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: June 25, 2024
    Assignee: MICRON TECHNOLOGY, INC.
    Inventor: Manjunath Chandrashekaraiah
  • Patent number: 12013794
    Abstract: According to a first aspect, execution logic is configured to perform a linear capability transfer operation which transfers a physical capability from a partition of a first software module to a partition of a second software module without retaining it in the partition of the first. According to a second, alternative or additional aspect, the execution logic is configured to perform a sharding operation whereby a physical capability is divided into at least two instances, which may later be combined.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: June 18, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David T. Chisnall, Sylvan W. Clebsch, Cédric Alain Marie Christophe Fournet
  • Patent number: 12015954
    Abstract: Systems and methods for causing control information to be sent from a source base station to a target base station via a user plane are provided. In a network that uses control plane and user plane separation, the systems and methods provided herein reduce time delays between handovers in the control plane and handovers in the user plane. The time delays are reduced by minimizing a duration of the handover preparation phase by sending control information with payload packets between network functions via in-band signaling in the user plane.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: June 18, 2024
    Assignee: Nokia Solutions and Networks GmbH & Co. KG
    Inventor: Klaus Hoffman
  • Patent number: 12007918
    Abstract: Provided are a Peripheral Component Interconnect Express (PCIe) interface device and a method of operating the same. The PCIe interface device may include a performance analyzer and a traffic class controller. The performance analyzer may be configured to measure throughputs of multiple functions executed on one or more Direct Memory Access (DMA) devices. The traffic class controller may be configured to allocate traffic class values to transaction layer packets received from the multiple functions based on the throughputs of the multiple functions.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: June 11, 2024
    Assignee: SK hynix Inc.
    Inventors: Yong Tae Jeon, Ji Woon Yang, Sang Hyun Yoon, Se Hyeon Han
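One way to picture the throughput-based traffic class allocation described above: rank the functions by measured throughput and give lower-throughput functions higher traffic class values so their transaction layer packets are not starved. This is an assumed policy for illustration only; the patent does not disclose a specific mapping, and the function names and TC range here are made up.

```python
# Illustrative sketch: map each function's measured throughput to a PCIe
# traffic class value (0..7), giving the slowest function the highest TC.

def allocate_traffic_classes(throughputs, num_classes=8):
    """throughputs: dict of function name -> measured throughput."""
    # Rank functions from lowest to highest throughput.
    ranked = sorted(throughputs, key=throughputs.get)
    tcs = {}
    for i, fn in enumerate(ranked):
        # Slowest function gets the highest remaining TC value.
        tcs[fn] = max(0, num_classes - 1 - i)
    return tcs

tcs = allocate_traffic_classes({"fn0": 900, "fn1": 100, "fn2": 500})
```

With these hypothetical throughputs, `fn1` (the slowest) receives the highest traffic class value.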
  • Patent number: 12001357
    Abstract: A direct memory access (DMA) circuit is provided. The DMA circuit may include a plurality of groups of direct memory access channels, wherein each of the groups includes at least one DMA channel and a resource usage counter configured to count an execution time in which one of the DMA channels of the group is executed, and an arbiter configured to evaluate a value of the resource usage counter of a group upon a request for execution time by one of the DMA channels of the group, and, taking into account a result of the evaluation, to assign, delay assignment, or deny execution time for using the direct memory access to one of the groups.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: June 4, 2024
    Assignee: Infineon Technologies AG
    Inventors: Frank Hellwig, Sandeep Vangipuram
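The grant/delay/deny behavior described in the abstract can be modeled with a per-group budget compared against the resource usage counter. This is a minimal sketch under assumed semantics (how "delay" versus "deny" is decided is an illustrative choice, not taken from the patent), with invented names throughout.

```python
# Hedged model: each group of DMA channels has a resource usage counter;
# the arbiter grants, delays, or denies a channel's request for execution
# time based on how the counter compares to the group's budget.

class GroupArbiter:
    def __init__(self, budgets):
        self.budgets = budgets                # group -> allowed time units
        self.usage = {g: 0 for g in budgets}  # resource usage counters

    def request(self, group, cost):
        used = self.usage[group]
        if used + cost <= self.budgets[group]:
            self.usage[group] += cost
            return "grant"
        elif used < self.budgets[group]:
            # Some headroom remains but not enough: defer the request.
            return "delay"
        return "deny"  # budget fully consumed

arb = GroupArbiter({"g0": 10, "g1": 4})
r1 = arb.request("g0", 6)   # fits within the budget
r2 = arb.request("g0", 6)   # would exceed budget; headroom remains
r3 = arb.request("g1", 4)   # exactly consumes the budget
r4 = arb.request("g1", 1)   # budget exhausted
```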
  • Patent number: 12001855
    Abstract: In a general aspect, an observability pipeline system includes a pack data processing engine. In some aspects, an observability pipeline system includes data processing engines that are configured according to system default configuration settings and system local configuration settings. A pack file received from a remote computer system contains routes, pipelines, and pack default configuration settings. A pack data processing engine includes the routes and pipelines from the pack file. Pack local configuration settings, defined for the pack data processing engine, inherit at least one of the system default configuration settings and at least one of the pack default configuration settings. The pack local configuration settings are isolated from the system local configuration settings. When pipeline data is processed in the observability pipeline system on the computer system, the pack data processing engine is applied to the pipeline data.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: June 4, 2024
    Assignee: Cribl, Inc.
    Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito
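The layered configuration resolution described above (pack local settings inheriting from pack defaults and system defaults, while staying isolated from system local settings) can be sketched with a simple lookup chain. The setting keys and values here are invented for illustration; this is not the product's actual configuration schema.

```python
# Hedged sketch of the inheritance/isolation rules: pack local overrides
# pack defaults, which override system defaults; system *local* settings
# are deliberately excluded from the pack's resolution chain.

from collections import ChainMap

system_defaults = {"timeout": 30, "retries": 3}
system_local    = {"timeout": 5}           # must NOT leak into the pack
pack_defaults   = {"retries": 5, "route": "main"}
pack_local      = {"route": "audit"}

# Pack resolution order: local -> pack defaults -> system defaults.
pack_config = ChainMap(pack_local, pack_defaults, system_defaults)
```

Note that `pack_config["timeout"]` resolves to the system default (30), not the system local override (5), reflecting the isolation the abstract describes.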
  • Patent number: 12002167
    Abstract: A method in a virtual, augmented, or mixed reality system includes a GPU determining/detecting an absence of image data. The method also includes shutting down a portion/component/function of the GPU. The method further includes shutting down a communication link between the GPU and a DB. Moreover, the method includes shutting down a portion/component/function of the DB. In addition, the method includes shutting down a communication link between the DB and a display panel. The method further includes shutting down a portion/component/function of the display panel.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: June 4, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Jose Felix Rodriguez, Ricardo Martinez Perez, Reza Nourai, Robert Blake Taylor
  • Patent number: 12001681
    Abstract: This application provides a storage device, a distributed storage system, and a data processing method, and belongs to the field of storage technologies. In this application, an AI apparatus is disposed inside a storage device, so that the storage device has an AI computing capability. In addition, the storage device further includes a processor and a hard disk, and therefore further has a service data storage capability. Therefore, convergence of storage and AI computing power is implemented. An AI parameter and service data are transmitted inside the storage device through a high-speed interconnect network without a need of being forwarded through an external network. Therefore, a path for transmitting the service data and the AI parameter is greatly shortened, and the service data can be loaded nearby, thereby accelerating loading.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: June 4, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jinzhong Liu, Hongdong Zhang
  • Patent number: 11983128
    Abstract: Techniques to reduce overhead in a direct memory access (DMA) engine can include processing descriptors from a descriptor queue to obtain a striding configuration to generate tensorized memory descriptors. The striding configuration can include, for each striding dimension, a stride and a repetition number indicating a number of times to repeat striding in the corresponding striding dimension. One or more sets of tensorized memory descriptors can be generated based on the striding configuration. Data transfers are then performed based on the generated tensorized memory descriptors.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: May 14, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Kun Xu, Ron Diamant, Ilya Minkin, Mohammad El-Shabani, Raymond S. Whiteside, Uday Shilton Udayaselvam
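The striding configuration described above (a stride and a repetition number per dimension) expands into a flat list of memory descriptors. Below is a minimal sketch under an assumed descriptor representation of `(address, length)` tuples; the engine's real descriptor format is not disclosed in the abstract.

```python
# Illustrative expansion of a striding configuration into "tensorized"
# memory descriptors: for each dimension, (stride, repetitions), with the
# outermost dimension listed first.

from itertools import product

def tensorize(base_addr, length, striding):
    descs = []
    for idx in product(*(range(reps) for _, reps in striding)):
        # Offset is the sum of (index * stride) over all dimensions.
        offset = sum(i * stride for i, (stride, _) in zip(idx, striding))
        descs.append((base_addr + offset, length))
    return descs

# A 2x3 tile: outer stride 0x100 repeated twice, inner stride 0x10 thrice.
descs = tensorize(0x1000, 16, [(0x100, 2), (0x10, 3)])
```

A single descriptor plus striding configuration thus replaces six individually queued descriptors, which is the overhead reduction the abstract targets.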
  • Patent number: 11952798
    Abstract: Provided are a motor driving circuit, a control method therefor, and a driving chip. The motor driving circuit includes a logic module and a push-pull module, a channel selection module, an instruction recognition module, and an isolating switch module. An input signal is outputted by the logic module and the push-pull module to control the motor. The channel selection module is configured to select a channel for the input signal, connecting the input signal to the isolating switch module or the instruction recognition module, or disconnecting it. The instruction recognition module is configured to perform a corresponding operation on the isolating switch module according to an inputted instruction. The isolating switch module is configured to receive an instruction of the channel selection module and an instruction of the instruction recognition module to connect or disconnect the logic module.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: April 9, 2024
    Assignee: GREE ELECTRIC APPLIANCES, INC. OF ZHUHAI
    Inventors: Ji He, Junchao Chen, Yang Lan
  • Patent number: 11947973
    Abstract: The present disclosure is related to a system that may include a first computing device that may perform a plurality of data processing operations and a second computing device that may receive a modification to one or more components of a first data operation, identify a first subset of the plurality of data processing operations that corresponds to the one or more components, and determine one or more alternate parameters associated with the one or more components. The second computing device may then identify a second subset of the plurality of data processing operations that corresponds to the one or more alternate parameters and send a notification to the first computing device indicative of a modification to the first subset and the second subset.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: April 2, 2024
    Assignee: United Services Automobile Association (USAA)
    Inventors: Oscar Guerra, Megan Sarah Jennings
  • Patent number: 11947460
    Abstract: Apparatus, method and code for fabrication of the apparatus, the apparatus comprising a cache providing a plurality of cache lines, each cache line storing a block of data; cache access control circuitry, responsive to an access request, to determine whether a hit condition is present in the cache; and cache configuration control circuitry to set, in response to a merging trigger event, merge indication state identifying multiple cache lines to be treated as a merged cache line to store multiple blocks of data, wherein when the merge indication state indicates that the given cache line is part of the merged cache line, the cache access control circuitry is responsive to detecting the hit condition to allow access to any of the data blocks stored in the multiple cache lines forming the merged cache line.
    Type: Grant
    Filed: April 26, 2022
    Date of Patent: April 2, 2024
    Assignee: Arm Limited
    Inventors: Vladimir Vasekin, David Michael Bull, Vincent Rezard, Anton Antonov
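The merged-cache-line behavior above can be modeled compactly: once merge indication state groups several lines, a hit on any member exposes every block in the group. This is a behavioral sketch with invented names, not the fabricated circuitry.

```python
# Hedged model: merge indication state maps a line to its merged group;
# a hit on any member line allows access to all blocks in the group.

class MergingCache:
    def __init__(self):
        self.lines = {}    # line index -> stored block of data
        self.merged = {}   # line index -> tuple of lines in its merged group

    def fill(self, line, block):
        self.lines[line] = block

    def merge(self, group):
        # Set merge indication state for every member of the group.
        for line in group:
            self.merged[line] = tuple(group)

    def lookup(self, line):
        if line not in self.lines:
            return None  # miss
        group = self.merged.get(line, (line,))
        # Hit: return every block stored in the merged cache line.
        return [self.lines[member] for member in group]

c = MergingCache()
c.fill(0, "A")
c.fill(1, "B")
c.merge([0, 1])
blocks = c.lookup(1)   # hit on line 1 exposes both blocks
```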
  • Patent number: 11934332
    Abstract: Devices, methods, and systems are provided. In one example, a device is described to include a device interface that receives data from at least one data source; a data shuffle unit that collects the data received from the at least one data source, receives a descriptor that describes a data shuffle operation to perform on the data received from the at least one data source, performs the data shuffle operation on the collected data to produce shuffled data, and provides the shuffled data to at least one data target.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: March 19, 2024
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Daniel Marcovitch, Dotan David Levi, Eyal Srebro, Eliel Peretz, Roee Moyal, Richard Graham, Gil Bloch, Sean Pieper
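The descriptor-driven shuffle described above reduces, at its simplest, to applying a permutation named by the descriptor to collected source data. The descriptor format below is an assumption made for the sketch; the patent does not specify one.

```python
# Minimal sketch: the data shuffle unit collects data from a source,
# reads a descriptor describing the shuffle operation, and produces
# shuffled data for a target.

def shuffle(data, descriptor):
    """descriptor['perm'][i] gives the source index for output position i."""
    return [data[src] for src in descriptor["perm"]]

# Collected source data and a hypothetical permutation descriptor.
out = shuffle([10, 20, 30, 40], {"perm": [2, 0, 3, 1]})
```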
  • Patent number: 11914541
    Abstract: In example implementations, a computing device is provided. The computing device includes an expansion interface, a first device, a second device, and a processor communicatively coupled to the expansion interface. The expansion interface includes a plurality of slots. Two slots of the plurality of slots are controlled by a single reset signal. The first device is connected to a first slot of the two slots and has a feature that is compatible with the single reset signal. The second device is connected to a second slot of the two slots and does not have the feature compatible with the single reset signal. The processor is to detect the first device connected to the first slot and the second device connected to the second slot and disable the feature by preventing the first slot and the second slot from receiving the single reset signal.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: February 27, 2024
    Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Wen Bin Lin, ChiWei Ding, Chun Yi Liu, Shuo-Cheng Cheng, Chao-Wen Cheng
  • Patent number: 11886365
    Abstract: Techniques for improving the handling of peripherals in a computer system, including through the use of a DMA control circuit that helps manage the flow of data between memory and the peripherals via an intermediate storage buffer. The DMA control circuit is configured to control timing of DMA transfers between sample buffers in the memory and the intermediate storage buffer. The DMA control circuit may output a priority value of the DMA control circuit for accesses to memory, where the priority value based on stored quality of service (QoS) information and current channel data buffer levels for different DMA channels. The DMA control circuit may separately arbitrate between multiple active transmit and receive channels. Still further, the DMA control circuit may store, for a given data transfer over a particular DMA channel, timestamp information indicative of completion of the DMA and peripheral-side operations.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: January 30, 2024
    Assignee: Apple Inc.
    Inventors: Brett D. George, Rohit K. Gupta, Do Kyung Kim, Paul W. Glendenning
  • Patent number: 11853600
    Abstract: A memory module with multiple memory devices includes a buffer system that manages communication between a memory controller and the memory devices. The memory module additionally includes a command input port to receive command and address signals from a controller and, in support of capacity extensions, a command relay circuit coupled to the command port to convey the commands and addresses from the memory module to another module or modules. Relaying commands and addresses introduces a delay, and the buffer system that manages communication between the memory controller and the memory devices can be configured to time data communication to account for that delay.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: December 26, 2023
    Assignee: Rambus Inc.
    Inventors: Frederick A Ware, Scott C. Best
  • Patent number: 11853784
    Abstract: An example electronic apparatus is for accelerating a para-virtualization network interface. The electronic apparatus includes a descriptor hub that performs bi-directional communication with a guest memory accessible by a guest and with a host memory accessible by a host. The guest includes a plurality of virtual machines. The host includes a plurality of virtual function devices. The virtual machines are communicatively coupled to the electronic apparatus through a central processing unit. The communication is based upon para-virtualization packet descriptors and network interface controller virtual function-specific descriptors. The electronic apparatus also includes a device association table communicatively coupled to the descriptor hub and to store associations between the virtual machines and the virtual function devices. The electronic apparatus further includes an input-output memory map unit (IOMMU) to perform direct memory access (DMA) remapping and interrupt remapping.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 26, 2023
    Assignee: Intel Corporation
    Inventors: Yigang Zhou, Cunming Liang
  • Patent number: 11836083
    Abstract: A compute node includes a memory, a processor and a peripheral device. The memory is to store memory pages. The processor is to run software that accesses the memory, and to identify one or more first memory pages that are accessed by the software in the memory. The peripheral device is to directly access one or more second memory pages in the memory of the compute node using Direct Memory Access (DMA), and to notify the processor of the second memory pages that are accessed using DMA. The processor is further to maintain a data structure that tracks both (i) the first memory pages as identified by the processor and (ii) the second memory pages as notified by the peripheral device.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: December 5, 2023
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Ran Avraham Koren, Ariel Shahar, Liran Liss, Gabi Liron, Aviad Shaul Yehezkel
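The unified tracking structure described above combines two feeds: pages the processor observes the software touching, and pages the peripheral reports touching via DMA. A minimal sketch under assumed names (such combined tracking is useful, for example, for dirty-page tracking, though the abstract does not name a use case):

```python
# Hedged model: CPU-side accesses are recorded directly; the peripheral
# notifies the processor of pages it accessed via DMA; both land in a
# single tracking data structure maintained by the processor.

class PageTracker:
    def __init__(self):
        self.accessed = set()   # page numbers touched by CPU or DMA

    def cpu_access(self, page):
        # First feed: pages identified by the processor itself.
        self.accessed.add(page)

    def dma_notify(self, pages):
        # Second feed: pages the peripheral reports accessing via DMA.
        self.accessed.update(pages)

t = PageTracker()
t.cpu_access(3)
t.dma_notify([7, 8])
```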