Direct Memory Access (DMA) Patents (Class 710/22)
-
Patent number: 12293093
Abstract: A method includes: receiving control data at a first data selector of a plurality of data selectors, in which the control data comprises (i) a configuration registry address specifying a location in a configuration state registry and (ii) configuration data specifying a circuit configuration state of a circuit element of a computational circuit; transferring the control data, from the first data selector, to an entry in a trigger table registry; responsive to a first trigger event occurring, transferring the configuration data to the location in the configuration state registry specified by the configuration registry address; and updating a state of the circuit element based on the configuration data.
Type: Grant
Filed: June 28, 2022
Date of Patent: May 6, 2025
Assignee: Google LLC
Inventors: Michial Allen Gunter, Reiner Pope, Brian Foley, Charles Henry Leichner, IV
-
Patent number: 12287745
Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.
Type: Grant
Filed: August 2, 2023
Date of Patent: April 29, 2025
Assignee: Google LLC
Inventors: Mark William Gottscho, Matthew William Ashcraft, Thomas Norrie, Oliver Edward Bowen
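A minimal Python sketch of the per-dimension step-tracker address computation this abstract describes: a step index is kept for each tensor dimension, each index contributes a stride offset (step index × stride), and the offsets are summed with a base address. All names and parameter values here are illustrative, not taken from the patent.

```python
def tensor_addresses(base, dims, strides):
    """Yield a memory address for every element of a multi-dimensional
    tensor by keeping a step index per dimension and summing the
    per-dimension stride offsets (step index * stride)."""
    step = [0] * len(dims)          # one step tracker per dimension
    while True:
        # address = base + sum of per-dimension stride offsets
        yield base + sum(i * s for i, s in zip(step, strides))
        # advance the innermost dimension first, carrying outward
        for d in reversed(range(len(dims))):
            step[d] += 1
            if step[d] < dims[d]:
                break
            step[d] = 0
        else:
            return                  # every step index wrapped: done

# A 2x3 tensor with a row stride of 16 bytes and an element stride of 4:
addrs = list(tensor_addresses(0x1000, dims=[2, 3], strides=[16, 4]))
```

A hardware request generator would evaluate m such address computations in parallel per cycle; the sequential generator above only illustrates the index/offset arithmetic.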
-
Patent number: 12279152
Abstract: Systems, methods, and circuitries are provided for using an application Layer 2 buffer for reordering out-of-sequence (OOS) packets when possible, to reduce the amount of memory allocated to a baseband (BB) Layer 2 (L2) buffer. In one example, a baseband circuitry of a user equipment (UE) includes BB memory configured as a BB L2 buffer, and one or more BB processors. The BB processors are configured to receive an OOS packet from a physical layer; and in response to an APP L2 buffer status indicating at least a first amount of memory is available, send the OOS packet to APP circuitry for storing in an APP L2 buffer.
Type: Grant
Filed: August 18, 2022
Date of Patent: April 15, 2025
Assignee: Apple Inc.
Inventors: Abhishek Anand Konda, Bobby Jose, Vijay Venkataraman
-
Patent number: 12253950
Abstract: A processing apparatus, a method and a system for executing data processing on a plurality of channels are disclosed. The processing apparatus for executing data processing on a plurality of channels includes: a channel information acquiring circuit, configured to acquire channel information of the plurality of channels; a storing circuit, including a plurality of storage regions corresponding to the plurality of channels, in which the storage regions are configured to store data information for the plurality of channels; a data reading control circuit, configured to read target data information corresponding to the channel information from a target storage region among the plurality of storage regions of the storing circuit, according to the channel information; and a cache circuit, configured to pre-store the target data information read from the target storage region of the storing circuit, by the data reading control circuit, to wait for use in the data processing.
Type: Grant
Filed: December 27, 2022
Date of Patent: March 18, 2025
Assignee: BEIJING ESWIN COMPUTING TECHNOLOGY CO., LTD.
Inventor: Shilin Luo
-
Patent number: 12248416
Abstract: A network adapter includes a network interface, a bus interface, a hardware-implemented data-path and a programmable Data-Plane Accelerator (DPA). The network interface is to communicate with a network. The bus interface is to communicate with an external device over a peripheral bus. The hardware-implemented data-path includes a plurality of packet-processing engines to process data units exchanged between the network and the external device. The DPA is to expose on the peripheral bus a User-Defined Peripheral-bus Device (UDPD), to run user-programmable logic that implements the UDPD, and to process transactions issued from the external device to the UDPD by reusing one or more of the packet-processing engines of the data-path.
Type: Grant
Filed: May 6, 2024
Date of Patent: March 11, 2025
Assignee: Mellanox Technologies, Ltd
Inventors: Daniel Marcovitch, Eliav Bar-Ilan, Ran Avraham Koren, Liran Liss, Oren Duer, Shahaf Shuler
-
Patent number: 12248786
Abstract: Controlling a data processing (DP) array includes creating a replica of a register address space of the DP array based on the design and the DP array. A sequence of instructions, including write instructions and read instructions, is received. The write instructions correspond to buffer descriptors specifying runtime data movements for a design for a DP array. The write instructions are converted into transaction instructions and the read instructions are converted into wait instructions based on the replica of the register address space. The transaction instructions and the wait instructions are included in an instruction buffer. The instruction buffer is provided to a microcontroller configured to execute the transaction instructions and the wait instructions to implement the runtime data movements for the design as implemented in the DP array. In another aspect, the instruction buffer is stored in a file for subsequent execution by the microcontroller.
Type: Grant
Filed: August 8, 2022
Date of Patent: March 11, 2025
Assignee: Xilinx, Inc.
Inventors: Xiao Teng, Tejus Siddagangaiah, Bryan Lozano, Ehsan Ghasemi, Rajeev Patwari, Elliott Delaye, Jorn Tuyls, Aaron Ng, Sanket Pandit, Pramod Peethambaran, Satyaprakash Pareek
-
Patent number: 12229068
Abstract: Technology related to broadcast packet direct memory access (DMA) operations is disclosed. When a network interface controller (NIC) connected to a host computer receives a broadcast packet, it can transmit a request to an agent process running on the host computer for a plurality of destination buffers. In some embodiments, the request to the agent comprises all or part of the packet, or metadata about the packet. In such embodiments, the agent can use the contents of the request to identify services that should receive the packet. Alternatively, the NIC can identify the destination services and can transmit identifiers for the destination services to the agent. The agent can transmit requests for memory buffers to the services and can receive memory location identifiers in response. The agent can transmit the identifiers to the NIC, which can perform multiple DMA operations to write the broadcast packet to the identified memory locations.
Type: Grant
Filed: March 16, 2022
Date of Patent: February 18, 2025
Assignee: FS, Inc.
Inventors: Hao Cai, Timothy S. Michels, Daniel J. McDermott, David Ryan
-
Patent number: 12216806
Abstract: Memory devices, systems including memory devices, and methods of operating memory devices are described, in which self-lock security may be implemented to control access to a fuse array (or other secure features) of the memory devices based on a predefined event associated with the memory device operation. The predefined event may include an operating parameter of the memory device, one or more commands directed to the memory device, or both. The memory device may monitor the predefined event and determine that the predefined event satisfies a threshold. The threshold may be related to a time elapsed since the predefined event has occurred or a certain pattern in the one or more commands. Subsequently, the memory device may disable a circuit configured to access the fuse array based on the determination such that an access to the fuse array is no longer allowed.
Type: Grant
Filed: November 7, 2022
Date of Patent: February 4, 2025
Assignee: Micron Technology, Inc.
Inventors: Nathaniel J. Meier, Brenton P. Van Leeuwen
-
Patent number: 12204897
Abstract: Apparatuses, systems, and techniques to perform computational operations in response to one or more compute uniform device architecture (CUDA) programs. In at least one embodiment, one or more computational operations are to cause one or more other computational operations to wait until a portion of matrix multiply-accumulate (MMA) operations have been performed.
Type: Grant
Filed: November 30, 2022
Date of Patent: January 21, 2025
Assignee: NVIDIA CORPORATION
Inventors: Harold Carter Edwards, Kyrylo Perelygin, Maciej Tyrlik, Gokul Ramaswamy Hirisave Chandra Shekhara, Balaji Krishna Yugandhar Atukuri, Rishkul Kulkarni, Konstantinos Kyriakopoulos, Edward H. Gornish, David Allan Berson, Bageshri Sathe, James Player, Aman Arora, Alan Kaatz, Andrew Kerr, Haicheng Wu, Cris Cecka, Vijay Thakkar, Sean Treichler, Jack H. Choquette, Aditya Avinash Atluri, Apoorv Parle, Ronny Meir Krashinsky, Cody Addison, Girish Bhaskarrao Bharambe
-
Patent number: 12204592
Abstract: A method for use in a computing device, the method comprising: detecting a first command to copy data to a remote system, the first command including a scatter-gather list (SGL) that identifies the data that is desired to be copied, the SGL including a plurality of entries that identify a plurality of memory regions, each of the plurality of entries identifying a different one of the plurality of memory regions; generating metadata for the SGL, for each of the data blocks in the plurality of memory regions, wherein the metadata identifies at least one of: (i) a location where a respective integrity field for the data block is stored, and (ii) a respective integrity operation that is required to be performed on the data block; registering the plurality of entries and the metadata under a key; generating a command capsule that includes the key; and transmitting the command capsule.
Type: Grant
Filed: June 2, 2023
Date of Patent: January 21, 2025
Assignee: Dell Products L.P.
Inventor: Jinxian Xing
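A small Python sketch of the per-block metadata generation step this abstract describes: walking the SGL entries, splitting each region into data blocks, and recording for each block where its integrity field lives and which integrity operation applies. The block size, integrity-field placement, and operation name are illustrative assumptions, not details from the patent.

```python
BLOCK_SIZE = 512  # hypothetical logical block size

def sgl_block_metadata(sgl):
    """For each (address, length) SGL entry, emit one metadata record per
    data block: where its integrity field would be stored and which
    integrity operation should be performed on it."""
    meta = []
    for addr, length in sgl:
        for off in range(0, length, BLOCK_SIZE):
            meta.append({
                "block_addr": addr + off,
                # hypothetical: integrity fields kept in a parallel region,
                # 8 bytes per block
                "pi_addr": 0x8000_0000 + (addr + off) // BLOCK_SIZE * 8,
                "op": "verify-crc",
            })
    return meta

# Two disjoint memory regions: 1024 bytes (2 blocks) and 512 bytes (1 block)
entries = [(0x1000, 1024), (0x4000, 512)]
meta = sgl_block_metadata(entries)
```

In the abstract's flow, these records would then be registered together with the SGL entries under a key carried in the command capsule.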
-
Patent number: 12197970
Abstract: Embodiments of a multi-processor array are disclosed that may include a plurality of processors, local memories, configurable communication elements, direct memory access (DMA) engines, and a DMA controller. Each processor may be coupled to one of the local memories, and the plurality of processors, local memories, and configurable communication elements may be coupled together in an interspersed arrangement. The DMA controller may be configured to control the operation of the plurality of DMA engines.
Type: Grant
Filed: May 6, 2021
Date of Patent: January 14, 2025
Assignee: HyperX Logic, Inc.
Inventors: Carl S. Dobbs, Michael R. Trocino, Keith M. Bindloss
-
Patent number: 12189475
Abstract: A storage device includes a non-volatile memory device that includes memory blocks each including one or more memory cells, a combo integrated circuit (IC) that includes a temperature sensor and a memory, and a controller that is connected with the combo IC through first channels and controls the non-volatile memory device to write or read data in or from selected memory cells. When the controller determines that a first event occurs based on temperature data read from the combo IC, the controller records first event data in the memory of the combo IC. In a first operation mode, the combo IC outputs the first event data to the controller through the first channels. In a second operation mode, under control of an external host, the combo IC outputs the first event data to the external host through second channels different from the first channels.
Type: Grant
Filed: May 10, 2023
Date of Patent: January 7, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jihong Kim, Yongkoo Jeong, Jooyoung Kim
-
Patent number: 12189553
Abstract: A method includes transmitting first data with a first priority through a first dedicated interface on a transmit side of a PCIe system. The method also includes transmitting second data with a second priority through a second dedicated interface on the transmit side of the PCIe system. The method includes transmitting the first data and the second data to a receive side of the PCIe system using two or more virtual channels over a PCIe link, where the first data uses a first virtual channel and the second data uses a second virtual channel.
Type: Grant
Filed: September 25, 2023
Date of Patent: January 7, 2025
Assignee: Texas Instruments Incorporated
Inventors: Chunhua Hu, Sanand Prasad
-
Patent number: 12182201
Abstract: A graph data storage method for a non-uniform memory access (NUMA) processing system is provided. The processing system includes at least one computing device, each computing device corresponding to multiple memories, and each memory corresponding to multiple processors. The method includes: performing three-level partitioning on graph data to obtain multiple third-level partitions based on a communication mode among computing device(s), memories, and processors; and separately storing graph data of the multiple third-level partitions in NUMA nodes corresponding to the processors. A graph data storage system and an electronic device are further provided.
Type: Grant
Filed: January 27, 2021
Date of Patent: December 31, 2024
Assignee: ZHEJIANG TMALL TECHNOLOGY CO., LTD.
Inventors: Wenfei Fan, Wenyuan Yu, Jingbo Xu, Xiaojian Luo
-
Patent number: 12164436
Abstract: Apparatus comprises address processing circuitry to detect information relating to an input memory address provided by address information tables; the address processing circuitry being configured to select an address information table at a given table level according to earlier information entry in an address information table; and the address processing circuitry being configured to select an information entry in the selected address information table according to an offset component, the offset component being defined so that contiguous instances of that portion of the input memory address indicate contiguously addressed information entries; the address processing circuitry comprising detector circuitry to detect whether indicator data is set to indicate whether a group of one or more contiguously addressed information entries in the selected address information table provide at least one base address indicating a location within a contiguously addressed region comprising multiple address information table
Type: Grant
Filed: May 20, 2021
Date of Patent: December 10, 2024
Assignee: Arm Limited
Inventor: Andrew Brookfield Swaine
-
Patent number: 12131163
Abstract: A processor may implement self-relative memory addressing by providing load and store instructions that include self-relative addressing modes. A memory address may contain a self-relative pointer, where the memory address stores a memory offset that, when added to the memory address, defines another memory address. The self-relative addressing mode may also support invalid memory addresses using a reserved offset value, where a load instruction providing the self-relative addressing mode may return a NULL value or generate an exception when determining that the stored offset value is equal to the reserved offset value and where a store instruction providing the self-relative addressing mode may store the reserved offset value when determining that the pointer is an invalid or NULL memory address.
Type: Grant
Filed: April 30, 2021
Date of Patent: October 29, 2024
Assignee: Oracle International Corporation
Inventor: Mario Wolczko
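A toy Python model of the self-relative pointer encoding this abstract describes: a cell stores an offset rather than an absolute address, the load adds the offset back to the cell's own address, and one reserved offset value encodes NULL. The choice of 0 as the reserved value and the dictionary "address space" are illustrative assumptions, not details from the patent.

```python
RESERVED = 0     # hypothetical reserved offset meaning "invalid/NULL"

memory = {}      # toy address space: address -> stored offset

def store_selfrel(addr, target):
    """Store a self-relative pointer: the cell at `addr` holds the offset
    to `target`; a None target stores the reserved value instead."""
    memory[addr] = RESERVED if target is None else target - addr

def load_selfrel(addr):
    """Load a self-relative pointer: the reserved offset decodes to None
    (a hardware load might instead raise an exception); any other offset
    decodes to addr + offset."""
    off = memory[addr]
    return None if off == RESERVED else addr + off

store_selfrel(0x100, 0x180)   # cell 0x100 points to 0x180 (offset +0x80)
store_selfrel(0x200, None)    # cell 0x200 holds the reserved NULL encoding
```

One consequence of reserving offset 0, as in this sketch, is that a cell cannot point at itself; a real design would pick the reserved value with that trade-off in mind.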
-
Patent number: 12117947
Abstract: The present disclosure relates to information processing methods, physical machines, and peripheral component interconnect express (PCIE) devices. In one example method, a PCIE device receives, in a live migration process of a to-be-migrated virtual machine (VM), a packet corresponding to the to-be-migrated VM, where the to-be-migrated VM is one of a plurality of VMs. The PCIE device determines a direct memory access (DMA) address based on the packet. The PCIE device sends the DMA address to a physical function (PF) driver.
Type: Grant
Filed: March 18, 2021
Date of Patent: October 15, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Changchun Ouyang, Shui Cao, Zihao Xiang
-
Patent number: 12112395
Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one exemplary implementation, an address allocation process comprises: establishing space for managed pointers across a plurality of memories, including allocating one of the managed pointers with a first portion of memory associated with a first one of a plurality of processors; and performing a process of automatically managing accesses to the managed pointers across the plurality of processors and corresponding memories. The automated management can include ensuring consistent information associated with the managed pointers is copied from the first portion of memory to a second portion of memory associated with a second one of the plurality of processors based upon initiation of an access to the managed pointers from the second one of the plurality of processors.
Type: Grant
Filed: July 2, 2020
Date of Patent: October 8, 2024
Assignee: NVIDIA Corporation
Inventors: Stephen Jones, Vivek Kini, Piotr Jaroszynski, Mark Hairgrove, David Fontaine, Cameron Buschardt, Lucien Dunning, John Hubbard
-
Patent number: 12093209
Abstract: Technologies for batching remote descriptors of serialized objects in streaming pipelines are described. One method of a first computing device generates a streaming batch of remote descriptors. Each remote descriptor uniquely identifies a contiguous block of a serialized object. The first computing device sends at least one of the remote descriptors to a second computing device before the streaming batch is completed. At least some contents of a contiguous block are obtained for storage at a second memory associated with the second computing device before the streaming batch is completed.
Type: Grant
Filed: July 11, 2022
Date of Patent: September 17, 2024
Assignee: Nvidia Corporation
Inventors: Ryan Olson, Michael Demoret, Bartley Richardson
-
Patent number: 12079514
Abstract: Methods, systems, and devices for improved performance in a fragmented memory system are described. The memory system may detect conditions associated with a random access parameter stored at the memory system to assess a level of data fragmentation. The memory system may determine that a random access parameter, such as a data fragmentation parameter, a size of information associated with an access command, a depth of a command queue, a delay duration, or a quantity of commands satisfies a threshold. If one or more of the random access parameters satisfies the threshold, the memory system may transmit a request for the host system to increase an associated clock frequency. The host system may increase the number of commands sent to the memory system in a duration of time. That is, the host system may compensate for a slow-down due to data storage fragmentation by increasing the command processing rate.
Type: Grant
Filed: March 9, 2022
Date of Patent: September 3, 2024
Assignee: Micron Technology, Inc.
Inventors: Vanaja Urrinkala, Sharath Chandra Ambula
-
Patent number: 12066955
Abstract: Systems and methods for transferring data are disclosed herein. In an embodiment, a method of transferring data includes reading a plurality of bytes from a first memory, discarding first bytes of the plurality of bytes, realigning second bytes of the plurality of bytes, and storing the realigned second bytes in a second memory.
Type: Grant
Filed: December 30, 2021
Date of Patent: August 20, 2024
Assignee: HUGHES NETWORK SYSTEMS, LLC
Inventors: Aneeshwar Danda, Robert H. Lager, Sahithi Vemuri
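A short Python sketch of the read/discard/realign flow this abstract describes: word-sized beats are read from the source, leading bytes before the payload are discarded, and the remaining bytes are repacked into whole, aligned words for the destination. The 4-byte word size, zero-padding of the tail, and function name are illustrative assumptions, not details from the patent.

```python
def dma_realign(src_words, byte_offset, nbytes, word=4):
    """Copy `nbytes` starting `byte_offset` bytes into a word-addressed
    source buffer, discarding the leading bytes and emitting realigned
    whole words for the destination."""
    # read the source beats and drop the bytes before/after the payload
    stream = b"".join(src_words)[byte_offset:byte_offset + nbytes]
    # pad the tail so the destination only ever sees whole words
    pad = (-len(stream)) % word
    stream += b"\x00" * pad
    return [stream[i:i + word] for i in range(0, len(stream), word)]

# Source holds three 4-byte words; copy 6 bytes starting at offset 2
src = [b"ABCD", b"EFGH", b"IJKL"]
out = dma_realign(src, byte_offset=2, nbytes=6)
```

Here the payload "CDEFGH" starts mid-word in the source but lands word-aligned at the destination, which is the point of the realignment step.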
-
Patent number: 12066960
Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffer, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
Type: Grant
Filed: December 27, 2021
Date of Patent: August 20, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
-
Patent number: 12047296
Abstract: Remote Direct Memory Access (RDMA) over Internet Protocol and/or Ethernet has gained attention for datacenters. However, the sheer scale of the required RDMA networks presents a challenge. Accordingly, optical infrastructures employing wavelength division multiplexing within a datacenter environment have also gained attention through the wide, low-cost bandwidth they offer with easy expansion within this environment. However, for many applications latency, rather than bandwidth between devices, is the significant issue. Accordingly, the inventors have established a design methodology where the network prioritises latency over bandwidth, and where bandwidth utilization and management offer reduced latency for such applications. Accordingly, the inventors exploit loss-tolerant RDMA architectures with quota-based traffic control, message-level load balancing, and a global view of virtual connections over commodity switches with simple priority queues.
Type: Grant
Filed: July 14, 2022
Date of Patent: July 23, 2024
Inventor: Yunqu Liu
-
Patent number: 12026005
Abstract: Embodiments are described for a data processing tool configured to cease operations of a plurality of database readers when detecting a congestion condition in the data processing tool. In some embodiments, the data processing tool comprises a memory, one or more processors, and a plurality of database readers. The one or more processors, coupled to the memory and the plurality of database readers, are configured to determine a congestion condition in at least one data pipeline of a plurality of data pipelines of the data processing tool. Each data pipeline of the plurality of data pipelines connects a database reader and a transformer of the data processing tool, a transformer and a database writer of the data processing tool, or two transformers of the data processing tool. The one or more processors are further configured to refrain from reading data from one or more databases responsive to the congestion condition.
Type: Grant
Filed: October 18, 2022
Date of Patent: July 2, 2024
Assignee: SAP SE
Inventors: Reinhard Sudmeier, Sreenivasulu Gelle, Alexander Ocher
-
Patent number: 12026098
Abstract: Techniques are disclosed relating to updating page pools in the context of cached page pool descriptors. In some embodiments, a processor is configured to assign a set of processing work to a first page pool of memory pages. Page manager circuitry may cache page pool descriptor entries in cache circuitry, where a given page pool descriptor entry indicates a set of pages assigned to a page pool. In response to a determination to grow the first page pool, the processor may communicate a grow list to the page manager circuitry, that identifies a set of memory blocks from the memory to be added to the first page pool. The page manager circuitry may then update a cached page pool descriptor entry for the first page pool to indicate the added memory blocks and generate a signal to inform the processor that the cached page pool descriptor entry is updated.
Type: Grant
Filed: April 21, 2022
Date of Patent: July 2, 2024
Assignee: Apple Inc.
Inventors: Arjun Thottappilly, David A. Gotwalt, Frank W. Liljeros
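A minimal Python model of the grow-list flow this abstract describes: the page manager keeps cached descriptor entries (here, just sets of page identifiers), merges a grow list into the entry for a pool, and records a "descriptor updated" signal for the processor. The class and field names are illustrative assumptions, not details from the patent.

```python
class PagePoolCache:
    """Toy cache of page-pool descriptor entries; growing a pool updates
    the cached entry with the new blocks and signals the processor."""

    def __init__(self):
        self.entries = {}     # pool id -> set of pages in the pool
        self.signals = []     # record of "entry updated" notifications

    def grow(self, pool, grow_list):
        # merge the grow list into the cached descriptor entry
        self.entries.setdefault(pool, set()).update(grow_list)
        # inform the processor that the cached entry is now up to date
        self.signals.append(pool)

pm = PagePoolCache()
pm.grow("pool0", {0x10, 0x11})   # initial assignment of two pages
pm.grow("pool0", {0x12})         # grow the pool by one more block
```

A real implementation would track descriptor state in hardware registers and raise an interrupt or doorbell rather than append to a list; the sketch only shows the update-then-signal ordering.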
-
Patent number: 12019539
Abstract: Exemplary methods, apparatuses, and systems including an adaptive configuration manager for controlling configurations of memory devices. The adaptive configuration manager receives a plurality of payloads from a host. The adaptive configuration manager identifies a profile of the host from a plurality of pre-determined host profiles. The adaptive configuration manager identifies a distribution of the plurality of memory access requests, the distribution including a set of sequential payloads and a set of random payloads. The adaptive configuration manager generates a memory access command using the profile of the host including a distribution of random and sequential access. The adaptive configuration manager executes the memory access command using the profile and a payload of the plurality of payloads.
Type: Grant
Filed: July 1, 2022
Date of Patent: June 25, 2024
Assignee: MICRON TECHNOLOGY, INC.
Inventor: Manjunath Chandrashekaraiah
-
Patent number: 12015954
Abstract: Systems and methods for causing control information to be sent from a source base station to a target base station via a user plane are provided. In a network that uses control plane and user plane separation, the systems and methods provided herein reduce time delays between handovers in the control plane and handovers in the user plane. The time delays are reduced by minimizing the duration of the handover preparation phase by sending control information with payload packets between network functions via in-band signaling in the user plane.
Type: Grant
Filed: November 1, 2018
Date of Patent: June 18, 2024
Assignee: Nokia Solutions and Networks GmbH & Co. KG
Inventor: Klaus Hoffman
-
Patent number: 12013794
Abstract: According to a first aspect, execution logic is configured to perform a linear capability transfer operation which transfers a physical capability from a partition of a first software module to a partition of a second software module without retaining it in the partition of the first. According to a second, alternative or additional aspect, the execution logic is configured to perform a sharding operation whereby a physical capability is divided into at least two instances, which may later be combined.
Type: Grant
Filed: October 28, 2020
Date of Patent: June 18, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: David T. Chisnall, Sylvan W. Clebsch, Cédric Alain Marie Christophe Fournet
-
Patent number: 12007918
Abstract: Provided are a Peripheral Component Interconnect Express (PCIe) interface device and a method of operating the same. The PCIe interface device may include a performance analyzer and a traffic class controller. The performance analyzer may be configured to measure throughputs of multiple functions executed on one or more Direct Memory Access (DMA) devices. The traffic class controller may be configured to allocate traffic class values to transaction layer packets received from the multiple functions based on the throughputs of the multiple functions.
Type: Grant
Filed: September 3, 2021
Date of Patent: June 11, 2024
Assignee: SK hynix Inc.
Inventors: Yong Tae Jeon, Ji Woon Yang, Sang Hyun Yoon, Se Hyeon Han
-
Patent number: 12002167
Abstract: A method in a virtual, augmented, or mixed reality system includes a GPU determining/detecting an absence of image data. The method also includes shutting down a portion/component/function of the GPU. The method further includes shutting down a communication link between the GPU and a DB. Moreover, the method includes shutting down a portion/component/function of the DB. In addition, the method includes shutting down a communication link between the DB and a display panel. The method further includes shutting down a portion/component/function of the display panel.
Type: Grant
Filed: November 30, 2021
Date of Patent: June 4, 2024
Assignee: Magic Leap, Inc.
Inventors: Jose Felix Rodriguez, Ricardo Martinez Perez, Reza Nourai, Robert Blake Taylor
-
Patent number: 12001681
Abstract: This application provides a storage device, a distributed storage system, and a data processing method, and belongs to the field of storage technologies. In this application, an AI apparatus is disposed inside a storage device, so that the storage device has an AI computing capability. In addition, the storage device further includes a processor and a hard disk, and therefore further has a service data storage capability. Therefore, convergence of storage and AI computing power is implemented. An AI parameter and service data are transmitted inside the storage device through a high-speed interconnect network without a need of being forwarded through an external network. Therefore, a path for transmitting the service data and the AI parameter is greatly shortened, and the service data can be loaded nearby, thereby accelerating loading.
Type: Grant
Filed: February 22, 2022
Date of Patent: June 4, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Jinzhong Liu, Hongdong Zhang
-
Patent number: 12001357
Abstract: A direct memory access (DMA) circuit is provided. The DMA circuit may include a plurality of groups of direct memory access channels, wherein each of the groups includes at least one DMA channel and a resource usage counter configured to count an execution time in which one of the DMA channels of the group is executed, and an arbiter configured to evaluate a value of the resource usage counter of a group upon a request for execution time by one of the DMA channels of the group, and, taking into account a result of the evaluation, to assign, delay assignment, or deny execution time for using the direct memory access to one of the groups.
Type: Grant
Filed: March 4, 2022
Date of Patent: June 4, 2024
Assignee: Infineon Technologies AG
Inventors: Frank Hellwig, Sandeep Vangipuram
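A compact Python sketch of the counter-based arbitration this abstract describes: each channel group has a resource usage counter, and the arbiter checks that counter when a channel in the group requests execution time, granting or denying accordingly. The fixed per-group budget and the grant/deny return values are illustrative assumptions; the patent's arbiter may also delay assignment rather than deny it.

```python
class GroupArbiter:
    """Grant DMA execution time to channel groups, tracking a per-group
    resource usage counter against a hypothetical fixed budget."""

    def __init__(self, budget):
        self.budget = budget
        self.used = {}                  # group id -> execution time consumed

    def request(self, group, cycles):
        """Evaluate the group's usage counter for a request of `cycles`
        execution time; return 'grant' or 'deny'."""
        spent = self.used.get(group, 0)
        if spent + cycles > self.budget:
            return "deny"               # a real arbiter might delay instead
        self.used[group] = spent + cycles
        return "grant"

arb = GroupArbiter(budget=100)
r1 = arb.request("g0", 60)   # within budget
r2 = arb.request("g0", 60)   # would exceed g0's budget
r3 = arb.request("g1", 60)   # g1 has its own independent counter
```

The point of the per-group counter is isolation: a busy group exhausting its budget cannot starve channels in other groups.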
-
Patent number: 12001855
Abstract: In a general aspect, an observability pipeline system includes a pack data processing engine. In some aspects, an observability pipeline system includes data processing engines that are configured according to system default configuration settings and system local configuration settings. A pack file received from a remote computer system contains routes, pipelines, and pack default configuration settings. A pack data processing engine includes the routes and pipelines from the pack file. Pack local configuration settings, defined for the pack data processing engine, inherit at least one of the system default configuration settings and at least one of the pack default configuration settings. The pack local configuration settings are isolated from the system local configuration settings. When pipeline data is processed in the observability pipeline system on the computer system, the pack data processing engine is applied to the pipeline data.
Type: Grant
Filed: May 17, 2022
Date of Patent: June 4, 2024
Assignee: Cribl, Inc.
Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito
-
Patent number: 11983128
Abstract: Techniques to reduce overhead in a direct memory access (DMA) engine can include processing descriptors from a descriptor queue to obtain a striding configuration to generate tensorized memory descriptors. The striding configuration can include, for each striding dimension, a stride and a repetition number indicating a number of times to repeat striding in the corresponding striding dimension. One or more sets of tensorized memory descriptors can be generated based on the striding configuration. Data transfers are then performed based on the generated tensorized memory descriptors.
Type: Grant
Filed: December 16, 2022
Date of Patent: May 14, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Kun Xu, Ron Diamant, Ilya Minkin, Mohammad El-Shabani, Raymond S. Whiteside, Uday Shilton Udayaselvam
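A brief Python sketch of the descriptor expansion this abstract describes: a single base descriptor plus a striding configuration (a stride and a repetition number per striding dimension) expands into a full set of tensorized (address, length) descriptors. The descriptor representation and parameter values are illustrative assumptions, not details from the patent.

```python
from itertools import product

def tensorize(base, length, striding):
    """Expand one base descriptor into tensorized (addr, length)
    descriptors. `striding` is a list of (stride, reps) pairs, one per
    striding dimension; reps is the repetition number for that stride."""
    descs = []
    ranges = [range(reps) for _, reps in striding]
    for idx in product(*ranges):
        # offset = sum over dimensions of (repetition index * stride)
        offset = sum(i * stride for i, (stride, _) in zip(idx, striding))
        descs.append((base + offset, length))
    return descs

# Two striding dimensions: 3 repetitions of stride 0x100, then 2 of 0x10
descs = tensorize(0x2000, 16, [(0x100, 3), (0x10, 2)])
```

The overhead reduction comes from the ratio: one queued descriptor with a striding configuration stands in for the six descriptors the engine would otherwise have to fetch individually.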
-
Patent number: 11952798
Abstract: Provided are a motor driving circuit, a control method therefor, and a driving chip. The motor driving circuit includes a logic module and a push-pull module, a channel selection module, an instruction recognition module, and an isolating switch module. An input signal is outputted by the logic module and the push-pull module to control the motor. The channel selection module is configured to select a channel for the input signal so that the input signal is connected to the isolating switch module or the instruction recognition module, or disconnected. The instruction recognition module is configured to perform a corresponding operation on the isolating switch module according to an inputted instruction. The isolating switch module is configured to receive an instruction of the channel selection module and an instruction of the instruction recognition module to connect or disconnect the logic module.
Type: Grant
Filed: March 18, 2021
Date of Patent: April 9, 2024
Assignee: GREE ELECTRIC APPLIANCES, INC. OF ZHUHAI
Inventors: Ji He, Junchao Chen, Yang Lan
-
Patent number: 11947460
Abstract: Apparatus, method and code for fabrication of the apparatus, the apparatus comprising a cache providing a plurality of cache lines, each cache line storing a block of data; cache access control circuitry, responsive to an access request, to determine whether a hit condition is present in the cache; and cache configuration control circuitry to set, in response to a merging trigger event, merge indication state identifying multiple cache lines to be treated as a merged cache line to store multiple blocks of data, wherein when the merge indication state indicates that a given cache line is part of the merged cache line, the cache access control circuitry is responsive to detecting the hit condition to allow access to any of the data blocks stored in the multiple cache lines forming the merged cache line.
Type: Grant
Filed: April 26, 2022
Date of Patent: April 2, 2024
Assignee: Arm Limited
Inventors: Vladimir Vasekin, David Michael Bull, Vincent Rezard, Anton Antonov
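The merged-cache-line behavior can be modeled in a few lines. This is an assumed simplification (invented `Cache` class, dictionary-backed lines) of the idea in the abstract: once merge indication state groups several lines, a hit on any member gives access to every block in the group.

```python
# Assumed model of merge indication state: a hit on any line that is
# part of a merged group exposes all blocks stored across the group.

class Cache:
    def __init__(self):
        self.lines = {}        # line index -> data block
        self.merge_group = {}  # line index -> frozenset of merged indices

    def set_merged(self, indices):
        """Record merge indication state for a group of cache lines."""
        group = frozenset(indices)
        for i in indices:
            self.merge_group[i] = group

    def access(self, index):
        """On a hit, return every block reachable via the merged line;
        on a miss, return None."""
        if index not in self.lines:
            return None
        group = self.merge_group.get(index, frozenset([index]))
        return {i: self.lines[i] for i in group if i in self.lines}
```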
-
Patent number: 11947973
Abstract: The present disclosure is related to a system that may include a first computing device that may perform a plurality of data processing operations and a second computing device that may receive a modification to one or more components of a first data operation, identify a first subset of the plurality of data processing operations that corresponds to the one or more components, and determine one or more alternate parameters associated with the one or more components. The second computing device may then identify a second subset of the plurality of data processing operations that corresponds to the one or more alternate parameters and send a notification to the first computing device indicative of a modification to the first subset and the second subset.
Type: Grant
Filed: September 13, 2021
Date of Patent: April 2, 2024
Assignee: United Services Automobile Association (USAA)
Inventors: Oscar Guerra, Megan Sarah Jennings
-
Patent number: 11934332
Abstract: Devices, methods, and systems are provided. In one example, a device is described to include a device interface that receives data from at least one data source; a data shuffle unit that collects the data received from the at least one data source, receives a descriptor that describes a data shuffle operation to perform on the data received from the at least one data source, performs the data shuffle operation on the collected data to produce shuffled data, and provides the shuffled data to at least one data target.
Type: Grant
Filed: February 1, 2022
Date of Patent: March 19, 2024
Assignee: MELLANOX TECHNOLOGIES, LTD.
Inventors: Daniel Marcovitch, Dotan David Levi, Eyal Srebro, Eliel Peretz, Roee Moyal, Richard Graham, Gil Bloch, Sean Pieper
-
Patent number: 11914541
Abstract: In example implementations, a computing device is provided. The computing device includes an expansion interface, a first device, a second device, and a processor communicatively coupled to the expansion interface. The expansion interface includes a plurality of slots. Two slots of the plurality of slots are controlled by a single reset signal. The first device is connected to a first slot of the two slots and has a feature that is compatible with the single reset signal. The second device is connected to a second slot of the two slots and does not have the feature compatible with the single reset signal. The processor is to detect the first device connected to the first slot and the second device connected to the second slot and disable the feature by preventing the first slot and the second slot from receiving the single reset signal.
Type: Grant
Filed: March 29, 2022
Date of Patent: February 27, 2024
Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventors: Wen Bin Lin, ChiWei Ding, Chun Yi Liu, Shuo-Cheng Cheng, Chao-Wen Cheng
-
Patent number: 11886365
Abstract: Techniques for improving the handling of peripherals in a computer system, including through the use of a DMA control circuit that helps manage the flow of data between memory and the peripherals via an intermediate storage buffer. The DMA control circuit is configured to control timing of DMA transfers between sample buffers in the memory and the intermediate storage buffer. The DMA control circuit may output a priority value of the DMA control circuit for accesses to memory, where the priority value is based on stored quality of service (QoS) information and current channel data buffer levels for different DMA channels. The DMA control circuit may separately arbitrate between multiple active transmit and receive channels. Still further, the DMA control circuit may store, for a given data transfer over a particular DMA channel, timestamp information indicative of completion of the DMA and peripheral-side operations.
Type: Grant
Filed: September 14, 2021
Date of Patent: January 30, 2024
Assignee: Apple Inc.
Inventors: Brett D. George, Rohit K. Gupta, Do Kyung Kim, Paul W. Glendenning
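One way to picture the priority computation described above is as a function of QoS weights and buffer levels. The sketch below uses invented names and an assumed formula (weight times buffer pressure); the patent does not specify the exact combination, only that priority is derived from stored QoS information and current channel buffer levels.

```python
# Assumed priority model: each DMA channel reports a QoS weight and a
# "pressure" in [0, 1] (how close its receive buffer is to overflow, or
# its transmit buffer to underflow). The circuit's memory-access
# priority is the highest urgency across active channels.

def dma_priority(channels):
    """channels: list of dicts with 'qos' and 'pressure' keys.
    Returns the priority value to present to the memory system."""
    best = 0.0
    for ch in channels:
        urgency = ch["qos"] * ch["pressure"]
        best = max(best, urgency)
    return best
```

Taking the maximum (rather than the sum) reflects the intuition that one nearly overflowing channel should raise the whole circuit's priority.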
-
Patent number: 11853600
Abstract: A memory module with multiple memory devices includes a buffer system that manages communication between a memory controller and the memory devices. The memory module additionally includes a command input port to receive command and address signals from a controller and, in support of capacity extensions, a command relay circuit coupled to the command port to convey the commands and addresses from the memory module to another module or modules. Relaying commands and addresses introduces a delay, and the buffer system that manages communication between the memory controller and the memory devices can be configured to time data communication to account for that delay.
Type: Grant
Filed: April 20, 2021
Date of Patent: December 26, 2023
Assignee: Rambus Inc.
Inventors: Frederick A Ware, Scott C. Best
-
Patent number: 11853784
Abstract: An example electronic apparatus is for accelerating a para-virtualization network interface. The electronic apparatus includes a descriptor hub performing bi-directional communication with a guest memory accessible by a guest and with a host memory accessible by a host. The guest includes a plurality of virtual machines. The host includes a plurality of virtual function devices. The virtual machines are communicatively coupled to the electronic apparatus through a central processing unit. The communication is based upon para-virtualization packet descriptors and network interface controller virtual function-specific descriptors. The electronic apparatus also includes a device association table communicatively coupled to the descriptor hub and to store associations between the virtual machines and the virtual function devices. The electronic apparatus further includes an input-output memory map unit (IOMMU) to perform direct memory access (DMA) remapping and interrupt remapping.
Type: Grant
Filed: December 22, 2016
Date of Patent: December 26, 2023
Assignee: Intel Corporation
Inventors: Yigang Zhou, Cunming Liang
-
Patent number: 11836083
Abstract: A compute node includes a memory, a processor and a peripheral device. The memory is to store memory pages. The processor is to run software that accesses the memory, and to identify one or more first memory pages that are accessed by the software in the memory. The peripheral device is to directly access one or more second memory pages in the memory of the compute node using Direct Memory Access (DMA), and to notify the processor of the second memory pages that are accessed using DMA. The processor is further to maintain a data structure that tracks both (i) the first memory pages as identified by the processor and (ii) the second memory pages as notified by the peripheral device.
Type: Grant
Filed: November 29, 2021
Date of Patent: December 5, 2023
Assignee: MELLANOX TECHNOLOGIES, LTD.
Inventors: Ran Avraham Koren, Ariel Shahar, Liran Liss, Gabi Liron, Aviad Shaul Yehezkel
-
Patent number: 11822812
Abstract: A method of providing more efficient and streamlined data access to a DRAM storage medium by all of multiple processors within a system on a chip (SoC) requires every processor to send a use-of-bus request. When the request is for local access (that is, for access to that part of DRAM which is reserved for that processor), the processor reads or writes to the DRAM storage medium through its own arbiter and own memory controller. When the request is for non-local access (that is, to DRAM within the storage medium which is reserved for another processor), the processor reads or writes to the "foreign" address in the storage medium through its own arbiter, its own memory controller, and its own DMA controller. A data access system is also disclosed.
Type: Grant
Filed: December 17, 2021
Date of Patent: November 21, 2023
Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
Inventor: Chiung-Hsi Fan-Chiang
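The local-versus-foreign routing rule in this abstract reduces to a simple address-range check. The sketch below uses assumed names (`route_access`, a `regions` map) purely to illustrate the two paths: local accesses traverse only the processor's own arbiter and memory controller, while accesses to another processor's DRAM region additionally pass through the requester's DMA controller.

```python
# Assumed model of the SoC routing rule: the chain of units an access
# traverses depends on whether the target address falls in the
# requesting processor's own reserved DRAM region.

def route_access(proc_id, addr, regions):
    """regions: {proc_id: (start, end)} reserved address ranges.
    Returns the ordered list of units the access passes through."""
    start, end = regions[proc_id]
    if start <= addr < end:
        # Local access: own arbiter and memory controller only.
        return ["arbiter", "memory_controller"]
    # Non-local ("foreign") access: DMA controller is added to the path.
    return ["arbiter", "memory_controller", "dma_controller"]
```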
-
Patent number: 11810056
Abstract: Systems and methods are described herein for routing data by transferring a physical storage device for at least part of a route between source and destination locations. In one example, a computing resource service provider may receive a request to transfer data from a customer center to a data center. The service provider may determine a route, which includes one or more of a physical path or a network path, for the data loaded onto a physical storage device to reach the data center from the customer center. Determining the route may include associating respective cost values to individual physical and network paths between physical stations between the customer and end data centers, and selecting one or more of the paths to reduce a total cost of the route. Route information may then be associated with the physical storage device based on the route.
Type: Grant
Filed: March 4, 2020
Date of Patent: November 7, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Ryan Michael Eccles, Siddhartha Roy, Vaibhav Tyagi, Wayne William Duso, Danny Wei
-
Patent number: 11809835
Abstract: A method, computer program product, and computing system for defining a queue. The queue may be based on a linked list and may be a first-in, first-out (FIFO) queue that may be configured to be used with multiple producers and a single consumer. The queue may include a plurality of queue elements. A tail element and a head element may be defined from the plurality of elements within the queue. The tail element may point to a last element of the plurality of elements and the head element may point to a first element of the plurality of elements. An element may be dequeued from the tail element, which may include determining if the tail element is in a null state. An element may be enqueued to the head element, which may include adding a new element to the queue.
Type: Grant
Filed: April 22, 2021
Date of Patent: November 7, 2023
Assignee: EMC IP Holding Company, LLC
Inventors: Vladimir Shveidel, Lior Kamran
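Following the abstract's naming, where elements enter at the head and leave at the tail, the linked-list FIFO can be sketched as below. This is a minimal single-threaded illustration with invented class names; the patented queue is designed for multiple producers and a single consumer, which would require atomic operations not shown here.

```python
# Single-threaded sketch of the linked-list FIFO described: enqueue at
# the head, dequeue from the tail, with a null-state check on empty.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None   # link from older element toward newer

class Fifo:
    def __init__(self):
        self.head = None   # most recently enqueued element
        self.tail = None   # oldest element, next to dequeue

    def enqueue(self, value):
        node = Node(value)
        if self.head is not None:
            self.head.next = node
        self.head = node
        if self.tail is None:
            self.tail = node

    def dequeue(self):
        if self.tail is None:      # null state: queue is empty
            return None
        node = self.tail
        self.tail = node.next
        if self.tail is None:
            self.head = None
        return node.value
```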
-
Patent number: 11789644
Abstract: Semiconductor memory systems and architectures for shared memory access implement memory-centric structures using a quasi-volatile memory. In one embodiment, a memory processor array includes an array of memory cubes, each memory cube in communication with a processor mini core to form a computational memory. In another embodiment, a memory system includes processing units and one or more mini core-memory modules, both in communication with a memory management unit. Mini processor cores in each mini core-memory module execute tasks designated to the mini core-memory module by a given processing unit using data stored in the associated quasi-volatile memory circuits of the mini core-memory module.
Type: Grant
Filed: October 6, 2022
Date of Patent: October 17, 2023
Assignee: SUNRISE MEMORY CORPORATION
Inventor: Robert D. Norman
-
Patent number: 11762585
Abstract: Methods, systems, and devices related to operating a memory array are described. A system may include a memory device and a host device. A memory device may indicate information about a temperature of the memory device, which may include sending an indication to the host device after receiving a signal that initializes the operation of the memory device or storing an indication, for example in a register, about the temperature of the memory device. The information may include an indication that a temperature of the memory device or a rate of change of the temperature of the memory device has satisfied a threshold. Operation of the memory device, or the host device, or both may be modified based on the information about the temperature of the memory device. Operational modifications may include delaying a sending or processing of memory commands until the threshold is satisfied.
Type: Grant
Filed: February 19, 2021
Date of Patent: September 19, 2023
Assignee: Micron Technology, Inc.
Inventors: Aaron P. Boehm, Scott E. Schaefer
-
Patent number: 11755509
Abstract: Memory controllers, devices, modules, systems and associated methods are disclosed. In one embodiment, a memory controller is disclosed. The memory controller includes write queue logic that has first storage to temporarily store signal components of a write operation. The signal components include an address and write data. A transfer interface issues the signal components of the write operation to a bank of a storage class memory (SCM) device and generates a time value. The time value represents a minimum time interval after which a subsequent write operation can be issued to the bank. The write queue logic includes an issue queue to store the address and the time value for a duration corresponding to the time value.
Type: Grant
Filed: April 7, 2022
Date of Patent: September 12, 2023
Assignee: Rambus Inc.
Inventors: Frederick A. Ware, Brent Haukness
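The time-value rule in this abstract amounts to a per-bank minimum spacing between writes. The sketch below is an assumed software model with invented names (`WriteQueue`, `can_issue`), not the patented hardware: each issued write records its time, and a later write to the same bank is held back until the minimum interval has elapsed.

```python
# Assumed model of the issue-queue timing rule: a write to a bank may
# not issue until the recorded minimum interval since the previous
# write to that same bank has elapsed.

class WriteQueue:
    def __init__(self, min_interval):
        self.min_interval = min_interval
        self.last_issue = {}            # bank -> time of last issued write

    def can_issue(self, bank, now):
        last = self.last_issue.get(bank)
        return last is None or now - last >= self.min_interval

    def issue(self, bank, now):
        """Attempt to issue a write; returns True on success."""
        if not self.can_issue(bank, now):
            return False
        self.last_issue[bank] = now
        return True
```

Writes to different banks are independent, which is why the tracking is keyed per bank.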
-
Patent number: 11733870
Abstract: Disclosed herein are systems having an integrated circuit device disposed within an integrated circuit package having a periphery, and within this periphery a transaction processor is configured to receive a combination of signals (e.g., using a standard memory interface), intercept some of the signals to initiate a data transformation, and forward the other signals to one or more memory controllers within the periphery to execute standard memory access operations (e.g., with a set of DRAM devices). The DRAM devices may or may not be within the package periphery. In some embodiments, the transaction processor can include a data plane and control plane to decode and route the combination of signals. In other embodiments, off-load engines and processor cores within the periphery can support execution and acceleration of the data transformations.
Type: Grant
Filed: January 16, 2019
Date of Patent: August 22, 2023
Assignee: Rambus Inc.
Inventors: David Wang, Nirmal Saxena