Direct Memory Access (DMA) Patents (Class 710/22)
-
Patent number: 12164436
Abstract: Apparatus comprises address processing circuitry to detect information relating to an input memory address provided by address information tables; the address processing circuitry being configured to select an address information table at a given table level according to an earlier information entry in an address information table; and the address processing circuitry being configured to select an information entry in the selected address information table according to an offset component, the offset component being defined so that contiguous instances of that portion of the input memory address indicate contiguously addressed information entries; the address processing circuitry comprising detector circuitry to detect whether indicator data is set to indicate whether a group of one or more contiguously addressed information entries in the selected address information table provide at least one base address indicating a location within a contiguously addressed region comprising multiple address information tables.
Type: Grant
Filed: May 20, 2021
Date of Patent: December 10, 2024
Assignee: Arm Limited
Inventor: Andrew Brookfield Swaine
-
Patent number: 12131163
Abstract: A processor may implement self-relative memory addressing by providing load and store instructions that include self-relative addressing modes. A memory address may contain a self-relative pointer, where the memory address stores a memory offset that, when added to the memory address, defines another memory address. The self-relative addressing mode may also support invalid memory addresses using a reserved offset value, where a load instruction providing the self-relative addressing mode may return a NULL value or generate an exception when determining that the stored offset value is equal to the reserved offset value and where a store instruction providing the self-relative addressing mode may store the reserved offset value when determining that the pointer is an invalid or NULL memory address.
Type: Grant
Filed: April 30, 2021
Date of Patent: October 29, 2024
Assignee: Oracle International Corporation
Inventor: Mario Wolczko
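The self-relative scheme above can be illustrated with a small simulation. This is a hedged sketch, not Oracle's implementation: a Python list stands in for flat memory, a cell holding a self-relative pointer stores an offset relative to its own address, and a reserved offset value (chosen arbitrarily here) models an invalid/NULL pointer.

```python
RESERVED = 0x80000000  # hypothetical reserved offset meaning "invalid/NULL"

def load_self_relative(memory, addr):
    """Load via a self-relative pointer stored at `addr`."""
    offset = memory[addr]
    if offset == RESERVED:      # reserved value -> NULL, per the abstract
        return None
    return addr + offset        # target address = pointer's own address + offset

def store_self_relative(memory, addr, target):
    """Store a pointer to `target` at `addr` in self-relative form."""
    if target is None:          # invalid/NULL pointer -> store reserved offset
        memory[addr] = RESERVED
    else:
        memory[addr] = target - addr

memory = [0] * 16
store_self_relative(memory, 4, 12)    # cell 4 points at cell 12
store_self_relative(memory, 5, None)  # cell 5 holds NULL
```

Because the stored value is an offset rather than an absolute address, the whole region can be relocated in memory without rewriting its internal pointers, which is the practical appeal of the mode.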
-
Patent number: 12117947
Abstract: The present disclosure relates to information processing methods, physical machines, and peripheral component interconnect express (PCIE) devices. In one example method, a PCIE device receives, in a live migration process of a to-be-migrated virtual machine (VM), a packet corresponding to the to-be-migrated VM, where the to-be-migrated VM is one of a plurality of VMs. The PCIE device determines a direct memory access (DMA) address based on the packet. The PCIE device sends the DMA address to a physical function (PF) driver.
Type: Grant
Filed: March 18, 2021
Date of Patent: October 15, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Changchun Ouyang, Shui Cao, Zihao Xiang
-
Patent number: 12112395
Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one exemplary implementation, an address allocation process comprises: establishing space for managed pointers across a plurality of memories, including allocating one of the managed pointers with a first portion of memory associated with a first one of a plurality of processors; and performing a process of automatically managing accesses to the managed pointers across the plurality of processors and corresponding memories. The automated management can include ensuring consistent information associated with the managed pointers is copied from the first portion of memory to a second portion of memory associated with a second one of the plurality of processors based upon initiation of an access to the managed pointers from the second one of the plurality of processors.
Type: Grant
Filed: July 2, 2020
Date of Patent: October 8, 2024
Assignee: NVIDIA Corporation
Inventors: Stephen Jones, Vivek Kini, Piotr Jaroszynski, Mark Hairgrove, David Fontaine, Cameron Buschardt, Lucien Dunning, John Hubbard
-
Patent number: 12093209
Abstract: Technologies for batching remote descriptors of serialized objects in streaming pipelines are described. One method of a first computing device generates a streaming batch of remote descriptors. Each remote descriptor uniquely identifies a contiguous block of a serialized object. The first computing device sends at least one of the remote descriptors to a second computing device before the streaming batch is completed. At least some contents of a contiguous block are obtained for storage at a second memory associated with the second computing device before the streaming batch is completed.
Type: Grant
Filed: July 11, 2022
Date of Patent: September 17, 2024
Assignee: Nvidia Corporation
Inventors: Ryan Olson, Michael Demoret, Bartley Richardson
-
Patent number: 12079514
Abstract: Methods, systems, and devices for improved performance in a fragmented memory system are described. The memory system may detect conditions associated with a random access parameter stored at the memory system to assess a level of data fragmentation. The memory system may determine that a random access parameter, such as a data fragmentation parameter, a size of information associated with an access command, a depth of a command queue, a delay duration, or a quantity of commands satisfies a threshold. If one or more of the random access parameters satisfies the threshold, the memory system may transmit a request for the host system to increase an associated clock frequency. The host system may increase the number of commands sent to the memory system in a duration of time. That is, the host system may compensate for a slow-down due to data storage fragmentation by increasing the command processing rate.
Type: Grant
Filed: March 9, 2022
Date of Patent: September 3, 2024
Assignee: Micron Technology, Inc.
Inventors: Vanaja Urrinkala, Sharath Chandra Ambula
-
Patent number: 12066955
Abstract: Systems and methods for transferring data are disclosed herein. In an embodiment, a method of transferring data includes reading a plurality of bytes from a first memory, discarding first bytes of the plurality of bytes, realigning second bytes of the plurality of bytes, and storing the realigned second bytes in a second memory.
Type: Grant
Filed: December 30, 2021
Date of Patent: August 20, 2024
Assignee: HUGHES NETWORK SYSTEMS, LLC
Inventors: Aneeshwar Danda, Robert H. Lager, Sahithi Vemuri
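The read/discard/realign/store sequence can be sketched in a few lines. This is an illustrative model only, assuming the discarded bytes are a leading pad (e.g. a header) and that "realigning" means placing the remainder at a chosen destination offset; both assumptions go beyond the abstract.

```python
def realign_transfer(src, discard, dst, dst_offset):
    """Copy `src` minus its first `discard` bytes into `dst` at `dst_offset`."""
    kept = src[discard:]              # discard first bytes of the read
    end = dst_offset + len(kept)
    dst[dst_offset:end] = kept        # store the realigned second bytes
    return dst

first_memory = bytearray(b"\x00\x00\xAA\xBB\xCC\xDD")  # 2 pad bytes + payload
second_memory = bytearray(8)
realign_transfer(first_memory, 2, second_memory, 4)    # payload lands at offset 4
```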
-
Patent number: 12066960
Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffer, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
Type: Grant
Filed: December 27, 2021
Date of Patent: August 20, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
-
Patent number: 12047296
Abstract: Remote Direct Memory Access (RDMA) over Internet Protocol and/or Ethernet has gained attention for datacenters. However, the sheer scale of the required RDMA networks presents a challenge. Accordingly, optical infrastructures employing wavelength division multiplexing within a datacenter environment have also gained attention through the wide, low-cost bandwidth they offer with easy expansion within this environment. However, latency, rather than bandwidth between devices, is the significant issue for many applications. Accordingly, the inventors have established a design methodology where the network prioritises latency over bandwidth, and where bandwidth utilization and management offer reduced latency for such applications. Accordingly, the inventors exploit loss-tolerant RDMA architectures with quota-based traffic control, message-level load balancing, and a global view of virtual connections over commodity switches with simple priority queues.
Type: Grant
Filed: July 14, 2022
Date of Patent: July 23, 2024
Inventor: Yunqu Liu
-
Patent number: 12026005
Abstract: Embodiments are described for a data processing tool configured to cease operations of a plurality of database readers when detecting a congestion condition in the data processing tool. In some embodiments, the data processing tool comprises a memory, one or more processors, and a plurality of database readers. The one or more processors, coupled to the memory and the plurality of database readers, are configured to determine a congestion condition in at least one data pipeline of a plurality of data pipelines of the data processing tool. Each data pipeline of the plurality of data pipelines connects a database reader and a transformer of the data processing tool, a transformer and a database writer of the data processing tool, or two transformers of the data processing tool. The one or more processors are further configured to refrain from reading data from one or more databases responsive to the congestion condition.
Type: Grant
Filed: October 18, 2022
Date of Patent: July 2, 2024
Assignee: SAP SE
Inventors: Reinhard Sudmeier, Sreenivasulu Gelle, Alexander Ocher
-
Patent number: 12026098
Abstract: Techniques are disclosed relating to updating page pools in the context of cached page pool descriptors. In some embodiments, a processor is configured to assign a set of processing work to a first page pool of memory pages. Page manager circuitry may cache page pool descriptor entries in cache circuitry, where a given page pool descriptor entry indicates a set of pages assigned to a page pool. In response to a determination to grow the first page pool, the processor may communicate a grow list to the page manager circuitry that identifies a set of memory blocks from the memory to be added to the first page pool. The page manager circuitry may then update a cached page pool descriptor entry for the first page pool to indicate the added memory blocks and generate a signal to inform the processor that the cached page pool descriptor entry is updated.
Type: Grant
Filed: April 21, 2022
Date of Patent: July 2, 2024
Assignee: Apple Inc.
Inventors: Arjun Thottappilly, David A. Gotwalt, Frank W. Liljeros
-
Patent number: 12019539
Abstract: Exemplary methods, apparatuses, and systems including an adaptive configuration manager for controlling configurations of memory devices. The adaptive configuration manager receives a plurality of payloads from a host. The adaptive configuration manager identifies a profile of the host from a plurality of pre-determined host profiles. The adaptive configuration manager identifies a distribution of the plurality of memory access requests, the distribution including a set of sequential payloads and a set of random payloads. The adaptive configuration manager generates a memory access command using the profile of the host including a distribution of random and sequential access. The adaptive configuration manager executes the memory access command using the profile and a payload of the plurality of payloads.
Type: Grant
Filed: July 1, 2022
Date of Patent: June 25, 2024
Assignee: MICRON TECHNOLOGY, INC.
Inventor: Manjunath Chandrashekaraiah
-
Patent number: 12013794
Abstract: According to a first aspect, execution logic is configured to perform a linear capability transfer operation which transfers a physical capability from a partition of a first software module to a partition of a second software module without retaining it in the partition of the first. According to a second, alternative or additional aspect, the execution logic is configured to perform a sharding operation whereby a physical capability is divided into at least two instances, which may later be combined.
Type: Grant
Filed: October 28, 2020
Date of Patent: June 18, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: David T. Chisnall, Sylvan W. Clebsch, Cédric Alain Marie Christophe Fournet
-
Patent number: 12015954
Abstract: Systems and methods for causing control information to be sent from a source base station to a target base station via a user plane are provided. In a network that uses control plane and user plane separation, the systems and methods provided herein reduce time delays between handovers in the control plane and handovers in the user plane. The time delays are reduced by minimizing a duration of the handover preparation phase by sending control information with payload packets between network functions via in-band signaling in the user plane.
Type: Grant
Filed: November 1, 2018
Date of Patent: June 18, 2024
Assignee: Nokia Solutions and Networks GmbH & Co. KG
Inventor: Klaus Hoffman
-
Patent number: 12007918
Abstract: Provided are a Peripheral Component Interconnect Express (PCIe) interface device and a method of operating the same. The PCIe interface device may include a performance analyzer and a traffic class controller. The performance analyzer may be configured to measure throughputs of multiple functions executed on one or more Direct Memory Access (DMA) devices. The traffic class controller may be configured to allocate traffic class values to transaction layer packets received from the multiple functions based on the throughputs of the multiple functions.
Type: Grant
Filed: September 3, 2021
Date of Patent: June 11, 2024
Assignee: SK hynix Inc.
Inventors: Yong Tae Jeon, Ji Woon Yang, Sang Hyun Yoon, Se Hyeon Han
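The throughput-to-traffic-class mapping can be pictured with a toy policy. This sketch is an invented assumption, not SK hynix's algorithm: it ranks functions by measured throughput and hands out traffic class values by rank (here, the highest-throughput function gets TC 0), simply to show the shape of the controller's decision.

```python
def allocate_traffic_classes(throughputs, num_classes=8):
    """throughputs: dict mapping function id -> measured throughput.
    Returns function id -> traffic class value for its TLPs."""
    # Rank functions from highest to lowest measured throughput.
    ranked = sorted(throughputs, key=throughputs.get, reverse=True)
    tc = {}
    for rank, fn in enumerate(ranked):
        tc[fn] = min(rank, num_classes - 1)  # clamp to available classes
    return tc

tc = allocate_traffic_classes({"fn0": 900, "fn1": 50, "fn2": 400})
```

Which direction the real controller maps throughput to priority (and whether it uses ranking at all) is not stated in the abstract; the point is only that traffic class values are assigned as a function of per-function throughput measurements.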
-
Patent number: 12002167
Abstract: A method in a virtual, augmented, or mixed reality system includes a GPU determining/detecting an absence of image data. The method also includes shutting down a portion/component/function of the GPU. The method further includes shutting down a communication link between the GPU and a DB. Moreover, the method includes shutting down a portion/component/function of the DB. In addition, the method includes shutting down a communication link between the DB and a display panel. The method further includes shutting down a portion/component/function of the display panel.
Type: Grant
Filed: November 30, 2021
Date of Patent: June 4, 2024
Assignee: Magic Leap, Inc.
Inventors: Jose Felix Rodriguez, Ricardo Martinez Perez, Reza Nourai, Robert Blake Taylor
-
Patent number: 12001681
Abstract: This application provides a storage device, a distributed storage system, and a data processing method, and belongs to the field of storage technologies. In this application, an AI apparatus is disposed inside a storage device, so that the storage device has an AI computing capability. In addition, the storage device further includes a processor and a hard disk, and therefore further has a service data storage capability. Therefore, convergence of storage and AI computing power is implemented. An AI parameter and service data are transmitted inside the storage device through a high-speed interconnect network without a need of being forwarded through an external network. Therefore, a path for transmitting the service data and the AI parameter is greatly shortened, and the service data can be loaded nearby, thereby accelerating loading.
Type: Grant
Filed: February 22, 2022
Date of Patent: June 4, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Jinzhong Liu, Hongdong Zhang
-
Patent number: 12001855
Abstract: In a general aspect, an observability pipeline system includes a pack data processing engine. In some aspects, an observability pipeline system includes data processing engines that are configured according to system default configuration settings and system local configuration settings. A pack file received from a remote computer system contains routes, pipelines, and pack default configuration settings. A pack data processing engine includes the routes and pipelines from the pack file. Pack local configuration settings, defined for the pack data processing engine, inherit at least one of the system default configuration settings and at least one of the pack default configuration settings. The pack local configuration settings are isolated from the system local configuration settings. When pipeline data is processed in the observability pipeline system on the computer system, the pack data processing engine is applied to the pipeline data.
Type: Grant
Filed: May 17, 2022
Date of Patent: June 4, 2024
Assignee: Cribl, Inc.
Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito
-
Patent number: 12001357
Abstract: A direct memory access (DMA) circuit is provided. The DMA circuit may include a plurality of groups of direct memory access channels, wherein each of the groups includes at least one DMA channel and a resource usage counter configured to count an execution time in which one of the DMA channels of the group is executed, and an arbiter configured to evaluate a value of the resource usage counter of a group upon a request for execution time by one of the DMA channels of the group, and, taking into account a result of the evaluation, to assign, delay assignment, or deny execution time for using the direct memory access to one of the groups.
Type: Grant
Filed: March 4, 2022
Date of Patent: June 4, 2024
Assignee: Infineon Technologies AG
Inventors: Frank Hellwig, Sandeep Vangipuram
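The assign/delay/deny decision can be modeled in software. This is an illustrative model, not Infineon's RTL: each channel group carries a resource-usage counter of execution time consumed, and the arbiter compares that counter (plus the requested time) against a per-group budget; the budget and the "hard limit" factor are invented for the sketch.

```python
class DmaGroup:
    """A group of DMA channels sharing one resource usage counter."""
    def __init__(self, budget):
        self.budget = budget   # execution-time budget for the group
        self.usage = 0         # resource usage counter: time already consumed

    def charge(self, cycles):
        # Counts execution time whenever one of the group's channels runs.
        self.usage += cycles

def arbitrate(group, request_cycles, hard_limit_factor=2):
    """Return 'grant', 'delay', or 'deny' for a group's execution-time request."""
    if group.usage + request_cycles <= group.budget:
        return "grant"   # within budget: assign execution time now
    if group.usage + request_cycles <= group.budget * hard_limit_factor:
        return "delay"   # over budget: delay assignment until usage decays
    return "deny"        # far over budget: deny the request

g = DmaGroup(budget=100)
g.charge(90)   # the group's channels have already used 90 cycles
```

In hardware the counter would typically decay or reset per arbitration window; that housekeeping is omitted here.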
-
Patent number: 11983128
Abstract: Techniques to reduce overhead in a direct memory access (DMA) engine can include processing descriptors from a descriptor queue to obtain a striding configuration to generate tensorized memory descriptors. The striding configuration can include, for each striding dimension, a stride and a repetition number indicating a number of times to repeat striding in the corresponding striding dimension. One or more sets of tensorized memory descriptors can be generated based on the striding configuration. Data transfers are then performed based on the generated tensorized memory descriptors.
Type: Grant
Filed: December 16, 2022
Date of Patent: May 14, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Kun Xu, Ron Diamant, Ilya Minkin, Mohammad El-Shabani, Raymond S. Whiteside, Uday Shilton Udayaselvam
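Expanding a striding configuration into flat memory descriptors can be sketched as follows. This is a rough illustration in the spirit of the abstract, not Amazon's implementation: each dimension contributes a (stride, repetition) pair, and the expansion yields one (address, length) descriptor per element of the strided pattern, so a single compact configuration replaces many individually queued descriptors.

```python
def tensorize(base, length, strides):
    """strides: list of (stride, repetition) pairs, outermost dimension first.
    Returns (address, length) descriptors covering the strided pattern."""
    addrs = [base]
    for stride, repeat in strides:
        # Each dimension multiplies the descriptor set by its repetition count.
        addrs = [a + i * stride for a in addrs for i in range(repeat)]
    return [(a, length) for a in addrs]

# 2-D example: 3 rows spaced 0x100 apart, each row yielding 2 chunks 0x10 apart.
descs = tensorize(base=0x1000, length=8, strides=[(0x100, 3), (0x10, 2)])
```

The overhead reduction comes from the DMA engine generating these descriptors itself from the striding configuration, instead of software writing all six into the queue.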
-
Patent number: 11952798
Abstract: Provided are a motor driving circuit, a control method therefor, and a driving chip. The motor driving circuit includes a logic module and a push-pull module, a channel selection module, an instruction recognition module, and an isolating switch module. An input signal is outputted by the logic module and the push-pull module to control the motor. The channel selection module is configured to select a channel for the input signal to connect the input signal to the isolating switch module or the instruction recognition module, or to disconnect it. The instruction recognition module is configured to perform a corresponding operation on the isolating switch module according to an inputted instruction. The isolating switch module is configured to receive an instruction of the channel selection module and an instruction of the instruction recognition module to connect or disconnect the logic module.
Type: Grant
Filed: March 18, 2021
Date of Patent: April 9, 2024
Assignee: GREE ELECTRIC APPLIANCES, INC. OF ZHUHAI
Inventors: Ji He, Junchao Chen, Yang Lan
-
Patent number: 11947460
Abstract: Apparatus, method and code for fabrication of the apparatus, the apparatus comprising a cache providing a plurality of cache lines, each cache line storing a block of data; cache access control circuitry, responsive to an access request, to determine whether a hit condition is present in the cache; and cache configuration control circuitry to set, in response to a merging trigger event, merge indication state identifying multiple cache lines to be treated as a merged cache line to store multiple blocks of data, wherein when the merge indication state indicates that the given cache line is part of the merged cache line, the cache access control circuitry is responsive to detecting the hit condition to allow access to any of the data blocks stored in the multiple cache lines forming the merged cache line.
Type: Grant
Filed: April 26, 2022
Date of Patent: April 2, 2024
Assignee: Arm Limited
Inventors: Vladimir Vasekin, David Michael Bull, Vincent Rezard, Anton Antonov
-
Patent number: 11947973
Abstract: The present disclosure is related to a system that may include a first computing device that may perform a plurality of data processing operations and a second computing device that may receive a modification to one or more components of a first data operation, identify a first subset of the plurality of data processing operations that corresponds to the one or more components, and determine one or more alternate parameters associated with the one or more components. The second computing device may then identify a second subset of the plurality of data processing operations that corresponds to the one or more alternate parameters and send a notification to the first computing device indicative of a modification to the first subset and the second subset.
Type: Grant
Filed: September 13, 2021
Date of Patent: April 2, 2024
Assignee: United Services Automobile Association (USAA)
Inventors: Oscar Guerra, Megan Sarah Jennings
-
Patent number: 11934332
Abstract: Devices, methods, and systems are provided. In one example, a device is described to include a device interface that receives data from at least one data source; a data shuffle unit that collects the data received from the at least one data source, receives a descriptor that describes a data shuffle operation to perform on the data received from the at least one data source, performs the data shuffle operation on the collected data to produce shuffled data, and provides the shuffled data to at least one data target.
Type: Grant
Filed: February 1, 2022
Date of Patent: March 19, 2024
Assignee: MELLANOX TECHNOLOGIES, LTD.
Inventors: Daniel Marcovitch, Dotan David Levi, Eyal Srebro, Eliel Peretz, Roee Moyal, Richard Graham, Gil Bloch, Sean Pieper
-
Patent number: 11914541
Abstract: In example implementations, a computing device is provided. The computing device includes an expansion interface, a first device, a second device, and a processor communicatively coupled to the expansion interface. The expansion interface includes a plurality of slots. Two slots of the plurality of slots are controlled by a single reset signal. The first device is connected to a first slot of the two slots and has a feature that is compatible with the single reset signal. The second device is connected to a second slot of the two slots and does not have the feature compatible with the single reset signal. The processor is to detect the first device connected to the first slot and the second device connected to the second slot and disable the feature by preventing the first slot and the second slot from receiving the single reset signal.
Type: Grant
Filed: March 29, 2022
Date of Patent: February 27, 2024
Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventors: Wen Bin Lin, ChiWei Ding, Chun Yi Liu, Shuo-Cheng Cheng, Chao-Wen Cheng
-
Patent number: 11886365
Abstract: Techniques for improving the handling of peripherals in a computer system, including through the use of a DMA control circuit that helps manage the flow of data between memory and the peripherals via an intermediate storage buffer. The DMA control circuit is configured to control timing of DMA transfers between sample buffers in the memory and the intermediate storage buffer. The DMA control circuit may output a priority value of the DMA control circuit for accesses to memory, where the priority value is based on stored quality of service (QoS) information and current channel data buffer levels for different DMA channels. The DMA control circuit may separately arbitrate between multiple active transmit and receive channels. Still further, the DMA control circuit may store, for a given data transfer over a particular DMA channel, timestamp information indicative of completion of the DMA and peripheral-side operations.
Type: Grant
Filed: September 14, 2021
Date of Patent: January 30, 2024
Assignee: Apple Inc.
Inventors: Brett D. George, Rohit K. Gupta, Do Kyung Kim, Paul W. Glendenning
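The priority computation can be illustrated with a toy formula. This is a rough model only: the abstract says the priority value depends on stored QoS information and current channel buffer levels, but the combining rule below (QoS weight times buffer urgency, reporting the most urgent channel) is an assumption made for the sketch.

```python
def channel_priority(qos_weight, fill_level, capacity):
    """Urgency of one DMA channel: buffer fullness scaled by its QoS weight."""
    urgency = fill_level / capacity   # e.g. a receive buffer nearing full
    return qos_weight * urgency

def dma_priority(channels):
    """channels: list of (qos_weight, fill_level, capacity) tuples.
    The control circuit reports the most urgent channel's value."""
    return max(channel_priority(q, f, c) for q, f, c in channels)

# Three channels with stored QoS weights 1, 4, 2 and varying buffer levels.
prio = dma_priority([(1, 10, 64), (4, 48, 64), (2, 60, 64)])
```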
-
Patent number: 11853784
Abstract: An example electronic apparatus is for accelerating a para-virtualization network interface. The electronic apparatus includes a descriptor hub performing bi-directional communication with a guest memory accessible by a guest and with a host memory accessible by a host. The guest includes a plurality of virtual machines. The host includes a plurality of virtual function devices. The virtual machines are communicatively coupled to the electronic apparatus through a central processing unit. The communication is based upon para-virtualization packet descriptors and network interface controller virtual function-specific descriptors. The electronic apparatus also includes a device association table communicatively coupled to the descriptor hub to store associations between the virtual machines and the virtual function devices. The electronic apparatus further includes an input-output memory map unit (IOMMU) to perform direct memory access (DMA) remapping and interrupt remapping.
Type: Grant
Filed: December 22, 2016
Date of Patent: December 26, 2023
Assignee: Intel Corporation
Inventors: Yigang Zhou, Cunming Liang
-
Patent number: 11853600
Abstract: A memory module with multiple memory devices includes a buffer system that manages communication between a memory controller and the memory devices. The memory module additionally includes a command input port to receive command and address signals from a controller and, also in support of capacity extensions, a command relay circuit coupled to the command port to convey the commands and addresses from the memory module to another module or modules. Relaying commands and addresses introduces a delay, and the buffer system that manages communication between the memory controller and the memory devices can be configured to time data communication to account for that delay.
Type: Grant
Filed: April 20, 2021
Date of Patent: December 26, 2023
Assignee: Rambus Inc.
Inventors: Frederick A Ware, Scott C. Best
-
Patent number: 11836083
Abstract: A compute node includes a memory, a processor and a peripheral device. The memory is to store memory pages. The processor is to run software that accesses the memory, and to identify one or more first memory pages that are accessed by the software in the memory. The peripheral device is to directly access one or more second memory pages in the memory of the compute node using Direct Memory Access (DMA), and to notify the processor of the second memory pages that are accessed using DMA. The processor is further to maintain a data structure that tracks both (i) the first memory pages as identified by the processor and (ii) the second memory pages as notified by the peripheral device.
Type: Grant
Filed: November 29, 2021
Date of Patent: December 5, 2023
Assignee: MELLANOX TECHNOLOGIES, LTD.
Inventors: Ran Avraham Koren, Ariel Shahar, Liran Liss, Gabi Liron, Aviad Shaul Yehezkel
-
Patent number: 11822812
Abstract: A method of providing more efficient and streamlined data access to DRAM storage medium by all of multiple processors within a system on a chip (SoC) requires every processor to send a use-of-bus request. When the request is for local access (that is, for access to that part of DRAM which is reserved for that processor), the processor reads or writes to the DRAM storage medium through its own arbiter and own memory controller. When the request is for non-local access (that is, to DRAM within the storage medium which is reserved for another processor), the processor reads or writes to the "foreign" address in the storage medium through its own arbiter, its own memory controller, and its own DMA controller. A data access system is also disclosed.
Type: Grant
Filed: December 17, 2021
Date of Patent: November 21, 2023
Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
Inventor: Chiung-Hsi Fan-Chiang
-
Patent number: 11810056
Abstract: Systems and methods are described herein for routing data by transferring a physical storage device for at least part of a route between source and destination locations. In one example, a computing resource service provider may receive a request to transfer data from a customer center to a data center. The service provider may determine a route, which includes one or more of a physical path or a network path, for the data loaded onto a physical storage device to reach the data center from the customer center. Determining the route may include associating respective cost values to individual physical and network paths between physical stations between the customer and end data centers, and selecting one or more of the paths to reduce a total cost of the route. Route information may then be associated with the physical storage device based on the route.
Type: Grant
Filed: March 4, 2020
Date of Patent: November 7, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Ryan Michael Eccles, Siddhartha Roy, Vaibhav Tyagi, Wayne William Duso, Danny Wei
-
Patent number: 11809835
Abstract: A method, computer program product, and computing system for defining a queue. The queue may be based on a linked list and may be a first-in, first-out (FIFO) queue that may be configured to be used with multiple producers and a single consumer. The queue may include a plurality of queue elements. A tail element and a head element may be defined from the plurality of elements within the queue. The tail element may point to a last element of the plurality of elements and the head element may point to a first element of the plurality of elements. An element may be dequeued from the tail element, which may include determining if the tail element is in a null state. An element may be enqueued to the head element, which may include adding a new element to the queue.
Type: Grant
Filed: April 22, 2021
Date of Patent: November 7, 2023
Assignee: EMC IP Holding Company, LLC
Inventors: Vladimir Shveidel, Lior Kamran
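The queue structure described above can be sketched as a linked-list FIFO. This is a minimal single-threaded illustration: the abstract's multi-producer concurrency control (the point of the patent) is omitted, and, following the abstract's naming, new elements are enqueued at the head while the consumer dequeues from the tail, checking the tail for the null (empty) state first.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class Fifo:
    def __init__(self):
        self.head = None   # most recently enqueued element
        self.tail = None   # next element to dequeue

    def enqueue(self, value):
        node = Node(value)
        if self.head is not None:
            self.head.next = node   # link behind the current head
        self.head = node
        if self.tail is None:       # queue was empty: tail leaves null state
            self.tail = node

    def dequeue(self):
        if self.tail is None:       # null state: nothing to dequeue
            return None
        node = self.tail
        self.tail = node.next
        if self.tail is None:       # queue drained: clear head too
            self.head = None
        return node.value

q = Fifo()
q.enqueue(1); q.enqueue(2); q.enqueue(3)
```

A real multi-producer variant would make `enqueue` an atomic swap on the head pointer (e.g. compare-and-swap), which is what lets many producers share one lock-free queue with a single consumer.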
-
Patent number: 11789644
Abstract: Semiconductor memory systems and architectures for shared memory access implement memory-centric structures using a quasi-volatile memory. In one embodiment, a memory processor array includes an array of memory cubes, each memory cube in communication with a processor mini core to form a computational memory. In another embodiment, a memory system includes processing units and one or more mini core-memory modules, both in communication with a memory management unit. Mini processor cores in each mini core-memory module execute tasks designated to the mini core-memory module by a given processing unit using data stored in the associated quasi-volatile memory circuits of the mini core-memory module.
Type: Grant
Filed: October 6, 2022
Date of Patent: October 17, 2023
Assignee: SUNRISE MEMORY CORPORATION
Inventor: Robert D. Norman
-
Patent number: 11762585
Abstract: Methods, systems, and devices related to operating a memory array are described. A system may include a memory device and a host device. A memory device may indicate information about a temperature of the memory device, which may include sending an indication to the host device after receiving a signal that initializes the operation of the memory device or storing an indication, for example in a register, about the temperature of the memory device. The information may include an indication that a temperature of the memory device or a rate of change of the temperature of the memory device has satisfied a threshold. Operation of the memory device, or the host device, or both may be modified based on the information about the temperature of the memory device. Operational modifications may include delaying a sending or processing of memory commands until the threshold is satisfied.
Type: Grant
Filed: February 19, 2021
Date of Patent: September 19, 2023
Assignee: Micron Technology, Inc.
Inventors: Aaron P. Boehm, Scott E. Schaefer
-
Patent number: 11755509. Abstract: Memory controllers, devices, modules, systems and associated methods are disclosed. In one embodiment, a memory controller is disclosed. The memory controller includes write queue logic that has first storage to temporarily store signal components of a write operation. The signal components include an address and write data. A transfer interface issues the signal components of the write operation to a bank of a storage class memory (SCM) device and generates a time value. The time value represents a minimum time interval after which a subsequent write operation can be issued to the bank. The write queue logic includes an issue queue to store the address and the time value for a duration corresponding to the time value. Type: Grant. Filed: April 7, 2022. Date of Patent: September 12, 2023. Assignee: Rambus Inc. Inventors: Frederick A. Ware, Brent Haukness
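The time-value mechanism above can be pictured with a toy issue queue: each write to a bank records the earliest tick at which that bank may accept another write, and a later write to the same bank stalls until then. The `T_WR` constant and the tick-based timing are illustrative, not taken from the patent.

```python
class WriteQueue:
    """Sketch of an SCM write-issue queue: each issued write stores a
    time value marking the earliest tick at which the same bank may
    accept a subsequent write."""
    T_WR = 5  # assumed minimum write-to-write interval per bank (ticks)

    def __init__(self):
        self.now = 0
        self.next_ok = {}    # bank -> earliest tick for the next write
        self.issue_log = []  # (tick, bank, addr, data), like an issue queue

    def issue_write(self, bank, addr, data):
        # stall until the bank's minimum interval has elapsed
        self.now = max(self.now, self.next_ok.get(bank, 0))
        self.next_ok[bank] = self.now + self.T_WR  # the stored time value
        self.issue_log.append((self.now, bank, addr, data))
        issued_at = self.now
        self.now += 1  # issuing itself occupies one tick
        return issued_at
```

Back-to-back writes to one bank are spaced `T_WR` apart, while a write to a different bank issues immediately.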
-
Patent number: 11733870. Abstract: Disclosed herein are systems having an integrated circuit device disposed within an integrated circuit package having a periphery, and within this periphery a transaction processor is configured to receive a combination of signals (e.g., using a standard memory interface), intercept some of the signals to initiate a data transformation, and forward the other signals to one or more memory controllers within the periphery to execute standard memory access operations (e.g., with a set of DRAM devices). The DRAM devices may or may not be within the package periphery. In some embodiments, the transaction processor can include a data plane and control plane to decode and route the combination of signals. In other embodiments, off-load engines and processor cores within the periphery can support execution and acceleration of the data transformations. Type: Grant. Filed: January 16, 2019. Date of Patent: August 22, 2023. Assignee: Rambus Inc. Inventors: David Wang, Nirmal Saxena
-
Patent number: 11714651. Abstract: A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter. Type: Grant. Filed: May 26, 2021. Date of Patent: August 1, 2023. Assignee: Deep Vision Inc. Inventors: Mohamed Shahim, Raju Datla, Abhilash Bharath Ghanore, Lava Kumar Bokam, Suresh Kumar Vennam, Rajashekar Reddy Ereddy
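The control-signal fields above map naturally onto a one-dimensional strided copy: start at the initial source address, copy a burst, advance by the stride length, and repeat for the stride count. The sketch below is a software analogue; the `burst` field and the densely packed destination are assumptions not stated in the abstract.

```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    src_base: int      # initial source address
    dst_base: int      # initial destination address
    src_stride: int    # first source stride length (in elements)
    stride_count: int  # first source stride count
    burst: int         # elements copied per stride (assumed field)

def strided_transfer(ctrl, src_mem, dst_mem):
    """One-dimensional strided copy: per stride, copy `burst` contiguous
    elements, then advance the source address register by the stride."""
    src, dst = ctrl.src_base, ctrl.dst_base
    for _ in range(ctrl.stride_count):
        dst_mem[dst:dst + ctrl.burst] = src_mem[src:src + ctrl.burst]
        src += ctrl.src_stride  # source address register update
        dst += ctrl.burst       # destination packs densely (assumed)
    return dst_mem
```

With `src_stride=4` and `burst=2`, the engine gathers every other element pair into a dense destination buffer, which is the typical use of such a traversal engine for tensor slices.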
-
Patent number: 11698734. Abstract: Method and apparatus for managing data in a storage device, such as a solid-state drive (SSD). In some embodiments, a main memory has memory cells arranged on dies arranged as die sets accessible using parallel channels. A controller is configured to arbitrate resources required by access commands to transfer data to or from the main memory using the parallel channels, to monitor an occurrence rate of collisions between commands requiring an overlapping set of the resources, and to adjust a ratio among different types of commands executed by the controller responsive to the occurrence rate of the collisions. In further embodiments, the controller may divide a full command into multiple partial commands, each of which is executed as the associated system resources become available. In some cases, the ratio is established between read commands and write commands issued to the main memory. Type: Grant. Filed: July 20, 2021. Date of Patent: July 11, 2023. Assignee: Seagate Technology LLC. Inventors: Jonathan M. Henze, Jeffrey J. Pream, Ryan J. Goss
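One way to picture the collision-driven ratio adjustment is a feedback rule over the observed collision rate. The thresholds, step size, and the direction of the adjustment below are all invented for illustration; the patent does not specify a policy.

```python
def adjust_ratio(read_weight, collisions, commands, low=0.05, high=0.20):
    """Illustrative policy: nudge the read:write issue weight based on
    the observed collision rate (collisions per command)."""
    rate = collisions / max(commands, 1)
    if rate > high:    # frequent resource collisions: shift the mix
        read_weight = min(0.9, read_weight + 0.1)
    elif rate < low:   # plenty of headroom: shift the mix back
        read_weight = max(0.1, read_weight - 0.1)
    return read_weight  # unchanged when the rate is in the dead band
```

The dead band between `low` and `high` keeps the ratio stable when the collision rate is acceptable, avoiding oscillation.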
-
Patent number: 11694743. Abstract: A chip system includes a first chip, a first DRAM, a second chip and a second DRAM. The first chip includes a first DRAM controller and a first serial transmission interface. The first DRAM is coupled to the first DRAM controller. The second chip includes a second DRAM controller and a second serial transmission interface. The second serial transmission interface is coupled to the first serial transmission interface. The second DRAM is coupled to the second DRAM controller. When the first chip intends to store first data and second data, the first chip stores the first data into the first DRAM via the first DRAM controller, and transmits the second data to the second chip via the first serial transmission interface; and the second chip stores the second data into the second DRAM via the second DRAM controller. Type: Grant. Filed: June 6, 2021. Date of Patent: July 4, 2023. Assignee: Realtek Semiconductor Corp. Inventor: Ching-Sheng Cheng
-
Patent number: 11693787. Abstract: In an example, a device includes a memory and a processor core coupled to the memory via a memory management unit (MMU). The device also includes a system MMU (SMMU) cross-referencing virtual addresses (VAs) with intermediate physical addresses (IPAs) and IPAs with physical addresses (PAs). The device further includes a physical address table (PAT) cross-referencing IPAs with each other and cross-referencing PAs with each other. The device also includes a peripheral virtualization unit (PVU) cross-referencing IPAs with PAs, and a routing circuit coupled to the memory, the SMMU, the PAT, and the PVU. The routing circuit is configured to receive a request comprising an address and an attribute and to route the request through at least one of the SMMU, the PAT, or the PVU based on the address and the attribute. Type: Grant. Filed: February 9, 2021. Date of Patent: July 4, 2023. Assignee: Texas Instruments Incorporated. Inventors: Sriramakrishnan Govindarajan, Gregory Raymond Shurtz, Mihir Narendra Mody, Charles Lance Fuoco, Donald E. Steiss, Jonathan Elliot Bergsagel, Jason A.T. Jones
-
Patent number: 11695630. Abstract: High-speed control of disaggregated network systems is implemented. A control device includes a main memory and an interface. The main memory shares management information contained in the memory spaces of a plurality of connected component devices and stores the management information as integrated management information. When the management information is to be updated, the interface transmits an update signal for updating the management information to the component devices and receives a response signal to the update signal from the component devices. Type: Grant. Filed: February 28, 2020. Date of Patent: July 4, 2023. Assignee: NEC CORPORATION. Inventors: Shigeyuki Yanagimachi, Akio Tajima, Kiyo Ishii, Syu Namiki
-
Patent number: 11681639. Abstract: In a virtualized computer system in which a guest operating system runs on a virtual machine of a virtualized computer system, a computer-implemented method of providing the guest operating system with direct access to a hardware device coupled to the virtualized computer system via a communication interface, the method including: (a) obtaining first configuration register information corresponding to the hardware device, the hardware device connected to the virtualized computer system via the communication interface; (b) creating a passthrough device by copying at least part of the first configuration register information to generate second configuration register information corresponding to the passthrough device; and (c) enabling the guest operating system to directly access the hardware device corresponding to the passthrough device by providing access to the second configuration register information of the passthrough device. Type: Grant. Filed: March 23, 2021. Date of Patent: June 20, 2023. Assignee: VMware, Inc. Inventors: Mallik Mahalingam, Michael Nelson
-
Patent number: 11669464. Abstract: Examples herein describe performing non-sequential DMA reads and writes. Rather than storing data sequentially, a DMA engine can write data into memory using non-sequential memory addresses. A data processing engine (DPE) controller can submit a first job using first parameters that instruct the DMA engine to store data using a first non-sequential write pattern. The DPE controller can also submit a second job using second parameters that instruct the DMA engine to store data using a second, different non-sequential write pattern. In this manner, the DMA engine can switch to performing DMA writes using different non-sequential patterns. Similarly, the DMA engine can use non-sequential reads to retrieve data from memory. When performing a first DMA read, the DMA engine can retrieve data from memory using a first non-sequential pattern and then perform a second DMA read where data is retrieved from memory using a second non-sequential read pattern. Type: Grant. Filed: April 24, 2020. Date of Patent: June 6, 2023. Assignee: XILINX, INC. Inventors: Goran Hk Bilski, Baris Ozgul, David Clarke, Juan J. Noguera Serra, Jan Langer, Zachary Dickman, Sneha Bhalchandra Date, Tim Tuan
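The per-job write and read patterns can be modeled as offset lists handed to the engine with each job's parameters. The functions below are an illustrative software analogue of that idea, not the actual XILINX interface.

```python
def dma_write(memory, base, data, pattern):
    """Scatter `data` words from `base` at the non-sequential offsets
    given by `pattern` (one job's parameters)."""
    for word, offset in zip(data, pattern):
        memory[base + offset] = word
    return memory

def dma_read(memory, base, pattern):
    """Gather words back using a (possibly different) pattern."""
    return [memory[base + off] for off in pattern]
```

Submitting a second job with a different `pattern` list is all it takes to switch the engine to a new non-sequential layout, mirroring how the DPE controller swaps parameters between jobs.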
-
Patent number: 11650742. Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device. Type: Grant. Filed: September 17, 2019. Date of Patent: May 16, 2023. Assignee: Micron Technology, Inc. Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
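A schematic of the range-to-device metadata and an allocator that consults it might look like the sketch below; the lookup structure and the bump-pointer allocation policy are illustrative, not from the patent.

```python
class MemoryMetadata:
    """Metadata associating address ranges with physical memory devices,
    consulted by the OS when placing a process's data."""
    def __init__(self):
        self.ranges = []  # (start, end, device) triples

    def add_range(self, start, end, device):
        self.ranges.append((start, end, device))

    def device_for(self, addr):
        for start, end, device in self.ranges:
            if start <= addr < end:
                return device
        return None

def allocate(metadata, target_device, cursors):
    """Hand a process an address inside the range whose device matches
    the placement decision (bump-pointer policy is illustrative)."""
    for start, end, device in metadata.ranges:
        if device == target_device:
            addr = cursors.get(device, start)
            cursors[device] = addr + 1  # advance the per-device cursor
            return addr
    raise MemoryError("no range registered for device " + target_device)
```

With ranges for, say, a DRAM device and an NVM device registered, each process's allocations land in the device the metadata assigns to it, which is the management step the abstract describes.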
-
Patent number: 11650736. Abstract: An SGL processing acceleration method and a storage device are disclosed. The disclosed SGL processing acceleration method includes: obtaining the SGL associated with the IO command; generating a host space descriptor list and a DTU descriptor list according to the SGL; obtaining one or more host space descriptors of the host space descriptor list according to a DTU descriptor of the DTU descriptor list; and initiating the data transmission according to the obtained one or more host space descriptors. Type: Grant. Filed: December 4, 2020. Date of Patent: May 16, 2023. Assignee: SHANGHAI STARBLAZE INDUSTRIAL CO., LTD. Inventors: Ze Zhang, Hao Cheng Huang, Yi Lei Wang
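A possible software rendering of the descriptor-generation step: flatten the SGL into a host space descriptor list, then slice the transfer into DTU-sized pieces, each DTU descriptor naming the host descriptors that cover it. The fixed DTU size and the tuple-shaped descriptors are assumptions for illustration.

```python
DTU_SIZE = 4096  # assumed data-transfer-unit size in bytes

def build_descriptors(sgl):
    """Build a host space descriptor list and a DTU descriptor list from
    an SGL of (host_addr, length) entries. Each DTU descriptor is a list
    of (host_desc_index, offset, length) covering one DTU-sized slice."""
    host_descs = [(addr, length) for addr, length in sgl]
    dtu_descs, cur, used = [], [], 0
    for i, (_, length) in enumerate(host_descs):
        remaining = length
        while remaining:
            take = min(remaining, DTU_SIZE - used)  # fill the current DTU
            cur.append((i, length - remaining, take))
            used += take
            remaining -= take
            if used == DTU_SIZE:  # DTU full: emit its descriptor
                dtu_descs.append(cur)
                cur, used = [], 0
    if cur:  # trailing partial DTU
        dtu_descs.append(cur)
    return host_descs, dtu_descs
```

Transmission can then proceed one DTU descriptor at a time, fetching only the host space descriptors that DTU references, which is the lookup the method describes.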
-
Patent number: 11630743. Abstract: One example method includes determining a modulus such as a Weibull modulus for a recovery operation. Enablement and disablement of a read ahead cache are performed based on the modulus. The modulus is a linearization of a cumulative distribution function, where failures correspond to non-sequential accesses and successes correspond to sequential accesses. Type: Grant. Filed: November 23, 2020. Date of Patent: April 18, 2023. Assignee: EMC IP HOLDING COMPANY LLC. Inventors: Keyur B. Desai, Dominick J. Santangelo
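The linearized Weibull CDF, ln(-ln(1 - F)) = m ln(x) - m ln(scale), lets the modulus m be read off as a least-squares slope. The sketch below fits m with median ranks and gates read-ahead on it; the choice of samples (e.g. sequential-run lengths) and the threshold are assumptions, not the patent's specifics.

```python
import math

def weibull_modulus(samples):
    """Least-squares estimate of the Weibull modulus from the linearized
    CDF: ln(-ln(1 - F)) = m*ln(x) - m*ln(scale), with median ranks
    F_i = (i + 0.5) / n over the sorted samples."""
    xs = sorted(samples)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i + 0.5) / n)))
           for i, x in enumerate(xs)]
    mx = sum(p for p, _ in pts) / n
    my = sum(q for _, q in pts) / n
    num = sum((p - mx) * (q - my) for p, q in pts)
    den = sum((p - mx) ** 2 for p, _ in pts)
    return num / den  # slope of the fit = modulus m

def read_ahead_enabled(samples, threshold=1.0):
    """Illustrative gate: enable read-ahead when a high modulus shows the
    samples are tightly clustered (predictable, mostly sequential runs)."""
    return weibull_modulus(samples) > threshold
```

A high modulus means low dispersion (clustered run lengths, so prefetching pays off); a modulus near or below 1 means highly scattered accesses, where read-ahead would mostly waste bandwidth.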
-
Patent number: 11630601. Abstract: A method and apparatus for performing access control of a memory device with aid of a multi-phase memory-mapped queue are provided. The method includes: receiving a first host command from a host device; and in response to the first host command, utilizing a processing circuit within the controller to send a first operation command to the NV memory through a control logic circuit of the controller, and trigger a first set of secondary processing circuits within the controller to operate and interact via the multi-phase memory-mapped queue, for accessing the first data for the host device, wherein the processing circuit and the first set of secondary processing circuits share the multi-phase memory-mapped queue, and use the multi-phase memory-mapped queue as multiple chained message queues associated with multiple phases, respectively, for performing message queuing for a chained processing architecture including the processing circuit and the first set of secondary processing circuits. Type: Grant. Filed: November 1, 2021. Date of Patent: April 18, 2023. Assignee: Silicon Motion, Inc. Inventors: Cheng Yi, Kaihong Wang, Sheng-I Hsu, I-Ling Tseng
-
Patent number: 11625276. Abstract: In general, embodiments disclosed herein relate to using high bandwidth memory (HBM) in a booting process. In embodiments disclosed herein, a region of the HBM is set aside as an additional memory pool (also referred to as a pool) for drivers and/or other memory heap requests in the booting process. One or more embodiments maintain the existing memory pool below four GB, but provide an additional resource for drivers and heap requests. Type: Grant. Filed: January 8, 2021. Date of Patent: April 11, 2023. Assignee: Dell Products L.P. Inventors: Wei Liu, PoYu Cheng
-
Patent number: 11604748. Abstract: A computing device is provided, including a plurality of memory devices, a plurality of direct memory access (DMA) controllers, and an on-chip interconnect. The on-chip interconnect may be configured to implement control logic to convey a read request from a primary DMA controller of the plurality of DMA controllers to a source memory device of the plurality of memory devices. The on-chip interconnect may be further configured to implement the control logic to convey a read response from the source memory device to the primary DMA controller and one or more secondary DMA controllers of the plurality of DMA controllers. Type: Grant. Filed: October 30, 2020. Date of Patent: March 14, 2023. Assignee: Microsoft Technology Licensing, LLC. Inventors: Ruihua Peng, Monica Man Kay Tang, Xiaoling Xu
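The single-read, multi-delivery idea can be sketched as an interconnect that fans one read response out to the primary controller and every registered secondary controller. The class names and the registration API are illustrative.

```python
class DmaController:
    """Toy DMA controller that buffers received read responses."""
    def __init__(self):
        self.buffer = {}

    def receive(self, addr, data):
        self.buffer[addr] = list(data)

class Interconnect:
    """On-chip interconnect sketch: one read request yields one read
    response, delivered to the primary and all secondary controllers."""
    def __init__(self, memory):
        self.memory = memory
        self.secondaries = []

    def register_secondary(self, controller):
        self.secondaries.append(controller)

    def read(self, primary, addr, length):
        response = self.memory[addr:addr + length]  # single source read
        primary.receive(addr, response)
        for ctrl in self.secondaries:               # fan the response out
            ctrl.receive(addr, response)
        return response
```

The memory device is read once, yet every interested controller ends up with a copy of the data, which is the bandwidth saving the abstract is after.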
-
Patent number: 11604649. Abstract: A technique for block data transfer is disclosed that reduces data transfer and memory access overheads and significantly reduces multiprocessor activity and energy consumption. Threads executing on a multiprocessor needing data stored in global memory can request and store the needed data in on-chip shared memory, which can be accessed by the threads multiple times. The data can be loaded from global memory and stored in shared memory using an instruction which directs the data into the shared memory without storing the data in registers and/or cache memory of the multiprocessor during the data transfer. Type: Grant. Filed: June 30, 2021. Date of Patent: March 14, 2023. Assignee: NVIDIA Corporation. Inventors: Andrew Kerr, Jack Choquette, Xiaogang Qiu, Omkar Paranjape, Poornachandra Rao, Shirish Gadre, Steven J. Heinrich, Manan Patel, Olivier Giroux, Alan Kaatz