Direct Memory Access (DMA) Patents (Class 710/22)
  • Patent number: 11952798
    Abstract: Provided are a motor driving circuit, a control method therefor, and a driving chip. The motor driving circuit includes a logic module, a push-pull module, a channel selection module, an instruction recognition module, and an isolating switch module. An input signal is output by the logic module and the push-pull module to control the motor. The channel selection module is configured to select a channel for the input signal, connecting the input signal to the isolating switch module or the instruction recognition module, or disconnecting it. The instruction recognition module is configured to perform a corresponding operation on the isolating switch module according to an inputted instruction. The isolating switch module is configured to receive an instruction of the channel selection module and an instruction of the instruction recognition module to connect or disconnect the logic module.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: April 9, 2024
    Assignee: GREE ELECTRIC APPLIANCES, INC. OF ZHUHAI
    Inventors: Ji He, Junchao Chen, Yang Lan
  • Patent number: 11947973
    Abstract: The present disclosure is related to a system that may include a first computing device that may perform a plurality of data processing operations and a second computing device that may receive a modification to one or more components of a first data operation, identify a first subset of the plurality of data processing operations that corresponds to the one or more components, and determine one or more alternate parameters associated with the one or more components. The second computing device may then identify a second subset of the plurality of data processing operations that corresponds to the one or more alternate parameters and send a notification to the first computing device indicative of a modification to the first subset and the second subset.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: April 2, 2024
    Assignee: United Services Automobile Association (USAA)
    Inventors: Oscar Guerra, Megan Sarah Jennings
  • Patent number: 11947460
    Abstract: Apparatus, method and code for fabrication of the apparatus, the apparatus comprising a cache providing a plurality of cache lines, each cache line storing a block of data; cache access control circuitry, responsive to an access request, to determine whether a hit condition is present in the cache; and cache configuration control circuitry to set, in response to a merging trigger event, merge indication state identifying multiple cache lines to be treated as a merged cache line to store multiple blocks of data, wherein when the merge indication state indicates that a given cache line is part of the merged cache line, the cache access control circuitry is responsive to detecting the hit condition to allow access to any of the data blocks stored in the multiple cache lines forming the merged cache line.
    Type: Grant
    Filed: April 26, 2022
    Date of Patent: April 2, 2024
    Assignee: Arm Limited
    Inventors: Vladimir Vasekin, David Michael Bull, Vincent Rezard, Anton Antonov
  • Patent number: 11934332
    Abstract: Devices, methods, and systems are provided. In one example, a device is described to include a device interface that receives data from at least one data source; a data shuffle unit that collects the data received from the at least one data source, receives a descriptor that describes a data shuffle operation to perform on the data received from the at least one data source, performs the data shuffle operation on the collected data to produce shuffled data, and provides the shuffled data to at least one data target.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: March 19, 2024
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Daniel Marcovitch, Dotan David Levi, Eyal Srebro, Eliel Peretz, Roee Moyal, Richard Graham, Gil Bloch, Sean Pieper
  • Patent number: 11914541
    Abstract: In example implementations, a computing device is provided. The computing device includes an expansion interface, a first device, a second device, and a processor communicatively coupled to the expansion interface. The expansion interface includes a plurality of slots. Two slots of the plurality of slots are controlled by a single reset signal. The first device is connected to a first slot of the two slots and has a feature that is compatible with the single reset signal. The second device is connected to a second slot of the two slots and does not have the feature compatible with the single reset signal. The processor is to detect the first device connected to the first slot and the second device connected to the second slot and disable the feature by preventing the first slot and the second slot from receiving the single reset signal.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: February 27, 2024
    Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Wen Bin Lin, ChiWei Ding, Chun Yi Liu, Shuo-Cheng Cheng, Chao-Wen Cheng
  • Patent number: 11886365
    Abstract: Techniques for improving the handling of peripherals in a computer system, including through the use of a DMA control circuit that helps manage the flow of data between memory and the peripherals via an intermediate storage buffer. The DMA control circuit is configured to control timing of DMA transfers between sample buffers in the memory and the intermediate storage buffer. The DMA control circuit may output a priority value of the DMA control circuit for accesses to memory, where the priority value is based on stored quality of service (QoS) information and current channel data buffer levels for different DMA channels. The DMA control circuit may separately arbitrate between multiple active transmit and receive channels. Still further, the DMA control circuit may store, for a given data transfer over a particular DMA channel, timestamp information indicative of completion of the DMA and peripheral-side operations.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: January 30, 2024
    Assignee: Apple Inc.
    Inventors: Brett D. George, Rohit K. Gupta, Do Kyung Kim, Paul W. Glendenning
  • Patent number: 11853784
    Abstract: An example electronic apparatus is for accelerating a para-virtualization network interface. The electronic apparatus includes a descriptor hub performing bi-directional communication with a guest memory accessible by a guest and with a host memory accessible by a host. The guest includes a plurality of virtual machines. The host includes a plurality of virtual function devices. The virtual machines are communicatively coupled to the electronic apparatus through a central processing unit. The communication is based upon para-virtualization packet descriptors and network interface controller virtual function-specific descriptors. The electronic apparatus also includes a device association table communicatively coupled to the descriptor hub and configured to store associations between the virtual machines and the virtual function devices. The electronic apparatus further includes an input-output memory map unit (IOMMU) to perform direct memory access (DMA) remapping and interrupt remapping.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 26, 2023
    Assignee: Intel Corporation
    Inventors: Yigang Zhou, Cunming Liang
  • Patent number: 11853600
    Abstract: A memory module with multiple memory devices includes a buffer system that manages communication between a memory controller and the memory devices. The memory module additionally includes a command input port to receive command and address signals from a controller and, also in support of capacity extensions, a command relay circuit coupled to the command port to convey the commands and addresses from the memory module to another module or modules. Relaying commands and addresses introduces a delay, and the buffer system that manages communication between the memory controller and the memory devices can be configured to time data communication to account for that delay.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: December 26, 2023
    Assignee: Rambus Inc.
    Inventors: Frederick A Ware, Scott C. Best
  • Patent number: 11836083
    Abstract: A compute node includes a memory, a processor and a peripheral device. The memory is to store memory pages. The processor is to run software that accesses the memory, and to identify one or more first memory pages that are accessed by the software in the memory. The peripheral device is to directly access one or more second memory pages in the memory of the compute node using Direct Memory Access (DMA), and to notify the processor of the second memory pages that are accessed using DMA. The processor is further to maintain a data structure that tracks both (i) the first memory pages as identified by the processor and (ii) the second memory pages as notified by the peripheral device.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: December 5, 2023
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Ran Avraham Koren, Ariel Shahar, Liran Liss, Gabi Liron, Aviad Shaul Yehezkel
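    Illustrative sketch for patent 11836083 above (not the patented implementation): a minimal C bitmap, assuming a single tracking structure into which both CPU-identified and DMA-notified dirty pages are recorded; all names and sizes are hypothetical.
      /* A minimal sketch, assuming a simple bitmap: both CPU-identified and
       * DMA-notified pages land in one unified dirty-page tracking structure.
       * All names and sizes are hypothetical. */
      #include <limits.h>
      #include <stdint.h>
      #include <stdio.h>

      #define TRACKED_PAGES 4096u
      #define BITS_PER_WORD (sizeof(uint64_t) * CHAR_BIT)

      static uint64_t dirty_bitmap[TRACKED_PAGES / BITS_PER_WORD];

      /* Record a page touched by software running on the processor. */
      static void mark_cpu_dirty(unsigned page)
      {
          dirty_bitmap[page / BITS_PER_WORD] |= 1ull << (page % BITS_PER_WORD);
      }

      /* Record a page the peripheral reports it has written via DMA. */
      static void mark_dma_dirty(unsigned page)
      {
          dirty_bitmap[page / BITS_PER_WORD] |= 1ull << (page % BITS_PER_WORD);
      }

      int main(void)
      {
          mark_cpu_dirty(7);   /* identified by the processor       */
          mark_dma_dirty(42);  /* notified by the peripheral device */
          for (unsigned p = 0; p < TRACKED_PAGES; p++)
              if (dirty_bitmap[p / BITS_PER_WORD] & (1ull << (p % BITS_PER_WORD)))
                  printf("page %u is dirty\n", p);
          return 0;
      }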
  • Patent number: 11822812
    Abstract: A method of providing more efficient and streamlined data access to a DRAM storage medium by all of multiple processors within a system on a chip (SoC) requires every processor to send a use-of-bus request. When the request is for local access (that is, for access to that part of the DRAM which is reserved for that processor), the processor reads or writes to the DRAM storage medium through its own arbiter and its own memory controller. When the request is for non-local access (that is, to DRAM within the storage medium which is reserved for another processor), the processor reads or writes to the "foreign" address in the storage medium through its own arbiter, its own memory controller, and its own DMA controller. A data access system is also disclosed.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: November 21, 2023
    Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventor: Chiung-Hsi Fan-Chiang
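    Illustrative sketch for patent 11822812 above (not the patented implementation): a hypothetical address check, assuming fixed reserved DRAM ranges per processor, that routes local accesses through the processor's own memory controller and foreign accesses additionally through its DMA controller.
      /* A minimal sketch, assuming fixed reserved DRAM ranges per processor:
       * local addresses go through the processor's own arbiter and memory
       * controller, foreign addresses additionally through its DMA controller.
       * Hypothetical ranges; not the patented implementation. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct cpu_region { uint64_t base; uint64_t size; };

      /* Each processor owns a reserved slice of the shared DRAM. */
      static const struct cpu_region regions[] = {
          { 0x00000000ull, 0x40000000ull },  /* processor 0 */
          { 0x40000000ull, 0x40000000ull },  /* processor 1 */
      };

      static bool is_local(unsigned cpu, uint64_t addr)
      {
          return addr >= regions[cpu].base &&
                 addr <  regions[cpu].base + regions[cpu].size;
      }

      static void access_dram(unsigned cpu, uint64_t addr)
      {
          if (is_local(cpu, addr))
              printf("cpu%u 0x%llx: own arbiter + memory controller\n",
                     cpu, (unsigned long long)addr);
          else
              printf("cpu%u 0x%llx: own arbiter + memory controller + DMA controller\n",
                     cpu, (unsigned long long)addr);
      }

      int main(void)
      {
          access_dram(0, 0x00001000ull);  /* inside processor 0's reserved range */
          access_dram(0, 0x50000000ull);  /* reserved for processor 1: "foreign" */
          return 0;
      }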
  • Patent number: 11810056
    Abstract: Systems and methods are described herein for routing data by transferring a physical storage device for at least part of a route between source and destination locations. In one example, a computing resource service provider may receive a request to transfer data from a customer center to a data center. The service provider may determine a route, which includes one or more of a physical path or a network path, for the data loaded onto a physical storage device to reach the data center from the customer center. Determining the route may include associating respective cost values with individual physical and network paths between physical stations between the customer center and the destination data center, and selecting one or more of the paths to reduce a total cost of the route. Route information may then be associated with the physical storage device based on the route.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: November 7, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Ryan Michael Eccles, Siddhartha Roy, Vaibhav Tyagi, Wayne William Duso, Danny Wei
  • Patent number: 11809835
    Abstract: A method, computer program product, and computing system for defining a queue. The queue may be based on a linked list and may be a first-in, first-out (FIFO) queue that may be configured to be used with multiple producers and a single consumer. The queue may include a plurality of queue elements. A tail element and a head element may be defined from the plurality of elements within the queue. The tail element may point to a last element of the plurality of elements and the head element may point to a first element of the plurality of elements. An element may be dequeued from the tail element, which may include determining if the tail element is in a null state. An element may be enqueued to the head element, which may include adding a new element to the queue.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: November 7, 2023
    Assignee: EMC IP Holding Company, LLC
    Inventors: Vladimir Shveidel, Lior Kamran
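    Illustrative sketch for patent 11809835 above (not the patented, multi-producer lock-free implementation): a conventional singly linked FIFO with enqueue and dequeue; note the abstract names its ends the other way around (enqueue to the "head", dequeue from the "tail").
      /* A minimal sketch of a linked-list FIFO queue: elements leave in the
       * order they arrived. Single-threaded; the patented queue additionally
       * handles multiple producers and a single consumer. */
      #include <stdio.h>
      #include <stdlib.h>

      struct node {
          int value;
          struct node *next;
      };

      struct fifo {
          struct node *head;  /* oldest element, removed first */
          struct node *tail;  /* newest element, added last    */
      };

      static void enqueue(struct fifo *q, int value)
      {
          struct node *n = malloc(sizeof *n);
          if (!n)
              return;             /* allocation failure: drop the element in this sketch */
          n->value = value;
          n->next = NULL;
          if (q->tail)
              q->tail->next = n;
          else
              q->head = n;        /* queue was in the empty ("null") state */
          q->tail = n;
      }

      static int dequeue(struct fifo *q, int *out)
      {
          if (!q->head)
              return 0;           /* empty: nothing to dequeue */
          struct node *n = q->head;
          q->head = n->next;
          if (!q->head)
              q->tail = NULL;
          *out = n->value;
          free(n);
          return 1;
      }

      int main(void)
      {
          struct fifo q = { NULL, NULL };
          enqueue(&q, 1);
          enqueue(&q, 2);
          int v;
          while (dequeue(&q, &v))
              printf("%d\n", v);  /* prints 1 then 2: first in, first out */
          return 0;
      }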
  • Patent number: 11789644
    Abstract: Semiconductor memory systems and architectures for shared memory access implement memory-centric structures using a quasi-volatile memory. In one embodiment, a memory processor array includes an array of memory cubes, each memory cube in communication with a processor mini core to form a computational memory. In another embodiment, a memory system includes processing units and one or more mini core-memory modules, each in communication with a memory management unit. Mini processor cores in each mini core-memory module execute tasks designated to the mini core-memory module by a given processing unit using data stored in the associated quasi-volatile memory circuits of the mini core-memory module.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: October 17, 2023
    Assignee: SUNRISE MEMORY CORPORATION
    Inventor: Robert D. Norman
  • Patent number: 11762585
    Abstract: Methods, systems, and devices related to operating a memory array are described. A system may include a memory device and a host device. A memory device may indicate information about a temperature of the memory device, which may include sending an indication to the host device after receiving a signal that initializes the operation of the memory device or storing an indication, for example in a register, about the temperature of the memory device. The information may include an indication that a temperature of the memory device or a rate of change of the temperature of the memory device has satisfied a threshold. Operation of the memory device, or the host device, or both may be modified based on the information about the temperature of the memory device. Operational modifications may include delaying a sending or processing of memory commands until the threshold is satisfied.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: September 19, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Aaron P. Boehm, Scott E. Schaefer
  • Patent number: 11755509
    Abstract: Memory controllers, devices, modules, systems and associated methods are disclosed. In one embodiment, a memory controller is disclosed. The memory controller includes write queue logic that has first storage to temporarily store signal components of a write operation. The signal components include an address and write data. A transfer interface issues the signal components of the write operation to a bank of a storage class memory (SCM) device and generates a time value. The time value represents a minimum time interval after which a subsequent write operation can be issued to the bank. The write queue logic includes an issue queue to store the address and the time value for a duration corresponding to the time value.
    Type: Grant
    Filed: April 7, 2022
    Date of Patent: September 12, 2023
    Assignee: Rambus Inc.
    Inventors: Frederick A. Ware, Brent Haukness
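    Illustrative sketch for patent 11755509 above (not the patented implementation): a hypothetical per-bank timer showing how an issue queue can hold a subsequent write until a stored minimum interval for that bank has elapsed; the tick values are assumptions.
      /* A minimal sketch, assuming a simple tick counter: after each write, the
       * controller stores the earliest time another write may be issued to the
       * same bank, and later writes wait in the queue until that time passes. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_BANKS 8
      #define WRITE_RECOVERY_TICKS 50  /* assumed minimum interval between writes to a bank */

      static uint64_t next_allowed[NUM_BANKS];  /* per-bank earliest issue time */

      static bool try_issue_write(unsigned bank, uint64_t now, uint64_t addr)
      {
          if (now < next_allowed[bank])
              return false;                    /* hold the write in the issue queue */
          printf("t=%llu: write to bank %u, addr 0x%llx\n",
                 (unsigned long long)now, bank, (unsigned long long)addr);
          next_allowed[bank] = now + WRITE_RECOVERY_TICKS;  /* store the time value */
          return true;
      }

      int main(void)
      {
          uint64_t t = 0;
          uint64_t pending = 0x2000;           /* a second write to the same bank */
          try_issue_write(3, t, 0x1000);
          while (!try_issue_write(3, ++t, pending))
              ;                                /* retried until the interval elapses */
          return 0;
      }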
  • Patent number: 11733870
    Abstract: Disclosed herein are systems having an integrated circuit device disposed within an integrated circuit package having a periphery; within this periphery, a transaction processor is configured to receive a combination of signals (e.g., using a standard memory interface), intercept some of the signals to initiate a data transformation, and forward the other signals to one or more memory controllers within the periphery to execute standard memory access operations (e.g., with a set of DRAM devices). The DRAM devices may or may not be within the package periphery. In some embodiments, the transaction processor can include a data plane and control plane to decode and route the combination of signals. In other embodiments, off-load engines and processor cores within the periphery can support execution and acceleration of the data transformations.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: August 22, 2023
    Assignee: Rambus Inc.
    Inventors: David Wang, Nirmal Saxena
  • Patent number: 11714651
    Abstract: A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: August 1, 2023
    Assignee: Deep Vision Inc.
    Inventors: Mohamed Shahim, Raju Datla, Abhilash Bharath Ghanore, Lava Kumar Bokam, Suresh Kumar Vennam, Rajashekar Reddy Ereddy
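    Illustrative sketch for patent 11714651 above (not the patented hardware): a plain C strided copy driven by the control parameters the abstract lists (initial source and destination addresses, a source stride length, and a stride count in one dimension); the values are hypothetical.
      /* A minimal sketch of a one-dimensional strided transfer: plain C loops
       * stand in for the hardware address registers and stride counter. */
      #include <stdio.h>

      struct stride_ctrl {
          const int *src;     /* initial source address         */
          int *dst;           /* initial destination address    */
          int stride_len;     /* elements to skip between reads */
          int stride_count;   /* number of strided reads        */
      };

      static void strided_transfer(const struct stride_ctrl *c)
      {
          const int *s = c->src;   /* source address register      */
          int *d = c->dst;         /* destination address register */
          for (int i = 0; i < c->stride_count; i++) {  /* stride counter */
              *d++ = *s;
              s += c->stride_len;  /* advance by the stride length */
          }
      }

      int main(void)
      {
          int src[12], dst[4];
          for (int i = 0; i < 12; i++)
              src[i] = i;
          struct stride_ctrl c = { src, dst, 3, 4 };  /* gather every 3rd element */
          strided_transfer(&c);
          for (int i = 0; i < 4; i++)
              printf("%d ", dst[i]);   /* prints: 0 3 6 9 */
          printf("\n");
          return 0;
      }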
  • Patent number: 11698734
    Abstract: Method and apparatus for managing data in a storage device, such as a solid-state drive (SSD). In some embodiments, a main memory has memory cells arranged on dies arranged as die sets accessible using parallel channels. A controller is configured to arbitrate resources required by access commands to transfer data to or from the main memory using the parallel channels, to monitor an occurrence rate of collisions between commands requiring an overlapping set of the resources, and to adjust a ratio among different types of commands executed by the controller responsive to the occurrence rate of the collisions. In further embodiments, the controller may divide a full command into multiple partial commands, each of which are executed as the associated system resources become available. In some cases, the ratio is established between read commands and write commands issued to the main memory.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: July 11, 2023
    Assignee: Seagate Technology LLC
    Inventors: Jonathan M. Henze, Jeffrey J. Pream, Ryan J. Goss
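    Illustrative sketch for patent 11698734 above (not the patented implementation): a hypothetical policy that adjusts the read/write issue ratio in response to an observed collision rate; the thresholds, step size, and even the direction of the adjustment are assumptions for the sketch.
      /* A minimal sketch, assuming one possible policy: shift the share of issue
       * slots between reads and writes as the collision rate crosses thresholds.
       * The direction of the shift is an assumption, not taken from the patent. */
      #include <stdio.h>

      /* Returns an updated fraction of issue slots given to reads (0.0 .. 1.0). */
      static double adjust_read_share(double read_share, double collision_rate)
      {
          const double high = 0.20, low = 0.05, step = 0.05;  /* assumed tuning */
          if (collision_rate > high && read_share < 0.90)
              read_share += step;   /* assumed: favor reads when collisions are frequent */
          else if (collision_rate < low && read_share > 0.10)
              read_share -= step;   /* assumed: give writes more slots at low contention */
          return read_share;
      }

      int main(void)
      {
          double share = 0.50;
          double observed[] = { 0.25, 0.22, 0.10, 0.03, 0.02 };
          for (int i = 0; i < 5; i++) {
              share = adjust_read_share(share, observed[i]);
              printf("collision rate %.2f -> read share %.2f\n", observed[i], share);
          }
          return 0;
      }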
  • Patent number: 11695630
    Abstract: High-speed control of disaggregated network systems is implemented. A control device includes a main memory and an interface. The main memory shares management information contained in the memory spaces of a plurality of connected component devices and stores the management information as integrated management information. When the management information is to be updated, the interface transmits an update signal for updating the management information to the component devices and receives a response signal to the update signal from the component devices.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: July 4, 2023
    Assignee: NEC CORPORATION
    Inventors: Shigeyuki Yanagimachi, Akio Tajima, Kiyo Ishii, Syu Namiki
  • Patent number: 11694743
    Abstract: A chip system includes a first chip, a first DRAM, a second chip and a second DRAM. The first chip includes a first DRAM controller and a first serial transmission interface. The first DRAM is coupled to the first DRAM controller. The second chip includes a second DRAM controller and a second serial transmission interface. The second serial transmission interface is coupled to the first serial transmission interface. The second DRAM is coupled to the second DRAM controller. When the first chip intends to store first data and second data, the first chip stores the first data into the first DRAM via the first DRAM controller, and transmits the second data to the second chip via the first serial transmission interface; and the second chip stores the second data into the second DRAM via the second DRAM controller.
    Type: Grant
    Filed: June 6, 2021
    Date of Patent: July 4, 2023
    Assignee: Realtek Semiconductor Corp.
    Inventor: Ching-Sheng Cheng
  • Patent number: 11693787
    Abstract: In an example, a device includes a memory and a processor core coupled to the memory via a memory management unit (MMU). The device also includes a system MMU (SMMU) cross-referencing virtual addresses (VAs) with intermediate physical addresses (IPAs) and IPAs with physical addresses (PAs). The device further includes a physical address table (PAT) cross-referencing IPAs with each other and cross-referencing PAs with each other. The device also includes a peripheral virtualization unit (PVU) cross-referencing IPAs with PAs, and a routing circuit coupled to the memory, the SMMU, the PAT, and the PVU. The routing circuit is configured to receive a request comprising an address and an attribute and to route the request through at least one of the SMMU, the PAT, or the PVU based on the address and the attribute.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: July 4, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Sriramakrishnan Govindarajan, Gregory Raymond Shurtz, Mihir Narendra Mody, Charles Lance Fuoco, Donald E. Steiss, Jonathan Elliot Bergsagel, Jason A.T. Jones
  • Patent number: 11681639
    Abstract: In a virtualized computer system in which a guest operating system runs on a virtual machine of a virtualized computer system, a computer-implemented method of providing the guest operating system with direct access to a hardware device coupled to the virtualized computer system via a communication interface, the method including: (a) obtaining first configuration register information corresponding to the hardware device, the hardware device connected to the virtualized computer system via the communication interface; (b) creating a passthrough device by copying at least part of the first configuration register information to generate second configuration register information corresponding to the passthrough device; and (c) enabling the guest operating system to directly access the hardware device corresponding to the passthrough device by providing access to the second configuration register information of the passthrough device.
    Type: Grant
    Filed: March 23, 2021
    Date of Patent: June 20, 2023
    Assignee: VMware, Inc.
    Inventors: Mallik Mahalingam, Michael Nelson
  • Patent number: 11669464
    Abstract: Examples herein describe performing non-sequential DMA reads and writes. Rather than storing data sequentially, a DMA engine can write data into memory using non-sequential memory addresses. A data processing engine (DPE) controller can submit a first job using first parameters that instruct the DMA engine to store data using a first non-sequential write pattern. The DPE controller can also submit a second job using second parameters that instruct the DMA engine to store data using a second, different non-sequential write pattern. In this manner, the DMA engine can switch to performing DMA writes using different non-sequential patterns. Similarly, the DMA engine can use non-sequential reads to retrieve data from memory. When performing a first DMA read, the DMA engine can retrieve data from memory using a first non-sequential read pattern and then perform a second DMA read where data is retrieved from memory using a second non-sequential read pattern.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: June 6, 2023
    Assignee: XILINX, INC.
    Inventors: Goran Hk Bilski, Baris Ozgul, David Clarke, Juan J. Noguera Serra, Jan Langer, Zachary Dickman, Sneha Bhalchandra Date, Tim Tuan
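    Illustrative sketch for patent 11669464 above (not the patented DMA engine): scattering each job's data into memory according to a per-job list of non-sequential destination offsets, switching patterns between jobs; the offsets are hypothetical.
      /* A minimal sketch of non-sequential DMA-style writes: each job carries
       * its own destination-offset pattern, and the "engine" switches patterns
       * between jobs. */
      #include <stddef.h>
      #include <stdio.h>

      /* Write src[i] to mem[pattern[i]] for each element of the job. */
      static void dma_write_pattern(int *mem, const int *src,
                                    const size_t *pattern, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              mem[pattern[i]] = src[i];
      }

      int main(void)
      {
          int mem[16] = { 0 };
          int job_a[4] = { 1, 2, 3, 4 };
          int job_b[4] = { 5, 6, 7, 8 };

          /* First job: one non-sequential write pattern. */
          const size_t pattern_a[4] = { 0, 4, 8, 12 };
          /* Second job: a different non-sequential write pattern. */
          const size_t pattern_b[4] = { 3, 1, 14, 7 };

          dma_write_pattern(mem, job_a, pattern_a, 4);
          dma_write_pattern(mem, job_b, pattern_b, 4);

          for (int i = 0; i < 16; i++)
              printf("%d ", mem[i]);
          printf("\n");
          return 0;
      }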
  • Patent number: 11650742
    Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: May 16, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
  • Patent number: 11650736
    Abstract: Disclosed are an SGL processing acceleration method and a storage device. The disclosed SGL processing acceleration method includes: obtaining an SGL associated with an IO command; generating a host space descriptor list and a DTU descriptor list according to the SGL; obtaining one or more host space descriptors of the host space descriptor list according to a DTU descriptor of the DTU descriptor list; and initiating data transmission according to the obtained one or more host space descriptors.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: May 16, 2023
    Assignee: SHANGHAI STARBLAZE INDUSTRIAL CO., LTD.
    Inventors: Ze Zhang, Hao Cheng Huang, Yi Lei Wang
  • Patent number: 11630743
    Abstract: One example method includes determining a modulus, such as a Weibull modulus, for a recovery operation. Enablement and disablement of a read-ahead cache are performed based on the modulus. The modulus is a linearization of a cumulative distribution function, where failures correspond to non-sequential accesses and successes correspond to sequential accesses.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: April 18, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Keyur B. Desai, Dominick J. Santangelo
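    Illustrative sketch for patent 11630743 above (not the patented method): the standard Weibull linearization ln(-ln(1 - F)) = k·ln(t) - k·ln(λ), with the modulus k estimated as a least-squares slope and then used to gate a read-ahead cache; the sample data and threshold are assumptions.
      /* A minimal sketch, assuming an empirical CDF of "failures" (non-sequential
       * accesses): estimate the Weibull modulus from the linearized CDF and use
       * it to enable or disable read-ahead. Not the patented procedure. */
      #include <math.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Least-squares slope of y = ln(-ln(1-F)) against x = ln(t). */
      static double weibull_modulus(const double *t, const double *F, int n)
      {
          double sx = 0, sy = 0, sxx = 0, sxy = 0;
          for (int i = 0; i < n; i++) {
              double x = log(t[i]);
              double y = log(-log(1.0 - F[i]));
              sx += x; sy += y; sxx += x * x; sxy += x * y;
          }
          return (n * sxy - sx * sy) / (n * sxx - sx * sx);
      }

      int main(void)
      {
          /* Assumed data: fraction of non-sequential accesses observed by
           * time/offset t during the recovery operation. */
          double t[] = { 10, 20, 40, 80, 160 };
          double F[] = { 0.05, 0.15, 0.35, 0.60, 0.85 };
          double k = weibull_modulus(t, F, 5);

          bool read_ahead_enabled = (k > 1.0);  /* assumed enable threshold */
          printf("estimated modulus k = %.2f, read-ahead %s\n",
                 k, read_ahead_enabled ? "enabled" : "disabled");
          return 0;
      }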
  • Patent number: 11630601
    Abstract: A method and apparatus for performing access control of a memory device with aid of a multi-phase memory-mapped queue are provided. The method includes: receiving a first host command from a host device; and in response to the first host command, utilizing a processing circuit within the controller to send a first operation command to the NV memory through a control logic circuit of the controller, and trigger a first set of secondary processing circuits within the controller to operate and interact via the multi-phase memory-mapped queue, for accessing the first data for the host device, wherein the processing circuit and the first set of secondary processing circuits share the multi-phase memory-mapped queue, and use the multi-phase memory-mapped queue as multiple chained message queues associated with multiple phases, respectively, for performing message queuing for a chained processing architecture including the processing circuit and the first set of secondary processing circuits.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: April 18, 2023
    Assignee: Silicon Motion, Inc.
    Inventors: Cheng Yi, Kaihong Wang, Sheng-I Hsu, I-Ling Tseng
  • Patent number: 11625276
    Abstract: In general, embodiments disclosed herein relate to using high bandwidth memory (HBM) in a booting process. In embodiments disclosed herein, a region of the HBM is set aside as an additional memory pool (also referred to as a pool) for drivers and/or other memory heap requests in the booting process. One or more embodiments maintain the existing memory pool below four GB, but provide an additional resource for drivers and heap requests.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: April 11, 2023
    Assignee: Dell Products L.P.
    Inventors: Wei Liu, PoYu Cheng
  • Patent number: 11604748
    Abstract: A computing device is provided, including a plurality of memory devices, a plurality of direct memory access (DMA) controllers, and an on-chip interconnect. The on-chip interconnect may be configured to implement control logic to convey a read request from a primary DMA controller of the plurality of DMA controllers to a source memory device of the plurality of memory devices. The on-chip interconnect may be further configured to implement the control logic to convey a read response from the source memory device to the primary DMA controller and one or more secondary DMA controllers of the plurality of DMA controllers.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: March 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruihua Peng, Monica Man Kay Tang, Xiaoling Xu
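    Illustrative sketch for patent 11604748 above (not the patented interconnect): one read response fanned out to a primary DMA controller and to each secondary DMA controller; the structures are hypothetical.
      /* A minimal sketch: a single read response from the source memory device
       * is delivered to the primary controller and to every secondary controller. */
      #include <stdio.h>

      struct dma_controller { const char *name; int last_data; };

      static void convey_read_response(struct dma_controller *primary,
                                       struct dma_controller *secondary, int n_secondary,
                                       int data)
      {
          primary->last_data = data;
          printf("%s received data %d\n", primary->name, data);
          for (int i = 0; i < n_secondary; i++) {
              secondary[i].last_data = data;
              printf("%s received data %d\n", secondary[i].name, data);
          }
      }

      int main(void)
      {
          struct dma_controller primary = { "dma0 (primary)", 0 };
          struct dma_controller secondary[2] = { { "dma1", 0 }, { "dma2", 0 } };
          /* One read of the source memory device fans out to all controllers. */
          convey_read_response(&primary, secondary, 2, 0xAB);
          return 0;
      }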
  • Patent number: 11604649
    Abstract: A technique for block data transfer is disclosed that reduces data transfer and memory access overheads and significantly reduces multiprocessor activity and energy consumption. Threads executing on a multiprocessor needing data stored in global memory can request and store the needed data in on-chip shared memory, which can be accessed by the threads multiple times. The data can be loaded from global memory and stored in shared memory using an instruction which directs the data into the shared memory without storing the data in registers and/or cache memory of the multiprocessor during the data transfer.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: March 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Andrew Kerr, Jack Choquette, Xiaogang Qiu, Omkar Paranjape, Poornachandra Rao, Shirish Gadre, Steven J. Heinrich, Manan Patel, Olivier Giroux, Alan Kaatz
  • Patent number: 11593289
    Abstract: A memory contains a linked list of records representative of a plurality of data transfers via a direct memory access control circuit. Each record is representative of parameters of an associated data transfer of the plurality of data transfers. The parameters of each record include a transfer start condition of the associated data transfer and a transfer end event of the associated data transfer.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: February 28, 2023
    Assignees: STMicroelectronics (Grenoble 2) SAS, STMicroelectronics (Rousset) SAS
    Inventors: François Cloute, Sandrine Lendre
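    Illustrative sketch for patent 11593289 above (not the patented circuit): a linked list of transfer records in which each record carries the parameters the abstract mentions, including a transfer start condition and a transfer end event; field names are hypothetical.
      /* A minimal sketch of a linked list of DMA transfer records; a control
       * circuit would walk the list, honoring each record's start condition
       * and signaling its end event (here just printed). */
      #include <stdint.h>
      #include <stdio.h>

      enum start_condition { START_IMMEDIATE, START_ON_TRIGGER };
      enum end_event       { END_IRQ, END_SET_FLAG };

      struct dma_record {
          uint32_t src;
          uint32_t dst;
          uint32_t length;
          enum start_condition start;
          enum end_event       end;
          struct dma_record   *next;   /* next record in the linked list */
      };

      static void run_transfers(const struct dma_record *r)
      {
          for (; r; r = r->next)
              printf("transfer 0x%x -> 0x%x (%u bytes), start=%d, end=%d\n",
                     (unsigned)r->src, (unsigned)r->dst, (unsigned)r->length,
                     (int)r->start, (int)r->end);
      }

      int main(void)
      {
          struct dma_record second = { 0x3000, 0x4000, 512, START_ON_TRIGGER, END_SET_FLAG, NULL };
          struct dma_record first  = { 0x1000, 0x2000, 256, START_IMMEDIATE,  END_IRQ, &second };
          run_transfers(&first);
          return 0;
      }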
  • Patent number: 11593274
    Abstract: A semiconductor device includes an address translation device configured to identify a plurality of address translation tables which are used for address translation having a plurality of stages; and an adder configured to identify a stage in the address translation when executing the address translation, wherein the address translation device is configured to perform cache control for information of a first address translation table used in the last stage of the address translation when the identified stage is the final stage.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: February 28, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Shinya Hiramoto
  • Patent number: 11573800
    Abstract: An apparatus, and corresponding method, for input/output (I/O) value determination, generates an I/O instruction for an I/O device, the I/O device including a state machine with state transition logic. The apparatus comprises a controller that includes a simplified state machine with a reduced version of the state transition logic of the state machine of the I/O device. The controller is configured to improve instruction execution performance of a processor core by employing the simplified state machine to predict at least one state value of at least one I/O device true state value to be affected by the I/O instruction at the I/O device.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: February 7, 2023
    Assignee: MARVELL ASIA PTE, LTD.
    Inventors: Jason D. Zebchuk, Wilson P. Snyder, II, Michael S. Bertone
  • Patent number: 11567666
    Abstract: An electronic device includes a memory, a processor that executes a software entity, a page migration engine (PME), and an input-output memory management unit (IOMMU). As the software entity and the PME perform operations for preparing to migrate a page of memory that is accessible by at least one IO device, the software entity and the PME set migration state information in a page table entry for the page of memory and information in reverse map table (RMT) entries involved with migrating the page of memory based on the operations being performed. The IOMMU controls usage of information from the page table entry and controls performance of memory accesses of the page of memory based on the migration state information in the page table entry and information in the RMT entries. When the operations for preparing to migrate the page of memory are completed, the PME migrates the page of memory in the memory.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: January 31, 2023
    Assignee: ATI Technologies ULC
    Inventors: Philip Ng, Nippon Raval
  • Patent number: 11550586
    Abstract: A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 10, 2023
    Assignee: Deep Vision Inc.
    Inventors: Mohamed Shahim, Raju Datla, Rehan Hameed, Shilpa Kallem
  • Patent number: 11550660
    Abstract: A method includes providing an interposition driver, and context switching into a kernel associated with a persistent memory using the interposition driver to create a consistent view of the persistent memory.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: January 10, 2023
    Assignee: Red Hat, Inc.
    Inventor: Jeffrey E. Moyer
  • Patent number: 11544395
    Abstract: A system and method for providing transactional data privacy while maintaining data usability, including the use of different obfuscation functions for different data types to securely obfuscate the data, in real-time, while maintaining its statistical characteristics. In accordance with an embodiment, the system comprises an obfuscation process that captures data while it is being received in the form of data changes at a first or source system, selects one or more obfuscation techniques to be used with the data according to the type of data captured, and obfuscates the data, using the selected one or more obfuscation techniques, to create an obfuscated data, for use in generating a trail file containing the obfuscated data, or applying the data changes to a target or second system.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: January 3, 2023
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Shenoda Guirguis, Alok Pareek, Stephen Wilkes
  • Patent number: 11544194
    Abstract: A method of performing a copy-on-write on a shared memory page is carried out by a device communicating with a processor via a coherence interconnect. The method includes: adding a page table entry so that a request to read a first cache line of the shared memory page includes a cache-line address of the shared memory page and a request to write to a second cache line of the shared memory page includes a cache-line address of a new memory page; in response to the request to write to the second cache line, storing new data of the second cache line in a second memory and associating the second cache-line address with the new data stored in the second memory; and in response to a request to read the second cache line, reading the new data of the second cache line from the second memory.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: January 3, 2023
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Andreas Nowatzyk, Pratap Subrahmanyam
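    Illustrative sketch for patent 11544194 above (a software analogy, not the patented coherence-interconnect device): a per-cache-line redirection table for a shared page, where a written line's new data is stored in a second memory and later reads of that line return the new data, while untouched lines still come from the shared page.
      /* A minimal sketch of cache-line-granularity copy-on-write: written lines
       * are redirected to a "second memory" copy, untouched lines still read
       * from the original shared page. */
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define LINE_SIZE  64
      #define PAGE_LINES 64

      static uint8_t shared_page[PAGE_LINES][LINE_SIZE];   /* original shared page     */
      static uint8_t new_page[PAGE_LINES][LINE_SIZE];      /* "second memory" copy     */
      static uint8_t line_redirected[PAGE_LINES];          /* which lines were written */

      static void cow_write(unsigned line, const uint8_t *data)
      {
          memcpy(new_page[line], data, LINE_SIZE);  /* store new data elsewhere */
          line_redirected[line] = 1;                /* remember the association */
      }

      static const uint8_t *cow_read(unsigned line)
      {
          /* Redirected lines read from the new page; others from the shared page. */
          return line_redirected[line] ? new_page[line] : shared_page[line];
      }

      int main(void)
      {
          memset(shared_page, 0xAA, sizeof shared_page);
          uint8_t fresh[LINE_SIZE];
          memset(fresh, 0x55, sizeof fresh);

          cow_write(3, fresh);                                  /* write one cache line */
          printf("line 0 byte: 0x%02X\n", (unsigned)cow_read(0)[0]);  /* 0xAA, unchanged     */
          printf("line 3 byte: 0x%02X\n", (unsigned)cow_read(3)[0]);  /* 0x55, from new page */
          return 0;
      }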
  • Patent number: 11545062
    Abstract: In some examples, an electronic device comprises a voltage supply circuit to provide a reference voltage usable to discharge pixels in a display device; and a scaler circuit coupled to the voltage supply circuit. The scaler circuit is to buffer first and second frames and dynamically control the voltage supply circuit to modify the reference voltage based on a frequency of the first frame differing from a frequency of the second frame.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: January 3, 2023
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Yi-Fan Lin, Kai-Chieh Chang, Chang-Chih Yeh
  • Patent number: 11537457
    Abstract: A method of offloading performance of a workload includes receiving, on a first computing system acting as an initiator, a first function call from a caller, the first function call to be executed by an accelerator on a second computing system acting as a target, the first computing system coupled to the second computing system by a network; determining a type of the first function call; and generating a list of parameter values of the first function call.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: December 27, 2022
    Assignee: INTEL CORPORATION
    Inventors: Pradeep Pappachan, Sujoy Sen, Joseph Grecco, Mukesh Gangadhar Bhavani Venkatesan, Reshma Lal
  • Patent number: 11531623
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: December 20, 2022
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Patent number: 11520717
    Abstract: An integrated circuit having a data processing engine (DPE) array can include a plurality of memory tiles. A first memory tile can include a first direct memory access (DMA) engine, a first random-access memory (RAM) connected to the first DMA engine, and a first stream switch coupled to the first DMA engine. The first DMA engine may be coupled to a second RAM disposed in a second memory tile. The first stream switch may be coupled to a second stream switch disposed in the second memory tile.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: December 6, 2022
    Assignee: Xilinx, Inc.
    Inventors: David Clarke, Peter McColgan, Zachary Dickman, Jose Marques, Juan J. Noguera Serra, Tim Tuan, Baris Ozgul, Jan Langer
  • Patent number: 11520906
    Abstract: A computer-readable medium comprises instructions that, when executed, cause a processor to execute an untrusted workload manager to manage execution of at least one guest workload.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: December 6, 2022
    Assignee: Intel Corporation
    Inventors: David M. Durham, Siddhartha Chhabra, Ravi L. Sahita, Barry E. Huntley, Gilbert Neiger, Gideon Gerzon, Baiju V. Patel
  • Patent number: 11513716
    Abstract: A technique for maintaining synchronization between two arrays includes assigning one array to be a preferred array and the other array to be a non-preferred array. When write requests are received at the preferred array, the writes are applied locally first and then applied remotely. However, when write requests are received at the non-preferred array, such writes are applied remotely first and then applied locally. Thus, writes are applied first on the preferred array and then on the non-preferred array, regardless of whether the writes are initially received at the preferred array or the non-preferred array.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: November 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Nagasimha Haravu, Alan L. Taylor, David Meiri, Dmitry Nikolayevich Tylik
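    Illustrative sketch for patent 11513716 above (not the patented implementation): the write-ordering rule from the abstract, applying writes on the preferred array first and on the non-preferred array second regardless of which array received the request; function names are hypothetical.
      /* A minimal sketch of the ordering rule: the preferred array is always
       * updated before the non-preferred array. */
      #include <stdbool.h>
      #include <stdio.h>

      static void apply_local(const char *array)  { printf("apply on %s (local)\n",  array); }
      static void apply_remote(const char *array) { printf("apply on %s (remote)\n", array); }

      /* received_on_preferred: true if the host sent the write to the preferred array. */
      static void handle_write(bool received_on_preferred)
      {
          if (received_on_preferred) {
              apply_local("preferred");        /* local first ...  */
              apply_remote("non-preferred");   /* ... then remote  */
          } else {
              apply_remote("preferred");       /* remote first ... */
              apply_local("non-preferred");    /* ... then local   */
          }
      }

      int main(void)
      {
          handle_write(true);   /* request arrived at the preferred array     */
          handle_write(false);  /* request arrived at the non-preferred array */
          return 0;
      }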
  • Patent number: 11507298
    Abstract: Example computational storage systems and methods are described. In one implementation, a storage drive controller includes a non-volatile memory subsystem to process multiple commands. Multiple versatile processing arrays are coupled to the non-volatile memory subsystem. The multiple versatile processing arrays can process multiple in-situ tasks. A host direct memory access module provides direct access to at least one memory device.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: November 22, 2022
    Assignee: PETAIO INC.
    Inventors: Fan Yang, Changyou Xu, Lingqi Zeng
  • Patent number: 11500440
    Abstract: Examples include a computing system including a network input/output (I/O) device, the network I/O device including a microcontroller, a network controller, and a proxy mode monitor to enter a proxy mode by causing transfer of control of the network controller from a processor to the microcontroller without resetting the network controller, and to exit the proxy mode by causing transfer of control of the network controller from the microcontroller to the processor without resetting the network controller.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: November 15, 2022
    Assignee: Intel Corporation
    Inventors: Boon Leong Ong, Girish J. Shirasat, Suraj A. Gajendra, Alok Anand
  • Patent number: 11483853
    Abstract: Methods, systems, and devices for wireless communications are described for identifying a file having a set of packets that are configured to be processed together. The base station may determine a segmentation scheme for the file based on a size of the file and identify a batch of assignments for communicating the file via a batch of transmissions based on the segmentation scheme. The base station may communicate the file via the batch of transmissions with a user equipment (UE) during the batch of assignments. Additional techniques are described herein for a UE to identify that the UE is storing a file in a buffer at the UE. The UE may transmit an indication to the base station that the buffer includes the file. In some cases, the UE may indicate a size of the file.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: October 25, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Prashanth Haridas Hande, Wanshi Chen, Linhai He, Naga Bhushan, Jay Kumar Sundararajan
  • Patent number: 11475152
    Abstract: A computer system for securing computer files from modification may include a processor; a first data storage area operatively coupled to the processor; a non-volatile second data storage area; and a control circuit. The second data storage area may be physically separate from the first data storage area. The second data storage area may store files that are executable by the processor, including executable files of an operating system configured to save temporary files on at least the first data storage area. The control circuit may operatively couple the second data storage area to the processor, and may be operable in a first mode that blocks commands which are received from the processor and which would modify the second data storage area from being communicated to the second data storage area. In a second mode, all commands may be allowed to the first and second data storage areas.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: October 18, 2022
    Assignee: CRU Data Security Group, LLC
    Inventors: Larry Hampel, Randal Barber
  • Patent number: 11461052
    Abstract: A storage system has two submission queue doorbell registers associated with a submission queue in a host. The storage system fetches and executes a command from the submission queue only in response to both submission queue doorbell registers being written. The second submission queue doorbell register may be visible (and directly written to) by the host or invisible (and indirectly written to) by the host. The use of two submission queue doorbell registers for a single submission queue can be used as a protection mechanism to protect an administration command submission queue of a child controller in a multiple physical function Non-Volatile Memory Express (NVMe) device (MFND).
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: October 4, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventor: Shay Benisty
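    Illustrative sketch for patent 11461052 above (not the patented controller): a submission queue guarded by two doorbell registers, where a command is fetched only after both doorbells have been written; the register layout is hypothetical.
      /* A minimal sketch: the controller fetches a submission-queue entry only
       * when both doorbell values have advanced past the fetch position. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct submission_queue {
          uint32_t doorbell_primary;    /* conventional SQ tail doorbell        */
          uint32_t doorbell_secondary;  /* second doorbell guarding the fetch   */
          uint32_t head;                /* next entry the controller will fetch */
      };

      static bool try_fetch(struct submission_queue *sq)
      {
          if (sq->doorbell_primary <= sq->head || sq->doorbell_secondary <= sq->head)
              return false;
          printf("fetching command at slot %u\n", (unsigned)sq->head);
          sq->head++;
          return true;
      }

      int main(void)
      {
          struct submission_queue sq = { 0, 0, 0 };

          sq.doorbell_primary = 1;           /* only the first doorbell written */
          printf("fetch after first ring: %s\n", try_fetch(&sq) ? "yes" : "no");

          sq.doorbell_secondary = 1;         /* second doorbell written as well */
          printf("fetch after both rings: %s\n", try_fetch(&sq) ? "yes" : "no");
          return 0;
      }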
  • Patent number: 11461041
    Abstract: A storage device includes: a nonvolatile storage including a first region and a second region, a storage controller controlling operation of the nonvolatile storage, and a buffer memory connected to the storage controller. The storage controller stores user data received from a host device in the second region, stores metadata associated with management of the user data and generated by a file system of the host device in the first region, loads the metadata from the first region to the buffer memory in response to address information for an index node (inode) associated with the metadata, and accesses target data in the second region using the metadata loaded to the buffer memory.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: October 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Junghoon Kim, Seonghun Kim, Hongkug Kim, Sojeong Park