Direct Memory Access (DMA) Patents (Class 710/22)
-
Patent number: 11455264
Abstract: During a memory reallocation process, it is determined that a set of memory pages being reallocated are each enabled for a Direct Memory Access (DMA) operation. Prior to writing initial data to the set of memory pages, a pre-access delay is performed concurrently for each memory page in the set of memory pages.
Type: Grant
Filed: August 10, 2020
Date of Patent: September 27, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jaime Jaloma, Mark Rogers, Arnold Flores, Gaurav Batra
-
Patent number: 11422861
Abstract: A data processing method implemented by a computer device, includes generating a target task including a buffer application task or a buffer release task, when the target task is the buffer application task, a first buffer corresponding to the buffer application task is used when the second task is executed, or when the target task is the buffer release task, a second buffer corresponding to the buffer release task is used when the first task is executed, obtaining a buffer entry corresponding to the target task after a preceding task of the target task is executed and before a successive task of the target task is executed, where the buffer entry includes a memory size of a buffer corresponding to the target task, a memory location of the buffer, and a memory address of the buffer, and executing the target task to apply for or release the buffer.
Type: Grant
Filed: November 30, 2020
Date of Patent: August 23, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Xiong Gao, Wei Li, Ming Zheng, Hou Fun Lam
-
Patent number: 11411885
Abstract: A user can set or modify operational parameters of a data volume stored on a network-accessible storage device in a data center. For example, the user may be provided access to a data volume and may request a modification to the operational parameters of the data volume. Instead of modifying the existing data volume, the data center can provision a new data volume and migrate data stored on the existing data volume to the new data volume. While the data migration takes place, the existing data volume may block input/output (I/O) requests and the new data volume may handle such requests instead. Once the data migration is complete, the data center may deallocate the data blocks of the existing data volume such that the data blocks can be reused by other data volumes.
Type: Grant
Filed: October 22, 2019
Date of Patent: August 9, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Pieter Kristian Brouwer, Marc Stephen Olson, Nachiappan Arumugam, Michael Thacker, Vijay Prasanth Rajavenkateswaran, Arpit Tripathi, Danny Wei
-
Patent number: 11409636
Abstract: The present disclosure discloses a debug unit, comprising: a write register configured to store kernel write data written by a kernel of a processor, wherein the processor is communicatively coupled to a debugger configured to read the kernel write data, wherein the kernel write data is associated with a kernel write flag bit to indicate data validity of the kernel write data; and a control unit including circuitry configured to control access to the write register by the kernel of the processor and the debugger based on data validity indicated by the kernel write flag bit. The present disclosure further discloses a corresponding processor including the debug unit, a corresponding debugger communicatively coupled to the processor, and a corresponding debug system including the processor coupled to the debugger.
Type: Grant
Filed: March 18, 2020
Date of Patent: August 9, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Taotao Zhu, Chen Chen
-
Patent number: 11410264
Abstract: Examples described herein relate to a graphics processing system that includes one or more integrated graphics systems and one or more discrete graphics systems. In some examples, an operating system (OS) or other software supports switching between image display data being provided from either an integrated graphics system or a discrete graphics system by configuring a multiplexer at runtime to output image data to a display. In some examples, a multiplexer is not used and interface supported messages are used to transfer image data from an integrated graphics system to a discrete graphics system and the discrete graphics system generates and outputs image data to a display. In some examples, interface supported messages are used to transfer image data from a discrete graphics system to an integrated graphics system and the integrated graphics system uses an overlay process to generate a composite image for output to a display.
Type: Grant
Filed: September 27, 2019
Date of Patent: August 9, 2022
Assignee: Intel Corporation
Inventors: James E. Akiyama, John Howard, Murali Ramadoss, Gary K. Smith, Todd M. Witter, Satish Ramanathan, Zhengmin Li
-
Patent number: 11379394
Abstract: A hardware based block moving controller of an active device such as an implantable medical device that provides electrical stimulation reads a parameter data from a block of memory and then writes the parameter data to a designated register set of a component that performs an active function. The block of memory may include data that specifies a size of the block of memory to be moved to the register set. The block of memory may also include data that indicates a number of triggers to skip before moving a next block of memory to the register set. A trigger that causes the block moving controller to move the data from the block of memory to the register set may be generated in various ways such as through operation of the component having the register set or by a separate timer.
Type: Grant
Filed: August 24, 2020
Date of Patent: July 5, 2022
Assignee: MEDTRONIC, INC.
Inventors: Robert W. Hocken, Wesley A. Santa, Christopher M. Arnett, Jalpa S. Shah, Joel E. Sivula
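The trigger-skipping behavior in the abstract above can be sketched in a few lines. This is an illustrative Python simulation, not the patented hardware: the block layout (a `size`, a `skip` count, and `data`) and all names are assumed simplifications.

```python
# Sketch (assumed layout): each block carries its own size and a count of
# triggers to skip before the controller moves it to the register set.
class BlockMover:
    def __init__(self, blocks):
        self.blocks = blocks              # list of {"size", "skip", "data"}
        self.index = 0                    # next block to move
        self.skips_left = blocks[0]["skip"]
        self.register_set = []

    def on_trigger(self):
        """Skip the configured number of triggers, then move the block."""
        if self.skips_left > 0:
            self.skips_left -= 1          # this trigger is ignored
            return False
        blk = self.blocks[self.index]
        self.register_set = blk["data"][:blk["size"]]   # hardware-style copy
        self.index = (self.index + 1) % len(self.blocks)
        self.skips_left = self.blocks[self.index]["skip"]
        return True

mover = BlockMover([
    {"size": 2, "skip": 1, "data": [7, 8, 9]},
    {"size": 3, "skip": 0, "data": [1, 2, 3]},
])
moved = [mover.on_trigger() for _ in range(3)]   # skip, move, move
```

After three triggers, the first block (truncated to its declared size) and then the second block have been written to the register set.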
-
Patent number: 11379595
Abstract: Masking a data rate of transmitted data is disclosed. As data is transmitted from a production site to a secondary site, the data rate is masked. Masking the data rate can include transmitting at a fixed rate, a random rate, or an adaptive rate. Each mode of data transmission masks or obscures the actual data rate and thus prevents others from gaining information about the data or the data owner from the data transfer rate.
Type: Grant
Filed: January 16, 2020
Date of Patent: July 5, 2022
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Amos Zamir, Jehuda Shemer, Kfir Wolfson
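The fixed-rate mode described above can be illustrated with a short sketch: every interval carries the same number of bytes, padded when real data falls short, so the observable rate reveals nothing about the workload. The chunk size and padding scheme are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of fixed-rate masking: the wire always carries exactly
# FIXED_CHUNK bytes per interval, regardless of how much real data exists.
FIXED_CHUNK = 1024   # assumed bytes sent per interval

def mask_interval(pending):
    """Take up to FIXED_CHUNK real bytes, pad the remainder with zeros,
    and return (wire_bytes, leftover_pending)."""
    real = pending[:FIXED_CHUNK]
    pad = b"\x00" * (FIXED_CHUNK - len(real))
    return real + pad, pending[FIXED_CHUNK:]

wire1, rest1 = mask_interval(b"abc")         # mostly padding
wire2, rest2 = mask_interval(b"y" * 5000)    # real data; excess is queued
```

An observer sees identical 1024-byte transmissions in both intervals, whether the site had 3 bytes or 5000 bytes to send.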
-
Patent number: 11373013
Abstract: Technologies for secure I/O include a compute device having a processor, a memory, an input/output (I/O) device, and a filter logic. The filter logic is configured to receive a first key identifier from the processor, wherein the first key identifier is indicative of a shared memory range including a shared key identifier range to be used for untrusted I/O devices, and receive a transaction from the I/O device, wherein the transaction includes a second key identifier and a trust device ID indicator associated with the I/O device. The filter logic is further configured to determine whether the transaction is asserted with the trust device ID indicator indicative of whether the I/O device is assigned to a trust domain and determine, in response to a determination that the transaction is not asserted with the trust device ID indicator, whether the second key identifier matches the first key identifier.
Type: Grant
Filed: December 28, 2018
Date of Patent: June 28, 2022
Assignee: INTEL CORPORATION
Inventors: Luis Kida, Krystof Zmudzinski, Reshma Lal, Pradeep Pappachan, Abhishek Basak, Anna Trikalinou
-
Patent number: 11372645
Abstract: Deferred command execution by a command processor (CP) may be performed based on a determination that at least one command of a primary buffer is located between a first link of the primary buffer and a second link of the primary buffer. The first link and the second link may be to one or more secondary buffers that includes a set of commands. The CP may initiate, before executing, a fetch of a first set of commands in the set of commands based on the first link, a fetch of the at least one command of the primary buffer, and a fetch of a second set of commands in the set of commands based on the second link. After initiating the fetch of the second set of commands, the CP may execute the first set of commands, the at least one command of the primary buffer, and the second set of commands.
Type: Grant
Filed: June 12, 2020
Date of Patent: June 28, 2022
Assignee: QUALCOMM Incorporated
Inventors: Nigel Poole, Joohi Mittal
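The ordering the abstract describes, fetching everything (first linked buffer, in-between primary commands, second linked buffer) before any execution, can be sketched as a simulation. This is an assumed simplification of the mechanism, not the QUALCOMM implementation; all names are hypothetical.

```python
# Sketch: first walk the primary buffer initiating every fetch in order,
# then execute the fetched commands afterwards, mirroring the deferral.
def process_primary(primary, secondary_buffers):
    """primary: list of ('link', buffer_name) or ('cmd', command)."""
    fetched = []                                    # fetches come first
    for kind, value in primary:
        if kind == "link":
            fetched.extend(secondary_buffers[value])   # fetch linked set
        else:
            fetched.append(value)                      # fetch primary cmd
    return [f"exec:{c}" for c in fetched]              # then execute in order

log = process_primary(
    [("link", "sb1"), ("cmd", "p1"), ("link", "sb2")],
    {"sb1": ["a", "b"], "sb2": ["c"]},
)
```

Execution order preserves program order (first linked set, primary command, second linked set) even though all fetches were initiated up front.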
-
Patent number: 11366444
Abstract: The invention enables acquisition of operation information of a CNC corresponding to periodic operation of a PLC, even when the CNC is unable to respond due to machining timing, loading status, etc. The PLC device includes: a special instruction control unit that sets, to a special instruction for acquiring operation information indicating an operation state of a control device from the control device controlling an industrial machine, a cyclic time for causing the control device to periodically acquire and retain the operation information in a case in which the control device is unable to respond, and transmits to the control device the special instruction in which the cyclic time is set; and an acquisition unit that acquires the operation information acquired on the basis of the cyclic time from the control device.
Type: Grant
Filed: May 11, 2020
Date of Patent: June 21, 2022
Assignee: FANUC CORPORATION
Inventors: Nao Onose, Mitsuru Mochizuki
-
Patent number: 11354260
Abstract: Autonomous memory access (AMA) controllers and related systems, methods, and devices are disclosed. An AMA controller includes waveform circuitry configured to autonomously retrieve waveform data stored in a memory device and pre-process the waveform data without intervention from a processor. The AMA controller is configured to provide the pre-processed waveform data to one or more peripheral devices.
Type: Grant
Filed: August 12, 2020
Date of Patent: June 7, 2022
Assignee: Microchip Technology Incorporated
Inventor: Jacob Lunn Lassen
-
Patent number: 11354244
Abstract: Memory modules and associated devices and methods are provided using a memory copy function between a cache memory and a main memory that may be implemented in hardware. Address translation may additionally be provided.
Type: Grant
Filed: November 24, 2015
Date of Patent: June 7, 2022
Assignee: Intel Germany GmbH & Co. KG
Inventors: Ritesh Banerjee, Jiaxiang Shi, Ingo Volkening
-
Patent number: 11321249
Abstract: Embodiments of the present invention include a drive-to-drive storage system comprising a host server having a host CPU and a host storage drive, one or more remote storage drives, and a peer-to-peer link connecting the host storage drive to the one or more remote storage drives. The host storage drive includes a processor and a memory, wherein the memory has stored thereon instructions that, when executed by the processor, causes the processor to transfer data from the host storage drive via the peer-to-peer link to the one or more remote storage drives when the host CPU issues a write command.
Type: Grant
Filed: April 19, 2018
Date of Patent: May 3, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Oscar P. Pinto, Robert Brennan
-
Patent number: 11281610
Abstract: Embodiments of the present disclosure relate to a method, a device, and a computer program product for managing data transfer. A method for managing data transfer is provided, including: if determining that a request to transfer a data block between a memory and a persistent memory of a data storage system is received, obtaining a utilization rate of a central processing unit of the data storage system; and determining, from a first transfer technology and a second transfer technology and at least based on the utilization rate of the central processing unit, a target transfer technology for transferring a data block between the memory and the persistent memory, the first transfer technology transferring data through direct access to the memory, and the second transfer technology transferring data through the central processing unit. Therefore, the embodiments of the present disclosure can improve the data transfer performance of the storage system.
Type: Grant
Filed: October 7, 2020
Date of Patent: March 22, 2022
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Shuguang Gong, Long Wang, Tao Chen, Bing Liu
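The selection rule above, picking a transfer technology from CPU utilization, reduces to a simple decision function. The 50% threshold and the rule's direction are illustrative assumptions; the patent only says the choice is made "at least based on" utilization.

```python
# Minimal sketch of utilization-based transfer-technology selection.
# Assumption: below the threshold the CPU is idle enough to do the copy;
# above it, a direct-access (DMA-style) path keeps the CPU out of the way.
CPU_THRESHOLD = 50.0   # percent; illustrative value

def pick_transfer_technology(cpu_utilization):
    if cpu_utilization < CPU_THRESHOLD:
        return "cpu-copy"          # second technology: through the CPU
    return "direct-access"         # first technology: direct memory access

choice_idle = pick_transfer_technology(10.0)
choice_busy = pick_transfer_technology(90.0)
```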
-
Patent number: 11275600
Abstract: Distributed I/O virtualization includes receiving, at a first physical node in a plurality of physical nodes, an indication of a request to transfer data from an I/O device on the first physical node to a set of guest physical addresses. An operating system is executing collectively across the plurality of physical nodes. It further includes writing data from the I/O device to one or more portions of physical memory local to the first physical node. It further includes mapping the set of guest physical addresses to the written one or more portions of physical memory local to the first physical node.
Type: Grant
Filed: November 9, 2018
Date of Patent: March 15, 2022
Assignee: TidalScale, Inc.
Inventors: Leon Dang, Keith Reynolds, Isaac R. Nassi
-
Patent number: 11269888
Abstract: A data storage system implements techniques to efficiently store and retrieve structured data. For example, structured data is transformed into correlated segments, which are then redundancy coded and archived in a correlated fashion. The characteristics of the redundancy code used enable flexible handling of the archived data without excessive latency.
Type: Grant
Filed: November 28, 2016
Date of Patent: March 8, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Umar Farooq, Rishabh Animesh
-
Patent number: 11262926
Abstract: A computing system may generate a directed graph to access data stored in multiple locations or blocks of a data storage device or system. Cost values may be determined for each of multiple paths between nodes, representing the blocks or subsets of data. In some cases, nodes having a cost value between them that is less than a threshold may be combined into a single node. A master path, linking at least two of the multiple paths, between a start node and an end node, may be generated by iteratively selecting paths with a lowest cost. The number of paths considered for determining the lowest path cost may be limited by a complexity parameter, so as to optimize the path to access the data without introducing unbeneficial computational complexity.
Type: Grant
Filed: March 26, 2019
Date of Patent: March 1, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Rishabh Animesh, Jan Dean Larroza Catarata, Siddharth Shah
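One piece of the scheme above, collapsing nodes whose inter-node cost is below a threshold so later path search runs on a smaller graph, can be sketched directly. Costs, the threshold, and the restriction to a linear chain of nodes are illustrative assumptions.

```python
# Hedged sketch: merge adjacent nodes whose connecting path cost is below
# a threshold into a single combined node (here on a simple node chain).
MERGE_THRESHOLD = 2   # illustrative cost threshold

def merge_cheap_nodes(nodes, costs):
    """costs maps (a, b) node pairs to a path cost; cheap neighbors
    collapse into one group, expensive edges start a new group."""
    merged = [[nodes[0]]]
    for prev, cur in zip(nodes, nodes[1:]):
        if costs[(prev, cur)] < MERGE_THRESHOLD:
            merged[-1].append(cur)     # combine into the current node
        else:
            merged.append([cur])       # keep as a separate node
    return merged

groups = merge_cheap_nodes(
    ["A", "B", "C", "D"],
    {("A", "B"): 1, ("B", "C"): 5, ("C", "D"): 1},
)
```

With the cheap A-B and C-D edges collapsed, a path search only needs to consider two combined nodes instead of four.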
-
Patent number: 11256459
Abstract: This invention provides a data processing apparatus operable to execute processing requested by an application, where the apparatus comprises a processing unit configured to, if there is an instruction for processing, execute the processing in accordance with a command list indicated by the instruction; and a control unit configured to, upon receiving a request for processing from the application, generate a command list corresponding to the request and instruct the processing unit to perform the processing, wherein the processing unit comprises a switching unit configured to, upon receiving, from the control unit, a second instruction during execution of a command list for a first instruction, switch to execution of a command list for the second instruction at a timing of execution of a command that is a control point preset in the command list for the first instruction.
Type: Grant
Filed: December 16, 2019
Date of Patent: February 22, 2022
Assignee: Canon Kabushiki Kaisha
Inventor: Tadayuki Ito
-
Patent number: 11249461
Abstract: The present invention allows a communication mode used for communicating with a detection section to be easily and conveniently changed in a motor control device. A slave device (90) includes: a network communication section (120) configured to communicate with a PLC (100) via a communication network; an FB signal obtaining section configured to obtain an FB signal from a detection section; and a setting communication section (140) configured to receive communication mode information of an FB through another communication path different from the communication network. The FB signal obtaining section includes a reconfigurable device and is capable of changing a communication mode of the FB signal obtaining section by reconfiguring the reconfigurable device. The slave device (90) reconfigures the reconfigurable device in accordance with the communication mode information in a case where the setting communication section (140) receives the communication mode information.
Type: Grant
Filed: January 23, 2019
Date of Patent: February 15, 2022
Assignee: OMRON CORPORATION
Inventor: Takeshi Kiribuchi
-
Patent number: 11243714
Abstract: A Solid State Drive (SSD) is disclosed. The SSD may include flash memory storage to store data, a volatile memory storage, and a host interface layer to receive requests from a host machine. An SSD controller may manage reading data from and writing data to the flash memory storage, with a flash translation layer to translate between Logical Block Addresses and Physical Block Addresses, a flash memory controller to access the flash memory storage, a volatile memory controller to access the volatile memory storage, and an orchestrator to send instructions to a Data Movement Interconnect (DMI). The DMI may include at least two kernels, a Buffer Manager, a plurality of ring agents associated with the kernels and the Buffer Manager to handle messaging, a Data Movement Manager (DMM) to manage data movement, at least two data rings to move data between the ring agents, and a control ring to share commands and acknowledgments between the ring agents and the DMM.
Type: Grant
Filed: July 11, 2019
Date of Patent: February 8, 2022
Inventors: Ramdas P. Kachare, Jimmy K. Lau
-
Patent number: 11237970
Abstract: A computing system, method and apparatus to cache a portion of a data block. A processor can access data using memory addresses in an address space. A first memory can store a block of data at a block of contiguous addresses in the memory address space. A second memory can cache a first portion of the block of data identified by an item selection vector. For example, in response to a request to cache the block of data stored in the first memory, the computing system can communicate the first portion of the block of data from the first memory to the second memory according to the item selection vector without accessing a second portion of the block of data. Thus, different data blocks in the first memory of a same size can each be cached in different cache blocks of different sizes in the second memory.
Type: Grant
Filed: November 7, 2018
Date of Patent: February 1, 2022
Assignee: Micron Technology, Inc.
Inventor: Steven Jeffrey Wallach
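The item-selection-vector idea above can be shown with a short sketch: only the items whose selection bit is set are moved to the cache, so same-size source blocks yield different-size cache entries. This is an illustrative Python model, not the Micron hardware; the list-of-bits vector format is an assumption.

```python
# Sketch: cache only the items of a block flagged by a selection vector,
# never touching (or transferring) the unselected portion.
def cache_selected(block, selection_vector):
    """Return the cached portion: items whose selection bit is 1."""
    return [item for item, sel in zip(block, selection_vector) if sel]

block = [10, 20, 30, 40, 50, 60, 70, 80]   # one block of contiguous data
vector = [1, 0, 0, 1, 1, 0, 0, 1]          # hypothetical selection vector
cached = cache_selected(block, vector)     # only 4 of 8 items are moved
```

Two 8-item blocks with different vectors would occupy cache entries of different sizes, which is the property the abstract's last sentence describes.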
-
Patent number: 11238940
Abstract: Methods, systems, and devices for initialization techniques for memory devices are described. A memory system may include a memory array on a first die and a controller on a second die, where the second die is coupled with the first die. The controller may perform an initialization procedure based on operating instructions stored within the memory system. For example, the controller may read a first set of operating instructions from read-only memory on the second die. The controller may obtain a second set of operating instructions stored at a memory block of the memory array on the first die, with the memory block indicated by the first set of operating instructions. The controller may complete or at least further the initialization procedure based on the second set of operating instructions.
Type: Grant
Filed: November 19, 2020
Date of Patent: February 1, 2022
Assignee: Micron Technology, Inc.
Inventors: Antonino Pollio, Giuseppe Vito Portacci, Mauro Luigi Sali, Alessandro Magnavacca
-
Patent number: 11232053
Abstract: A direct memory access (DMA) system can include a memory configured to store a plurality of host profiles, a plurality of interfaces, wherein two or more of the plurality of interfaces correspond to different ones of a plurality of host processors, and a plurality of data engines coupled to the plurality of interfaces. The plurality of data engines are independently configurable to access different ones of the plurality of interfaces for different flows of a DMA operation based on the plurality of host profiles.
Type: Grant
Filed: June 9, 2020
Date of Patent: January 25, 2022
Assignee: Xilinx, Inc.
Inventors: Chandrasekhar S. Thyamagondlu, Darren Jue, Ravi Sunkavalli, Akhil Krishnan, Tao Yu, Kushagra Sharma
-
Patent number: 11231987
Abstract: A debugging tool, such as may take the form of a software daemon running in the background, can provide for the monitoring of utilization of access mechanisms, such as Direct Memory Access (DMA) mechanisms, for purposes such as debugging and performance improvement. Debugging tools can obtain and provide DMA utilization data, as may include statistics, graphs, predictive analytics, or other such information. The data can help to pinpoint issues that have arisen, or may arise, in the system, and take appropriate remedial or preventative action. Data from related DMAs can be aggregated intelligently, helping to identify bottlenecks where the individual DMA data might not. A debugging tool can store state information as snapshots, which may be beneficial if the system is in a state where current data is not accessible. The statistics and predictive analytics can also be leveraged to optimize system performance.
Type: Grant
Filed: June 28, 2019
Date of Patent: January 25, 2022
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Benita Bose, Ron Diamant, Georgy Zorik Machulsky, Alex Levin
-
Patent number: 11210218
Abstract: A method for memory address mapping in a disaggregated memory system includes receiving an indication of one or more ranges of host physical addresses (HPAs) from a compute node of a plurality of compute nodes, the one or more ranges of HPAs including a plurality of memory addresses corresponding to different allocation slices of the disaggregated memory pool that are allocated to the compute node. The one or more ranges of HPAs are converted into a contiguous range of device physical addresses (DPAs). For each DPA, a target address decoder (TAD) is identified based on a slice identifier and a slice-to-TAD index. Each DPA is mapped to a media-specific physical element of a physical memory unit of the disaggregated memory pool based on the TAD.
Type: Grant
Filed: September 3, 2020
Date of Patent: December 28, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Siamak Tavallaei, Ishwar Agarwal, Vishal Soni
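The HPA-to-DPA flattening and the slice-to-TAD lookup described above can be sketched in miniature. The 4 KiB slice size, the table contents, and all names here are illustrative assumptions, not details from the patent.

```python
# Hedged sketch: flatten discontiguous HPA ranges into one contiguous DPA
# range, then pick a target address decoder (TAD) via a slice-to-TAD index.
SLICE_SIZE = 0x1000                                  # assumed 4 KiB slices
slice_to_tad = {0: "TAD-A", 1: "TAD-B", 2: "TAD-A"}  # hypothetical index

def hpas_to_dpas(hpa_ranges):
    """Map each slice-aligned HPA to the next contiguous DPA, starting at 0."""
    dpas, next_dpa = {}, 0
    for start, length in hpa_ranges:
        for off in range(0, length, SLICE_SIZE):
            dpas[start + off] = next_dpa
            next_dpa += SLICE_SIZE
    return dpas

def tad_for_dpa(dpa):
    """Slice identifier is the DPA's slice number; look up its TAD."""
    return slice_to_tad[dpa // SLICE_SIZE]

# two discontiguous HPA ranges collapse into DPAs 0x0000, 0x1000, 0x2000
mapping = hpas_to_dpas([(0x80000000, 0x2000), (0x90000000, 0x1000)])
```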
-
Patent number: 11210393
Abstract: A technology for mutually isolating accessors of a shared electronic device from leakage of context data after a context switch comprises: on making the shared electronic device available to the plurality of accessors, establishing a portion of storage as an indicator location for the shared electronic device; when a first accessor requests use of the shared electronic device, writing at least one device-reset-required indicator to the indicator location; on switching context to a new context, after context save, when a second accessor requests use of the shared electronic device, resetting context data of the shared electronic device to a known state and reconciling the first device-reset-required indicator and a second device-reset-required indicator for the new context.
Type: Grant
Filed: April 6, 2017
Date of Patent: December 28, 2021
Assignee: ARM IP LIMITED
Inventors: Milosch Meriac, Alessandro Angelino
-
Patent number: 11199992
Abstract: The present disclosure generally relates to a method and device for detecting patterns in host command pointers. When a new command is received by a storage device from a host computer, host command pointers sent to the storage device are analyzed to detect any patterns within the host command pointers. If a pattern is detected, the storage device can store the host command pointers in a reduced pointer storage structure. Thereafter, the storage device can perform the command indicated by the host command pointers using the reduced pointer storage structure.
Type: Grant
Filed: July 15, 2019
Date of Patent: December 14, 2021
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Elkana Richter, Shay Benisty
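One natural pattern for the reduced storage idea above is a constant stride: if the pointers form an arithmetic sequence, only a base, stride, and count need to be kept. The patent does not specify its reduced structure; the `(base, stride, count)` form here is an illustrative assumption.

```python
# Hedged sketch: detect a constant-stride pointer pattern and store it
# compactly; fall back to the raw list when no pattern exists.
def compress_pointers(ptrs):
    if len(ptrs) >= 2:
        stride = ptrs[1] - ptrs[0]
        if all(b - a == stride for a, b in zip(ptrs, ptrs[1:])):
            return {"base": ptrs[0], "stride": stride, "count": len(ptrs)}
    return {"raw": list(ptrs)}

def expand(reduced):
    """Recover the full pointer list from either representation."""
    if "raw" in reduced:
        return reduced["raw"]
    return [reduced["base"] + i * reduced["stride"]
            for i in range(reduced["count"])]

patterned = compress_pointers([0x1000, 0x1200, 0x1400, 0x1600])
unpatterned = compress_pointers([0x1000, 0x9000, 0x2000])
```

The patterned case shrinks four pointers to three fields yet expands back losslessly, which is what lets the device execute the command from the reduced structure.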
-
Patent number: 11194745
Abstract: One example method includes receiving an IO request from an application, determining if an affinity policy applies to the application that transmitted the IO request, when an affinity policy applies to the application, directing the IO request to a specified site of a replication system, when no affinity policy applies to the application, determining if a lag in replication of the IO request from a primary site to a replication site is acceptable, if a lag in replication of the IO request is acceptable, processing the IO request using performance based parameters and/or load balancing parameters, and if a lag in replication of the IO request is not acceptable, either directing the IO request to a most up to date replica site, or requesting a clone copy of a volume to which the IO request was initially directed and directing the IO request to the cloned copy.
Type: Grant
Filed: October 28, 2020
Date of Patent: December 7, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Mohamed Abdullah Gommaa Sohail, Said Tabet
-
Patent number: 11188486
Abstract: The present disclosure relates to the technical field of a multi-chip system, and provides a master chip, a slave chip, and an inter-chip DMA transmission system. The master chip is connected to the slave chip through at least one first transmission channel (17) and a second transmission channel (18). The master chip includes a DMA controller (2) and an MCU (3). For each of the first transmission channels, when it is detected that any first transmission channel (17) is in an idle state, the MCU (3) configures one of a plurality of first peripherals (12) of the slave chip into a DMA mode. The DMA controller (2) is configured to receive, through the first transmission channel (17), a DMA request (req_s_0-req_s_N) generated by the first peripheral (12) in the DMA mode, and obtain a DMA data of the first peripheral (12) through the second transmission channel (18).
Type: Grant
Filed: November 26, 2019
Date of Patent: November 30, 2021
Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
Inventors: Zhibing Liang, Yifan Li, Zekai Chen
-
Patent number: 11182092
Abstract: The present disclosure provides a new and innovative system, methods and apparatus for PRI overhead reduction for virtual machine migration. In an example, a system includes a memory and a hypervisor. The memory includes a plurality of memory addresses on a source host. The hypervisor is configured to generate a migration page table associated with the memory. The hypervisor is also configured to receive a migration command to copy data from a portion of the memory to a destination host. A first range of memory addresses includes data copied from the portion of the memory and a second range of memory addresses includes data that is not copied. The hypervisor is also configured to modify the migration page table to include a page table entry associated with the first range of memory addresses being migrated from the source host to the destination host.
Type: Grant
Filed: July 14, 2020
Date of Patent: November 23, 2021
Assignee: Red Hat, Inc.
Inventors: Michael Tsirkin, Amnon Ilan
-
Patent number: 11176064
Abstract: Methods and apparatus for reducing bus overhead with virtualized transfer rings. The Inter-Processor Communications (IPC) bus uses a ring buffer (e.g., a so-called Transfer Ring (TR)) to provide Direct Memory Access (DMA)-like memory access between processors. However, performing small transactions within the TR inefficiently uses bus overhead. A Virtualized Transfer Ring (VTR) is a null data structure that doesn't require any backing memory allocation. A processor servicing a VTR data transfer includes the data payload as part of an optional header/footer data structure within a completion ring (CR).
Type: Grant
Filed: September 30, 2019
Date of Patent: November 16, 2021
Assignee: Apple Inc.
Inventors: Karan Sanghi, Saurabh Garg, Vladislav V. Petkov
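The VTR trade-off above, carrying small payloads inline in the completion record instead of allocating backing memory, can be sketched as a simple dispatch. The 64-byte threshold and record layout are illustrative assumptions, not Apple's values.

```python
# Hedged sketch: small payloads ride inline in the completion-ring record
# (VTR, no backing allocation); large ones go through a backed TR buffer.
INLINE_LIMIT = 64   # assumed inline-payload ceiling, in bytes

def complete_transfer(payload, backing_memory):
    """Return a completion-ring record for this payload."""
    if len(payload) <= INLINE_LIMIT:
        return {"mode": "VTR", "inline": payload}   # no allocation at all
    buf_id = len(backing_memory)
    backing_memory.append(payload)                  # TR needs backing memory
    return {"mode": "TR", "buffer": buf_id}

backing = []
small = complete_transfer(b"ack", backing)       # inline, zero allocations
large = complete_transfer(b"x" * 4096, backing)  # conventional backed TR
```

Only the 4 KiB transfer consumed backing memory; the 3-byte acknowledgment cost nothing beyond its completion record, which is the bus-overhead saving the abstract describes.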
-
Patent number: 11169926
Abstract: A memory system, a memory controller and an operating method of the memory controller. The memory controller may include a host interface configured to communicate with a host; a memory interface configured to communicate with a memory device; and a control circuit configured to control an operation of the memory device. The control circuit may selectively determine to use a cache for an operation indicated by a command received from the host, depending on a number of memory dies, of a plurality of memory dies in the memory device, detected to be in an activated state.
Type: Grant
Filed: October 23, 2019
Date of Patent: November 9, 2021
Assignee: SK hynix Inc.
Inventors: Seung-Gu Ji, Byeong-Gyu Park
-
Patent number: 11163913
Abstract: Technologies for secure I/O include a compute device having a processor, a memory, an input/output (I/O) device, and a filter logic. The filter logic is configured to receive a first key identifier from the processor, wherein the first key identifier is indicative of a shared memory range including a shared key identifier range to be used for untrusted I/O devices, and receive a transaction from the I/O device, wherein the transaction includes a second key identifier and a trust device ID indicator associated with the I/O device. The filter logic is further configured to determine whether the transaction is asserted with the trust device ID indicator indicative of whether the I/O device is assigned to a trust domain and determine, in response to a determination that the transaction is not asserted with the trust device ID indicator, whether the second key identifier matches the first key identifier.
Type: Grant
Filed: December 28, 2018
Date of Patent: November 2, 2021
Assignee: INTEL CORPORATION
Inventors: Luis Kida, Krystof Zmudzinski, Reshma Lal, Pradeep Pappachan, Abhishek Basak, Anna Trikalinou
-
Patent number: 11163513
Abstract: An image forming apparatus includes a first processor, a second processor, a data transfer portion, a consistency determination portion, an abnormality determination portion, a re-transfer control portion, and an abnormality processing portion. The data transfer portion transfers data via a bus between a storage medium and the second processor. The consistency determination portion determines whether or not there is consistency between data before and after a transfer by the data transfer portion. The abnormality determination portion, upon determination that there is consistency, determines whether or not there is abnormality in the data transfer process. The re-transfer control portion, upon determination that there is no consistency, causes the data transfer portion to re-transfer the data.
Type: Grant
Filed: September 17, 2020
Date of Patent: November 2, 2021
Assignee: KYOCERA Document Solutions Inc.
Inventors: Yuichi Sugiyama, Hideo Tanii
-
Patent number: 11138132
Abstract: Technologies for secure I/O data transfer with an accelerator device include a computing device having a processor and an accelerator. The processor establishes a trusted execution environment. The trusted execution environment may generate an authentication tag based on a memory-mapped I/O transaction, write the authentication tag to a register of the accelerator, and dispatch the transaction to the accelerator. The accelerator performs a cryptographic operation associated with the transaction, generates an authentication tag based on the transaction, and compares the generated authentication tag to the authentication tag received from the trusted execution environment. The accelerator device may initialize an authentication tag in response to a command from the trusted execution environment, transfer data between host memory and accelerator memory, perform a cryptographic operation in response to transferring the data, and update the authentication tag in response to transferring the data.
Type: Grant
Filed: December 26, 2018
Date of Patent: October 5, 2021
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Alpa Narendra Trivedi, Luis Kida, Pradeep M. Pappachan, Soham Jayesh Desai, Nanda Kumar Unnikrishnan
-
Patent number: 11134021
Abstract: Techniques and apparatus for processor queue management are described. In one embodiment, for example, an apparatus to provide queue congestion management assistance may include at least one memory and logic for a queue manager, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to determine queue information for at least one queue element (QE) queue storing at least one QE, compare the queue information to at least one queue threshold value, and generate a queue notification responsive to the queue information being outside of the queue threshold value. Other embodiments are described and claimed.
Type: Grant
Filed: December 29, 2016
Date of Patent: September 28, 2021
Assignee: INTEL CORPORATION
Inventors: Jonathan Kenny, Niall D. McDonnell, Andrew Cunningham, Debra Bernstein, William G. Burroughs, Hugh Wilkinson
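The compare-against-threshold-and-notify logic can be sketched in a few lines. The function name, the notification shape, and the use of a high/low threshold pair (rather than a single value) are illustrative assumptions, not the patent's interface.

```python
def check_queue(depth: int, low: int, high: int):
    """Compare queue depth (the 'queue information') against threshold
    values and return a notification dict when it falls outside them;
    return None when the queue is within its normal band."""
    if depth > high:
        return {"event": "congestion", "depth": depth}
    if depth < low:
        return {"event": "underrun", "depth": depth}
    return None

assert check_queue(90, low=8, high=64) == {"event": "congestion", "depth": 90}
assert check_queue(2, low=8, high=64) == {"event": "underrun", "depth": 2}
assert check_queue(32, low=8, high=64) is None
```

In hardware this comparison would run continuously per queue; software would consume the notification to throttle producers or drain consumers.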
-
Patent number: 11132319
Abstract: Aspects of the embodiments are directed to systems, methods, and devices for controlling power management entry. A PCIe root port controller can be configured to receive, at a downstream port of the root port controller, from an upstream switch port, a first power management entry request; reject the first power management entry request; transmit a negative acknowledgement message to the upstream switch port; initiate a timer for at least 20 microseconds; during the 20 microseconds, ignore any power management entry requests received from the upstream switch port; receive, after the expiration of the 20 microseconds, a subsequent power management entry request; accept the subsequent power management entry request; and transmit an acknowledgement of the acceptance of the subsequent power management entry request to the upstream switch port.
Type: Grant
Filed: January 12, 2018
Date of Patent: September 28, 2021
Assignee: Intel Corporation
Inventors: Christopher Wing Hong Ngau, Hooi Kar Loo, Poh Thiam Teoh, Shashitheren Kerisnan, Maxim Dan, Chee Siang Chow
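The reject-then-back-off handshake can be modeled as a small state machine. The class name, the string return values, and passing the current time explicitly are illustrative assumptions made to keep the sketch deterministic; a real root port would use hardware timers and PCIe DLLPs.

```python
class RootPort:
    """Toy model of the described handshake: the first power management
    entry request is rejected (NAK) and starts a back-off timer; requests
    inside the window are ignored; a request after the window is accepted."""
    BACKOFF_S = 20e-6  # at least 20 microseconds

    def __init__(self):
        self.deadline = None

    def handle_pm_entry(self, now: float) -> str:
        if self.deadline is None:       # first request: reject, start timer
            self.deadline = now + self.BACKOFF_S
            return "NAK"
        if now < self.deadline:         # inside the 20 us window: ignore
            return "IGNORED"
        return "ACK"                    # after the window: accept

port = RootPort()
assert port.handle_pm_entry(0.0) == "NAK"
assert port.handle_pm_entry(10e-6) == "IGNORED"
assert port.handle_pm_entry(25e-6) == "ACK"
```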
-
Patent number: 11126522
Abstract: An interconnect offload component is arranged to operate in an offloading mode, and a memory access component enables access to a memory element for functional data transmitted over a debug network of a signal processing device. In the offloading mode the interconnect offload component is arranged to receive functional data from an interconnect client component for communication to a destination component, and forward at least a part of the received functional data to a debug network for communication to the destination component via the debug network. The memory access component is arranged to receive a debug format message transmitted over the debug network, extract functional data from the received debug format message, said functional data originating from an interconnect client component for communication to a memory element, and perform a direct memory access to the memory element comprising the extracted functional data.
Type: Grant
Filed: June 18, 2013
Date of Patent: September 21, 2021
Assignee: NXP USA, Inc.
Inventors: Benny Michalovich, Ron Bar, Eran Glickman, Dmitriy Shurin
-
Patent number: 11119704
Abstract: In one embodiment, a flash sharing controller is to enable a plurality of components of a platform to share a flash memory. The flash sharing controller may include: a flash sharing class layer including a configuration controller to configure the plurality of components to be flash master devices and configure a flash sharing slave device for the flash memory; and a physical layer coupled to the flash sharing class layer to communicate with the plurality of components via a bus. Other embodiments are described and claimed.
Type: Grant
Filed: March 28, 2019
Date of Patent: September 14, 2021
Assignee: Intel Corporation
Inventors: Zhenyu Zhu, Mikal Hunsaker, Karthi R. Vadivelu, Rahul Bhatt, Kenneth P. Foust, Rajesh Bhaskar, Amit Kumar Srivastava
-
Patent number: 11106622
Abstract: An operating system (OS) may communicate with a basic input/output system (BIOS) at OS runtime to inform the BIOS of a firmware update storage location. A method may begin with receiving, by an OS, an update for a firmware of an information handling system. The OS may select a memory for storage of the firmware update and may store the firmware update in the selected memory. The OS may then store a location of the firmware update in a portion of a memory accessible by both the OS and the BIOS.
Type: Grant
Filed: May 10, 2019
Date of Patent: August 31, 2021
Assignee: Dell Products L.P.
Inventors: Krishnakumar Narasimhan, Santosh Hanamant Gore, Reveendra Babu Madala
-
Patent number: 11093276
Abstract: Embodiments of the present disclosure provide systems and methods for batch accessing. The system includes a plurality of buffers configured to store data; a plurality of processor cores that each have a corresponding buffer of the plurality of buffers; a buffer controller configured to generate instructions for performing a plurality of buffer transactions on at least some buffers of the plurality of buffers; and a plurality of data managers communicatively coupled to the buffer controller, each data manager coupled to a corresponding buffer of the plurality of buffers and configured to execute a request for a buffer transaction at the corresponding buffer according to an instruction received from the buffer controller.
Type: Grant
Filed: January 14, 2019
Date of Patent: August 17, 2021
Assignee: ALIBABA GROUP HOLDING LIMITED
Inventors: Qinggang Zhou, Lingling Jin
-
Patent number: 11093180
Abstract: A RAID storage multi-operation command system includes a RAID storage controller device that generates a multi-operation command including a multi-operation command role and a plurality of addresses, and transmits the multi-operation command, and also includes a RAID storage device that is coupled to the RAID storage controller device. The RAID storage device receives the multi-operation command from the RAID storage controller device, and identifies a plurality of operations that are associated in a database with the multi-operation command role included in the multi-operation command. The RAID storage device then performs the plurality of operations using the plurality of addresses included in the multi-operation command, which may include retrieving first data located in a first address, retrieving second data located in a second address, performing an XOR operation on the first and second data to produce third data, and writing the third data to one or more third addresses.
Type: Grant
Filed: September 27, 2019
Date of Patent: August 17, 2021
Assignee: Dell Products L.P.
Inventors: Gary Benedict Kotzur, William Emmett Lynn, Kevin Thomas Marks, Chandrashekar Nelogal, James Peter Giannoules
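The read-read-XOR-write role described above is the classic RAID parity update, and can be sketched as one multi-operation. Modeling the address space as a dict and the function name are illustrative assumptions.

```python
def multi_op_xor(storage: dict, addr_a: int, addr_b: int, addr_out: int) -> None:
    """One multi-operation command: retrieve the blocks at two addresses,
    XOR them byte-wise, and write the result to a third address."""
    a = storage[addr_a]
    b = storage[addr_b]
    storage[addr_out] = bytes(x ^ y for x, y in zip(a, b))

blocks = {
    0x100: bytes([0b1100] * 4),
    0x200: bytes([0b1010] * 4),
}
multi_op_xor(blocks, 0x100, 0x200, 0x300)
assert blocks[0x300] == bytes([0b0110] * 4)  # 0b1100 ^ 0b1010 == 0b0110
```

Pushing all three operations to the drive in a single command saves the controller two data transfers per parity update, which is the point of giving the drive a command "role" rather than individual reads and writes.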
-
Patent number: 11082367
Abstract: A circuit includes a buffer configured to receive a first Flexible Ethernet (FlexE) frame having 66b blocks including 66b overhead blocks and 66b data blocks, wherein the buffer is configured to accumulate the 66b overhead blocks and the 66b data blocks; a mapping circuit configured to map four x 66b overhead blocks from the buffer into a 257b overhead block and to map a sequence of four x 66b data blocks from the buffer into a 257b data block; and a transmit circuit configured to transmit a second FlexE frame having 257b blocks from the mapping circuit. The mapping circuit can be configured to accumulate four 66b blocks of a same kind from the buffer for mapping into a 257b block, where the same kind is one of overhead and a particular calendar slot n where n=0-19.
Type: Grant
Filed: May 10, 2019
Date of Patent: August 3, 2021
Assignee: Ciena Corporation
Inventors: Sebastien Gareau, Eric S. Maniloff
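The accumulate-four-of-a-kind mapping can be sketched as a grouping pass over a block stream. Representing each 66b block as a `(kind, payload)` tuple and emitting the 257b block as a tuple of four payloads are illustrative assumptions; real hardware would also rewrite sync headers.

```python
from collections import defaultdict

def pack_257b(stream):
    """Accumulate four 66b blocks of the same kind (overhead, or a given
    calendar slot) and emit each completed group as one 257b block."""
    pending = defaultdict(list)
    out = []
    for kind, payload in stream:
        pending[kind].append(payload)
        if len(pending[kind]) == 4:
            out.append((kind, tuple(pending[kind])))
            pending[kind].clear()
    return out

stream = [("oh", i) for i in range(4)] + [("slot0", i) for i in range(4)]
packed = pack_257b(stream)
assert packed == [("oh", (0, 1, 2, 3)), ("slot0", (0, 1, 2, 3))]
```

Blocks of different kinds are never mixed into one 257b block, which is why the buffer must accumulate per kind rather than simply taking the next four blocks in arrival order.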
-
Patent number: 11080190
Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to monitor one or more processing threads of a storage device. Each of the one or more processing threads includes two or more cache states. The at least one processor also updates one or more data structures to indicate a subject cache state of each of the one or more processing threads and detect an event that disrupts at least one of the one or more processing threads. Further, the processor determines a cache state of the at least one of the one or more processing threads contemporaneous to the disruption event using the one or more data structures and performs a recovery process for the disrupted at least one of the one or more processing threads.
Type: Grant
Filed: July 10, 2019
Date of Patent: August 3, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Kaustubh Sahasrabudhe, Steven Ivester
-
Patent number: 11068423
Abstract: Provided is a control device that includes: a communication unit; one or more functional units; and a communication line connecting the communication unit and the one or more functional units. The communication unit includes: a computation processing unit in which a processor executes one or more tasks; a communication circuit which handles the transmission and reception of communication frames via the communication line; and a control circuit connected to the computation processing unit and the communication circuit. The control circuit includes: a first Direct Memory Access (DMA) core for accessing the computation processing unit; a second DMA core for accessing the communication circuit; and a controller which, in response to a trigger from the computation processing unit, provides sequential commands to the first DMA core and the second DMA core in accordance with a predefined descriptor table.
Type: Grant
Filed: November 20, 2017
Date of Patent: July 20, 2021
Assignee: OMRON Corporation
Inventors: Masaichi Takai, Yasuhiro Nishimura
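The controller's descriptor-table walk can be sketched as follows. The descriptor field names (`core`, `op`, `addr`) and modeling each DMA core as a callable are illustrative assumptions.

```python
def run_descriptors(table, dma_cores):
    """On a trigger, walk a predefined descriptor table in order and
    issue each command to the named DMA core, collecting the results."""
    log = []
    for desc in table:
        core = dma_cores[desc["core"]]
        log.append(core(desc["op"], desc["addr"]))
    return log

# Two cores: one facing the computation processing unit, one the
# communication circuit, as in the abstract.
cores = {
    "cpu":  lambda op, addr: f"cpu:{op}@{addr:#x}",
    "comm": lambda op, addr: f"comm:{op}@{addr:#x}",
}
table = [
    {"core": "cpu",  "op": "read",  "addr": 0x1000},  # fetch frame payload
    {"core": "comm", "op": "write", "addr": 0x2000},  # push it to the circuit
]
assert run_descriptors(table, cores) == ["cpu:read@0x1000", "comm:write@0x2000"]
```

Because the sequence is fixed in the table, the processor only has to fire one trigger per communication cycle instead of programming each DMA transfer individually.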
-
Patent number: 11055174
Abstract: Disclosed are devices, systems and methods for improving performance of a block of a memory device. In an example, performance is improved by implementing soft chipkill recovery to mitigate bitline failures in data storage devices. An exemplary method includes encoding each horizontal row of cells of a plurality of memory cells of a memory block to generate each of a plurality of codewords, and generating a plurality of parity symbols, each of the plurality of parity symbols based on diagonally positioned symbols spanning the plurality of codewords.
Type: Grant
Filed: December 17, 2019
Date of Patent: July 6, 2021
Assignee: SK hynix Inc.
Inventors: Naveen Kumar, Chenrong Xiong, Aman Bhatia, Yu Cai, Fan Zhang
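Diagonal parity over row codewords can be sketched with XOR. The exact diagonal layout (here, symbol `(d + r) % n` of row `r` for diagonal `d`) is an illustrative assumption; the abstract only states that each parity symbol spans diagonally positioned symbols across the codewords.

```python
def diagonal_parity(codewords):
    """Compute one parity symbol per diagonal across equal-length row
    codewords: parity[d] = XOR over rows r of codewords[r][(d + r) % n].
    A bitline failure corrupts one column, so each corrupted symbol sits
    on a different diagonal and stays individually recoverable."""
    rows, n = len(codewords), len(codewords[0])
    parity = []
    for d in range(n):
        p = 0
        for r in range(rows):
            p ^= codewords[r][(d + r) % n]
        parity.append(p)
    return parity

cw = [
    [1, 2, 3, 4],    # row codeword 0
    [5, 6, 7, 8],    # row codeword 1
    [9, 10, 11, 12], # row codeword 2
]
assert diagonal_parity(cw) == [12, 9, 2, 11]
```

The key property is that a whole-column (bitline) failure touches each diagonal at most once, so the diagonal parity can rebuild every lost symbol, whereas row-only ECC would see one error in every codeword.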
-
Patent number: 11016912
Abstract: A memory controller according to example embodiments of the inventive concept includes a system bus, a first direct memory access (DMA) engine configured to write data in a buffer memory through the system bus, a snooper configured to output notification information indicating whether the data is stored in the buffer memory by snooping the system bus, and a second direct memory access (DMA) engine configured to transmit the data written in the buffer memory to a host in response to the notification information from the snooper.
Type: Grant
Filed: October 21, 2019
Date of Patent: May 25, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventor: JunBum Park
-
Patent number: 11003606
Abstract: A direct memory access (DMA) controller includes circuitry configured to load a DMA transfer descriptor configured to define which memory elements within a contiguous block of n memory elements are to be included in a given DMA transfer. The circuitry is further configured to, based on the DMA transfer descriptor, determine whether each memory element within the contiguous block of n memory elements is to be included in the given DMA transfer, including a determination that two or more non-contiguous sub-blocks of memory elements within the contiguous block of n memory elements are to be transferred. The circuitry is further configured to, based on the determination of whether each memory element within the contiguous block of n memory elements is to be included in the given DMA transfer, perform the DMA transfer of memory elements determined to be included within the given DMA transfer.
Type: Grant
Filed: June 19, 2020
Date of Patent: May 11, 2021
Assignee: Microchip Technology Incorporated
Inventors: Laurentiu Birsan, Manish Patel, Joseph Triece
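A per-element selection descriptor can be sketched as a bitmask over the contiguous block. Encoding the descriptor as an integer mask (bit i selects element i) is an illustrative assumption; the abstract only requires that the descriptor defines inclusion per element.

```python
def dma_transfer(descriptor_mask: int, src: list, dst: dict) -> None:
    """Copy only the elements of the contiguous source block whose bit is
    set in the descriptor mask. Because any bit pattern is allowed, this
    naturally covers two or more non-contiguous sub-blocks in one transfer."""
    for i, value in enumerate(src):
        if descriptor_mask >> i & 1:
            dst[i] = value

block = [10, 11, 12, 13, 14, 15, 16, 17]
out = {}
# Bits 0, 2, 6, 7 set: one lone element plus two non-contiguous runs.
dma_transfer(0b11000101, block, out)
assert out == {0: 10, 2: 12, 6: 16, 7: 17}
```

The alternative, one descriptor per contiguous run, costs a descriptor fetch per sub-block; a single mask descriptor moves an arbitrary selection in one transfer setup.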
-
Patent number: 10997496
Abstract: A method, computer program product, and system perform computations using a sparse convolutional neural network accelerator. Compressed-sparse data is received for input to a processing element, wherein the compressed-sparse data encodes non-zero elements and corresponding multi-dimensional positions. The non-zero elements are processed in parallel by the processing element to produce a plurality of result values. The corresponding multi-dimensional positions are processed in parallel by the processing element to produce destination addresses for each result value in the plurality of result values. Each result value is transmitted to a destination accumulator associated with the destination address for the result value.
Type: Grant
Filed: March 14, 2017
Date of Patent: May 4, 2021
Assignee: NVIDIA Corporation
Inventors: William J. Dally, Angshuman Parashar, Joel Springer Emer, Stephen William Keckler, Larry Robert Dennison
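The two parallel paths (values producing results, positions producing destination addresses) can be sketched as follows. The squaring stand-in for the computation and row-major flattening of `(row, col)` positions are illustrative assumptions; the real accelerator performs convolution partial products.

```python
def scatter_results(nonzeros, positions, out_width):
    """Process non-zero values into results and, independently, turn their
    multi-dimensional (row, col) positions into flat destination addresses;
    pair each result with the accumulator address it should be sent to."""
    results = [v * v for v in nonzeros]                      # stand-in compute
    addresses = [r * out_width + c for (r, c) in positions]  # row-major address
    return list(zip(addresses, results))

# Compressed-sparse input: only the non-zeros and where they live.
pairs = scatter_results([3, 5], [(0, 2), (1, 1)], out_width=4)
assert pairs == [(2, 9), (5, 25)]
```

Storing only `(value, position)` pairs is what lets the processing element skip the zero multiplications entirely, which is the source of the sparse accelerator's speedup.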
-
Patent number: 10969983
Abstract: A method for implementing NVMe over fabrics includes generating, by a terminal, an NVMe instruction, where the NVMe instruction indicates a data read operation or a data write operation. The method further includes sending, by the terminal by using remote direct memory access (RDMA), the NVMe instruction to a submission queue (SQ) that is stored in a server. When the NVMe instruction indicates the data read operation, the method includes receiving, by the terminal by using the RDMA, to-be-read data sent by the server. Alternatively, when the NVMe instruction indicates the data write operation, the method includes sending, by the terminal, to-be-written data to the server by using the RDMA. The method further includes receiving, by the terminal, an NVMe completion instruction sent by using the RDMA by the server; and writing, by the terminal, the NVMe completion instruction into a completion queue (CQ) that is set in the terminal.
Type: Grant
Filed: November 3, 2017
Date of Patent: April 6, 2021
Assignee: Huawei Technologies Co., Ltd.
Inventors: Shiping Deng, Hongguang Liu, Haitao Guo, Xin Qiu
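The SQ-on-the-server / CQ-on-the-terminal split can be sketched with two queues. All names here are illustrative, and plain Python deques stand in for RDMA-written queue memory; a real implementation uses RDMA verbs on registered buffers.

```python
from collections import deque

class Terminal:
    """Toy model of the described flow: the terminal 'RDMA-writes' an NVMe
    command into the server-side submission queue (SQ), and the server
    'RDMA-writes' the completion back into the terminal-side completion
    queue (CQ)."""
    def __init__(self, server_sq: deque):
        self.server_sq = server_sq  # resides on the server, written via RDMA
        self.cq = deque()           # resides on the terminal

    def submit(self, opcode: str, data=None):
        self.server_sq.append({"opcode": opcode, "data": data})

    def post_completion(self, entry):
        self.cq.append(entry)

sq = deque()
term = Terminal(sq)
term.submit("write", data=b"payload")

cmd = sq.popleft()                  # server side: dequeue and execute
term.post_completion({"status": "ok", "opcode": cmd["opcode"]})

assert term.cq[0] == {"status": "ok", "opcode": "write"}
```

Placing the SQ in server memory and the CQ in terminal memory means each side only ever polls local memory; all cross-node traffic is one-sided RDMA writes, avoiding per-command interrupts and round trips.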