Direct Memory Accessing (DMA) Patents (Class 710/22)
  • Patent number: 10270470
    Abstract: Embodiments of the present invention provide a Polar code decoding method and decoder. The decoding method includes: segmenting a first Polar code having a length of N into m mutually coupled second Polar codes, where a length of each second Polar code is N/m, N and m are integer powers of 2, and N>m; independently decoding the m second Polar codes to acquire decoding results of the m second Polar codes; and obtaining a decoding result of the first Polar code according to the decoding results of the m second Polar codes. In the embodiments of the present invention, a Polar code having a length of N is segmented into multiple segments of mutually coupled Polar codes; the segmented Polar codes are independently decoded; and results of the independent decoding are jointly processed to obtain a decoding result of an original Polar code.
    Type: Grant
    Filed: September 4, 2015
    Date of Patent: April 23, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Bin Li, Hui Shen
  • Patent number: 10263119
    Abstract: A novel semiconductor device is provided. The semiconductor device includes a programmable logic device including a programmable logic element, a control circuit, and a detection circuit. The programmable logic device includes a plurality of contexts. The control circuit is configured to control selection of the contexts. The detection circuit is configured to output a signal corresponding to the amount of radiation. The control circuit is configured to switch between a first mode and a second mode in accordance with the signal corresponding to the amount of radiation. The first mode is a mode in which the programmable logic device performs processing by a multi-context method, and the second mode is a mode in which the programmable logic device performs processing using a majority signal of signals output from the logic element multiplexed by the plurality of contexts.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: April 16, 2019
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventors: Takashi Nakagawa, Yoshiyuki Kurokawa, Munehiro Kozuma
  • Patent number: 10261926
    Abstract: A multi-core processor manages contention amongst its cores for access to a shared resource using a semaphore that maintains separate access-request queues for different cores and uses a selectable scheduling algorithm to grant pending requests, one at a time. The semaphore signals the core whose request is granted by sending it an interrupt signal using a dedicated core line that is not part of the system bus. The granted request is then de-queued, and the core accesses the shared resource in response to receiving the interrupt signal. The use of dedicated core lines for transmitting interrupt signals from the semaphore to the cores alleviates the need for repeated polling of the semaphore on the system bus. The use of the scheduling algorithm prevents a potential race condition between contending cores.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: April 16, 2019
    Assignee: NXP USA, INC.
    Inventors: Liang Jia, Zhijun Chen, Zhiling Sui
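    Sketch: A minimal C model of the per-core request queues and dedicated interrupt lines described above; the single-slot queues, the round-robin policy, and the raise_core_irq hook are illustrative assumptions, not the patented hardware.
    ```c
    /* Hypothetical model of a hardware semaphore that keeps one pending
     * request per core and signals grants over dedicated interrupt lines. */
    #include <stdbool.h>

    #define NUM_CORES 4

    struct core_semaphore {
        bool pending[NUM_CORES]; /* one outstanding request per core            */
        int  owner;              /* core holding the resource, -1 when free     */
        int  next_rr;            /* round-robin pointer (one selectable policy) */
    };

    /* Stand-in for driving a dedicated interrupt line to one core. */
    static void raise_core_irq(int core) { (void)core; /* platform specific */ }

    void sem_request(struct core_semaphore *s, int core) {
        s->pending[core] = true;          /* enqueue the core's request */
    }

    /* Grant at most one pending request using round-robin scheduling. */
    void sem_schedule(struct core_semaphore *s) {
        if (s->owner >= 0)
            return;                       /* resource still busy */
        for (int i = 0; i < NUM_CORES; i++) {
            int core = (s->next_rr + i) % NUM_CORES;
            if (s->pending[core]) {
                s->pending[core] = false; /* de-queue the granted request   */
                s->owner = core;
                s->next_rr = (core + 1) % NUM_CORES;
                raise_core_irq(core);     /* dedicated line, no bus polling */
                return;
            }
        }
    }

    void sem_release(struct core_semaphore *s) {
        s->owner = -1;                    /* owner is also -1 at reset             */
        sem_schedule(s);                  /* hand the resource to the next waiter  */
    }
    ```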
  • Patent number: 10248589
    Abstract: An integrated circuit coupled to an external serial bus is presented. A method for prefetching data from an external serial bus is presented. The integrated circuit comprises a serial interface, a data cache, and a prefetch control unit. The serial interface detects a data address on the serial bus and reads data elements from data storage units. The data storage units may be internal or external to the integrated circuit. The data cache is coupled to the serial interface via an internal bus. The prefetch control unit instructs the serial interface to prefetch a data element associated with the data address by reading the data element from a target data storage unit associated with the data address. The data element and the data address are written to the data cache. When a read request is detected, the data element can be quickly accessed from the data cache.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: April 2, 2019
    Assignee: Dialog Semiconductor (UK) Limited
    Inventors: Olivier Girard, Joao Paulo Trierveiler Martins, Daniele Giorgetti, Philip Todd
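    Sketch: A small C illustration of serving a read from a prefetched data cache keyed by the detected data address; the table size, direct-mapped lookup, and storage_read stub are assumptions made only to keep the example self-contained.
    ```c
    /* Illustrative prefetch cache: the prefetch control unit fills an entry
     * with a data element and its address; later read requests hit the cache. */
    #include <stdbool.h>
    #include <stdint.h>

    #define CACHE_SLOTS 8

    struct cache_entry { bool valid; uint32_t addr; uint8_t data; };

    static struct cache_entry cache[CACHE_SLOTS];

    /* Placeholder for the slow read from the target data storage unit. */
    static uint8_t storage_read(uint32_t addr) { return (uint8_t)addr; }

    /* Prefetch: read the element at addr and store it with its address. */
    void prefetch(uint32_t addr) {
        struct cache_entry *e = &cache[addr % CACHE_SLOTS];
        e->data  = storage_read(addr);
        e->addr  = addr;
        e->valid = true;
    }

    /* Read request: serve from the cache when possible, else from storage. */
    uint8_t read_request(uint32_t addr) {
        struct cache_entry *e = &cache[addr % CACHE_SLOTS];
        if (e->valid && e->addr == addr)
            return e->data;           /* fast path: already prefetched */
        return storage_read(addr);    /* slow path: serial bus access  */
    }
    ```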
  • Patent number: 10241926
    Abstract: A computer-implemented method for migrating a buffer used for direct memory access (DMA) may include receiving a request to perform a DMA data transfer between a first partitionable endpoint and a buffer of a first memory in a system having two or more processor chips. Each processor chip may have an associated memory and one or more partitionable endpoints. The buffer from the first memory may be migrated to a second memory based on whether the first memory is local or remote to the first partitionable endpoint, and based on a DMA data transfer activity level. A memory is local to a partitionable endpoint when the memory and the partitionable endpoint are associated with a same processor chip. The DMA data transfer may then be performed.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: March 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mehulkumar J. Patel, Venkatesh Sainath
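    Sketch: A C sketch of the migration decision described above; the activity threshold and the single-field chip bookkeeping are assumptions used to make the locality test concrete.
    ```c
    /* Migrate the DMA buffer next to the endpoint's processor chip when the
     * buffer is remote and the transfer activity level justifies the move. */
    #include <stdbool.h>

    struct dma_buffer { int home_chip; };   /* chip whose memory holds the buffer */
    struct endpoint   { int chip; };        /* chip the endpoint is attached to   */

    #define ACTIVITY_THRESHOLD 1000         /* assumed transfers per interval */

    static bool is_local(const struct dma_buffer *b, const struct endpoint *ep) {
        return b->home_chip == ep->chip;    /* same processor chip => local */
    }

    /* True if the buffer should be migrated before performing the DMA. */
    bool should_migrate(const struct dma_buffer *b, const struct endpoint *ep,
                        int activity_level) {
        return !is_local(b, ep) && activity_level >= ACTIVITY_THRESHOLD;
    }

    static void migrate(struct dma_buffer *b, const struct endpoint *ep) {
        b->home_chip = ep->chip;            /* copy the buffer to local memory */
    }

    void dma_transfer(struct dma_buffer *b, const struct endpoint *ep, int activity) {
        if (should_migrate(b, ep, activity))
            migrate(b, ep);
        /* ... perform the DMA data transfer ... */
    }
    ```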
  • Patent number: 10237571
    Abstract: An apparatus includes a first circuit and a second circuit. The first circuit may be configured to (i) fetch reference samples from a memory to slots in a buffer, (ii) generate motion vectors by motion estimating inter-prediction candidates of a current picture relative to the reference samples in the buffer, (iii) snoop the fetches from the memory to determine if the reference samples fetched for a non-zero motion vector type of the inter-prediction candidates include the reference samples for a zero motion vector type of the inter-prediction candidates and (iv) avoid duplication of the fetches for the zero motion vector type of the inter-prediction candidates where the snoop determines that the reference samples have already been fetched. The second circuit may be configured to evaluate the reference samples in the buffer based on the motion vectors to select a prediction sample unit made of the reference samples.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: March 19, 2019
    Assignee: Ambarella, Inc.
    Inventors: Leslie D. Kohn, Peter Verplaetse
  • Patent number: 10229072
    Abstract: The disclosure is directed to a system and method of managing memory resources in a communication channel. According to various embodiments, incoming memory slices associated with a plurality of data sectors are de-interleaved and transferred sequentially through a buffer to a decoder for further processing. To prevent buffer overflow or degraded decoder performance, the memory availability of the buffer is monitored, and transfers are suspended when the memory availability of the buffer is below a threshold buffer availability.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: March 12, 2019
    Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
    Inventors: Ku Hong Jeong, Qi Zuo, Shaohua Yang, Kaitlyn T. Nguyen
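    Sketch: A C illustration of the buffer flow control described above: slices are accepted only while free space stays above a threshold. The structure fields and the caller protocol are assumptions.
    ```c
    #include <stdbool.h>
    #include <stddef.h>

    struct decode_buffer {
        size_t capacity;     /* total buffer size in bytes                  */
        size_t used;         /* bytes currently queued for the decoder      */
        size_t threshold;    /* minimum free space before transfers suspend */
    };

    static bool transfers_allowed(const struct decode_buffer *b) {
        return (b->capacity - b->used) >= b->threshold;
    }

    /* Returns true if the slice was accepted, false if transfers are suspended. */
    bool push_slice(struct decode_buffer *b, size_t slice_bytes) {
        if (!transfers_allowed(b) || b->used + slice_bytes > b->capacity)
            return false;             /* suspend to prevent buffer overflow   */
        b->used += slice_bytes;       /* hand the de-interleaved slice onward */
        return true;
    }

    void decoder_consumed(struct decode_buffer *b, size_t bytes) {
        b->used -= bytes;             /* space freed; transfers may resume */
    }
    ```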
  • Patent number: 10200472
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for improved coordination between sender and receiver nodes in a one-sided memory access to a PGAS in a distributed computing environment. The system may include a transceiver module configured to receive a message over a network, the message comprising a data portion and a data size indicator and an offset handler module configured to calculate a destination address from a base address of a memory buffer and an offset counter. The transceiver module may further be configured to write the data portion to the memory buffer at the destination address; and the offset handler module may further be configured to update the offset counter based on the data size indicator.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: February 5, 2019
    Assignee: Intel Corporation
    Inventors: Mario Flajslik, James Dinan
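    Sketch: A C sketch of the receiver-side offset handler described above: the destination address is the buffer base plus a running offset, and the offset counter is advanced by the message's data size indicator. Names and the error handling are assumptions, not Intel's implementation.
    ```c
    #include <stdint.h>
    #include <string.h>

    struct recv_buffer {
        uint8_t *base;        /* base address of the exposed memory buffer */
        size_t   offset;      /* running offset counter                    */
        size_t   capacity;
    };

    struct message {
        const uint8_t *data;  /* data portion        */
        size_t         size;  /* data size indicator */
    };

    /* One-sided delivery: the sender never names a destination address. */
    int deliver(struct recv_buffer *rb, const struct message *m) {
        if (rb->offset + m->size > rb->capacity)
            return -1;                          /* would overflow the buffer   */
        uint8_t *dest = rb->base + rb->offset;  /* destination = base + offset */
        memcpy(dest, m->data, m->size);         /* write the data portion      */
        rb->offset += m->size;                  /* update the offset counter   */
        return 0;
    }
    ```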
  • Patent number: 10185680
    Abstract: Examples are disclosed for establishing a secure destination address range responsive to initiation of a direct memory access (DMA) operation. The examples also include allowing decrypted content obtained as encrypted content from a source memory to be placed at a destination memory based on whether destination memory addresses for the destination memory fall within the secure destination address range.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: January 22, 2019
    Assignee: INTEL CORPORATION
    Inventors: Jayant Mangalampalli, Venkat R. Gokulrangan
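    Sketch: A C illustration of gating the placement of decrypted content on a secure destination address range; the half-open range representation and return codes are assumptions.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct secure_range { uintptr_t start; uintptr_t end; };  /* [start, end) */

    static bool within_range(const struct secure_range *r,
                             uintptr_t addr, size_t len) {
        return addr >= r->start && addr <= r->end && len <= r->end - addr;
    }

    /* Place decrypted content only if every destination byte is in range. */
    int place_decrypted(const struct secure_range *r,
                        void *dst, const void *plaintext, size_t len) {
        if (!within_range(r, (uintptr_t)dst, len))
            return -1;        /* destination falls outside the secure range */
        memcpy(dst, plaintext, len);
        return 0;
    }
    ```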
  • Patent number: 10185675
    Abstract: Peripheral devices may implement multiple reporting modes for signal interrupts to a host system. Different reporting modes may be determined for interrupts generated at a host system. Reporting modes may be programmatically configured for various operations at the peripheral device. Reporting modes may indicate a reporting technique for transmitting an indication of the interrupt and may indicate a priority assigned to reporting the interrupt. An interrupt controller for the peripheral device may report generated interrupts according to the reporting mode determined for the interrupts.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: January 22, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Kiran Kalkunte Seshadri, Thomas A. Volpe, Carlos Javier Cabral, Steven Scott Larson, Asif Khan
  • Patent number: 10185684
    Abstract: A system interconnect is provided which includes a first channel configured to transmit a plurality of control signals based on a first clock, and a second channel configured to transmit a plurality of data signals which correspond to the control signals based on a second clock. The first channel and the second channel allow a predetermined range of out-of-orderness, and the predetermined range of the out-of-orderness indicates that an order of the control signals is different from an order of the data signals which correspond to the control signals.
    Type: Grant
    Filed: February 9, 2015
    Date of Patent: January 22, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jun Hee Yoo, Jaegeun Yun, Bub-chul Jeong, Dongsoo Kang
  • Patent number: 10169256
    Abstract: A method includes receiving a plurality of requests to perform accesses for associated DMA channels and arbitrating the requests. The arbitration includes selectively granting a given request of the plurality of requests based at least in part on an associated fixed priority of the request and an associated priority weighting of the request. The priority weighting regulates which request or requests of the plurality of requests are considered at a given time.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: January 1, 2019
    Assignee: Silicon Laboratories Inc.
    Inventors: Timothy E. Litch, Paul Zucker, William G. Durbin
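    Sketch: One way, in C, to combine a fixed priority with a priority weighting as described above: a per-channel credit derived from the weight gates which requests are considered in the current round, and the fixed priority picks among those considered. The credit scheme is an assumption used only to make the idea concrete.
    ```c
    #include <stdbool.h>

    #define NUM_CHANNELS 8

    struct dma_request {
        bool pending;
        int  fixed_priority;   /* higher value wins among considered requests */
        int  weight;           /* credits given to the channel each round     */
        int  credits;          /* credits remaining in the current round      */
    };

    /* Returns the granted channel, or -1 if no request can be granted yet. */
    int arbitrate(struct dma_request req[NUM_CHANNELS]) {
        int winner = -1;
        for (int ch = 0; ch < NUM_CHANNELS; ch++) {
            if (!req[ch].pending || req[ch].credits == 0)
                continue;                      /* weighting gates consideration */
            if (winner < 0 ||
                req[ch].fixed_priority > req[winner].fixed_priority)
                winner = ch;                   /* fixed priority decides        */
        }
        if (winner >= 0) {
            req[winner].credits--;
            req[winner].pending = false;
        } else {
            for (int ch = 0; ch < NUM_CHANNELS; ch++)
                req[ch].credits = req[ch].weight;   /* start a new round */
        }
        return winner;
    }
    ```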
  • Patent number: 10162641
    Abstract: This invention implements a range of technologies in a single block. Each DSP CPU has a streaming engine. The streaming engines include: an SE-to-L2 interface that can request 512 bits/cycle from L2; a loose binding between the SE and L2 interface, to allow a single stream to peak at 1024 bits/cycle; one-way coherence where the SE sees all earlier writes cached in the system, but not writes that occur after the stream opens; and full protection against single-bit data errors within its internal storage via single-bit parity with semi-automatic restart on parity error.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: December 25, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Timothy D. Anderson, Joseph Zbiciak, Duc Quang Bui, Abhijeet A. Chachad, Kai Chirca, Naveen Bhoria, Matthew D. Pierson, Daniel Wu, Ramakrishnan Venkatasubramanian
  • Patent number: 10162557
    Abstract: Methods of accessing memory cells, methods of distributing memory requests, systems, and memory controllers are described. In one such method, where memory cells are divided into at least a first region of memory cells and a second region of memory cells, memory cells in the first region are accessed according to a first address definition and memory cells in the second region are accessed according to a second address definition that is different from the first address definition. Additional embodiments are described.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: December 25, 2018
    Assignee: Micron Technology, Inc.
    Inventor: Robert Walker
  • Patent number: 10162775
    Abstract: A system and method for cross-controller data storage operations comprises interconnecting a responding storage controller and an owning storage controller with a direct memory access (DMA) capable fabric, the responding storage controller and the owning storage controller each comprising an interface from a data bus connected to the DMA capable fabric, configuring and implementing a shared DMA address space in accordance with the DMA capable fabric, the shared DMA address space including memory on the responding storage controller and the owning storage controller, the shared DMA address space being one of a symmetric or asymmetric address space, and exposing one or more local buffers of the responding storage controller and one or more local buffers of the owning storage controller through the shared DMA address space.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: December 25, 2018
    Assignee: Futurewei Technologies, Inc.
    Inventors: Mark Kampe, Can Chen, Jinshui Liu, Wei Zhang
  • Patent number: 10127172
    Abstract: A system and method communicate with one of two or more secure digital input/output (SDIO) units such that only the SDIO unit being addressed responds. The SDIO unit has an SDIO clock input port, an SDIO data bus output port, and an SDIO bidirectional command port. Each SDIO unit has an address indicator associated with it. An SDIO unit will not respond to an SDIO command unless the SDIO unit address encoded in the command matches its address indicator.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: November 13, 2018
    Assignee: QUALCOMM TECHNOLOGIES INTERNATIONAL, LTD.
    Inventors: Victor Szeto, Steven McBirnie
  • Patent number: 10129808
    Abstract: To perform an inter-technology handoff, an indicator in a service request message is received by a mobile switching center (MSC). The indicator is to indicate to the MSC that an inter-technology handoff from a packet-data wireless access network to a circuit wireless access network has been requested. The behavior of the MSC is modified in response to the indicator to reduce the communication silence during the inter-technology handoff.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: November 13, 2018
    Assignee: Apple Inc.
    Inventors: Mark A. Stegall, Marvin Bienn, Jing Chen, Gary Stephens
  • Patent number: 10120709
    Abstract: A guest OS detects a DMA write request for a device assigned to the guest OS to perform a DMA write to a shared page of memory that has a write protection attribute to cause a protection page fault upon an attempt to write to the shared page of memory. The guest OS reads a portion of the shared page of memory from a location of that page, determines the value of the portion, and executes an atomic instruction that writes the value back to the location of the shared page of memory to trigger the page protection fault. Upon executing the atomic instruction, the guest OS sends the DMA write request to the device to cause the device to write to a writeable copy of the shared page of memory.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: November 6, 2018
    Assignee: Red Hat Israel, Ltd.
    Inventors: Michael Tsirkin, Andrea Arcangeli
  • Patent number: 10120820
    Abstract: A direct memory access (DMA) transmission control method and apparatus, where the method includes: selecting a target channel for a target DMA task according to the priority of that task when a DMA transmission request for transmitting the task's data is received; querying the task type and priority of another DMA task that already occupies the channel, and the task type of the target DMA task; comparing the task type and priority of the occupying DMA task with those of the target DMA task; and controlling data transmission on the DMA channel according to the comparison result. Hence, an urgent DMA task can be processed preferentially (a sketch of this comparison follows this entry).
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: November 6, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Hao Chen, Huifeng Xu, Haitao Guo
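    Sketch: A C rendering of the comparison step referenced above; the task-type enum and the exact preemption policy are illustrative assumptions.
    ```c
    enum task_type { TASK_NORMAL, TASK_URGENT };

    struct dma_task {
        enum task_type type;
        int priority;          /* larger value = more important (assumed) */
    };

    enum channel_action { WAIT_FOR_CHANNEL, PREEMPT_CHANNEL };

    /* Compare the task occupying the selected channel with the target task. */
    enum channel_action decide(const struct dma_task *occupying,
                               const struct dma_task *target) {
        if (target->type == TASK_URGENT && occupying->type != TASK_URGENT)
            return PREEMPT_CHANNEL;          /* urgent task is served first    */
        if (target->type == occupying->type &&
            target->priority > occupying->priority)
            return PREEMPT_CHANNEL;
        return WAIT_FOR_CHANNEL;             /* let the current transfer finish */
    }
    ```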
  • Patent number: 10108565
    Abstract: In an example, a method of fetching direct memory access (DMA) descriptors for commands to a non-volatile semiconductor storage device includes storing the commands among a plurality of queues in a command random access memory (RAM). The method further includes processing one or more of the commands from the plurality of queues and issuing requests to read from or write into the non-volatile semiconductor storage device according to the processing. The method further includes fetching DMA descriptors from the host system for the processed commands according to a real-time fetch quota. The method further includes pre-fetching DMA descriptors from the host system for queued commands that are not being processed according to a pre-fetch quota. The method further includes storing fetched and pre-fetched DMA descriptors in a descriptor RAM.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: October 23, 2018
    Assignee: Toshiba Memory Corporation
    Inventors: Sancar Kunt Olcay, Dishi Lai
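    Sketch: A C sketch of keeping separate real-time and pre-fetch quotas for DMA descriptor fetches, as described above; the counter-based quota and the caller protocol are assumptions.
    ```c
    #include <stdbool.h>

    struct quota { int in_flight; int limit; };

    static bool quota_take(struct quota *q) {
        if (q->in_flight >= q->limit)
            return false;
        q->in_flight++;
        return true;
    }

    struct descriptor_fetcher {
        struct quota realtime;   /* descriptors for commands being processed     */
        struct quota prefetch;   /* descriptors for queued, unprocessed commands */
    };

    /* True if a descriptor fetch for this command may be issued now. */
    bool may_fetch(struct descriptor_fetcher *f, bool command_is_active) {
        return command_is_active ? quota_take(&f->realtime)
                                 : quota_take(&f->prefetch);
    }

    void fetch_done(struct quota *q) { q->in_flight--; }
    ```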
  • Patent number: 10108566
    Abstract: A device for virtualizing a network interface includes a virtualization information unit configured to store virtual network interface card (NIC) information for implementation of a plurality of predetermined virtual NICs on the memory of the computer, and a controller configured to output the control signal for controlling the I/O buffer unit, the I/O unit, the DMA I/O unit, and the virtualization information unit based on the storage notification signal and the NIC virtualization information. Accordingly, multiple virtual NICs may be created using one physical NIC.
    Type: Grant
    Filed: December 30, 2014
    Date of Patent: October 23, 2018
    Assignee: GURUMNETWORKS, INC.
    Inventor: Sung Min Kim
  • Patent number: 10102161
    Abstract: A microcomputer includes: a central processing unit (CPU); a data transfer apparatus (DTC); and a storage apparatus (RAM). The data transfer apparatus includes a plurality of register files each including a mode register storing the transfer mode information, an address register to which the address information is transferred, and a status register (SR) representing information that specifies the transfer information set. The data transfer apparatus checks the information of the status register, to determine whether to use the transfer information set held in the register files or to read the transfer information set from the storage apparatus and to rewrite a prescribed one of the register files. The data transfer apparatus performs data transfer based on the transfer information set stored in one of the register files.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: October 16, 2018
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Naoki Mitsuishi, Seiji Ikari
  • Patent number: 10101977
    Abstract: A method and system provide a command processor for efficient processing of a program in a multi-processor core system with a CPU and a GPU. The multi-core system includes a general purpose CPU executing commands in a CPU programming language and a graphics processing unit (GPU) executing commands in a GPU programming language. A command processor is coupled to the CPU and the GPU. The command processor sequences jobs from a program for processing by the CPU or the GPU. The command processor creates commands from the jobs in a state-free command format. The command processor generates a sequence of commands for execution by either the CPU or the GPU in the command format. A compiler running a meta language converts program data for the commands into a first format readable by the CPU programming language and a second format readable by the GPU programming language.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: October 16, 2018
    Assignee: Oxide Interactive, LLC
    Inventor: Daniel K. Baker
  • Patent number: 10102382
    Abstract: An Initialization Unit (IU) initiates an initial secure connection with an Intrinsic Use Control (IUC) Chip based on very large random numbers (VLRNs). The IUC Chip in turn initiates a secondary secure connection between it and one or more Use Controlled Components (UCCs). Polling by the IU allows confirmation of an ongoing secure connection, and also allows the IUC Chip to confirm the secondary secure connection to the UCCs. Removal or improper polling response from one of the UCCs results in a response from the IUC Chip that may include notification of tampering, or temporary or permanent discontinued operation of the offending UCC. Permanent discontinued operation may include destruction of the offending UCC, and cascaded discontinued operation of all other UCCs secured by the IUC Chip. A UCC may in turn be another nested layer of IUC Chips, controlling a corresponding layer of UCCs, ad infinitum.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: October 16, 2018
    Assignee: Lawrence Livermore National Security, LLC
    Inventor: Mark Miles Hart
  • Patent number: 10094872
    Abstract: An apparatus is described for burn-in and/or functional testing of microelectronic circuits of unsingulated wafers. A large number of power, ground, and signal connections can be made to a large number of contacts on a wafer. The apparatus has a cartridge that allows for fanning-in of electric paths. A distribution board has a plurality of interfaces that are strategically positioned to provide a dense configuration. The interfaces are connected through flexible attachments to an array of first connector modules. Each one of the first connector modules can be independently connected to a respective one of a plurality of second connector modules, thereby reducing stresses on a frame of the apparatus. Further features include for example a piston that allows for tight control of forces exerted by terminals onto contacts of a wafer.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: October 9, 2018
    Assignee: AEHR TEST SYSTEMS
    Inventors: Donald P. Richmond, II, Kenneth W. Deboe, Frank O. Uher, Jovan Jovanovic, Scott E. Lindsey, Thomas T. Maenner, Patrick M. Shepherd, Jeffrey L. Tyson, Mark C. Carbone, Paul W. Burke, Doan D. Cao, James F. Tomic, Long V. Vu
  • Patent number: 10079044
    Abstract: A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: September 18, 2018
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Nuwan S. Jayasena, Gabriel H. Loh, Bradford M. Beckmann, James M. O'Connor, Lisa R. Hsu
  • Patent number: 10078606
    Abstract: A multiprocessor architecture utilizing direct memory access (DMA) processors that execute programmed code to feed data to one or more processor cores in advance of those cores requesting data. Stalls of the processor cores are minimized by continually feeding new data directly into the data registers within the cores. When different data is needed, the processor cores can redirect a DMA processor to execute a different feeder program, or to jump to a different point in the feeder program it is already executing. The DMA processors can also feed executable instructions into the instruction pipelines of the processor cores, allowing the feeder program to orchestrate overall processor operations.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: September 18, 2018
    Assignee: KnuEdge, Inc.
    Inventors: Douglas A. Palmer, Jerome Vincent Coffin, Andrew Jonathan White, Ramon Zuniga
  • Patent number: 10078551
    Abstract: This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence including plural nested loops is specified by a control register. The streaming engine includes an address generator producing addresses of data elements and a stream head register storing data elements next to be supplied as operands. The streaming engine fetches stream data ahead of use by the central processing unit core into a stream buffer. Parity bits are formed when data is stored in the stream buffer and are stored with the corresponding data. Upon transfer to the stream head register a second parity is calculated and compared with the stored parity. The streaming engine signals a parity fault if the parities do not match. The streaming engine preferably restarts fetching the data stream at the data element generating a parity fault.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: September 18, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Joseph Zbiciak, Timothy Anderson
  • Patent number: 10074154
    Abstract: A display controller comprises a plurality of channels for fetching data from a memory, a plurality of buffers coupled to the channels for receiving the fetched data from the channels, a buffer controller for controlling the buffers and the channels, and a processing unit coupled to the buffers, the display, and the buffer controller for receiving the data from the buffers, outputting a control signal to the display based on the received data, and controlling the buffer controller, respectively. Each buffer has a respective fixed memory capacity for storing the fetched data. The processing unit activates layers in the output image for displaying an output image on the display. The channels correspond to associated layers. The buffer controller adds, to the respective fixed memory capacity of a particular buffer associated with an activated layer, one further fixed memory capacity of at least one further buffer associated with an inactive layer.
    Type: Grant
    Filed: May 12, 2015
    Date of Patent: September 11, 2018
    Assignee: NXP USA, Inc.
    Inventors: Vincent Aubineau, Eric Eugene Bernard Depons, Michael Andreas Staudenmaier
  • Patent number: 10061675
    Abstract: This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence is specified by a control register. The streaming engine fetches stream data ahead of use by a central processing unit and stores it in a stream buffer. Upon occurrence of a fault reading data from memory, the streaming engine identifies the data element triggering the fault preferably storing this address in a fault address register. The streaming engine defers signaling the fault to the central processing unit until this data element is used as an operand. If the data element is never used by the central processing unit, the streaming engine never signals the fault. The streaming engine preferably stores data identifying the fault in a fault source register. The fault address register and the fault source register are preferably extended control registers accessible only via a debugger.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: August 28, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Joseph Zbiciak, Timothy D. Anderson, Duc Bui, Kai Chirca
  • Patent number: 10061712
    Abstract: In some embodiments, a memory overlay system comprises a translation lookaside buffer (TLB) that includes an entry that specifies a virtual address range that is a subset of a virtual address range specified by another entry. In response to an indication from the TLB that both of the entries are TLB hits for the same memory operation, a selection circuit is configured to select, based on one or more selection criteria, one of the two entries. The selection circuit may then cause the selected TLB entry including the corresponding physical address information and memory attributes to be provided to a memory interface.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: August 28, 2018
    Assignee: Oracle International Corporation
    Inventors: John R. Rose, Patrick McGehearty
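    Sketch: A C model of choosing between two overlapping TLB hits; "smallest covering range wins" is one plausible selection criterion, used here for illustration, and the entry layout is an assumption.
    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct tlb_entry {
        bool     valid;
        uint64_t va_start, va_end;   /* virtual range [va_start, va_end)   */
        uint64_t pa_base;            /* physical translation for the range */
        unsigned attrs;              /* memory attributes                  */
    };

    #define TLB_ENTRIES 16

    /* Returns the selected entry, or NULL on a TLB miss. */
    const struct tlb_entry *tlb_lookup(const struct tlb_entry tlb[TLB_ENTRIES],
                                       uint64_t va) {
        const struct tlb_entry *best = NULL;
        for (int i = 0; i < TLB_ENTRIES; i++) {
            const struct tlb_entry *e = &tlb[i];
            if (!e->valid || va < e->va_start || va >= e->va_end)
                continue;                                /* not a hit         */
            if (!best || (e->va_end - e->va_start) <
                         (best->va_end - best->va_start))
                best = e;                                /* prefer the subset */
        }
        return best;
    }
    ```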
  • Patent number: 10031871
    Abstract: A direct memory access (DMA) control device including: a basic-function setting register used to perform the DMA operation; and a scatter-gather setting register that holds a value indicating that a task is executed by directly defining the value of the data to be written to the basic-function setting register, without reading that data from a memory through a bus (a sketch of this distinction follows this entry).
    Type: Grant
    Filed: December 4, 2015
    Date of Patent: July 24, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Kentaro Kawakami
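    Sketch: A C reading of the distinction drawn above: a scatter-gather element either embeds the value to be written to the basic-function setting register or points at it in memory, and a flag selects the case. Field names and the flag encoding are assumptions.
    ```c
    #include <stdint.h>

    struct sg_element {
        uint32_t        flags;      /* bit 0: value is embedded, not in memory */
        uint32_t        immediate;  /* directly defined value                  */
        const uint32_t *mem_ptr;    /* location to read when not immediate     */
    };

    #define SG_IMMEDIATE 0x1u

    /* Stand-in for the basic-function setting register of the DMA engine. */
    static volatile uint32_t basic_function_reg;

    void apply_sg_element(const struct sg_element *e) {
        if (e->flags & SG_IMMEDIATE)
            basic_function_reg = e->immediate;   /* no bus read of the data  */
        else
            basic_function_reg = *e->mem_ptr;    /* classic descriptor fetch */
    }
    ```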
  • Patent number: 10019288
    Abstract: A hypervisor hosted by a computing system performs a method to allocate a contiguous physical memory space to a device. A given region of physical memory is marked as migratable. From an operating system (OS) kernel, the hypervisor receives a request for memory allocation to the device, the request indicating a first set of available virtualized pages in a virtualized memory. In response to the request, the hypervisor identifies a set of contiguous frames in the given region to be allocated to the device. The set of contiguous frames are mapped to a second set of virtualized pages. The hypervisor disables the mapping for the first set of available virtualized pages and the second set of virtualized pages. Then one or more occupied frames in the set of contiguous frames are migrated out of the given region to allow for allocation of the set of contiguous frames to the device.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: July 10, 2018
    Assignee: MediaTek, Inc.
    Inventors: Ching-Fu Kung, Sheng-Yu Chiu
  • Patent number: 10013388
    Abstract: Provided are systems, methods, and computer-program products for enabling peer-to-peer communications between peripheral devices in a computing system. In various implementations, a host device in the computing system can read an address from a peripheral device included in the computing system. The host device can further configure an emulated peripheral device corresponding to the peripheral device, including writing the address to an emulated register of the emulated peripheral device. The host device can further initiate a virtual machine, including reading the address from the emulated register, initializing a page table for the virtual machine, and initiating a guest operating system. The guest operating system can be operable to use the address to access the physical device.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: July 3, 2018
    Assignee: Amazon Technologies, Inc.
    Inventor: Wei Wang
  • Patent number: 10013372
    Abstract: An input/output apparatus according to the present invention has an indication unit and an execution unit. The indication unit indicates that each of a plurality of data blocks between a main memory and a buffer memory is to be transferred. The execution unit transfers one data block relating to a transfer indication sent from the indication unit. After the completion of the transfer, in order to send completion information relating to the one data block to the indication unit, the execution unit determines whether transfers of all of the plurality of data blocks are completed or not based on management information for managing progresses of the transfers of the plurality of data blocks. Once determining that all of the transfers are completed, the execution unit sends, to the indication unit, total completion information showing that all of the transfers are completed.
    Type: Grant
    Filed: April 9, 2014
    Date of Patent: July 3, 2018
    Assignee: HITACHI, LTD.
    Inventors: Takafumi Maruyama, Megumu Hasegawa
  • Patent number: 10013373
    Abstract: In an embodiment of the invention, a method for using a two-level linked-list descriptor mechanism to pass information among flash, memory, and IO controller modules is presented. The method includes creating a first level data structure for one or more first level descriptors; creating a second level data structure for one or more second level descriptors, each second level descriptor having a pointer to tracking information that includes start information, running information, and rewind information for a data DMA; using the one or more second level descriptors, the one or more first level descriptors, and the tracking information for a data DMA; updating the tracking information during the data DMA; and updating the tracking information at the end of the data DMA (a sketch of the descriptor layout follows this entry).
    Type: Grant
    Filed: November 6, 2016
    Date of Patent: July 3, 2018
    Assignee: BiTMICRO Networks, Inc.
    Inventors: Ricardo H. Bruce, Bernard Sherwin Leung Chiw, Margaret Anne Nadonga Somera
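    Sketch: A C layout for the two-level descriptor chain mentioned above, with each second-level descriptor pointing at start/running/rewind tracking state; the member names and the update rules are guesses made for illustration only.
    ```c
    #include <stdint.h>

    struct tracking_info {
        uint64_t start_addr;     /* where the data DMA began                     */
        uint64_t running_addr;   /* current position, updated during the DMA     */
        uint64_t rewind_addr;    /* position to return to if the DMA must rewind */
    };

    struct second_level_desc {
        struct tracking_info     *track;    /* pointer to tracking information     */
        uint32_t                  length;   /* bytes covered by this descriptor    */
        struct second_level_desc *next;     /* linked list of second-level entries */
    };

    struct first_level_desc {
        struct second_level_desc *children; /* second-level list for this entry   */
        struct first_level_desc  *next;     /* linked list of first-level entries */
    };

    /* Advance the tracking state as a chunk of the data DMA completes. */
    void dma_progress(struct second_level_desc *d, uint32_t bytes_done) {
        d->track->running_addr += bytes_done;
    }

    /* Final bookkeeping at the end of the data DMA. */
    void dma_finish(struct second_level_desc *d) {
        d->track->rewind_addr = d->track->running_addr;
    }
    ```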
  • Patent number: 10007545
    Abstract: A method, system and computer program product are provided for implementing dynamic altering of a Single Root Input/Output Virtualization (SRIOV) virtual function (VF) resources including direct memory access (DMA) windows without bringing down the VF in a virtualized system. A request to alter VF resources is received, such as a dynamic request based on usage statistics or change in need of the user. Pending DMA requests are completed for the VF resources to be altered. Responsive to completing the DMA requests, new buffers are allocated for the resized DMA windows without bringing down the VF in a virtualized system.
    Type: Grant
    Filed: March 15, 2016
    Date of Patent: June 26, 2018
    Assignee: International Business Machines Corporation
    Inventors: Charles S. Graham, Rama K. Hazari, Sakethan R. Kotta, Kumaraswamy Sripathy, Nuthula Venkatesh
  • Patent number: 9996262
    Abstract: Systems and methods are disclosed to abort a command at a data storage controller, in accordance with certain embodiments of the present disclosure. In some embodiments, an apparatus may comprise a data storage controller configured to receive an abort indicator from a host device, generate an abort tracking indicator at a receiving unit configured to receive commands from the host device, monitor to determine when the selected command is received at the receiving unit based on the abort tracking indicator, and abort the selected command when the selected command is received at the receiving unit. In some embodiments, the data storage controller may generate an abort tracking indicator at a completion unit configured to notify the host device of completed commands, and monitor for the selected command at the completion unit based on the abort tracking indicator.
    Type: Grant
    Filed: November 9, 2015
    Date of Patent: June 12, 2018
    Assignee: Seagate Technology LLC
    Inventors: Shashank Nemawarkar, Chris Randall Stone, Balakrishnan Sundararaman
  • Patent number: 9977619
    Abstract: A computer system processes instructions including an instruction code, source type, source address, destination type, and destination address. The source and destination type may indicate a memory device in which case data is read from the memory device at the source address and written to the destination address. One or both of the source type and destination type may include a transfer descriptor flag, in which case a transfer descriptor identified by the source or destination address is executed. A transfer descriptor referenced by a source address may be executed to obtain an intermediate result that is used for performing the operation indicated by the instruction code. The transfer descriptor referenced by a destination address may be executed to determine a location at which the result of the operation will be stored.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: May 22, 2018
    Assignee: Vivante Corporation
    Inventor: Mankit Lo
  • Patent number: 9977754
    Abstract: An electronic system includes: an integrated circuit including: an internal data path, configured to drive a functional output, a universal streaming and logging interface, coupled to the internal data path, to generate a trace data bus, and a direct memory access (DMA) controller, coupled to the universal streaming and logging interface, to manage the storage of the trace data bus; a support circuit, coupled to the integrated circuit, configured to receive the trace data bus; and a support processor chip, coupled to the support circuit, configured to analyze the trace data bus for identifying a failure mode of the integrated circuit.
    Type: Grant
    Filed: September 4, 2014
    Date of Patent: May 22, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Jinsoo Kim
  • Patent number: 9965416
    Abstract: A digital signal processor (DSP) includes a CPU, and a DMA controller. The DMA controller transfers data from a source to a destination as a function of an initialization command from the CPU. The DMA controller has a logic unit that performs filter operations and other arithmetic operations on-the-fly on a data stream transferred therethrough. The filter operations include multiplication by filter coefficients and addition, without processing by the CPU. The DMA controller may have subsets of hardware configurations that can perform different operations that are selectable as a function of the initialization command.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: May 8, 2018
    Assignee: NXP USA, INC.
    Inventors: Michael Galda, Wangsheng Mei, Martin Mienkina
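    Sketch: A C stand-in for the on-the-fly filtering described above: as samples are moved from source to destination, the transfer loop multiplies them by filter coefficients and accumulates, so the CPU never touches the stream. A plain four-tap FIR filter is an assumption used to represent the selectable logic-unit operations.
    ```c
    #include <stddef.h>
    #include <stdint.h>

    #define TAPS 4

    /* Transfer n samples from src to dst, filtering during the move. */
    void dma_fir_transfer(const int16_t *src, int32_t *dst, size_t n,
                          const int16_t coeff[TAPS]) {
        for (size_t i = 0; i < n; i++) {
            int32_t acc = 0;
            for (size_t t = 0; t < TAPS && t <= i; t++)
                acc += (int32_t)src[i - t] * coeff[t];   /* multiply and add */
            dst[i] = acc;       /* destination receives the filtered stream  */
        }
    }
    ```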
  • Patent number: 9959227
    Abstract: Apparatus and methods are disclosed herein for reducing I/O latency when accessing data using a direct memory access (DMA) engine with a parser. A DMA descriptor indicating memory buffer location can be stored in cache. A DMA descriptor read command is generated and can include a prefetch command. A descriptor with the indicator can be communicated to the DMA engine in response to the read. A second parser can detect the descriptor communication, parse the descriptor, and can prefetch data from memory to cache while the descriptor is being communicated to the DMA engine and/or parsed by the DMA engine parser. When the DMA engine parses the descriptor, data can be accessed from cache rather than memory, to decrease latency.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: May 1, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Ron Diamant, Georgy Machulsky, Adi Habusha
  • Patent number: 9952991
    Abstract: In an embodiment of the invention, a method comprises: fetching a first set of descriptors from a memory device and writing the first set of descriptors to a buffer; retrieving the first set of descriptors from the buffer and processing the first set of descriptors to permit a Direct Memory Access (DMA) operation; and if space is available in the buffer, fetching a second set of descriptors from the memory device and writing the second set of descriptors to the buffer during or after the processing of the first set of descriptors.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: April 24, 2018
    Assignee: BiTMICRO Networks, Inc.
    Inventors: Ricardo H. Bruce, Marlon B. Verdan, Rowenah Michelle Jago-on
  • Patent number: 9952980
    Abstract: Systems and methods for deferring registration for Direct Memory Access (DMA) operations. An example method may comprise: receiving a memory region registration request identifying a memory region for a direct memory access (DMA) operation; generating a local key for the memory region; receiving a DMA work request referencing the local key; and responsive to determining that an amount of pinned memory is below a threshold, registering the memory region for DMA transfer.
    Type: Grant
    Filed: May 18, 2015
    Date of Patent: April 24, 2018
    Assignee: Red Hat Israel, Ltd.
    Inventors: Michael Tsirkin, Marcel Apfelbaum
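    Sketch: A C outline of the deferral described above: a registration request only produces a local key, and the region is actually pinned and registered when a DMA work request arrives while total pinned memory is still under a threshold. The bookkeeping, key derivation, and limit are assumptions.
    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct region { void *addr; size_t len; bool registered; };

    static size_t pinned_bytes;                 /* total currently pinned    */
    #define PINNED_LIMIT (64u * 1024 * 1024)    /* assumed threshold: 64 MiB */

    /* Step 1: cheap local-key generation, no pinning yet. */
    uint32_t register_request(struct region *r) {
        r->registered = false;
        return (uint32_t)(uintptr_t)r->addr;    /* stand-in for a local key  */
    }

    /* Step 2: on a DMA work request, register only if under the threshold. */
    bool dma_work_request(struct region *r) {
        if (!r->registered && pinned_bytes + r->len <= PINNED_LIMIT) {
            pinned_bytes += r->len;             /* pin and register the region */
            r->registered = true;
        }
        return r->registered;                   /* false => software fallback  */
    }
    ```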
  • Patent number: 9946561
    Abstract: A method including mapping a first portion of a virtual memory containing code of an operating system for access by a processor; receiving a call for an entry point of the operating system; and mapping, after receiving the call, a second portion of the virtual memory containing data for executing entry point code associated with the entry point for access by the processor. The processor executing the operating system code is permitted to access only data from the first and second portions of the virtual memory.
    Type: Grant
    Filed: November 18, 2014
    Date of Patent: April 17, 2018
    Assignee: WIND RIVER SYSTEMS, INC.
    Inventors: Thierry Preyssler, Mati Sauks
  • Patent number: 9946659
    Abstract: Embodiments include a near-memory acceleration method for offloading data traversal operations from a processing element. The method is implemented at a near-memory accelerator configured to interact with each of the processing element and a memory used by the processing element. The accelerator performs the data traversal operations to chase pointers, in order to identify a pointer to data to be processed by the processing element. The data traversal operations are performed based on indications from the processing element. In addition, data needed to perform the data traversal operations are fetched by the near-memory accelerator, from the memory. The present invention is further directed to a near-memory accelerator and a computerized system comprising such an accelerator, as well as a computer program product.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: April 17, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Gero Dittmann
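    Sketch: A C picture of the offloaded traversal described above: the accelerator chases a chain of pointers on the processor's behalf and hands back the pointer to the data to be processed. The node layout and hop-count indication are assumptions.
    ```c
    #include <stddef.h>

    struct node {
        struct node *next;     /* pointer to chase                     */
        void        *payload;  /* data eventually processed by the CPU */
    };

    /* Chase up to max_hops links; the accelerator performs these memory
     * accesses itself, so the processing element does not stall on them. */
    void *chase_pointers(const struct node *head, size_t max_hops) {
        if (head == NULL)
            return NULL;
        const struct node *cur = head;
        for (size_t hop = 0; hop < max_hops && cur->next != NULL; hop++)
            cur = cur->next;
        return cur->payload;   /* handed back to the processing element */
    }
    ```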
  • Patent number: 9939882
    Abstract: Techniques to control power and processing among a plurality of asymmetric processing elements are disclosed. In one embodiment, one or more asymmetric processing elements are power managed to migrate processes or threads among a plurality of processing elements according to the performance and power needs of the system.
    Type: Grant
    Filed: July 31, 2013
    Date of Patent: April 10, 2018
    Assignee: Intel Corporation
    Inventors: Herbert Hum, Eric Sprangle, Doug Carmean, Rajesh Kumar
  • Patent number: 9934160
    Abstract: The invention provides data flow communication control between the source (flash/IO) and destination (IO/flash) cores. The source and destination cores are started simultaneously instead of serially and get instructions from the descriptors provided and set up by the processor. Each source and destination core's descriptors are correlated or tied with each other by the processor, which provides information to the hardware assist mechanism. The hardware assist mechanism is responsible for moderating the data transfer from source to destination. The flow tracker guarantees that data needed by the destination exists. By applying the invention to the prior approach, the data latency between the flash and IO bus is reduced, and processor interrupts are minimized while data transfer between the flash and IO bus is ongoing.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: April 3, 2018
    Assignee: BiTMICRO LLC
    Inventors: Cyrill C. Ponce, Marizonne Operio Fuentes, Gianico Geonzon Noble
  • Patent number: 9916268
    Abstract: A data processing apparatus includes a number of processor cores, a shared processor cache, a bus unit and a bus controller. The shared processor cache is connected to each of the processor cores and to a main memory. The bus unit is connected to the shared processor cache by a bus controller for transferring data to/from an I/O device. In order to achieve further improvements to the data transfer rate between the processor cache and I/O devices, the bus controller is configured, in response to receiving a descriptor from a processor core, to perform a direct memory access to the shared processor cache for transferring data according to the descriptor from the shared processor cache to the I/O device via the bus unit.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: March 13, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Norbert Hagspiel, Sascha Junghans, Matthias Klein, Joerg Walter
  • Patent number: 9910798
    Abstract: Methods and structure for managing cache memory for a storage controller. One exemplary embodiment is a Redundant Array of Independent Disks (RAID) storage controller. The storage controller includes an interface operable to receive Input/Output (I/O) requests from a host, a Direct Memory Access (DMA) module, a memory comprising cache data for a logical volume, and a control unit. The control unit is able to generate Scatter Gather Lists (SGLs) that indicate the location of cache data for incoming read requests. Each SGL is stored in the memory, and at least one SGL points to cache data that is no longer indexed by the cache. The control unit is also able to service an incoming read request based on the SGL, by directing the DMA module to transfer the cache data that is no longer indexed to the host.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: March 6, 2018
    Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
    Inventors: Horia Cristian Simionescu, Timothy E. Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan