Command Chaining Patents (Class 710/24)
-
Patent number: 7617334
Abstract: In the host, an IP issues CCW, and a CH encodes the CCW and a CCW chain by the encode program to create a code including the description of controlling a conditional branch with the DKC and transmits the code to a PORT in the DKC. In the DKC, the PORT decodes the code by the decode program, and a CP sequentially processes each command obtained by the decoding and returns a return code representing the end state of the processing. The host receives the return code to recognize the end state of the processing.
Type: Grant
Filed: June 20, 2008
Date of Patent: November 10, 2009
Assignee: Hitachi, Ltd.
Inventors: Junichi Muto, Isamu Kurokawa, Shinichi Hiramatsu, Takuya Ichikawa
-
Patent number: 7606961
Abstract: A computer system according to an example of the invention comprises SPEs and a global memory. The SPEs include a running SPE and an idling SPE. The running SPE and the idling SPE each have a processor core, a local memory and a DMA module. The local memory of the idling SPE stores data stored in the global memory and used by the processor core of the running SPE, before the data is used by the processor core of the running SPE. The DMA module of the running SPE reads the data from the local memory of the idling SPE, and transfers the data to the processor core of the running SPE.
Type: Grant
Filed: June 28, 2006
Date of Patent: October 20, 2009
Assignee: Kabushiki Kaisha Toshiba
Inventor: Hidenori Matsuzaki
-
Patent number: 7603490
Abstract: A direct memory access (DMA) device includes a barrier and interrupt mechanism that allows interrupt and mailbox operations to occur in such a way that ensures correct operation, but still allows for high performance out-of-order data moves to occur whenever possible. Certain descriptors are defined to be "barrier descriptors." When the DMA device encounters a barrier descriptor, it ensures that all of the previous descriptors complete before the barrier descriptor completes. The DMA device further ensures that any interrupt generated by a barrier descriptor will not assert until the data move associated with the barrier descriptor completes. The DMA controller only permits interrupts to be generated by barrier descriptors. The barrier descriptor concept also allows software to embed mailbox completion messages into the scatter/gather linked list of descriptors.
Type: Grant
Filed: January 10, 2007
Date of Patent: October 13, 2009
Assignee: International Business Machines Corporation
Inventors: Giora Biran, Luis E. De la Torre, Bernard C. Drerup, Jyoti Gupta, Richard Nicholas
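The ordering rule described in this abstract — ordinary descriptors may complete out of order, while a barrier descriptor acts both as a completion fence and as the only source of interrupts — can be sketched roughly as follows. This is an illustrative simulation; all names (`Descriptor`, `process`) are invented, not taken from the patent.

```python
# Sketch of the barrier-descriptor rule: ordinary descriptors complete in any
# hardware-chosen order, but a barrier descriptor completes (and raises its
# interrupt) only after every earlier descriptor in the list has completed.
from dataclasses import dataclass

@dataclass
class Descriptor:
    name: str
    barrier: bool = False          # only barrier descriptors may interrupt
    done: bool = False

def process(descriptors, completion_order):
    """Complete descriptors in the hardware-chosen completion_order, holding
    back any barrier until all earlier descriptors are done.
    Returns the list of interrupts actually raised."""
    interrupts = []
    pending = list(completion_order)
    while pending:
        for i, idx in enumerate(pending):
            d = descriptors[idx]
            # a barrier must wait for all earlier descriptors to finish
            if d.barrier and not all(e.done for e in descriptors[:idx]):
                continue
            d.done = True
            if d.barrier:
                interrupts.append(d.name)   # interrupts only from barriers
            pending.pop(i)
            break
    return interrupts
```

For example, with descriptors `a`, `b` and a barrier `mail`, asking to complete the barrier first simply defers it until `a` and `b` are done, and only the barrier produces an interrupt.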
-
Patent number: 7577774
Abstract: The present invention provides for independent source-read and destination-write functionality for Enhanced Direct Memory Access (EDMA). Allowing source read and destination write pipelines to operate independently makes it possible for the source pipeline to issue multiple read requests and stay ahead of the destination write for fully pipelined operation. As a result, fully pipelined operation can be achieved, with full utilization of the DMA bandwidth and maximum throughput performance.
Type: Grant
Filed: May 13, 2005
Date of Patent: August 18, 2009
Assignee: Texas Instruments Incorporated
Inventors: Sanjive Agarwala, Kyle Castille, Quang-Dieu An, Hung Ong
-
Patent number: 7571284
Abstract: A method and apparatus for implementing out-of-order memory transactions in a multithreaded, multicore processor. In the present invention, a circular queue comprising a plurality of queue buffers is used to store load data returned by a memory unit in response to a request issued by a processing module, such as a stream processing unit, in a processing core. As requests are issued, a destination queue buffer ID tag is transmitted as part of the request. When the request is returned, that destination number is reflected back and is used to control which queue within the circular queue will be used to store the returned load data. Separate pointers are used to indicate the order of the queues to be read and the order of the queues to be written. The method and apparatus implemented by the present invention allows out-of-order data to be processed efficiently, thereby improving the performance of a fine grain multithreaded, multi-core processor.
Type: Grant
Filed: June 30, 2004
Date of Patent: August 4, 2009
Assignee: Sun Microsystems, Inc.
Inventors: Christopher H. Olson, Manish Shah
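The tagging scheme above can be illustrated with a small simulation, assuming invented names throughout: each issued load carries the ID of the queue buffer that will hold its data, so responses returning out of order still land in the right slot, while separate read and write pointers preserve issue order on the consume side.

```python
# Sketch of a tagged circular load queue: the destination buffer ID travels
# with the request and is reflected back with the data, so out-of-order
# returns are stored correctly; data is consumed in issue order.
class CircularLoadQueue:
    def __init__(self, nbufs):
        self.bufs = [None] * nbufs
        self.valid = [False] * nbufs
        self.read_ptr = 0          # order in which data is consumed
        self.write_ptr = 0         # order in which requests are issued

    def issue(self):
        """Issue a load; the returned tag is sent with the memory request."""
        tag = self.write_ptr
        self.write_ptr = (self.write_ptr + 1) % len(self.bufs)
        return tag

    def fill(self, tag, data):
        """Memory returns data; the reflected tag selects the buffer."""
        self.bufs[tag] = data
        self.valid[tag] = True

    def consume(self):
        """Consume in issue order; None if the next slot isn't filled yet."""
        if not self.valid[self.read_ptr]:
            return None
        data = self.bufs[self.read_ptr]
        self.valid[self.read_ptr] = False
        self.read_ptr = (self.read_ptr + 1) % len(self.bufs)
        return data
```

Even if the second load's data returns first, `consume()` yields the loads in the order they were issued.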
-
Patent number: 7568055
Abstract: The image processing apparatus (data processing apparatus) stores data in a storing unit (storing means), inputs and outputs the data to and from the storing unit via a storage control unit (input-output means) and processes the data outputted from the storing unit with a control unit (processing means). The storage control unit inputs and outputs image data to and from the storing unit by the DMA method through a path via a DMA control unit and inputs and outputs other data such as a control instruction to and from the storing unit by the PIO method through a path via a PIO control unit. Image data to be inputted and outputted to and from the storing unit by the DMA method is encrypted in an input operation and decrypted in an output operation by an encryption/decryption unit provided on the input-output path for the DMA method.
Type: Grant
Filed: April 21, 2005
Date of Patent: July 28, 2009
Assignee: Sharp Kabushiki Kaisha
Inventors: Yoshiyuki Nakai, Koichi Sumida, Takao Yamanouchi, Yohichi Shimazawa
-
Patent number: 7565462
Abstract: A direct memory access system utilizing a local memory that stores a plurality of DMA command lists, each comprising at least one DMA command. A command queue can hold a plurality of entries, each entry comprising a pointer field and a sequence field. The pointer field points to one of the DMA command lists. The sequence field holds a sequence value. A DMA engine accesses an entry in the command queue and then accesses the DMA commands of the DMA command list pointed to by the pointer field of the accessed entry. The DMA engine performs the DMA operations specified by the accessed DMA commands. The DMA engine makes available the sequence value held in the sequence field of the accessed entry when all of the DMA commands in the accessed command list have been performed. In one embodiment, the command queue is part of the DMA engine.
Type: Grant
Filed: November 27, 2007
Date of Patent: July 21, 2009
Assignee: Broadcom Corporation
Inventor: Alexander G. MacInnis
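The pointer/sequence arrangement above can be sketched in a few lines, with invented names: each queue entry points at a command list and carries a sequence value that is published only once every command in that list has been performed, giving software a cheap completion signal per list.

```python
# Sketch of a command queue whose entries pair a pointer to a DMA command
# list with a sequence value published after the whole list completes.
def run_dma(command_lists, queue):
    """command_lists: dict mapping list name -> list of DMA commands.
    queue: list of (list_name, sequence_value) entries.
    Executes each pointed-to command list, then publishes its sequence value."""
    executed = []
    published = []
    for list_name, seq in queue:
        for cmd in command_lists[list_name]:   # perform each DMA command
            executed.append(cmd)
        published.append(seq)                  # list fully done -> publish
    return executed, published
```

Software polling the published sequence values can tell exactly which command lists have fully completed without tracking individual commands.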
-
Patent number: 7548996
Abstract: In an information processing system which has a plurality of modules including a processor, a main memory and a plurality of I/O devices, a data transfer switch for performing data transfer operations between the processor, main memory and I/O devices comprises a request bus which has a request bus arbiter for receiving read and write requests from each one of the plurality of modules. A processor memory bus is configured to receive address and data information from a predetermined number of modules, including the processor. The processor memory bus has a data bus arbiter for receiving data read and write requests from each one of the predetermined number of modules which are coupled to the processor memory bus. An internal memory bus is configured to receive address and data information from a predetermined number of modules, including the memory and the I/O devices.
Type: Grant
Filed: September 12, 2005
Date of Patent: June 16, 2009
Assignees: Hitachi, Ltd., Equator Technologies, Inc.
Inventors: David Baker, Christopher Basoglu, Benjamin Cutler, Gregorio Gervasio, Woobin Lee, Yatin Mundkur, Toru Nojiri, John O'Donnell, John Poole, legal representative, Ashok Raman, Eric Rehm, Radhika Thekkath, David Poole
-
Patent number: 7546393
Abstract: The present invention provides for a system comprising a DMA queue configured to receive a DMA command comprising a tag, wherein the tag belongs to one of a plurality of tag groups. A counter couples to the DMA queue and is configured to increment a tag group count of the tag group to which the tag belongs upon receipt of the DMA command by the DMA queue and to decrement the tag group count upon execution of the DMA command. A tag group count status register couples to the counter and is configured to store the tag group count for each of the plurality of tag groups. And the tag group count status register is further configured to receive a request for a tag group status and to respond to the request for the tag group status.
Type: Grant
Filed: April 2, 2007
Date of Patent: June 9, 2009
Assignee: International Business Machines Corporation
Inventors: Michael Norman Day, Harm Peter Hofstee, Charles Ray Johns, Peichum Peter Liu, Thuong Quang Truong, Takeshi Yamazaki
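The counting behavior described here is simple enough to sketch directly, with invented class and method names: a per-group counter increments when a tagged command is enqueued, decrements when it executes, and can be queried for how many commands of a group remain outstanding.

```python
# Sketch of per-tag-group counting: increment on enqueue, decrement on
# execution, query for outstanding count.
from collections import Counter

class TagGroupStatus:
    def __init__(self):
        self.counts = Counter()      # tag group -> outstanding commands

    def enqueue(self, tag_group):
        """A DMA command tagged with tag_group arrives in the DMA queue."""
        self.counts[tag_group] += 1

    def execute(self, tag_group):
        """A DMA command tagged with tag_group finishes executing."""
        assert self.counts[tag_group] > 0, "no outstanding command in group"
        self.counts[tag_group] -= 1

    def status(self, tag_group):
        """Respond to a request for a tag group's outstanding count."""
        return self.counts[tag_group]
```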
-
Publication number: 20090138627
Abstract: A method and apparatus for high performance volatile disk drive (VDD) memory access using an integrated direct memory access (DMA) engine. In one embodiment, the method includes the detection of a data access request to VDD memory implemented within volatile system memory. Once a data access request is detected, a VDD driver may issue a DMA data request to perform the data access request from the VDD. Accordingly, in one embodiment, the job of transferring data to/from a VDD memory implemented within an allocated portion of volatile system memory is offloaded to a DMA engine, such as, for example, an integrated DMA engine within a memory controller hub (MCH). Other embodiments are described and claimed.
Type: Application
Filed: January 27, 2009
Publication date: May 28, 2009
Applicant: INTEL CORPORATION
Inventors: Shrikant M. Shah, Chetan J. Rawal
-
Patent number: 7539790
Abstract: To communicate over a SCSI protocol, a first device allocates buffers for a dummy SCSI read command and sends the dummy SCSI read command to a second device. This dummy SCSI read command is not a request by the first device to read data from the second device but instead is an indication that the first device is ready to receive data from the second device. In response, the second device stores the dummy SCSI read command to a command queue until the second device wishes to send data to the first device. At that time, the second device removes the dummy SCSI read command from the command queue and sends a response to the dummy SCSI read command to the first device. This response includes data that the second device wishes to send to the first device. The first device then delivers the received data to a higher layer process.
Type: Grant
Filed: November 15, 2005
Date of Patent: May 26, 2009
Assignee: 3PAR, Inc.
Inventor: Douglas J. Cameron
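The flow described above treats the dummy read as a standing invitation to send data: the target parks it in a queue and answers it only when it actually has something to push. A minimal sketch, with invented names and no real SCSI framing:

```python
# Sketch of the dummy-read flow: a parked dummy read acts as a credit the
# target consumes when it has data to send back to the initiator.
from collections import deque

class Target:
    def __init__(self):
        self.parked_reads = deque()   # dummy reads waiting for data

    def receive_dummy_read(self, read_id):
        """Park a dummy SCSI read until there is data to send."""
        self.parked_reads.append(read_id)

    def send_data(self, payload):
        """Answer the oldest parked dummy read with real data.
        Returns None when no dummy read is parked (initiator not ready)."""
        if not self.parked_reads:
            return None
        read_id = self.parked_reads.popleft()
        return (read_id, payload)     # response delivered to the initiator
```

Each transfer consumes one parked read, so the initiator controls how much unsolicited data it can receive by how many dummy reads it keeps outstanding.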
-
Patent number: 7533197
Abstract: A multi-node computer system with a plurality of interconnected processing nodes, including a method of using DMA engines without page locking by the operating system. The method includes a sending node with a first virtual address space and a receiving node with a second virtual address space. A DMA data transfer operation is performed between the first virtual address space on the sending node and the second virtual address space on the receiving node via a DMA engine, and if the DMA operation refers to a virtual address within the second virtual address space that is not in physical memory, the DMA operation is caused to fail. The method includes causing the receiving node to map the referenced virtual address within the second virtual address space to a physical address, and causing the sending node to retry the DMA operation, wherein the retried DMA operation is performed without page locking.
Type: Grant
Filed: November 8, 2006
Date of Patent: May 12, 2009
Assignee: SiCortex, Inc.
Inventors: Judson S. Leonard, David Gingold, Lawrence C. Stewart
-
Patent number: 7523228
Abstract: A direct memory access (DMA) device is structured as a loosely coupled DMA engine (DE) and a bus engine (BE). The DE breaks the programmed data block moves into separate transactions, interprets the scatter/gather descriptors, and arbitrates among channels. The DE and BE use a combined read-write (RW) command that can be queued between the DE and the BE. The bus engine (BE) has two read queues and a write queue. The first read queue is for "new reads" and the second read queue is for "old reads," which are reads that have been retried on the bus at least once. The BE gives absolute priority to new reads, and still avoids deadlock situations.
Type: Grant
Filed: September 18, 2006
Date of Patent: April 21, 2009
Assignee: International Business Machines Corporation
Inventors: Giora Biran, Luis E. De la Torre, Bernard C. Drerup, Jyoti Gupta, Richard Nicholas
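The two-queue priority scheme can be sketched as follows, with invented names: a read that gets retried on the bus moves to the "old reads" queue, and the bus engine services new reads with absolute priority, falling back to old reads only when no new read is waiting.

```python
# Sketch of the new-reads / old-reads split: absolute priority to new reads,
# with retried ("old") reads served only when the new-read queue is empty.
from collections import deque

class BusEngine:
    def __init__(self):
        self.new_reads = deque()
        self.old_reads = deque()

    def submit(self, read):
        """A fresh read arrives from the DMA engine."""
        self.new_reads.append(read)

    def retry(self, read):
        """A read that was retried on the bus becomes an 'old' read."""
        self.old_reads.append(read)

    def next_read(self):
        """Pick the next read to put on the bus."""
        if self.new_reads:            # absolute priority to new reads
            return self.new_reads.popleft()
        if self.old_reads:
            return self.old_reads.popleft()
        return None
```

Old reads still drain eventually because the new-read queue empties between bursts, which is how a real design would have to avoid starving them.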
-
Publication number: 20090094388
Abstract: An apparatus and a computer program product are provided for completing a plurality of direct memory access (DMA) commands in a computer system. It is determined whether the DMA commands are chained together as a list DMA command. Upon a determination that the DMA commands are chained together as a list DMA command, it is also determined whether a current list element of the list DMA command is fenced. Upon a determination that the current list element is not fenced, a next list element is fetched and processed before the current list element has been completed.
Type: Application
Filed: December 10, 2008
Publication date: April 9, 2009
Inventors: Matthew Edward King, Peichum Peter Liu, David Mui, Takeshi Yamazaki
-
Patent number: 7512722
Abstract: A method, an apparatus, and a computer program product are provided for completing a plurality of direct memory access (DMA) commands in a computer system. It is determined whether the DMA commands are chained together as a list DMA command. Upon a determination that the DMA commands are chained together as a list DMA command, it is also determined whether a current list element of the list DMA command is fenced. Upon a determination that the current list element is not fenced, a next list element is fetched and processed before the current list element has been completed.
Type: Grant
Filed: July 31, 2003
Date of Patent: March 31, 2009
Assignee: International Business Machines Corporation
Inventors: Matthew Edward King, Peichum Peter Liu, David Mui, Takeshi Yamazaki
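The fencing rule in this abstract — fetch the next list element early unless the current element is fenced — can be sketched as an event trace. The dict field names and the `schedule` function are illustrative, not from the patent.

```python
# Sketch of list-element fencing: an unfenced element allows the next
# element to be prefetched before the current one completes; a fenced
# element forbids that overlap.
def schedule(list_elements):
    """list_elements: list of dicts with 'name' and 'fenced' keys.
    Returns an event trace showing where prefetch overlap was allowed."""
    trace = []
    for i, elem in enumerate(list_elements):
        trace.append(("process", elem["name"]))
        nxt = list_elements[i + 1] if i + 1 < len(list_elements) else None
        if nxt is not None and not elem["fenced"]:
            # current element not fenced: next element is fetched early,
            # before the current element's 'complete' event
            trace.append(("prefetch", nxt["name"]))
        trace.append(("complete", elem["name"]))
    return trace
```

In the trace, a prefetch event for element N+1 appears before the completion event of an unfenced element N, but never before the completion of a fenced one.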
-
Publication number: 20090031055
Abstract: Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node in an origin injection FIFO buffer for the origin DMA engine, a RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
Type: Application
Filed: July 27, 2007
Publication date: January 29, 2009
Inventors: Charles J Archer, Michael A. Blocksome
-
Patent number: 7484016
Abstract: A method and apparatus for high performance volatile disk drive (VDD) memory access using an integrated direct memory access (DMA) engine. In one embodiment, the method includes the detection of a data access request to VDD memory implemented within volatile system memory. Once a data access request is detected, a VDD driver may issue a DMA data request to perform the data access request from the VDD. Accordingly, in one embodiment, the job of transferring data to/from a VDD memory implemented within an allocated portion of volatile system memory is offloaded to a DMA engine, such as, for example, an integrated DMA engine within a memory controller hub (MCH). Other embodiments are described and claimed.
Type: Grant
Filed: June 30, 2004
Date of Patent: January 27, 2009
Assignee: Intel Corporation
Inventors: Shrikant M. Shah, Chetan J. Rawal
-
Publication number: 20090019189
Abstract: A system capable of efficiently transferring a command set for controlling an image forming apparatus to the image forming apparatus from a host apparatus. A command separate/storage unit separates an image forming command set into a context command set and an object command set, and allocates both command sets in a main memory device. A command read instruction transmission unit transmits a command read instruction having a transfer size and a storage address of each of the allocated context command set and object command set, to the memory access controller. The memory access controller compares the storage address of the context command set included in the received command read instruction with a previous storage address, and reads the context command set from the main memory device only when both storage addresses differ from each other.
Type: Application
Filed: June 18, 2008
Publication date: January 15, 2009
Inventor: Junichi Tamai
-
Publication number: 20090006667
Abstract: A memory apparatus and method of operation therefor includes control by a memory controller which, in one embodiment, is configured to configure a host sector application flag table in the memory array, the flag table associating each flag value with an address in the memory array where information associated with that flag value is stored. In a second embodiment the controller is configured to (a) write at least one page of information to the memory, each page having a plurality of sectors, each of the at least one pages including a page header having a flag value associated with information written to the page, and (b) configure an exception block in memory, the exception block including exception entries, each exception entry having at least an exception flag value and address information identifying an address range in the memory array to which the exception flag value applies.
Type: Application
Filed: June 29, 2007
Publication date: January 1, 2009
Inventor: Jason T. Lin
-
Publication number: 20080320179
Abstract: In an AV-data transfer system, AV data stored in a RAID embedded in an AV server is supplied to a client personal computer connected to a network such as the Internet or an intranet by way of the network, and AV data output by the client personal computer is transmitted to the AV server through the network to be stored in the RAID. The AV server makes accesses to the RAID to write and read out data into and from the RAID. In addition to the AV server, the AV-data transfer system also includes another personal computer for exchanging AV data with the client personal computer and receiving a variety of commands from the client personal computer by way of the network in accordance with an FTP (File Transfer Protocol). As a result, it is possible to quickly handle access requests made by a larger number of client personal computers.
Type: Application
Filed: August 22, 2008
Publication date: December 25, 2008
Inventor: Tomohisa SHIGA
-
Publication number: 20080313363
Abstract: A method and a device for exchanging data. The method includes: requesting the processor, by the data transfer controller, to initiate a transfer of multiple data chunks from the second memory unit to the virtual FIFO data structure, in response to a status of the virtual FIFO data structure; sending, by the processor to the data transfer controller, a request acknowledgment and an indication of the size of a group of data chunks to be transferred to the virtual FIFO data structure; updating the state of the virtual FIFO data structure; transferring, by the second level DMA controller, the group of data chunks from the second memory unit to the virtual FIFO data structure; sending, by the processor, a DMA completion acknowledgment indicating that the group of data chunks was written to the virtual FIFO data structure; and transferring, by a first level DMA controller, a data chunk from the virtual FIFO data structure to the hardware FIFO memory unit.
Type: Application
Filed: February 20, 2006
Publication date: December 18, 2008
Applicant: Freescale Semiconductor, Inc.
Inventors: Yoram Granit, Adi Katz, Gil Lidji
-
Patent number: 7464197
Abstract: A distributed direct memory access (DMA) method, apparatus, and system is provided within a system on chip (SOC). DMA controller units are distributed to various functional modules desiring direct memory access. The functional modules interface to a systems bus over which the direct memory access occurs. A global buffer memory, to which the direct memory access is desired, is coupled to the system bus. Bus arbitrators are utilized to arbitrate which functional modules have access to the system bus to perform the direct memory access. Once a functional module is selected by the bus arbitrator to have access to the system bus, it can establish a DMA routine with the global buffer memory.
Type: Grant
Filed: January 14, 2005
Date of Patent: December 9, 2008
Assignee: Intel Corporation
Inventors: Kumar Ganapathy, Ruban Kanapathippillai, Saurin Shah, George Moussa, Earle F. Philhower, III, Ruchir Shah
-
Patent number: 7444435
Abstract: A DMA controller (DMAC) for handling a list DMA command in a computer system is provided. The computer system has at least one processor and a system memory, the list DMA command relates to an effective address (EA) of the system memory, and the at least one processor has a local storage. The DMAC includes a DMA command queue (DMAQ) coupled to the local storage and configured to receive the list DMA command from the local storage and to enqueue the list DMA command. An issue logic is coupled to the DMAQ and configured to issue an issue request to the DMAQ. A request interface logic (RIL) is coupled to the DMAQ and configured to read the list DMA command based on the issue request. The RIL is further coupled to the local storage and configured to send a fetch request to the local storage to initiate a fetch of a list element of the list DMA command from the local storage to the DMAQ.
Type: Grant
Filed: March 14, 2007
Date of Patent: October 28, 2008
Assignee: International Business Machines Corporation
Inventors: Matthew Edward King, Peichum Peter Liu, David Mui, Takeshi Yamazaki
-
Patent number: 7444642
Abstract: The present disclosure describes a method comprising issuing a plurality of commands to a controller, wherein the commands are issued in a first order, and wherein the completion status of commands is written to the memory in a second order, and wherein the second order may be different from the first order. Also described is an apparatus comprising a controller adapted to accept a plurality of commands, wherein the commands are issued in a first order, and completion status of commands is written to the memory in a second order, and wherein the second order may be different from the first order.
Type: Grant
Filed: November 15, 2001
Date of Patent: October 28, 2008
Assignee: Intel Corporation
Inventors: Linden Minnick, Roy Callum, Patrick L. Connor
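The decoupling of issue order from completion-status write-back order can be shown with a small sketch (names invented): each command has its own status slot in memory, so the controller can write statuses in whatever order commands finish without losing track of which command each status belongs to.

```python
# Sketch of out-of-order completion status: commands issued in one order,
# per-command status slots written back in the order the controller
# finishes them.
def complete_commands(issued, finish_order):
    """issued: command names in issue (first) order.
    finish_order: indices in the order the controller finishes them.
    Returns (slot, command) records in write-back (second) order."""
    status_memory = {}
    writeback = []
    for idx in finish_order:
        status_memory[idx] = "done"          # per-command status slot
        writeback.append((idx, issued[idx]))
    # every issued command eventually has its status written
    assert all(status_memory[i] == "done" for i in range(len(issued)))
    return writeback
```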
-
Patent number: 7444441
Abstract: A device for attachment to a host for serial data communication including means for transferring to the host a predetermined data structure indicating whether or not the device supports direct memory access.
Type: Grant
Filed: October 1, 2003
Date of Patent: October 28, 2008
Assignee: Nokia Corporation
Inventors: Richard Petrie, Jan Gundorf
-
Patent number: 7437727
Abstract: The present invention implements an I/O task architecture in which an I/O task requested by the storage manager, for example a stripe write, is decomposed into a number of lower-level asynchronous I/O tasks that can be scheduled independently. Resources needed by these lower-level I/O tasks are dynamically assigned, on an as-needed basis, to balance the load and use resources efficiently, achieving higher scalability. A hierarchical order is assigned to the I/O tasks to ensure that there is a forward progression of the higher-level I/O task and to ensure that resources do not become deadlocked.
Type: Grant
Filed: March 21, 2002
Date of Patent: October 14, 2008
Assignee: Network Appliance, Inc.
Inventors: James Leong, Rajesh Sundaram, Douglas P. Doucette, Stephen H. Strange, Srinivasan Viswanathan
-
Publication number: 20080244114
Abstract: A runtime integrity check may be implemented for a chain or execution path. When the chain or execution path calls other functions, the correctness of an entity called from the execution path is verified. As a result, attacks by malicious software that attempt to circumvent interrupt handlers can be combated.
Type: Application
Filed: March 29, 2007
Publication date: October 2, 2008
Inventors: Travis T. Schluessler, David Durham, Hormuzd Khosravi
-
Patent number: 7428603
Abstract: The disclosure is directed to a device including a memory interface. The memory interface includes a data interface, a first state machine and a second state machine. The first state machine includes a first chip select interface and a first ready/busy interface. The first state machine is configured to select and monitor a first memory device via the first chip select interface and the first ready/busy interface, respectively, when the first memory device is coupled to the data interface. The second state machine includes a second chip select interface and a second ready/busy interface. The second state machine is configured to select and monitor a second memory device via the second chip select interface and the second ready/busy interface, respectively, when the second memory device is coupled to the data interface.
Type: Grant
Filed: June 30, 2005
Date of Patent: September 23, 2008
Assignee: Sigmatel, Inc.
Inventors: Matthew Henson, David Cureton Baker
-
Publication number: 20080228960
Abstract: Provided are an information processing apparatus and a command multiplicity control method that enable easy and proper control of command multiplicity assigned to each host. The information processing apparatus, which executes processing in accordance with a command sent from each of plural hosts, dynamically determines each host's command multiplicity with respect to the information processing apparatus in accordance with command issue frequency of each host, and sets the determined multiplicity for the host. Accordingly, an information processing apparatus that enables easy and proper control of the command multiplicity assigned to each host can be realized.
Type: Application
Filed: January 14, 2008
Publication date: September 18, 2008
Applicant: Hitachi Ltd.
Inventor: Daiki NAKATSUKA
-
Patent number: 7404015
Abstract: Methods and apparatus are disclosed for processing packets, for example, using a high performance massively parallel packet processing architecture, distributing packets or subsets thereof to individual packet processors and gathering the processed packet or subsets and forwarding the resultant modified or otherwise processed packets, accessing packet processing resources across a shared resource network, accessing packet processing resources using direct memory access techniques, and/or storing one overlapping portion of a packet in a global packet memory while providing a second overlapping portion to a packet processor. In one implementation, the processing of the packet includes accessing one or more processing resources across a resource network shared by multiple packet processing engines. In one implementation, a global packet memory is one of these resources. In one implementation, these resources are accessed using direct memory access (DMA) techniques.
Type: Grant
Filed: August 24, 2002
Date of Patent: July 22, 2008
Assignee: Cisco Technology, Inc.
Inventors: Rami Zemach, Vitaly Sukonik, William N. Eatherton, John H. W. Bettink, Moshe Voloshin
-
Publication number: 20080147909
Abstract: Machine-readable media, methods, apparatus and system are described. In some embodiments, a host platform may receive a first USB command from a client platform, wherein the first USB command notifies the host platform that a USB device is plugged into the client platform. The host platform may further create a virtual USB device as a virtualization of the USB device in response to the first USB command, and may establish a USB device driver in response to the creation of the virtual USB device. The USB device driver may control the USB device in the client platform.
Type: Application
Filed: December 18, 2006
Publication date: June 19, 2008
Inventors: Winters Zhang, Hongbing Zhang, Thomas Wang, Minqiang Wu
-
Publication number: 20080133788
Abstract: A plurality of independent cache units and nonvolatile memory units are provided in a disk controller located between a host (central processing unit) and a magnetic disk drive. A plurality of channel units for controlling the data transfer to and from the central processing unit and a plurality of control units for controlling the data transfer to and from the magnetic disk drive are independently connected to the cache units and the nonvolatile memory units through data buses and access lines.
Type: Application
Filed: December 17, 2007
Publication date: June 5, 2008
Inventor: Yasuo Inoue
-
Publication number: 20080126605
Abstract: A method for handling multiple data transfer requests from one application within a computer system is disclosed. In response to the receipt of multiple data transfer requests from an application, a data definition (DD) chain is generated for all the data transfer requests. The DD chain is then divided into multiple DD sub-blocks. The DD sub-blocks are subsequently loaded into a set of available direct memory access (DMA) engines. Each of the available DMA engines performs data transfers on a corresponding DD sub-block until the entire DD chain has been completed.
Type: Application
Filed: September 20, 2006
Publication date: May 29, 2008
Inventors: Lucien Mirabeau, Tiep Q. Pham
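The chain-splitting step above can be sketched in a few lines. The abstract does not say how the chain is divided, so the contiguous near-even split below is an assumption, and the function name is invented.

```python
# Sketch of dividing a DD chain into per-engine sub-blocks: one contiguous
# sub-block per available DMA engine, sizes differing by at most one.
def divide_dd_chain(dd_chain, num_engines):
    """Split dd_chain (a list of data definitions) into num_engines
    contiguous sub-blocks for independent DMA engines to work on."""
    n = len(dd_chain)
    base, extra = divmod(n, num_engines)
    sub_blocks, start = [], 0
    for e in range(num_engines):
        size = base + (1 if e < extra else 0)   # spread the remainder
        sub_blocks.append(dd_chain[start:start + size])
        start += size
    return sub_blocks
```

Concatenating the sub-blocks reproduces the original chain, so no data definition is lost or duplicated in the split.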
-
Patent number: 7380115
Abstract: A direct memory access (DMA) engine has virtually all control in connection with data transfers that can involve one or both of primary and secondary controllers. The DMA engine receives a command related to a data transfer from a processor associated with the primary controller. This command causes the DMA engine to access processor memory to obtain metadata therefrom. In performing a DMA operation, the metadata enables the DMA engine to conduct data transfers between local memory and remote memory. In performing exclusive OR operations, the DMA engine is involved with conducting data transfers using local memory.
Type: Grant
Filed: November 7, 2002
Date of Patent: May 27, 2008
Assignee: Dot Hill Systems Corp.
Inventor: Gene Maine
-
Publication number: 20080120443
Abstract: A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to interface the memory circuits and the system for reducing command scheduling constraints of the memory circuits.
Type: Application
Filed: October 30, 2007
Publication date: May 22, 2008
Inventors: Suresh Natarajan Rajan, Keith R. Schakel, Michael John Sebastian Smith, David T. Wang, Frederick Daniel Weber
-
Patent number: 7376762
Abstract: A system and method for providing direct memory access is disclosed. In a particular embodiment, a direct memory access module is disclosed that includes a memory, a first interface coupled to a processor, and a second interface coupled to a peripheral module. A first instruction received from the first interface is stored in the memory. The first instruction includes a number of programmed input/output words to be provided to the peripheral module via the second interface. The direct memory access module also includes an instruction execution unit to process the first instruction.
Type: Grant
Filed: October 31, 2005
Date of Patent: May 20, 2008
Assignee: SigmaTel, Inc.
Inventors: Matthew Henson, David Cureton Baker
-
Patent number: 7376950
Abstract: The invention features a method for transferring data to programming engines using multiple memory channels, parsing the data over at most two of the memory channels, and establishing at most two logical states to signal completion of a memory transfer operation.
Type: Grant
Filed: May 8, 2002
Date of Patent: May 20, 2008
Assignee: Intel Corporation
Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein, Myles J. Wilde
-
Patent number: 7376768
Abstract: A method is provided for writing to multiple recording devices where at least two of the multiple recording devices are configured to respond dissimilarly to a command associated with the writing. The method initiates with establishing a plurality of independent write threads configured to read data from a circular buffer composed of an initial amount of buffer elements. Then, each of the plurality of independent write threads is associated with one of the multiple recording devices. Next, the method detects when the write thread associated with the fastest of the multiple recording devices is reading the last available buffer element. In response to this detection, the method adds at least one additional buffer element to the circular buffer. A computer readable medium and a system configured to write to multiple recording devices simultaneously are also provided.
Type: Grant
Filed: December 19, 2003
Date of Patent: May 20, 2008
Assignee: Sonic Solutions, Inc.
Inventor: Gianluca Macciocca
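The growable circular buffer described in this abstract can be sketched in C. This is a loose, single-threaded illustration under stated assumptions: the struct layout, the names `circ_buf` and `cb_claim`, and the grow-by-one policy are invented for the example, and the real invention coordinates multiple write threads.

```c
#include <stdlib.h>

/* Hypothetical sketch of the patent's growable circular buffer: when
 * the fastest writer is about to take the last available element, one
 * more element is appended so slower recording devices never stall
 * the pipeline. Single-threaded model for clarity. */
typedef struct {
    char  **elems;    /* buffer element slots */
    size_t  cap;      /* current number of elements */
    size_t  used;     /* elements currently claimed */
} circ_buf;

circ_buf *cb_new(size_t initial) {
    circ_buf *cb = malloc(sizeof *cb);
    cb->elems = calloc(initial, sizeof *cb->elems);
    cb->cap = initial;
    cb->used = 0;
    return cb;
}

/* Claim one element; grow the buffer if the last available element
 * is being taken, mirroring the detection step in the abstract. */
int cb_claim(circ_buf *cb) {
    if (cb->used + 1 == cb->cap) {                  /* last free element */
        cb->elems = realloc(cb->elems,
                            (cb->cap + 1) * sizeof *cb->elems);
        cb->elems[cb->cap] = NULL;
        cb->cap += 1;                               /* add one element */
    }
    return (int)cb->used++;
}
```

In a real implementation the growth step would of course need to preserve circular wrap-around ordering and be synchronized against the reader threads.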
-
Publication number: 20080109573
Abstract: The invention relates to an RDMA system for sending commands from a source node to a target node, where the commands are locally executed at the target node. One aspect of the invention is a multi-node computer system having a plurality of interconnected processing nodes. The computer system issues a direct memory access (DMA) command from a first node to be executed by a DMA engine at a second node. Commands are transferred and executed by forming, at the first node, a packet having a payload containing the DMA command. The packet is sent to the second node via the interconnection topology, where the second node receives the packet and validates that it complies with a predefined trust relationship. The command is then processed by the DMA engine at the second node.
Type: Application
Filed: November 8, 2006
Publication date: May 8, 2008
Inventors: Judson S. Leonard, Lawrence C. Stewart, David Gingold
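The packet-then-validate flow above can be sketched in C. This is an illustrative model, not the patent's design: the field names, the shared-key check standing in for the "predefined trust relationship", and the `memcpy` standing in for the target-side DMA engine are all assumptions.

```c
#include <stdint.h>
#include <string.h>

/* Loose sketch: node A wraps a DMA command in a packet payload; node B
 * checks the trust relationship before handing the command to its
 * local DMA engine. A shared key models the trust check. */
typedef struct { uint32_t src, dst, len; } dma_cmd;
typedef struct { uint32_t src_node; uint64_t trust_key; dma_cmd cmd; } packet;

/* Target-side receive: execute the embedded command only if the
 * packet carries the expected key. Returns 1 on success, 0 if the
 * packet violates the trust relationship. */
int receive_packet(const packet *p, uint64_t expected_key,
                   uint8_t *mem, const uint8_t *remote) {
    if (p->trust_key != expected_key)
        return 0;                                    /* untrusted: drop */
    memcpy(mem + p->cmd.dst, remote + p->cmd.src,
           p->cmd.len);                              /* local DMA move */
    return 1;
}
```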
-
Patent number: 7359077
Abstract: A CPU monitors the state of a printing apparatus, and controls transmission, to the host, of a control signal for controlling the data reception timing from the host.
Type: Grant
Filed: April 23, 2003
Date of Patent: April 15, 2008
Assignee: Canon Kabushiki Kaisha
Inventors: Akira Kuronuma, Masafumi Wataya, Toru Nakayama, Takuji Katsu
-
Patent number: 7330914
Abstract: The present invention is a DMA controller that accesses the transfer source and transfer destination of a DMA transfer via a bus, chains a plurality of data segments in the transfer source according to an instruction from an external initiator, and burst-transfers them to the transfer destination. When boundary data is generated, that is, data smaller than the bus width remaining after the transfer is divided into bus-width units, the boundary data is stored in a boundary data buffer in the DMA controller, merged with the data read from the transfer source by the next DMA command, and burst-transferred to the transfer destination.
Type: Grant
Filed: December 30, 2004
Date of Patent: February 12, 2008
Assignee: Fujitsu Limited
Inventor: Masato Inogai
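The boundary-data merging above can be sketched in C. Everything here is an illustrative assumption: the 8-byte bus width, the `dma_ctl` struct, and the fixed-size staging buffer stand in for controller-internal hardware.

```c
#include <stdint.h>
#include <string.h>

#define BUS_WIDTH 8   /* illustrative bus width in bytes */

/* Sketch of the boundary-data scheme: each chained segment is
 * burst-transferred in BUS_WIDTH units; a sub-bus-width remainder is
 * parked in bound_buf and merged with the next segment's data. */
typedef struct {
    uint8_t  bound_buf[BUS_WIDTH];  /* parked boundary data */
    size_t   bound_len;             /* bytes currently parked */
    uint8_t *dst;                   /* transfer destination */
    size_t   written;               /* bytes burst-transferred so far */
} dma_ctl;

/* Transfer one source segment, merging any parked boundary data. */
void dma_segment(dma_ctl *c, const uint8_t *src, size_t len) {
    uint8_t merged[4096];           /* staging area (sketch only) */
    size_t total = c->bound_len + len;
    memcpy(merged, c->bound_buf, c->bound_len);
    memcpy(merged + c->bound_len, src, len);
    size_t bursts = total / BUS_WIDTH;          /* full bus-width units */
    memcpy(c->dst + c->written, merged, bursts * BUS_WIDTH);
    c->written  += bursts * BUS_WIDTH;
    c->bound_len = total - bursts * BUS_WIDTH;  /* new boundary data */
    memcpy(c->bound_buf, merged + bursts * BUS_WIDTH, c->bound_len);
}
```

The point of the technique is visible in the sketch: no burst ever moves less than a full bus-width unit, so odd-sized chained segments never degrade into narrow transfers.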
-
Patent number: 7302503
Abstract: A direct memory access system utilizing a local memory that stores a plurality of DMA command lists, each comprising at least one DMA command. A command queue can hold a plurality of entries, each entry comprising a pointer field and a sequence field. The pointer field points to one of the DMA command lists. The sequence field holds a sequence value. A DMA engine accesses an entry in the command queue and then accesses the DMA commands of the DMA command list pointed to by the pointer field of the accessed entry. The DMA engine performs the DMA operations specified by the accessed DMA commands. The DMA engine makes available the sequence value held in the sequence field of the accessed entry when all of the DMA commands in the accessed command list have been performed. In one embodiment, the command queue is part of the DMA engine.
Type: Grant
Filed: April 1, 2003
Date of Patent: November 27, 2007
Assignee: Broadcom Corporation
Inventor: Alexander G. MacInnis
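The pointer-plus-sequence queue entry can be sketched in C. The struct layouts, the NULL-terminated list convention, and the global completion variable are assumptions made for the example, not the patent's layout.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the queued command-list scheme: each queue entry points
 * at a NULL-terminated list of DMA commands and carries a sequence
 * value the engine publishes once the whole list has run. */
typedef struct { uint8_t *src, *dst; size_t len; } dma_cmd;
typedef struct { dma_cmd *list; uint32_t seq; } q_entry;

static uint32_t last_seq;   /* sequence value made available on completion */

void run_entry(const q_entry *e) {
    for (const dma_cmd *c = e->list; c->src; c++)   /* each command in list */
        for (size_t i = 0; i < c->len; i++)
            c->dst[i] = c->src[i];                  /* perform the move */
    last_seq = e->seq;   /* publish only after the full list completes */
}
```

Software can then poll the published sequence value to learn how far the engine has progressed through the queue, without per-command interrupts.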
-
Patent number: 7299313
Abstract: A system for implementing a memory subsystem command interface, the system including a cascaded interconnect system including one or more memory modules, a memory controller and a memory bus. The memory controller generates a data frame that includes a plurality of commands. The memory controller and the memory module are interconnected by a packetized multi-transfer interface via the memory bus and the frame is transmitted to the memory modules via the memory bus.
Type: Grant
Filed: October 29, 2004
Date of Patent: November 20, 2007
Assignee: International Business Machines Corporation
Inventors: Kevin C. Gower, Warren E. Maule
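The multi-command frame can be sketched in C. The frame layout, the four-command capacity, and the `frame_push` helper are invented for illustration; the patent's actual wire format is not described in the abstract.

```c
#include <stdint.h>

/* Sketch of the multi-command frame: the memory controller packs
 * several commands into one data frame and ships the frame over the
 * packetized memory bus in a single multi-transfer. */
#define CMDS_PER_FRAME 4
typedef struct { uint8_t opcode; uint16_t addr; } mem_cmd;
typedef struct { mem_cmd cmds[CMDS_PER_FRAME]; uint8_t count; } frame;

/* Add a command to the frame; returns 1 when the frame is full and
 * ready to transmit down the cascade, 0 otherwise. */
int frame_push(frame *f, uint8_t opcode, uint16_t addr) {
    f->cmds[f->count].opcode = opcode;
    f->cmds[f->count].addr = addr;
    return ++f->count == CMDS_PER_FRAME;
}
```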
-
Patent number: 7293119
Abstract: The present invention relates to a method and system for performing a data transfer between a shared memory (16) of a processor device (10) and a circuitry (20) connected to the processor device (10), wherein the data transfer is performed by triggering a DMA transfer of the data to the processor device, adding the DMA transfer to a transaction log, and providing the transaction log to the processor device, when the transaction log has reached a predetermined depth limit. The processor device is then informed of the DMA transfer of the transaction log, so as to be able to validate the transferred data. Thereby, significant background data movement can be provided without introducing high core overheads at the processor device (10).
Type: Grant
Filed: December 27, 2001
Date of Patent: November 6, 2007
Assignee: Nokia Corporation
Inventor: John Beale
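The depth-limited transaction log can be sketched in C. The depth of 4, the `txn_log` struct, and the notification counter standing in for the actual processor signal are assumptions for the example.

```c
#include <stddef.h>

#define LOG_DEPTH 4                 /* predetermined depth limit */

/* Sketch of the transaction-log idea: DMA transfers are recorded in a
 * log, and the processor is informed only when the log reaches the
 * depth limit, amortizing the per-transfer notification cost. */
typedef struct { size_t off, len; } xfer;
typedef struct {
    xfer   log[LOG_DEPTH];
    size_t n;                       /* entries currently in the log */
    int    notifications;          /* times the processor was informed */
} txn_log;

/* Record one DMA transfer; hand the log to the processor when full. */
void log_transfer(txn_log *t, size_t off, size_t len) {
    t->log[t->n].off = off;
    t->log[t->n].len = len;
    if (++t->n == LOG_DEPTH) {      /* depth limit reached */
        t->notifications++;         /* inform processor of the batch */
        t->n = 0;                   /* start a fresh log */
    }
}
```

With this batching, eight transfers cost two notifications instead of eight, which is the "significant background data movement without high core overheads" claim in miniature.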
-
Patent number: 7287101
Abstract: Machine-readable media, methods, and apparatus are described for transferring data. In some embodiments, an operating system may allocate pages to a buffer and may build a memory descriptor list that references the pages allocated to the buffer. A direct memory access (DMA) controller may process the memory descriptor list and transfer data between a buffer defined by the memory descriptor list and another location per the memory descriptor list. The DMA controller may further support data transfers that involve buffers defined by scatter gather lists and/or chained DMA descriptors built by a device driver.
Type: Grant
Filed: August 5, 2003
Date of Patent: October 23, 2007
Assignee: Intel Corporation
Inventors: William T. Futral, Jie Ni
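An MDL-driven transfer can be sketched in C. The tiny 16-byte page size, the fixed-capacity `mdl` struct, and `dma_write_mdl` are illustrative assumptions; a real memory descriptor list also records the starting offset and total length of the buffer.

```c
#include <stddef.h>
#include <string.h>

#define PAGE 16   /* tiny illustrative page size */

/* Sketch of an MDL-driven transfer: the OS hands the DMA controller a
 * list of (possibly non-contiguous) pages backing one logical buffer,
 * and the controller walks the list to move the data. */
typedef struct { unsigned char *pages[8]; size_t npages; } mdl;

/* Copy len bytes from src into the buffer the MDL describes. */
void dma_write_mdl(const mdl *m, const unsigned char *src, size_t len) {
    for (size_t i = 0; len > 0 && i < m->npages; i++) {
        size_t chunk = len < PAGE ? len : PAGE;
        memcpy(m->pages[i], src, chunk);    /* fill one page at a time */
        src += chunk;
        len -= chunk;
    }
}
```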
-
Patent number: 7259876
Abstract: A first storage stores input image data. A second storage stores image data read from the first storage. A control part determines the timing of transferring the image data from the first storage to the second storage, relative to the timing at which the image data is transferred into the first storage, based on the rate at which the image data is transferred and written into the first storage and the rate at which it is transferred and written from the first storage into the second storage.
Type: Grant
Filed: June 28, 2002
Date of Patent: August 21, 2007
Assignee: Ricoh Company, Ltd.
Inventors: Yuriko Obata, Norio Michiie, Kiyotaka Moteki, Hiromitsu Shimizu, Takao Okamura, Yasuhiro Hattori
-
Patent number: 7219169
Abstract: In one embodiment, a direct memory access (DMA) disk controller used in hardware-assisted data transfer operations includes command receiving logic to receive a data transfer command issued by a processor. The data transfer command identifies one or more locations in memory and multiple distinct regions on one or more disks accessible to the DMA disk controller. The DMA disk controller further includes data manipulation logic to transfer data between the memory locations and the distinct regions on the disks according to the data transfer command.
Type: Grant
Filed: September 30, 2002
Date of Patent: May 15, 2007
Assignee: Sun Microsystems, Inc.
Inventors: Whay Sing Lee, Raghavendra Rao, Satyanarayana Nishtala
-
Patent number: 7203811
Abstract: A method and an apparatus are provided for handling a list DMA command in a computer system. The list DMA command relates to an effective address (EA) of a system memory. At least one processor in the system has a local storage. The list DMA command is queued in a DMA queue (DMAQ). A list element is fetched from the local storage to the DMAQ. The list DMA command is read from the DMAQ. A bus request is issued for the list element. If the bus request is a last request, it is determined whether a current list element is a last list element. If the current list element is not the last list element, it is determined whether the current list element is fenced. If the current list element is not fenced, a next list element is fetched regardless of whether all outstanding requests are completed.
Type: Grant
Filed: July 31, 2003
Date of Patent: April 10, 2007
Assignee: International Business Machines Corporation
Inventors: Matthew Edward King, Peichum Peter Liu, David Mui, Takeshi Yamazaki
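The fenced-element decision in the abstract reduces to a small predicate, sketched here in C. The `list_elem` flags and the outstanding-request counter are illustrative stand-ins for DMAQ state.

```c
/* Sketch of the fenced list-element rule: the engine fetches the next
 * list element eagerly, except that a fenced element must wait until
 * all outstanding bus requests have completed. */
typedef struct { int fenced; int last; } list_elem;

/* Returns 1 if the engine may fetch element `next` now. */
int may_fetch_next(const list_elem *next, int outstanding_requests) {
    if (next->fenced && outstanding_requests > 0)
        return 0;   /* fence: drain outstanding requests first */
    return 1;       /* unfenced: fetch regardless of outstanding work */
}
```

The payoff is overlap: unfenced elements keep the fetch pipeline busy while earlier transfers are still in flight, and the fence bit gives software an ordering point only where it needs one.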
-
Patent number: 7200689
Abstract: A method and an apparatus are provided for loading data to a local store of a processor in a computer system having a direct memory access (DMA) mechanism. A transfer of data is performed from a system memory of the computer system to the local store. The data is fetched from the system memory to a cache of the processor. A DMA load request is issued to request data. It is determined whether the requested data is found in the cache. Upon a determination that the requested data is found in the cache, the requested data is loaded directly from the cache to the local store.
Type: Grant
Filed: July 31, 2003
Date of Patent: April 3, 2007
Assignee: International Business Machines Corporation
Inventor: James Allan Kahle
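The cache-first DMA load can be sketched in C. The tiny direct-mapped cache model, full-address tags, and single-byte granularity are simplifications made for the example.

```c
/* Sketch of the cache-intervention idea: a DMA load into the local
 * store is satisfied directly from the processor cache when the data
 * is present, skipping the round trip to system memory. */
#define LINES 4
typedef struct { int tag[LINES]; unsigned char data[LINES]; } cache_t;

/* Load one byte toward the local store, preferring the cache. */
unsigned char dma_load(const cache_t *c, const unsigned char *sysmem,
                       int addr) {
    int line = addr % LINES;        /* direct-mapped line index */
    if (c->tag[line] == addr)
        return c->data[line];       /* hit: load directly from cache */
    return sysmem[addr];            /* miss: fall back to system memory */
}
```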
-
Patent number: 7200688
Abstract: The present invention provides for asynchronous DMA command completion notification in a computer system. A command tag, associated with a plurality of DMA commands, is generated. A DMA data movement command having the command tag is grouped with another DMA data movement command having the same command tag. DMA commands belonging to the same tag group are monitored to determine whether all DMA commands of the tag group have completed.
Type: Grant
Filed: May 29, 2003
Date of Patent: April 3, 2007
Assignee: International Business Machines Corporation
Inventors: Michael Norman Day, Harm Peter Hofstee, Charles Ray Johns, Peichum Peter Liu, Thuong Quang Truong, Takeshi Yamazaki
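The tag-group completion check can be sketched in C. The flat command array and the `done` flag standing in for hardware completion status are assumptions for the example.

```c
#include <stddef.h>

/* Sketch of tag-group completion: every queued DMA command carries a
 * tag; a group completes only when every command with that tag has
 * finished, enabling one asynchronous notification per group rather
 * than one per command. */
typedef struct { int tag; int done; } dma_cmd;

/* Returns 1 iff all commands sharing `tag` have completed. */
int tag_group_complete(const dma_cmd *q, size_t n, int tag) {
    for (size_t i = 0; i < n; i++)
        if (q[i].tag == tag && !q[i].done)
            return 0;   /* at least one group member still in flight */
    return 1;
}
```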