Direct Memory Access (DMA) Patents (Class 710/308)
  • Publication number: 20140365707
    Abstract: Apparatuses, systems, methods, and computer program products are disclosed for providing a memory device with volatile and non-volatile media. A volatile memory medium is on a circuit board configured to be installed on a memory bus of a processor. A non-volatile memory medium is on the same circuit board. A mapping module is configured to selectively store data in either the volatile memory medium or the non-volatile memory medium. The data is provided by way of one or more commands from the processor.
    Type: Application
    Filed: August 21, 2014
    Publication date: December 11, 2014
    Inventors: Nisha Talagala, David Flynn
  • Publication number: 20140365705
    Abstract: A data processing device includes: a processing block which is connected to a common bus and which processes, in parallel, a plurality of data inputted simultaneously; a memory having an address space with a plurality of banks; and a common bus arbitration unit which arbitrates requests for access to the memory output from the processing block, and controls the exchange of data via the common bus between the memory and the processing block whose access request has been accepted. The processing block includes a data transfer control device which, when the processing block exchanges the data to be processed in parallel with the memory via the common bus, changes the order of access to the banks of the memory corresponding to the respective data, unifies the respective data into exchange data, and exchanges the exchange data with the memory.
    Type: Application
    Filed: May 23, 2014
    Publication date: December 11, 2014
    Applicant: OLYMPUS CORPORATION
    Inventors: Yoshinobu Tanaka, Hironobu Tomita, Akira Ueno
  • Publication number: 20140365706
    Abstract: A data-processing apparatus includes: a plurality of processing blocks connected to a common bus; a memory which includes an address space having a plurality of banks; and a common bus arbitrating section which arbitrates access requests to the memory and controls data delivery through the common bus provided between the plurality of processing blocks and the memory. At least one of the processing blocks is an exchange-processing block that exchanges the order of access to the banks in the memory when data is communicated between the memory and that processing block through the common bus. The exchange-processing block includes a data transfer control device that performs the exchange of the access order to the banks by controlling the order of the data.
    Type: Application
    Filed: May 29, 2014
    Publication date: December 11, 2014
    Applicant: OLYMPUS CORPORATION
    Inventors: Yoshinobu Tanaka, Hironobu Tomita, Akira Ueno
  • Publication number: 20140365704
    Abstract: Systems described herein enable PCIe device components to be used with multiple PCIe topologies and with host systems of varying configurations. In some cases, a number of varying PHYs and PCIe cores are utilized to increase the number of applications and/or specifications that may be satisfied with a host interface design. Further, some systems described herein may include a number of synchronizers, clock multiplier units, and selectors to create a host interface that can be configured for a number of applications. In addition to increasing flexibility, using the systems of the present disclosure for PCIe-based devices can reduce costs.
    Type: Application
    Filed: September 13, 2013
    Publication date: December 11, 2014
    Applicant: Western Digital Technologies, Inc.
    Inventor: FAROOQ YOUSUF
  • Publication number: 20140359191
    Abstract: A device comprises a central processing unit (CPU) and a memory configured for storing memory descriptors. The device also includes an analog-to-digital converter controller (ADC controller) configured for managing an analog-to-digital converter (ADC) using the memory descriptors. In addition, the device includes a direct memory access system (DMA system) configured for autonomously sequencing conversion operations performed by the ADC without CPU intervention by transferring the memory descriptors directly between the memory and the ADC controller for controlling the conversion operations performed by the ADC.
    Type: Application
    Filed: July 7, 2014
    Publication date: December 4, 2014
    Inventors: Frode Milch Pedersen, Romain Oddoart, Cedric Favier
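    The descriptor-driven sequencing described in publication 20140359191 above can be illustrated with a minimal C sketch: the DMA system walks memory-resident descriptors and drives the ADC controller without CPU intervention. The descriptor layout, the controller interface, and every name below are assumptions made purely for illustration, not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory descriptor: each one programs a single ADC conversion. */
typedef struct adc_descriptor {
    uint32_t channel;            /* ADC input channel to convert              */
    uint32_t gain;               /* front-end gain setting                    */
    uint32_t *result_addr;       /* where the DMA engine stores the result    */
    struct adc_descriptor *next; /* next descriptor; NULL ends the sequence   */
} adc_descriptor_t;

/* Hypothetical ADC controller interface. */
typedef struct {
    void     (*start_conversion)(uint32_t channel, uint32_t gain);
    uint32_t (*read_result)(void);
} adc_controller_t;

/* The DMA system walks the descriptor chain autonomously: it feeds each
 * descriptor to the ADC controller, waits for the conversion, and writes
 * the result back to memory, with no CPU involvement per conversion. */
static void dma_sequence_adc(const adc_descriptor_t *d, const adc_controller_t *adc)
{
    while (d != NULL) {
        adc->start_conversion(d->channel, d->gain);
        *d->result_addr = adc->read_result();  /* result goes straight to memory   */
        d = d->next;                           /* fetch next descriptor from memory */
    }
}
```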
  • Publication number: 20140359178
    Abstract: A microcontroller for a control unit or a vehicle control unit includes a central processing unit (CPU), at least one interface-unspecific input module, at least one interface-unspecific output module, at least one routing unit and at least one arithmetic unit for processing interface-specific information. The microcontroller is configurable so that the at least one interface-unspecific input module, the at least one interface-unspecific output module, the at least one routing unit and the at least one arithmetic unit for processing interface-specific information fulfill the functions corresponding to one of multiple serial interfaces, in particular SPI, UART, LIN, CAN, PSI5, FlexRay, SENT or Ethernet. In addition, the arithmetic unit is configured to generate an entire output message frame from the second payload data as output data and to transmit the same to the interface-unspecific output module.
    Type: Application
    Filed: May 27, 2014
    Publication date: December 4, 2014
    Applicant: Robert Bosch GmbH
    Inventors: Axel AUE, Eugen BECKER
  • Publication number: 20140359192
    Abstract: Memory system controllers can include hardware masters, first buffers, and a switch coupled to the hardware masters and to the first buffers. The switch can include second buffers and a buffer allocation management (BAM) circuit. The BAM circuit can include a buffer tag pool. The buffer tag pool can include tags, each identifying a respective first buffer or a respective second buffer. The BAM circuit can be configured to allocate a tag to a hardware master in response to an allocation request from one of the hardware masters. The BAM circuit can be configured to prioritize allocation of a tag identifying a second buffer over a tag identifying a first buffer.
    Type: Application
    Filed: July 14, 2014
    Publication date: December 4, 2014
    Inventors: Douglas A. Larson, Joseph M. Jeddeloh
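    The prioritized tag allocation in publication 20140359192 above amounts to a simple policy: hand out tags for second (switch-internal) buffers before tags for first buffers. A minimal sketch, with a hypothetical pool layout and names, follows.

```c
#include <stdbool.h>
#include <stddef.h>

enum buf_kind { FIRST_BUFFER, SECOND_BUFFER };

typedef struct {
    enum buf_kind kind;   /* which class of buffer this tag identifies */
    int           index;  /* buffer number within its class            */
    bool          in_use; /* currently allocated to a hardware master? */
} buf_tag_t;

/* Allocate a tag for a requesting hardware master, preferring tags that
 * identify second (switch-internal) buffers over first buffers. Returns
 * NULL when the pool is exhausted. */
static buf_tag_t *bam_alloc_tag(buf_tag_t *pool, size_t n)
{
    buf_tag_t *fallback = NULL;
    for (size_t i = 0; i < n; i++) {
        if (pool[i].in_use)
            continue;
        if (pool[i].kind == SECOND_BUFFER) {   /* preferred class: take it now */
            pool[i].in_use = true;
            return &pool[i];
        }
        if (fallback == NULL)                  /* remember a first-buffer tag  */
            fallback = &pool[i];
    }
    if (fallback != NULL)
        fallback->in_use = true;
    return fallback;
}
```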
  • Publication number: 20140344498
    Abstract: A method for data storage includes, in a system that includes a host having a host memory and a memory controller that is separate from the host and stores data for the host in a non-volatile memory including multiple analog memory cells, storing in the host memory information items relating to respective groups of the analog memory cells of the non-volatile memory. A command that causes the memory controller to access a given group of the analog memory cells is received from the host. In response to the command, a respective information item relating to the given group of the analog memory cells is retrieved from the host memory by the memory controller, and the given group of the analog memory cells is accessed using the retrieved information item.
    Type: Application
    Filed: August 6, 2014
    Publication date: November 20, 2014
    Inventors: Dotan Sokolov, Barak Rotbard
  • Publication number: 20140337557
    Abstract: Data storage systems and methods for storing data in computing nodes of a super computer or compute cluster are described herein. The super computer storage may be coupled with a primary storage system. In addition to a CPU and memory, non-volatile memory is included with the computing nodes as local storage. The super computer includes a plurality of computing groups, each including a plurality of computing nodes. There is one burst buffer fabric per group and one input/output node per group. When data bursts occur, data may be stored by a first computing node on the local storage of a second computing node in the computing group through the burst buffer fabric without interrupting the CPU in the second computing node. Further, the local storage of other computing nodes may be used to store redundant copies of data from a first computing node to make the super computer data resilient.
    Type: Application
    Filed: May 9, 2014
    Publication date: November 13, 2014
    Applicant: DataDirect Networks, Inc.
    Inventors: Paul Nowoczynski, Michael Vildibill, Jason Cope, Pavan Uppu
  • Publication number: 20140331000
    Abstract: A computer system and a method are provided for accessing a peripheral component interconnect express (PCIe) endpoint device. The computer system includes: a processor, a PCIe bus, and an access proxy. The access proxy connects to the processor and the PCIe endpoint device. The processor acquires an operation instruction, where the operation instruction instructs the processor to access the PCIe endpoint device through the access proxy, and sends an access request to the access proxy according to the operation instruction; the access proxy sends a response message of the access request back to the processor after receiving the access request. Because the processor does not directly access the PCIe endpoint device but instead completes the access through the access proxy, an MCE reset of the processor is avoided.
    Type: Application
    Filed: June 6, 2014
    Publication date: November 6, 2014
    Inventor: Ge Du
  • Patent number: 8880768
    Abstract: A method of operation of a storage controller system includes: accessing a first controller having a synchronization bus; accessing a second controller, by the first controller, through the synchronization bus; and receiving a first transaction layer packet by the first controller including performing a multi-cast transmission between the first controller and the second controller through the synchronization bus.
    Type: Grant
    Filed: May 20, 2011
    Date of Patent: November 4, 2014
    Assignee: Promise Technology, Inc.
    Inventors: Manoj Mathew, Jin-Lon Hon
  • Publication number: 20140325114
    Abstract: Disclosed herein is a multi-channel direct memory access (DMA) controller. The DMA controller includes: a register which stores control information and an operation state of each of a plurality of direct memory access (DMA) channels; a transmission processor which controls flow of transmission and reception of data such that all of the DMA channels requesting DMA transmission cyclically repeat unit transmission with reference to the register; and a transmission sequence control unit which controls the transmission processor such that the transmission sequence of each of the DMA channels is determined in a circulation cycle of unit transmission by reflecting priority information of the respective DMA channels stored in the register.
    Type: Application
    Filed: April 23, 2014
    Publication date: October 30, 2014
    Applicant: CORE LOGIC INC.
    Inventor: SUK-KYU SONG
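    One way to read the scheduling in publication 20140325114 above is: every requesting channel gets one unit transfer per circulation cycle, and priority only decides the order within that cycle. The sketch below models this behaviour in C; the structures and the sort-based ordering are illustrative assumptions, not the patent's implementation.

```c
#include <stdbool.h>
#include <stdlib.h>

#define NUM_DMA_CHANNELS 8

typedef struct {
    bool requesting;  /* channel has a pending DMA request         */
    int  priority;    /* lower value = served earlier in a cycle   */
    int  id;
} dma_channel_t;

static int by_priority(const void *a, const void *b)
{
    return ((const dma_channel_t *)a)->priority -
           ((const dma_channel_t *)b)->priority;
}

/* One circulation cycle: every requesting channel performs exactly one
 * unit transfer, ordered by priority, so no channel is starved while
 * higher-priority channels are still served first within each cycle. */
static void dma_circulation_cycle(dma_channel_t ch[NUM_DMA_CHANNELS],
                                  void (*unit_transfer)(int channel_id))
{
    dma_channel_t order[NUM_DMA_CHANNELS];
    int n = 0;
    for (int i = 0; i < NUM_DMA_CHANNELS; i++)
        if (ch[i].requesting)
            order[n++] = ch[i];
    qsort(order, (size_t)n, sizeof(order[0]), by_priority);
    for (int i = 0; i < n; i++)
        unit_transfer(order[i].id);
}
```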
  • Publication number: 20140317332
    Abstract: A semiconductor device may include: a storage unit configured to store program codes provided through control of a processor core; and a control unit configured to perform a control operation on a semiconductor memory device according to the program codes.
    Type: Application
    Filed: April 16, 2014
    Publication date: October 23, 2014
    Applicant: SK hynix Inc.
    Inventors: Hyung-Gyun YANG, Hyung-Dong LEE, Yong-Kee KWON, Young-Suk MOON, Hong-Sik KIM
  • Publication number: 20140317333
    Abstract: A direct memory access (DMA) controller stores a set of DMA instructions in a list, where each entry in the list includes a bit field that identifies the type of the entry. Based on the bit field, the DMA controller determines whether each DMA instruction is a buffer pointer or a jump pointer. If a DMA instruction is identified as a buffer pointer, the DMA controller transfers data to or from the location specified by the buffer pointer. If a DMA instruction is identified as a jump pointer, the DMA controller jumps to the location in the list specified by the jump pointer. A subset of the list of DMA instructions may be cached, and the DMA controller executes the cache entries sequentially. If a jump pointer is encountered in the cache, the DMA controller flushes the cache and reloads it from main memory based on the jump pointer.
    Type: Application
    Filed: April 16, 2014
    Publication date: October 23, 2014
    Inventors: Jeffrey R. Dorst, Xiang Liu
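    The buffer-pointer/jump-pointer list walk in publication 20140317333 above can be sketched as follows. The entry layout, the cache size, and the explicit end-of-list marker are assumptions added for the example; the abstract itself only mentions buffer and jump entries, and the list is assumed padded to a multiple of the cache size.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DMA_CACHE_ENTRIES 4

typedef enum { DMA_BUFFER, DMA_JUMP, DMA_END } dma_kind_t;

typedef struct {
    dma_kind_t kind;     /* decoded from the entry's bit field             */
    uint32_t   address;  /* buffer address, or list index for a jump entry */
    uint32_t   length;   /* transfer length for buffer entries             */
} dma_entry_t;

/* Stand-in for the actual data mover. */
static void dma_transfer(uint32_t addr, uint32_t len)
{
    printf("transfer %u bytes at 0x%08x\n", (unsigned)len, (unsigned)addr);
}

/* Walk the instruction list. A cached window of entries is executed
 * sequentially; hitting a jump entry flushes the cache and reloads it
 * from main memory at the jump target, as the abstract describes. */
static void dma_run(const dma_entry_t *list)
{
    dma_entry_t cache[DMA_CACHE_ENTRIES];
    uint32_t next = 0;

    for (;;) {
        memcpy(cache, &list[next], sizeof(cache));   /* reload cache window  */
        for (int i = 0; i < DMA_CACHE_ENTRIES; i++) {
            if (cache[i].kind == DMA_END)
                return;                              /* end of the list      */
            if (cache[i].kind == DMA_JUMP) {
                next = cache[i].address;             /* jump: flush + reload */
                break;
            }
            dma_transfer(cache[i].address, cache[i].length);
            if (i == DMA_CACHE_ENTRIES - 1)
                next += DMA_CACHE_ENTRIES;           /* fell off cache end   */
        }
    }
}
```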
  • Publication number: 20140310443
    Abstract: An interface unit configured to perform transfers between a processor and one or more peripheral devices is disclosed. A system includes a processor, a number of devices (e.g., peripheral devices), and an interface unit coupled therebetween. The interface unit includes FIFOs for storing data transmitted to or received from the devices by the processor. The interface unit may access data from a device responsive to a request from the processor. The data may be loaded into a FIFO according to transfer parameters controlled by the device. After the data has been received by the FIFO, the interface unit may generate an interrupt to the processor. Data may then be transferred from the interface unit to the processor according to transfer parameters controlled by the processor. The interface unit may thus homogenize a processor interface to a number of different devices.
    Type: Application
    Filed: April 11, 2013
    Publication date: October 16, 2014
    Applicant: Apple Inc.
    Inventor: Gilbert H. Herbeck
  • Publication number: 20140304449
    Abstract: A multi-core processor having a cache, an interconnect system selectively connecting the cache to individual cores, and an interconnect control whereby selected cores are disabled.
    Type: Application
    Filed: June 11, 2014
    Publication date: October 9, 2014
    Applicant: PACT XPP TECHNOLOGIES AG
    Inventors: Martin Vorbach, Robert Munch
  • Publication number: 20140297914
    Abstract: A bus system for transferring data between parts of a multiprocessor system. The bus system is divided into a plurality of segments. Each segment is controlled by a table providing routing information. The bus system establishes communication between a sender and a receiver according to the data, where the data includes an identifier identifying the source of the data transfer and/or the target of the data transfer.
    Type: Application
    Filed: March 31, 2014
    Publication date: October 2, 2014
    Applicant: PACT XPP TECHNOLOGIES AG
    Inventor: Martin Vorbach
  • Publication number: 20140289441
    Abstract: The present invention relates to a multilevel memory bus system for transferring information between at least one DMA controller and at least one solid-state semiconductor memory device, such as NAND flash memory devices or the like. This multilevel memory bus system includes at least one DMA controller coupled to an intermediate bus; a flash memory bus; and a flash buffer circuit between the intermediate bus and the flash memory bus. This multilevel memory bus system may be disposed to support: an n-bit wide bus width, such as nibble-wide or byte-wide bus widths; a selectable data sampling rate, such as a single or double sampling rate, on the intermediate bus; a configurable bus data rate, such as a single, double, quad, or octal data sampling rate; CRC protection; an exclusive busy mechanism; dedicated busy lines; or any combination of these.
    Type: Application
    Filed: June 6, 2014
    Publication date: September 25, 2014
    Inventors: Ricardo H. Bruce, Elsbeth Lauren Tagayo Villapana, Joel Alonzo Baylon
  • Publication number: 20140281101
    Abstract: In managing the storage of incoming bus traffic in a store cell memory (SCM) in a sequential-write, random-read system, a priority encoder system can be used to find the next empty cell in the sequential-write step. Each cell in the SCM has a bit that indicates whether the cell is full or empty. The priority encoder encodes the next empty cell using these bits and the current write pointer. The priority encoder can also find the next group of empty cells by being coupled to AND operators that are coupled to each group of cells. Further, a cell locator selector selects the next empty cell location among priority encoders for cell groups of various sizes according to an opcode, by appending '0's to the cell location outputs from priority encoders that are smaller than the size of the SCM.
    Type: Application
    Filed: May 27, 2014
    Publication date: September 18, 2014
    Applicant: STMicroelectronics International N.V.
    Inventor: Sandeep ROHILLA
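    The next-empty-cell search in publication 20140281101 above is a combinational priority encode; the C below is a behavioural model of what it computes from the per-cell full/empty bits and the current write pointer. The wrap-around and the aligned-group search are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stddef.h>

/* Behavioural model of the priority encoder: given one full/empty bit per
 * cell and the current write pointer, return the index of the next empty
 * cell at or after the pointer (wrapping around), or -1 if the SCM is full. */
static int next_empty_cell(const bool cell_full[], size_t num_cells,
                           size_t write_ptr)
{
    for (size_t off = 0; off < num_cells; off++) {
        size_t idx = (write_ptr + off) % num_cells;
        if (!cell_full[idx])
            return (int)idx;
    }
    return -1;  /* every cell is occupied */
}

/* Group variant: find the first aligned group of group_size cells that are
 * all empty (the hardware ANDs the group's empty bits before encoding). */
static int next_empty_group(const bool cell_full[], size_t num_cells,
                            size_t write_ptr, size_t group_size)
{
    for (size_t off = 0; off < num_cells; off += group_size) {
        size_t base = (write_ptr + off) % num_cells;
        bool all_empty = true;
        for (size_t i = 0; i < group_size; i++) {
            if (cell_full[(base + i) % num_cells]) {
                all_empty = false;
                break;
            }
        }
        if (all_empty)
            return (int)base;
    }
    return -1;
}
```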
  • Publication number: 20140281098
    Abstract: Some embodiments relate to a Direct Memory Access (DMA) controller. The DMA controller includes a bus controller having a system bus interface and configured to read a pattern from a memory location via the system bus interface. Pattern comparison logic compares the read pattern to at least one predetermined pattern. Control logic induces the bus controller to process a first conditional link over the system bus interface if the read pattern matches the predetermined pattern, and induces the bus controller to process a second conditional link over the system bus interface if the read pattern differs from the predetermined pattern.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: Infineon Technologies AG
    Inventors: Frank Hellwig, Simon Cottam, Harald Zweck
  • Publication number: 20140281100
    Abstract: Embodiments include a method for bypassing data in an active memory device. The method includes a requestor determining a number of transfers to a grantor that have not been communicated to the grantor, requesting that the interconnect network use the bypass path for the transfers based on the number of transfers meeting a threshold, and communicating the transfers via the bypass path to the grantor based on the request, the interconnect network granting control of the grantor in response to the request. The method also includes the interconnect network requesting control of the grantor based on an event and communicating delayed transfers via the interconnect network from other requestors, the delayed transfers having been delayed because the grantor was previously controlled by the requestor, the communicating occurring once control of the grantor is changed back to the interconnect network.
    Type: Application
    Filed: August 14, 2013
    Publication date: September 18, 2014
    Applicant: International Business Machines Corporation
    Inventors: Bruce M. Fleischer, Thomas W. Fox, Hans M. Jacobson, Ravi Nair, Martin Ohmacht, Krishnan Sugavanam
  • Publication number: 20140281099
    Abstract: This application relates to systems and methods for controlling the flow of transaction layer packets (TLPs) in a peripheral component interconnect express (PCIe)-based environment. In an exemplary embodiment, an arbiter in a PCIe device determines the amount of data, if any, that should be expected in response to transmission of a particular TLP. If a receive buffer of the PCIe device has enough available space for storing the expected data, the arbiter permits transmission of the particular TLP. If the receive buffer does not have enough available space for storing the expected data, the arbiter suppresses transmission of the particular TLP until the receive buffer has enough available space. The exemplary embodiment may improve data flow through the PCIe environment by reducing fragmented transfers of data.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: Broadcom Corporation
    Inventors: Refeal AVEZ, Danny Kopelev
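    The gating decision in publication 20140281099 above can be sketched as a small amount of buffer accounting: a TLP that will pull back completion data is transmitted only if the receive buffer can absorb that data. The accounting scheme and names below are assumptions, not the patented implementation.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t capacity;   /* total receive buffer space, in bytes           */
    uint32_t reserved;   /* space already promised to outstanding requests */
} rx_buffer_t;

/* How much completion data this TLP is expected to pull back: a posted
 * write or a message returns nothing, a memory read returns its request size. */
static uint32_t expected_completion_bytes(bool is_read, uint32_t read_request_len)
{
    return is_read ? read_request_len : 0;
}

/* The arbiter permits the TLP only if the receive buffer can hold the data
 * it will generate; otherwise the TLP is held back until space is released,
 * which avoids fragmented completions. */
static bool arbiter_may_transmit(rx_buffer_t *rx, bool is_read, uint32_t read_request_len)
{
    uint32_t need = expected_completion_bytes(is_read, read_request_len);
    if (rx->reserved + need > rx->capacity)
        return false;            /* suppress: not enough room for completions */
    rx->reserved += need;        /* reserve space for the expected data       */
    return true;
}

/* Called when the completion data for a previously permitted TLP has been
 * drained from the receive buffer. */
static void arbiter_completion_drained(rx_buffer_t *rx, uint32_t bytes)
{
    rx->reserved -= bytes;
}
```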
  • Patent number: 8838782
    Abstract: In a network protocol processing system in which variables of TCP transmission processing and TCP reception processing depend on each other, asynchronous parallel processing is realized between a transmission processing block and a reception processing block for updated protocol processing. Specifically, the system includes a high priority queue for transferring control data to be processed with high priority, a low priority queue for other control data, and priority control means for distributing the control data to the two kinds of queues. When a request for session establishment or session disconnection of a new TCP session is issued from an application during transmission of TCP data, data related to the session establishment and the session disconnection is notified preferentially through the high priority queue, and other control data is transferred through the low priority queue.
    Type: Grant
    Filed: July 2, 2009
    Date of Patent: September 16, 2014
    Assignee: NEC Corporation
    Inventors: Masato Yasuda, Kiyohisa Ichino
  • Publication number: 20140223069
    Abstract: Methods, controllers, and systems for managing data transfer, such as those in solid state drives (SSDs), are described. In some embodiments, the data transfer between a host and a memory is monitored and then assessed to provide an assessment result. A number of storage units of the memory allocated to service another data transfer is adjusted based on the assessment result. Additional methods and systems are also described.
    Type: Application
    Filed: April 4, 2014
    Publication date: August 7, 2014
    Applicant: Micron Technology, Inc.
    Inventor: Joe M. Jeddeloh
  • Publication number: 20140223068
    Abstract: Systems, among other embodiments, include topologies (data and/or control/address information) between an integrated circuit buffer device (that may be coupled to a master, such as a memory controller) and a plurality of integrated circuit memory devices. For example, data may be provided between the plurality of integrated circuit memory devices and the integrated circuit buffer device using separate segmented (or point-to-point link) signal paths in response to control/address information provided from the integrated circuit buffer device to the plurality of integrated circuit memory devices using a single fly-by (or bus) signal path. An integrated circuit buffer device enables configurable effective memory organization of the plurality of integrated circuit memory devices. The memory organization represented by the integrated circuit buffer device to a memory controller may be different than the actual memory organization behind or coupled to the integrated circuit buffer device.
    Type: Application
    Filed: August 30, 2013
    Publication date: August 7, 2014
    Applicant: Rambus Inc.
    Inventors: Ian Shaeffer, Ely Tsern, Craig Hampel
  • Publication number: 20140207991
    Abstract: A device and method for communicating, via a memory-mapped communication path, between a host processor and a cellular-communication modem are disclosed. The method includes providing logical channels over the memory-mapped communication path and transporting data organized according to one or more cellular communication protocols over at least one of the logical channels. In addition, the method includes acknowledging when data transfer occurs between the host processor and the cellular-communication modem, issuing commands between the host processor and the cellular-communication modem, and communicating and managing a power state via one or more of the logical channels.
    Type: Application
    Filed: January 24, 2014
    Publication date: July 24, 2014
    Applicant: Qualcomm Innovation Center, Inc.
    Inventors: Vinod H. Kaushik, Igor Malamant, Sergio Kolor
  • Publication number: 20140207992
    Abstract: An information processor includes a central processing unit core and a direct memory access unit connected to the central processing unit core. The information processor further includes at least one tightly coupled smart memory unit connected to the central processing unit core. The at least one tightly coupled smart memory unit includes a memory unit, and a local processing unit adapted to process data stored in the memory unit, wherein the memory unit is adapted to be accessed by the central processing unit core and the local processing unit, and the local processing unit is separate from the central processing unit core and the direct memory access unit.
    Type: Application
    Filed: March 20, 2014
    Publication date: July 24, 2014
    Applicant: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
    Inventor: Shyh-An CHI
  • Publication number: 20140207993
    Abstract: A memory hub includes first and second link interfaces for coupling to respective data busses, a data path coupled to the first and second link interfaces and through which data is transferred between the first and second link interfaces, and further includes a write bypass circuit coupled to the data path to couple write data on the data path and temporarily store the write data to allow read data to be transferred through the data path while the write data is temporarily stored. A method for writing data to a memory location in a memory system is provided which includes accessing read data in the memory system, providing write data to the memory system, and coupling the write data to a register for temporary storage. The write data is recoupled to the memory bus and written to the memory location following provision of the read data.
    Type: Application
    Filed: March 24, 2014
    Publication date: July 24, 2014
    Applicant: Micron Technology, Inc.
    Inventors: Douglas A. Larson, Jeffrey J. Cronin
  • Publication number: 20140201417
    Abstract: A system can include a host processor connected to memory via a system memory bus, and at least one offload processor module. The offload processor module includes at least one offload processor mounted on the module and configured to execute operations on data received over the system memory bus, to output context data to memory, and to read context data from the memory, as well as hardware scheduling logic mounted on the module and configured to control operations of the at least one offload processor.
    Type: Application
    Filed: June 8, 2013
    Publication date: July 17, 2014
    Inventors: Parin Bhadrik Dalal, Stephen Paul Belair
  • Publication number: 20140201416
    Abstract: A method can include receiving write data over a system memory bus via an in-line module connector, the write data including a metadata portion identifying a processing to be performed on at least a portion of the write data; performing the processing on at least a portion of the write data with at least one offload processor mounted on a module having the in-line module connector to generate processed data; and transmitting the processed data over the system memory bus; wherein the system memory bus is further connected to at least one processor connector configured to receive at least one host processor different from the at least one offload processor.
    Type: Application
    Filed: June 8, 2013
    Publication date: July 17, 2014
    Inventors: Parin Bhadrik Dalal, Stephen Paul Belair
  • Patent number: 8782317
    Abstract: A computer system and a method are provided for accessing a peripheral component interconnect express (PCIe) endpoint device. The computer system includes: a processor, a PCIe bus, and an access proxy. The access proxy connects to the processor and the PCIe endpoint device. The processor acquires an operation instruction, where the operation instruction instructs the processor to access the PCIe endpoint device through the access proxy, and sends an access request to the access proxy according to the operation instruction; the access proxy sends a response message of the access request back to the processor after receiving the access request. Because the processor does not directly access the PCIe endpoint device but instead completes the access through the access proxy, an MCE reset of the processor is avoided.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: July 15, 2014
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Ge Du
  • Patent number: 8780168
    Abstract: DMA transfer of audio and video data. The audio and video data may be received over a serial bus. A DMA engine may provide audio and video data packets to data storage logic based on the audio and video data. The DMA engine may provide each of the audio data packets with a first, same destination address of a first memory and may provide each of the video data packets with a second, same destination address of the first memory. The data storage logic may maintain first and second pointers that indicate a next current memory location for audio data in a first buffer and video data in a second buffer in the first memory, respectively. The data storage logic may receive and store the audio and video data packets at respective locations in the first and second buffers based on current values of the first and second pointers.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: July 15, 2014
    Assignee: Logitech Europe S.A.
    Inventors: Robert A. Corley, Patrick R. McKinnon, Stefan F. Slivinski
  • Patent number: 8775694
    Abstract: A device comprises a central processing unit (CPU) and a memory configured for storing memory descriptors. The device also includes an analog-to-digital converter controller (ADC controller) configured for managing an analog-to-digital converter (ADC) using the memory descriptors. In addition, the device includes a direct memory access system (DMA system) configured for autonomously sequencing conversion operations performed by the ADC without CPU intervention by transferring the memory descriptors directly between the memory and the ADC controller for controlling the conversion operations performed by the ADC.
    Type: Grant
    Filed: September 21, 2012
    Date of Patent: July 8, 2014
    Assignee: Atmel Corporation
    Inventors: Frode Milch Pedersen, Romain Oddoart, Cedric Favier
  • Patent number: 8775711
    Abstract: The inventive concept relates to a user system including a solid state disk. The user system may include a main memory for storing data processed by a central processing unit, and a solid state disk for storing selected data from among the data stored in the main memory. The main memory and the solid state disk form a single memory hierarchy. Thus, the user system of the inventive concept can rapidly process data.
    Type: Grant
    Filed: February 10, 2011
    Date of Patent: July 8, 2014
    Assignee: Industry-Academic Cooperation Foundation, Yonsei University
    Inventors: Eui-Young Chung, Kwanhu Bang
  • Publication number: 20140189186
    Abstract: Memory bus attached Input/Output (‘I/O’) subsystem management in a computing system, the computing system including an I/O subsystem communicatively coupled to a memory bus, including: detecting, by an I/O subsystem device driver, a hibernation request; setting, by the I/O subsystem device driver, a predetermined memory address to a value indicating that the I/O subsystem is not to service system requests; detecting, by the I/O subsystem device driver, that the I/O subsystem device driver has been restarted; and setting, by the I/O subsystem device driver, the predetermined memory address to a value indicating that the I/O subsystem can resume servicing system requests.
    Type: Application
    Filed: December 28, 2012
    Publication date: July 3, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: JIMMY G. FOSTER, SR., SUMEET KOCHAR, RANDOLPH S. KOLVICK, MAKOTO ONO
  • Publication number: 20140164666
    Abstract: A server provides a sharing method for a peripheral component interconnect express (PCIe) interface to one or more servers. The server receives an access request from a virtual machine to access a sharing unit, and transmits a model number of the sharing unit with the PCIe interface and a memory address of a PCIe base address register (BAR). The server establishes a first window in a storage device of the virtual machine, and maps the first window to a memory of the PCIe BAR of the sharing unit. The server further establishes a second window in a storage device of the server, and maps the second window to the storage device of the virtual machine.
    Type: Application
    Filed: May 23, 2013
    Publication date: June 12, 2014
    Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventor: JIA-RU YANG
  • Publication number: 20140164667
    Abstract: Memory system architectures, memory modules, processing systems and methods are disclosed. In various embodiments, a memory system architecture includes a source configured to communicate signals to a memory device. At least one memory cube may be coupled to the source by a communications link having more than one communications path. The memory cube may include a memory device operably coupled to a routing switch that selectively communicates the signals between the source and the memory device.
    Type: Application
    Filed: February 17, 2014
    Publication date: June 12, 2014
    Applicant: Micron Technology, Inc.
    Inventor: David R. Resnick
  • Publication number: 20140156903
    Abstract: Techniques are generally described related to a scalable storage system. One example scalable storage system may include a first storage channel including a first storage node, a second storage node, and a first serial link. The first storage node is coupled with the second storage node via the first serial link. The scalable storage system may include a multi-channel interface including a first input-channel coupled with the first storage node and a first output-channel coupled with the second storage node. For a first request transmitted from a computer system and received by the multi-channel interface, the multi-channel interface is configured to direct the first request via the first input-channel to the first storage node of the first storage channel. The first storage node is configured to process the first request.
    Type: Application
    Filed: November 15, 2012
    Publication date: June 5, 2014
    Applicant: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Hui Huang Chang
  • Publication number: 20140156902
    Abstract: According to some embodiments, a method and apparatus are provided to receive a first data burst associated with a first data line and a second data burst associated with a second data line, determine a first one or more stuff bits to be transmitted after the first data burst and a second one or more stuff bits to be transmitted after the second data burst, and output data comprising the first data burst and the first one or more stuff bits and the second data burst and the second one or more stuff bits.
    Type: Application
    Filed: March 30, 2012
    Publication date: June 5, 2014
    Inventors: William Dawson Kesling, Howard S. David, Michael Williams
  • Patent number: 8745306
    Abstract: A multiprocessor system comprises at least one processing module, at least one I/O module, and an interconnect network to connect the at least one processing module with the at least one input/output module. In an example embodiment, the interconnect network comprises at least two bridges to send and receive transactions between the input/output modules and the processing module. The interconnect network further comprises at least two crossbar switches to route the transactions over a high bandwidth switch connection. Using embodiments of the interconnect network allows high bandwidth communication between processing modules and I/O modules. Standard processing module hardware can be used with the interconnect network without modifying the BIOS or the operating system. Furthermore, using the interconnect network of embodiments of the present invention is non-invasive to the processor motherboard. The processor memory bus, clock, and reset logic all remain intact.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: June 3, 2014
    Assignee: Intel Corporation
    Inventors: Linda J. Rankin, Paul R. Pierce, Gregory E. Dermer, Wen-Hann Wang, Kai Cheng, Richard H. Hofsheier, Nitin Y. Borkar
  • Patent number: 8745299
    Abstract: A removable electronic circuit card having both a memory module with a non-volatile mass storage memory and a separate input-output module, so that data transfers may be made through the input-output module directly to and from the mass storage memory in a direct memory access (DMA) type transfer when the card is inserted into the host system, but without having to pass the data through the host system. Once the host gives a DMA command, the data transfer is accomplished independently of the host system, except for the host supplying power and possibly a clock signal and other like support, during such a data transfer directly with the card. The data for the transfer can be communicated between the input-output module and the exterior device through either wireless or electrical connection means.
    Type: Grant
    Filed: October 7, 2011
    Date of Patent: June 3, 2014
    Assignee: SanDisk Technologies Inc.
    Inventors: Aviad Zer, Yosi Pinto, Micky Holtzman, Yoram Cedar
  • Publication number: 20140149625
    Abstract: A DMA optimization circuit transfers data from a single source device to a plurality of destination devices on a computer bus. A first DMA control circuit is configured to transfer a payload of data from the source device to a first destination device, where the payload of data is divided into a plurality of chunks of data. A second DMA control circuit is configured to transfer the payload of data from the source device to a second destination device, and is further configured to perform a logical operation on the data transferred to the second destination device. A synchronization controller is configured to control each DMA control circuit to independently transfer the chunk of data, and receives a signal indicating that both DMA control circuits have finished transferring the corresponding chunk of data. The synchronization controller then initiates transfer of the next chunk of data only when both DMA control circuits have finished transferring the corresponding chunk of data.
    Type: Application
    Filed: February 14, 2013
    Publication date: May 29, 2014
    Inventors: Tal Sharifie, Shay Benisty, Yair Baram
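    The chunk-level synchronization in publication 20140149625 above can be sketched as follows: both DMA control circuits handle the same chunk (the second also applying a logical operation, XOR chosen here arbitrarily), and the next chunk is released only when both are done. In hardware the two circuits run in parallel; the sequential calls below are for illustration only, and all names are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

#define CHUNK_SIZE 4096u

/* Hypothetical per-circuit transfer routines; in hardware these run in
 * parallel, here they are called back to back for illustration. */
static void dma0_copy_chunk(const uint8_t *src, uint8_t *dst0, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst0[i] = src[i];                 /* plain copy to destination 0              */
}

static void dma1_xor_chunk(const uint8_t *src, uint8_t *dst1, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst1[i] ^= src[i];                /* logical operation (XOR) on destination 1 */
}

/* Synchronization controller: release the next chunk only after both DMA
 * control circuits have signalled completion of the current one. */
static void sync_transfer(const uint8_t *src, uint8_t *dst0, uint8_t *dst1,
                          size_t total)
{
    for (size_t off = 0; off < total; off += CHUNK_SIZE) {
        size_t len = (total - off < CHUNK_SIZE) ? total - off : CHUNK_SIZE;
        dma0_copy_chunk(src + off, dst0 + off, len);  /* circuit 0 finishes...        */
        dma1_xor_chunk(src + off, dst1 + off, len);   /* ...and circuit 1             */
        /* both done: the controller now advances to the next chunk                   */
    }
}
```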
  • Publication number: 20140149626
    Abstract: The present invention discloses a receiver and a method for data processing. The receiver includes a system on chip and a memory, where the system on chip is connected to the memory through an external buffer bus; the system on chip includes an LLR subsystem, a controller, a rate matching module, an incremental redundancy (IR) reconstructing module, and a combiner, where the LLR subsystem is connected to the controller and the rate matching module respectively; the controller is connected to the IR reconstructing module, and the rate matching module and the IR reconstructing module are connected to the combiner respectively. The controller stores LLR data currently corresponding to a data block demodulated by the LLR subsystem into the memory, and reads LLR data historically corresponding to the data block and stored in the memory into the IR reconstructing module when the data block is a retransmitted data block.
    Type: Application
    Filed: November 22, 2013
    Publication date: May 29, 2014
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yu LIU, Ying Liu
  • Patent number: 8738833
    Abstract: A collaborative bus arbitration multiplex architecture includes a main memory, a bus, a plurality of BMPDs, and a BAM. Arbitration can be done according to the following steps: A) awaiting whether any of the BMPDs renders any request for access; B) identifying whether the access authority of the bus is being fetched by any other BMPDs; C) identifying whether the main memory to which the request for access corresponds has any record that the corresponding BMPD needs special treatment; D) identifying whether all of the BMPDs have rendered the requests for access; E) according to a generic arbitration principle, identifying whether the corresponding BMPDs indicated in the steps C) and D) win the access authority; F) yielding the access authority of the bus to the BMPDs winning the access authority as indicated in the step E); and G) accessing the main memory.
    Type: Grant
    Filed: May 17, 2012
    Date of Patent: May 27, 2014
    Assignee: An Chen Computer Co., Ltd.
    Inventor: Sung-Jung Wang
  • Publication number: 20140143470
    Abstract: Embodiments of a multi-processor array are disclosed that may include a plurality of processors, local memories, configurable communication elements, and direct memory access (DMA) engines, as well as a DMA controller. Each processor may be coupled to one of the local memories, and the plurality of processors, local memories, and configurable communication elements may be coupled together in an interspersed arrangement. The DMA controller may be configured to control the operation of the plurality of DMA engines.
    Type: Application
    Filed: March 8, 2013
    Publication date: May 22, 2014
    Applicant: COHERENT LOGIX, INCORPORATED
    Inventors: Carl S. Dobbs, Michael R. Trocino, Keith M. Bindloss
  • Publication number: 20140136748
    Abstract: An apparatus may include a processor and first logic operable on the processor to output a direct memory access (DMA) activity indicator to indicate a current state of activity of direct memory access data transfer operations. The apparatus may further include second logic operable on the processor to determine scheduled DMA activity to be performed; and third logic operable on the processor to output a pre-wake indicator to a controller before the scheduled DMA activity is to be performed, to satisfy both Quality of Service (QoS) and power-saving needs. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: October 3, 2012
    Publication date: May 15, 2014
    Inventors: Choon Gun Por, Sern Hong Phan
  • Publication number: 20140129749
    Abstract: A structure and method of allocating read buffers among multiple bus agents requesting read access in a multi-processor computer system. The number of outstanding reads a requestor may have is limited dynamically based on the current function it is executing, instead of on the local buffer space available or a fixed allocation, which improves the overall bandwidth of the requestors sharing the buffers. A requesting bus agent may control when read data may be returned from shared buffers to minimize the amount of local buffer space allocated for each requesting agent, while maintaining high bandwidth output for local buffers. Requests can be made for virtual buffers by oversubscribing the physical buffers and controlling the return of read data to the buffers.
    Type: Application
    Filed: November 5, 2012
    Publication date: May 8, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian Mitchell Bass, Kenneth Anthony Lauricella
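    The oversubscription idea in publication 20140129749 above separates virtual buffer grants from physical buffer occupancy, with a per-requestor dynamic limit on outstanding reads. The sketch below is a loose behavioural model; the counts, the limit policy, and all names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define PHYSICAL_BUFFERS 16
#define VIRTUAL_BUFFERS  32   /* oversubscribed: more virtual slots than physical */

typedef struct {
    uint32_t outstanding;     /* reads this requestor currently has in flight   */
    uint32_t limit;           /* dynamic cap set from the function it executes  */
} requestor_t;

typedef struct {
    uint32_t virtual_in_use;  /* granted virtual buffer slots                   */
    uint32_t physical_filled; /* physical buffers currently holding read data   */
} buffer_pool_t;

/* Grant a new read only if the requestor is under its dynamic limit and a
 * virtual slot is free; physical buffers are consumed later, when the read
 * data is actually returned, so the pool can be oversubscribed. */
static bool grant_read(buffer_pool_t *pool, requestor_t *req)
{
    if (req->outstanding >= req->limit)
        return false;
    if (pool->virtual_in_use >= VIRTUAL_BUFFERS)
        return false;
    pool->virtual_in_use++;
    req->outstanding++;
    return true;
}

/* The requestor controls when read data may be returned: data is accepted
 * into a physical buffer only when one is free, keeping local buffering small. */
static bool accept_read_data(buffer_pool_t *pool)
{
    if (pool->physical_filled >= PHYSICAL_BUFFERS)
        return false;             /* hold the read data back for now */
    pool->physical_filled++;
    return true;
}
```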
  • Publication number: 20140122634
    Abstract: Methods, apparatus, and computer platforms and architectures employing node aware network interfaces are disclosed. The methods and apparatus may be implemented on computer platforms such as those employing a Non-uniform Memory Access (NUMA) architecture including a plurality of nodes, each node comprising a plurality of components including a processor having at least one level of memory cache and being operatively coupled to system memory and operatively coupled to a NUMA aware Network Interface Controller (NIC). Under one method, a packet is received from a network at a first NIC comprising a component of a first node, and a determination is made that packet data for the packet is to be forwarded to a second node including a second NIC. The packet data is then forwarded from the first NIC to the second NIC via a NIC-to-NIC interconnect link. Upon being received at the second NIC, processing of the packet (data) is handled as if the packet was received from the network at the second NIC.
    Type: Application
    Filed: October 29, 2012
    Publication date: May 1, 2014
    Inventors: Patrick Conner, Chris Pavlas, Elizabeth M. Kappler, Matthew A. Jared, Duke C. Hong, Scott P. Dubal
  • Publication number: 20140122765
    Abstract: A PCIe fabric includes at least one PCIe switch. The fabric may be used to connect multiple hosts. The PCIe switch implements security and segregation measures for host-to-host message communication. A management entity defines a Virtual PCIe Fabric ID (VPFID). The VPFID is used to enforce security and segregation. The fabric ID may be extended to be used in switch fabrics with other point-to-point protocols.
    Type: Application
    Filed: October 25, 2012
    Publication date: May 1, 2014
    Applicant: PLX TECHNOLOGY, INC.
    Inventors: Nagarajan SUBRAMANIYAN, Jack REGULA, Jeffrey Michael DODSON
  • Publication number: 20140108696
    Abstract: Embodiments provide access to a memory over a high speed serial link at slower speeds than the high speed serial link's regular operation. An embodiment may comprise a memory apparatus with a differential receiver coupled to a protocol recognition circuit, and a low speed receiving circuit that has a first receiver coupled with a first input of the differential receiver and a second receiver coupled with a second input of the differential receiver, wherein the low speed receiving circuit is coupled with the protocol recognition circuit, allowing the first and second receivers to access the protocol recognition circuit at a different frequency than the differential receiver.
    Type: Application
    Filed: December 18, 2013
    Publication date: April 17, 2014
    Inventors: David J. Zimmerman, Michael W. Williams