Direct Memory Access (e.g., DMA) Patents (Class 710/308)
  • Patent number: 10417151
    Abstract: A method of real-time data acquisition in a processing component using chained direct memory access (DMA) channels includes receiving a DMA event signal in a DMA controller of the processing component, and executing, responsive to the DMA event signal, DMAs to read at least one data sample from a peripheral device. A last DMA performs a write operation to acknowledge completion of the DMA event.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: September 17, 2019
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventor: Sreeram Subramanian
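    Sketch: a minimal C model of the chained-DMA completion pattern described in the abstract above, where one DMA event fires a chain of transfers and the last link's transfer is the acknowledgment write. The descriptor layout and every name (dma_desc_t, dma_event_service, adc_data) are illustrative assumptions, not details taken from the patent.
    ```c
    /* Software model only: a real controller walks the chain in hardware,
     * without CPU involvement; the loop below stands in for that. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct dma_desc {
        volatile uint32_t *src;   /* peripheral data register (or ack source)  */
        uint32_t          *dst;   /* sample buffer, or the ack word             */
        struct dma_desc   *next;  /* next chained descriptor, NULL at the end   */
    } dma_desc_t;

    static void dma_event_service(dma_desc_t *chain)
    {
        for (dma_desc_t *d = chain; d != NULL; d = d->next)
            *d->dst = *d->src;    /* each link copies one word                  */
    }

    int main(void)
    {
        volatile uint32_t adc_data = 0x1234;  /* stand-in peripheral register   */
        uint32_t ack_value = 1;               /* value whose write means "done" */
        uint32_t sample = 0, ack = 0;

        dma_desc_t ack_desc  = { &ack_value, &ack, NULL };         /* last DMA  */
        dma_desc_t read_desc = { &adc_data,  &sample, &ack_desc }; /* reads data */

        dma_event_service(&read_desc);   /* one DMA event, the whole chain runs */
        printf("sample=0x%x ack=%u\n", (unsigned)sample, (unsigned)ack);
        return 0;
    }
    ```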
  • Patent number: 10365830
    Abstract: A method, device, and system for implementing hardware acceleration processing, where the method includes performing memory-mapped input/output (MMIO) processing on a data buffer address of a hardware acceleration processor in order to obtain an address in the addressing space of a central processing unit (CPU). In addition, a network adapter has a remote direct memory access (RDMA) or a direct memory access (DMA) function. Alternatively, a network adapter of a hardware acceleration device can directly send received data on which the hardware acceleration processing is to be performed to a hardware acceleration processor. In this way, resource consumption is reduced when the CPU of a computer device receives and forwards the data on which the hardware acceleration processing is to be performed, and storage space in the memory of the computer device is saved.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: July 30, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jian He, Xiaoke Ni, Yu Liu, Jinshui Liu
  • Patent number: 10365824
    Abstract: Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the memory management unit (MMU) and the translation request corresponds to a read request, a read operation is allowed to the first page. Otherwise, if the translation request corresponds to a write request, a write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: July 30, 2019
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Wade K. Smith, Anthony Asaro
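    Sketch: a small C illustration of the permission encoding described above: clearing both read and write bits marks a PTE as "migration pending"; reads still translate, while writes get a silent retry. Bit positions, the extra fault case, and all names are assumptions for illustration only.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PTE_READ   (1u << 0)   /* assumed permission bit positions */
    #define PTE_WRITE  (1u << 1)

    typedef enum { ACCESS_ALLOW, ACCESS_SILENT_RETRY, ACCESS_FAULT } access_result_t;

    static bool migration_pending(uint64_t pte)
    {
        /* Both permissions disabled is read as "migration pending". */
        return (pte & (PTE_READ | PTE_WRITE)) == 0;
    }

    static access_result_t translate(uint64_t pte, bool is_write)
    {
        if (migration_pending(pte))
            return is_write ? ACCESS_SILENT_RETRY   /* block, client retries later   */
                            : ACCESS_ALLOW;         /* reads proceed during migration */
        /* Ordinary permission check when no migration is in flight. */
        return (pte & (is_write ? PTE_WRITE : PTE_READ)) ? ACCESS_ALLOW : ACCESS_FAULT;
    }

    int main(void)
    {
        static const char *names[] = { "allow", "silent-retry", "fault" };
        uint64_t pte = 0;   /* R and W cleared: the page is mid-migration */
        printf("read : %s\n", names[translate(pte, false)]);
        printf("write: %s\n", names[translate(pte, true)]);
        return 0;
    }
    ```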
  • Patent number: 10346323
    Abstract: A data transfer device includes a buffer unit temporarily storing transfer data to be transferred to a common bus, a write control unit writing input data as the transfer data to the buffer unit and outputting a notification signal, a read control unit reading the transfer data from the buffer unit, an interface unit transferring the transfer data to the common bus according to a predetermined bus protocol, and a band-smoothing unit which smoothes a band of the common bus by delaying the notification signal, generating a read control signal, and outputting the read control signal to the read control unit, wherein when previous transfer data which has been read from the buffer unit already exists and a DMA transfer receipt ACK responding to a DMA transfer request REQ is received by the interface unit, the read control unit reads current transfer data from the buffer unit.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: July 9, 2019
    Assignee: OLYMPUS CORPORATION
    Inventors: Yoshinobu Tanaka, Akira Ueno
  • Patent number: 10331588
    Abstract: Ensuring the appropriate utilization of system resources using weighted workload based, time-independent scheduling, including: receiving an I/O request associated with an entity; determining whether an amount of system resources required to service the I/O request is greater than an amount of available system resources in a storage system; responsive to determining that the amount of system resources required to service the I/O request is greater than the amount of available system resources in the storage system: queueing the I/O request in an entity-specific queue for the entity; detecting that additional system resources in the storage system have become available; and issuing an I/O request from an entity-specific queue for an entity that has a highest priority, where a priority for each entity is determined based on the amount of I/O requests associated with the entity and a weighted proportion of resources designated for use by the entity.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: June 25, 2019
    Assignee: Pure Storage, Inc.
    Inventors: Yuval Frandzel, Kiron Vijayasankar
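    Sketch: a C illustration of the issue decision described above: when resources free up, pick the entity whose backlog of queued I/O, weighted by its designated share of resources, gives it the highest priority. The priority formula and names are assumptions used only to show the weighted, time-independent selection.
    ```c
    #include <stdio.h>

    typedef struct {
        const char *name;
        int         queued_ios;  /* I/O requests waiting in this entity's queue */
        double      weight;      /* proportion of resources designated to it    */
    } entity_t;

    /* Illustrative priority: more queued work and a larger designated share
     * both strengthen an entity's claim on newly available resources. */
    static double priority(const entity_t *e) { return e->queued_ios * e->weight; }

    static const entity_t *pick_next(const entity_t *e, int n)
    {
        const entity_t *best = &e[0];
        for (int i = 1; i < n; i++)
            if (priority(&e[i]) > priority(best))
                best = &e[i];
        return best;
    }

    int main(void)
    {
        entity_t entities[] = {
            { "tenant-a", 12, 0.25 }, { "tenant-b", 4, 0.50 }, { "tenant-c", 9, 0.25 },
        };
        /* More resources became available: issue a queued I/O from the winner. */
        printf("issue next I/O from: %s\n", pick_next(entities, 3)->name);
        return 0;
    }
    ```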
  • Patent number: 10332574
    Abstract: An embedded memory includes a memory interface circuit, a cell array, and a peripheral circuit. The memory interface circuit receives at least a clock signal, a non-clock signal, and a setup-hold time control setting, and includes a programmable path delay circuit that is used to set a path delay of at least one of a clock path and a non-clock path according to the setup-hold time control setting. The clock path is used to deliver the clock signal, and the non-clock path is used to deliver the non-clock signal. The peripheral circuit is used to access the cell array according to at least the clock signal provided from the clock path and the non-clock signal.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: June 25, 2019
    Assignee: MEDIATEK INC.
    Inventor: Chia-Wei Wang
  • Patent number: 10289344
    Abstract: Managing input/output (‘I/O’) queues in a data storage system, including: receiving, by a host that is coupled to a plurality of storage devices via a storage network, a plurality of I/O operations to be serviced by a target storage device; determining, for each of a plurality of paths between the host and the target storage device, a data transfer maximum associated with the path; determining, for one or more of the plurality of paths, a cumulative amount of data to be transferred by I/O operations pending on the path; and selecting a target path for transmitting one or more of the plurality of I/O operations to the target storage device in dependence upon the cumulative amount of data to be transferred by I/O operations pending on the path and the data transfer maximum associated with the path.
    Type: Grant
    Filed: April 29, 2018
    Date of Patent: May 14, 2019
    Assignee: Pure Storage, Inc.
    Inventors: Ronald Karr, John Mansperger
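    Sketch: a C illustration of the path selection described above: for each path, compare the cumulative data already pending against that path's data transfer maximum, and send the next I/O down a path that still has room. The headroom rule and field names are assumptions, not the patent's exact algorithm.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        uint64_t    pending_bytes;  /* cumulative data of I/Os pending on the path */
        uint64_t    transfer_max;   /* data transfer maximum for the path          */
    } path_t;

    static const path_t *select_path(const path_t *paths, int n, uint64_t io_bytes)
    {
        const path_t *best = NULL;
        for (int i = 0; i < n; i++) {
            if (paths[i].pending_bytes + io_bytes > paths[i].transfer_max)
                continue;                             /* would exceed the maximum */
            uint64_t headroom = paths[i].transfer_max - paths[i].pending_bytes;
            if (best == NULL || headroom > best->transfer_max - best->pending_bytes)
                best = &paths[i];                     /* keep the roomiest path   */
        }
        return best;                                  /* NULL: defer until space  */
    }

    int main(void)
    {
        path_t paths[] = {
            { "path0", 6u << 20, 8u << 20 }, { "path1", 1u << 20, 8u << 20 },
        };
        const path_t *p = select_path(paths, 2, 512u * 1024u);
        printf("send I/O via %s\n", p ? p->name : "(none, defer)");
        return 0;
    }
    ```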
  • Patent number: 10289329
    Abstract: A method, data processing system and program product utilize dynamic logical storage volume sizing for burst buffers or other local storage for computing nodes to optimize job stage in, execution and/or stage out.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: May 14, 2019
    Assignee: International Business Machines Corporation
    Inventors: Thomas M. Gooding, David L. Hermsmeier, Jin Ma, Gary J. Mincher, Bryan S. Rosenburg
  • Patent number: 10241951
    Abstract: A method of transferring data between a host and a PCI device is disclosed. The method comprises mapping a fixed memory-mapping control block in a host memory of the host to a control register of a memory-mapping unit of the PCI device; mapping a dynamic data-access memory block in the host memory to a default memory block in a memory of the PCI device, wherein the memory-mapping unit translates an address between the dynamic data-access memory block and a memory block in the memory of the PCI device; and dynamically modifying a value in the control register of the memory-mapping unit through the fixed memory-mapping control block such that an address of the dynamic data-access memory block in the host memory is translated to a different address in the memory of the PCI device based on the modified value in the control register of the memory-mapping unit.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: March 26, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Hani Ayoub, Adi Habusha, Ronen Shitrit
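    Sketch: a C software model of the remapping scheme above: the host keeps one fixed control block and one dynamic data-access block, and retargets the dynamic block at a different region of device memory simply by writing a new value through the control block. The window size and structure names are illustrative assumptions.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW_SIZE 4096u

    static uint8_t device_mem[4 * WINDOW_SIZE];   /* stand-in for PCI device memory */

    typedef struct {
        uint32_t window_base;  /* value the host writes via the fixed control block */
    } mmu_ctrl_t;

    /* Translate an offset inside the host's dynamic data-access block into an
     * address in device memory, using the current control-register value. */
    static uint8_t *translate(const mmu_ctrl_t *ctrl, uint32_t offset)
    {
        return &device_mem[ctrl->window_base + (offset % WINDOW_SIZE)];
    }

    int main(void)
    {
        mmu_ctrl_t ctrl = { .window_base = 0 };

        *translate(&ctrl, 0) = 0xAA;            /* host write lands in region 0   */
        ctrl.window_base = 2 * WINDOW_SIZE;     /* "modify the control register"  */
        *translate(&ctrl, 0) = 0xBB;            /* same host offset, region 2 now */

        printf("region0[0]=0x%02X region2[0]=0x%02X\n",
               device_mem[0], device_mem[2 * WINDOW_SIZE]);
        return 0;
    }
    ```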
  • Patent number: 10235195
    Abstract: In accordance with embodiments of the present disclosure, an information handling system may include a processor subsystem having access to a memory subsystem and a device communicatively coupled to the processor subsystem, the device having an endpoint assigned for access by an operating system executing on the processor subsystem such that the endpoint appears to the operating system as a logical hardware adapter, wherein the device is configured to discover a private device coupled to the device, enumerate the private device as a managed device of the device, and map a portion of a virtual address space of an operating system executing on the processor subsystem to the private device, such that the private device is abstracted to the operating system as a virtual memory address of the operating system.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: March 19, 2019
    Assignee: Dell Products L.P.
    Inventors: Shyam T. Iyer, Gaurav Chawla, Duk M. Kim, Srikrishna Ramaswamy
  • Patent number: 10133694
    Abstract: Embodiments of the present disclosure use vendor defined messages (VDMs) to send high priority information (e.g., cache writebacks) on a designated channel that is separate from a channel used for other commands (e.g., normal memory write commands). By using VDMs and a designated channel to send cache writebacks, the cache writebacks will not be blocked by normal memory write commands. For example, an endpoint device may encode cache writebacks as VDMs to be sent to a root complex. The root complex may store the VDMs in a dedicated VDM buffer and send the VDMs on a dedicated VDM channel.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: November 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: Eric N. Lais, Adalberto G. Yanes
  • Patent number: 10083347
    Abstract: Automated facial recognition is performed by operation of a convolutional neural network including groups of layers in which the first, second, and third groups include a convolution layer, a max-pooling layer, and a parametric rectified linear unit activation function layer. A fourth group of layers includes a convolution layer and a parametric rectified linear unit activation function layer.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: September 25, 2018
    Assignee: NTech lab LLC
    Inventors: Artem Kuharenko, Sergey Ovcharenko, Alexander Uldin
  • Patent number: 10067706
    Abstract: Providing memory bandwidth compression using compression indicator (CI) hint directories in a central processing unit (CPU)-based system is disclosed. In this regard, a compressed memory controller provides a CI hint directory comprising a plurality of CI hint directory entries, each providing a plurality of CI hints. The compressed memory controller is configured to receive a memory read request comprising a physical address of a memory line, and initiate a memory read transaction comprising a requested read length value. The compressed memory controller is further configured to, in parallel with initiating the memory read transaction, determine whether the physical address corresponds to a CI hint directory entry in the CI hint directory. If so, the compressed memory controller reads a CI hint from the CI hint directory entry of the CI hint directory, and modifies the requested read length value of the memory read transaction based on the CI hint.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: September 4, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan
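    Sketch: a C illustration of the compression-indicator (CI) hint lookup described above: a small directory is probed in parallel with the memory read, and on a hit the requested read length is trimmed to the hinted compressed size. The directory geometry, sizes, and names are assumptions.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LINE_BYTES  128u   /* uncompressed memory line size (assumed) */
    #define CI_ENTRIES  64u    /* entries in the tiny CI hint directory   */

    typedef struct {
        bool     valid;
        uint64_t line_addr;        /* physical address of the memory line   */
        uint32_t compressed_len;   /* CI hint: bytes actually worth reading */
    } ci_entry_t;

    static ci_entry_t ci_dir[CI_ENTRIES];

    static uint32_t read_length_for(uint64_t phys_addr)
    {
        uint32_t idx = (uint32_t)((phys_addr / LINE_BYTES) % CI_ENTRIES);
        const ci_entry_t *e = &ci_dir[idx];
        if (e->valid && e->line_addr == phys_addr)
            return e->compressed_len;   /* hint hit: shorten the memory read */
        return LINE_BYTES;              /* no hint: read the full line       */
    }

    int main(void)
    {
        uint64_t addr = 0x4000;
        ci_dir[(addr / LINE_BYTES) % CI_ENTRIES] =
            (ci_entry_t){ .valid = true, .line_addr = addr, .compressed_len = 32 };

        printf("hinted line: read %u bytes; unhinted line: read %u bytes\n",
               (unsigned)read_length_for(addr),
               (unsigned)read_length_for(addr + LINE_BYTES));
        return 0;
    }
    ```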
  • Patent number: 10025745
    Abstract: A computer system and a method are provided for accessing a peripheral component interconnect express (PCIe) endpoint device. The computer system includes: a processor, a PCIe bus, and an access proxy. The access proxy connects to the processor and the PCIe endpoint device; the processor acquires an operation instruction, where the operation instruction instructs the processor to access the PCIe endpoint device through the access proxy, and sends an access request to the access proxy according to the operation instruction; and the access proxy sends a response message of the access request to the processor after receiving the access request sent by the processor. Because the processor does not directly access the PCIe endpoint device but instead completes the access through the access proxy, an MCE reset of the processor is avoided.
    Type: Grant
    Filed: June 6, 2014
    Date of Patent: July 17, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Ge Du
  • Patent number: 9998558
    Abstract: Various example embodiments herein disclose methods of enabling access to a storage device remotely over the network. According to at least one example embodiment, a method comprising initializing a Non-Volatile Memory Express (NVMe) controller, by a network device coupled to a server, configuring, by the network device, an NVMe queue pair for handling a remote device discovery process, receiving, at the network device, a request from the remote device to access the storage device controlled by the NVMe controller maintained at the server, initiating, by the network device, the discovery process for locating the remote device; and establishing, by the network device, a connection with the remote device by mapping the NVMe queue pair with a Remote Direct Memory Access (RDMA) queue pair, once the remote device is discovered.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: June 12, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Sandeep Anandkumar Sammatshetti
  • Patent number: 9891863
    Abstract: Systems and methods for handling Shingled Magnetic Recording (SMR) drives in a tiered storage system. In some embodiments, an Information Handling System (IHS) may include a processor; and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to: identify, among data stored in a first storage medium, a data subset that has a selected access pattern, wherein the selected access pattern is indicative of how often data is updated; and move the data subset from the first storage medium to one or more SMR drives.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: February 13, 2018
    Assignee: Dell Products, L.P.
    Inventors: William Price Dawkins, Kevin Thomas Marks
  • Patent number: 9846551
    Abstract: A processor system (10) includes: a first memory controller (16) that controls writing/reading data to/from a first memory (60); a second memory controller (17) that controls writing/reading data to/from a second memory (70); a first processor (13) that inputs and outputs the data from and to the first memory through a bus (14); a second processor (11) that inputs and outputs processed data from and to the second memory through the bus; and a management unit (32) that deallocates an address range corresponding to the second memory from the first processor and allocates the address range to the second processor.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: December 19, 2017
    Assignee: Renesas Electronics Corporation
    Inventors: Tetsuji Tsuda, Yoshiyuki Ito
  • Patent number: 9804982
    Abstract: An interface module has at least a configuration connection, a reset connection, a transmission connection and a reception connection. The interface module also has at least a first interface processing unit and a second interface processing unit which differs from the first interface processing unit and the connections of which can be connected to the connections of the interface module via a multiplexer. Only one set of interface connections needs to be provided on the interface module.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: October 31, 2017
    Assignee: Continental Automotive GmbH
    Inventors: Klaus-Dieter Schneider, Bernhard Hauck
  • Patent number: 9781209
    Abstract: Various embodiments are generally directed to techniques for improving the efficiency of exchanging packets between pairs of VMs within a communications server. An apparatus may include a processor component; a network interface to couple the processor component to a network; a virtual switch to analyze contents of at least one packet of a set of packets to be exchanged between endpoint devices through the network and the communications server, and to route the set of packets through one or more virtual servers of multiple virtual servers based on the contents; and a transfer component of a first virtual server of the multiple virtual servers to determine whether to route the set of packets to the virtual switch or to transfer the set of packets to a second virtual server of the multiple virtual servers in a manner that bypasses the virtual switch based on a routing rule.
    Type: Grant
    Filed: August 20, 2015
    Date of Patent: October 3, 2017
    Assignee: INTEL CORPORATION
    Inventors: Mesut A. Ergin, Jr-Shian Tsai, Janet Tseng, Ren Wang, Jun Nakajima, Tsung-Yuan Tai
  • Patent number: 9767054
    Abstract: An image processing module input/output port in a DMAC includes an input part which receives second address information and an addressing request signal from an image processing module and an output part which outputs a reply signal indicating valid reception of the second address information to the image processing module. The image processing module input/output port can perform signal input/output control processing of returning, in response to the addressing request signal, the reply signal indicating confirmation of valid reception of the second address information to the image processing module when valid reception of the second address information is confirmed. A memory access controller performs memory access processing of accessing a storage area to be accessed in a memory based on the first address information (=the second address information) received via the image processing module input/output port.
    Type: Grant
    Filed: March 17, 2015
    Date of Patent: September 19, 2017
    Assignee: MegaChips Corporation
    Inventor: Takashi Mori
  • Patent number: 9760455
    Abstract: A Peripheral Component Interconnect Express (PCIe) network system with fail-over capability and an operation method thereof are provided. The PCIe network system includes a management host, a PCIe switch, a first non-transparent bridge, and a second non-transparent bridge. The upstream port of the PCIe switch is electrically coupled to the management host. The first non-transparent bridge is disposed in the PCIe switch for electrically coupling to the first PCIe port of a calculation host. The first non-transparent bridge can couple the first PCIe port of the calculation host to the management host. The second non-transparent bridge is disposed in the PCIe switch for electrically coupling to the second PCIe port of the calculation host. The second non-transparent bridge can couple the second PCIe port of the calculation host to the management host.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: September 12, 2017
    Assignee: Industrial Technology Research Institute
    Inventors: Chao-Tang Lee, Cheng-Chun Tu, Tzi-Cker Chiueh
  • Patent number: 9760297
    Abstract: Managing input/output (‘I/O’) queues in a data storage system, including: receiving, by a host that is coupled to a plurality of storage devices via a storage network, a plurality of I/O operations to be serviced by a target storage device; determining, for each of a plurality of paths between the host and the target storage device, a data transfer maximum associated with the path; determining, for one or more of the plurality of paths, a cumulative amount of data to be transferred by I/O operations pending on the path; and selecting a target path for transmitting one or more of the plurality of I/O operations to the target storage device in dependence upon the cumulative amount of data to be transferred by I/O operations pending on the path and the data transfer maximum associated with the path.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: September 12, 2017
    Assignee: Pure Storage, Inc.
    Inventors: Ronald Karr, John Mansperger
  • Patent number: 9753873
    Abstract: Various embodiments of systems and methods to interleave high priority key-value transactions together with lower priority transactions, in which both types of transactions are communicated over a shared input-output medium. In various embodiments, a central-processing-unit (CPU) initiates high priority key-value transactions by communicating via the shared input-output medium to a key-value-store. In various embodiments, a medium controller blocks or delays lower priority transactions such that the high priority transactions may proceed without interruption. In various embodiments, both of the types of transactions are packet-based, and the system interrupts a lower priority transaction at a particular packet, then completes the high priority transaction, then completes the lower priority transaction.
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: September 5, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Lior Khermosh, Avner Braverman, Gal Zuckerman
  • Patent number: 9678887
    Abstract: An I/O DMA address may be translated for a flexible number of entries in a translation validation table (TVT) for a partitionable endpoint number, when a particular entry in the TVT is accessed based on the partitionable endpoint number. A presence of an extended mode bit can be detected in a particular TVT entry. Based on the presence of the extended mode bit, an entry in the extended TVT can be accessed and used to translate the I/O DMA address to a physical address.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: June 13, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jesse P. Arroyo, Rama K. Hazari, Sakethan R. Kotta, Kumaraswamy Sripathy
  • Patent number: 9678892
    Abstract: An I/O DMA address may be translated for a flexible number of entries in a translation validation table (TVT) for a partitionable endpoint number, when a particular entry in the TVT is accessed based on the partitionable endpoint number. A presence of an extended mode bit can be detected in a particular TVT entry. Based on the presence of the extended mode bit, an entry in the extended TVT can be accessed and used to translate the I/O DMA address to a physical address.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: June 13, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jesse P. Arroyo, Rama K. Hazari, Sakethan R. Kotta, Kumaraswamy Sripathy
  • Patent number: 9632959
    Abstract: An efficient search key processing method includes writing a first and a second search key data set to a memory, where the search key data sets are written to memory on a word by word basis. Each of the first and second search key data sets includes a header indicating a common lookup operation to be performed and a string of search keys. The header is immediately followed in memory by a search key. The search keys are located contiguously in the memory. At least one word contains search keys from the first and second search key data sets. The memory is read word by word. A first plurality of lookup command messages are sent based on the search keys included in the first search key data set. A second plurality of lookup command messages are sent based on the search keys included in the second search key data set.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: April 25, 2017
    Assignee: Netronome Systems, Inc.
    Inventor: Rick Bouley
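    Sketch: a C illustration of the memory layout described above: each search key data set is a header followed immediately by its keys, keys are contiguous, and a second data set may begin inside the word where the first one ends. The word size, header width, and packing helper are assumptions.
    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define WORD_BYTES 8u

    /* Append a byte-granular field into a word-addressed buffer; returns the
     * next free byte offset so the following field lands with no gap. */
    static uint32_t pack(uint8_t *mem, uint32_t off, const void *src, uint32_t len)
    {
        memcpy(mem + off, src, len);
        return off + len;
    }

    int main(void)
    {
        uint8_t  mem[4 * WORD_BYTES] = {0};
        uint8_t  hdr1 = 0x10, hdr2 = 0x20;          /* "common lookup op" headers */
        uint32_t keys1[] = { 0xAAAA1111, 0xBBBB2222, 0xCCCC3333 };
        uint32_t keys2[] = { 0xDDDD4444 };

        uint32_t off = 0;
        off = pack(mem, off, &hdr1, 1);             /* header, then keys, no gap  */
        off = pack(mem, off, keys1, sizeof keys1);
        off = pack(mem, off, &hdr2, 1);             /* second set starts mid-word */
        off = pack(mem, off, keys2, sizeof keys2);

        /* Word 1 ends up holding the tail of keys1 and the start of keys2, so one
         * word contains search keys from both data sets, as the abstract notes. */
        printf("second data set begins at byte %u (inside word %u)\n",
               1u + (unsigned)sizeof keys1, (1u + (unsigned)sizeof keys1) / WORD_BYTES);
        return 0;
    }
    ```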
  • Patent number: 9594702
    Abstract: A multi-processor includes a shared memory that stores a search key data set including multiple search keys, a processor, a Direct Memory Access (DMA) controller, and an Interlaken Look-Aside (ILA) interface circuit. The processor generates a descriptor that is sent to the DMA controller causing the DMA controller to read the search key data set. The DMA controller selects a single search key from the set and generates a lookup command message that is communicated to the ILA interface circuit. The ILA interface circuit generates an ILA packet that includes the single search key and sends the ILA packet to an external transactional memory device that generates a result data value. The result data value is communicated back to the DMA controller via the ILA interface circuit. The DMA controller stores the result data value in the shared memory and notifies the processor that the DMA process has completed.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: March 14, 2017
    Assignee: Netronome Systems, Inc.
    Inventor: Rick Bouley
  • Patent number: 9594706
    Abstract: An Island-Based Network Flow Processor (IBNFP) includes a memory and a processor located on a first island, a Direct Memory Access (DMA) controller located on a second island, and an Interlaken Look-Aside (ILA) interface circuit and an interface circuit located on a third island. A search key data set including multiple search keys is stored in the memory. A descriptor is generated by the processor and is sent to the DMA controller, which generates a search key data request, receives the search key data set, and selects a single search key. The ILA interface circuit receives the search key and generates an ILA packet including the search key that is sent to an external transactional memory device that generates a result data value. The DMA controller receives the result data value via the ILA interface circuit, writes the result data value to the memory, and sends a DMA completion notification.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: March 14, 2017
    Assignee: Netronome Systems, Inc.
    Inventor: Rick Bouley
  • Patent number: 9477632
    Abstract: A computer system and a method are provided for accessing a peripheral component interconnect express (PCIe) endpoint device. The computer system includes a processor, a PCIe bus, and an access proxy. The access proxy connects to the processor and the PCIe endpoint device; the processor acquires an operation instruction, where the operation instruction instructs the processor to access the PCIe endpoint device through the access proxy, and sends an access request to the access proxy according to the operation instruction; and the access proxy sends a response message of the access request to the processor after receiving the access request sent by the processor. Because the processor does not directly access the PCIe endpoint device but instead completes the access through the access proxy, a machine check exception (MCE) reset of the processor is avoided.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: October 25, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Ge Du
  • Patent number: 9430411
    Abstract: Apparatus and methods implemented therein are disclosed for communicating with flash memories. The apparatus comprises a flash interface module and a processor in communication with the flash interface module. The flash interface module is configured for communication with a first and second flash bank. The processor is configured to generate a plurality of command sequences in response to receiving a plurality of flash commands from a host system. Each of the plurality of command sequences corresponds to a respective one of the plurality of flash commands. Some of the plurality of command sequences comprise a first portion and a second portion, and each of the first portion and the second portion is atomic.
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: August 30, 2016
    Assignee: SanDisk Technologies LLC
    Inventors: Gary Lin, Matt Davidson, Milton Barrocas, Aruna Gutta
  • Patent number: 9201824
    Abstract: A block memory device and method of transferring data to a block memory device are described. Various embodiments provide methods for transferring data to a block memory device by adaptive chunking. The data transfer method comprises receiving data in a data chunk. The data transfer method then determines that the data chunk is ready to be transferred to a block memory and transfers the data chunk to the block memory. The transfer occurs over a duration, and the above steps repeat until the transfer is complete. The data transfer method determines that the data chunk is ready to be transferred to the block memory based at least in part on the duration of a previous transfer.
    Type: Grant
    Filed: January 22, 2009
    Date of Patent: December 1, 2015
    Assignee: Intel Deutschland GmbH
    Inventor: Karsten Gjoerup
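    Sketch: a C illustration of the adaptive-chunking feedback described above: the size of the next chunk handed to the block memory grows after a fast previous transfer and shrinks after a slow one. The thresholds, limits, and names are assumptions chosen only to show the feedback loop.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define MIN_CHUNK  (4u * 1024u)
    #define MAX_CHUNK  (256u * 1024u)

    /* Pick the next chunk size from the previous size and how long it took. */
    static uint32_t next_chunk_size(uint32_t prev_size, uint32_t prev_duration_us)
    {
        if (prev_duration_us < 2000 && prev_size < MAX_CHUNK)
            return prev_size * 2;      /* fast last transfer: batch more next time */
        if (prev_duration_us > 10000 && prev_size > MIN_CHUNK)
            return prev_size / 2;      /* slow last transfer: send smaller chunks  */
        return prev_size;              /* otherwise keep the current chunk size    */
    }

    int main(void)
    {
        uint32_t size = 32u * 1024u;
        uint32_t durations_us[] = { 1500, 1800, 12000, 5000 };  /* made-up samples */

        for (unsigned i = 0; i < 4; i++) {
            size = next_chunk_size(size, durations_us[i]);
            printf("after transfer %u: next chunk = %u KiB\n",
                   i + 1, (unsigned)(size / 1024));
        }
        return 0;
    }
    ```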
  • Patent number: 9059946
    Abstract: A passive optical network (PON) packet processor for processing PON traffic includes a core processor for executing threads related to the processing of PON traffic and a plurality of hardware (HW) accelerators coupled to the core processor for accelerating the processing of said PON traffic. A memory unit is coupled to the core processor for maintaining program and traffic data. In an embodiment of the present invention, the PON packet processor includes a scheduler that optimizes the execution of PON related tasks.
    Type: Grant
    Filed: February 9, 2006
    Date of Patent: June 16, 2015
    Assignee: Broadcom Corporation
    Inventors: Gil Levy, Eliezer Weitz, Eli Elmoalem, Gal Sitton
  • Patent number: 9053093
    Abstract: One embodiment relates to an integrated circuit with a modular direct memory access system. A read data mover receives data obtained from a source address, and a write data mover sends the data to a destination address. A descriptor controller provides the source address to the read data mover and the destination address to the write data mover. Another embodiment relates to a method of providing direct memory access. Another embodiment relates to a system which provides direct memory access. Other embodiments and features are also disclosed.
    Type: Grant
    Filed: August 23, 2013
    Date of Patent: June 9, 2015
    Assignee: Altera Corporation
    Inventors: Harry Nguyen, Christopher D. Finan, Philippe Molson
  • Publication number: 20150149682
    Abstract: An in-vehicle sensor (1) connected to a communication bus CAN includes a bus connection connector (40) including external communication terminals T3, T4, and external setting terminals T5, T6 each of which is brought into one of a plurality of connection states; judgment means S1-S7 for judging the connection states of the external setting terminals T5, T6 when electric power is supplied in a state in which the bus connection connector (40) is connected to the communication bus CAN; identifier generation means S8 for generating an identifier ID of the in-vehicle sensor (1) based on the judged connection states; a nonvolatile storage section (11) for storing the identifier ID; communication means (10) for performing communications through the communication bus CAN using the stored identifier ID; and storing means S9 for storing a first generated initial identifier IDS in the storage section (11) as the identifier ID.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 28, 2015
    Applicant: NGK SPARK PLUG CO., LTD.
    Inventors: Tomonori UEMURA, Chihiro TOMIMATSU
  • Publication number: 20150149680
    Abstract: An information processing apparatus having first and second buses includes: a read/write command unit transmitting a read command or a write command to the first bus; a read command unit receiving a read command from the second bus; a write command unit receiving a write command from the second bus; and a command unit transmitting the read command and the write command to the read/write command unit based on the read and write commands received by the read command unit and the write command unit. Further, the command unit stops transmitting the read command and, while transmission of the read command is stopped, changes the transmission order of the read and write commands so that the read/write command unit transmits the write command with higher priority than the read command.
    Type: Application
    Filed: September 4, 2014
    Publication date: May 28, 2015
    Applicant: Ricoh Company, Ltd.
    Inventor: Yoshimichi KANDA
  • Publication number: 20150149681
    Abstract: A system, method, and computer readable medium for sharing bandwidth among executing application programs across a packetized bus for packets from multiple DMA channels includes receiving at a network traffic management device first and second network packets from respective first and second DMA channels. The received packets are segmented into respective one or more constituent CPU bus packets. The segmented constituent CPU bus packets are interleaved for transmission across a packetized CPU bus.
    Type: Application
    Filed: October 29, 2014
    Publication date: May 28, 2015
    Inventor: Tim S. Michels
  • Publication number: 20150143003
    Abstract: Representative embodiments are disclosed for a rapid and highly parallel configuration process for field programmable gate arrays (FPGAs). In a representative method embodiment, using a host processor, a first configuration bit image for an application is stored in a host memory; one or more FPGAs are configured with a communication functionality such as PCIe using a second configuration bit image stored in a nonvolatile memory; a message is transmitted by the host processor to the FPGAs, usually via PCIe lines, with the message comprising a memory address and also a file size of the first configuration bit image in the host memory; using a DMA engine, each FPGA obtains the first configuration bit image from the host memory and is then configured using the first configuration bit image. Primary FPGAs may further transmit the first configuration bit image to additional, secondary FPGAs, such as via JTAG lines, for their configuration.
    Type: Application
    Filed: January 29, 2015
    Publication date: May 21, 2015
    Inventors: Robert Trout, Jeremy B. Chritz, Gregory M. Edvenson
  • Publication number: 20150143015
    Abstract: A DMA controller (40) comprises a reading start address register (402) storing a reading start address from which reading starts; a reading data size register (403) storing the size of data to be read in a single reading operation; an offset value register (404) storing an offset value for updating the reading start address after the reading operation ends; a repetition upper limit value register (405) storing the upper limit value of the number of times of repetition of the reading operation; and a repetition counter register (406) storing the number of times of repetition of the reading operation. The controller (401) of the DMA controller (40) outputs an interrupt signal indicating that the processing of the DMA controller (40) ends when the value stored in the repetition counter register (406) reaches the value stored in the repetition upper limit value register (405).
    Type: Application
    Filed: January 30, 2015
    Publication date: May 21, 2015
    Inventors: Masanori NAKATA, Noriyuki KUSHIRO, Yoshiaki ITO, Yoshiaki KOIZUMI
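    Sketch: a C software model of the register set listed above: a reading start address, a per-read data size, an offset applied after each read, a repetition upper limit, and a repetition counter, with an interrupt signaled when the counter reaches the limit. The struct mirrors the abstract's registers; the loop behavior and names are illustrative assumptions.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t read_start_addr;  /* (402) address the next read starts from   */
        uint32_t read_data_size;   /* (403) bytes read in a single operation    */
        int32_t  offset;           /* (404) added to the start address per read */
        uint32_t repeat_limit;     /* (405) upper limit on repetitions          */
        uint32_t repeat_count;     /* (406) repetitions performed so far        */
    } dma_regs_t;

    /* One read operation; returns true when the interrupt would be raised. */
    static bool dma_step(dma_regs_t *r)
    {
        printf("read %u bytes at 0x%08X\n",
               (unsigned)r->read_data_size, (unsigned)r->read_start_addr);
        r->read_start_addr += (uint32_t)r->offset;     /* update after the read */
        return ++r->repeat_count >= r->repeat_limit;   /* counter hit the limit */
    }

    int main(void)
    {
        dma_regs_t regs = { 0x20000000u, 64u, 0x100, 3u, 0u };
        while (!dma_step(&regs))
            ;                                          /* keep repeating reads  */
        printf("interrupt: DMA controller processing ended\n");
        return 0;
    }
    ```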
  • Publication number: 20150143014
    Abstract: One disclosed computing system comprises an x86 processor, memory, a PCIe root complex (RC), a PCIe bus, and an interconnect chip having a PCIe endpoint (EP) that is connected to the PCIe RC through a PCIe link, the PCIe EP being connected to an AMBA® bus. The interconnect chip may communicate with the IO device via the AMBA® bus in an AMBA® compliant manner and communicate with the host system in a PCIe compliant manner. This communication may include receiving a command from the processor, sending the command to the IO device over the AMBA® bus, receiving a response from the IO device over the AMBA® bus, and sending over the AMBA® bus and the PCIe link one or more DMA operations to the memory. Further communication may include sending an IOAPIC interrupt to the processor of the host system according to PCIe ordering rules.
    Type: Application
    Filed: November 21, 2013
    Publication date: May 21, 2015
    Applicant: Microsoft Corporation
    Inventors: Nhon Quach, Stephen Z. Au, Thomas Zou, Tracy Sharpe
  • Publication number: 20150134871
    Abstract: Methods and systems are provided that execute reduced host data commands. A reduced host data command may be a write command that includes or is received with an indication of host data instead of the host data. The reduced host data command may be executed with a Direct Memory Access (DMA) circuit independently of a processor that executes administrative commands. In the execution of the reduced host data command, host data may be generated, metadata may be generated, and the generated host data and/or metadata may be copied into backend memory with the DMA circuit independently of the processor.
    Type: Application
    Filed: November 8, 2013
    Publication date: May 14, 2015
    Inventors: Shay Benisty, Tal Sharifie, Girish Desai, Oded Karni
  • Publication number: 20150134872
    Abstract: A bus system that has at least two lines. A bus subscriber has at least one connection element that has at least two contacts that can each be connected to one of the lines. An address allocation device can be used to ascertain an address for the bus subscriber in the bus system on the basis of a respective connection state of the contacts with respect to the lines. Also, a method allocates addresses in the bus system.
    Type: Application
    Filed: April 30, 2013
    Publication date: May 14, 2015
    Applicant: AUDI AG
    Inventors: Stephan Krell, Wolf Goetze
  • Patent number: 9032116
    Abstract: A device comprises a central processing unit (CPU) and a memory configured for storing memory descriptors. The device also includes an analog-to-digital converter controller (ADC controller) configured for managing an analog-to-digital converter (ADC) using the memory descriptors. In addition, the device includes a direct memory access system (DMA system) configured for autonomously sequencing conversion operations performed by the ADC without CPU intervention by transferring the memory descriptors directly between the memory and the ADC controller for controlling the conversion operations performed by the ADC.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: May 12, 2015
    Assignee: Atmel Corporation
    Inventors: Frode Milch Pedersen, Romain Oddoart, Cedric Favier
  • Patent number: 9032122
    Abstract: The present disclosure includes a method for migration of a first virtual function of a first device located on a PCI bus and accessible by a device driver using a virtual address. A second virtual function is created on a second device. A base address is determined for the second virtual function as a function of a logical location of the second device within the PCI structure. An offset is determined for the second virtual function as a function of the base address and the virtual address. The device driver is notified that the first virtual function is on hold. The offset is stored in a translation table. The device driver is notified that the hold has been lifted. Accesses by the device driver to the virtual address are routed to memory of the second virtual function based upon the offset in the translation table.
    Type: Grant
    Filed: December 10, 2013
    Date of Patent: May 12, 2015
    Assignee: International Business Machines Corporation
    Inventors: Brian W. Hart, Liang Jiang, Anil Kalavakolanu, Shannon D. Moore, Robert E. Wallis, Evelyn T. Yeung
  • Publication number: 20150127872
    Abstract: An exemplary computer system includes a server module including a first processor and first memory, a storage module including a second processor, a second memory and a storage device, and a transfer module. The transfer module retrieves a first transfer list including an address of a first storage area, which is set on the first memory for a read command, from the server module. The transfer module retrieves a second transfer list including an address of a second storage area in the second memory, in which data corresponding to the read command read from the storage device is stored temporarily, from the storage module. The transfer module sends the data corresponding to the read command in the second storage area to the first storage area by controlling the data transfer between the second storage area and the first storage area based on the first and second transfer lists.
    Type: Application
    Filed: January 14, 2015
    Publication date: May 7, 2015
    Inventors: Yuki KONDOH, Isao OHARA
  • Publication number: 20150127869
    Abstract: A system and method can support efficient packet processing in a network environment. The system can comprise a thread scheduling engine that operates to assign a thread key to each software thread in a plurality of software threads. Furthermore, the system can comprise a pool of direct memory access (DMA) resources that can be used to process packets in the network environment. Additionally, each said software thread operates to request access to a DMA resource in the pool of DMA resources by presenting an assigned thread key, and a single software thread is allowed to access multiple DMA resources using the same thread key.
    Type: Application
    Filed: November 5, 2013
    Publication date: May 7, 2015
    Applicant: Oracle International Corporation
    Inventors: Arvind Srinivasan, Ajoy Siddabathuni, Elisa Rodrigues
  • Publication number: 20150127870
    Abstract: A semiconductor memory device includes a first global line suitable for inputting/outputting data from/to a first bank, a second global line suitable for inputting/outputting data from/to a second bank, a multi-purpose register (MPR) suitable for loading data having a predetermined value on the first global line in a training mode, a first data input/output (I/O) unit suitable for inputting/outputting data between one of the first and second global lines and a first data pad and selectively transferring data loaded on the first global line to the second global line in response to a bandwidth option in the training mode, and a second data I/O unit enabled in response to the bandwidth option, suitable for inputting/outputting data between the second global line and a second data pad.
    Type: Application
    Filed: December 15, 2013
    Publication date: May 7, 2015
    Applicant: SK hynix Inc.
    Inventor: Choung-Ki SONG
  • Publication number: 20150127871
    Abstract: Disclosed is a system and method for updating IOMMU (Input Output Memory Management Unit) tables to remap a DMA (Direct Memory Access) range for a requested bus device while the device is active.
    Type: Application
    Filed: December 20, 2013
    Publication date: May 7, 2015
    Inventor: Kashyap Dushyantbhai Desai
  • Publication number: 20150120983
    Abstract: Two channels, a main CPU channel and a sub CPU channel, each including a reception channel and a transmission channel and performing a data transfer by DMA in accordance with a descriptor, are provided. A channel switching part selects the main CPU channel or the sub CPU channel in accordance with information set in a mode setting register and switches channels at the boundary of a packet to be transferred, thereby enabling channel switching without interrupting a DMA operation.
    Type: Application
    Filed: August 27, 2014
    Publication date: April 30, 2015
    Inventors: Takashi OKUDA, Satoru OKAMOTO
  • Publication number: 20150120984
    Abstract: According to one embodiment, the host controller includes a register set for issuing commands and a direct memory access (DMA) unit, and accesses a system memory and a device. First, second, third and fourth descriptors are stored in the system memory. The first descriptor includes a set of a plurality of pointers indicating a plurality of second descriptors. Each of the second descriptors comprises the third descriptor and fourth descriptor. The third descriptor includes a command number, etc. The fourth descriptor includes information indicating addresses and sizes of a plurality of data arranged in the system memory. The DMA unit sets, in the register set, the contents of the third descriptor forming the second descriptor, from the head of the first descriptor as a start point, and transfers data between the system memory and the host controller in accordance with the contents of the fourth descriptor.
    Type: Application
    Filed: January 2, 2015
    Publication date: April 30, 2015
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Akihisa FUJIMOTO
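    Sketch: a C illustration of the descriptor hierarchy described above: a first descriptor holds pointers to second descriptors, and each second descriptor pairs command information (the third descriptor) with a list of data addresses and sizes (the fourth descriptor). Field names, widths, and the walk order shown are assumptions.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint32_t addr; uint32_t size; } seg_t;        /* 4th: one data run */
    typedef struct { uint32_t cmd_no; uint32_t arg; } cmd_desc_t;  /* 3rd: command info */

    typedef struct {                /* 2nd descriptor = command + its data segments */
        cmd_desc_t   cmd;
        const seg_t *segs;
        uint32_t     nsegs;
    } second_desc_t;

    typedef struct {                /* 1st descriptor: pointers to 2nd descriptors */
        const second_desc_t *entries;
        uint32_t             count;
    } first_desc_t;

    /* Walk the chain the way the DMA unit is described to: load the command
     * registers from each third descriptor, then move data per the fourth. */
    static void dma_walk(const first_desc_t *f)
    {
        for (uint32_t i = 0; i < f->count; i++) {
            const second_desc_t *s = &f->entries[i];
            printf("issue command %u\n", (unsigned)s->cmd.cmd_no);
            for (uint32_t j = 0; j < s->nsegs; j++)
                printf("  transfer %u bytes at 0x%08X\n",
                       (unsigned)s->segs[j].size, (unsigned)s->segs[j].addr);
        }
    }

    int main(void)
    {
        seg_t segs0[] = { { 0x1000, 512 }, { 0x2000, 512 } };
        seg_t segs1[] = { { 0x8000, 4096 } };
        second_desc_t seconds[] = { { { 17, 0 }, segs0, 2 }, { { 18, 0 }, segs1, 1 } };
        first_desc_t  first    = { seconds, 2 };
        dma_walk(&first);
        return 0;
    }
    ```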
  • Publication number: 20150113195
    Abstract: An electronic device includes: a communication module; an input module; a display; an interface; at least one sensor; a memory; and a processor module. The processor module includes at least one of: at least one dummy chip including at least one Through Silicon Via (TSV); at least one memory bridge including at least one TSV; at least one memory connected to the at least one dummy chip and the at least one memory bridge and that can exchange an electric signal through the at least one dummy chip and the at least one memory bridge; or at least one processor. The at least one processor may be configured to exchange an electric signal through the at least one memory bridge, and to transmit an electric signal to at least one of the communication module, input module, display, interface, at least one sensor, or first memory.
    Type: Application
    Filed: October 20, 2014
    Publication date: April 23, 2015
    Inventor: Seijin KIM