Patents Assigned to Cavium, Inc.
-
Patent number: 9952979
Abstract: Systems and methods for a direct memory access (DMA) operation are provided. The method includes receiving a host memory address by a device coupled to a computing device; storing the host memory address at a device memory by a DMA engine; receiving a packet at the device for the computing device; instructing the DMA engine by a device processor to retrieve the host memory address from the device memory; retrieving the host memory address by the DMA engine without the device processor reading the host memory address; and transferring the packet to the computing device by a DMA operation.
Type: Grant
Filed: January 14, 2015
Date of Patent: April 24, 2018
Assignee: Cavium, Inc.
Inventor: Abhishek Mukherjee
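The flow in this abstract can be sketched in software: the host address is stored in device memory once during setup, and the DMA engine later reads it back itself on the packet path. This is a minimal illustrative model, not the patented hardware; all class and method names are assumptions.

```python
# Sketch of the DMA flow: the host memory address is cached in device
# memory at setup, and the DMA engine fetches it directly on the data
# path, so the device processor never reads the address itself.

class DeviceMemory:
    def __init__(self):
        self.slots = {}          # slot index -> host memory address

class DmaEngine:
    def __init__(self, device_memory, host_memory):
        self.device_memory = device_memory
        self.host_memory = host_memory   # models the computing device's RAM

    def store_host_address(self, slot, host_address):
        # Setup step: host address written into device memory once.
        self.device_memory.slots[slot] = host_address

    def transfer(self, slot, packet):
        # Fast path: the DMA engine itself retrieves the address.
        host_address = self.device_memory.slots[slot]
        self.host_memory[host_address] = packet

host_memory = {}
dma = DmaEngine(DeviceMemory(), host_memory)
dma.store_host_address(slot=0, host_address=0x1000)    # one-time setup
dma.transfer(slot=0, packet=b"incoming packet")        # per-packet DMA
print(host_memory[0x1000])  # b'incoming packet'
```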
-
Patent number: 9954551
Abstract: A packet processing system having a barrel compactor that extracts a desired data subset from an input dataset (e.g. an incoming packet). The barrel compactor is able to selectively shift one or more of the input data units of the input dataset based on individual shift values for those data units. Additionally, in some embodiments one or more of the data units are able to be logically combined to produce a desired logical output unit.
Type: Grant
Filed: March 31, 2015
Date of Patent: April 24, 2018
Assignee: Cavium, Inc.
Inventors: Premshanth Theivendran, Weihuang Wang, Sowmya Hotha, Srinath Alturi
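A barrel compactor with per-unit shift values can be sketched as follows: each input byte carries its own shift amount, and the selected bytes land compacted at the front of the output. This is a hedged software model of the idea; the byte granularity, `None`-as-drop convention, and function name are assumptions for illustration.

```python
# Sketch of a barrel compactor: shifts[i] is how far byte i moves
# toward index 0; None drops the byte entirely.

def barrel_compact(data, shifts):
    out = bytearray(len(data))
    used = 0
    for i, s in enumerate(shifts):
        if s is None:
            continue                 # byte not part of the desired subset
        out[i - s] = data[i]         # per-unit shift value applied
        used = max(used, i - s + 1)
    return bytes(out[:used])

# Extract bytes 2..4 of a packet and compact them to the front.
packet = bytes([0x10, 0x20, 0x30, 0x40, 0x50, 0x60])
print(barrel_compact(packet, [None, None, 2, 2, 2, None]).hex())  # 304050
```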
-
Patent number: 9946671
Abstract: Methods and systems for processing input/output (I/O) requests are provided. The method includes generating an I/O request by an initiator adapter of a computing device that interfaces with a target adapter, and indicating by the initiator adapter that the I/O request is sequential in nature. When the I/O request is a sequential read request, the target adapter notifies a target controller to read ahead data associated with other sequential read requests, and stores the read-ahead data at a cache such that data for the other sequential read requests is provided from the cache instead of a storage device managed by the target controller. A sequential write request is processed without claiming any cache space when data for the write request is not to be accessed within a certain duration.
Type: Grant
Filed: December 15, 2015
Date of Patent: April 17, 2018
Assignee: Cavium, Inc.
Inventors: Deepak Tawri, Vijay Thurpati
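The read-ahead behaviour can be illustrated with a small model: when the initiator flags a read as sequential, the target prefetches the following blocks into a cache so later reads never touch the storage device. The class, the read-ahead depth of 4, and the block-keyed storage are illustrative assumptions, not details from the patent.

```python
# Sketch of sequential read-ahead: a "sequential" hint on a read makes
# the target prefetch subsequent blocks into its cache.

class Target:
    def __init__(self, storage, readahead=4):
        self.storage = storage           # block number -> data
        self.cache = {}
        self.readahead = readahead
        self.device_reads = 0            # reads that hit the storage device

    def read(self, block, sequential=False):
        if block in self.cache:
            return self.cache.pop(block)     # served from cache
        self.device_reads += 1
        data = self.storage[block]
        if sequential:                       # hint from the initiator adapter
            for b in range(block + 1, block + 1 + self.readahead):
                if b in self.storage:
                    self.cache[b] = self.storage[b]
        return data

storage = {n: f"block{n}" for n in range(8)}
t = Target(storage)
for n in range(5):
    t.read(n, sequential=True)
print(t.device_reads)  # 1: blocks 1-4 came from the read-ahead cache
```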
-
Patent number: 9948482
Abstract: A network switch to support flexible lookup key generation comprises a control CPU configured to run a network switch control stack. The network switch control stack is configured to manage and control operations of a switching logic circuitry, provide a flexible key having a plurality of possible fields that constitute part of a lookup key to a table, and enable a user to dynamically select at deployment or runtime a subset of the fields in the flexible key to form the lookup key and thus define a lookup key format for the table. The switching logic circuitry provisioned and controlled by the network switch control stack is configured to maintain said table to be searched via the lookup key in a memory cluster and process a received data packet based on the search result of the table using the lookup key generated from the dynamically selected fields in the flexible key.
Type: Grant
Filed: April 27, 2016
Date of Patent: April 17, 2018
Assignee: CAVIUM, INC.
Inventors: Leonid Livak, Ravindran Suresh, Zubin Shah, Sunita Bhaskaran, Ashwini Reddy
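The key idea — choosing at deployment or runtime which fields of a flexible key form the table's lookup key — can be sketched in a few lines. The field names, the tuple-based key encoding, and the example table are illustrative assumptions.

```python
# Sketch of flexible lookup-key generation: the control stack selects
# a subset of the flexible key's fields, and the data path builds
# table keys from only those fields.

FLEX_FIELDS = ("src_ip", "dst_ip", "src_port", "dst_port", "vlan")

def make_key(packet_fields, selected):
    """Concatenate the dynamically selected fields into a lookup key."""
    return tuple(packet_fields[f] for f in FLEX_FIELDS if f in selected)

# Deployment-time choice: key the table on destination IP and VLAN only.
selected = {"dst_ip", "vlan"}
table = {make_key({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                   "src_port": 5, "dst_port": 6, "vlan": 7},
                  selected): "port3"}

pkt = {"src_ip": "10.9.9.9", "dst_ip": "10.0.0.2",
       "src_port": 99, "dst_port": 80, "vlan": 7}
print(table[make_key(pkt, selected)])  # port3: unselected fields don't matter
```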
-
Patent number: 9934177
Abstract: Methods and systems for efficiently processing input/output requests are provided. A network interface card (NIC) is coupled to a storage device via a peripheral link and accessible to a processor of a computing device executing instructions out of a memory device. The NIC is configured to receive a read/write request to read/write data; translate the read/write request to a storage device protocol used by the storage device coupled to the NIC; notify the storage device of the read/write request, without using the processor of the computing device, where the storage device reads/writes the data and notifies the NIC; and then the NIC prepares a response to the read/write request without having to use the processor of the computing device.
Type: Grant
Filed: March 24, 2015
Date of Patent: April 3, 2018
Assignee: Cavium, Inc.
Inventors: Nir Goren, Rafi Shalom, Kobby Carmona
-
Patent number: 9933809
Abstract: Pacing of a producer, operating in a producer clock domain, may be based on at least one heuristic of a credit wire that is used to return credits to the producer. The returned credits may indicate that a consumer, operating in a consumer clock domain, has consumed data produced by the producer. The at least one heuristic may be a rate at which the credits are returned to the producer. Pacing the producer based on the rate at which the credits are returned to the producer may reduce latency of the data, flowing from the producer clock domain to the consumer clock domain, by minimizing an average number of entries in use in a First-In-First-Out (FIFO) operating in a pipeline between the producer and the consumer.
Type: Grant
Filed: November 13, 2015
Date of Patent: April 3, 2018
Assignee: Cavium, Inc.
Inventor: Steven C. Barner
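The latency argument — pacing the producer to the credit-return rate keeps the cross-domain FIFO nearly empty — can be demonstrated with a toy cycle simulation. The credit count, the cycle constants, and the `run` helper are all illustrative assumptions, not the patented circuit.

```python
# Toy simulation: a producer paced to the credit-return rate keeps the
# FIFO between the clock domains much shallower than an unpaced one.

from collections import deque

def run(cycles, produce_every, consume_every, credits=8):
    fifo, occupancy = deque(), []
    for t in range(cycles):
        if credits > 0 and t % produce_every == 0:   # producer side
            fifo.append(t)
            credits -= 1
        if fifo and t % consume_every == 0:          # consumer side
            fifo.popleft()
            credits += 1                             # credit returned on the wire
        occupancy.append(len(fifo))
    return sum(occupancy) / cycles                   # average FIFO depth

# Credits return once every 3 cycles; pacing production to that rate
# yields a lower average FIFO occupancy (hence lower latency) than
# producing every cycle.
paced = run(300, produce_every=3, consume_every=3)
unpaced = run(300, produce_every=1, consume_every=3)
print(paced < unpaced)  # True
```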
-
Patent number: 9936003
Abstract: Methods and systems for transmitting information are provided. A threshold message size is configured to determine when an application executed by a computing system can send a latency message identifying a memory location from where a device can procure a payload for transmission to a destination. The computing system sends a latency message to the device, where the latency message includes the memory location, a transfer size and an indicator indicating if the application wants a completion status after the latency message is processed. The computing system stores connection information at a location dedicated to the application that sends the latency message. The device transmits the payload to the destination; and posts a completion status, where the device posts the completion status at a completion queue associated with the application with information that enables the application to determine whether other latency messages can be posted.
Type: Grant
Filed: August 29, 2014
Date of Patent: April 3, 2018
Assignee: Cavium, Inc.
Inventor: Kanoj Sarcar
-
Patent number: 9936021
Abstract: Systems and methods for storage operations are provided.
Type: Grant
Filed: October 31, 2014
Date of Patent: April 3, 2018
Assignee: Cavium, Inc.
Inventors: Sudhir Ponnachana, Ajmer Singh, Ronald B. Gregory
-
Patent number: 9928193
Abstract: A silicon device configured to distribute a global timer value over a single serial bus to a plurality of processing elements that are disposed on the silicon device and that are coupled to the serial bus. Each of the processing elements comprises a slave timer. Upon receipt of the global timer value, the processing elements synchronize their respective slave timers with the global timer value. After the timers are synchronized, the global timer sends periodic increment signals to each of the processing elements. Upon receipt of the increment signals, the processing elements update their respective slave timers.
Type: Grant
Filed: November 14, 2014
Date of Patent: March 27, 2018
Assignee: Cavium, Inc.
Inventors: Frank Worrell, Bryan W. Chin
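The sync-then-increment protocol is simple enough to model directly: the full timer value crosses the serial bus once, and afterwards only one-bit increment pulses are needed to keep every slave in step. Class names and the slave count are illustrative assumptions.

```python
# Sketch of the timer-distribution scheme: one full-value sync over
# the serial bus, then periodic increment pulses keep all slave
# timers matching the global timer.

class SlaveTimer:
    def __init__(self):
        self.value = None

    def sync(self, global_value):
        self.value = global_value    # full value received over serial bus

    def increment(self):
        self.value += 1              # periodic pulse carries no value

slaves = [SlaveTimer() for _ in range(4)]
global_timer = 1000
for s in slaves:
    s.sync(global_timer)             # one-time synchronization

for _ in range(5):                   # five increment pulses
    global_timer += 1
    for s in slaves:
        s.increment()

print(all(s.value == global_timer for s in slaves))  # True
```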
-
Patent number: 9916274
Abstract: An on-chip crossbar of a network switch comprising a central arbitration component configured to allocate packet data requests received from destination port groups to memory banks. The on-chip crossbar further comprises a Benes routing network comprising a forward network having a plurality of pipelined forward routing stages and a reverse network, wherein the Benes routing network retrieves the packet data from the memory banks coupled to the input of the Benes routing network and routes the packet data to the port groups coupled to the output of the Benes routing network. The on-chip crossbar further comprises a plurality of stage routing control units, each associated with one of the forward routing stages and configured to generate and provide a plurality of node control signals to control routing of the packet data through the forward routing stages to avoid contention between the packet data retrieved from different memory banks at the same time.
Type: Grant
Filed: July 23, 2015
Date of Patent: March 13, 2018
Assignee: Cavium, Inc.
Inventors: Weihuang Wang, Dan Tu, Guy Hutchison, Prasanna Vetrivel
-
Patent number: 9910776
Abstract: Execution of the memory instructions is managed using memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from the one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in-progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in-progress operation depends on either or both of: (1) which of one or more modules initiated the particular in-progress operation, or (2) whether or not the particular in-progress operation provides results to the first cache or second cache.
Type: Grant
Filed: November 14, 2014
Date of Patent: March 6, 2018
Assignee: Cavium, Inc.
Inventors: Shubhendu Sekhar Mukherjee, Albert Ma, Mike Bertone
-
Patent number: 9906468
Abstract: A network processor controls packet traffic in a network by maintaining a count of pending packets. In the network processor, a pipe identifier (ID) is assigned to each of a number of paths connecting a packet output to respective network interfaces receiving those packets. A corresponding pipe ID is attached to each packet as it is transmitted. A counter employs the pipe ID to maintain a count of packets to be transmitted by a network interface. As a result, the network processor manages traffic on a per-pipe ID basis to ensure that traffic thresholds are not exceeded.
Type: Grant
Filed: October 27, 2011
Date of Patent: February 27, 2018
Assignee: Cavium, Inc.
Inventors: Richard E. Kessler, Michael Sean Bertone
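Per-pipe accounting of pending packets can be sketched as a counter keyed by pipe ID that gates admission against a threshold. The class, the threshold value, and the enqueue/transmitted interface are illustrative assumptions.

```python
# Sketch of per-pipe traffic accounting: each path carries a pipe ID,
# a per-pipe counter tracks pending packets, and admission stops once
# a pipe reaches its traffic threshold.

from collections import Counter

class PipeCounter:
    def __init__(self, threshold):
        self.pending = Counter()     # pipe ID -> packets awaiting transmit
        self.threshold = threshold

    def enqueue(self, pipe_id):
        if self.pending[pipe_id] >= self.threshold:
            return False             # pipe at its traffic threshold
        self.pending[pipe_id] += 1
        return True

    def transmitted(self, pipe_id):
        self.pending[pipe_id] -= 1   # interface finished sending the packet

pc = PipeCounter(threshold=2)
accepted = [pc.enqueue(7) for _ in range(3)]
print(accepted)          # [True, True, False]
pc.transmitted(7)        # one packet drains from pipe 7
print(pc.enqueue(7))     # True
```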
-
Patent number: 9904511
Abstract: An improved shifter design for high-speed data processors is described. The shifter may include a first stage, in which the input bits are shifted by increments of N bits where N>1, followed by a second stage, in which all bits are shifted by a residual amount. A pre-shift may be removed from an input to the shifter and replaced by a shift adder at the second stage to further increase the speed of the shifter.
Type: Grant
Filed: February 6, 2015
Date of Patent: February 27, 2018
Assignee: Cavium, Inc.
Inventors: Nitin Mohan, Ilan Pragaspathy
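The coarse-then-residual decomposition is easy to show: a shift of 12 with N = 8 becomes one coarse step of 8 bits plus a residual of 4. The choice of N = 8 and the function name are illustrative; the hardware's shift-adder optimization is not modeled here.

```python
# Sketch of the two-stage shift: stage 1 moves the value in coarse
# increments of N bits, stage 2 applies the residual amount.

def two_stage_shift_right(value, shift, n=8):
    coarse, residual = divmod(shift, n)
    value >>= coarse * n    # first stage: increments of N bits
    value >>= residual      # second stage: residual shift
    return value

x = 0xDEADBEEF
print(hex(two_stage_shift_right(x, 12)))        # 0xdeadb
print(two_stage_shift_right(x, 12) == x >> 12)  # True
```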
-
Patent number: 9904630
Abstract: A method, and corresponding apparatus and system are provided for optimizing matching of at least one regular expression pattern in an input stream by storing a context for walking a given node, of a plurality of nodes of a given finite automaton of at least one finite automaton, the store including a store determination, based on context state information associated with a first memory, for accessing the first memory and not a second memory or the first memory and the second memory. Further, to retrieve a pending context, the retrieval may include a retrieve determination, based on the context state information associated with the first memory, for accessing the first memory and not the second memory or the second memory and not the first memory. The first memory may have read and write access times that are faster relative to the second memory.
Type: Grant
Filed: January 31, 2014
Date of Patent: February 27, 2018
Assignee: Cavium, Inc.
Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa
-
Patent number: 9904305
Abstract: A low drop-out voltage regulator includes an error amplifier that generates an amplified error voltage, the error amplifier including a first input for receiving a reference voltage, a second input for receiving a feedback voltage, a bias terminal for receiving an adaptive bias current, and an output. A pass gate providing an output voltage includes a first input connected to a supply voltage and a second input connected to the error amplifier output. A feedback network generating the feedback voltage includes a first terminal connected to the output of the pass gate and a second terminal connected to the second input of the error amplifier. An adaptive bias network providing the adaptive bias current includes a first transistor connected to the bias terminal of the error amplifier, a second transistor connected to the first transistor as a current mirror, and a third transistor connected in parallel with the pass gate.
Type: Grant
Filed: April 29, 2016
Date of Patent: February 27, 2018
Assignee: Cavium, Inc.
Inventors: Jonathan K. Brown, JingDong Deng
-
Patent number: 9900253
Abstract: A data processing system includes a phantom queue for each of a plurality of output ports, each associated with an output link for outputting data. The phantom queues receive/monitor traffic on the respective ports and/or the associated links such that the congestion or traffic volume on the output ports/links is able to be determined by a congestion mapper coupled with the phantom queues. Based on the determined congestion level on each of the ports/links, the congestion mapper selects one or more non-congested or less congested ports/links as the destination of one or more packets. A link selection logic element then processes the packets according to the selected path or multi-path, thereby reducing congestion on the system.
Type: Grant
Filed: March 24, 2015
Date of Patent: February 20, 2018
Assignee: Cavium, Inc.
Inventor: Martin Leslie White
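The selection step can be sketched as picking the eligible port whose phantom queue is shallowest, then charging the steered packet back to that queue. The queue depths, port numbers, and function name are illustrative assumptions.

```python
# Sketch of congestion-aware link selection: phantom queues track
# per-port load, and the mapper steers each packet to the least
# congested eligible port.

def pick_port(phantom_queues, eligible):
    """Return the eligible port whose phantom queue is shallowest."""
    return min(eligible, key=lambda p: phantom_queues[p])

phantom = {0: 900, 1: 150, 2: 640, 3: 150}   # outstanding bytes per port

port = pick_port(phantom, eligible=[0, 1, 2])
print(port)                # 1: least congested of the eligible ports
phantom[port] += 300       # account for the packet just steered there

print(pick_port(phantom, eligible=[1, 2, 3]))  # 3: now beats loaded port 1
```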
-
Patent number: 9880844
Abstract: Embodiments of the present invention relate to fast and conditional data modification and generation in a software-defined network (SDN) processing engine. Modification of multiple inputs and generation of multiple outputs can be performed in parallel. A size of each input or output data can be large, such as in hundreds of bytes. The processing engine includes a control path and a data path. The control path generates instructions for modifying inputs and generating new outputs. The data path executes all instructions produced by the control path. The processing engine is typically programmable such that conditions and rules for data modification and generation can be reconfigured depending on network features and protocols supported by the processing engine. The SDN processing engine allows for processing multiple large-size data flows and is efficient in manipulating such data. The SDN processing engine achieves full throughput with multiple back-to-back input and output data flows.
Type: Grant
Filed: December 30, 2013
Date of Patent: January 30, 2018
Assignee: CAVIUM, INC.
Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Mohan Balan
-
Patent number: 9882678
Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. The process, in one aspect, is able to receive a data stream formatted with soft information from a communication network such as a wireless network. After identifying a set of bits representing a first logic value from a portion of the data stream in accordance with a predefined soft encoding scheme, the set of bits is compressed into a compressed set of bits. The compressed set of bits, which represents the first logic value, is subsequently stored in a local memory.
Type: Grant
Filed: September 23, 2014
Date of Patent: January 30, 2018
Assignee: CAVIUM, INC.
Inventor: Mehran Nekuii
-
Patent number: 9871733
Abstract: A policer system on one or more place and/or route blocks. The policer system includes a plurality of local physical policers, each stored in a plurality of physical memory banks and coupled with a plurality of global policers stored in one or more global banks separate from the physical banks. Thus, each bank of the global policers is able to represent a logical combination of a plurality of the physical banks of physical policers.
Type: Grant
Filed: April 1, 2015
Date of Patent: January 16, 2018
Assignee: Cavium, Inc.
Inventors: Srinath Atluri, Weihuang Wang, Weinan Ma
-
Patent number: 9870173
Abstract: An optimized design of n-write/1-read port memory comprises a memory unit including a plurality of memory banks each having one write port and one read port configured to write data to and read data from the memory banks, respectively. The memory further comprises a plurality of write interfaces configured to carry concurrent write requests to the memory unit for a write operation, wherein the first write request is always presented by its write interface directly to a crossbar, and wherein the rest of the write requests are each fed through a set of temporary memory modules connected in a sequence before being presented to the crossbar. The crossbar is configured to accept the first write request directly, fetch the rest of the write requests from one of the memory modules in the set, and route each of the write requests to one of the memory banks in the memory unit.
Type: Grant
Filed: February 19, 2016
Date of Patent: January 16, 2018
Assignee: CAVIUM, INC.
Inventors: Saurin Patel, Weihuang Wang