Patents Assigned to Cavium, Inc.
  • Patent number: 9952799
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: April 24, 2018
    Assignee: Cavium, Inc.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
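    Illustrative sketch: a minimal C model of the reconfigurable parallel-lookup idea above, where each lookup owns a non-overlapping set of memory tiles and is configured as either hash-based or direct-access. The tile count, tile size, and hash function are assumptions, not taken from the patent.
      #include <stdint.h>

      #define TILE_WORDS 1024              /* assumed tile capacity */

      struct tile { uint32_t mem[TILE_WORDS]; };

      struct lookup_cfg {
          struct tile *tiles;              /* first tile allocated to this lookup        */
          unsigned     ntiles;             /* tiles reserved for this lookup exclusively */
          int          hash_based;         /* 1 = hash-based, 0 = direct-access          */
      };

      /* toy hash standing in for the hardware hash unit */
      static uint32_t toy_hash(uint32_t key) { return key * 2654435761u; }

      uint32_t do_lookup(const struct lookup_cfg *cfg, uint32_t key)
      {
          uint32_t capacity = cfg->ntiles * TILE_WORDS;
          uint32_t index = cfg->hash_based ? toy_hash(key) % capacity
                                           : key % capacity;   /* direct access */
          return cfg->tiles[index / TILE_WORDS].mem[index % TILE_WORDS];
      }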
  • Patent number: 9952979
    Abstract: Systems and methods for a direct memory access (DMA) operation are provided. The method includes receiving a host memory address by a device coupled to a computing device; storing the host memory address at a device memory by a DMA engine; receiving a packet at the device for the computing device; instructing the DMA engine by a device processor to retrieve the host memory address from the device memory; retrieving the host memory address by the DMA engine without the device processor reading the host memory address; and transferring the packet to the computing device by a DMA operation.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: April 24, 2018
    Assignee: Cavium, Inc.
    Inventor: Abhishek Mukherjee
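    Illustrative sketch: a rough C model of the claimed flow, where the host memory address is stored in device memory ahead of time so that, when a packet arrives, the device processor only triggers the DMA engine and never reads the address itself. Structure and function names are illustrative.
      #include <stdint.h>
      #include <stddef.h>

      struct dma_engine { uint64_t host_addr_slot; };   /* device-resident copy */

      /* step 1: the host memory address is stored at the device memory */
      void store_host_addr(struct dma_engine *dma, uint64_t host_addr)
      {
          dma->host_addr_slot = host_addr;
      }

      /* steps 2-4: on packet arrival the device processor only issues a "go";
       * the engine pulls the address from its own memory and moves the data. */
      void on_packet(struct dma_engine *dma, const void *pkt, size_t len,
                     void (*dma_write)(uint64_t host_addr, const void *src, size_t n))
      {
          dma_write(dma->host_addr_slot, pkt, len);     /* transfer to host memory */
      }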
  • Patent number: 9946671
    Abstract: Methods and systems for processing input/output (I/O) requests are provided. The method includes generating an I/O request by an initiator adapter of a computing device that interfaces with a target adapter, and indicating, by the initiator adapter, that the I/O request is sequential in nature. When the I/O request is a sequential read request, the target adapter notifies a target controller to read ahead data associated with other sequential read requests and stores the read-ahead data at a cache such that data for the other sequential read requests is provided from the cache instead of a storage device managed by the target controller. A sequential write request is processed without claiming any cache space when data for the write request is not to be accessed within a certain duration.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: April 17, 2018
    Assignee: Cavium, Inc.
    Inventors: Deepak Tawri, Vijay Thurpati
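    Illustrative sketch: a small C model of the target-side decision described above: sequential reads trigger a read-ahead into the cache, while sequential writes bypass the cache entirely. The read-ahead depth and helper names are assumptions.
      #include <stdbool.h>
      #include <stdint.h>

      struct io_req {
          uint64_t lba;        /* starting block                    */
          uint32_t nblocks;    /* request length in blocks          */
          bool     is_write;
          bool     sequential; /* hint set by the initiator adapter */
      };

      /* placeholders for the target controller's cache and media paths */
      static void cache_fill(uint64_t lba, uint32_t nblocks) { (void)lba; (void)nblocks; }
      static void media_write(uint64_t lba, uint32_t nblocks) { (void)lba; (void)nblocks; }

      void target_handle(const struct io_req *r)
      {
          if (!r->is_write && r->sequential) {
              /* read ahead the blocks the following sequential reads will want,
               * so they can be served from cache instead of the storage device */
              cache_fill(r->lba + r->nblocks, 4 * r->nblocks);  /* depth is a guess */
          } else if (r->is_write && r->sequential) {
              /* sequential write: go straight to media, claim no cache space */
              media_write(r->lba, r->nblocks);
          }
      }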
  • Patent number: 9948482
    Abstract: A network switch to support flexible lookup key generation comprises a control CPU configured to run a network switch control stack. The network switch control stack is configured to manage and control operations of a switching logic circuitry, provide a flexible key having a plurality of possible fields that constitute part of a lookup key to a table, and enable a user to dynamically select, at deployment or runtime, a subset of the fields in the flexible key to form the lookup key and thus define a lookup key format for the table. The switching logic circuitry, provisioned and controlled by the network switch control stack, is configured to maintain said table to be searched via the lookup key in a memory cluster and to process a received data packet based on a search result of the table using the lookup key generated from the dynamically selected fields in the flexible key.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: April 17, 2018
    Assignee: Cavium, Inc.
    Inventors: Leonid Livak, Ravindran Suresh, Zubin Shah, Sunita Bhaskaran, Ashwini Reddy
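    Illustrative sketch: a minimal C model of forming a table's lookup key from only the fields selected out of a flexible key. The particular field set and selection mask are assumptions; the real fields are defined by the switch control stack.
      #include <stdint.h>
      #include <stddef.h>
      #include <string.h>

      /* illustrative flexible-key fields */
      enum { F_SMAC = 1 << 0, F_DMAC = 1 << 1, F_VLAN = 1 << 2, F_ETYPE = 1 << 3 };

      struct flex_key { uint8_t smac[6], dmac[6]; uint16_t vlan, etype; };

      /* Build the lookup key from only the fields selected at deployment or
       * runtime; the selection mask effectively defines the key format. */
      size_t build_key(const struct flex_key *fk, uint32_t sel, uint8_t *out)
      {
          size_t n = 0;
          if (sel & F_SMAC)  { memcpy(out + n, fk->smac, 6);   n += 6; }
          if (sel & F_DMAC)  { memcpy(out + n, fk->dmac, 6);   n += 6; }
          if (sel & F_VLAN)  { memcpy(out + n, &fk->vlan, 2);  n += 2; }
          if (sel & F_ETYPE) { memcpy(out + n, &fk->etype, 2); n += 2; }
          return n;   /* key length depends on the selected fields */
      }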
  • Patent number: 9936021
    Abstract: Systems and methods for storage operations are provided.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: April 3, 2018
    Assignee: Cavium, Inc.
    Inventors: Sudhir Ponnachana, Ajmer Singh, Ronald B. Gregory
  • Patent number: 9933809
    Abstract: Pacing of a producer, operating in a producer clock domain, may be based on at least one heuristic of a credit wire that is used to return credits to the producer. The returned credits may indicate that a consumer, operating in a consumer clock domain, has consumed data produced by the producer. The at least one heuristic may be a rate at which the credits are returned to the producer. Pacing the producer based on the rate at which the credits are returned to the producer may reduce latency of the data, flowing from the producer clock domain to the consumer clock domain, by minimizing an average number of entries in use in a First-In-First-Out (FIFO) operating in a pipeline between the producer and the consumer.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: April 3, 2018
    Assignee: Cavium, Inc.
    Inventor: Steven C. Barner
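    Illustrative sketch: a small C model of pacing a producer from the observed credit-return rate so the FIFO between the two clock domains stays nearly empty. The fixed-point rate estimate and the "match the return rate" rule are assumptions about how the heuristic could be applied.
      #include <stdint.h>

      struct pacer {
          uint32_t credits;       /* credits currently held by the producer         */
          uint32_t ret_rate_q8;   /* measured credit-return rate, 8.8 fixed point   */
          uint32_t prod_rate_q8;  /* production rate the producer paces itself to   */
      };

      /* called whenever credits come back on the credit wire;
       * interval_cycles (> 0) is the time since the previous return */
      void on_credit_return(struct pacer *p, uint32_t n, uint32_t interval_cycles)
      {
          p->credits += n;
          /* heuristic: estimate the consumer's drain rate from the return rate */
          p->ret_rate_q8 = (n << 8) / interval_cycles;
          /* pace production to roughly match it, keeping the FIFO near empty */
          p->prod_rate_q8 = p->ret_rate_q8;
      }

      int may_produce(const struct pacer *p)
      {
          return p->credits > 0;  /* never produce without a credit */
      }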
  • Patent number: 9936003
    Abstract: Methods and systems for transmitting information are provided. A threshold message size is configured to determine when an application executed by a computing system can send a latency message identifying a memory location from where a device can procure a payload for transmission to a destination. The computing system sends a latency message to the device, where the latency message includes the memory location, a transfer size and an indicator indicating if the application wants a completion status after the latency message is processed. The computing system stores connection information at a location dedicated to the application that sends the latency message. The device transmits the payload to the destination; and posts a completion status, where the device posts the completion status at a completion queue associated with the application with information that enables the application to determine whether other latency messages can be posted.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: April 3, 2018
    Assignee: Cavium, Inc.
    Inventor: Kanoj Sarcar
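    Illustrative sketch: a small C model of the host-side decision: payloads at or below a configured threshold are described by a latency message (memory location, transfer size, completion indicator) instead of using the normal send path. The threshold value and structure names are assumptions.
      #include <stdint.h>
      #include <stdbool.h>

      #define LATENCY_MSG_THRESHOLD 512    /* assumed threshold, in bytes */

      struct latency_msg {
          uint64_t payload_addr;    /* memory location the device procures the payload from */
          uint32_t transfer_size;   /* bytes to transmit                                     */
          bool     want_completion; /* post a completion status after processing?            */
      };

      /* stand-in for the doorbell/MMIO write that hands the message to the device */
      static void post_to_device(const struct latency_msg *m) { (void)m; }

      bool send(uint64_t addr, uint32_t len, bool want_cqe)
      {
          if (len > LATENCY_MSG_THRESHOLD)
              return false;                /* fall back to the normal send path */
          struct latency_msg m = { addr, len, want_cqe };
          post_to_device(&m);              /* device pulls the payload and transmits it */
          return true;
      }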
  • Patent number: 9934177
    Abstract: Methods and systems for efficiently processing input/output requests are provided. A network interface card (NIC) is coupled to a storage device via a peripheral link and accessible to a processor of a computing device executing instructions out of a memory device. The NIC is configured to receive a read/write request to read/write data; translate the read/write request to a storage device protocol used by the storage device coupled to the NIC; notify the storage device of the read/write request, without using the processor of the computing device, where the storage device reads/writes the data and notifies the NIC; and then the NIC prepares a response to the read/write request without having to use the processor of the computing device.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: April 3, 2018
    Assignee: Cavium, Inc.
    Inventors: Nir Goren, Rafi Shalom, Kobby Carmona
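    Illustrative sketch: a small C model of the translation step, where the NIC converts a network read/write request into a storage-device command and submits it directly over the peripheral link, keeping the host processor out of the path. The command layout and opcodes are generic assumptions, not a specific storage protocol.
      #include <stdint.h>
      #include <stdbool.h>

      struct net_io_req  { bool is_write; uint64_t lba; uint32_t nblocks; uint64_t buf; };
      struct storage_cmd { uint8_t opcode; uint64_t lba; uint32_t nblocks; uint64_t buf; };

      /* placeholders for the peripheral-link submission and the NIC's reply path */
      static void submit_to_storage(const struct storage_cmd *c) { (void)c; }
      static void nic_send_response(const struct net_io_req *r)  { (void)r; }

      /* runs entirely on the NIC: translate, notify the storage device, respond */
      void nic_handle_io(const struct net_io_req *r)
      {
          struct storage_cmd c = {
              .opcode  = r->is_write ? 0x01 : 0x02,   /* illustrative opcodes */
              .lba     = r->lba,
              .nblocks = r->nblocks,
              .buf     = r->buf,
          };
          submit_to_storage(&c);    /* no host-processor involvement */
          nic_send_response(r);     /* NIC prepares the reply itself */
      }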
  • Patent number: 9928193
    Abstract: A silicon device is configured to distribute a global timer value over a single serial bus to a plurality of processing elements that are disposed on the silicon device and that are coupled to the serial bus. Each of the processing elements comprises a slave timer. Upon receipt of the global timer value, the processing elements synchronize their respective slave timers with the global timer value. After the timers are synchronized, the global timer sends periodic increment signals to each of the processing elements. Upon receipt of the increment signals, the processing elements update their respective slave timers.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: March 27, 2018
    Assignee: Cavium, Inc.
    Inventors: Frank Worrell, Bryan W. Chin
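    Illustrative sketch: the two-phase scheme above in C, with a one-time load of the broadcast global value into each slave timer followed by increments on each periodic pulse. The number of processing elements and timer width are assumptions.
      #include <stdint.h>

      #define NUM_PE 16                /* assumed number of processing elements */

      static uint64_t slave_timer[NUM_PE];

      /* phase 1: global value distributed over the serial bus; each PE loads it */
      void sync_slaves(uint64_t global_value)
      {
          for (int i = 0; i < NUM_PE; i++)
              slave_timer[i] = global_value;
      }

      /* phase 2: after synchronization, only a periodic increment signal is sent */
      void on_increment_pulse(void)
      {
          for (int i = 0; i < NUM_PE; i++)
              slave_timer[i]++;
      }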
  • Patent number: 9916274
    Abstract: An on-chip crossbar of a network switch comprises a central arbitration component configured to allocate packet data requests received from destination port groups to memory banks. The on-chip crossbar further comprises a Benes routing network comprising a forward network having a plurality of pipelined forward routing stages and a reverse network, wherein the Benes routing network retrieves the packet data from the memory banks coupled to the input of the Benes routing network and routes the packet data to the port groups coupled to the output of the Benes routing network. The on-chip crossbar further comprises a plurality of stage routing control units, each associated with one of the forward routing stages and configured to generate and provide a plurality of node control signals to control routing of the packet data through the forward routing stages, to avoid contention between the packet data retrieved from different memory banks at the same time.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: March 13, 2018
    Assignee: Cavium, Inc.
    Inventors: Weihuang Wang, Dan Tu, Guy Hutchison, Prasanna Vetrivel
  • Patent number: 9910776
    Abstract: Execution of memory instructions is managed using memory management circuitry including a first cache that stores a plurality of mappings in a page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from the one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in-progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in-progress operation depends on either or both of: (1) which of one or more modules initiated the particular in-progress operation, or (2) whether or not the particular in-progress operation provides results to the first cache or second cache.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: March 6, 2018
    Assignee: Cavium, Inc.
    Inventors: Shubhendu Sekhar Mukherjee, Albert Ma, Mike Bertone
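    Illustrative sketch: a small C model of the selective ordering decision: when an invalidate arrives, each in-progress operation is ordered before or after it depending on which module issued it and whether it will fill the first (page-table) cache or the second (virtual-address) cache. The two predicates below are assumptions about how such a rule could look, not the patent's exact rule.
      #include <stdbool.h>

      enum initiator { MOD_CORE, MOD_IO };

      struct inflight_op {
          enum initiator who;
          bool fills_first_cache;    /* will install a page-table mapping          */
          bool fills_second_cache;   /* will install a virtual-address-based entry */
      };

      /* Decide whether an in-progress operation must be ordered ahead of the
       * invalidate of a virtual address (i.e. drained or cancelled first). */
      bool order_before_invalidate(const struct inflight_op *op)
      {
          if (op->fills_first_cache || op->fills_second_cache)
              return true;                /* could install a stale mapping     */
          return op->who == MOD_CORE;     /* position depends on the initiator */
      }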
  • Patent number: 9904511
    Abstract: An improved shifter design for high-speed data processors is described. The shifter may include a first stage, in which the input bits are shifted by increments of N bits where N>1, followed by a second stage, in which all bits are shifted by a residual amount. A pre-shift may be removed from an input to the shifter and replaced by a shift adder at the second stage to further increase the speed of the shifter.
    Type: Grant
    Filed: February 6, 2015
    Date of Patent: February 27, 2018
    Assignee: Cavium, Inc.
    Inventors: Nitin Mohan, Ilan Pragaspathy
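    Illustrative sketch: the two-stage shift in C, with a coarse stage shifting by multiples of N bits and a fine stage shifting by the residue; the pre-shift is folded into the second stage by adding it to the residual amount, which is one reading of the "shift adder" in the abstract. N = 8 is an arbitrary choice, and each partial shift is assumed to stay below the 64-bit word width.
      #include <stdint.h>

      #define N 8   /* coarse-stage granularity; illustrative */

      uint64_t two_stage_shr(uint64_t x, unsigned shift, unsigned pre_shift)
      {
          unsigned coarse = (shift / N) * N;        /* stage 1: multiples of N   */
          unsigned fine   = (shift % N) + pre_shift;/* stage 2: residue + adder  */
          return (x >> coarse) >> fine;             /* == x >> (shift+pre_shift) */
      }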
  • Patent number: 9904305
    Abstract: A low drop-out voltage regulator includes an error amplifier that generates an amplified error voltage, the error amplifier including a first input for receiving a reference voltage, a second input for receiving a feedback voltage, a bias terminal for receiving an adaptive bias current, and an output. A pass gate providing an output voltage includes a first input connected to a supply voltage and a second input connected to the error amplifier output. A feedback network generating the feedback voltage includes a first terminal connected to the output of the pass gate and a second terminal connected to the second input of the error amplifier. An adaptive bias network providing the adaptive bias current includes a first transistor connected to the bias terminal of the error amplifier, a second transistor connected to the first transistor as a current mirror, and a third transistor connected in parallel with the pass gate.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: February 27, 2018
    Assignee: Cavium, Inc.
    Inventors: Jonathan K. Brown, JingDong Deng
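    Illustrative relation: assuming the feedback network is a conventional resistor divider ($R_1$ from the pass-gate output to the feedback node, $R_2$ from that node to ground), which the abstract does not specify, the loop drives $V_{FB}$ toward $V_{REF}$ and the regulated output settles at
      $V_{FB} = V_{OUT}\,\dfrac{R_2}{R_1 + R_2}, \qquad V_{OUT} \approx V_{REF}\left(1 + \dfrac{R_1}{R_2}\right).$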
  • Patent number: 9904630
    Abstract: A method, and corresponding apparatus and system are provided for optimizing matching of at least one regular expression pattern in an input stream by storing a context for walking a given node, of a plurality of nodes of a given finite automaton of at least one finite automaton, the store including a store determination, based on context state information associated with a first memory, for accessing the first memory and not a second memory or the first memory and the second memory. Further, to retrieve a pending context, the retrieval may include a retrieve determination, based on the context state information associated with the first memory, for accessing the first memory and not the second memory or the second memory and not the first memory. The first memory may have read and write access times that are faster relative to the second memory.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: February 27, 2018
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa
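    Illustrative sketch: a small C model of the store/retrieve determination, where per-context state records which memory holds a pending context so that the retrieve touches only that memory. The selection rule, structure fields, and sizes are assumptions.
      #include <stdbool.h>
      #include <stdint.h>

      #define MAX_CTX 64

      struct ctx { uint32_t node_id; uint32_t payload_offset; };  /* simplified walk state */

      static struct ctx fast_mem[MAX_CTX];   /* first memory: faster read/write access */
      static struct ctx slow_mem[MAX_CTX];   /* second memory: larger but slower       */
      static bool in_fast[MAX_CTX];          /* context state info: where each pending context lives */

      /* store determination: here, low-numbered contexts go to the fast memory;
       * the real selection rule is not given by the abstract */
      void store_ctx(int id, const struct ctx *c)
      {
          in_fast[id] = (id < MAX_CTX / 2);
          if (in_fast[id]) fast_mem[id] = *c;
          else             slow_mem[id] = *c;
      }

      /* retrieve determination: consult the state info and access only the
       * memory that actually holds the pending context */
      struct ctx retrieve_ctx(int id)
      {
          return in_fast[id] ? fast_mem[id] : slow_mem[id];
      }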
  • Patent number: 9906468
    Abstract: A network processor controls packet traffic in a network by maintaining a count of pending packets. In the network processor, a pipe identifier (ID) is assigned to each of a number of paths connecting a packet output to respective network interfaces receiving those packets. A corresponding pipe ID is attached to each packet as it is transmitted. A counter employs the pipe ID to maintain a count of packets to be transmitted by a network interface. As a result, the network processor manages traffic on a per-pipe ID basis to ensure that traffic thresholds are not exceeded.
    Type: Grant
    Filed: October 27, 2011
    Date of Patent: February 27, 2018
    Assignee: Cavium, Inc.
    Inventors: Richard E. Kessler, Michael Sean Bertone
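    Illustrative sketch: per-pipe accounting in C, where each packet carries a pipe ID, a counter per pipe tracks packets not yet transmitted, and admission is gated by a threshold. The pipe count and threshold are assumptions.
      #include <stdint.h>
      #include <stdbool.h>

      #define NUM_PIPES      64
      #define PIPE_THRESHOLD 256           /* max pending packets per pipe; assumed */

      static uint32_t pending[NUM_PIPES];  /* packets queued but not yet on the wire */

      /* called when the packet output queues a packet; pipe_id < NUM_PIPES */
      bool pipe_admit(uint8_t pipe_id)
      {
          if (pending[pipe_id] >= PIPE_THRESHOLD)
              return false;                /* hold traffic for this pipe */
          pending[pipe_id]++;
          return true;
      }

      /* called when the network interface has transmitted the packet */
      void pipe_sent(uint8_t pipe_id)
      {
          pending[pipe_id]--;
      }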
  • Patent number: 9900253
    Abstract: A data processing system includes a phantom queue for each of a plurality of output ports, each associated with an output link for outputting data. The phantom queues receive or monitor traffic on the respective ports and/or the associated links such that the congestion or traffic volume on the output ports and links can be determined by a congestion mapper coupled with the phantom queues. Based on the determined congestion level on each of the ports and links, the congestion mapper selects one or more uncongested or less congested ports/links as the destination of one or more packets. A link selection logic element then processes the packets according to the selected path or multi-path, thereby reducing congestion on the system.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: February 20, 2018
    Assignee: Cavium, Inc.
    Inventor: Martin Leslie White
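    Illustrative sketch: a small C model of the congestion mapper, using phantom-queue occupancy as the per-port congestion metric and picking the least congested candidate port for the next packet. The metric and candidate-set handling are assumptions.
      #include <stdint.h>

      #define NUM_PORTS 32

      /* phantom-queue occupancy per output port/link, updated from observed traffic */
      static uint32_t phantom_depth[NUM_PORTS];

      /* among the candidate ports for a packet (e.g. a multi-path group),
       * select the one whose phantom queue is shallowest */
      int select_port(const int *candidates, int ncand)
      {
          int best = candidates[0];
          for (int i = 1; i < ncand; i++)
              if (phantom_depth[candidates[i]] < phantom_depth[best])
                  best = candidates[i];
          return best;
      }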
  • Patent number: 9880844
    Abstract: Embodiments of the present invention relate to fast and conditional data modification and generation in a software-defined network (SDN) processing engine. Modification of multiple inputs and generation of multiple outputs can be performed in parallel. Each input or output can be large, such as hundreds of bytes. The processing engine includes a control path and a data path. The control path generates instructions for modifying inputs and generating new outputs. The data path executes all instructions produced by the control path. The processing engine is typically programmable such that conditions and rules for data modification and generation can be reconfigured depending on network features and protocols supported by the processing engine. The SDN processing engine allows for processing multiple large-size data flows and is efficient in manipulating such data. The SDN processing engine achieves full throughput with multiple back-to-back input and output data flows.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: January 30, 2018
    Assignee: Cavium, Inc.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Mohan Balan
  • Patent number: 9882678
    Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. The process, in one aspect, is able to receive a data stream formatted with soft information from a communication network such as a wireless network. After identifying a set of bits representing a first logic value from a portion of the data stream in accordance with a predefined soft encoding scheme, the set of bits is compressed into a compressed set of bits. The compressed set of bits, which represents the first logic value, is subsequently stored in a local memory.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: January 30, 2018
    Assignee: Cavium, Inc.
    Inventor: Mehran Nekuii
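    Illustrative sketch: a small C model of collapsing a group of soft values that all decode to the same logic value into a compact token before storing it locally. The run-length token and the threshold test are stand-ins for the patent's predefined soft encoding scheme.
      #include <stdint.h>
      #include <stddef.h>

      /* (value, count) token replacing a run of raw soft bits; format is illustrative */
      struct token { uint8_t logic_value; uint16_t run_length; };

      size_t compress_run(const uint8_t *soft, size_t n, uint8_t threshold,
                          struct token *out)
      {
          size_t i = 0;
          /* count leading soft values that decode to logic '1' (>= threshold) */
          while (i < n && soft[i] >= threshold)
              i++;
          out->logic_value = 1;
          out->run_length  = (uint16_t)i;
          return i;                 /* soft values consumed from the stream */
      }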
  • Patent number: 9870204
    Abstract: A processing network includes a plurality of lookup and decision engines (LDEs), each having one or more configuration registers, and a plurality of on-chip routers forming a matrix for routing data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further includes an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values, based on input source code, that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: January 16, 2018
    Assignee: Cavium, Inc.
    Inventors: Ajeer Salil Pudiyapura, Kishore Badari Atreya, Ravindran Suresh
  • Patent number: 9870328
    Abstract: Communicating among multiple sets of multiple cores includes: buffering messages in a first buffer associated with a first set of multiple cores; buffering messages in a second buffer associated with a second set of multiple cores; and transferring messages over communication circuitry from cores not in the first set to the first buffer, and from cores not in the second set to the second buffer. A first core of the first set sends messages corresponding to multiple types of instructions to a second core of the second set through the communication circuitry. The second buffer is large enough to store a maximum number of instructions of a second type that are allowed to be outstanding from cores in the first set at the same time, and still have enough storage space for one or more instructions of a first type.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: January 16, 2018
    Assignee: Cavium, Inc.
    Inventors: Shubhendu Sekhar Mukherjee, David Asher, Bradley Dobbie, Thomas Hummel, Daniel Dever
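    Illustrative sketch: the buffer-sizing rule above as a small C check: the receiving buffer must hold every type-2 instruction the other set of cores may have outstanding and still leave room for at least one type-1 instruction, so type-1 traffic can never be blocked behind a full complement of type-2 traffic. The specific numbers are assumptions.
      #include <assert.h>

      #define MAX_OUTSTANDING_TYPE2  48   /* assumed limit on outstanding type-2 instructions */
      #define MIN_TYPE1_SLOTS         4   /* space reserved for type-1 instructions           */
      #define BUFFER_ENTRIES         52   /* second buffer capacity                           */

      int main(void)
      {
          /* the buffer absorbs all possible type-2 instructions and still has
           * storage space for one or more type-1 instructions */
          assert(BUFFER_ENTRIES >= MAX_OUTSTANDING_TYPE2 + MIN_TYPE1_SLOTS);
          return 0;
      }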