Patents by Inventor Rajan Goyal

Rajan Goyal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200019339
    Abstract: An example processing device includes a memory including a deterministic finite automata (DFA) buffer configured to store at least a portion of a DFA graph, the DFA graph comprising a plurality of nodes, each of the nodes having zero or more arcs each including a respective label and pointing to a respective subsequent node of the plurality of nodes, at least one of the plurality of nodes comprising a match node, wherein the at least a portion of the DFA graph comprises one or more slots of a memory slice, the one or more slots comprising data representing one or more of the arcs for at least one node of the plurality of nodes, and a DFA engine implemented in circuitry, the DFA engine comprising one or more DFA threads implemented in circuitry and configured to evaluate a payload relative to the DFA graph.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 16, 2020
    Inventors: Yi-Hua Edward Yang, Rajan Goyal, Eric Scot Swartzendruber
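    Illustrative sketch: The abstract above describes a DFA graph of labeled arcs walked by a DFA engine; the following minimal Python model (class and function names are invented for illustration, not taken from the patent) shows the basic idea of following per-symbol arcs through such a graph and reporting match nodes.
      class DfaNode:
          def __init__(self, node_id, is_match=False):
              self.node_id = node_id
              self.is_match = is_match
              self.arcs = {}                    # label (one byte) -> next DfaNode

          def add_arc(self, label, next_node):
              self.arcs[label] = next_node

      def walk(root, payload):
          """Evaluate a payload against the DFA graph; return offsets of match nodes reached."""
          matches, node = [], root
          for offset, symbol in enumerate(payload):
              node = node.arcs.get(symbol)
              if node is None:                  # no arc labeled with this symbol: walk ends
                  break
              if node.is_match:
                  matches.append(offset)
          return matches

      # Tiny example graph that matches the pattern "ab" starting at offset 0.
      root, n_a, n_ab = DfaNode(0), DfaNode(1), DfaNode(2, is_match=True)
      root.add_arc(ord("a"), n_a)
      n_a.add_arc(ord("b"), n_ab)
      print(walk(root, b"ab"))                  # -> [1]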
  • Publication number: 20200021664
    Abstract: A DFA engine is described that determines whether a current symbol of a payload matches a label of any effective arcs or negative arcs associated with a current node of a DFA graph that are stored in a cache. Responsive to determining that the current symbol does not match a label of any effective or negative arcs associated with the current node of the DFA graph, the DFA engine determines whether the current symbol matches a label of any arc associated with the current node of the DFA graph that is stored in a memory. Responsive to determining that the current symbol matches a label of a particular arc associated with the current node of the DFA graph that is stored in the memory, the DFA engine stores the particular arc in the cache as a new effective arc and uses the particular arc to evaluate the current symbol.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 16, 2020
    Inventors: Rajan Goyal, Yi-Hua Edward Yang, Satyanarayana Lakshmipathi Billa, Eric Scot Swartzendruber
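    Illustrative sketch: The cache-first lookup order in the abstract above (effective arcs, negative arcs, then the full arc set in memory, with memory hits promoted into the cache) can be modeled in a few lines of Python; all names here are illustrative, not from the patent.
      class ArcCacheDfa:
          def __init__(self, arcs_in_memory):
              self.memory = arcs_in_memory          # node_id -> {label: next_node_id}
              self.effective = {}                   # (node_id, label) -> next_node_id
              self.negative = set()                 # (node_id, label) known to have no arc

          def next_node(self, node_id, symbol):
              key = (node_id, symbol)
              if key in self.effective:             # cached effective arc
                  return self.effective[key]
              if key in self.negative:              # cached negative arc: known miss
                  return None
              next_id = self.memory.get(node_id, {}).get(symbol)
              if next_id is not None:
                  self.effective[key] = next_id     # promote memory hit to effective arc
              else:
                  self.negative.add(key)            # remember the miss as a negative arc
              return next_id

      dfa = ArcCacheDfa({0: {ord("a"): 1}, 1: {ord("b"): 2}})
      print(dfa.next_node(0, ord("a")))             # 1, read from memory and cached
      print(dfa.next_node(0, ord("a")))             # 1, served from the effective-arc cache
      print(dfa.next_node(0, ord("z")))             # None, recorded as a negative arc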
  • Publication number: 20200019391
    Abstract: A compiler/loader unit for a RegEx accelerator is described that receives a first set of regular expression rules for implementing the RegEx accelerator, generates, based on the first set of regular expression rules, an initial deterministic finite automata (DFA) graph, and generates an initial memory map for allocating the initial DFA graph to a memory of the RegEx accelerator. The compiler/loader unit receives a second set of one or more new or modified regular expression rules for implementing the RegEx accelerator and, in response, performs incremental compilation of the second set of regular expression rules. The compiler/loader unit generates, based on the second set of one or more regular expression rules, a supplemental DFA graph and reconciles the initial DFA graph with the supplemental DFA graph to generate an updated memory map for allocating the initial DFA graph and the supplemental DFA graph to the memory of the RegEx accelerator.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 16, 2020
    Inventors: Yi-Hua Edward Yang, Satyanarayana Lakshmipathi Billa, Rajan Goyal, Abhishek Kumar Dikshit
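    Illustrative sketch: The incremental-compilation flow above (compile only the new rules into a supplemental graph and reconcile the memory maps) is sketched below in Python; the toy "compiler" and map layout are assumptions made for illustration only.
      def compile_rules(rules):
          """Stand-in compiler: one pseudo-node per literal symbol of each rule."""
          return {rule: [f"{rule}:{i}" for i in range(len(rule))] for rule in rules}

      def build_memory_map(graph, base=0):
          """Allocate consecutive memory slots for each rule's nodes, starting at base."""
          memory_map, offset = {}, base
          for rule, nodes in graph.items():
              memory_map[rule] = (offset, len(nodes))
              offset += len(nodes)
          return memory_map, offset

      initial_graph = compile_rules(["abc", "de"])
      initial_map, next_free = build_memory_map(initial_graph)

      # Incremental update: compile only the new rule and place its supplemental
      # graph after the region already allocated to the initial graph.
      supplemental_map, _ = build_memory_map(compile_rules(["xyz"]), base=next_free)
      updated_map = {**initial_map, **supplemental_map}
      print(updated_map)    # {'abc': (0, 3), 'de': (3, 2), 'xyz': (5, 3)}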
  • Publication number: 20200019404
    Abstract: An example processing device includes a memory including a non-deterministic finite automata (NFA) buffer configured to store a plurality of instructions defining an ordered sequence of instructions of at least a portion of an NFA graph, the portion of the NFA graph comprising a plurality of nodes arranged along a plurality of paths, and an NFA engine implemented in circuitry. The NFA engine determines a current symbol and one or more subsequent symbols of a payload segment that satisfy a match condition specified by a subset of instructions of the plurality of instructions for a path of the plurality of paths and, in response to determining the current symbol and the one or more subsequent symbols of the payload segment that satisfy the match condition, outputs an indication that the payload data has resulted in a match.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 16, 2020
    Inventors: Satyanarayana Lakshmipathi Billa, Rajan Goyal, Abhishek Kumar Dikshit, Yi-Hua Edward Yang, Sandipkumar J. Ladhani
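    Illustrative sketch: One path of such an NFA graph can be read as an ordered list of per-symbol match conditions; the Python below (names and conditions invented for illustration) reports the payload offsets where consecutive symbols satisfy all conditions of the path in order.
      def char_class(allowed):
          allowed = set(allowed)
          return lambda symbol: symbol in allowed

      # One NFA path as an ordered instruction sequence: letter, digit, then '!'.
      path_instructions = [char_class(b"ab"), char_class(b"0123456789"), char_class(b"!")]

      def path_matches(instructions, payload, start):
          if start + len(instructions) > len(payload):
              return False
          return all(cond(payload[start + i]) for i, cond in enumerate(instructions))

      def scan(instructions, payload):
          """Return every offset at which the path's match condition is satisfied."""
          return [i for i in range(len(payload)) if path_matches(instructions, payload, i)]

      print(scan(path_instructions, b"xx a7! b9? a0!"))   # -> [3, 11]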
  • Patent number: 10511324
    Abstract: A highly programmable data processing unit includes multiple processing units for processing streams of information, such as network packets or storage packets. The data processing unit includes one or more specialized hardware accelerators configured to perform acceleration for various data-processing functions. The data processing unit is configured to retrieve speculative probability values for range coding a plurality of bits with a single read instruction to an on-chip memory that stores a table of probability values. The data processing unit is configured to store state information used for context-coding packets of a data stream so that the state information is available after switching between data streams.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: December 17, 2019
    Assignee: Fungible, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Gurumani Senthil Nayakam
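    Illustrative sketch: The single-read speculative fetch mentioned above can be pictured as one table row that holds the probability for the next bit together with the probabilities for the following bit under both possible outcomes; the layout and numbers below are assumptions for illustration, and the actual range-coding arithmetic is omitted.
      PROB_TABLE = {
          # context -> (P(bit1=1), P(bit2=1 | bit1=0), P(bit2=1 | bit1=1))
          "ctx_a": (0.80, 0.30, 0.65),
      }

      def read_row(context):
          """Models the single on-chip memory read returning all three values."""
          return PROB_TABLE[context]

      def probabilities_for(context, bit1):
          p1, p2_if_0, p2_if_1 = read_row(context)   # one read covers both bits
          return p1, (p2_if_1 if bit1 else p2_if_0)  # pick the speculative value locally

      print(probabilities_for("ctx_a", bit1=1))      # (0.8, 0.65)
      print(probabilities_for("ctx_a", bit1=0))      # (0.8, 0.3)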
  • Patent number: 10466964
    Abstract: An engine architecture for processing finite automata includes a hyper non-deterministic automata (HNA) processor specialized for non-deterministic finite automata (NFA) processing. The HNA processor includes a plurality of super-clusters and an HNA scheduler. Each super-cluster includes a plurality of clusters. Each cluster of the plurality of clusters includes a plurality of HNA processing units (HPUs). A corresponding plurality of HPUs of a corresponding plurality of clusters of at least one selected super-cluster is available as a resource pool of HPUs to the HNA scheduler for assignment of at least one HNA instruction to enable acceleration of a match of at least one regular expression pattern in an input stream received from a network.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: November 5, 2019
    Assignee: Cavium, LLC
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Yossef Shanava, Gregg A. Bouchard, Timothy Toshio Nakada
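    Illustrative sketch: The scheduler view described above (the HPUs of every cluster in a selected super-cluster forming one resource pool) is modeled below in Python; class names, pool sizes, and the queue-style assignment are assumptions for illustration.
      from collections import deque

      class HnaScheduler:
          def __init__(self, super_clusters, clusters_per_sc, hpus_per_cluster):
              # Flatten the hierarchy into one pool of (super_cluster, cluster, hpu) ids.
              self.free_hpus = deque(
                  (sc, c, h)
                  for sc in range(super_clusters)
                  for c in range(clusters_per_sc)
                  for h in range(hpus_per_cluster)
              )

          def assign(self, hna_instruction):
              if not self.free_hpus:
                  return None                        # no HPU available; caller retries later
              return self.free_hpus.popleft(), hna_instruction

          def release(self, hpu):
              self.free_hpus.append(hpu)

      sched = HnaScheduler(super_clusters=2, clusters_per_sc=2, hpus_per_cluster=4)
      print(sched.assign("match pattern 12 against packet 17"))
      # -> ((0, 0, 0), 'match pattern 12 against packet 17')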
  • Patent number: 10460250
    Abstract: A root node of a decision tree data structure may cover all values of a search space used for packet classification. The search space may include a plurality of rules, the plurality of rules having at least one field. The decision tree data structure may include a plurality of nodes, the plurality of nodes including a subset of the plurality of rules. Scope in the decision tree data structure may be based on comparing a portion of the search space covered by a node to a portion of the search space covered by the node's rules. Scope in the decision tree data structure may be used to identify whether or not a compilation operation may be unproductive. By identifying an unproductive compilation operation it may be avoided, thereby improving compiler efficiency as the unproductive compilation operation may be time-consuming.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: October 29, 2019
    Assignee: Cavium, LLC
    Inventors: Rajan Goyal, Kenneth A. Bullis
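    Illustrative sketch: One way to read the scope comparison above is to compare the width of the range a node covers with the widths of the ranges its rules cover; the Python below uses that reading with an invented threshold, purely as an illustration of skipping a cut that is unlikely to separate the rules.
      import math

      def scope_bits(interval):
          lo, hi = interval
          return math.log2(hi - lo + 1)            # width of the covered range, in bits

      def cut_is_productive(node_interval, rule_intervals, threshold_bits=2.0):
          node_scope = scope_bits(node_interval)
          avg_rule_scope = sum(scope_bits(r) for r in rule_intervals) / len(rule_intervals)
          return (node_scope - avg_rule_scope) >= threshold_bits

      node = (0, 255)                              # this node covers an 8-bit field
      wide_rules = [(0, 255), (0, 127)]            # rules nearly as wide as the node
      narrow_rules = [(10, 10), (200, 203)]        # rules much narrower than the node
      print(cut_is_productive(node, wide_rules))   # False: cutting is likely unproductive
      print(cut_is_productive(node, narrow_rules)) # True: cutting is likely worthwhile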
  • Patent number: 10409911
    Abstract: A hardware-based programmable text analytics processor (TAP) has a plurality of components including at least a tokenizer, a tagger, a parser, and a classifier. The tokenizer processes an input stream of unstructured text data and identifies a sequence of tokens along with their associated token ids. The tagger assigns a tag to each of the sequence of tokens from the tokenizer using a trained machine learning model. The parser parses the tagged tokens from the tagger and creates a parse tree for the tagged tokens via a plurality of shift, reduce and/or finalize transitions based on a trained machine learning model. The classifier performs classification for tagging and parsing by accepting features extracted by the tagger and the parser, classifying the features and returning classes of the features back to the tagger and the parser, respectively. The TAP then outputs structured data to be processed for various text analytics processing applications.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: September 10, 2019
    Assignee: Cavium, LLC
    Inventors: Rajan Goyal, Ken Bullis, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
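    Illustrative sketch: The tokenizer-tagger-parser-classifier pipeline above can be mimicked with plain Python functions; the stand-in "models" and the feedback from classifier to tagger below are drastically simplified assumptions for illustration only.
      def tokenize(text):
          return [(i, tok) for i, tok in enumerate(text.split())]   # (token_id, token)

      def classify(feature):
          # Stand-in for the trained classifier: returns a class for a feature.
          return "NUM" if feature.isdigit() else "WORD"

      def tag(tokens):
          # The tagger extracts a feature per token and asks the classifier for its class.
          return [(tok_id, tok, classify(tok)) for tok_id, tok in tokens]

      def parse(tagged):
          # Stand-in parser: group the tagged tokens into one flat "sentence" node.
          return {"sentence": [tok for _, tok, _ in tagged], "tags": [t for _, _, t in tagged]}

      print(parse(tag(tokenize("port 443 open"))))
      # {'sentence': ['port', '443', 'open'], 'tags': ['WORD', 'NUM', 'WORD']}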
  • Patent number: 10277510
    Abstract: In one embodiment, a system includes a data navigation unit configured to navigate through a data structure stored in a first memory to a first representation of at least one rule. The system further includes at least one rule processing unit configured to a) receive, from a second memory, the at least one rule based on the first representation of the at least one rule, and b) process a key using the at least one rule.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: April 30, 2019
    Assignee: Cavium, LLC
    Inventors: Rajan Goyal, Gregg A. Bouchard
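    Illustrative sketch: The two-stage flow above (navigate a structure in one memory to a representation of a rule, then fetch and process the full rule from a second memory) is sketched below; the tree, rules, and key are invented for illustration.
      TREE_MEMORY = {"root": {"low": 0, "high": 1}}            # navigation structure -> rule index
      RULE_MEMORY = [                                          # full rule definitions
          {"field_range": (0, 99), "action": "drop"},
          {"field_range": (100, 65535), "action": "permit"},
      ]

      def navigate(key):
          """Navigation unit: walk the structure to the representation (an index) of a rule."""
          return TREE_MEMORY["root"]["low"] if key < 100 else TREE_MEMORY["root"]["high"]

      def process_rule(rule_index, key):
          """Rule processing unit: fetch the rule from the second memory and match the key."""
          lo, hi = RULE_MEMORY[rule_index]["field_range"]
          return RULE_MEMORY[rule_index]["action"] if lo <= key <= hi else None

      key = 443
      print(process_rule(navigate(key), key))                  # -> 'permit'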
  • Patent number: 10235211
    Abstract: A processor device comprises a plurality of virtual systems on chip, configured to utilize resources of a plurality of resources in accordance with a resource alignment between the plurality of virtual systems on chip and the plurality of resources. The processor device may further comprise a resource aligning unit configured to modify the resource alignment, dynamically, responsive to at least one event. Modifying the resource alignment dynamically may prevent a loss in throughput otherwise effectuated by the at least one event.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: March 19, 2019
    Assignee: Cavium, LLC
    Inventors: Rajan Goyal, Muhammad Raghib Hussain, Richard E. Kessler
  • Patent number: 10229144
    Abstract: In an embodiment, a method of updating a memory with a plurality of memory lines, the memory storing a tree, a plurality of buckets, and a plurality of rules, can include maintaining a copy of the memory with a plurality of memory lines. The method can further include writing a plurality of changes to at least one of the tree, the plurality of buckets, and the plurality of rules to the copy. The method can additionally include determining whether each of the plurality of changes is an independent write or a dependent write. The method can further include merging independent writes to the same line of the copy. The method further includes transferring updates from the plurality of lines of the copy to the plurality of lines of the memory.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: March 12, 2019
    Assignee: Cavium, LLC
    Inventors: Satyanarayana Lakshmipathi Billa, Rajan Goyal
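    Illustrative sketch: The merge step above (independent writes to the same memory line of the copy are combined; dependent writes keep their own ordered slot) is shown below in Python; the line size and the way dependence is flagged are assumptions for illustration.
      LINE_SIZE = 4

      def line_of(address):
          return address // LINE_SIZE

      def plan_updates(changes):
          """changes: ordered list of (address, value, dependent) writes to the copy."""
          pending = []                   # ordered line writes: [line, {address: value}]
          open_writes = {}               # line -> index in pending eligible for merging
          for address, value, dependent in changes:
              line = line_of(address)
              if not dependent and line in open_writes:
                  pending[open_writes[line]][1][address] = value   # merge independent write
              else:
                  open_writes[line] = len(pending)                 # new ordered line write
                  pending.append([line, {address: value}])
          return pending

      changes = [(0, "A", False), (1, "B", False), (8, "C", False), (2, "D", True)]
      for line, payload in plan_updates(changes):
          print("write line", line, payload)
      # write line 0 {0: 'A', 1: 'B'}    <- two independent writes merged into one line
      # write line 2 {8: 'C'}
      # write line 0 {2: 'D'}            <- dependent write transferred separately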
  • Patent number: 10229139
    Abstract: A system, apparatus, and method are provided for receiving one or more incremental updates including adding, deleting, or modifying rules of a Rule Compiled Data Structure (RCDS) used for packet classification. Embodiments disclosed herein may employ at least one heuristic for maintaining quality of the RCDS. At a given one of the one or more incremental updates received, a section of the RCDS may be identified and recompilation of the identified section may be triggered, altering the RCDS shape or depth in a manner detected by the at least one heuristic employed. The at least one heuristic employed enables performance and functionality of an active search process using the RCDS to be improved by advantageously determining when and where to recompile one or more sections of the RCDS being searched.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: March 12, 2019
    Assignee: CAVIUM, LLC
    Inventors: Rajan Goyal, Kenneth A. Bullis, Satyanarayana Lakshmipathi Billa
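    Illustrative sketch: One heuristic of the kind described above is to track how much deeper a section of the RCDS has grown since it was last compiled and to trigger recompilation of just that section once the growth exceeds a threshold; the numbers below are invented for illustration.
      class Section:
          def __init__(self, depth):
              self.compiled_depth = depth      # depth right after the last compilation
              self.depth = depth               # current depth after incremental updates

      def add_rule(section, extra_depth):
          """An incremental update that happens to deepen this section of the RCDS."""
          section.depth += extra_depth

      def maybe_recompile(section, depth_slack=3):
          if section.depth - section.compiled_depth > depth_slack:
              section.depth = section.compiled_depth   # model recompilation restoring shape
              return True
          return False

      s = Section(depth=5)
      for growth in (1, 1, 1, 2):
          add_rule(s, growth)
          print(s.depth, maybe_recompile(s))
      # 6 False / 7 False / 8 False / 10 True   <- recompile triggered on the fourth update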
  • Publication number: 20190012278
    Abstract: A new processing architecture is described in which a data processing unit (DPU) is utilized within a device. Unlike conventional compute models that are centered around a central processing unit (CPU), example implementations described herein leverage a DPU that is specially designed and optimized for a data-centric computing model in which the data processing tasks are centered around, and the primary responsibility of, the DPU. For example, various data processing tasks, such as networking, security, and storage, as well as related work acceleration, distribution and scheduling, and other such tasks are the domain of the DPU. The DPU may be viewed as a highly programmable, high-performance input/output (I/O) and data-processing hub designed to aggregate and process network and storage I/O to and from multiple other components and/or devices. This frees resources of the CPU, if present, for computing-intensive tasks.
    Type: Application
    Filed: July 10, 2018
    Publication date: January 10, 2019
    Inventors: Pradeep Sindhu, Jean-Marc Frailong, Bertrand Serlet, Wael Noureddine, Felix A. Marti, Deepak Goel, Rajan Goyal
  • Publication number: 20190013965
    Abstract: A highly-programmable access node is described that can be configured and optimized to perform input and output (I/O) tasks, such as storage and retrieval of data to and from storage devices (such as solid state drives), networking, data processing, and the like. For example, the access node may be configured to execute a large number of data I/O processing tasks relative to a number of instructions that are processed. The access node may be highly programmable such that the access node may expose hardware primitives for selecting and programmatically configuring data processing operations. As one example, the access node may be used to provide high-speed connectivity and I/O operations between and on behalf of computing devices and storage components of a network, such as for providing interconnectivity between those devices and a switch fabric of a data center.
    Type: Application
    Filed: July 10, 2018
    Publication date: January 10, 2019
    Inventors: Pradeep Sindhu, Jean-Marc Frailong, Bertrand Serlet, Wael Noureddine, Felix A. Marti, Deepak Goel, Paul Kim, Rajan Goyal, Aibing Zhou
  • Publication number: 20190012350
    Abstract: A new processing architecture is described that utilizes a data processing unit (DPU). Unlike conventional compute models that are centered around a central processing unit (CPU), the DPU is designed for a data-centric computing model in which the data processing tasks are centered around the DPU. The DPU may be viewed as a highly programmable, high-performance I/O and data-processing hub designed to aggregate and process network and storage I/O to and from other devices. The DPU comprises a network interface to connect to a network, one or more host interfaces to connect to one or more application processors or storage devices, and a multi-core processor with two or more processing cores executing a run-to-completion data plane operating system and one or more processing cores executing a multi-tasking control plane operating system. The data plane operating system is configured to support software functions for performing the data processing tasks.
    Type: Application
    Filed: July 10, 2018
    Publication date: January 10, 2019
    Inventors: Pradeep Sindhu, Jean-Marc Frailong, Wael Noureddine, Felix A. Marti, Deepak Goel, Rajan Goyal, Bertrand Serlet
  • Patent number: 10146463
    Abstract: A virtual system on chip (VSoC) is an implementation of a machine that allows for sharing of underlying physical machine resources between different virtual systems. A method or corresponding apparatus of the present invention relates to a device that includes a plurality of virtual systems on chip and a configuring unit. The configuring unit is arranged to configure resources on the device for the plurality of virtual systems on chip as a function of an identification tag assigned to each virtual system on chip.
    Type: Grant
    Filed: August 6, 2014
    Date of Patent: December 4, 2018
    Assignee: Cavium, LLC
    Inventors: Muhammad Raghib Hussain, Rajan Goyal, Richard Kessler
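    Illustrative sketch: The tag-based configuration above can be pictured as a configuring unit that hands each tagged virtual system its share of the physical resources; the resource names and the even split below are assumptions for illustration.
      PHYSICAL_RESOURCES = {"cores": 16, "dma_queues": 8}

      def configure(vsoc_tags):
          """Configuring unit: assign each tagged VSoC an even share of every resource."""
          share = {name: total // len(vsoc_tags) for name, total in PHYSICAL_RESOURCES.items()}
          return {tag: dict(share) for tag in vsoc_tags}

      allocation = configure(vsoc_tags=[0x1, 0x2, 0x3, 0x4])
      print(allocation[0x1])        # {'cores': 4, 'dma_queues': 2}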
  • Patent number: 10110558
    Abstract: At least one processor may be operatively coupled to a plurality of memories and a node cache and configured to walk nodes of a per-pattern non-deterministic finite automaton (NFA). Nodes of the per-pattern NFA may be stored amongst one or more of the plurality of memories based on a node distribution determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels, optimizing run time performance of the walk.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: October 23, 2018
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa
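    Illustrative sketch: The node distribution above can be pictured as filling memories that are mapped to hierarchical levels, each with its own allocation setting, and spilling overflow to the next level; the level names and capacities below are assumptions for illustration.
      MEMORY_LEVELS = [
          {"name": "on_chip_cache", "capacity": 2},
          {"name": "on_chip_ram", "capacity": 4},
          {"name": "external_dram", "capacity": float("inf")},
      ]

      def distribute(nfa_nodes):
          placement, level, used = {}, 0, 0
          for node in nfa_nodes:                 # nodes assumed ordered by walk depth
              while used >= MEMORY_LEVELS[level]["capacity"]:
                  level, used = level + 1, 0     # spill to the next hierarchical level
              placement[node] = MEMORY_LEVELS[level]["name"]
              used += 1
          return placement

      print(distribute([f"n{i}" for i in range(8)]))
      # n0-n1 -> on_chip_cache, n2-n5 -> on_chip_ram, n6-n7 -> external_dram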
  • Patent number: 10083200
    Abstract: A system, apparatus, and method are provided for adding, deleting, and modifying rules in one update from the perspective of an active search process for packet classification. While a search processor searches for one or more rules that match keys generated from received packets, there is a need to add, delete, or modify rules. By organizing a plurality of incremental updates for adding, deleting, or modifying rules into a batch update, several operations for incorporating the incremental updates may be made more efficient by minimizing the number of updates required.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: September 25, 2018
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Kenneth A. Bullis, Satyanarayana Lakshmipathi Billa
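    Illustrative sketch: Batching can be pictured as collecting individual add/delete/modify requests, cancelling pairs that negate each other, and applying only the net effect; the update format and coalescing rules below are assumptions for illustration.
      def batch_updates(incremental_updates):
          """Coalesce (op, rule_id, data) updates so each rule is touched at most once."""
          net = {}
          for op, rule_id, data in incremental_updates:
              if op == "delete" and net.get(rule_id, ("",))[0] == "add":
                  del net[rule_id]              # add followed by delete: nothing to apply
              else:
                  net[rule_id] = (op, data)     # later modify/add overrides earlier ones
          return net

      updates = [
          ("add", 7, "permit tcp any 443"),
          ("modify", 3, "deny udp any 53"),
          ("modify", 3, "permit udp any 53"),
          ("add", 9, "deny icmp"),
          ("delete", 9, None),
      ]
      print(batch_updates(updates))
      # {7: ('add', 'permit tcp any 443'), 3: ('modify', 'permit udp any 53')}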
  • Patent number: 10002326
    Abstract: At least one per-pattern non-deterministic finite automaton (NFA) may be generated for a single regular expression pattern and may include a respective set of nodes. Nodes of the respective set of nodes of each per-pattern NFA generated may be distributed for storing in a plurality of memories based on hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels, optimizing run time performance for matching regular expression patterns in an input stream.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: June 19, 2018
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa
  • Patent number: 9904630
    Abstract: A method, and corresponding apparatus and system are provided for optimizing matching of at least one regular expression pattern in an input stream by storing a context for walking a given node, of a plurality of nodes of a given finite automaton of at least one finite automaton, the store including a store determination, based on context state information associated with a first memory, for accessing the first memory and not a second memory or the first memory and the second memory. Further, to retrieve a pending context, the retrieval may include a retrieve determination, based on the context state information associated with the first memory, for accessing the first memory and not the second memory or the second memory and not the first memory. The first memory may have read and write access times that are faster relative to the second memory.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: February 27, 2018
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa
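    Illustrative sketch: The save/restore behaviour above (store a walk context to the fast first memory when its state information says there is room, otherwise to the second memory, and read back only the memory that holds it) is modeled below; the capacity and bookkeeping are assumptions for illustration.
      class ContextStore:
          def __init__(self, fast_capacity):
              self.fast, self.slow = {}, {}
              self.fast_capacity = fast_capacity
              self.in_fast = set()               # context state info: contexts held in fast memory

          def store(self, ctx_id, context):
              if len(self.fast) < self.fast_capacity:
                  self.fast[ctx_id] = context
                  self.in_fast.add(ctx_id)       # record the placement in the state info
              else:
                  self.slow[ctx_id] = context

          def retrieve(self, ctx_id):
              if ctx_id in self.in_fast:         # access only the memory that holds the context
                  self.in_fast.discard(ctx_id)
                  return self.fast.pop(ctx_id)
              return self.slow.pop(ctx_id)

      store = ContextStore(fast_capacity=1)
      store.store("ctx0", {"node": 12, "offset": 5})
      store.store("ctx1", {"node": 40, "offset": 9})   # fast memory full -> second memory
      print(store.retrieve("ctx0"))                    # read from the first memory only
      print(store.retrieve("ctx1"))                    # read from the second memory only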