Patents by Inventor Abhishek Dikshit

Abhishek Dikshit has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11055063
    Abstract: A hardware-based programmable deep learning processor (DLP) is proposed, wherein the DLP comprises a plurality of accelerators dedicated to deep learning processing. Specifically, the DLP includes a plurality of tensor engines configured to perform operations for pattern recognition and classification based on a neural network. Each tensor engine includes one or more matrix multiplier (MatrixMul) engines each configured to perform a plurality of dense and/or sparse vector-matrix and matrix-matrix multiplication operations, one or more convolutional network (ConvNet) engines each configured to perform a plurality of efficient convolution operations on sparse or dense matrices, one or more vector floating point units (VectorFPUs) each configured to perform floating point vector operations, and a data engine configured to retrieve and store multi-dimensional data to both on-chip and external memories. (A minimal software sketch of these engine roles follows this entry.)
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: July 6, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Rajan Goyal, Ken Bullis, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
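
The abstract above (patent 11055063) names three kinds of per-tensor-engine operations: dense/sparse matrix multiplication, convolution, and floating-point vector operations. Below is a minimal pure-Python sketch of those three operations as software functions; it is an illustrative analogy only, and the function names (matrix_mul, conv2d_valid, vector_fpu) are assumptions, not taken from the patent or from any Marvell/Cavium hardware.

```python
# Minimal software analogy of the tensor-engine operations named in the
# abstract of patent 11055063 (MatrixMul, ConvNet, VectorFPU). Pure Python,
# illustrative only; all names are hypothetical, not from the patent.

def matrix_mul(a, b):
    """Dense matrix-matrix multiply: a is m x k, b is k x n."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (cross-correlation) of image with kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + dr][c + dc] * kernel[dr][dc]
                           for dr in range(kh) for dc in range(kw)))
        out.append(row)
    return out

def vector_fpu(op, x, y):
    """Elementwise floating-point vector operation."""
    fns = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    return [fns[op](a, b) for a, b in zip(x, y)]

if __name__ == "__main__":
    print(matrix_mul([[1.0, 2.0]], [[3.0], [4.0]]))        # [[11.0]]
    print(conv2d_valid([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                       [[1, 0], [0, -1]]))                  # [[-4, -4], [-4, -4]]
    print(vector_fpu("add", [1.0, 2.0], [0.5, 0.5]))        # [1.5, 2.5]
```
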
  • Patent number: 10409911
    Abstract: A hardware-based programmable text analytics processor (TAP) has a plurality of components including at least a tokenizer, a tagger, a parser, and a classifier. The tokenizer processes an input stream of unstructured text data and identifies a sequence of tokens along with their associated token IDs. The tagger assigns a tag to each of the sequence of tokens from the tokenizer using a trained machine learning model. The parser parses the tagged tokens from the tagger and creates a parse tree for the tagged tokens via a plurality of shift, reduce, and/or finalize transitions based on a trained machine learning model. The classifier performs classification for tagging and parsing by accepting features extracted by the tagger and the parser, classifying the features, and returning the classes of the features back to the tagger and the parser, respectively. The TAP then outputs structured data for further processing by various text analytics applications. (A toy software sketch of this pipeline follows this entry.)
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: September 10, 2019
    Assignee: Cavium, LLC
    Inventors: Rajan Goyal, Ken Bullis, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
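
As a rough illustration of the tokenizer-tagger-parser-classifier flow described above (patent 10409911), here is a toy Python pipeline in which both the tagger and a shift/reduce parser consult a shared classifier. Every function, rule, and label here (tokenize, tag, parse, classify, the NOUN/NUM tags, the SHIFT/REDUCE policy) is a hypothetical stand-in for the patented hardware components.

```python
# Toy software analogy of the tokenizer -> tagger -> parser pipeline described
# in the abstract of patent 10409911, with a shared "classifier" that both the
# tagger and the parser consult. Illustrative assumptions throughout.

def tokenize(text):
    """Split unstructured text into (token_id, token) pairs."""
    return list(enumerate(text.split()))

def classify(features):
    """Stand-in classifier: maps a feature dict to a class label."""
    if features.get("stage") == "tag":
        tok = features["token"]
        if tok.isdigit():
            return "NUM"
        if tok[:1].isupper():
            return "NOUN"
        return "WORD"
    # parsing stage: decide between shift and reduce
    return "REDUCE" if features["stack_depth"] >= 2 else "SHIFT"

def tag(tokens):
    """Ask the classifier for a tag for each token."""
    return [(tid, tok, classify({"stage": "tag", "token": tok}))
            for tid, tok in tokens]

def parse(tagged):
    """Very small shift/reduce loop driven by the classifier."""
    stack, buffer = [], list(tagged)
    while buffer or len(stack) > 1:
        action = classify({"stage": "parse", "stack_depth": len(stack)})
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        else:  # REDUCE: fold the top two items into one subtree
            right, left = stack.pop(), stack.pop()
            stack.append(("SUBTREE", left, right))
    return stack[0]  # the finished parse tree ("finalize")

if __name__ == "__main__":
    tree = parse(tag(tokenize("Alice sent 3 reports")))
    print(tree)
```
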
  • Patent number: 9823895
    Abstract: Matching at least one regular expression pattern in an input stream may be optimized by initializing a search context in a run stack based on (i) partial match results determined from walking segments of a payload of a flow through a first finite automaton and (ii) a historical search context associated with the flow. The search context may be modified via push or pop operations to direct at least one processor to walk segments of the payload through at least one second finite automaton. The search context may be maintained in a manner that obviates both overflow of the search context and stalling of the push or pop operations, increasing match performance. (A hedged sketch of such a run stack follows this entry.)
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: November 21, 2017
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Yossef Shanava, Timothy Toshio Nakada, Abhishek Dikshit
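
The abstract above (patent 9823895) describes a run-stack search context seeded from partial matches of a first automaton plus the flow's historical context, with push/pop operations that must neither overflow nor stall. The sketch below illustrates one plausible way to get that behavior in software, spilling the oldest on-chip entries to a backing region; the spill policy and all names (RunStack, walk_second_automaton) are assumptions, not the patented mechanism.

```python
# Hedged sketch of the "run stack" search-context idea in patent 9823895's
# abstract: the stack is seeded from partial-match results plus the flow's
# saved (historical) context, and push/pop drive further walking. The spill
# scheme used here to avoid overflow is an illustrative assumption.
from collections import deque

class RunStack:
    def __init__(self, on_chip_slots=4):
        self.on_chip = deque()          # fast, bounded region
        self.spill = deque()            # backing "external memory" region
        self.on_chip_slots = on_chip_slots

    def push(self, entry):
        # Spill the oldest on-chip entry instead of overflowing or stalling.
        if len(self.on_chip) == self.on_chip_slots:
            self.spill.append(self.on_chip.popleft())
        self.on_chip.append(entry)

    def pop(self):
        entry = self.on_chip.pop() if self.on_chip else self.spill.pop()
        # Refill the fast region so later pops stay fast.
        if self.spill and len(self.on_chip) < self.on_chip_slots:
            self.on_chip.appendleft(self.spill.pop())
        return entry

    def __bool__(self):
        return bool(self.on_chip) or bool(self.spill)

def walk_second_automaton(payload, stack):
    """Consume (node, offset) work items; the 'automaton' here just records visits."""
    visited = []
    while stack:
        node, offset = stack.pop()
        visited.append((node, offset))
        if offset + 1 < len(payload):          # toy transition rule
            stack.push((node + 1, offset + 1))
    return visited

if __name__ == "__main__":
    partial_matches = [(0, 0), (5, 2)]          # from the "first" automaton
    historical_context = [(9, 1)]               # saved from earlier packets of the flow
    rs = RunStack(on_chip_slots=2)
    for entry in historical_context + partial_matches:
        rs.push(entry)
    print(walk_second_automaton(b"abcd", rs))
```
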
  • Publication number: 20170316312
    Abstract: A hardware-based programmable deep learning processor (DLP) is proposed, wherein the DLP comprises a plurality of accelerators dedicated to deep learning processing. Specifically, the DLP includes a plurality of tensor engines configured to perform operations for pattern recognition and classification based on a neural network. Each tensor engine includes one or more matrix multiplier (MatrixMul) engines each configured to perform a plurality of dense and/or sparse vector-matrix and matrix-matrix multiplication operations, one or more convolutional network (ConvNet) engines each configured to perform a plurality of efficient convolution operations on sparse or dense matrices, one or more vector floating point units (VectorFPUs) each configured to perform floating point vector operations, and a data engine configured to retrieve and store multi-dimensional data to both on-chip and external memories.
    Type: Application
    Filed: April 28, 2017
    Publication date: November 2, 2017
    Inventors: Rajan Goyal, Ken Bullis, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
  • Publication number: 20170315984
    Abstract: A hardware-based programmable text analytics processor (TAP) has a plurality of components including at least a tokenizer, a tagger, a parser, and a classifier. The tokenizer processes an input stream of unstructured text data and identifies a sequence of tokens along with their associated token IDs. The tagger assigns a tag to each of the sequence of tokens from the tokenizer using a trained machine learning model. The parser parses the tagged tokens from the tagger and creates a parse tree for the tagged tokens via a plurality of shift, reduce, and/or finalize transitions based on a trained machine learning model. The classifier performs classification for tagging and parsing by accepting features extracted by the tagger and the parser, classifying the features, and returning the classes of the features back to the tagger and the parser, respectively. The TAP then outputs structured data for further processing by various text analytics applications.
    Type: Application
    Filed: April 28, 2017
    Publication date: November 2, 2017
    Inventors: Rajan Goyal, Ken Bullis, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
  • Patent number: 9438561
    Abstract: Nodes of a per-pattern NFA may be stored amongst one or more of a plurality of memories based on a node distribution determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels. At least one processor may be configured to cache one or more nodes of the per-pattern NFA in the node cache based on a cache miss of a given node of the one or more nodes and a hierarchical node transaction size associated with a given hierarchical level mapped to a given memory in which the given node is stored, thereby optimizing run-time performance of the walk. (A hedged caching sketch follows this entry.)
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: September 6, 2016
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
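
To make the node-distribution and node-cache idea above (patent 9438561) concrete, the following Python sketch splits a per-pattern NFA's nodes across memories mapped to hierarchical levels and, on a cache miss, fetches a level-specific "transaction size" worth of nodes into a small LRU cache. The concrete allocation and fetch policy, and all names, are illustrative assumptions.

```python
# Hedged sketch of patent 9438561's abstract: per-pattern NFA nodes are split
# across memories mapped to hierarchical levels, and a cache miss fetches a
# level-specific "transaction size" worth of nodes. Illustrative policy only.
from collections import OrderedDict

def distribute_nodes(nodes, allocation):
    """allocation: list of (level, max_nodes or None). Returns level -> {node_id: node}."""
    memories, i = {}, 0
    for level, max_nodes in allocation:
        chunk = nodes[i:i + max_nodes] if max_nodes is not None else nodes[i:]
        memories[level] = {node_id: node for node_id, node in chunk}
        i += len(chunk)
    return memories

class NodeCache:
    def __init__(self, memories, transaction_size, capacity=8):
        self.memories = memories                  # level -> {node_id: node}
        self.transaction_size = transaction_size  # level -> nodes fetched per miss
        self.cache = OrderedDict()                # small LRU of node_id -> node
        self.capacity = capacity

    def _level_of(self, node_id):
        for level, mem in self.memories.items():
            if node_id in mem:
                return level
        raise KeyError(node_id)

    def get(self, node_id):
        if node_id in self.cache:                 # hit
            self.cache.move_to_end(node_id)
            return self.cache[node_id]
        level = self._level_of(node_id)           # miss: fetch a whole transaction
        mem = self.memories[level]
        for nid in sorted(mem):
            if node_id <= nid < node_id + self.transaction_size[level]:
                self.cache[nid] = mem[nid]
        while len(self.cache) > self.capacity:    # evict least recently used
            self.cache.popitem(last=False)
        self.cache.move_to_end(node_id)
        return self.cache[node_id]

if __name__ == "__main__":
    nfa_nodes = [(i, f"node-{i}") for i in range(12)]
    memories = distribute_nodes(nfa_nodes, [("on_chip", 4), ("external", None)])
    cache = NodeCache(memories, {"on_chip": 1, "external": 4})
    print(cache.get(6), cache.get(7))   # the second lookup is served from the cache
```

An LRU dictionary stands in for the hardware node cache; the point illustrated is only that a single miss pulls in a level-dependent block of neighboring nodes.
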
  • Patent number: 9426165
    Abstract: A method and corresponding apparatus are provided for implementing run-time processing using Deterministic Finite Automata (DFA) and Non-Deterministic Finite Automata (NFA) to determine whether a pattern exists in a payload. A subpattern may be selected from each pattern in a set of one or more regular expression patterns based on at least one heuristic, a unified DFA may be generated using the subpatterns selected from all patterns in the set, and at least one NFA may be generated for at least one pattern in the set, optimizing run-time performance of the run-time processing. (A hedged sketch of this two-stage approach follows this entry.)
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: August 23, 2016
    Assignee: Cavium, Inc.
    Inventors: Satyanarayana Lakshmipathi Billa, Rajan Goyal, Abhishek Dikshit
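
The hybrid scheme above (patent 9426165) can be illustrated with a two-stage software matcher: a heuristic picks a literal subpattern from each regex, a cheap scan for those literals plays the role of the unified DFA, and the full per-pattern match (Python's re module standing in for the per-pattern NFA) runs only on a prefilter hit. The heuristic and example patterns below are assumptions for illustration.

```python
# Hedged sketch of the hybrid idea in patent 9426165's abstract: pick a
# subpattern from each regex with a heuristic, scan the payload for those
# subpatterns first (the role the unified DFA plays), and only run the full
# per-pattern matcher when a subpattern hits. Python's `re` stands in for the
# per-pattern NFA; the literal scan stands in for the unified DFA.
import re

def select_subpattern(pattern):
    """Heuristic (assumed): the longest run of plain literal characters in the regex."""
    blanked = re.sub(r"\\.", " ", pattern)            # blank out escape sequences
    return max(re.findall(r"[A-Za-z0-9/=]+", blanked), key=len)

def compile_rules(patterns):
    return [(select_subpattern(p), re.compile(p)) for p in patterns]

def scan(payload, rules):
    """Prefilter with the subpattern, confirm with the full pattern."""
    hits = []
    for subpattern, full in rules:
        if subpattern in payload:          # cheap "DFA" stage
            m = full.search(payload)       # expensive "NFA" stage, run rarely
            if m:
                hits.append(m.group(0))
    return hits

if __name__ == "__main__":
    rules = compile_rules([r"attack\d{2,}", r"GET\s+/admin\b", r"passwd=\w+"])
    print(scan("GET  /admin HTTP/1.1 passwd=hunter2", rules))
```

Because the selected literal must appear in every match of its pattern for the prefilter to be safe, a real implementation would need a more careful subpattern heuristic than the one sketched here.
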
  • Patent number: 9419943
    Abstract: A method, and a corresponding apparatus and system, are provided for optimizing the matching of at least one regular expression pattern in an input stream by walking at least one finite automaton in a speculative manner. The speculative manner may include walking at least two nodes of a given finite automaton, of the at least one finite automaton, in parallel with a segment at a given offset within a payload of a packet in the input stream. The walking may include determining a match result for the segment, at the given offset within the payload, at each node of the at least two nodes. The walking may further include determining at least one subsequent action for walking the given finite automaton based on an aggregation of each match result determined. (A hedged sketch of this speculative walk follows this entry.)
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: August 16, 2016
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
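
The speculative walk above (patent 9419943) evaluates the same payload segment at two automaton nodes at once and aggregates the per-node results to pick the next action. The toy Python automaton below mimics that shape with two "lanes"; its node table, aggregation rule, and accept semantics are illustrative assumptions, not the patented design.

```python
# Hedged sketch of the "speculative" walk in patent 9419943's abstract: the
# same payload byte is evaluated at two automaton nodes at once, and the
# aggregated per-node results decide the next step. Toy automaton only.

# Node table: node -> (predicate on one byte, next node, is_accepting)
NODES = {
    "A": (lambda b: b.isalpha(), "B", False),
    "B": (lambda b: b.isdigit(), "C", False),
    "C": (lambda b: b == "!",    "C", True),
}

def walk_speculative(payload, start_nodes=("A", "B")):
    active = list(start_nodes)                   # two speculative lanes
    for offset, byte in enumerate(payload):
        # Evaluate the same segment at every active node "in parallel".
        results = []
        for node in active:
            predicate, nxt, accepting = NODES[node]
            results.append((node, predicate(byte), nxt, accepting))
        # Aggregate the per-node results to pick the subsequent action.
        if any(matched and accepting for _, matched, _, accepting in results):
            return f"match ending at offset {offset}"
        survivors = [nxt for _, matched, nxt, _ in results if matched]
        if not survivors:
            return "no match"
        active = survivors[:2]                   # keep at most two lanes
    return "payload exhausted without a match"

if __name__ == "__main__":
    print(walk_speculative("x7!"))   # reaches the accepting node
    print(walk_speculative("77"))    # both lanes die out
```
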
  • Publication number: 20150295891
    Abstract: Nodes of a per-pattern NFA may be stored amongst one or more of a plurality of memories based on a node distribution determined as a function of hierarchical levels mapped to the plurality of memories and per-pattern NFA storage allocation settings configured for the hierarchical levels. At least one processor may be configured to cache one or more nodes of the per-pattern NFA in the node cache based on a cache miss of a given node of the one or more nodes and a hierarchical node transaction size associated with a given hierarchical level mapped to a given memory in which the given node is stored, thereby optimizing run-time performance of the walk.
    Type: Application
    Filed: April 14, 2014
    Publication date: October 15, 2015
    Applicant: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
  • Publication number: 20150186786
    Abstract: A method, and a corresponding apparatus and system, are provided for optimizing the matching of at least one regular expression pattern in an input stream by walking at least one finite automaton in a speculative manner. The speculative manner may include walking at least two nodes of a given finite automaton, of the at least one finite automaton, in parallel with a segment at a given offset within a payload of a packet in the input stream. The walking may include determining a match result for the segment, at the given offset within the payload, at each node of the at least two nodes. The walking may further include determining at least one subsequent action for walking the given finite automaton based on an aggregation of each match result determined.
    Type: Application
    Filed: December 30, 2013
    Publication date: July 2, 2015
    Applicant: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Abhishek Dikshit
  • Publication number: 20150067776
    Abstract: A method and corresponding apparatus are provided for implementing run-time processing using Deterministic Finite Automata (DFA) and Non-Deterministic Finite Automata (NFA) to determine whether a pattern exists in a payload. A subpattern may be selected from each pattern in a set of one or more regular expression patterns based on at least one heuristic, a unified DFA may be generated using the subpatterns selected from all patterns in the set, and at least one NFA may be generated for at least one pattern in the set, optimizing run-time performance of the run-time processing.
    Type: Application
    Filed: August 30, 2013
    Publication date: March 5, 2015
    Applicant: Cavium, Inc.
    Inventors: Satyanarayana Lakshmipathi Billa, Rajan Goyal, Abhishek Dikshit
  • Publication number: 20150067200
    Abstract: Matching at least one regular expression pattern in an input stream may be optimized by initializing a search context in a run stack based on (i) partial match results determined from walking segments of a payload of a flow through a first finite automaton and (ii) a historical search context associated with the flow. The search context may be modified via push or pop operations to direct at least one processor to walk segments of the payload through at least one second finite automaton. The search context may be maintained in a manner that obviates both overflow of the search context and stalling of the push or pop operations, increasing match performance.
    Type: Application
    Filed: April 14, 2014
    Publication date: March 5, 2015
    Applicant: Cavium, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa, Yossef Shanava, Timothy Toshio Nakada, Abhishek Dikshit