Patents by Inventor Tsahi Daniel

Tsahi Daniel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230185735
    Abstract: A packet processing system in which each of a plurality of hierarchical clients and a packet memory arbiter are serially communicatively coupled together via a plurality of primary interfaces, thereby forming a unidirectional client chain. All of the hierarchical clients can then use this chain to write packet data to, or read packet data from, the packet memory.
    Type: Application
    Filed: February 3, 2023
    Publication date: June 15, 2023
    Inventors: Enrique Musoll, Tsahi Daniel
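
A minimal behavioral sketch of the client chain described in the entry above, written in Python: each client's primary interface points at the next hop, and only the arbiter at the end of the chain touches packet memory. The class names, request format, and dictionary-backed memory are illustrative assumptions, not details taken from the filing.

```python
# Behavioral sketch (assumption): hierarchical clients forward read/write
# requests along a single unidirectional chain toward a packet memory arbiter.

class PacketMemoryArbiter:
    """Terminates the chain and services read/write requests against packet memory."""
    def __init__(self):
        self.packet_memory = {}          # address -> data

    def handle(self, request):
        op, addr, data = request
        if op == "write":
            self.packet_memory[addr] = data
            return None
        return self.packet_memory.get(addr)  # "read"

class ChainClient:
    """A hierarchical client; its primary interface points at the next hop in the chain."""
    def __init__(self, name, next_hop):
        self.name = name
        self.next_hop = next_hop         # another ChainClient or the arbiter

    def handle(self, request):
        # Clients simply pass requests downstream; only the arbiter touches memory.
        return self.next_hop.handle(request)

# Build the unidirectional chain: client0 -> client1 -> client2 -> arbiter.
arbiter = PacketMemoryArbiter()
chain = ChainClient("client2", arbiter)
chain = ChainClient("client1", chain)
chain = ChainClient("client0", chain)

chain.handle(("write", 0x10, b"packet-data"))
assert chain.handle(("read", 0x10, None)) == b"packet-data"
```
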
  • Patent number: 11677664
    Abstract: Embodiments of the present invention relate to a Lookup and Decision Engine (LDE) for generating lookup keys for input tokens and modifying the input tokens based on contents of lookup results. The input tokens are parsed from network packet headers by a Parser, and the tokens are then modified by the LDE. The modified tokens guide how corresponding network packets will be modified or forwarded by other components in a software-defined networking (SDN) system. The design of the LDE is highly flexible and protocol independent. Conditions and rules for generating lookup keys and for modifying tokens are fully programmable such that the LDE can perform a wide variety of reconfigurable network features and protocols in the SDN system.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: June 13, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Harish Krishnamoorthy
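
A rough sketch of the lookup-and-decision flow described in the entry above, with the programmable key-generation and token-modification rules modeled as plain Python functions. The token fields, table contents, and function names are hypothetical; the actual LDE operates on parsed header tokens in hardware.

```python
# Illustrative sketch (assumption): a programmable lookup-and-decision step in which
# both the key-generation rule and the token-modification rule are plain functions,
# standing in for the patent's programmable conditions and rules.

def lde_stage(token, make_key, lookup_table, modify):
    """Generate a lookup key from the token, look it up, then modify the token."""
    key = make_key(token)
    result = lookup_table.get(key)
    return modify(token, result)

# Hypothetical token parsed from a packet header by an upstream parser.
token = {"dst_mac": "aa:bb:cc:dd:ee:ff", "vlan": 10, "out_port": None}

forwarding_table = {("aa:bb:cc:dd:ee:ff", 10): {"port": 3}}

new_token = lde_stage(
    token,
    make_key=lambda t: (t["dst_mac"], t["vlan"]),                 # programmable key rule
    lookup_table=forwarding_table,
    modify=lambda t, r: {**t, "out_port": r["port"] if r else 0}, # programmable decision
)
assert new_token["out_port"] == 3
```
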
  • Patent number: 11627087
    Abstract: Embodiments of the present invention relate to a centralized network analytic device that efficiently uses on-chip memory to flexibly perform counting, traffic-rate monitoring, and flow sampling. The device includes a pool of memory that is shared by all cores and by the packet processing stages of each core. The counting, monitoring, and sampling are all defined through software, allowing for greater flexibility and efficient analytics in the device. In some embodiments, the device is a network switch.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: April 11, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
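
A small sketch, under stated assumptions, of the shared analytics pool described above: a single set of counter banks that software assigns to counting, rate-monitoring, or flow-sampling roles. Bank sizes, role names, and the 1-in-N sampling policy are illustrative, not taken from the patent.

```python
# Sketch (assumption): one pool of counter banks shared by all cores and
# packet-processing stages, with software deciding each bank's role.

import random

class SharedAnalyticsPool:
    def __init__(self, num_banks, bank_size):
        self.banks = [[0] * bank_size for _ in range(num_banks)]
        self.roles = {}                      # bank index -> software-assigned role

    def assign(self, bank, role):
        self.roles[bank] = role              # e.g. "count", "rate", "sample"

    def count(self, bank, index, delta=1):
        self.banks[bank][index] += delta

    def sample(self, bank, index, packet, rate=100):
        # Simple 1-in-N flow sampling; an illustrative software-defined policy.
        if random.randrange(rate) == 0:
            self.banks[bank][index] += 1
            return packet
        return None

pool = SharedAnalyticsPool(num_banks=4, bank_size=1024)
pool.assign(0, "count")
pool.assign(1, "sample")
pool.count(0, index=42)                      # a processing stage bumping a flow counter
pool.sample(1, index=42, packet=b"\x00" * 64)
```
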
  • Patent number: 11586562
    Abstract: A packet processing system in which each of a plurality of hierarchical clients and a packet memory arbiter are serially communicatively coupled together via a plurality of primary interfaces, thereby forming a unidirectional client chain. All of the hierarchical clients can then use this chain to write packet data to, or read packet data from, the packet memory.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: February 21, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Enrique Musoll, Tsahi Daniel
  • Publication number: 20220404995
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Application
    Filed: July 27, 2022
    Publication date: December 22, 2022
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
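
A sketch of the tile-allocation idea described above, assuming a simple Python model: tiles are handed out to lookups without overlap, and each lookup is configured as hash-based or direct-access. Tile sizes, the hash function, and the class names are assumptions.

```python
# Illustrative sketch (assumption): tiles from a shared pool are partitioned among
# parallel lookups, and each lookup is configured as hash-based or direct-access.

class Tile:
    def __init__(self, size=256):
        self.mem = [None] * size

class ParallelLookups:
    def __init__(self, num_tiles):
        self.free = list(range(num_tiles))
        self.tiles = [Tile() for _ in range(num_tiles)]
        self.lookups = {}                          # lookup id -> (tile ids, mode)

    def allocate(self, lookup_id, num_tiles, mode):
        assert len(self.free) >= num_tiles, "not enough free tiles"
        owned, self.free = self.free[:num_tiles], self.free[num_tiles:]
        self.lookups[lookup_id] = (owned, mode)    # tiles never overlap across lookups

    def _slot(self, lookup_id, key):
        owned, mode = self.lookups[lookup_id]
        # Direct-access mode assumes an integer key used as an address.
        index = hash(key) if mode == "hash" else key
        tile = self.tiles[owned[index % len(owned)]]
        return tile.mem, index % len(tile.mem)

    def write(self, lookup_id, key, value):
        mem, slot = self._slot(lookup_id, key)
        mem[slot] = (key, value)

    def read(self, lookup_id, key):
        mem, slot = self._slot(lookup_id, key)
        entry = mem[slot]
        return entry[1] if entry and entry[0] == key else None

pool = ParallelLookups(num_tiles=8)
pool.allocate("l2", num_tiles=4, mode="hash")      # larger lookup gets more tiles
pool.allocate("acl", num_tiles=2, mode="direct")   # smaller lookup gets fewer
pool.write("l2", key="aa:bb:cc:dd:ee:ff", value=7)
assert pool.read("l2", "aa:bb:cc:dd:ee:ff") == 7
```
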
  • Patent number: 11435925
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: September 6, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
  • Publication number: 20220086089
    Abstract: Embodiments of the present invention are directed to a wildcard matching solution that uses a combination of static random access memories (SRAMs) and ternary content addressable memories (TCAMs) in a hybrid solution. In particular, the wildcard matching solution uses a plurality of SRAM pools for lookup and a spillover TCAM pool for unresolved hash conflicts.
    Type: Application
    Filed: November 23, 2021
    Publication date: March 17, 2022
    Inventors: Jeffrey T. Huynh, Weihuang Wang, Tsahi Daniel, Srinath Atluri, Mohan Balan
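
A sketch of the hybrid match described above, under the assumption that exact-match entries go to hashed SRAM buckets while wildcard entries and hash conflicts spill into a linearly searched, TCAM-like list. Masks are modeled as 32-bit integers; all names and sizes are illustrative.

```python
# Sketch (assumption): hashed SRAM pools hold most entries; wildcard entries and
# unresolved hash conflicts spill into a small TCAM-like structure searched with
# per-entry masks.

class HybridWildcardMatch:
    def __init__(self, sram_slots=8):
        self.sram = [None] * sram_slots          # one entry per hash bucket
        self.tcam = []                           # list of (value, mask, action)

    def insert(self, value, mask, action):
        if mask == 0xFFFFFFFF:                   # exact-match entries prefer SRAM
            slot = hash(value) % len(self.sram)
            if self.sram[slot] is None:
                self.sram[slot] = (value, action)
                return
        self.tcam.append((value, mask, action))  # wildcard or conflicting entry

    def lookup(self, key):
        slot = self.sram[hash(key) % len(self.sram)]
        if slot and slot[0] == key:
            return slot[1]
        for value, mask, action in self.tcam:    # spillover / wildcard search
            if key & mask == value & mask:
                return action
        return None

table = HybridWildcardMatch()
table.insert(0x0A000001, 0xFFFFFFFF, "host-route")      # exact /32 entry
table.insert(0x0A000000, 0xFFFFFF00, "subnet-route")    # wildcard /24 entry
assert table.lookup(0x0A000001) == "host-route"
assert table.lookup(0x0A0000FE) == "subnet-route"
```
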
  • Publication number: 20220078137
    Abstract: A network switch uses a search engine to generate chained table lookup requests. After the search engine executes a first lookup, the next-pass logic in the search engine uses the first lookup result and information in the master key to generate a second lookup key as well as other parts of a second lookup request. A next-pass crossbar routes the second lookup request to a target memory, and the search logic executes the second lookup. The first lookup request may originate from a processing engine coupled to the search engine. The first and the second lookup results, if any, can then be returned back to the processing engine for further processing or decision making. The chain of lookups can be configured by software by specifying various operational parameters of the processing engines and the next-pass logic, including specifying a key construction mode for the second lookup.
    Type: Application
    Filed: November 15, 2021
    Publication date: March 10, 2022
    Inventors: Jeffrey Huynh, Weihuang Wang, Tsahi Daniel, Gerald Schmidt
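
A sketch of a two-pass chained lookup as described above, assuming the next-pass logic can be modeled as a plain function that builds the second key from the first result plus fields of the master key. Table names, key fields, and the key-construction function are hypothetical.

```python
# Illustrative sketch (assumption): a two-pass chained lookup where next-pass logic
# builds the second request from the first result and the master key, per a
# software-configured key-construction mode.

def chained_lookup(master_key, first_table, second_tables, build_second_request):
    first_result = first_table.get(master_key["dst_ip"])
    if first_result is None:
        return None, None
    # Next-pass logic: pick the target table and second key from result + master key.
    table_id, second_key = build_second_request(first_result, master_key)
    second_result = second_tables[table_id].get(second_key)
    return first_result, second_result

route_table = {"10.0.0.1": {"nexthop_group": 5}}
nexthop_tables = {"nh": {(5, "eth0"): {"dmac": "aa:bb:cc:dd:ee:ff"}}}

first, second = chained_lookup(
    master_key={"dst_ip": "10.0.0.1", "in_port": "eth0"},
    first_table=route_table,
    second_tables=nexthop_tables,
    # Key-construction mode, expressed here as a plain function.
    build_second_request=lambda res, key: ("nh", (res["nexthop_group"], key["in_port"])),
)
assert second["dmac"] == "aa:bb:cc:dd:ee:ff"
```
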
  • Patent number: 11258886
    Abstract: Embodiments of the apparatus for handling large protocol layers relate to an implementation that optimizes a field selection circuit. This implementation provides software-like flexibility to a hardware parser engine in parsing packets. The implementation limits the size of each layer and splits any layer that exceeds that size into smaller layers. The parser engine extracts data from the split layers just as it would from a non-split layer and then concatenates the extracted data into a final result.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: February 22, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Vishal Anand, Tsahi Daniel, Premshanth Theivendran
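
A sketch of the layer-splitting idea described above, assuming a fixed per-layer byte limit: an oversized layer is cut into chunks, fields are extracted per chunk, and the pieces are concatenated into the same result a non-split layer would give. The 16-byte limit and function names are illustrative.

```python
# Sketch (assumption): an oversized protocol layer is split into fixed-size chunks,
# extraction runs per chunk, and the extracted pieces are concatenated.

MAX_LAYER_BYTES = 16      # illustrative per-layer size limit

def split_layer(layer_bytes):
    return [layer_bytes[i:i + MAX_LAYER_BYTES]
            for i in range(0, len(layer_bytes), MAX_LAYER_BYTES)]

def extract(layer_bytes, offset, length):
    """Extract a field even when it spans the split boundaries."""
    chunks = split_layer(layer_bytes)
    out = bytearray()
    for i, chunk in enumerate(chunks):
        start = i * MAX_LAYER_BYTES
        lo = max(offset, start) - start
        hi = min(offset + length, start + len(chunk)) - start
        if lo < hi:
            out += chunk[lo:hi]          # per-chunk extraction
    return bytes(out)                    # concatenated final result

layer = bytes(range(40))                 # a 40-byte "large" header
assert extract(layer, offset=12, length=8) == layer[12:20]
```
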
  • Patent number: 11218410
    Abstract: Embodiments of the present invention are directed to a wildcard matching solution that uses a combination of static random access memories (SRAMs) and ternary content addressable memories (TCAMs) in a hybrid solution. In particular, the wildcard matching solution uses a plurality of SRAM pools for lookup and a spillover TCAM pool for unresolved hash conflicts.
    Type: Grant
    Filed: November 4, 2015
    Date of Patent: January 4, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Jeffrey T. Huynh, Weihuang Wang, Tsahi Daniel, Srinath Atluri, Mohan Balan
  • Patent number: 11184296
    Abstract: A network switch uses a search engine to generate chained table lookup requests. After the search engine executes a first lookup, the next-pass logic in the search engine uses the first lookup result and information in the master key to generate a second lookup key as well as other parts of a second lookup request. A next-pass crossbar routes the second lookup request to a target memory, and the search logic executes the second lookup. The first lookup request may originate from a processing engine coupled to the search engine. The first and the second lookup results, if any, can then be returned back to the processing engine for further processing or decision making. The chain of lookups can be configured by software by specifying various operational parameters of the processing engines and the next-pass logic, including specifying a key construction mode for the second lookup.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: November 23, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Jeffrey Huynh, Weihuang Wang, Tsahi Daniel, Gerald Schmidt
  • Publication number: 20210334224
    Abstract: A packet processing system in which each of a plurality of hierarchical clients and a packet memory arbiter are serially communicatively coupled together via a plurality of primary interfaces, thereby forming a unidirectional client chain. All of the hierarchical clients can then use this chain to write packet data to, or read packet data from, the packet memory.
    Type: Application
    Filed: July 8, 2021
    Publication date: October 28, 2021
    Inventors: Enrique Musoll, Tsahi Daniel
  • Publication number: 20210329104
    Abstract: Embodiments of the apparatus for modifying packet headers relate to the use of bit vectors to allow expansion and collapse of protocol headers within packets, enabling flexible modification. A rewrite engine expands each protocol header into a generic format and applies various commands to modify the generalized protocol header. The rewrite engine maintains a bit vector for the generalized protocol header, with each bit in the bit vector representing a byte of the generalized protocol header. A bit marked as 0 in the bit vector corresponds to an invalid byte, while a bit marked as 1 corresponds to a valid byte. After all commands have been applied to the generalized protocol header, the rewrite engine uses the bit vector to remove all the invalid bytes and thereby form a new protocol header.
    Type: Application
    Filed: May 26, 2021
    Publication date: October 21, 2021
    Inventors: Chirinjeev Singh, Tsahi Daniel, Gerald Schmidt
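
A sketch of the bit-vector rewrite flow described above, assuming a fixed generic header size: commands mark bytes valid or invalid in the bit vector, and the header is collapsed by dropping bytes whose bit is 0. The generic size, command set, and header contents are assumptions.

```python
# Illustrative sketch (assumption): a header is expanded to a fixed-size generic
# layout, commands flip valid bits, and collapse drops every byte whose bit is 0.

GENERIC_SIZE = 16                            # assumed generic header size

def expand(header):
    buf = bytearray(header) + bytearray(GENERIC_SIZE - len(header))
    bits = [1] * len(header) + [0] * (GENERIC_SIZE - len(header))
    return buf, bits

def delete_bytes(buf, bits, offset, length):
    for i in range(offset, offset + length):
        bits[i] = 0                          # invalidate; no shifting needed yet

def insert_bytes(buf, bits, offset, data):
    for i, b in enumerate(data):             # a second example command
        buf[offset + i] = b
        bits[offset + i] = 1

def collapse(buf, bits):
    return bytes(b for b, v in zip(buf, bits) if v)   # drop invalid bytes

header = bytes.fromhex("450000288912")       # start of a hypothetical header
buf, bits = expand(header)
delete_bytes(buf, bits, offset=2, length=2)  # e.g. strip a 2-byte field
new_header = collapse(buf, bits)
assert new_header == bytes.fromhex("45008912")
```
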
  • Patent number: 11093415
    Abstract: A packet processing system in which each of a plurality of hierarchical clients and a packet memory arbiter are serially communicatively coupled together via a plurality of primary interfaces, thereby forming a unidirectional client chain. All of the hierarchical clients can then use this chain to write packet data to, or read packet data from, the packet memory.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: August 17, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Enrique Musoll, Tsahi Daniel
  • Patent number: 11082366
    Abstract: An apparatus and method for queuing data to a memory buffer. The method includes selecting a queue from a plurality of queues, receiving a token of data from the selected queue, and requesting, by a queue module, addresses and pointers from a buffer manager for the addresses the buffer manager allocates for storing the token of data. The buffer manager then accesses a memory list and generates addresses and pointers to allocated addresses in the memory list, which comprises a plurality of linked memory lists for additional address allocation. The method further includes writing into the accessed memory list the pointers for the allocated addresses, where the pointers link together the allocated addresses; migrating to other memory lists for additional address allocations upon receipt of subsequent tokens of data from the queue; and generating additional pointers linking together the allocated addresses in the other memory lists.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 3, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Vamsi Panchagnula, Saurin Patel, Keqin Han, Tsahi Daniel
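
A sketch of the buffer-manager behavior described above, assuming free addresses live in simple Python lists: allocations are chained together with next-pointers, and allocation migrates to another memory list when the current one is exhausted. Address ranges and method names are illustrative.

```python
# Sketch (assumption): a buffer manager hands out addresses from linked memory
# lists, records next-pointers chaining the allocated addresses, and migrates to
# another list when one runs out.

class BufferManager:
    def __init__(self, lists):
        # Each memory list is a pool of free addresses; the ranges are illustrative.
        self.free_lists = [list(addr_range) for addr_range in lists]
        self.next_ptr = {}                       # address -> next allocated address

    def allocate(self, count, prev=None):
        """Allocate `count` addresses, linking them to each other (and to `prev`)."""
        out = []
        for _ in range(count):
            src = next(fl for fl in self.free_lists if fl)   # migrate when empty
            addr = src.pop(0)
            if prev is not None:
                self.next_ptr[prev] = addr                   # write linking pointer
            prev = addr
            out.append(addr)
        return out, prev

bm = BufferManager(lists=[range(0, 4), range(100, 104)])
token1, tail = bm.allocate(count=3)              # fits in the first memory list
token2, tail = bm.allocate(count=3, prev=tail)   # spills into the second list
assert bm.next_ptr[token1[-1]] == token2[0]      # chains stay linked across lists
```
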
  • Patent number: 11050859
    Abstract: Embodiments of the apparatus for modifying packet headers relate to the use of bit vectors to allow expansion and collapse of protocol headers within packets, enabling flexible modification. A rewrite engine expands each protocol header into a generic format and applies various commands to modify the generalized protocol header. The rewrite engine maintains a bit vector for the generalized protocol header, with each bit in the bit vector representing a byte of the generalized protocol header. A bit marked as 0 in the bit vector corresponds to an invalid byte, while a bit marked as 1 corresponds to a valid byte. After all commands have been applied to the generalized protocol header, the rewrite engine uses the bit vector to remove all the invalid bytes and thereby form a new protocol header.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: June 29, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Chirinjeev Singh, Tsahi Daniel, Gerald Schmidt
  • Publication number: 20210067435
    Abstract: A multicast rule is represented in a hierarchical linked list with N tiers. Each tier or level in the hierarchical linked list corresponds to a network layer of a network stack that requires replication. Redundant groups in each tier are eliminated such that the groups in each tier are stored exactly once in a replication table. A multicast replication engine traverses the hierarchical linked list and replicates a packet according to each node in the hierarchical linked list.
    Type: Application
    Filed: October 30, 2020
    Publication date: March 4, 2021
    Inventors: Gerald Schmidt, Harish Krishnamoorthy, Tsahi Daniel
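
A sketch of the hierarchical multicast structure described above, assuming the replication table can be modeled as a dictionary from group ids to children: shared groups are stored once and referenced by id, and the replication engine recurses through the tiers, emitting one copy per leaf port. Group names and tiers are hypothetical.

```python
# Illustrative sketch (assumption): a multicast rule as a hierarchical linked list.
# Each tier models one network layer needing replication (e.g. L3 groups over L2
# ports); shared groups appear once in the table and are referenced by id.

# Replication table: group id -> children (nested group ids or output ports).
replication_table = {
    "l3_grp_A": ["l2_grp_X", "l2_grp_Y"],
    "l2_grp_X": ["port1", "port2"],
    "l2_grp_Y": ["port3", "l2_grp_X"],   # l2_grp_X is stored once, referenced twice
}

def replicate(packet, group):
    copies = []
    for child in replication_table.get(group, []):
        if child in replication_table:           # another tier: recurse
            copies += replicate(packet, child)
        else:                                    # leaf: emit one replica per port
            copies.append((child, packet))
    return copies

out = replicate(b"payload", "l3_grp_A")
assert [port for port, _ in out] == ["port1", "port2", "port3", "port1", "port2"]
```
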
  • Publication number: 20210034269
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Application
    Filed: August 18, 2020
    Publication date: February 4, 2021
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
  • Patent number: 10855573
    Abstract: A multicast rule is represented in a hierarchical linked list with N tiers. Each tier or level in the hierarchical linked list corresponds to a network layer of a network stack that requires replication. Redundant groups in each tier are eliminated such that the groups in each tier are stored exactly once in a replication table. A multicast replication engine traverses the hierarchical linked list and replicates a packet according to each node in the hierarchical linked list.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: December 1, 2020
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Gerald Schmidt, Harish Krishnamoorthy, Tsahi Daniel
  • Publication number: 20200374240
    Abstract: A software-defined network (SDN) system, device and method comprise one or more input ports, a programmable parser, a plurality of programmable lookup and decision engines (LDEs), programmable lookup memories, programmable counters, a programmable rewrite block and one or more output ports. The programmability of the parser, LDEs, lookup memories, counters and rewrite block enables a user to customize each microchip within the system to particular packet environments, data analysis needs, packet processing functions, and other functions as desired. Further, the same microchip can be dynamically reprogrammed for other purposes and/or optimizations.
    Type: Application
    Filed: August 13, 2020
    Publication date: November 26, 2020
    Inventors: Guy Townsend Hutchison, Sachin Ramesh Gandhi, Tsahi Daniel, Gerald Schmidt, Albert Fishman, Martin Leslie White, Zubin Shah
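
A sketch of the programmable pipeline described in the entry above, assuming each block (parser, LDE, rewrite) can be modeled as a swappable Python callable; reprogramming the device corresponds to replacing entries in the stage list. The toy parse, lookup, and rewrite behaviors are assumptions, not the patented design.

```python
# Sketch (assumption): the programmable pipeline modeled as an ordered list of
# stage callables that software can swap out at runtime to retarget the device.

def parser(pkt):
    pkt["token"] = {"dst": pkt["raw"][0]}          # toy "parse": first byte is dst
    return pkt

def lde(pkt):
    pkt["out_port"] = {1: 10, 2: 20}.get(pkt["token"]["dst"], 0)   # toy lookup/decision
    return pkt

def rewrite(pkt):
    pkt["raw"] = bytes([pkt["out_port"]]) + pkt["raw"][1:]         # toy header rewrite
    return pkt

pipeline = [parser, lde, rewrite]                  # "reprogram" by replacing entries

def run(pipeline, raw):
    pkt = {"raw": raw}
    for stage in pipeline:
        pkt = stage(pkt)
    return pkt

result = run(pipeline, bytes([1, 0xAB]))
assert result["out_port"] == 10 and result["raw"][0] == 10
```
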