Patents by Inventor Tsahi Daniel

Tsahi Daniel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150373164
    Abstract: Embodiments of the apparatus for forming a hash input from packet contents relate to a programmable, flexible solution for forming hash inputs, allowing for hardware changes and for adding support for newer protocols as they are defined in the future. A packet is split into individual layers. Each layer is given a unique layer type number that helps identify what that layer is. Based on the layer type, each layer is expanded to a generic format. Each layer has a set of hash commands that is generic to that layer. The fields of each hash command are fieldOffset, fieldLen, hashMask, and hashMaskMSB. These hash commands allow information in the packet to be extracted in a programmable manner. The fields extracted from each protocol layer of the packet are concatenated to form a hash layer. A bit vector indicates which hash layers are used to form the hash input.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Vishal Anand, Tsahi Daniel, Gerald Schmidt
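    Editor's sketch: a minimal Python illustration of how per-layer hash commands of the kind described above could be applied in software. The command fields fieldOffset, fieldLen, and hashMask come from the abstract; hashMaskMSB is omitted for brevity, and the layer layout, helper names, and values are assumptions, not the patented hardware design.
      # Illustrative only: apply abstract-style hash commands to one generalized
      # layer and concatenate the masked fields into a hash layer.
      from dataclasses import dataclass

      @dataclass
      class HashCmd:
          field_offset: int   # byte offset within the generalized layer
          field_len: int      # number of bytes to extract
          hash_mask: int      # mask applied to the extracted field (simplified)

      def build_hash_layer(layer: bytes, cmds) -> bytes:
          out = bytearray()
          for cmd in cmds:
              field = layer[cmd.field_offset:cmd.field_offset + cmd.field_len]
              value = int.from_bytes(field, "big") & cmd.hash_mask
              out += value.to_bytes(cmd.field_len, "big")
          return bytes(out)

      # Hypothetical example: extract source and destination ports from a UDP-like layer.
      udp_layer = bytes([0x12, 0x34, 0x00, 0x35, 0x00, 0x1C, 0xAB, 0xCD])
      print(build_hash_layer(udp_layer, [HashCmd(0, 2, 0xFFFF), HashCmd(2, 2, 0xFFFF)]).hex())  # 12340035
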
  • Publication number: 20150373163
    Abstract: Embodiments of the apparatus for extracting data from packets relate to programmable layer commands that allow fields from packets to be extracted. A packet is split into individual layers. Each layer is given a unique layer type number that identifies the layer. Based on the layer type, each layer is expanded to a generic format. Each layer has a set of layer commands that is generic to that layer. Fields of each layer command are fieldOffset and fieldLen. These layer commands allow information in the packet to be extracted in a programmable manner. Extracted fields from each protocol layer are concatenated to form a token layer. All token layers are concatenated to form a final token, which is used for further processing of the packet.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Vishal Anand, Tsahi Daniel, Gerald Schmidt
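    Editor's sketch: a minimal Python illustration of the layer-command idea above, assuming hypothetical offsets and lengths; it concatenates per-layer extracted fields into token layers and a final token, and is not the patented implementation.
      # Illustrative only: each (fieldOffset, fieldLen) command pulls bytes from
      # its generalized layer; token layers are concatenated into a final token.
      def extract_token_layer(layer: bytes, commands) -> bytes:
          return b"".join(layer[off:off + length] for off, length in commands)

      def build_token(layers, commands_per_layer) -> bytes:
          return b"".join(extract_token_layer(layer, cmds)
                          for layer, cmds in zip(layers, commands_per_layer))

      # Hypothetical example: 2 bytes from layer 0 and 4 bytes from layer 1.
      layers = [bytes(range(8)), bytes(range(16, 32))]
      print(build_token(layers, [[(0, 2)], [(4, 4)]]).hex())  # 000114151617
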
  • Publication number: 20150373165
    Abstract: Embodiments of the apparatus for handling large protocol layers relate to an implementation that optimizes a field selection circuit. This implementation provides software-like flexibility to a hardware parser engine in parsing packets. The implementation limits the size of each layer and splits any layer that exceeds that size into smaller layers. The parser engine extracts data from the split layers just as it would from a non-split layer and then concatenates the extracted data into a final result.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Vishal Anand, Tsahi Daniel, Premshanth Theivendran
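    Editor's sketch: a minimal Python illustration of splitting an oversized layer into size-limited sub-layers and concatenating the per-sub-layer extraction results, as the abstract describes; the size limit and the extraction stand-in are assumptions.
      # Illustrative only: split any layer larger than MAX_LAYER_BYTES, extract
      # from each piece as if it were a normal layer, then concatenate results.
      MAX_LAYER_BYTES = 16          # assumed limit; the real limit is hardware-specific

      def split_layer(layer: bytes):
          return [layer[i:i + MAX_LAYER_BYTES]
                  for i in range(0, len(layer), MAX_LAYER_BYTES)]

      def extract(sub_layer: bytes) -> bytes:
          return sub_layer[:4]      # stand-in for the real per-layer extraction

      oversized = bytes(range(40))  # 40-byte layer -> three sub-layers
      print(b"".join(extract(part) for part in split_layer(oversized)).hex())
      # 000102031011121320212223
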
  • Publication number: 20150373160
    Abstract: Embodiments of the apparatus for modifying packet headers relate to a use of bit vectors to allow expansion and collapse of protocol headers within packets for enabling flexible modification. A rewrite engine expands each protocol header into a generic format and applies various commands to modify the generalized protocol header. The rewrite engine maintains a bit vector for the generalized protocol header, with each bit in the bit vector representing a byte of the generalized protocol header. A bit marked as 0 in the bit vector corresponds to an invalid byte, while a bit marked as 1 corresponds to a valid byte. The rewrite engine uses the bit vector to remove all the invalid bytes after all commands have operated on the generalized protocol header, thereby forming a new protocol header.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Chirinjeev Singh, Tsahi Daniel, Gerald Schmidt
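    Editor's sketch: a minimal Python illustration of the valid-byte bit vector described above, compressing a generalized header by dropping bytes whose bit is 0; the header contents are hypothetical.
      # Illustrative only: keep bytes marked 1 in the bit vector, drop bytes marked 0.
      def compress_header(generalized: bytes, valid_bits) -> bytes:
          return bytes(b for b, valid in zip(generalized, valid_bits) if valid)

      header = bytes([0x45, 0x00, 0x00, 0x54, 0xAA, 0xBB])
      valid  = [1, 1, 1, 1, 0, 0]   # last two bytes were never populated
      print(compress_header(header, valid).hex())  # 45000054
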
  • Publication number: 20150372860
    Abstract: Embodiments of the apparatus for reducing latency in a flexible parser relate to an implementation that optimizes each parser engine within the parser. A packet enters the parser. Each of the parser engines processes the packet if processing is required. Otherwise, the parser engine simply forwards the packet through without processing it, thereby reducing latency. Each parser engine includes a memory. The memory stores bypass data and status information that indicates whether parsing for this packet is completed and, thus, no further processing is required by subsequent parser engines. Each parser engine also includes a counter, which is incremented whenever a packet enters the parser engine and is decremented whenever a packet exits the parser engine. A packet bypasses the parser engine based on the counter of the parser engine and the status information of that packet.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Vishal Anand, Tsahi Daniel, Gerald Schmidt
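    Editor's sketch: a minimal Python illustration of the bypass idea above, where a parser stage forwards a packet unprocessed when its status says parsing is already complete and a counter tracks packets currently inside the stage; class and field names are assumptions.
      # Illustrative only: bypass a stage for packets whose parsing is done.
      class ParserStage:
          def __init__(self, parse_fn):
              self.parse_fn = parse_fn
              self.in_flight = 0          # +1 when a packet enters, -1 when it exits

          def process(self, packet, parsing_done: bool):
              self.in_flight += 1
              try:
                  if parsing_done:
                      return packet, True  # forward untouched, reducing latency
                  return self.parse_fn(packet)
              finally:
                  self.in_flight -= 1

      stage = ParserStage(lambda pkt: (pkt, True))   # toy parser marks parsing done
      print(stage.process(b"\x01\x02", parsing_done=True))  # (b'\x01\x02', True)
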
  • Publication number: 20150373156
    Abstract: Embodiments of the apparatus for modifying packet headers relate to a packet generalization scheme that maintains information across protocol layers of packets. The packet generalization scheme uses a protocol table that includes layer information for all possible protocol layer combinations. The protocol layer combinations in the protocol table are manually configured through software. Each protocol layer combination in the protocol table is uniquely identified by a PktID. A rewrite engine of a network device receives the PktID for a packet and uses that unique identifier as a key into the protocol table to access information for each protocol layer of the packet that the rewrite engine requires during modification of the packet. The packet generalization scheme eliminates the need for a parser engine of the network device to pass parsed data to the rewrite engine, which is resource intensive.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Chirinjeev Singh, Tsahi Daniel, Gerald Schmidt, Saurin Patel
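    Editor's sketch: a minimal Python illustration of a PktID-keyed protocol table of the kind described above; the table contents and layer tuples are hypothetical, not the configured hardware table.
      # Illustrative only: a PktID selects per-layer info that a rewrite stage
      # can read directly instead of receiving parsed data from the parser.
      PROTOCOL_TABLE = {
          # PktID: list of (layer_type, header_offset, header_length)
          7: [("eth", 0, 14), ("ipv4", 14, 20), ("udp", 34, 8)],
          9: [("eth", 0, 14), ("ipv4", 14, 20), ("tcp", 34, 20)],
      }

      for layer_type, offset, length in PROTOCOL_TABLE[7]:
          print(layer_type, offset, length)
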
  • Publication number: 20150373161
    Abstract: Embodiments of the apparatus for modifying packet headers relate to a pointer structure for splitting a packet into individual layers and for intelligently stitching them back together. The pointer structure includes N+1 layer pointers to N+1 protocol headers. The pointer structure also includes a total size of all headers. A rewrite engine uses the layer pointers to extract the first N corresponding protocol layers within the packet for modification. The rewrite engine uses the layer pointers to form an end point, which together with the total size of all headers is associated with a body of the headers. The body of the headers is the portion of headers that is not modified by the rewrite engine. After all the modifications are performed and the modified headers are compressed, the modified layer pointers are used to stitch the modified headers back together with the body of the headers.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Chirinjeev Singh, Tsahi Daniel, Gerald Schmidt, Saurin Patel
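    Editor's sketch: a minimal Python illustration of stitching modified headers back together with the untouched "body of the headers" using layer pointers and the total header size; the pointer values and the fake modification are assumptions.
      # Illustrative only: bytes from the end point up to the total header size
      # are not modified and are re-attached after the rewritten headers.
      def stitch(modified_headers, original: bytes, end_point: int, total_hdr_size: int) -> bytes:
          body = original[end_point:total_hdr_size]
          return b"".join(modified_headers) + body

      original   = bytes(range(20))
      layer_ptrs = [0, 4, 10]       # two modifiable headers: [0:4) and [4:10)
      modified   = [original[0:4], original[4:10][::-1]]   # pretend header 2 was rewritten
      print(stitch(modified, original, end_point=layer_ptrs[-1], total_hdr_size=16).hex())
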
  • Publication number: 20150373155
    Abstract: Embodiments of the apparatus for modifying packet headers relate to a rewrite engine that represents each protocol header of packets in a generic format specific to that protocol to enable programmable modifications of packets, resulting in hardware and software flexibility in modifying packet headers. Software programs the generic formats for the various protocols into a hardware table. The rewrite engine is able to detect missing fields from a protocol header and is able to expand the protocol header to a maximum size such that the protocol header contains all possible fields of that protocol. Each of the fields has the same offset irrespective of which variation of the protocol the protocol header corresponds to. In a bit vector, all newly added fields are marked invalid (represented by 0), and all existing fields are marked valid (represented by 1). Software modification commands allow data to be replaced, removed and inserted.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Chirinjeev Singh, Vishal Anand, Tsahi Daniel, Gerald Schmidt
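    Editor's sketch: a minimal Python illustration of expanding a header to the protocol's maximum size and marking padded bytes invalid in a bit vector; it simply pads at the end, whereas the abstract places every field at a fixed canonical offset, so treat this as a simplification.
      # Illustrative only: pad to max size, mark existing bytes 1 and padding 0.
      def expand(header: bytes, max_size: int):
          padded = header + bytes(max_size - len(header))
          valid = [1] * len(header) + [0] * (max_size - len(header))
          return padded, valid

      hdr, valid = expand(bytes([0x81, 0x00, 0x12, 0x34]), max_size=8)
      print(hdr.hex(), valid)   # 8100123400000000 [1, 1, 1, 1, 0, 0, 0, 0]
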
  • Publication number: 20150373159
    Abstract: Embodiments of the apparatus for modifying packet headers relate to programmable modifications of packets by applying commands to generalized protocol headers. Each protocol header of incoming packets is represented in a generic format specific to that protocol to enable modifications to packet headers. Missing fields from a protocol header are detected, and the protocol header is expanded to a maximum size such that the protocol header contains all possible fields of that protocol, including the missing fields. Each of the fields has the same offset irrespective of which variation of the protocol the protocol header corresponds to. Modification uses a set of commands that is applied to expanded protocol headers. All of the commands are thus generic, as they are independent of the incoming headers (e.g., their size and protocol).
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Chirinjeev Singh, Tsahi Daniel, Gerald Schmidt, Saurin Patel
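    Editor's sketch: a minimal Python illustration of protocol-independent modification commands operating on an expanded header and its valid-byte vector; the command names (replace, delete) and their semantics are assumptions, not the patented command set.
      # Illustrative only: commands work on offsets into the expanded header,
      # so they do not depend on which protocol variation arrived.
      def apply_replace(header: bytearray, valid, offset: int, data: bytes):
          header[offset:offset + len(data)] = data
          for i in range(offset, offset + len(data)):
              valid[i] = 1

      def apply_delete(header: bytearray, valid, offset: int, length: int):
          for i in range(offset, offset + length):
              valid[i] = 0          # invalid bytes are dropped when compressed

      hdr, valid = bytearray(8), [1] * 8
      apply_replace(hdr, valid, 2, b"\xde\xad")
      apply_delete(hdr, valid, 6, 2)
      print(hdr.hex(), valid)   # 0000dead00000000 [1, 1, 1, 1, 1, 1, 0, 0]
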
  • Publication number: 20150373166
    Abstract: Embodiments of the apparatus for identifying internal destinations of network packets relate to a network chip that allows flexibility in handling packets. The handling of packets can be a function of what the packet contents are, of where the packets are from, or of both. In some embodiments, where the packets are from refers to the unique port numbers of the chip ports that the packets arrived at. The packets can be distributed for processing within the network chip.
    Type: Application
    Filed: June 19, 2014
    Publication date: December 24, 2015
    Inventors: Vishal Anand, Tsahi Daniel, Gerald Schmidt, Premshanth Theivendran
  • Publication number: 20150347313
    Abstract: Embodiments of the present invention relate to a centralized table aging module that efficiently and flexibly utilizes an embedded memory resource, and that enables and facilitates separate network controllers. The centralized table aging module performs aging of tables in parallel using the embedded memory resource. The table aging module performs an age marking process and an age refreshing process. The memory resource includes age mark memory and age mask memory. Age marking is applied to the age mark memory. The age mask memory provides per-entry control granularity regarding the aging of table entries.
    Type: Application
    Filed: May 28, 2014
    Publication date: December 3, 2015
    Applicant: XPLIANT, Inc.
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Mohan Balan
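    Editor's sketch: a minimal Python illustration of the age-marking and refresh idea above, with an age mask giving per-entry control; the table size and mask values are hypothetical.
      # Illustrative only: a marking pass sets a bit per enabled entry, a hit
      # clears it, and entries still marked on the next pass have aged.
      age_mark = [0] * 8
      age_mask = [1, 1, 0, 1, 1, 1, 1, 1]   # entry 2 is exempt from aging

      def age_marking_pass():
          for i, enabled in enumerate(age_mask):
              if enabled:
                  age_mark[i] = 1

      def refresh(entry: int):      # called when the table entry is hit
          age_mark[entry] = 0

      age_marking_pass()
      refresh(5)
      print([i for i, m in enumerate(age_mark) if m])   # [0, 1, 3, 4, 6, 7]
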
  • Publication number: 20150350089
    Abstract: Embodiments of the present invention relate to a centralized network analytic device that efficiently uses on-chip memory to flexibly perform counting, traffic rate monitoring, and flow sampling. The device includes a pool of memory that is shared by all cores and by the packet processing stages of each core. The counting, monitoring, and sampling are all defined through software, allowing for greater flexibility and efficient analytics in the device. In some embodiments, the device is a network switch.
    Type: Application
    Filed: May 28, 2014
    Publication date: December 3, 2015
    Applicant: XPLIANT, Inc.
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
  • Patent number: 9166916
    Abstract: In a modular switching device that includes a plurality of packet processors interconnected by a plurality of connecting devices, one or more data packets are received at a first packet processor. The first packet processor generates a communication frame that includes at least a portion of a first data packet among the one or more data packets. The first packet processor divides the communication frame into a plurality of transmission units, includes a communication frame identifier in respective transmission units, and includes a respective position identifier in respective transmission units. The transmission units are transmitted to the plurality of connecting devices via a first plurality of uplinks, and the plurality of connecting devices transmits the transmission units to a second packet processor via a second plurality of uplinks. The communication frame is reassembled from the plurality of transmission units using the communication frame identifier and the position identifiers.
    Type: Grant
    Filed: October 7, 2013
    Date of Patent: October 20, 2015
    Assignees: Marvell International Ltd., Marvell Israel (M.I.S.L) Ltd.
    Inventors: Tal Mizrahi, Carmi Arad, Martin White, Tsahi Daniel, Yoram Revah, Ehud Sivan
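    Editor's sketch: a minimal Python illustration of cutting a communication frame into transmission units tagged with a frame identifier and a position identifier, then reassembling them even if they arrive out of order over different uplinks; the unit size and tuple layout are assumptions.
      # Illustrative only: split into fixed-size units and reassemble by position.
      UNIT_SIZE = 4

      def split_frame(frame_id: int, frame: bytes):
          return [(frame_id, pos, frame[i:i + UNIT_SIZE])
                  for pos, i in enumerate(range(0, len(frame), UNIT_SIZE))]

      def reassemble(units):
          units = sorted(units, key=lambda u: u[1])    # order by position identifier
          assert len({u[0] for u in units}) == 1       # all units share one frame id
          return b"".join(payload for _, _, payload in units)

      units = split_frame(42, b"hello-modular-switch")
      print(reassemble(list(reversed(units))))   # b'hello-modular-switch'
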
  • Patent number: 9137030
    Abstract: First data units corresponding to a first multicast group (MCG) and second data units corresponding to a second MCG are stored in a first queue of a network switching device. At least one first data unit retrieved from the first queue and at least one second data unit retrieved from the first queue are aggregated into a first frame. The first frame is transmitted by the network switching device to a superset MCG that includes at least the first MCG and the second MCG. Only third data units corresponding to a third MCG are stored in a second queue of the network switching device. Third data units retrieved from the second queue are transmitted by the network switching device to the third MCG.
    Type: Grant
    Filed: October 7, 2013
    Date of Patent: September 15, 2015
    Assignees: Marvell International Ltd., Marvell Israel (M.I.S.L) Ltd.
    Inventors: Tal Mizrahi, Carmi Arad, Martin White, Tsahi Daniel
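    Editor's sketch: a minimal Python illustration of aggregating queued data units for two multicast groups into one frame addressed to a superset group; the queue contents and group names are hypothetical.
      # Illustrative only: one queue is shared by two groups; aggregated units
      # are sent to a superset group covering every group seen in the frame.
      from collections import deque

      shared_queue = deque([("mcg1", b"\x01"), ("mcg2", b"\x02"), ("mcg1", b"\x03")])

      def aggregate_frame(queue, count):
          units = [queue.popleft() for _ in range(min(count, len(queue)))]
          superset = sorted({mcg for mcg, _ in units})
          payload = b"".join(data for _, data in units)
          return superset, payload

      print(aggregate_frame(shared_queue, 2))   # (['mcg1', 'mcg2'], b'\x01\x02')
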
  • Publication number: 20150186516
    Abstract: Embodiments of the present invention relate to a Lookup and Decision Engine (LDE) for generating lookup keys for input tokens and modifying the input tokens based on contents of lookup results. The input tokens are parsed from network packet headers by a Parser, and the tokens are then modified by the LDE. The modified tokens guide how corresponding network packets will be modified or forwarded by other components in a software-defined networking (SDN) system. The design of the LDE is highly flexible and protocol-independent. Conditions and rules for generating lookup keys and for modifying tokens are fully programmable such that the LDE can perform a wide variety of reconfigurable network features and protocols in the SDN system.
    Type: Application
    Filed: December 30, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Harish Krishnamoorthy
  • Publication number: 20150187419
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Application
    Filed: December 27, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
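    Editor's sketch: a minimal Python illustration of partitioning a pool of identical memory tiles among lookups by capacity so that allocations never overlap; the pool size and per-lookup needs are assumptions.
      # Illustrative only: hand out contiguous, non-overlapping tile ranges.
      TOTAL_TILES = 16

      def allocate_tiles(tiles_needed):     # {lookup_name: tile_count}
          allocation, next_tile = {}, 0
          for name, count in tiles_needed.items():
              if next_tile + count > TOTAL_TILES:
                  raise ValueError("tile pool exhausted")
              allocation[name] = list(range(next_tile, next_tile + count))
              next_tile += count
          return allocation

      print(allocate_tiles({"lookup0": 4, "lookup1": 8, "lookup2": 2}))
      # {'lookup0': [0, 1, 2, 3], 'lookup1': [4..11], 'lookup2': [12, 13]}
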
  • Publication number: 20150188848
    Abstract: Embodiments of the present invention relate to a scalable interconnection scheme of multiple processing engines on a single chip using on-chip configurable routers. The interconnection scheme supports unicast and multicast routing of data packets communicated by the processing engines. Each on-chip configurable router includes routing tables that are programmable by software, and is configured to correctly deliver incoming data packets to its output ports in a fair and deadlock-free manner. In particular, each output port of the on-chip configurable routers includes an output port arbiter to avoid deadlocks when there are contentions at output ports of the on-chip configurable routers and to guarantee fairness in delivery among transferred data packets.
    Type: Application
    Filed: December 27, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Nimalan Siva
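    Editor's sketch: a minimal Python illustration of a round-robin output-port arbiter that rotates priority so no requesting input is starved; this shows only the fairness idea, not the patented arbiter.
      # Illustrative only: grant one requesting input per cycle, rotating priority.
      class RoundRobinArbiter:
          def __init__(self, num_inputs: int):
              self.num_inputs = num_inputs
              self.last_grant = num_inputs - 1

          def grant(self, requests):        # requests: list of bools, one per input
              for step in range(1, self.num_inputs + 1):
                  candidate = (self.last_grant + step) % self.num_inputs
                  if requests[candidate]:
                      self.last_grant = candidate
                      return candidate
              return None                   # nobody requested this cycle

      arb = RoundRobinArbiter(4)
      print([arb.grant([True, True, False, True]) for _ in range(3)])   # [0, 1, 3]
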
  • Publication number: 20150186143
    Abstract: Embodiments of the present invention relate to fast and conditional data modification and generation in a software-defined network (SDN) processing engine. Modification of multiple inputs and generation of multiple outputs can be performed in parallel. The size of each input or output can be large, on the order of hundreds of bytes. The processing engine includes a control path and a data path. The control path generates instructions for modifying inputs and generating new outputs. The data path executes all instructions produced by the control path. The processing engine is typically programmable such that conditions and rules for data modification and generation can be reconfigured depending on network features and protocols supported by the processing engine. The SDN processing engine allows for processing multiple large-size data flows and is efficient in manipulating such data. The SDN processing engine achieves full throughput with multiple back-to-back input and output data flows.
    Type: Application
    Filed: December 30, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Mohan Balan
  • Patent number: 9065775
    Abstract: A network device comprises a plurality of physical ports and a packet processing pipeline. The packet processing pipeline is configured to assign a virtual port from a plurality of virtual ports to a packet received via one of the physical ports, wherein a quantity of the virtual ports is larger than a quantity of the physical ports, and wherein, for each of at least some of the physical ports, multiple virtual ports correspond to one physical port. The packet processing pipeline is also configured to assign a virtual domain from a plurality of virtual domains to the packet based on the assigned virtual port, and process the packet based on one or more of i) the assigned virtual port, ii) the assigned virtual domain, and iii) a header field of the packet, including determining zero, one, or more physical ports to which the packet is to be forwarded.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: June 23, 2015
    Assignee: Marvell World Trade Ltd.
    Inventors: Uri Safrai, David Melman, Tsahi Daniel, Nafea Bishara
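    Editor's sketch: a minimal Python illustration of mapping several virtual ports onto one physical port (here keyed by an outer VLAN ID) and using the assigned virtual port to pick a virtual domain; all map contents are hypothetical.
      # Illustrative only: (physical port, VLAN) -> virtual port -> virtual domain.
      VIRTUAL_PORT_MAP   = {(1, 10): 100, (1, 20): 101, (2, 10): 102}
      VIRTUAL_DOMAIN_MAP = {100: "tenant-a", 101: "tenant-b", 102: "tenant-a"}

      def classify(physical_port: int, vlan_id: int):
          vport = VIRTUAL_PORT_MAP[(physical_port, vlan_id)]
          return vport, VIRTUAL_DOMAIN_MAP[vport]

      print(classify(1, 20))   # (101, 'tenant-b')
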
  • Patent number: 9036993
    Abstract: Embodiments of the present disclosure provide methods for allocating bandwidth to a plurality of traffic containers of a passive optical network. The method comprises receiving upstream data from a plurality of traffic containers of the passive optical network and passing the upstream data to a traffic manager. The method further comprises dynamically changing the allocated bandwidth based at least in part on the amount of the upstream data stored in one or more queues of the traffic manager.
    Type: Grant
    Filed: February 12, 2013
    Date of Patent: May 19, 2015
    Assignee: Marvell World Trade Ltd.
    Inventors: Dimitry Melts, Tsahi Daniel
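    Editor's sketch: a minimal Python illustration of re-dividing upstream bandwidth among traffic containers in proportion to how much data sits in each container's queue in the traffic manager; the totals and queue depths are hypothetical.
      # Illustrative only: proportional share of total bandwidth by queue depth.
      def allocate_bandwidth(total_bw: int, queue_bytes: dict) -> dict:
          waiting = sum(queue_bytes.values())
          if waiting == 0:
              equal = total_bw // len(queue_bytes)
              return {tc: equal for tc in queue_bytes}
          return {tc: total_bw * q // waiting for tc, q in queue_bytes.items()}

      print(allocate_bandwidth(1000, {"tcont1": 6000, "tcont2": 2000, "tcont3": 0}))
      # {'tcont1': 750, 'tcont2': 250, 'tcont3': 0}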