Patents Assigned to Xpliant
  • Patent number: 9600614
    Abstract: System and method of automatically performing flip-flop insertions for each net in a logic interface by using the RTL-estimated maximum count as a limit. Based on timing analysis of the physical layout, the flip-flop insertion count needed for each net is derived and candidate locations for insertions are automatically detected. A set of constraints is applied to identify ineligible locations for flip-flop insertions. If more flip-flop insertions than the count limit are needed to satisfy the timing requirements for a net, timing-related variables are iteratively adjusted using the current layout until the timing requirements can be satisfied within the RTL count limit. If all the nets in the interface need fewer flip-flop insertions than the RTL count limit, the information can be fed back to update the RTL count limit. Each net is then parsed and flip-flops are inserted at appropriate locations.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: March 21, 2017
    Assignee: XPLIANT
    Inventors: Nikhil Jayakumar, Weihuang Wang, Weinan Ma, Daman Ahluwalia, Chirinjeev Singh
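    Sketch: The abstract above is high-level; a minimal Python sketch of the iteration it describes follows, with an entirely hypothetical timing model (net delay split into clock cycles by each inserted flip-flop) and hypothetical names — an illustration under stated assumptions, not the patented flow.
        def plan_flop_insertions(net_delay_ps, clock_period_ps, rtl_count_limit,
                                 relax_step_ps=50, max_iters=10):
            """Return a flop count within rtl_count_limit, relaxing a timing variable if needed."""
            period = clock_period_ps
            for _ in range(max_iters):
                # Each pipeline flop splits the net delay across one more clock cycle.
                needed = max(0, -(-net_delay_ps // period) - 1)  # ceil(delay/period) - 1
                if needed <= rtl_count_limit:
                    return needed, period
                # More flops than the RTL limit are needed: adjust a timing-related
                # variable (here, the assumed usable clock period) and retry.
                period += relax_step_ps
            raise RuntimeError("could not meet the RTL flop-count limit")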
  • Patent number: 9600620
    Abstract: System and method of automatically performing repeater insertions in the physical design of an integrated circuit. Repeaters are inserted in interconnects in a staggered fashion and spaced apart to accommodate potential flip-flop insertions. The sufficient spacing between the repeaters, combined with the staggered pattern, ensures that flip-flop insertions can be performed at any of the repeater locations without space limitations. When rerouting is needed following a flip-flop insertion on an interconnect, automatic rerouting is performed but restricted to a short, specified region along the interconnect. As a result, the alteration to the current routing configuration is minimal and deterministic.
    Type: Grant
    Filed: March 20, 2015
    Date of Patent: March 21, 2017
    Assignee: XPLIANT
    Inventors: Daman Ahluwalia, Nikhil Jayakumar
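    Sketch: A minimal Python illustration of staggered, evenly spaced repeater sites that leave room for later flip-flop insertion; the pitch, half-pitch stagger, and function names are assumptions, not values from the patent.
        def repeater_sites(route_length_um, pitch_um=400.0, net_index=0):
            # Stagger alternate nets by half a pitch so adjacent repeaters interleave.
            offset = pitch_um / 2.0 if net_index % 2 else 0.0
            sites, pos = [], pitch_um + offset
            while pos < route_length_um:
                sites.append(pos)
                pos += pitch_um
            return sites

        # Two adjacent nets get interleaved repeater locations.
        print(repeater_sites(2000, net_index=0))  # [400.0, 800.0, 1200.0, 1600.0]
        print(repeater_sites(2000, net_index=1))  # [600.0, 1000.0, 1400.0, 1800.0]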
  • Patent number: 9547733
    Abstract: System and method of checking logic equivalence following flip-flop insertions to identify paths with inversion errors. All the flip-flops in a gate-level netlist and the corresponding RTL design are treated as buffers in a logic equivalence check (LEC) tool. A logic mismatch of a path between the RTL design and the netlist indicates that an odd number of inverters has been inserted in the path during the flip-flop insertion process. Accordingly, the identified path is adjusted to ensure an even number of inverters.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: January 17, 2017
    Assignee: Xpliant
    Inventor: Chirinjeev Singh
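    Sketch: A minimal Python illustration of the parity rule implied above: with flip-flops treated as buffers, a path is equivalent only if it contains an even number of inverters, so a mismatching path gets one inverter added (or removed). The cell names are hypothetical.
        def fix_inversion(path_cells):
            """path_cells: list of cell types along one flop-insertion path."""
            inverters = sum(1 for cell in path_cells if cell == "INV")
            if inverters % 2 == 1:
                # LEC reported a mismatch: append one inverter so the count is even.
                path_cells.append("INV")
            return path_cells

        print(fix_inversion(["BUF", "INV", "BUF"]))  # ['BUF', 'INV', 'BUF', 'INV']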
  • Patent number: 9509585
    Abstract: A method includes receiving a packet at an ingress node of a network. A hierarchical time stamp is created in the packet by the ingress node. The hierarchical time stamp includes an initial time stamp and an initial node identifier. The packet is passed to another network node, which adds a subsequent time stamp and a subsequent node identifier to the hierarchical time stamp. The packet is received at an egress node of the network. A final time stamp and a final node identifier are added to the hierarchical time stamp at the egress node. The hierarchical time stamp is then removed from the packet and the packet is passed to another network. The hierarchical time stamp is delivered to an analyzer.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: November 29, 2016
    Assignee: Xpliant, Inc.
    Inventors: Tsahi Daniel, Enric Musoll, Sridevi Polasanapalli
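    Sketch: A minimal Python model of a hierarchical time stamp: each node appends a (node id, time stamp) pair, and the egress node strips the stamp from the packet and hands it to an analyzer. The dict-based packet and field names are assumptions for illustration only.
        import time

        def add_stamp(packet, node_id):
            packet.setdefault("hts", []).append((node_id, time.time_ns()))

        def egress(packet, node_id, analyzer):
            add_stamp(packet, node_id)       # final time stamp and node identifier
            analyzer(packet.pop("hts"))      # remove the HTS and deliver it
            return packet                    # packet continues to the other network

        pkt = {"payload": b"..."}
        add_stamp(pkt, "ingress-1")          # ingress creates the HTS
        add_stamp(pkt, "core-7")             # a transit node appends its stamp
        egress(pkt, "egress-3", lambda hts: print(hts))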
  • Patent number: 9438539
    Abstract: A packet processor includes a packet memory manager configured to receive a single header reference count and a single payload reference count for a packet. A page link list walk for the header under the control of the header reference count is performed in parallel with a page link list walk for the payload under the control of the payload reference count.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: September 6, 2016
    Assignee: Xpliant, Inc.
    Inventors: Tsahi Daniel, Enric Musoll, Sridevi Polasanapalli
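    Sketch: A minimal Python illustration of the idea above: the header and payload page chains are walked separately, each gated by its own reference count, so header pages can be recycled independently of payload pages. The data structures are hypothetical; in hardware the two walks run in parallel.
        def walk_and_free(first_page, next_page, ref_count, free_page):
            if ref_count > 0:                # other copies still need these pages
                return
            page = first_page
            while page is not None:
                free_page(page)
                page = next_page.get(page)

        def release_packet(hdr_first, hdr_refs, pay_first, pay_refs, next_page, free_page):
            walk_and_free(hdr_first, next_page, hdr_refs, free_page)   # header walk
            walk_and_free(pay_first, next_page, pay_refs, free_page)   # payload walk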
  • Publication number: 20160241473
    Abstract: A network switch includes a memory configurable to store alternate table representations of an individual trie in a hierarchy of tries. A prefix table processor accesses in parallel, using an input network address, the alternate table representations of the individual trie and searches for a longest prefix match in each alternate table representation to obtain local prefix matches. The longest prefix match from the local prefix matches is selected. The longest prefix match has an associated next hop index base address and offset value. A next hop index processor accesses a next hop index table in the memory utilizing the next hop index base address and offset value to obtain a next hop table pointer. A next hop processor accesses a next hop table in the memory using the next hop table pointer to obtain a destination network address.
    Type: Application
    Filed: April 27, 2016
    Publication date: August 18, 2016
    Applicant: Xpliant, Inc.
    Inventors: Weihuang Wang, Mohan Balan, Nimalan Siva, Zubin Shah
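    Sketch: A minimal Python illustration of the lookup pipeline described above: alternate representations of one trie are probed with the same address, the longest of the local matches wins, and its (base, offset) pair indexes the next hop index table and then the next hop table. Table layouts and names are assumptions.
        def longest_prefix_match(addr_bits, alt_tables):
            """alt_tables: list of dicts mapping a prefix string -> (base, offset)."""
            best = None
            for table in alt_tables:                 # probed in parallel in hardware
                for plen in range(len(addr_bits), -1, -1):
                    hit = table.get(addr_bits[:plen])
                    if hit is not None:
                        if best is None or plen > best[0]:
                            best = (plen, hit)       # local longest prefix match
                        break
            return best                              # (prefix_len, (base, offset)) or None

        def resolve(addr_bits, alt_tables, nh_index_table, nh_table):
            match = longest_prefix_match(addr_bits, alt_tables)
            if match is None:
                return None
            base, offset = match[1]
            nh_ptr = nh_index_table[base + offset]   # next hop index table lookup
            return nh_table[nh_ptr]                  # destination network address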
  • Patent number: 9331942
    Abstract: A network switch includes a memory configurable to store alternate table representations of an individual trie in a hierarchy of tries. A prefix table processor accesses in parallel, using an input network address, the alternate table representations of the individual trie and searches for a longest prefix match in each alternate table representation to obtain local prefix matches. The longest prefix match from the local prefix matches is selected. The longest prefix match has an associated next hop index base address and offset value. A next hop index processor accesses a next hop index table in the memory utilizing the next hop index base address and offset value to obtain a next hop table pointer. A next hop processor accesses a next hop table in the memory using the next hop table pointer to obtain a destination network address.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: May 3, 2016
    Assignee: Xpliant, Inc.
    Inventors: Weihuang Wang, Mohan Balan, Nimalan Siva, Zubin Shah
  • Patent number: 9262369
    Abstract: A packet processor has a packet memory manager configured to store a page walk link list, receive a descriptor and initiate a page walk through the page walk link list in response to the descriptor and without a prompt from transmit direct memory access circuitry. The packet memory manager is configured to receive an indicator of a single page packet and read a new packet in response to the indicator without waiting to obtain page state associated with the page of the single page packet.
    Type: Grant
    Filed: April 1, 2015
    Date of Patent: February 16, 2016
    Assignee: Xpliant, Inc.
    Inventors: Tsahi Daniel, Enric Musoll, Dan Tu, Sridevi Polasanapalli
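    Sketch: A minimal Python illustration of the two behaviors in the abstract: a page walk starts as soon as a descriptor arrives (with no prompt from the transmit DMA), and a packet flagged as single-page is read without waiting for page state. Descriptor fields and helper names are assumptions.
        def on_descriptor(desc, page_state, next_page, read_packet):
            """Triggered directly by a descriptor, not by transmit DMA circuitry."""
            if desc.get("single_page"):
                read_packet(desc["first_page"], state=None)   # fast path: no state wait
                return
            page = desc["first_page"]
            while page is not None:                           # walk the page link list
                read_packet(page, state=page_state[page])
                page = next_page.get(page)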
  • Patent number: 9264357
    Abstract: A network switch includes packet processing units in a first processor core. An interface module is connected to the packet processing units. The interface module supports a unified table search request interface and a unified table search response interface. A common memory pool is connected to the interface module. The common memory pool includes a variety of memory types configurable to support multiple parallel table search requests.
    Type: Grant
    Filed: March 7, 2014
    Date of Patent: February 16, 2016
    Assignee: Xpliant, Inc.
    Inventors: Weihuang Wang, Tsahi Daniel, Mohan Balan, Nimalan Siva
  • Patent number: 9256380
    Abstract: A method of processing packets includes receiving packets and assigning the packets to different pages, where each page represents a fixed amount of memory. The different pages are distributed to different pools, where each pool has a unique mapping to banks, and where each bank is a set of memory resources. The different pages from the different pools are assigned to different banks in accordance with the unique mapping.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: February 9, 2016
    Assignee: Xpliant, Inc.
    Inventors: Tsahi Daniel, Enric Musoll, Dan Tu
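    Sketch: A minimal Python illustration of the three-level assignment described above: a packet is split into fixed-size pages, pages are distributed across pools, and each pool maps to its own set of banks. The page size and pool-to-bank mapping are illustrative values.
        PAGE_SIZE = 256                                        # bytes per page (illustrative)
        POOL_TO_BANKS = {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7]}  # unique mapping

        def place_packet(packet_len, next_page_id, num_pools=4):
            num_pages = (packet_len + PAGE_SIZE - 1) // PAGE_SIZE
            placements = []
            for i in range(num_pages):
                page = next_page_id + i
                pool = page % num_pools                        # distribute pages over pools
                bank = POOL_TO_BANKS[pool][page % len(POOL_TO_BANKS[pool])]
                placements.append((page, pool, bank))
            return placements

        print(place_packet(700, next_page_id=10))  # [(10, 2, 4), (11, 3, 7), (12, 0, 0)]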
  • Publication number: 20150347313
    Abstract: Embodiments of the present invention relate to a centralized table aging module that efficiently and flexibly utilizes an embedded memory resource, and that enables and facilitates separate network controllers. The centralized table aging module performs aging of tables in parallel using the embedded memory resource. The table aging module performs an age marking process and an age refreshing process. The memory resource includes age mark memory and age mask memory. Age marking is applied to the age mark memory. The age mask memory provides per-entry control granularity regarding the aging of table entries.
    Type: Application
    Filed: May 28, 2014
    Publication date: December 3, 2015
    Applicant: XPLIANT, Inc.
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Mohan Balan
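    Sketch: A minimal Python model of the two processes named above: a periodic age-marking sweep sets a mark bit on every enabled entry, a hit refreshes (clears) the mark, and the per-entry age mask decides whether an entry participates in aging at all. Structure and names are assumptions.
        class TableAger:
            def __init__(self, size):
                self.age_mark = [0] * size   # 1 = not refreshed since the last sweep
                self.age_mask = [1] * size   # 1 = aging enabled for this entry

            def refresh(self, idx):          # called when a table entry is hit/updated
                self.age_mark[idx] = 0

            def sweep(self):                 # periodic age-marking pass
                expired = [i for i in range(len(self.age_mark))
                           if self.age_mask[i] and self.age_mark[i]]
                for i in range(len(self.age_mark)):
                    if self.age_mask[i]:
                        self.age_mark[i] = 1 # expires next sweep unless refreshed
                return expired               # candidates the controller may delete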
  • Publication number: 20150350089
    Abstract: Embodiments of the present invention relate to a centralized network analytic device that efficiently uses on-chip memory to flexibly perform counting, traffic rate monitoring, and flow sampling. The device includes a pool of memory that is shared by all cores and the packet processing stages of each core. The counting, monitoring, and sampling are all defined through software, allowing for greater flexibility and efficient analytics in the device. In some embodiments, the device is a network switch.
    Type: Application
    Filed: May 28, 2014
    Publication date: December 3, 2015
    Applicant: XPLIANT, Inc.
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
  • Publication number: 20150186143
    Abstract: Embodiments of the present invention relate to fast and conditional data modification and generation in a software-defined network (SDN) processing engine. Modification of multiple inputs and generation of multiple outputs can be performed in parallel. Each input or output can be large, on the order of hundreds of bytes. The processing engine includes a control path and a data path. The control path generates instructions for modifying inputs and generating new outputs. The data path executes all instructions produced by the control path. The processing engine is typically programmable such that conditions and rules for data modification and generation can be reconfigured depending on the network features and protocols supported by the processing engine. The SDN processing engine allows for processing multiple large-size data flows and is efficient in manipulating such data. The SDN processing engine achieves full throughput with multiple back-to-back input and output data flows.
    Type: Application
    Filed: December 30, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Mohan Balan
  • Publication number: 20150186583
    Abstract: Clock networks constructed with variable drive strength clock drivers are prepared for tuning. The clock drivers are built from a smaller set of base standard cells. Locations of the input and output netlists of the macrocells are marked and reserved even through the extraction process. The macrocells can be flattened, generating a netlist with the base cells, and recombined during circuit simulation, thereby reducing the number of iterations and making the tuning flow more efficient. The clock network is initially tuned by adding or removing cross-links in the mesh to balance the capacitive loads on each driver of the clock mesh.
    Type: Application
    Filed: December 26, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, Inc.
    Inventors: Nikhil Jayakumar, Vivek Trivedi, Vasant K. Palisetti, Bhagavati R. Mula, Daman Ahluwalia, Amir H. Motamedi
  • Publication number: 20150186560
    Abstract: An electronic device fabrication tool uses only standard-size cells from a cell library to fabricate a clock distribution network on a semiconductor device, thereby reducing the cost of the fabrication process. Target clock drive strengths are determined to reduce skew along the clock distribution network, and the standard-size cells are combined to produce clock-driving components substantially equal to the target clock drive strengths. The cells are combined using VIA programming: they are electrically coupled by adding or removing the vias that connect them. In hybrid tree-mesh clock distribution networks, VIA programming ensures that the binary tree portions of the network are not affected by the tuning. Preferably, the clock-driving elements are clock inverters or buffers, though other elements can be used to drive clock signals on the clock distribution network.
    Type: Application
    Filed: December 26, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, Inc.
    Inventors: Nikhil Jayakumar, Vivek Trivedi, Vasant K. Palisetti, Bhagavati R. Mula, Daman Ahluwalia, Amir H. Motamedi
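    Sketch: A minimal Python illustration of reaching a target drive strength from a small set of standard-size cells (the cells that VIA programming would tie together); the available sizes and the greedy selection are assumptions.
        def combine_cells(target_drive, cell_sizes=(8, 4, 2, 1)):
            """Pick standard cells whose summed drive meets or just exceeds the target."""
            chosen, total = [], 0
            for size in sorted(cell_sizes, reverse=True):
                while total + size <= target_drive:
                    chosen.append(size)
                    total += size
            if total < target_drive:                 # round up with the smallest cell
                chosen.append(min(cell_sizes))
                total += min(cell_sizes)
            return chosen, total

        print(combine_cells(11))    # ([8, 2, 1], 11)
        print(combine_cells(11.5))  # ([8, 2, 1, 1], 12)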
  • Publication number: 20150186589
    Abstract: Clock stations in a hybrid tree-mesh clock distribution network are placed and routed using placement information embedded in instance names of the macrocells that form the clock-distribution network. The instance name includes (X,Y) coordinate information corresponding to placement of the macrocell in the physical layout of the network design. Base cells in each macrocell are placed in a known deterministic arrangement, such as one on top of another in a layout of the clock distribution network, all at the same (X,Y) offset. Preferably, the base cells are all from a standard-cell library, thereby reducing design cost and debug.
    Type: Application
    Filed: December 26, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, Inc.
    Inventors: Nikhil Jayakumar, Vivek Trivedi, Vasant K. Palisetti, Bhagavati R. Mula, Daman Ahluwalia, Amir H. Motamedi
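    Sketch: A minimal Python illustration of recovering placement from coordinates embedded in an instance name; the "_x<X>_y<Y>" naming pattern is an assumption for illustration, not the patent's actual convention.
        import re

        _NAME_RE = re.compile(r"_x(?P<x>\d+)_y(?P<y>\d+)$")

        def placement_from_name(instance_name):
            m = _NAME_RE.search(instance_name)
            if m is None:
                raise ValueError(f"no embedded coordinates in {instance_name!r}")
            return int(m.group("x")), int(m.group("y"))

        print(placement_from_name("clk_station_x120_y340"))  # (120, 340)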
  • Publication number: 20150188848
    Abstract: Embodiments of the present invention relate to a scalable interconnection scheme of multiple processing engines on a single chip using on-chip configurable routers. The interconnection scheme supports unicast and multicast routing of data packets communicated by the processing engines. Each on-chip configurable router includes routing tables that are programmable by software, and is configured to correctly deliver incoming data packets to its output ports in a fair and deadlock-free manner. In particular, each output port of the on-chip configurable routers includes an output port arbiter to avoid deadlocks when there are contentions at output ports of the on-chip configurable routers and to guarantee fairness in delivery among transferred data packets.
    Type: Application
    Filed: December 27, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Nimalan Siva
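    Sketch: A minimal Python model of a per-output-port round-robin arbiter of the kind the abstract mentions: requests from input ports are granted in rotating order so no input is starved. The class and its behavior are illustrative assumptions.
        class RoundRobinArbiter:
            def __init__(self, num_inputs):
                self.num_inputs = num_inputs
                self.last_grant = num_inputs - 1

            def grant(self, requests):
                """requests: one bool per input port; returns the granted input or None."""
                for i in range(1, self.num_inputs + 1):
                    candidate = (self.last_grant + i) % self.num_inputs
                    if requests[candidate]:
                        self.last_grant = candidate
                        return candidate
                return None                          # no requests this cycle

        arb = RoundRobinArbiter(4)
        print(arb.grant([True, False, True, False]))  # 0
        print(arb.grant([True, False, True, False]))  # 2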
  • Publication number: 20150187419
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Application
    Filed: December 27, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
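    Sketch: A minimal Python illustration of carving a pool of shared memory tiles into non-overlapping sets, one per parallel lookup, sized by each lookup's capacity need; the tile count and lookup names are assumptions.
        def allocate_tiles(lookup_needs, total_tiles=16):
            """lookup_needs: {lookup_id: tiles_required}. Returns disjoint tile sets."""
            allocation, next_tile = {}, 0
            for lookup_id, need in lookup_needs.items():
                if next_tile + need > total_tiles:
                    raise ValueError("not enough tiles for all lookups")
                allocation[lookup_id] = list(range(next_tile, next_tile + need))
                next_tile += need
            return allocation

        # Three parallel lookups get disjoint tile sets from the shared pool.
        print(allocate_tiles({"l2_mac": 4, "ipv4_route": 8, "acl": 2}))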
  • Publication number: 20150186516
    Abstract: Embodiments of the present invention relate to a Lookup and Decision Engine (LDE) for generating lookup keys for input tokens and modifying the input tokens based on contents of lookup results. The input tokens are parsed from network packet headers by a Parser, and the tokens are then modified by the LDE. The modified tokens guide how corresponding network packets will be modified or forwarded by other components in a software-defined networking (SDN) system. The design of the LDE is highly flexible and protocol independent. Conditions and rules for generating lookup keys and for modifying tokens are fully programmable such that the LDE can perform a wide variety of reconfigurable network features and protocols in the SDN system.
    Type: Application
    Filed: December 30, 2013
    Publication date: July 2, 2015
    Applicant: XPLIANT, INC.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Harish Krishnamoorthy
  • Patent number: 9009364
    Abstract: A packet processor has a packet memory manager configured to store a page walk link list, receive a descriptor and initiate a page walk through the page walk link list in response to the descriptor and without a prompt from transmit direct memory access circuitry. The packet memory manager is configured to receive an indicator of a single page packet and read a new packet in response to the indicator without waiting to obtain page state associated with the page of the single page packet.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: April 14, 2015
    Assignee: Xpliant, Inc.
    Inventors: Tsahi Daniel, Enric Musoll, Dan Tu, Sridevi Polasanapalli