Abstract: A content addressable memory (CAM) can include a CAM memory array having both a data field and a mask field. A multiplexer (MUX) can selectively load data from either a register or an external data input to one or both fields of the CAM memory array.
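As a rough illustration of the data/mask organization described in the abstract above, the following sketch models a ternary CAM entry with separate data and mask fields and a mux-style load that draws from either an internal register or an external input. All names and widths are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of a ternary CAM entry with separate data and mask fields.
# The "MUX" is modeled as a source argument selecting register vs. external data.
# Names, widths, and behavior are assumptions for illustration only.

class TernaryEntry:
    def __init__(self, width=8):
        self.width = width
        self.data = 0          # data field
        self.mask = 0          # mask field: 1 = compare this bit, 0 = "don't care"

    def load(self, field, source, register_value, external_value):
        """Load 'data' or 'mask' from the selected source ('register' or 'external')."""
        value = register_value if source == "register" else external_value
        value &= (1 << self.width) - 1
        if field == "data":
            self.data = value
        elif field == "mask":
            self.mask = value
        else:
            raise ValueError("field must be 'data' or 'mask'")

    def matches(self, key):
        """A ternary match ignores bit positions where the mask bit is 0."""
        return (key ^ self.data) & self.mask == 0


entry = TernaryEntry()
entry.load("data", "register", register_value=0b1010_1100, external_value=0)
entry.load("mask", "external", register_value=0, external_value=0b1111_0000)
print(entry.matches(0b1010_0011))  # True: the low nibble is masked out
```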
Abstract: A method of processing a signal is disclosed. The method comprises generating a digital signal, converting the digital signal to an analog signal, and generating an amplified analog signal having distortions. The method further comprises converting the amplified analog signal to a feedback digital signal at a sample rate and updating a model of the distortions based on the feedback digital signal.
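The abstract above outlines a feedback loop in which a model of the amplifier's distortion is updated from digitized samples of the amplifier output. A minimal numerical sketch of that loop, assuming a simple cubic distortion model fit by least squares (the model form and fit method are assumptions, not taken from the patent):

```python
# Hedged sketch of the feedback loop: digital signal -> "analog" amplification with
# distortion -> sampled feedback signal -> update of a distortion model.
# The cubic model and least-squares update are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def amplifier(x):
    # Distortion unknown to the updater: gain plus a cubic term and a little noise.
    return 2.0 * x + 0.3 * x**3 + 0.01 * rng.standard_normal(x.shape)

# 1. Generate a digital signal (a sampled sine here).
n = 1024
signal = np.sin(2 * np.pi * 5 * np.arange(n) / n)

# 2./3. "Convert to analog" and amplify (simulated in floating point).
amplified = amplifier(signal)

# 4. Convert the amplified signal to a feedback digital signal at a sample rate.
feedback = amplified[::4]          # feedback path sampled at a lower rate
reference = signal[::4]

# 5. Update a model of the distortion: fit y ~ a*x + b*x^3 to the feedback samples.
A = np.column_stack([reference, reference**3])
coeffs, *_ = np.linalg.lstsq(A, feedback, rcond=None)
print("estimated linear gain:", round(coeffs[0], 3))
print("estimated cubic term:", round(coeffs[1], 3))
```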
Abstract: A system and method are disclosed for securely transmitting and receiving a signal. A nonlinear keying modulator is used in the transmitter to encrypt the signal using a nonlinear keying modulation technique. A nonlinear keying demodulator is used in the receiver to decrypt the signal.
Abstract: A method of mapping logical block select signals to physical blocks can include receiving at least one signal for each of n+1 logical blocks, where n is an integer greater than one, that each map to one of m+1 physical blocks, where n<m. The method also includes mapping the at least one signal for each logical block to a physical block from a corresponding set of r+1 physical blocks, each set of r+1 physical blocks being different from one another.
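One way to picture the mapping constraint above is a table that gives each logical block its own candidate set of physical blocks. The sketch below is only an illustrative interpretation (the way candidate sets are constructed is an assumption); it resolves each logical block select to one physical block drawn from a set that differs from every other logical block's set.

```python
# Illustrative sketch: map n+1 logical blocks onto m+1 physical blocks (n < m),
# each logical block selecting from its own set of r+1 candidate physical blocks.
# The candidate-set construction here is an assumption for demonstration only.

n, m, r = 3, 7, 1          # 4 logical blocks, 8 physical blocks, sets of 2

# Give each logical block a distinct window of r+1 physical blocks.
candidate_sets = {
    logical: tuple(range(logical * (r + 1), logical * (r + 1) + r + 1))
    for logical in range(n + 1)
}

def map_block(logical, select_bit):
    """Resolve a logical block select to a physical block within its candidate set."""
    return candidate_sets[logical][select_bit % (r + 1)]

assert len(set(candidate_sets.values())) == n + 1   # the sets differ from one another
print(candidate_sets)
print("logical 2, select 1 ->", map_block(2, 1))    # physical block 5
```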
Abstract: A data stream search system can include a plurality of search data inputs logically divided into at least M+N sets. The sets have a logical order with respect to one another, each set providing more than one bit value. A key application circuit can comprise a plurality of data paths that each couple a different group of at least M data input sets to a corresponding content addressable memory (CAM) section. Each different group of at least M data input sets can be contiguous with respect to the logical order, and shifted in bit order from one another by at least two bits.
Type:
Grant
Filed:
November 24, 2008
Date of Patent:
July 3, 2012
Assignee:
NetLogic Microsystems, Inc.
Inventors:
Mark Birman, Andrew Rosman, Pankaj Gupta, Ashish Goel
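Returning to the data stream search abstract above (logically ordered input sets applied through shifted data paths to CAM sections): the effect is that several shifted windows of the same stream are compared in parallel. A small sketch of that idea, with the set size, M, and section count all assumed for illustration:

```python
# Sketch of shifted key application: several CAM "sections" each receive a window
# of M contiguous input sets, successive windows shifted by one whole set.
# SET_BITS, M, and SECTIONS are assumptions for illustration only.

SET_BITS = 8        # each input set supplies one byte (more than one bit, per the abstract)
M = 4               # sets per search key
SECTIONS = 3        # number of CAM sections / data paths

def windows(stream, m=M, sections=SECTIONS):
    """Yield, per section, the contiguous group of m sets it would receive."""
    for s in range(sections):
        yield s, stream[s:s + m]

stream = [0x41, 0x42, 0x43, 0x44, 0x45, 0x46]   # logically ordered input sets
pattern = [0x42, 0x43, 0x44, 0x45]              # value stored in each CAM section

for section, key in windows(stream):
    shift_bits = section * SET_BITS             # windows shifted in bit order by whole sets
    print(f"section {section}: shift {shift_bits} bits, key {key}, match={key == pattern}")
```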
Abstract: A CAM system (200) can include a number of entries (202-0 to 202-3) having one portion for storing a data value (e.g., E1) and another portion for storing a replicated data value (E1(REP)). For on-the-fly error correction, the entries can be searched by applying an appended key value that includes a key portion (KEY) and replicated key portion (KEY(REP)).
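The replicated-entry scheme above can be pictured as doubling both the stored word and the search key, so that a corrupted copy no longer agrees with the appended key. The toy model below only shows the appended-key comparison; entry width, error injection, and the error-handling policy are assumptions not stated in the abstract.

```python
# Toy model of an appended-key search against entries that store a value plus a
# replicated copy. Widths and the handling of a detected error are assumptions;
# the abstract only states that search uses KEY appended with KEY(REP).

WIDTH = 8
MASK = (1 << WIDTH) - 1

def make_entry(value):
    return (value & MASK, value & MASK)        # (E1, E1_REP)

def appended_key(key):
    return (key & MASK, key & MASK)            # (KEY, KEY_REP)

def search(entries, key):
    k, k_rep = appended_key(key)
    for index, (e, e_rep) in enumerate(entries):
        if e == k and e_rep == k_rep:          # both copies must agree with the key
            return index
    return None

entries = [make_entry(v) for v in (0x10, 0x2A, 0x3C)]
# Inject a single-bit error into one copy of entry 1.
e, e_rep = entries[1]
entries[1] = (e ^ 0x04, e_rep)

print(search(entries, 0x2A))   # None: the corrupted copy no longer matches the appended key
print(search(entries, 0x3C))   # 2: a clean entry still matches on both halves
```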
Abstract: A method may include, in response to a single command and an N-bit segment value, generating a search key comprising M segments for at least one of a plurality of different databases, the N-bit segment value forming different ones of the M search key segments according to a database configuration of the at least one database.
Type:
Grant
Filed:
October 11, 2010
Date of Patent:
May 22, 2012
Assignee:
NetLogic Microsystems, Inc.
Inventors:
Vinay Raja Iyengar, Venkat Rajendher Reddy Gaddam, Aparna Bharat
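For the key-generation abstract above (a single command plus an N-bit segment value producing an M-segment search key whose layout depends on the target database), a minimal sketch can slice the segment value according to a per-database configuration table. The database names and configurations below are invented for illustration.

```python
# Sketch: build an M-segment search key from one N-bit segment value, where each
# database configuration decides which key segments the value feeds and how wide
# each slice is. The configurations below are illustrative assumptions only.

DB_CONFIG = {
    # database: list of (segment_index, bit_width) filled from the N-bit value, LSB first
    "ipv4_acl":  [(0, 8), (1, 8), (2, 16)],
    "mac_table": [(0, 16), (3, 16)],
}

def build_key(db, segment_value, m=4):
    key = [0] * m
    offset = 0
    for seg_index, width in DB_CONFIG[db]:
        key[seg_index] = (segment_value >> offset) & ((1 << width) - 1)
        offset += width
    return key

value = 0xDEADBEEF                      # a 32-bit segment value (N = 32 here)
print([hex(s) for s in build_key("ipv4_acl", value)])   # split across segments 0, 1, 2
print([hex(s) for s in build_key("mac_table", value)])  # same value laid out differently
```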
Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
Abstract: A data receiver circuit includes a transmission line to generate the appropriate timing for clock and data recovery. The transmission line receives a reference signal and propagates it through at least two segments of predetermined lengths. The transmission line is configured with a first tap to extract a first delayed signal from the first segment, and a second tap to extract a second delayed signal from the second segment. A sampling circuit generates samples, at a first time period, from an input signal and the first delayed signal, and generates samples, at a second time period, from the input signal and the second delayed signal. A capacitance control device to adjust the capacitance of the transmission line is also disclosed.
Abstract: A multi-protocol communication circuit, for example, a serializer-deserializer (SerDes) circuit for communicating between an internal logic circuit and an external link, includes a select terminal configured to accept a select signal representing a plurality of mode select signals. A SerDes core is coupled to the select terminal and configured to transmit outbound data conforming with a first communication protocol in response to a first mode select signal and conforming with a second communication protocol in response to a second mode select signal. The SerDes core is also configured to receive inbound data conforming with the first communication protocol in response to the first mode select signal and with the second communication protocol in response to the second mode select signal. Advantages of the invention include the ability to provide high bandwidth communications between integrated circuits that employ different SerDes protocols.
Type:
Grant
Filed:
May 30, 2003
Date of Patent:
April 3, 2012
Assignee:
NetLogic Microsystems, Inc.
Inventors:
Craig S. Forrest, Gaurav Singh, Kiran B. Kattel
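As a software analogy for the mode-select behavior in the SerDes abstract above, the dispatch below picks protocol handling for outbound and inbound data based on a mode select value. The protocol names and "encodings" are placeholders; real SerDes protocol conversion happens in hardware and is not specified at this level in the abstract.

```python
# Software analogy only: a mode select value chooses which protocol handling is
# applied to outbound and inbound data. Protocol names and transforms are
# placeholders, not details from the patent.

CODECS = {
    0: ("protocol_a", lambda data: data.upper(), lambda data: data.lower()),
    1: ("protocol_b", lambda data: data[::-1],   lambda data: data[::-1]),
}

def transmit(mode_select, outbound):
    name, encode, _ = CODECS[mode_select]
    return name, encode(outbound)

def receive(mode_select, inbound):
    name, _, decode = CODECS[mode_select]
    return name, decode(inbound)

print(transmit(0, "payload"))   # ('protocol_a', 'PAYLOAD')
print(receive(1, "daolyap"))    # ('protocol_b', 'payload')
```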
Abstract: A method may include comparing a first content addressable memory (“CAM”) entry with a first key value to generate a first comparison result; comparing each of multiple second CAM entries with a second key value to generate multiple second comparison results; and generating a match signal if the first key value matches the first CAM entry and the second key value matches at least one of the multiple second CAM entries.
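The match rule in the abstract above is a logical AND of one comparison with an OR over several comparisons. A direct restatement (how the entries are grouped in hardware is not specified in the abstract; the grouping shown is assumed):

```python
# Direct restatement of the match rule: the first entry must match the first key,
# and at least one of the multiple second entries must match the second key.
# The entry grouping is an assumption for illustration.

def match(first_entry, second_entries, first_key, second_key):
    first_hit = first_entry == first_key
    second_hit = any(entry == second_key for entry in second_entries)
    return first_hit and second_hit

print(match(0xAB, [0x01, 0x7F, 0x33], first_key=0xAB, second_key=0x7F))  # True
print(match(0xAB, [0x01, 0x7F, 0x33], first_key=0xAB, second_key=0x44))  # False
```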
Abstract: The power consumption of a search engine such as a CAM device is dynamically adjusted to prevent performance degradation and/or damage resulting from overheating. For some embodiments, the CAM device is continuously sampled to generate sampling signals indicating the number of active states and number of compare operations performed during each sampling period. The sampling signals are accumulated to generate an estimated device power profile, which is compared with reference values corresponding to predetermined power levels to generate a dynamic power control signal indicating predicted increases in the device's operating temperature resulting from its power consumption. The dynamic power control signal is then used to selectively reduce the input data rate of the CAM device, thereby reducing power consumption and allowing the device to cool.
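The control loop described above can be sketched as: accumulate per-period activity counts into a power estimate, compare the estimate against reference levels, and scale back the input data rate when the estimate predicts excessive heating. The weights, thresholds, and rate steps below are assumptions for illustration.

```python
# Sketch of the dynamic power-control loop: activity sampling -> estimated power
# profile -> comparison against reference levels -> input data rate throttling.
# All coefficients, thresholds, and rate fractions are illustrative assumptions.

WEIGHT_ACTIVE = 0.2      # assumed cost per active block per sampling period
WEIGHT_COMPARE = 0.5     # assumed cost per compare operation per sampling period
THRESHOLDS = [(400, 1.0), (600, 0.75), (800, 0.5)]   # (power level, allowed rate fraction)

def estimate_power(active_blocks, compare_ops):
    return WEIGHT_ACTIVE * active_blocks + WEIGHT_COMPARE * compare_ops

def rate_fraction(power):
    allowed = 0.25                      # floor rate if every threshold is exceeded
    for level, fraction in reversed(THRESHOLDS):
        if power <= level:
            allowed = fraction
    return allowed

samples = [(100, 500), (200, 1200), (300, 1500)]     # (active blocks, compares) per period
for active, compares in samples:
    power = estimate_power(active, compares)
    print(f"estimated power {power:.0f} -> input rate x{rate_fraction(power)}")
```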
Abstract: A method may include selectively coupling a result line to a reference node in response to a compare data value being applied to a plurality of compare cell circuits; precharging the result line to a precharge potential by enabling a first precharge path while the compare data value is being applied; and, after precharging the result line by enabling the first precharge path, disabling the first precharge path to place it in a high impedance state.
Type:
Grant
Filed:
August 23, 2010
Date of Patent:
January 3, 2012
Assignee:
NetLogic Microsystems, Inc.
Inventors:
Chetan Deshpande, Bindiganavale S. Nataraj
Abstract: A content addressable memory (CAM) cell includes a first storage element for storing a data value, a second storage element for storing the data value, and a compare circuit having first inputs to receive from the first storage element a first complementary data signal indicative of the data value, second inputs to receive from the second storage element a second complementary data signal indicative of the data value, third inputs to receive comparand data, and an output coupled to a match line. The CAM cell allows for simultaneous read and compare operations, as well as simultaneous refresh and compare operations.
Abstract: An integrated search engine device evaluates span prefix masks for keys residing at leaf parent levels of a search tree to identify a longest prefix match to an applied search key. This longest prefix match resides at a leaf node of the search tree that is outside a search path of the search tree for the applied search key. The search engine device is also configured to read a bitmap associated with the leaf node to identify a pointer to associated data for the longest prefix match. The pointer has a value that is based on a position of a set bit within the bitmap that corresponds to a set bit within the span prefix mask that signifies the longest prefix match.
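The pointer derivation in the abstract above (a pointer value based on the position of a set bit within a bitmap) is commonly realized as a base pointer plus a population count of the set bits below that position. The sketch below assumes exactly that convention; it is consistent with the abstract but not a statement of the patent's exact encoding.

```python
# Sketch of deriving an associated-data pointer from a bitmap: the pointer is a
# base address plus the number of set bits below the matching bit's position.
# The base+popcount convention is an assumption for illustration.

def pointer_from_bitmap(bitmap, bit_position, base):
    if not (bitmap >> bit_position) & 1:
        return None                                  # no associated data for this position
    below = bitmap & ((1 << bit_position) - 1)
    return base + bin(below).count("1")              # offset = popcount of lower set bits

bitmap = 0b1011_0100          # set bits at positions 2, 4, 5, 7
print(pointer_from_bitmap(bitmap, bit_position=5, base=1000))   # 1002: bits 2 and 4 lie below
print(pointer_from_bitmap(bitmap, bit_position=3, base=1000))   # None: bit 3 is not set
```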
Abstract: A method of placing a content addressable memory (CAM) into a low current state is disclosed. The CAM can include at least one storage location that does not store valid data for a compare operation and includes a plurality of CAM cells, each CAM cell having at least two data controllable impedance paths arranged in parallel with one another. The method can include configuring the majority of the CAM cells to store data values that maintain the corresponding at least two data controllable impedance paths in high impedance states.
Abstract: An extended translation look-aside buffer (eTLB) for converting virtual addresses into physical addresses is presented. The eTLB includes a physical memory address storage having a number of physical addresses; a virtual memory address storage configured to store a number of virtual memory addresses corresponding with the physical addresses, the virtual memory address storage including a set associative memory (SAM) structure and a content addressable memory (CAM) structure; and comparison circuitry for determining whether a requested address is present in the virtual memory address storage. The eTLB is configured to receive an index register for identifying the SAM structure and the CAM structure, and to receive an entry register for providing a virtual page number corresponding with the virtual memory addresses.
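The eTLB above combines a set-associative structure with a fully associative CAM for virtual-to-physical translation. A simplified lookup model is sketched below; the set count, page size, and the policy of checking the set-associative part before the CAM are assumptions, not details from the abstract.

```python
# Simplified eTLB lookup: try a small set-associative structure first, then fall
# back to a fully associative CAM. Set count, page size, and lookup order are
# assumptions for illustration only.

PAGE_SHIFT = 12              # 4 KiB pages (assumed)
NUM_SETS = 4

sam = {s: {} for s in range(NUM_SETS)}   # set index -> {virtual page: physical page}
cam = {}                                  # virtual page -> physical page

def insert(vpn, ppn, use_cam=False):
    if use_cam:
        cam[vpn] = ppn
    else:
        sam[vpn % NUM_SETS][vpn] = ppn

def translate(vaddr):
    vpn, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
    ppn = sam[vpn % NUM_SETS].get(vpn)
    if ppn is None:
        ppn = cam.get(vpn)               # fully associative fallback
    if ppn is None:
        return None                      # TLB miss
    return (ppn << PAGE_SHIFT) | offset

insert(vpn=0x12345, ppn=0x00ABC)
insert(vpn=0x0FFFF, ppn=0x00DEF, use_cam=True)
print(hex(translate(0x12345678)))        # 0xabc678 via the set-associative part
print(hex(translate(0x0FFFF010)))        # 0xdef010 via the CAM
print(translate(0x00001000))             # None: translation miss
```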
Abstract: A method, apparatus, and storage medium product are provided for forming a forwarding database, and for using the formed database to more efficiently and quickly route packets of data across a computer network. The forwarding database is arranged into multiple sub-databases, each pointed to by a pointer within a pointer table. When performing a longest-match search of incoming addresses, a longest prefix matching algorithm can be used to find the longest match among specialized “spear prefixes” stored in the pointer table. After the longest spear prefix is found, the pointer table directs the next search to the sub-database pointed to by that spear prefix. Another longest-match search can then be performed for database prefixes (or simply “prefixes”) within the selected sub-database. Only the sub-database of interest is therefore searched; the other sub-databases are not accessed.
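The two-stage lookup described above (longest match among spear prefixes in a pointer table, then a longest-prefix match only inside the selected sub-database) can be sketched directly. The prefixes and sub-database contents are invented, and each stage uses a plain linear scan for clarity rather than whatever hardware structure the patent employs.

```python
# Two-stage longest-prefix match sketch: a pointer table keyed by "spear prefixes"
# selects a sub-database, then a second longest-prefix match runs only within it.
# Prefix contents are invented; each stage is a simple linear scan for clarity.

import ipaddress

def longest_match(prefixes, addr):
    best = None
    for prefix in prefixes:
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best.prefixlen):
            best = net
    return best

# Stage 1: spear prefixes in the pointer table, each pointing at a sub-database.
pointer_table = {
    "10.0.0.0/8":     {"10.1.0.0/16": "port1", "10.1.2.0/24": "port2"},
    "192.168.0.0/16": {"192.168.5.0/24": "port3"},
}

def route(destination):
    addr = ipaddress.ip_address(destination)
    spear = longest_match(pointer_table.keys(), addr)
    if spear is None:
        return None
    sub_db = pointer_table[str(spear)]
    prefix = longest_match(sub_db.keys(), addr)        # only this sub-database is searched
    return sub_db[str(prefix)] if prefix else None

print(route("10.1.2.40"))      # port2: most specific prefix in the 10.0.0.0/8 sub-database
print(route("192.168.5.9"))    # port3
print(route("8.8.8.8"))        # None: no spear prefix covers this address
```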
Abstract: An advanced processor comprises a plurality of multithreaded processor cores configured to support a plurality of software generated read or write instructions for interfacing with a star topology serial bus interface. The multiple-core processor has at least one of an internal fast messaging network or an interface switch interconnect configured to link the processor cores together such that each processor core has a data pathway to each of the other processor cores without going through memory. The fast messaging network or interface switch is also configured to be operably coupled to the star topology serial bus interface. In one aspect of an embodiment of the invention, the fast messaging network connects to a high-bandwidth star-topology serial bus such as a PCI Express (PCIe) interface capable of supporting multiple high-bandwidth PCIe lanes.