Patents Assigned to Xilinx, Inc.
  • Patent number: 11455144
    Abstract: Apparatus and associated methods relate to providing a modified CORDIC approach and implementing the modified CORDIC approach in SoftMax calculation to reduce usage of hardware resources.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: September 27, 2022
    Assignee: XILINX, INC.
    Inventor: Tomas Figliolia
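    Illustrative sketch: the abstract does not disclose the specific modification, so the Python below only models the general idea of computing SoftMax exponentials with hyperbolic CORDIC iterations, which map to shift-and-add hardware. The iteration schedule, range reduction, and use of floating point are assumptions for readability, not details from the patent.

```python
import math

# Hyperbolic-mode iteration schedule; indices 4 and 13 are repeated, which
# the hyperbolic variant of CORDIC requires for convergence.
_ITERS = [1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13, 14, 15, 16]
_ATANH = [math.atanh(2.0 ** -i) for i in _ITERS]
_GAIN = math.prod(math.sqrt(1.0 - 4.0 ** -i) for i in _ITERS)

def cordic_exp(r):
    """exp(r) from shift-and-add style iterations; valid for |r| <= ~1.1."""
    x, y, z = 1.0 / _GAIN, 0.0, r
    for i, a in zip(_ITERS, _ATANH):
        d = 1.0 if z >= 0 else -1.0
        x, y = x + d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x + y                      # cosh(r) + sinh(r) = exp(r)

def cordic_softmax(values):
    ln2 = math.log(2.0)
    m = max(values)                   # subtract the max for numerical stability
    exps = []
    for v in values:
        z = v - m                     # z <= 0
        k = round(z / ln2)            # range reduction: exp(z) = 2**k * exp(r)
        r = z - k * ln2               # |r| <= ln2/2, inside CORDIC convergence
        exps.append(2.0 ** k * cordic_exp(r))
    s = sum(exps)
    return [e / s for e in exps]

print(cordic_softmax([1.0, 2.0, 3.0]))   # ~[0.090, 0.245, 0.665]
```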
  • Patent number: 11449344
    Abstract: A processing circuit includes a random access memory (RAM) configured to look up a first next state based on a first address simultaneously with looking up a second next state based on a second address. The first address is formed of a first current state and an input data and the second address is formed of a second current state and the input data. The processing circuit includes a state control circuit that receives the first and second next states, the first current state, and the second current state, and a first-in-first-out (FIFO) memory that stores selected ones of the first and second next states, the first current state, and the second current state. The processing circuit includes a multiplexer configured to selectively pass two states from the FIFO memory or two states from the state control circuit as a third current state and a fourth current state.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: September 20, 2022
    Assignee: Xilinx, Inc.
    Inventors: Sachin Kumawat, Hare Krishna Verma, Vincent Mirian
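    Illustrative sketch: a minimal behavioral model of the dual next-state lookup, not the patented circuit. The transition RAM is a flat table addressed by {current state, input byte}, and two current states advance in one "cycle" on the same input byte, mirroring the simultaneous lookups. The automaton and field widths are invented for the example.

```python
NUM_STATES = 4
SYMBOL_BITS = 8

def make_table(transitions):
    """Build the flat RAM image from a {(state, byte): next_state} dict."""
    table = [0] * (NUM_STATES << SYMBOL_BITS)
    for (state, byte), nxt in transitions.items():
        table[(state << SYMBOL_BITS) | byte] = nxt
    return table

def dual_lookup(table, state_a, state_b, byte):
    """One cycle: both next states are read for the same input byte."""
    addr_a = (state_a << SYMBOL_BITS) | byte
    addr_b = (state_b << SYMBOL_BITS) | byte
    return table[addr_a], table[addr_b]

# Toy automaton that recognizes the byte sequence b"ab" (state 2 = match).
table = make_table({(0, ord("a")): 1, (1, ord("b")): 2})
sa, sb = 0, 1                     # two current states tracked in parallel
for byte in b"ab":
    sa, sb = dual_lookup(table, sa, sb, byte)
print(sa, sb)                     # 2 0
```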
  • Patent number: 11451230
    Abstract: An example integrated circuit includes an array of circuit tiles; interconnect coupling the circuit tiles in the array, the interconnect including interconnect tiles each having a plurality of connections that include at least a connection to a respective one of the circuit tiles and a connection to at least one other interconnect tile; and a plurality of local crossbars in each of the interconnect tiles, the plurality of local crossbars coupled to form a non-blocking crossbar, each of the plurality of local crossbars including handshaking circuitry for asynchronous communication.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: September 20, 2022
    Assignee: XILINX, INC.
    Inventors: Steven P. Young, Brian C. Gaide
  • Patent number: 11443088
    Abstract: Simulation of a circuit design using accelerated models can include determining, using computer hardware, that a design unit of a circuit design specified in a hardware description language is a prime block and determining, using the computer hardware, an output vector corresponding to an output of the prime block. Using the computer hardware, contents of the prime block can be replaced with an accelerated simulation model specified in a high level language, wherein the accelerated simulation model can determine a value for the output of the prime block as a function of values of one or more inputs of the prime block using the output vector. Using the computer hardware, the circuit design can be elaborated and compiled into object code that is executable to simulate the circuit design.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: September 13, 2022
    Assignee: Xilinx, Inc.
    Inventors: Gaurav Kumar Verma, Saikat Bandyopadhyay
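    Illustrative sketch: a simplified model of "replacing a prime block's contents with an accelerated model", not the tool's actual flow. The prime block's RTL-style evaluation function is swapped for a lookup into a precomputed output vector indexed by the packed input values; the three-input block and its logic function are invented for the example.

```python
def prime_block_rtl(a, b, c):
    """Reference behavior of the (hypothetical) prime block: (a & b) ^ c."""
    return (a & b) ^ c

def build_output_vector(n_inputs, func):
    """Enumerate all input combinations once; one entry per packed input value."""
    vec = []
    for packed in range(1 << n_inputs):
        bits = [(packed >> i) & 1 for i in range(n_inputs)]
        vec.append(func(*bits))
    return vec

OUTPUT_VECTOR = build_output_vector(3, prime_block_rtl)

def prime_block_accelerated(a, b, c):
    """Accelerated model: evaluate via the output vector instead of the RTL."""
    packed = a | (b << 1) | (c << 2)
    return OUTPUT_VECTOR[packed]

# The accelerated model reproduces the prime block's behavior exactly.
assert all(prime_block_rtl(a, b, c) == prime_block_accelerated(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```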
  • Patent number: 11442844
    Abstract: An integrated circuit includes a high-speed interface configured to communicate with a host system for debugging and a debug hub coupled to the high-speed interface. The debug hub is configured to receive a debug command from the host system as memory mapped data. The integrated circuit also includes a plurality of debug cores coupled to the debug hub. Each debug core is coupled to the debug hub by channels. The debug hub is configured to translate the debug command to a data stream and provide the data stream to a target debug core of the plurality of debug cores based on an address specified by the debug command.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: September 13, 2022
    Assignee: Xilinx, Inc.
    Inventors: Michael E. Peattie, Niloy Roy, Vishal Kumar Vangala
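    Illustrative sketch: a behavioral model of the debug hub's address-based routing. A memory-mapped write is decoded into a target core index plus a register offset, and the payload is forwarded to that core as stream data. The address layout (core index in the upper bits) and class names are assumptions for the example.

```python
CORE_SHIFT = 16
OFFSET_MASK = (1 << CORE_SHIFT) - 1

class DebugCore:
    def __init__(self, name):
        self.name = name
        self.stream = []          # data words delivered over the core's channel

    def receive(self, offset, data):
        self.stream.append((offset, data))

class DebugHub:
    def __init__(self, cores):
        self.cores = cores

    def write(self, address, data):
        """Translate a memory-mapped write into a stream beat for one core."""
        core_id = address >> CORE_SHIFT
        offset = address & OFFSET_MASK
        self.cores[core_id].receive(offset, data)

hub = DebugHub([DebugCore("ila0"), DebugCore("vio0")])
hub.write((1 << CORE_SHIFT) | 0x10, 0xDEADBEEF)   # routed to core 1 ("vio0")
print(hub.cores[1].stream)                        # [(16, 3735928559)]
```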
  • Patent number: 11443091
    Abstract: An integrated circuit includes a plurality of data processing engines (DPEs). Each DPE may include a core configured to perform computations. A first DPE of the plurality of DPEs includes a first core coupled to an input cascade connection of the first core. The input cascade connection is directly coupled to a plurality of source cores of the plurality of DPEs. The input cascade connection includes a plurality of inputs, wherein each of the plurality of inputs is connected to a cascade output of a different one of the plurality of source cores. The input cascade connection is programmable to enable a selected one of the plurality of inputs.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: September 13, 2022
    Assignee: Xilinx, Inc.
    Inventors: Peter McColgan, Baris Ozgul, David Clarke, Tim Tuan, Juan J. Noguera Serra, Goran H. K. Bilski, Jan Langer, Sneha Bhalchandra Date, Stephan Munz, Jose Marques
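    Illustrative sketch: the programmable input cascade connection modeled as a multiplexer over the cascade outputs of several source cores, with a select field enabling exactly one input. The class and field names are invented; the real connection is a hardware structure, not software.

```python
class Core:
    def __init__(self, name, cascade_out=0):
        self.name = name
        self.cascade_out = cascade_out

class InputCascadeConnection:
    def __init__(self, sources):
        self.sources = sources          # cascade outputs of the source cores
        self.select = 0                 # programmable: which input is enabled

    def value(self):
        return self.sources[self.select].cascade_out

north = Core("north", cascade_out=11)
west = Core("west", cascade_out=22)
cascade_in = InputCascadeConnection([north, west])
cascade_in.select = 1                   # program the connection to use "west"
print(cascade_in.value())               # 22
```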
  • Patent number: 11443018
    Abstract: An example hardware accelerator for a computer system includes a programmable device and further includes kernel logic configured in a programmable fabric of the programmable device, and an intellectual property (IP) checker circuit in the kernel logic. The IP checker circuit is configured to obtain a device identifier (ID) of the programmable device and a signed whitelist, the signed whitelist including a list of device IDs and a signature, verify the signature of the signed whitelist, compare the device ID against the list of device IDs, and selectively assert or deassert an enable of the kernel logic in response to presence or absence, respectively, of the device ID in the list of device IDs and verification of the signature.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: September 13, 2022
    Assignee: XILINX, INC.
    Inventors: Brian S. Martin, Premduth Vidyanandan, Mark B. Carson, Neil Watson, Gary J. McClintock
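    Illustrative sketch: the whitelist check expressed in software. The patent does not specify the signature scheme, so an HMAC-SHA256 over the serialized ID list stands in for the signature verification, and the key and device ID strings are invented; the real check runs in the kernel logic on the device.

```python
import hmac, hashlib

def sign_whitelist(device_ids, key):
    blob = ",".join(device_ids).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def kernel_enable(device_id, signed_whitelist, key):
    """Return True (assert enable) only if the signature verifies and the
    device ID is present in the whitelist; otherwise deassert enable."""
    device_ids, signature = signed_whitelist
    blob = ",".join(device_ids).encode()
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False                       # bad signature: keep kernel disabled
    return device_id in device_ids

KEY = b"vendor-provisioned-key"            # hypothetical key material
wl = (["DNA0001", "DNA0002"], sign_whitelist(["DNA0001", "DNA0002"], KEY))
print(kernel_enable("DNA0002", wl, KEY))   # True  -> enable asserted
print(kernel_enable("DNA0999", wl, KEY))   # False -> enable deasserted
```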
  • Patent number: 11429481
    Abstract: Embodiments herein describe a hardware-based scrubbing scheme where correction logic is integrated with memory elements such that scrubbing is performed by hardware. The correction logic reads the data words stored in the memory element during idle cycles. If a correctable error is detected, the correction logic can then use a subsequent idle cycle to perform a write to correct the error (i.e., replace the corrupted data stored in the memory element with corrected data). By using built-in or integrated correction logic, the embodiments herein add no extra work for the processor and can even work in applications that do not include a processor. Further, because the correction logic scrubs the memory during idle cycles, correcting bit errors does not have a negative impact on the performance of the memory element. Memory scrubbing can delay the accumulation of data errors, extending the integrity of the data in the memory.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: August 30, 2022
    Assignee: XILINX, INC.
    Inventors: Sarosh I. Azad, Wern-Yan Koe, Amitava Majumdar
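    Illustrative sketch: the idle-cycle scrubbing flow only. The patent's correction logic uses error-correcting codes inside the memory element; here a simple majority-vote code (three copies per word) stands in for "correctable error" so the flow stays short. Application reads and writes take priority; scrubbing advances one address per idle cycle.

```python
class ScrubbedMemory:
    def __init__(self, depth):
        self.words = [[0, 0, 0] for _ in range(depth)]   # 3 copies per word
        self.scrub_ptr = 0

    def write(self, addr, value):
        self.words[addr] = [value, value, value]

    def read(self, addr):
        copies = self.words[addr]
        return max(set(copies), key=copies.count)        # majority vote

    def idle_cycle(self):
        """Called whenever the port is idle: check one word, repair it if a
        correctable error is found, then move to the next address."""
        addr = self.scrub_ptr
        corrected = self.read(addr)
        if self.words[addr] != [corrected] * 3:
            self.write(addr, corrected)                  # corrective write
        self.scrub_ptr = (addr + 1) % len(self.words)

mem = ScrubbedMemory(depth=4)
mem.write(2, 0xAB)
mem.words[2][0] ^= 0x01            # inject a single-copy upset
for _ in range(len(mem.words)):    # a few idle cycles later...
    mem.idle_cycle()
print(mem.words[2])                # [171, 171, 171] -- scrubbed back to 0xAB
```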
  • Patent number: 11431815
    Abstract: Mining proxy acceleration may include receiving, within a mining proxy, packetized data from a mining pool server and determining, using the mining proxy, whether the packetized data qualifies for broadcast processing. In response to determining that the packetized data qualifies for broadcast processing, the packetized data can be modified using the mining proxy to generate broadcast data. The broadcast data can be broadcast, using the mining proxy, to a plurality of miners subscribed to the mining proxy.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: August 30, 2022
    Assignee: Xilinx, Inc.
    Inventors: Guanwen Zhong, Haris Javaid, Chengchen Hu, Ji Yang, Gordon J. Brebner
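    Illustrative sketch: the broadcast path of the mining proxy. The patent does not name a protocol, so Stratum-style JSON messages are assumed here: job notifications from the pool qualify for broadcast, are rewritten once into a broadcast form, and are fanned out to every subscribed miner; other messages stay on the per-miner path.

```python
import json

BROADCAST_METHODS = {"mining.notify", "mining.set_difficulty"}

class MiningProxy:
    def __init__(self):
        self.miners = []                     # subscribed miner "connections"

    def subscribe(self, miner):
        self.miners.append(miner)

    def on_pool_message(self, raw):
        msg = json.loads(raw)
        if msg.get("method") not in BROADCAST_METHODS:
            return False                     # does not qualify for broadcast
        msg["id"] = None                     # modify into broadcast form
        broadcast = json.dumps(msg)
        for miner in self.miners:
            miner.append(broadcast)          # one prepared payload, many miners
        return True

proxy = MiningProxy()
miner_a, miner_b = [], []
proxy.subscribe(miner_a)
proxy.subscribe(miner_b)
proxy.on_pool_message('{"id": 7, "method": "mining.notify", "params": ["job42"]}')
print(len(miner_a), len(miner_b))            # 1 1
```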
  • Patent number: 11429767
    Abstract: Systems and methods for designing an information processing system are described. In one embodiment, a design space is partitioned into a plurality of independent partitions based on a defined set of rules. A unique processing core is assigned to each partition. A plurality of starting points is generated for each partition, where each starting point is associated with a machine learning algorithm. The starting points for each partition may include a performance driven seed and an area-driven seed. A set of feasible designs associated with the information processing system are determined.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 30, 2022
    Assignee: XILINX, INC.
    Inventors: Cody Hao Yu, Peng Zhang
  • Patent number: 11429850
    Abstract: A circuit arrangement includes an array of MAC circuits, wherein each MAC circuit includes a cache configured for storage of a plurality of kernels. The MAC circuits are configured to receive a first set of data elements of an IFM at a first rate. The MAC circuits are configured to perform first MAC operations on the first set of the data elements and a first one of the kernels associated with a first OFM depth index during a first MAC cycle, wherein a rate of MAC cycles is faster than the first rate. The MAC circuits are configured to perform second MAC operations on the first set of the data elements and a second one of the kernels associated with a second OFM depth index during a second MAC cycle that consecutively follows the first MAC cycle.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: August 30, 2022
    Assignee: XILINX, INC.
    Inventors: Xiaoqian Zhang, Ephrem C. Wu, David Berman
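    Illustrative sketch: the scheduling idea in numeric form, not the circuit. Each set of IFM elements is fetched once and reused for two consecutive MAC cycles, each cycle applying the cached kernel of a different OFM depth index, so the MAC-cycle rate is twice the IFM fetch rate. The tensor shapes are invented.

```python
import numpy as np

ifm = np.random.rand(6, 8)              # 6 fetches of 8 IFM data elements each
kernels = np.random.rand(2, 8)          # cached kernels for OFM depth 0 and 1

acc = np.zeros(2)
mac_cycles = 0
for elements in ifm:                    # one IFM fetch per "slow" cycle
    for depth in range(2):              # two MAC cycles per fetch (2x rate)
        acc[depth] += elements @ kernels[depth]
        mac_cycles += 1

print(mac_cycles, len(ifm))             # 12 MAC cycles for 6 IFM fetches
assert np.allclose(acc, (ifm @ kernels.T).sum(axis=0))
```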
  • Patent number: 11428733
    Abstract: Some examples described herein provide for an on-die virtual probe in an integrated circuit structure for measurement of voltages. In an example, an integrated circuit comprises a voltage-controlled frequency oscillator circuitry and a processor circuitry. The voltage-controlled frequency oscillator circuitry comprises a plurality of circuitry components and is configured to generate a signal having a frequency related to a supply voltage. The voltage-controlled frequency oscillator circuitry is disposed at a location of the integrated circuit proximal to the supply voltage being monitored. The processor circuitry is configured to identify a relationship between the frequency of the signal and the supply voltage. The processor circuitry is also configured to determine a value of the supply voltage associated with the signal based on the identified relationship. The processor circuitry further monitors on-die transient voltages at the location of the integrated circuit based on the value of the supply voltage.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: August 30, 2022
    Assignee: XILINX, INC.
    Inventors: Yanran Chen, Edward C. Priest, Martin L. Voogel, Hing Yan To
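    Illustrative sketch: the post-processing side of the virtual probe. The oscillator frequency is roughly monotonic in supply voltage, so a calibration sweep identifies the relationship and a measured frequency is mapped back to a voltage. The linear model and the numbers are assumptions for the example.

```python
import numpy as np

# Calibration sweep: known supply voltages vs. measured oscillator frequency.
cal_voltage = np.array([0.76, 0.80, 0.84, 0.88])          # volts
cal_freq = np.array([410.0, 455.0, 498.0, 541.0])         # MHz

slope, intercept = np.polyfit(cal_voltage, cal_freq, 1)   # f ~= a*V + b

def voltage_from_frequency(freq_mhz):
    """Invert the identified relationship to recover the supply voltage."""
    return (freq_mhz - intercept) / slope

print(round(voltage_from_frequency(476.0), 3))            # ~0.82 V
```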
  • Patent number: 11429851
    Abstract: Disclosed circuits and methods involve a first register configured to store a first convolutional neural network (CNN) instruction during processing of the first CNN instruction and a second register configured to store a second CNN instruction during processing of the second CNN instruction. Each of a plurality of address generation circuits is configured to generate one or more addresses in response to an input CNN instruction. Control circuitry is configured to select one of the first CNN instruction or the second CNN instruction as input to the address generation circuits.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: August 30, 2022
    Assignee: XILINX, INC.
    Inventors: Xiaoqian Zhang, Ephrem C. Wu, David Berman
  • Patent number: 11429769
    Abstract: Implementing a hardware description language (HDL) memory includes determining, using computer hardware, a width and a depth of the HDL memory specified as an HDL module for implementation in an integrated circuit (IC), partitioning, using the computer hardware, the HDL memory into a plurality of super slices corresponding to columns and the plurality of super slices into a plurality of super tiles arranged in rows. A heterogeneous memory array may be generated, using the computer hardware. The heterogeneous memory array is formed of different types of memory primitives of the IC. Input and output circuitry configured to access the heterogeneous memory array can be generated using the computer hardware.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: August 30, 2022
    Assignee: Xilinx, Inc.
    Inventors: Pradip Kar, Nithin Kumar Guggilla, Chaithanya Dudha, Satyaprakash Pareek
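    Illustrative sketch: a much-simplified planner for the column/row partitioning described above. The requested width is split into column-wise super slices and each column's depth into row-wise super tiles; the primitive shapes, the depth threshold, and the single-primitive choice per plan are assumptions (the actual array may mix primitive types).

```python
import math

PRIMITIVES = {                       # name: (width_bits, depth_words) -- assumed
    "BRAM": (72, 512),
    "URAM": (72, 4096),
}

def plan_memory(width, depth):
    prim = "URAM" if depth >= 4096 else "BRAM"
    prim_w, prim_d = PRIMITIVES[prim]
    super_slices = math.ceil(width / prim_w)     # columns across the width
    super_tiles = math.ceil(depth / prim_d)      # rows down the depth
    return {"primitive": prim,
            "super_slices": super_slices,
            "super_tiles": super_tiles,
            "total_primitives": super_slices * super_tiles}

print(plan_memory(width=512, depth=8192))
# {'primitive': 'URAM', 'super_slices': 8, 'super_tiles': 2, 'total_primitives': 16}
```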
  • Patent number: 11429848
    Abstract: In disclosed approaches of neural network processing, a host computer system copies an input data matrix from host memory to a shared memory for performing neural network operations of a first layer of a neural network by a neural network accelerator. The host instructs the neural network accelerator to perform neural network operations of each layer of the neural network beginning with the input data matrix. The neural network accelerator performs neural network operations of each layer in response to the instruction from the host. The host waits until the neural network accelerator signals completion of performing neural network operations of layer i before instructing the neural network accelerator to commence performing neural network operations of layer i+1, for i≥1. The host instructs the neural network accelerator to use a results data matrix in the shared memory from layer i as an input data matrix for layer i+1 for i≥1.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: August 30, 2022
    Assignee: XILINX, INC.
    Inventors: Aaron Ng, Elliott Delaye, Jindrich Zejda, Ashish Sirasao
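    Illustrative sketch: the host-side control flow only. A tiny callback object stands in for the accelerator, a threading.Event stands in for its completion signal, and a dict stands in for the shared memory; layer i+1 is issued only after layer i signals completion, and layer i's results buffer becomes layer i+1's input buffer.

```python
import threading
import numpy as np

class FakeAccelerator:
    """Stand-in for the neural network accelerator plus the shared memory."""
    def __init__(self, weights):
        self.weights = weights
        self.shared = {}                      # "shared memory" buffers
        self.done = threading.Event()

    def run_layer(self, i, in_buf, out_buf):
        self.done.clear()
        self.shared[out_buf] = np.maximum(self.shared[in_buf] @ self.weights[i], 0)
        self.done.set()                       # completion signal to the host

weights = [np.random.rand(4, 4) for _ in range(3)]
acc = FakeAccelerator(weights)
acc.shared["buf0"] = np.random.rand(2, 4)     # host copies the input matrix in

buffers = ["buf0", "buf1"]
for i in range(len(weights)):                 # host instructs layer by layer
    in_buf, out_buf = buffers[i % 2], buffers[(i + 1) % 2]
    acc.run_layer(i, in_buf, out_buf)
    acc.done.wait()                           # wait before issuing layer i+1

print(acc.shared[buffers[len(weights) % 2]].shape)   # (2, 4) final result
```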
  • Patent number: 11429438
    Abstract: A network interface device has an input configured to receive data from a network. The data is for one of a plurality of different applications. The network interface device also has at least one processor configured to determine into which of a plurality of available different caches in a host system the data is to be injected by accessing a receive queue comprising at least one descriptor indicating a cache location in one of said plurality of caches to which data is to be injected, wherein said at least one descriptor, which indicates the cache location, has an effect on subsequent descriptors of said receive queue until a next descriptor indicates another cache location. The at least one processor is also configured to cause the data to be injected to the cache location in the host system.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: August 30, 2022
    Assignee: Xilinx, Inc.
    Inventors: Steven Leslie Pope, David James Riddoch
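    Illustrative sketch: the descriptor semantics in software (field names are assumed). A descriptor that names a cache location changes where packet data is injected, and that choice stays in effect for subsequent descriptors until another descriptor names a new location.

```python
def deliver(receive_queue, packets, caches, default_cache="LLC"):
    """Walk the receive queue, injecting each packet's data into the cache
    selected by the most recent location-bearing descriptor."""
    current = default_cache
    for desc, packet in zip(receive_queue, packets):
        if desc.get("cache_location") is not None:
            current = desc["cache_location"]     # sticky until overridden
        caches[current].append(packet)

caches = {"LLC": [], "L2_core0": [], "L2_core1": []}
rx_queue = [
    {"cache_location": "L2_core0"},
    {},                                          # inherits L2_core0
    {"cache_location": "L2_core1"},
    {},                                          # inherits L2_core1
]
deliver(rx_queue, ["p0", "p1", "p2", "p3"], caches)
print(caches)   # {'LLC': [], 'L2_core0': ['p0', 'p1'], 'L2_core1': ['p2', 'p3']}
```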
  • Patent number: 11422879
    Abstract: Embodiments herein describe error interceptors disposed along a bus that communicatively couples first and second circuits for redirecting in-band errors. That is, the error interceptors can block (or mask) in-band errors so they are not forwarded along the bus. Further, the error interceptors can redirect those errors such that they are converted into out-of-band errors. Moreover, the user can select which error interceptors to activate (e.g., block and redirect the errors) and which to deactivate (e.g., permit the in-band errors to pass). In this manner, the user can control which circuits receive in-band errors and which do not based on whether those circuits can handle the in-band errors.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: August 23, 2022
    Assignee: XILINX, INC.
    Inventors: Andrew Thomas Novotny, Roger D. Flateau, Jr.
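    Illustrative sketch: the mask-and-redirect behavior, with invented names and error codes. An active interceptor clears the in-band error flag on a bus response and redirects the event to an out-of-band error sink; an inactive interceptor lets the in-band error pass through unchanged.

```python
class ErrorInterceptor:
    def __init__(self, name, active, oob_sink):
        self.name = name
        self.active = active            # user-selectable per interceptor
        self.oob_sink = oob_sink        # out-of-band error channel

    def forward(self, response):
        if self.active and response.get("error"):
            self.oob_sink.append((self.name, response["error"]))
            response = dict(response, error=None)   # mask the in-band error
        return response

oob_errors = []
to_tolerant_circuit = ErrorInterceptor("ic0", active=False, oob_sink=oob_errors)
to_fragile_circuit = ErrorInterceptor("ic1", active=True, oob_sink=oob_errors)

bad_response = {"data": 0x0, "error": "SLVERR"}
print(to_tolerant_circuit.forward(bad_response))  # in-band error passes through
print(to_fragile_circuit.forward(bad_response))   # error masked on this path
print(oob_errors)                                 # [('ic1', 'SLVERR')]
```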
  • Patent number: 11422781
    Abstract: Disclosed approaches for generating vector codes include inputting tensor processing statements. Each statement specifies an output variable, an initial variable, and multiply-and-accumulate (MAC) operations, and each MAC operation references the output variable, elements of a first tensor, and one or more elements of a second tensor. The MAC operations are organized into groups, and the MAC operations in each group reference the same output variable and have overlapping references to elements of the first tensor. For each group of MAC operations, at least one instruction is generated to load elements of the first tensor into a first register and at least one instruction is generated to load one or more elements of the second tensor into a second register. For each group of MAC operations, instructions are generated to select for each MAC operation in the group for input to an array of MAC circuits, elements from the first register and one or more elements from the second register.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: August 23, 2022
    Assignee: XILINX, INC.
    Inventors: Stephen A. Neuendorffer, Prasanth Chatarasi, Samuel R. Bayliss
  • Patent number: 11425036
    Abstract: A match-action circuit includes one or more conditional logic circuits, each having an input coupled to input header or metadata of a network packet, and each configured to generate an enable signal as a function of one or more signals of the header or metadata. Each match circuit of one or more match circuits is configured with response values associated with key values. Each match circuit is configured to conditionally lookup response value(s) associated with an input key value from the header or metadata in response to the enable signal from a conditional logic circuit. One or more action circuits are configured to conditionally modify, in response to states of the response value(s) output from the match circuit(s), data of the header or the metadata.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: August 23, 2022
    Assignee: XILINX, INC.
    Inventors: Jaime Herrera, Gordon J. Brebner, Ian McBryan, Rowan Lyons
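    Illustrative sketch: a software model of one pipeline stage, with invented table contents and header fields. A conditional-logic stage derives an enable from the header, the match stage looks up a response only when enabled, and the action stage conditionally rewrites header fields based on that response.

```python
def condition(header):
    return header.get("ethertype") == 0x0800        # enable only for IPv4

MATCH_TABLE = {                                     # key -> response
    "10.0.0.1": {"egress_port": 3, "decrement_ttl": True},
    "10.0.0.2": {"egress_port": 7, "decrement_ttl": True},
}

def match_action(header):
    if not condition(header):                       # enable deasserted: no lookup
        return header
    response = MATCH_TABLE.get(header["dst_ip"])
    if response is None:                            # miss: leave header unchanged
        return header
    header = dict(header, egress_port=response["egress_port"])
    if response["decrement_ttl"]:
        header["ttl"] -= 1
    return header

pkt = {"ethertype": 0x0800, "dst_ip": "10.0.0.2", "ttl": 64}
print(match_action(pkt))
# {'ethertype': 2048, 'dst_ip': '10.0.0.2', 'ttl': 63, 'egress_port': 7}
```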
  • Patent number: 11423303
    Abstract: Apparatus and associated methods relate to providing a machine learning methodology that uses the machine learning's own failure experiences to optimize future solution search and provide self-guided information (e.g., the dependencies and independencies among various adaptation behaviors) to predict a receiver's equalization adaptations. In an illustrative example, a method may include performing a first training on a first neural network model and determining whether all of the equalization parameters are tracked. If not all of the equalization parameters are tracked under the first training, then a second training on a cascaded model may be performed. The cascaded model may include the first neural network model, and the training data of the second training may include successful learning experiences and data of the first neural network model. The prediction accuracy of the trained model may be advantageously maintained while having a low demand for training data.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: August 23, 2022
    Assignee: XILINX, INC.
    Inventors: Shuo Jiao, Romi Mayder, Bowen Li, Geoffrey Zhang