Patents Assigned to Xilinx, Inc.
-
Patent number: 11100267
Abstract: Embodiments herein describe techniques for designing a compressed hardware implementation of a user-designed memory. In one example, a user defines a memory in hardware description language (HDL) with a depth (D) and a width (W). To compress the memory, a synthesizer designs a core memory array representing the user-defined memory. Using addresses, the synthesizer can identify groups of nodes in the array that can be compressed into a memory element. The synthesizer designs input circuitry such as a data replicator and a write enable generator for generating the inputs and control signals for the groups. The synthesizer can then implement the design in an integrated circuit where each group of nodes maps to a single memory element, thereby resulting in a compressed design.
Type: Grant
Filed: May 5, 2020
Date of Patent: August 24, 2021
Assignee: XILINX, INC.
Inventors: Nithin Kumar Guggilla, Pradip Kar, Chaithanya Dudha
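The grouping idea lends itself to a small behavioral model. The sketch below is an assumption-laden illustration, not the synthesizer's actual output: the group size, the one-hot write-enable scheme, and the helper names are all invented to show how a data replicator and write-enable generator let several logical addresses share one physical memory element.

```python
# Hypothetical behavioral sketch (not Xilinx's synthesis flow): a user memory of
# depth D and width W is packed so that GROUP consecutive logical addresses share
# one physical memory element. The "data replicator" copies the write data onto
# every lane; the "write-enable generator" asserts only the addressed lane.

D, W, GROUP = 1024, 8, 4                    # assumed example parameters
PHYS_DEPTH = D // GROUP                     # compressed depth

phys_mem = [[0] * GROUP for _ in range(PHYS_DEPTH)]   # one wide word per group

def write(addr, data):
    row, lane = divmod(addr, GROUP)         # high bits pick the element, low bits the lane
    replicated = [data] * GROUP             # data replicator: same data on every lane
    lane_we = [i == lane for i in range(GROUP)]   # write-enable generator: one-hot enable
    for i in range(GROUP):
        if lane_we[i]:
            phys_mem[row][i] = replicated[i]

def read(addr):
    row, lane = divmod(addr, GROUP)
    return phys_mem[row][lane]

write(5, 0xAB)
assert read(5) == 0xAB and read(4) == 0
```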
-
Patent number: 11099918
Abstract: A method for accelerating algorithms and applications on field-programmable gate arrays (FPGAs). The method includes: obtaining, from a host application, by a run-time configurable kernel, implemented on an FPGA, a first set of kernel input data; obtaining, from the host application, by the run-time configurable kernel, a first set of kernel operation parameters; parameterizing the run-time configurable kernel at run-time, using the first set of kernel operation parameters; and performing, by the parameterized run-time configurable kernel, a first kernel operation on the first set of kernel input data to obtain a first set of kernel output data.
Type: Grant
Filed: May 10, 2016
Date of Patent: August 24, 2021
Assignee: XILINX, INC.
Inventors: Nagesh Chandrasekaran Gupta, Varun Santhaseelan
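As a rough picture of the claimed flow, the Python sketch below uses entirely hypothetical names (RuntimeConfigurableKernel, parameterize, run) rather than any real FPGA runtime API: the host hands the kernel a set of operation parameters and input data, and the same kernel object is re-parameterized per call instead of being rebuilt.

```python
# Minimal host-side sketch, with made-up names: the kernel is parameterized at
# run time with operation parameters, then applied to the kernel input data.

class RuntimeConfigurableKernel:
    """Stands in for a kernel already implemented on the FPGA."""
    def __init__(self):
        self.params = {}

    def parameterize(self, params):
        # Run-time parameterization: no new bitstream, just new operation parameters.
        self.params = dict(params)

    def run(self, inputs):
        # Placeholder datapath: scale-and-offset, selected by the parameters.
        scale = self.params.get("scale", 1)
        offset = self.params.get("offset", 0)
        return [scale * x + offset for x in inputs]

kernel = RuntimeConfigurableKernel()
kernel.parameterize({"scale": 3, "offset": 1})   # first set of kernel operation parameters
print(kernel.run([1, 2, 3]))                     # first kernel operation -> [4, 7, 10]
```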
-
Publication number: 20210258284
Abstract: A network interface device having a hardware module comprising a plurality of processing units. Each of the plurality of processing units is associated with its own at least one predefined operation. At compile time, the hardware module is configured by arranging at least some of the plurality of processing units to perform their respective at least one operation with respect to a data packet in a certain order so as to perform a function with respect to that data packet. A compiler is provided to assign different processing stages to each processing unit. A controller is provided to switch between different processing circuitry on the fly so that one processing circuitry may be used whilst another is being compiled.
Type: Application
Filed: April 30, 2021
Publication date: August 19, 2021
Applicant: Xilinx, Inc.
Inventors: Steven Leslie Pope, Neil Turton, David James Riddoch, Dmitri Kitariev, Ripduman Sohan, Derek Edward Roberts
-
Publication number: 20210255987
Abstract: A data processing system and method are provided. A host computing device comprises at least one processor. A network interface device is arranged to couple the host computing device to a network. The network interface device comprises a buffer for receiving data for transmission from the host computing device. The processor is configured to execute instructions to transfer the data for transmission to the buffer. The data processing system further comprises an indicator store configured to store an indication that at least some of the data for transmission has been transferred to the buffer, wherein the indication is associated with a descriptor pointing to the buffer.
Type: Application
Filed: May 5, 2021
Publication date: August 19, 2021
Applicant: Xilinx, Inc.
Inventors: Steven L. Pope, David J. Riddoch, Dmitri Kitariev
-
Patent number: 11095515
Abstract: A data processing system comprising: first and second network ports each operable to support a network connection configured according to one or more of a predetermined set of physical layer protocols; and a processor configured to, on a network message being formed for transmission to a network endpoint accessible over either of the first and second network ports: estimate the total time required to, for each of the predetermined set of physical layer protocols, negotiate a respective network connection and transmit the entire network message over that respective network connection; select the physical layer protocol having the lowest estimate of the total time required to negotiate a respective network connection and transmit the network message over that respective network connection; and configure at least one of the first and second network ports to use the selected physical layer protocol.
Type: Grant
Filed: November 12, 2019
Date of Patent: August 17, 2021
Assignee: XILINX, INC.
Inventor: Steve L. Pope
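The selection rule reduces to a small cost comparison. The sketch below is illustrative only; the protocol names, negotiation times, and throughputs are invented, not taken from the patent.

```python
# Toy version of the selection rule: for each candidate physical layer protocol,
# estimate negotiation time plus time to transmit the whole message at that
# protocol's rate, then pick the lowest total.

protocols = {
    # name: (negotiation_time_s, throughput_bits_per_s)  -- invented numbers
    "phy_fast_negotiation": (0.1, 1e9),    # quick to bring up, slower link
    "phy_fast_link":        (3.0, 10e9),   # slow negotiation, much faster link
}

def pick_protocol(message_bits):
    def total_time(item):
        negotiate_s, bits_per_s = item[1]
        return negotiate_s + message_bits / bits_per_s
    return min(protocols.items(), key=total_time)[0]

print(pick_protocol(8 * 10**6))    # short message: negotiation cost dominates -> phy_fast_negotiation
print(pick_protocol(8 * 10**11))   # long message: link rate dominates -> phy_fast_link
```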
-
Patent number: 11093225
Abstract: A high-parallelism computing system and an instruction scheduling method thereof are disclosed. The computing system comprises: an instruction reading and distribution module for reading a plurality of types of instructions in a specific order and distributing the acquired instructions to corresponding function modules according to their types; an internal buffer for buffering data and instructions for performing computation; and a plurality of function modules, each of which sequentially executes instructions of its own type distributed by the instruction reading and distribution module and reads the data from the internal buffer. The specific order is obtained by topologically sorting the instructions according to a directed acyclic graph consisting of the types and dependency relationships.
Type: Grant
Filed: June 27, 2019
Date of Patent: August 17, 2021
Assignee: Xilinx, Inc.
Inventors: Qian Yu, Lingzhi Sui, Shaoxia Fang, Junbin Wang, Yi Shan
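The ordering step is a standard topological sort over the instruction DAG. The sketch below shows the idea with a made-up five-instruction graph; the instruction names and the type split are assumptions.

```python
# Sketch of the claimed ordering step: instructions form a directed acyclic
# graph (nodes carry an instruction type, edges carry dependencies), and the
# dispatch order handed to the function modules is a topological sort of it.

from graphlib import TopologicalSorter   # Python 3.9+

# instruction -> set of instructions it depends on (illustrative graph)
deps = {
    "LOAD_A": set(),
    "LOAD_B": set(),
    "CONV":   {"LOAD_A", "LOAD_B"},
    "POOL":   {"CONV"},
    "STORE":  {"POOL"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)   # e.g. ['LOAD_A', 'LOAD_B', 'CONV', 'POOL', 'STORE']

# Each function module then consumes, in this order, only instructions of its own type:
by_type = {"load":    [i for i in order if i.startswith("LOAD")],
           "compute": [i for i in order if i in ("CONV", "POOL")],
           "store":   [i for i in order if i == "STORE"]}
print(by_type)
```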
-
Patent number: 11093284
Abstract: A data processing system has a poll mode driver and a library supporting protocol processing. The poll mode driver and the library are non-operating-system functionalities. An application is provided. An operating system is configured to, while executing in kernel mode and in response to the application being determined to be unresponsive, use a helper process, an operating system functionality executing in user mode, to cause a receive or transmit mode of the application to continue.
Type: Grant
Filed: May 12, 2017
Date of Patent: August 17, 2021
Assignee: XILINX, INC.
Inventors: Steven L. Pope, Kieran Mansley, Maciej Aleksander Jablonski
-
Patent number: 11093394
Abstract: An example Cache-Coherent Non-Uniform Memory Access (CC-NUMA) system includes: one or more fabric switches; a home agent coupled to the one or more fabric switches; first and second response agents coupled to the fabric switches; wherein the home agent is configured to send a delegated snoop message to the first response agent, the delegated snoop message instructing the first response agent to snoop the second response agent; wherein the first response agent is configured to snoop the second response agent in response to the delegated snoop message; and wherein the first and second response agents are configured to perform a cache-to-cache transfer during the snoop.
Type: Grant
Filed: September 4, 2019
Date of Patent: August 17, 2021
Assignee: XILINX, INC.
Inventors: Millind Mittal, Jaideep Dastidar
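A toy message-flow model can make the delegated snoop concrete. Everything in the sketch below (class names, the single-line "cache", the method names) is invented for illustration; it only shows the home agent issuing one delegated snoop and the line moving cache-to-cache between response agents.

```python
# Toy flow: the home agent delegates the snoop to RA1, RA1 snoops RA2 directly,
# and the data moves cache-to-cache without passing back through the home agent.

class ResponseAgent:
    def __init__(self, name, cache=None):
        self.name, self.cache = name, cache or {}

    def delegated_snoop(self, peer, line):
        # RA1 snoops RA2 on behalf of the home agent...
        data = peer.cache.pop(line, None)
        if data is not None:
            self.cache[line] = data        # ...and the line transfers cache-to-cache.
        return data

class HomeAgent:
    def request(self, requester, holder, line):
        # One delegated snoop message instead of the home agent snooping itself.
        return requester.delegated_snoop(holder, line)

ra1, ra2 = ResponseAgent("RA1"), ResponseAgent("RA2", {0x80: "cache line data"})
ha = HomeAgent()
print(ha.request(ra1, ra2, 0x80))   # 'cache line data', now resident in RA1's cache
print(0x80 in ra2.cache)            # False: ownership transferred
```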
-
Patent number: 11088678
Abstract: Examples described herein generally relate to devices that include a pulsed flip-flop capable of being implemented across multiple voltage domains. In an example, a device includes a pulsed flip-flop. The pulsed flip-flop includes a master circuit and a slave circuit sequentially connected to the master circuit. The master circuit includes a pre-charge input circuit and a first latch. A first node is connected between the pre-charge input circuit and the first latch. The slave circuit includes a resolving circuit and a second latch. The first node is connected to an input node of the resolving circuit. A second node is connected between the resolving circuit and the second latch. The resolving circuit is configured to selectively (i) pull up or pull down a voltage of the second node and (ii) be disabled.
Type: Grant
Filed: February 11, 2020
Date of Patent: August 10, 2021
Assignee: XILINX, INC.
Inventors: Kumar Rahul, Mohammad Anees, Mahendrakumar Gunasekaran
-
Patent number: 11086815
Abstract: Supporting multiple clients on a single programmable integrated circuit (IC) can include implementing a first image within the programmable IC in response to a first request for processing to be performed by the programmable IC, wherein the request is from a first process executing in a host data processing system coupled to the programmable IC, receiving, using a processor of the host data processing system, a second request for processing to be performed on the programmable IC from a second and different process executing in the host data processing system while the programmable IC still implements the first image, comparing, using the processor, a second image specified by the second request to the first image, and, in response to determining that the second image matches the first image based on the comparing, granting, using the processor, the second request for processing to be performed by the programmable IC.
Type: Grant
Filed: April 15, 2019
Date of Patent: August 10, 2021
Assignee: Xilinx, Inc.
Inventors: Sonal Santan, Soren T. Soe, Cheng Zhen
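The arbitration described here amounts to comparing the requested image with the one already loaded. The sketch below is a host-side illustration with hypothetical names; comparing SHA-256 digests stands in for whatever image comparison the patent actually uses.

```python
# Sketch: if a second process requests an image matching what the programmable
# IC already implements, the request is granted without reconfiguring the device.

import hashlib

class ProgrammableIC:
    def __init__(self):
        self.loaded_image_digest = None

    def implement(self, image_bytes):
        self.loaded_image_digest = hashlib.sha256(image_bytes).hexdigest()

def request_processing(device, image_bytes):
    digest = hashlib.sha256(image_bytes).hexdigest()
    if device.loaded_image_digest is None:
        device.implement(image_bytes)          # first request: implement the image
        return "granted (image implemented)"
    if digest == device.loaded_image_digest:   # compare second image to first
        return "granted (image already implemented)"
    return "denied or queued (different image requested)"

device = ProgrammableIC()
bitstream = b"example-image"
print(request_processing(device, bitstream))   # first process
print(request_processing(device, bitstream))   # second process, same image
print(request_processing(device, b"other"))    # second process, different image
```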
-
Patent number: 11089308
Abstract: A method for video encoding is provided. The method comprises retrieving a first video frame comprising a plurality of pixel blocks; determining a rate distortion optimization (RDO) cost for a first prediction mode for a pixel block; determining a variance-bits ratio (VBR) of the pixel block; upon determining the VBR is greater than a predefined threshold, scaling the RDO cost for the first prediction mode based on a predefined scale factor; and selecting one of the first prediction mode and a second prediction mode for video encoding of the first video frame based on comparing the scaled RDO cost for the first prediction mode and a second RDO cost for the second prediction mode for the pixel block.
Type: Grant
Filed: June 13, 2019
Date of Patent: August 10, 2021
Assignee: XILINX, INC.
Inventors: Sumit Johar, Mahesh Narain Shukla, Vijay Kumar Bansal
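The mode decision can be sketched directly from the abstract. In the example below the VBR threshold, scale factor, block values, and costs are all invented; only the shape of the rule (scale the first mode's RDO cost when VBR exceeds the threshold, then take the cheaper mode) follows the text.

```python
# Sketch of the decision rule: compute the variance-to-bits ratio of the block,
# scale the first mode's RDO cost when it exceeds a threshold, pick the cheaper mode.

from statistics import pvariance

def select_mode(pixel_block, rdo_cost_mode1, rdo_cost_mode2, bits_mode1,
                vbr_threshold=50.0, scale_factor=1.2):
    vbr = pvariance(pixel_block) / max(bits_mode1, 1)   # variance-bits ratio
    if vbr > vbr_threshold:
        rdo_cost_mode1 *= scale_factor                  # bias away from mode 1
    return "mode1" if rdo_cost_mode1 <= rdo_cost_mode2 else "mode2"

block = [10, 200, 15, 240, 5, 250, 20, 230]             # high-variance block
print(select_mode(block, rdo_cost_mode1=100.0, rdo_cost_mode2=110.0, bits_mode1=64))
# -> 'mode2': the scaled cost of mode 1 exceeds mode 2's cost
```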
-
Patent number: 11082067
Abstract: Embodiments described herein provide a code generation mechanism (FIG. 3, 301) in a Polar encoder (FIG. 2, 204) to determine a bit type (FIG. 3, 312) corresponding to each coded bit in the Polar code before sending the data bits for encoding (FIG. 3, 303). For example, each bit in the Polar code is determined to have a bit type of a frozen bit, parity bit, an information bit, or a cyclic redundancy check (CRC) bit based at least on the respective reliability index of the bit from a pre-computed reliability index lookup table (FIG. 4A, 411). In this way, the bit type determination can be completed in one loop by iterating the list of entries in the pre-computed reliability index lookup table.
Type: Grant
Filed: October 3, 2019
Date of Patent: August 3, 2021
Assignee: XILINX, INC.
Inventors: Ming Ruan, Gordon I. Old, Richard L. Walke, Zahid Khan
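The one-loop assignment is easy to illustrate. The sketch below assumes a toy code length, payload split, reliability table, and placement policy; it is not the 5G NR construction, just the pattern of labeling every bit position in a single pass over the pre-computed table.

```python
# Single-pass bit-type assignment over a pre-computed reliability table.
# All sizes and the placement policy are illustrative assumptions.

N = 16                                   # code length (assumed)
K_INFO, K_CRC, K_PARITY = 6, 2, 1        # assumed payload split

# Pre-computed reliability order: positions listed from most to least reliable.
reliability_order = [15, 14, 13, 11, 7, 12, 10, 9, 6, 5, 3, 8, 4, 2, 1, 0]

bit_type = ["frozen"] * N
for rank, pos in enumerate(reliability_order):   # one loop over the table
    if rank < K_INFO:
        bit_type[pos] = "info"
    elif rank < K_INFO + K_CRC:
        bit_type[pos] = "crc"
    elif rank < K_INFO + K_CRC + K_PARITY:
        bit_type[pos] = "parity"
    # remaining positions stay "frozen"

for pos in range(N):
    print(pos, bit_type[pos])
```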
-
Patent number: 11082364
Abstract: A method comprises receiving at a compiler a bit file description and a program, said bit file description comprising a description of routing of a part of a circuit. The method comprises compiling the program using said bit file description to output a bit file for said program.
Type: Grant
Filed: April 25, 2019
Date of Patent: August 3, 2021
Assignee: Xilinx, Inc.
Inventors: Steven Leslie Pope, Neil Turton, David James Riddoch, Dmitri Kitariev, Ripduman Sohan, Derek Edward Roberts
-
Patent number: 11075650
Abstract: A decoder circuit includes an input to receive a first codeword encoded based on a quasi-cyclic low-density parity-check (QC LDPC) code. The first codeword includes a sequence of data arranged according to an order of columns in a first parity-check matrix associated with the QC LDPC code. A codeword reordering stage generates a reordered codeword by changing the sequence of the data in the first codeword based at least in part on a size of one or more circulant submatrices in the first parity-check matrix. An LDPC decoder generates a decoded codeword by decoding the reordered codeword based on a second parity-check matrix associated with the QC LDPC code. In some implementations, the second parity-check matrix may comprise a plurality of second circulant submatrices of a different size than the first circulant submatrices.
Type: Grant
Filed: October 29, 2019
Date of Patent: July 27, 2021
Assignee: Xilinx, Inc.
Inventors: Andrew Dow, Richard L. Walke
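The reordering stage alone can be modeled in a few lines. The interleaving rule, circulant size, and stand-in values below are assumptions; the point is only that the codeword is regrouped according to the circulant submatrix size before it reaches the LDPC decoder.

```python
# Sketch of the reordering stage only (no LDPC decoding): the received values
# are regrouped into Z-sized column blocks and interleaved by lane.

def reorder_codeword(codeword, z):
    """Group the codeword into Z-sized column blocks, then interleave by lane."""
    assert len(codeword) % z == 0
    blocks = [codeword[i:i + z] for i in range(0, len(codeword), z)]
    # Example rule: emit lane 0 of every block, then lane 1 of every block, ...
    return [block[lane] for lane in range(z) for block in blocks]

received = list(range(12))          # stand-in soft values, 3 blocks of Z = 4
print(reorder_codeword(received, z=4))
# -> [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```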
-
Patent number: 11075117
Abstract: Techniques for singulating dies from a respective workpiece and for incorporating one or more singulated die into a stacked device structure are described herein. In some examples, singulating a die from a workpiece includes chemically etching the workpiece in a scribe line. In some examples, singulating a die from a workpiece includes mechanically dicing the workpiece in a scribe line and forming a liner along a sidewall of the die. The die can be incorporated into a stacked device structure. The die can be attached to a substrate along with another die that is attached to the substrate. An encapsulant can be between each die and the substrate and laterally between the dies.
Type: Grant
Filed: February 26, 2018
Date of Patent: July 27, 2021
Assignee: Xilinx, Inc.
Inventors: Ganesh Hariharan, Raghunandan Chaware, Inderjit Singh
-
Patent number: 11073550
Abstract: A test vehicle, along with methods for fabricating and using a test vehicle, is disclosed herein. In one example, a test vehicle is provided that includes a substrate, at least a first passive die mounted on the substrate, and at least a first test die mounted on the substrate. The first test die includes test circuitry configured to test continuity through solder interconnects formed between the substrate and the first passive die.
Type: Grant
Filed: April 29, 2019
Date of Patent: July 27, 2021
Assignee: XILINX, INC.
Inventors: Yuqing Gong, Suresh Parameswaran, Boon Y. Ang
-
Patent number: 11074208
Abstract: An adaptive memory expansion scheme is proposed, where one or more memory-expansion-capable Hosts or Accelerators can have their memory mapped to one or more memory expansion devices. The embodiments below describe discovery, configuration, and mapping schemes that allow independent SCM implementations and CPU-Host implementations to match their memory expansion capabilities. As a result, a memory expansion host (e.g., a memory controller in a CPU or an Accelerator) can declare multiple logical memory expansion pools, each with a unique capacity. These logical memory pools can be matched to physical memory in the SCM cards using windows in a global address map. These windows represent shared memory for the Home Agents (HAs) (e.g., the Host) and the Slave Agents (SAs) (e.g., the memory expansion device).
Type: Grant
Filed: August 29, 2019
Date of Patent: July 27, 2021
Assignee: XILINX, INC.
Inventors: Jaideep Dastidar, Millind Mittal
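The matching step can be pictured as assigning address-map windows. The pool sizes, card capacities, base address, and largest-first placement below are all invented; the sketch only shows logical pools being matched to physical SCM capacity through windows in a global address map.

```python
# Sketch: the host declares logical memory-expansion pools, and each pool is
# given a window (base, size) in a global address map backed by an SCM card.

GIB = 1 << 30

logical_pools = {"pool0": 4 * GIB, "pool1": 16 * GIB}    # declared by the host
scm_cards = {"scm0": 16 * GIB, "scm1": 8 * GIB}          # physical capacity

def build_windows(pools, cards, base=0x1000_0000_0000):
    windows, free = {}, dict(cards)
    # Place the largest pools first so big pools grab the cards that can hold them.
    for name, size in sorted(pools.items(), key=lambda p: -p[1]):
        card = next((c for c, cap in free.items() if cap >= size), None)
        if card is None:
            raise RuntimeError(f"no SCM card can back {name}")
        windows[name] = {"card": card, "base": hex(base), "size": size}
        free[card] -= size
        base += size
    return windows

for pool, window in build_windows(logical_pools, scm_cards).items():
    print(pool, window)
```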
-
Patent number: 11063594
Abstract: An integrated circuit (IC) includes a first interface configured for operation with a plurality of tenants implemented concurrently in the integrated circuit, wherein the plurality of tenants communicate with a host data processing system using the first interface. The IC includes a second interface configured for operation with the plurality of tenants, wherein the plurality of tenants communicate with one or more network nodes via a network using the second interface. The IC can include programmable logic circuitry configured for operation with the plurality of tenants, wherein the programmable logic circuitry implements one or more hardware accelerated functions for the plurality of tenants and routes data between the first interface and the second interface. The first interface, the second interface, and the programmable logic circuitry are configured to provide isolation among the plurality of tenants.
Type: Grant
Filed: May 11, 2020
Date of Patent: July 13, 2021
Assignee: Xilinx, Inc.
Inventors: Sagheer Ahmad, Jaideep Dastidar, Brian C. Gaide, Juan J. Noguera Serra, Ian A. Swarbrick
-
Patent number: 11061673
Abstract: An example core for a data processing engine (DPE) includes a first register file configured to provide a first plurality of output lanes, and a processor coupled to the register file. The processor includes a multiply-accumulate (MAC) circuit, a first permute circuit coupled between the first register file and the MAC circuit, and a second permute circuit coupled between the first register file and the MAC circuit. The first permute circuit is configured to generate a first vector by selecting a first set of output lanes from the first plurality of output lanes, and the second permute circuit is configured to generate a second vector by selecting a second set of output lanes from the first plurality of output lanes.
Type: Grant
Filed: April 3, 2018
Date of Patent: July 13, 2021
Assignee: XILINX, INC.
Inventors: Baris Ozgul, Jan Langer, Juan J. Noguera Serra, Goran H. K. Bilski, Richard L. Walke
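A behavioral sketch of the datapath: the lane values, lane selections, and helper names below are made up; it only shows two permute stages each picking a set of register-file output lanes and a MAC stage combining the two resulting vectors.

```python
# Two permute stages select lanes from the register file's output lanes to form
# two vectors, which a MAC stage multiply-accumulates.

register_file_lanes = [3, 1, 4, 1, 5, 9, 2, 6]   # first plurality of output lanes

def permute(lanes, selection):
    """Build a vector by selecting output lanes in the given order."""
    return [lanes[i] for i in selection]

def mac(vec_a, vec_b, acc=0):
    """Multiply-accumulate the two permuted vectors into the accumulator."""
    return acc + sum(a * b for a, b in zip(vec_a, vec_b))

first_vector = permute(register_file_lanes, [0, 2, 4, 6])    # first permute circuit
second_vector = permute(register_file_lanes, [1, 3, 5, 7])   # second permute circuit
print(first_vector, second_vector, mac(first_vector, second_vector))   # -> ... 64
```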
-
Patent number: 11055458
Abstract: Verification for a design can include, for a covergroup corresponding to a variable of the design, generating a state coverage data structure specifying a plurality of transition bins. Each transition bin can include a sequence. Each sequence can specify states of the variable to be traversed in order during simulation of the design. Verification can include generating a state sequence table configured to use state values as keys and one or more of the sequences as data for the respective keys, and, during simulation of the design, maintaining a sequence list specifying each sequence that is running based on sample values of the variable. Hit counts for the transition bins can be updated during the simulation.
Type: Grant
Filed: June 11, 2020
Date of Patent: July 6, 2021
Assignee: Xilinx, Inc.
Inventors: Aparna Suresh, Tapodyuti Mandal, Vinayak Thonda
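The data structures named in the abstract map naturally onto a small simulation-side model. The states, bins, and sampling trace below are invented; the sketch only shows transition bins, a state sequence table keyed by state values, a running sequence list, and hit counts being updated as values are sampled.

```python
# Transition bins hold state sequences; the state sequence table keys sequences
# by their first state value so new runs start on a matching sample; the running
# list tracks partially matched sequences; hit counts grow as sequences complete.

transition_bins = {
    "idle_to_run": ["IDLE", "RUN"],
    "run_to_done": ["RUN", "WAIT", "DONE"],
}
hit_counts = {name: 0 for name in transition_bins}

# State sequence table: first state value -> bins whose sequence starts there.
state_sequence_table = {}
for name, seq in transition_bins.items():
    state_sequence_table.setdefault(seq[0], []).append(name)

running = []   # sequence list: (bin name, index of next expected state)

def sample(value):
    """Process one sampled value of the covered variable."""
    global running
    advanced = []
    for name, idx in running:                           # advance in-flight sequences
        seq = transition_bins[name]
        if seq[idx] == value:
            if idx + 1 == len(seq):
                hit_counts[name] += 1                   # sequence completed: record a hit
            else:
                advanced.append((name, idx + 1))
    for name in state_sequence_table.get(value, []):    # start sequences keyed by this value
        advanced.append((name, 1))
    running = advanced

for v in ["IDLE", "RUN", "WAIT", "DONE", "IDLE", "RUN"]:
    sample(v)
print(hit_counts)   # {'idle_to_run': 2, 'run_to_done': 1}
```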