Patents by Inventor Sonal Santan
Sonal Santan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12204940
Abstract: Remote kernel execution in a heterogeneous computing system can include executing, using a device processor of a device communicatively linked to a host processor, a device runtime and receiving from the host processor within a hardware submission queue of the device, a command. The command requests execution of a software kernel and specifies a descriptor stored in a region of a memory of the device shared with the host processor. In response to receiving the command, the device runtime, as executed by the device processor, invokes a runner program associated with the software kernel. The runner program can map a physical address of the descriptor to a virtual memory address corresponding to the descriptor that is usable by the software kernel. The runner program can execute the software kernel. The software kernel can access data specified by the descriptor using the virtual memory address as provided by the runner program.
Type: Grant
Filed: January 17, 2022
Date of Patent: January 21, 2025
Assignee: Xilinx, Inc.
Inventors: Sonal Santan, Yu Liu, Yenpang Lin, Stephen P. Rozum
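As an illustration of the descriptor-mapping flow this abstract describes, the C sketch below models a runner program that translates the physical address of a host-written descriptor into a usable pointer and then invokes a software kernel. The struct layout, the `map_phys_to_virt` helper, and the kernel body are hypothetical stand-ins, not the patented implementation; the shared memory region is simulated in a plain array.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical descriptor layout: the host fills this in shared device memory. */
typedef struct {
    uint64_t input_phys;   /* physical address of input buffer  */
    uint64_t output_phys;  /* physical address of output buffer */
    uint32_t length;       /* number of bytes to process        */
} descriptor_t;

/* Simulated device-side shared memory region (stands in for real shared RAM). */
static uint8_t shared_mem[4096];

/* Hypothetical translation: a real device runtime would consult the device's
 * memory-management tables; here we simply offset into the simulated region. */
static void *map_phys_to_virt(uint64_t phys)
{
    return &shared_mem[phys % sizeof(shared_mem)];
}

/* A trivial software kernel: copies input to output through virtual pointers. */
static void example_kernel(const descriptor_t *desc)
{
    void *in  = map_phys_to_virt(desc->input_phys);
    void *out = map_phys_to_virt(desc->output_phys);
    memcpy(out, in, desc->length);
}

/* Runner program: invoked by the device runtime when a command arrives in the
 * hardware submission queue; maps the descriptor and executes the kernel. */
static void runner(uint64_t descriptor_phys)
{
    descriptor_t *desc = map_phys_to_virt(descriptor_phys);
    example_kernel(desc);
}

int main(void)
{
    /* Host side (simulated): place a descriptor and some input data. */
    descriptor_t *desc = (descriptor_t *)&shared_mem[0];
    desc->input_phys = 256;
    desc->output_phys = 512;
    desc->length = 5;
    memcpy(&shared_mem[256], "ping", 5);

    runner(0);                       /* device runtime dispatches the command */
    printf("kernel output: %s\n", (char *)&shared_mem[512]);
    return 0;
}
```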
-
Patent number: 12105716
Abstract: Embodiments herein describe techniques for preparing and executing tasks related to a database query in a database accelerator. In one embodiment, the database accelerator is separate from a host CPU. A database management system (DBMS) can offload tasks corresponding to a database query to the database accelerator. The DBMS can request data from the database relevant to the query and then convert that data into one or more data blocks that are suitable for processing by the database accelerator. In one embodiment, the database accelerator contains individual hardware processing units (PUs) that can process data in parallel or concurrently. In order to process the data concurrently, the data block includes individual PU data blocks that are each intended for a respective PU in the database accelerator.
Type: Grant
Filed: June 23, 2017
Date of Patent: October 1, 2024
Assignee: Xilinx, Inc.
Inventors: Hare K. Verma, Sonal Santan, Yongjun Wu
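The key data-layout idea here is a single data block subdivided into per-PU sub-blocks so the accelerator's processing units can work concurrently. A minimal sketch follows; the struct names, the fixed PU count, and the round-robin split are illustrative assumptions, and the "PU" is modeled as a plain function.

```c
#include <stdio.h>
#include <stdint.h>

#define NUM_PUS 4   /* assumed number of hardware processing units */

/* Hypothetical per-PU sub-block: a slice of rows for one processing unit. */
typedef struct {
    uint32_t row_count;
    int32_t  rows[16];
} pu_block_t;

/* Hypothetical data block handed to the accelerator as one unit. */
typedef struct {
    uint32_t   num_pu_blocks;
    pu_block_t pu_blocks[NUM_PUS];
} data_block_t;

/* DBMS side: round-robin the query-relevant rows into per-PU sub-blocks. */
static void build_data_block(const int32_t *rows, uint32_t n, data_block_t *blk)
{
    blk->num_pu_blocks = NUM_PUS;
    for (uint32_t p = 0; p < NUM_PUS; p++)
        blk->pu_blocks[p].row_count = 0;
    for (uint32_t i = 0; i < n; i++) {
        pu_block_t *pb = &blk->pu_blocks[i % NUM_PUS];
        pb->rows[pb->row_count++] = rows[i];
    }
}

/* Stand-in for one PU: here each "PU" just sums its slice. */
static int64_t pu_process(const pu_block_t *pb)
{
    int64_t sum = 0;
    for (uint32_t i = 0; i < pb->row_count; i++)
        sum += pb->rows[i];
    return sum;
}

int main(void)
{
    int32_t rows[] = {3, 1, 4, 1, 5, 9, 2, 6};
    data_block_t blk;
    build_data_block(rows, 8, &blk);

    int64_t total = 0;
    for (uint32_t p = 0; p < blk.num_pu_blocks; p++)   /* PUs could run concurrently */
        total += pu_process(&blk.pu_blocks[p]);
    printf("offloaded sum = %lld\n", (long long)total);
    return 0;
}
```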
-
Publication number: 20240211302
Abstract: Dynamic provisioning of portions of a data processing array includes receiving, from an executing application, a context request. The context request specifies a requested task to be performed by a data processing array. A configuration for the data processing array is selected from a plurality of configurations for the data processing array. The selected configuration conforms with the context request and is capable of performing the requested task. A determination is made whether the selected configuration is implementable in the data processing array based, at least in part, on a space requirement of the selected configuration and a current status of the data processing array. The selected configuration is selectively implemented in the data processing array based on the determination.
Type: Application
Filed: December 22, 2022
Publication date: June 27, 2024
Applicant: Xilinx, Inc.
Inventors: Sonal Santan, Yu Liu, Akila Subramaniam, Vinod K. Kathail, King Chiu Tam, Tung Chuen Kwong, Pranjal Joshi, Soren T. Soe
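The selection step described above weighs a configuration's space requirement against the array's current status. The sketch below, under the simplifying assumption that "space" is a count of free array columns, shows that check; the struct fields and the column-based accounting are hypothetical.

```c
#include <stdio.h>

/* Hypothetical configuration: how many array columns a task layout needs. */
typedef struct {
    const char *name;
    int columns_required;
    int task_id;           /* which requested task this configuration performs */
} array_config_t;

/* Hypothetical current status of the data processing array. */
typedef struct {
    int total_columns;
    int columns_in_use;
} array_status_t;

/* Pick the first configuration that matches the requested task and fits in the
 * columns currently free; returns NULL if none is implementable right now. */
static const array_config_t *select_config(const array_config_t *cfgs, int n,
                                           int requested_task,
                                           const array_status_t *status)
{
    int free_cols = status->total_columns - status->columns_in_use;
    for (int i = 0; i < n; i++) {
        if (cfgs[i].task_id == requested_task &&
            cfgs[i].columns_required <= free_cols)
            return &cfgs[i];
    }
    return NULL;
}

int main(void)
{
    const array_config_t cfgs[] = {
        { "resize-4col", 4, 1 },
        { "resize-2col", 2, 1 },   /* smaller variant of the same task */
        { "detect-6col", 6, 2 },
    };
    array_status_t status = { .total_columns = 8, .columns_in_use = 5 };

    const array_config_t *chosen = select_config(cfgs, 3, 1, &status);
    if (chosen)
        printf("implementing %s (%d columns)\n", chosen->name, chosen->columns_required);
    else
        printf("context request deferred: no configuration fits\n");
    return 0;
}
```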
-
Patent number: 11947469
Abstract: Embodiments herein describe partitioning an acceleration device based on the needs of each user application executing in a host. In one embodiment, a flexible queue provisioning method allows the acceleration device to be dynamically partitioned by pushing the configuration through a control command queue to the device by management software running in a trusted zone. The new configuration is parsed and verified by trusted firmware, which, then, creates isolated IO command queues on the acceleration device. These IO command queues can be directly mapped to a user application, VM, or other PCIe devices. In one embodiment, each IO command queue exposes only the compute resource assigned by the trusted firmware in the acceleration device.
Type: Grant
Filed: February 18, 2022
Date of Patent: April 2, 2024
Assignee: Xilinx, Inc.
Inventors: Cheng Zhen, Sonal Santan, Min Ma, Chien-Wei Lan
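To make the isolation idea concrete, here is a small sketch of firmware-side bookkeeping that creates per-client IO queues exposing only assigned compute units. The request format, the bitmask-based resource accounting, and the queue limit are assumptions for illustration, not the described firmware.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MAX_QUEUES 4

/* Hypothetical partition request pushed by management software through the
 * control command queue: which compute units one client may use. */
typedef struct {
    int      client_id;
    uint32_t compute_unit_mask;   /* bit i set => compute unit i assigned */
} partition_request_t;

/* Hypothetical isolated IO command queue created by trusted firmware. */
typedef struct {
    int      client_id;
    uint32_t compute_unit_mask;
    bool     active;
} io_queue_t;

static io_queue_t io_queues[MAX_QUEUES];
static uint32_t   assigned_units;        /* units already handed to some queue */

/* Trusted-firmware side: verify the request, then create an isolated queue
 * that exposes only the assigned compute resources. */
static int create_io_queue(const partition_request_t *req)
{
    if (req->compute_unit_mask & assigned_units)
        return -1;                       /* overlap => reject, keep isolation  */
    for (int i = 0; i < MAX_QUEUES; i++) {
        if (!io_queues[i].active) {
            io_queues[i] = (io_queue_t){ req->client_id,
                                         req->compute_unit_mask, true };
            assigned_units |= req->compute_unit_mask;
            return i;                    /* queue index mapped to the client   */
        }
    }
    return -1;
}

/* Per-command check: a client may only target its own compute units. */
static bool command_allowed(int queue_idx, uint32_t target_unit_mask)
{
    return (target_unit_mask & ~io_queues[queue_idx].compute_unit_mask) == 0;
}

int main(void)
{
    partition_request_t vm0 = { .client_id = 0, .compute_unit_mask = 0x3 };
    partition_request_t vm1 = { .client_id = 1, .compute_unit_mask = 0xC };

    int q0 = create_io_queue(&vm0);
    int q1 = create_io_queue(&vm1);
    printf("queue %d: allowed on CU0? %d\n", q0, command_allowed(q0, 0x1));
    printf("queue %d: allowed on CU0? %d\n", q1, command_allowed(q1, 0x1));
    return 0;
}
```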
-
Patent number: 11861010
Abstract: An integrated circuit can include a communication endpoint configured to maintain a communication link with a host computer, a queue configured to receive a plurality of host commands from the host computer via the communication link, and a processor configured to execute a device runtime. The processor, responsive to executing the device runtime, is configured to perform validation of the host commands read from the queue and selectively execute the host commands based on a result of the validation on a per host command basis. The host commands are executable by the processor to manage functions of the integrated circuit. The queue is implemented in a region of memory that is shared by the integrated circuit and the host computer.
Type: Grant
Filed: February 14, 2022
Date of Patent: January 2, 2024
Assignee: Xilinx, Inc.
Inventors: Sonal Santan, Yu Liu, Yenpang Lin, Lizhi Hou, Cheng Zhen, Yidong Zhang
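A minimal sketch of per-command validation over a shared-memory queue, assuming a hypothetical command format and opcode set; the real command layout and checks are not specified by the abstract.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical host command format placed in the shared-memory queue. */
typedef struct {
    uint32_t opcode;       /* which management function to run */
    uint32_t payload_len;  /* bytes of payload that follow     */
    uint8_t  payload[32];
} host_cmd_t;

enum { CMD_QUERY_STATUS = 1, CMD_LOAD_IMAGE = 2, CMD_MAX = 3 };

/* Validation performed by the device runtime on a per-command basis. */
static bool validate(const host_cmd_t *cmd)
{
    if (cmd->opcode == 0 || cmd->opcode >= CMD_MAX)
        return false;                        /* unknown opcode   */
    if (cmd->payload_len > sizeof(cmd->payload))
        return false;                        /* malformed length */
    return true;
}

static void execute(const host_cmd_t *cmd)
{
    printf("executing opcode %u (%u payload bytes)\n",
           (unsigned)cmd->opcode, (unsigned)cmd->payload_len);
}

int main(void)
{
    /* Simulated queue contents written by the host. */
    host_cmd_t queue[] = {
        { .opcode = CMD_QUERY_STATUS, .payload_len = 0 },
        { .opcode = 99,               .payload_len = 4 },   /* invalid */
        { .opcode = CMD_LOAD_IMAGE,   .payload_len = 8 },
    };

    for (size_t i = 0; i < sizeof(queue) / sizeof(queue[0]); i++) {
        if (validate(&queue[i]))
            execute(&queue[i]);
        else
            printf("rejecting command %zu\n", i);   /* skipped, not executed */
    }
    return 0;
}
```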
-
Publication number: 20230267080
Abstract: Embodiments herein describe partitioning an acceleration device based on the needs of each user application executing in a host. In one embodiment, a flexible queue provisioning method allows the acceleration device to be dynamically partitioned by pushing the configuration through a control command queue to the device by management software running in a trusted zone. The new configuration is parsed and verified by trusted firmware, which, then, creates isolated IO command queues on the acceleration device. These IO command queues can be directly mapped to a user application, VM, or other PCIe devices. In one embodiment, each IO command queue exposes only the compute resource assigned by the trusted firmware in the acceleration device.
Type: Application
Filed: February 18, 2022
Publication date: August 24, 2023
Inventors: Cheng Zhen, Sonal Santan, Min Ma, Chien-Wei Lan
-
Publication number: 20230259627
Abstract: An integrated circuit can include a communication endpoint configured to maintain a communication link with a host computer, a queue configured to receive a plurality of host commands from the host computer via the communication link, and a processor configured to execute a device runtime. The processor, responsive to executing the device runtime, is configured to perform validation of the host commands read from the queue and selectively execute the host commands based on a result of the validation on a per host command basis. The host commands are executable by the processor to manage functions of the integrated circuit. The queue is implemented in a region of memory that is shared by the integrated circuit and the host computer.
Type: Application
Filed: February 14, 2022
Publication date: August 17, 2023
Applicant: Xilinx, Inc.
Inventors: Sonal Santan, Yu Liu, Yenpang Lin, Lizhi Hou, Cheng Zhen, Yidong Zhang
-
Patent number: 11720422
Abstract: A unified container file can be selected using computer hardware. The unified container file can include a plurality of files embedded therein used to configure a programmable integrated circuit (IC). The plurality of files can include a first partial configuration bitstream and a second partial configuration bitstream. The unified container file also includes metadata specifying a defined relationship between the first partial configuration bitstream and the second partial configuration bitstream for programming the programmable IC. The defined relationship can be determined using computer hardware by reading the metadata from the unified container file. The programmable IC can be configured, using the computer hardware, based on the defined relationship specified by the metadata using the first partial configuration bitstream and the second partial configuration bitstream.
Type: Grant
Filed: March 11, 2021
Date of Patent: August 8, 2023
Assignee: Xilinx, Inc.
Inventors: Hem C. Neema, Sonal Santan, Soren T. Soe, Stephen P. Rozum, Nik Cimino
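As a small illustration of "metadata specifying a defined relationship" between two partial bitstreams, the sketch below assumes the relationship is simply a load order; the struct, file names, and programming call are hypothetical placeholders rather than the container's actual metadata schema.

```c
#include <stdio.h>

/* Hypothetical metadata entry: a load-order relationship between two partial
 * configuration bitstreams embedded in the unified container file. */
typedef struct {
    const char *first;    /* partial bitstream that must be loaded first        */
    const char *second;   /* partial bitstream loaded into the region it exposes */
} bitstream_relation_t;

/* Stand-in for the actual device-programming call. */
static void program_partial(const char *name)
{
    printf("programming partial bitstream: %s\n", name);
}

/* Configure the programmable IC according to the relationship read from the
 * container's metadata section. */
static void configure(const bitstream_relation_t *rel)
{
    program_partial(rel->first);
    program_partial(rel->second);
}

int main(void)
{
    /* In the real flow this struct would be parsed out of the container's
     * metadata; here it is hard-coded for illustration. */
    bitstream_relation_t rel = { "shell_region.bit", "user_kernel.bit" };
    configure(&rel);
    return 0;
}
```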
-
Publication number: 20230229497
Abstract: Remote kernel execution in a heterogeneous computing system can include executing, using a device processor of a device communicatively linked to a host processor, a device runtime and receiving from the host processor within a hardware submission queue of the device, a command. The command requests execution of a software kernel and specifies a descriptor stored in a region of a memory of the device shared with the host processor. In response to receiving the command, the device runtime, as executed by the device processor, invokes a runner program associated with the software kernel. The runner program can map a physical address of the descriptor to a virtual memory address corresponding to the descriptor that is usable by the software kernel. The runner program can execute the software kernel. The software kernel can access data specified by the descriptor using the virtual memory address as provided by the runner program.
Type: Application
Filed: January 17, 2022
Publication date: July 20, 2023
Applicant: Xilinx, Inc.
Inventors: Sonal Santan, Yu Liu, Yenpang Lin, Stephen P. Rozum
-
Patent number: 11694066
Abstract: Embodiments herein describe techniques for interfacing a neural network application with a neural network accelerator using a library. The neural network application may execute on a host computing system while the neural network accelerator executes on a massively parallel hardware system, e.g., a FPGA. The library operates a pipeline for submitting the tasks received from the neural network application to the neural network accelerator. In one embodiment, the pipeline includes a pre-processing stage, an FPGA execution stage, and a post-processing stage which each correspond to different threads. When receiving a task from the neural network application, the library generates a packet that includes the information required for the different stages in the pipeline to perform the tasks. Because the stages correspond to different threads, the library can process multiple packets in parallel which can increase the utilization of the neural network accelerator on the hardware system.
Type: Grant
Filed: October 17, 2017
Date of Patent: July 4, 2023
Assignee: Xilinx, Inc.
Inventors: Aaron Ng, Jindrich Zejda, Elliott Delaye, Xiao Teng, Sonal Santan, Soren T. Soe, Ashish Sirasao, Ehsan Ghasemi, Sean Settle
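The abstract's central idea is a three-stage, three-thread pipeline that passes per-task packets. Below is a minimal C/pthreads sketch of that structure; the packet fields and stage bodies are placeholders (the "FPGA execution" stage is simulated on the host), so this shows the pipelining pattern only, not the library's API. Compile with `-pthread`.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_PACKETS 4

/* Hypothetical packet: carries everything each pipeline stage needs. */
typedef struct {
    int id;
    int data;        /* filled by pre-processing                      */
    int result;      /* filled by the (simulated) FPGA execution step */
} packet_t;

static packet_t packets[NUM_PACKETS];
static int stage_done[2][NUM_PACKETS];   /* stage_done[s][i]: packet i left stage s */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void wait_for(int stage, int i)
{
    pthread_mutex_lock(&lock);
    while (!stage_done[stage][i])
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

static void mark_done(int stage, int i)
{
    pthread_mutex_lock(&lock);
    stage_done[stage][i] = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

/* Stage 0: pre-processing thread prepares each packet. */
static void *pre_stage(void *arg)
{
    (void)arg;
    for (int i = 0; i < NUM_PACKETS; i++) {
        packets[i].id = i;
        packets[i].data = i * 10;
        mark_done(0, i);
    }
    return NULL;
}

/* Stage 1: stands in for submitting the task to the accelerator. */
static void *exec_stage(void *arg)
{
    (void)arg;
    for (int i = 0; i < NUM_PACKETS; i++) {
        wait_for(0, i);
        packets[i].result = packets[i].data + 1;
        mark_done(1, i);
    }
    return NULL;
}

/* Stage 2: post-processing thread returns results to the application. */
static void *post_stage(void *arg)
{
    (void)arg;
    for (int i = 0; i < NUM_PACKETS; i++) {
        wait_for(1, i);
        printf("packet %d -> %d\n", packets[i].id, packets[i].result);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    pthread_create(&t[0], NULL, pre_stage, NULL);
    pthread_create(&t[1], NULL, exec_stage, NULL);
    pthread_create(&t[2], NULL, post_stage, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```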
-
Patent number: 11474555
Abstract: An example computing system includes: a processing system, a hardware accelerator coupled to the processing system, and a software platform executing on the processing system. The hardware accelerator includes: a programmable integrated circuit (IC) configured with an acceleration circuit having a static region and a programmable region; a memory in the programmable IC configured to store metadata describing interface circuitry in at least one of the static region and the programmable region of the acceleration circuit. The software platform includes program code executable by the processing system to read the metadata from the memory of the hardware accelerator.
Type: Grant
Filed: August 23, 2017
Date of Patent: October 18, 2022
Assignee: Xilinx, Inc.
Inventors: Hem C. Neema, Sonal Santan, Julian M. Kain, Stephen P. Rozum, Khang K. Dao, Kyle Corbett
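A short sketch of the host-side idea: discover the accelerator's interface circuitry from metadata rather than hard-coding it. The record format and field names below are assumptions, and the on-card metadata memory is stood in for by a static array.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical metadata record stored in the programmable IC's memory,
 * describing one interface of the acceleration circuit. */
typedef struct {
    char     name[16];
    uint32_t base_addr;
    uint32_t data_width_bits;
} interface_desc_t;

/* Stands in for the accelerator memory that holds the metadata; a real driver
 * would read this from the static or programmable region of the card. */
static interface_desc_t accel_metadata[] = {
    { "axi_ctrl",  0x00000000, 32  },
    { "axi_data0", 0x00010000, 512 },
};

int main(void)
{
    /* Software platform: enumerate the interface circuitry from the metadata. */
    size_t n = sizeof(accel_metadata) / sizeof(accel_metadata[0]);
    for (size_t i = 0; i < n; i++)
        printf("interface %-10s base=0x%08x width=%u bits\n",
               accel_metadata[i].name,
               (unsigned)accel_metadata[i].base_addr,
               (unsigned)accel_metadata[i].data_width_bits);
    return 0;
}
```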
-
Patent number: 11386034
Abstract: A hardware acceleration device can include a switch communicatively linked to a host central processing unit (CPU), an adapter coupled to the switch via a control bus, wherein the control bus is configured to convey addresses of descriptors from the host central CPU to the adapter, and a random-access memory (RAM) coupled to the switch through a data bus. The RAM is configured to store descriptors received from the host CPU via the data bus. The hardware acceleration device can include a compute unit coupled to the adapter and configured to perform operations specified by the descriptors. The adapter may be configured to retrieve the descriptors from the RAM via the data bus, provide arguments from the descriptors to the compute unit, and provide control signals to the compute unit to initiate the operations using the arguments.
Type: Grant
Filed: October 30, 2020
Date of Patent: July 12, 2022
Assignee: Xilinx, Inc.
Inventors: Sonal Santan, Ravi N. Kurlagunda, Min Ma, Himanshu Choudhary, Manjunath Chepuri, Cheng Zhen, Pranjal Joshi, Sebastian Turullols, Amit Kumar, Kaustuv Manji, Ravinder Sharma, Ch Vamshi Krishna
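The sketch below models the adapter's role: given only a descriptor address received over the control bus, it fetches the descriptor from RAM, loads arguments into a compute unit's registers, and asserts a start signal. The descriptor layout, register map, and simulated compute unit are hypothetical.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical descriptor written by the host into the device RAM. */
typedef struct {
    uint32_t opcode;
    uint32_t arg0;
    uint32_t arg1;
} descriptor_t;

/* Hypothetical register view of one compute unit. */
typedef struct {
    uint32_t arg0;
    uint32_t arg1;
    uint32_t ctrl;          /* bit 0: start */
    uint32_t result;
} compute_unit_regs_t;

static uint8_t device_ram[1024];      /* stands in for the RAM behind the switch */
static compute_unit_regs_t cu;

/* Adapter: given only the descriptor's address (received over the control bus),
 * fetch the descriptor over the data bus, load arguments, and pulse start. */
static void adapter_dispatch(uint32_t descriptor_addr)
{
    const descriptor_t *d = (const descriptor_t *)&device_ram[descriptor_addr];
    cu.arg0 = d->arg0;
    cu.arg1 = d->arg1;
    cu.ctrl |= 1u;                    /* control signal: start the operation */
}

/* Simulated compute unit behaviour for the sketch: add the two arguments. */
static void compute_unit_step(void)
{
    if (cu.ctrl & 1u) {
        cu.result = cu.arg0 + cu.arg1;
        cu.ctrl &= ~1u;               /* done */
    }
}

int main(void)
{
    /* Host side: place a descriptor in RAM, then send only its address. */
    descriptor_t *d = (descriptor_t *)&device_ram[64];
    *d = (descriptor_t){ .opcode = 1, .arg0 = 40, .arg1 = 2 };

    adapter_dispatch(64);             /* address conveyed over the control bus */
    compute_unit_step();
    printf("compute unit result: %u\n", (unsigned)cu.result);
    return 0;
}
```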
-
Publication number: 20220138140
Abstract: A hardware acceleration device can include a switch communicatively linked to a host central processing unit (CPU), an adapter coupled to the switch via a control bus, wherein the control bus is configured to convey addresses of descriptors from the host central CPU to the adapter, and a random-access memory (RAM) coupled to the switch through a data bus. The RAM is configured to store descriptors received from the host CPU via the data bus. The hardware acceleration device can include a compute unit coupled to the adapter and configured to perform operations specified by the descriptors. The adapter may be configured to retrieve the descriptors from the RAM via the data bus, provide arguments from the descriptors to the compute unit, and provide control signals to the compute unit to initiate the operations using the arguments.
Type: Application
Filed: October 30, 2020
Publication date: May 5, 2022
Applicant: Xilinx, Inc.
Inventors: Sonal Santan, Ravi N. Kurlagunda, Min Ma, Himanshu Choudhary, Manjunath Chepuri, Cheng Zhen, Pranjal Joshi, Sebastian Turullols, Amit Kumar, Kaustuv Manji, Ravinder Sharma, Ch Vamshi Krishna
-
Patent number: 11163605
Abstract: Examples herein describe techniques for launching and executing a pipeline formed by heterogeneous processing units. A system on a chip (SoC) can include different hardware elements which form a collection of heterogeneous processing units, such as a general-purpose processor, a programmable logic array, and specialized processors. These processing units are heterogeneous, meaning their underlying hardware and techniques for processing data are different, in contrast to a system that uses homogeneous processing units. In the embodiments herein, the heterogeneous processing units can be arranged into a pipeline where each stage of the pipeline is performed by one of the processing units.
Type: Grant
Filed: September 16, 2019
Date of Patent: November 2, 2021
Assignee: Xilinx, Inc.
Inventors: Sonal Santan, Min Ma, Soren Soe, Cheng Zhen, Lizhi Hou, Yu Liu
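A compact sketch of the pipeline arrangement described above: each stage is tagged with the kind of processing unit that runs it, and data flows stage to stage. The unit tags, stage names, and stage bodies are hypothetical; on a real SoC the PL and DSP stages would be offloaded rather than executed as host functions.

```c
#include <stdio.h>

/* Hypothetical tags for the heterogeneous processing units on the SoC. */
typedef enum { UNIT_CPU, UNIT_PL, UNIT_DSP } unit_t;

/* One pipeline stage: which kind of unit runs it and what it does. */
typedef struct {
    const char *name;
    unit_t      unit;
    int       (*run)(int input);
} stage_t;

/* Placeholder stage bodies standing in for the real per-unit workloads. */
static int decode(int x)  { return x + 1; }
static int filter(int x)  { return x * 2; }
static int score(int x)   { return x - 3; }

static const char *unit_name(unit_t u)
{
    return u == UNIT_CPU ? "CPU" : u == UNIT_PL ? "PL" : "DSP";
}

int main(void)
{
    /* The pipeline: each stage is assigned to a different kind of unit. */
    stage_t pipeline[] = {
        { "decode", UNIT_CPU, decode },
        { "filter", UNIT_PL,  filter },
        { "score",  UNIT_DSP, score  },
    };

    int data = 10;
    for (size_t i = 0; i < sizeof(pipeline) / sizeof(pipeline[0]); i++) {
        data = pipeline[i].run(data);
        printf("stage %s on %s -> %d\n",
               pipeline[i].name, unit_name(pipeline[i].unit), data);
    }
    return 0;
}
```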
-
Patent number: 11086815
Abstract: Supporting multiple clients on a single programmable integrated circuit (IC) can include implementing a first image within the programmable IC in response to a first request for processing to be performed by the programmable IC, wherein the request is from a first process executing in a host data processing system coupled to the programmable IC, receiving, using a processor of the host data processing system, a second request for processing to be performed on the programmable IC from a second and different process executing in the host data processing system while the programmable IC still implements the first image, comparing, using the processor, a second image specified by the second request to the first image, and, in response to determining that the second image matches the first image based on the comparing, granting, using the processor, the second request for processing to be performed by the programmable IC.
Type: Grant
Filed: April 15, 2019
Date of Patent: August 10, 2021
Assignee: Xilinx, Inc.
Inventors: Sonal Santan, Soren T. Soe, Cheng Zhen
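The decision logic reduces to: if a second process requests the image already loaded, share the device; otherwise deny (or defer). A minimal sketch, assuming images are identified by a UUID string and ignoring concurrency and unloading.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical record of what the programmable IC currently implements,
 * identified here by a simple image UUID string. */
static char loaded_image_uuid[64] = "";
static int  active_users = 0;

/* Host-side check: grant a later process's request only when the image it
 * wants matches the image already loaded for the first process. */
static bool request_processing(const char *image_uuid)
{
    if (loaded_image_uuid[0] == '\0') {          /* nothing loaded: load it  */
        snprintf(loaded_image_uuid, sizeof(loaded_image_uuid), "%s", image_uuid);
        active_users = 1;
        return true;
    }
    if (strcmp(loaded_image_uuid, image_uuid) == 0) {
        active_users++;                          /* same image: share the IC */
        return true;
    }
    return false;                                /* different image: deny    */
}

int main(void)
{
    printf("process A: %s\n", request_processing("uuid-1234") ? "granted" : "denied");
    printf("process B: %s\n", request_processing("uuid-1234") ? "granted" : "denied");
    printf("process C: %s\n", request_processing("uuid-9999") ? "granted" : "denied");
    printf("users sharing image %s: %d\n", loaded_image_uuid, active_users);
    return 0;
}
```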
-
Patent number: 11042610
Abstract: Embodiments herein describe techniques for validating binary files used to configure a hardware card in a computing system. In one embodiment, the hardware card (e.g., an FPGA) includes programmable logic which the binary file can configure to perform a specialized function. In one embodiment, multiple users can configure the hardware card to perform their specialized tasks. For example, the computing system may be a server on the cloud that hosts multiple VMs or a shared workstation. Permitting multiple users to directly configure and use the hardware card may present a security risk. To mitigate this risk, the embodiments herein describe techniques for validating encrypted binary files.
Type: Grant
Filed: October 4, 2017
Date of Patent: June 22, 2021
Assignee: Xilinx, Inc.
Inventors: Hem C. Neema, Sonal Santan, Bin Ochotta
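The sketch below shows the gatekeeping shape of such validation: a binary is checked before it is allowed to reach the card. A toy checksum stands in for the real cryptographic verification of an encrypted binary, and the header layout is hypothetical.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical binary-file header carried alongside the configuration data. */
typedef struct {
    uint32_t length;
    uint32_t checksum;     /* stands in for a real cryptographic signature */
} binary_header_t;

/* Toy checksum as a placeholder for signature verification; a real flow would
 * verify a cryptographic signature with a key held by trusted software. */
static uint32_t checksum(const uint8_t *data, uint32_t len)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < len; i++)
        sum = sum * 31 + data[i];
    return sum;
}

/* Management-software path: only validated binaries reach the hardware card. */
static bool validate_and_program(const binary_header_t *hdr, const uint8_t *body)
{
    if (checksum(body, hdr->length) != hdr->checksum) {
        printf("binary rejected: validation failed\n");
        return false;
    }
    printf("binary accepted: programming %u bytes\n", (unsigned)hdr->length);
    return true;
}

int main(void)
{
    uint8_t body[] = { 0xDE, 0xAD, 0xBE, 0xEF };
    binary_header_t hdr = { .length = 4, .checksum = checksum(body, 4) };

    validate_and_program(&hdr, body);   /* well-formed submission   */
    body[0] ^= 0xFF;                    /* tampered by another user */
    validate_and_program(&hdr, body);
    return 0;
}
```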
-
Patent number: 10956241
Abstract: A computer program product can include a non-transitory computer readable storage medium storing a unified container. The unified container can include a header structure, wherein the header structure has a fixed length and specifies a number of section headers included in the unified container. The unified container can include a plurality of section headers equivalent to the number of section headers specified in the header structure. The unified container can include a plurality of data sections corresponding to the plurality of section headers on a one-to-one basis. The plurality of data sections includes a first data section including a hardware binary and a second data section including a software binary. The hardware binary and the software binary are configured to program a programmable integrated circuit. Each section header specifies a type of data stored in the corresponding data section and specifies a mapping for the corresponding data section.
Type: Grant
Filed: December 20, 2017
Date of Patent: March 23, 2021
Assignee: Xilinx, Inc.
Inventors: Hem C. Neema, Sonal Santan, Soren T. Soe, Stephen P. Rozum, Nik Cimino
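The container layout lends itself to a direct illustration: a fixed-length header carrying a section count, followed by section headers that give each data section's type and mapping. The magic string, field names, and offsets below are invented for the sketch and do not reflect the actual container format.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical on-disk layout loosely following the description above:
 * a fixed-length header, N section headers, then N data sections. */
typedef struct {
    char     magic[8];
    uint32_t num_sections;
} container_header_t;

typedef enum { SECTION_HW_BINARY = 1, SECTION_SW_BINARY = 2 } section_type_t;

typedef struct {
    uint32_t type;       /* what kind of data the section holds          */
    uint32_t offset;     /* mapping: where the data section begins       */
    uint32_t size;       /* mapping: how many bytes the section occupies */
} section_header_t;

int main(void)
{
    uint8_t file[256];
    memset(file, 0, sizeof(file));

    /* Build a tiny two-section container in memory. */
    container_header_t hdr = { "XCONTNR", 2 };
    section_header_t sec[2] = {
        { SECTION_HW_BINARY, 128, 16 },
        { SECTION_SW_BINARY, 144, 8  },
    };
    memcpy(file, &hdr, sizeof(hdr));
    memcpy(file + sizeof(hdr), sec, sizeof(sec));
    memcpy(file + 128, "HWBITSTREAMBYTES", 16);
    memcpy(file + 144, "ELFBYTES", 8);

    /* Read it back the way a loader would: header first, then each section. */
    container_header_t rh;
    memcpy(&rh, file, sizeof(rh));
    for (uint32_t i = 0; i < rh.num_sections; i++) {
        section_header_t sh;
        memcpy(&sh, file + sizeof(rh) + i * sizeof(sh), sizeof(sh));
        printf("section %u: type=%u offset=%u size=%u\n",
               (unsigned)i, (unsigned)sh.type, (unsigned)sh.offset, (unsigned)sh.size);
    }
    return 0;
}
```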
-
Patent number: 10922068
Abstract: Updating firmware in a programmable integrated circuit (IC) includes determining, using a processor of a computer, a base address register (BAR) of an accelerator card from a device data file, wherein the accelerator card includes a programmable IC and is connected to the computer via a communication bus, mapping, using the processor, a feature PROM and a flash programmer circuit of the programmable IC to local memory of the computer using the BAR, and reading, over the communication bus, the feature PROM on the programmable IC to determine a programming mode for programming an external flash memory coupled to the flash programmer circuit. Based on the programming mode and using the processor, firmware is provided to the flash programmer circuit on the programmable IC via the communication bus. The flash programmer circuit is configured to program the firmware into the external flash memory.
Type: Grant
Filed: November 9, 2018
Date of Patent: February 16, 2021
Assignee: Xilinx, Inc.
Inventors: Ryan F. Radjabi, Hem C. Neema, Sonal Santan, Yenpang Lin
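The sketch below models the host-side sequence: read a programming mode out of the feature PROM region of the BAR-mapped space, then stream firmware words to the flash programmer circuit. The register offsets and mode values are invented for illustration, and the BAR mapping itself is simulated with a plain array rather than a real PCIe mapping.

```c
#include <stdio.h>
#include <stdint.h>

/* Simulated view of the accelerator's BAR-mapped register space; the real flow
 * would map the PCIe BAR identified from the device data file. Offsets below
 * are illustrative only. */
#define FEATURE_PROM_OFFSET     0x100
#define FLASH_PROGRAMMER_OFFSET 0x200

enum { MODE_SPI = 1, MODE_BPI = 2 };

static uint32_t bar_space[0x400 / 4];

static uint32_t bar_read(uint32_t offset)              { return bar_space[offset / 4]; }
static void     bar_write(uint32_t offset, uint32_t v) { bar_space[offset / 4] = v; }

/* Host tool: read the feature PROM to learn the programming mode, then hand
 * firmware words to the flash programmer circuit accordingly. */
static void update_firmware(const uint32_t *fw, int words)
{
    uint32_t mode = bar_read(FEATURE_PROM_OFFSET);
    printf("programming mode from feature PROM: %s\n",
           mode == MODE_SPI ? "SPI" : "BPI");
    for (int i = 0; i < words; i++)
        bar_write(FLASH_PROGRAMMER_OFFSET, fw[i]);   /* programmer writes flash */
    printf("sent %d firmware words to flash programmer\n", words);
}

int main(void)
{
    bar_space[FEATURE_PROM_OFFSET / 4] = MODE_SPI;   /* pretend PROM contents */
    uint32_t firmware[] = { 0x1, 0x2, 0x3 };
    update_firmware(firmware, 3);
    return 0;
}
```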
-
Patent number: 10924430
Abstract: A system includes a host system and an integrated circuit coupled to the host system through a communication interface. The integrated circuit is configured for hardware acceleration. The integrated circuit includes a direct memory access circuit coupled to the communication interface, a kernel circuit, and a stream traffic manager circuit coupled to the direct memory access circuit and the kernel circuit. The stream traffic manager circuit is configured to control data streams exchanged between the host system and the kernel circuit.
Type: Grant
Filed: November 9, 2018
Date of Patent: February 16, 2021
Assignee: Xilinx, Inc.
Inventors: Chandrasekhar S. Thyamagondlu, Hem C. Neema, Kenneth K. Chan, Ravi N. Kurlagunda, Karen Xie, Sonal Santan, Lizhi Hou
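As a rough sketch of "controlling data streams", the code below round-robins packets from several host streams toward the kernel so no single stream monopolizes the path. The round-robin policy and the per-stream counters are assumptions; the abstract does not specify the traffic manager's policy.

```c
#include <stdio.h>

#define NUM_STREAMS 3

/* Hypothetical per-stream state: how many packets the host still has queued. */
static int pending[NUM_STREAMS] = { 2, 0, 3 };

/* Stand-in for forwarding one packet from a host stream to the kernel circuit. */
static void forward_to_kernel(int stream)
{
    printf("stream %d -> kernel\n", stream);
}

/* Stream traffic manager sketch: round-robin over the streams so no single
 * host stream can monopolize the path to the kernel. */
static void traffic_manager_run(void)
{
    int remaining;
    do {
        remaining = 0;
        for (int s = 0; s < NUM_STREAMS; s++) {
            if (pending[s] > 0) {
                forward_to_kernel(s);
                pending[s]--;
                remaining += pending[s];
            }
        }
    } while (remaining > 0);
}

int main(void)
{
    traffic_manager_run();
    return 0;
}
```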
-
Patent number: 10877766
Abstract: An integrated circuit (IC) may include a scheduler for hardware acceleration. The scheduler may include a command queue having a plurality of slots and configured to store commands offloaded from a host processor for execution by compute units of the IC. The scheduler may include a status register having bit locations corresponding to the slots of the command queue. The scheduler may also include a controller coupled to the command queue and the status register. The controller may be configured to schedule the compute units of the IC to execute the commands stored in the slots of the command queue and update the bit locations of the status register to indicate which commands from the command queue are finished executing.
Type: Grant
Filed: May 24, 2018
Date of Patent: December 29, 2020
Assignee: Xilinx, Inc.
Inventors: Soren T. Soe, Idris I. Tarwala, Umang Parekh, Sonal Santan, Hem C. Neema
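The slot-to-status-bit correspondence is easy to show directly. In the sketch below, the controller walks the command queue, "runs" each occupied slot (completion is modeled as immediate), and sets the matching bit in the status register so the host can see which commands finished. The slot format and immediate completion are simplifying assumptions.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_SLOTS 8

/* Hypothetical command queue slot holding a command offloaded by the host. */
typedef struct {
    bool     occupied;
    uint32_t opcode;
} slot_t;

static slot_t   command_queue[NUM_SLOTS];
static uint32_t status_register;        /* bit i set => slot i finished */

/* Scheduler controller: dispatch each occupied slot to a compute unit (modelled
 * here as immediate completion) and set the matching status bit so the host can
 * see which commands have finished. */
static void schedule(void)
{
    for (int i = 0; i < NUM_SLOTS; i++) {
        if (command_queue[i].occupied) {
            /* ... start the command on a free compute unit ... */
            status_register |= (1u << i);      /* mark slot i as finished */
            command_queue[i].occupied = false; /* slot can be reused      */
        }
    }
}

int main(void)
{
    command_queue[0] = (slot_t){ true, 0x10 };   /* host fills two slots */
    command_queue[3] = (slot_t){ true, 0x22 };

    schedule();
    printf("status register: 0x%02x\n", (unsigned)status_register);  /* expect 0x09 */
    return 0;
}
```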