Patents by Inventor Jayanth N. Rao

Jayanth N. Rao has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11531623
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: December 20, 2022
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
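The shared-memory scheme this family of patents describes can be pictured with a small simulation: one physical allocation (the "surface") is made visible through two independent page tables, one for the CPU and one for the I/O device, so both processors address the same backing pages without copying. All class and function names below are illustrative assumptions, not anything from the patent itself.

```python
# Toy model of the shared-surface idea: one physical surface, two page tables.
PAGE_SIZE = 4096

class PhysicalMemory:
    def __init__(self):
        self.pages = []

    def allocate_surface(self, num_pages):
        """Reserve a contiguous run of physical pages for a surface."""
        start = len(self.pages)
        self.pages.extend(bytearray(PAGE_SIZE) for _ in range(num_pages))
        return list(range(start, start + num_pages))

class PageTable:
    """Maps virtual page numbers to physical page numbers."""
    def __init__(self, mem):
        self.mem = mem
        self.entries = {}

    def map_surface(self, base_vpn, phys_pages):
        for i, ppn in enumerate(phys_pages):
            self.entries[base_vpn + i] = ppn

    def write(self, vaddr, data):
        vpn, off = divmod(vaddr, PAGE_SIZE)
        self.mem.pages[self.entries[vpn]][off:off + len(data)] = data

    def read(self, vaddr, n):
        vpn, off = divmod(vaddr, PAGE_SIZE)
        return bytes(self.mem.pages[self.entries[vpn]][off:off + n])

mem = PhysicalMemory()
surface = mem.allocate_surface(num_pages=2)

cpu_pt = PageTable(mem)       # CPU page table: surface at CPU virtual page 10
gpu_pt = PageTable(mem)       # I/O device page table: same surface at page 99
cpu_pt.map_surface(10, surface)
gpu_pt.map_surface(99, surface)

# A CPU-side write is immediately visible through the device mapping,
# with no copy between the two address spaces.
cpu_pt.write(10 * PAGE_SIZE + 100, b"hello gpu")
print(gpu_pt.read(99 * PAGE_SIZE + 100, 9))  # b'hello gpu'
```

The point of the two-table design is that each processor keeps its own virtual addresses while the surface itself is allocated only once.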
  • Publication number: 20210286733
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Application
    Filed: February 19, 2021
    Publication date: September 16, 2021
    Applicant: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Patent number: 10937118
    Abstract: A method and system are described herein for an optimization technique on two aspects of thread scheduling and dispatch when the driver is allowed to pick the scheduling attributes. The present techniques rely on an enhanced GPGPU Walker hardware command and one dimensional local identification generation to maximize thread residency.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: March 2, 2021
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Michal Mrozek
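One way to picture the one-dimensional local-ID generation the abstract mentions: the driver dispatches a 3-D local work size as a single linear range and each thread recovers its (x, y, z) local ID from one linear ID with cheap arithmetic. This sketch is an illustration of the general flattening technique, not the patented hardware command.

```python
def flatten_local_size(lx, ly, lz):
    """Dispatch an lx*ly*lz local range as a single 1-D range."""
    return lx * ly * lz

def linear_to_3d(linear_id, lx, ly):
    """Recover (x, y, z) from a 1-D local ID with two divmods."""
    z, rem = divmod(linear_id, lx * ly)
    y, x = divmod(rem, lx)
    return x, y, z

lx, ly, lz = 8, 4, 2
total = flatten_local_size(lx, ly, lz)

# Every (x, y, z) in the 3-D local range is produced exactly once.
ids = [linear_to_3d(i, lx, ly) for i in range(total)]
assert sorted(ids) == sorted(
    (x, y, z) for z in range(lz) for y in range(ly) for x in range(lx))
print(total)  # 64
```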
  • Patent number: 10929304
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Patent number: 10521874
    Abstract: An apparatus and method are described for executing workloads without host intervention. For example, one embodiment of an apparatus comprises: a host processor; and a graphics processor unit (GPU) to execute a hierarchical workload responsive to one or more commands issued by the host processor, the hierarchical workload comprising a parent workload and a plurality of child workloads interconnected in a logical graph structure; and a scheduler kernel implemented by the GPU to schedule execution of the plurality of child workloads without host intervention, the scheduler kernel to evaluate conditions required for execution of the child workloads and determine an order in which to execute the child workloads on the GPU based on the evaluated conditions; the GPU to execute the child workloads in the order determined by the scheduler kernel and to provide results of parent and child workloads to the host processor following execution of all of the child workloads.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: December 31, 2019
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Pavan K. Lanka, Michal Mrozek
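The scheduler-kernel behavior in the abstract above (evaluate each child workload's conditions, then pick an execution order without host intervention) amounts to ordering a dependency graph. The sketch below is a minimal host-side simulation of that idea; the data structures and the ready-list policy are illustrative assumptions, not the patented design.

```python
def schedule_children(children, deps):
    """children: list of workload names; deps: child -> set of prerequisites.
    Returns an execution order in which every child runs only after all of
    its prerequisites have completed (a topological order)."""
    completed, order = set(), []
    pending = list(children)
    while pending:
        # Evaluate conditions: a child is ready once all its deps completed.
        ready = [c for c in pending if deps.get(c, set()) <= completed]
        if not ready:
            raise RuntimeError("dependency cycle in child workloads")
        for c in ready:
            order.append(c)
            completed.add(c)
            pending.remove(c)
    return order

# Parent spawns four children; C and D depend on A, E depends on C and D.
order = schedule_children(
    ["E", "D", "C", "A"],
    {"C": {"A"}, "D": {"A"}, "E": {"C", "D"}})
print(order)  # every child appears after all of its prerequisites
```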
  • Publication number: 20190259129
    Abstract: A method and system are described herein for an optimization technique on two aspects of thread scheduling and dispatch when the driver is allowed to pick the scheduling attributes. The present techniques rely on an enhanced GPGPU Walker hardware command and one dimensional local identification generation to maximize thread residency.
    Type: Application
    Filed: January 28, 2019
    Publication date: August 22, 2019
    Applicant: Intel Corporation
    Inventors: Jayanth N. Rao, Michal Mrozek
  • Publication number: 20190114267
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Application
    Filed: December 13, 2018
    Publication date: April 18, 2019
    Applicant: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Patent number: 10235732
    Abstract: A method and system are described herein for an optimization technique on two aspects of thread scheduling and dispatch when the driver is allowed to pick the scheduling attributes. The present techniques rely on an enhanced GPGPU Walker hardware command and one dimensional local identification generation to maximize thread residency.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: March 19, 2019
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Michal Mrozek
  • Patent number: 10198361
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: February 5, 2019
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Patent number: 10068306
    Abstract: A mechanism is described for facilitating dynamic pipelining of workload executions at graphics processing units on computing devices. A method of embodiments, as described herein, includes generating a command buffer having a plurality of kernels relating to a plurality of workloads to be executed at a graphics processing unit (GPU), and pipelining the workloads to be processed at the GPU, where pipelining includes scheduling each kernel to be executed on the GPU based on at least one of availability of resource threads and status of one or more dependency events relating to each kernel in relation to other kernels of the plurality of kernels.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: September 4, 2018
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Pavan K. Lanka
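The pipelining condition in the abstract above, launching each kernel only when resource threads are available and its dependency events have signaled, can be sketched as a small simulation. The dictionaries, the immediate-completion model, and the event naming are all simplifying assumptions for illustration.

```python
def pipeline(kernels, total_threads):
    """kernels: list of dicts with 'name', 'threads', 'after' (event deps).
    Launches a kernel only when it fits the thread budget and its events
    have signaled; in this toy model a launched kernel completes at once,
    signaling an event named after itself."""
    signaled, launched = set(), []
    pending = list(kernels)
    while pending:
        progress = False
        for k in list(pending):
            if k["threads"] <= total_threads and set(k["after"]) <= signaled:
                launched.append(k["name"])
                pending.remove(k)
                signaled.add(k["name"])   # completion signals the event
                progress = True
        if not progress:
            raise RuntimeError("stalled: unmet events or too few threads")
    return launched

order = pipeline(
    [{"name": "blur", "threads": 64, "after": []},
     {"name": "tonemap", "threads": 32, "after": ["blur"]},
     {"name": "resize", "threads": 16, "after": []}],
    total_threads=64)
print(order)  # 'tonemap' runs only after the 'blur' event signals
```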
  • Patent number: 9892480
    Abstract: According to some embodiments, a graphics processor may abort a workload without requiring changes to the kernel code compilation or intruding upon graphics processing unit execution. Instead, it is possible to only read the predicate state once before starting and once before restarting a workload that has been preempted because the user wishes to abort the work. This avoids the need to read from each execution unit, reducing the drain on memory bandwidth and increasing power and performance in some embodiments.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: February 13, 2018
    Assignee: Intel Corporation
    Inventor: Jayanth N. Rao
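The key cost saving in the abstract above is reading the predicate state only at (re)start boundaries instead of polling every execution unit. A minimal sketch of that control flow, with invented names and a one-chunk-per-preemption model, might look like:

```python
def run_workload(chunks, predicate):
    """predicate() is consulted once before starting and once before each
    restart after preemption, never mid-chunk. Returns the chunks that
    were actually processed before any abort."""
    done = []
    restart = 0
    while restart < len(chunks):
        if predicate():           # single read at the (re)start boundary
            return done           # user aborted: skip the remaining work
        done.append(chunks[restart])   # run until the next preemption
        restart += 1
    return done

aborted = {"flag": False}
print(run_workload([1, 2, 3], lambda: aborted["flag"]))  # [1, 2, 3]

aborted["flag"] = True
print(run_workload([1, 2, 3], lambda: aborted["flag"]))  # []
```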
  • Patent number: 9779472
    Abstract: A method and system for shared virtual memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a system memory. A CPU virtual address space may be created, and the surface may be mapped to the CPU virtual address space within a CPU page table. The method also includes creating a GPU virtual address space equivalent to the CPU virtual address space, mapping the surface to the GPU virtual address space within a GPU page table, and pinning the surface.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: October 3, 2017
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Ronald W. Silvas, Ankur N. Shah
  • Patent number: 9606919
    Abstract: A method and apparatus to facilitate shared pointers in a heterogeneous platform. In one embodiment of the invention, the heterogeneous or non-homogeneous platform includes, but is not limited to, a central processing core or unit, a graphics processing core or unit, a digital signal processor, an interface module, and any other form of processing cores. The heterogeneous platform has logic to facilitate sharing of pointers to a location of a memory shared by the CPU and the GPU. By sharing pointers in the heterogeneous platform, the data or information sharing between different cores in the heterogeneous platform can be simplified.
    Type: Grant
    Filed: October 13, 2014
    Date of Patent: March 28, 2017
    Assignee: Intel Corporation
    Inventors: Yang Ni, Rajkishore Barik, Ali-Reza Adl-Tabatabai, Tatiana Shpeisman, Jayanth N. Rao, Ben J. Ashbaugh, Tomasz Janczak
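The benefit of shared pointers described in the abstract above is that a pointer-based data structure needs no translation or marshalling when it crosses processors. The toy below models one shared address space as a flat table and traverses a linked list through raw addresses; the layout is an invented example, not the patented mechanism.

```python
shared = {}          # addr -> (value, next_addr); one address space for all

def alloc(addr, value, next_addr):
    shared[addr] = (value, next_addr)

# Build the list 0x100 -> 0x200 -> 0x300 using raw 'pointers' (addresses).
alloc(0x300, 30, None)
alloc(0x200, 20, 0x300)
alloc(0x100, 10, 0x200)

def traverse(head):
    """Either processor can follow the same pointers unchanged, because
    the addresses mean the same thing on both sides."""
    out, p = [], head
    while p is not None:
        value, p = shared[p]
        out.append(value)
    return out

print(traverse(0x100))  # [10, 20, 30], from CPU and GPU alike
```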
  • Patent number: 9514559
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: December 6, 2016
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Publication number: 20160328823
    Abstract: A method and system for shared virtual memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a system memory. A CPU virtual address space may be created, and the surface may be mapped to the CPU virtual address space within a CPU page table. The method also includes creating a GPU virtual address space equivalent to the CPU virtual address space, mapping the surface to the GPU virtual address space within a GPU page table, and pinning the surface.
    Type: Application
    Filed: May 13, 2016
    Publication date: November 10, 2016
    Applicant: Intel Corporation
    Inventors: Jayanth N. Rao, Ronald W. Silvas, Ankur N. Shah
  • Publication number: 20160314077
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Application
    Filed: June 30, 2016
    Publication date: October 27, 2016
    Applicant: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Patent number: 9449363
    Abstract: Apparatuses, systems, and methods may sample a texture, manage a page fault, and/or switch a context associated with the page fault. A three-dimensional (3D) graphics pipeline may provide texture sample location data corresponding to a texture, wherein sampling of the texture is to be executed external to the 3D graphics pipeline. A compute pipeline may execute sampling of the texture utilizing the texture sample location data and provide texture sample result data corresponding to the texture, wherein the 3D graphics pipeline may composite a frame utilizing the texture sample result data. The compute pipeline may manage a page fault, wherein the page fault and/or management of the page fault may be hidden from a graphics application. In addition, the compute pipeline may switch a compute context associated with the page fault to allow a graphics task not associated with the page fault to be executed and/or to prevent a stall.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: September 20, 2016
    Assignee: Intel Corporation
    Inventors: John A. Tsakok, Brandon L. Fliflet, Jayanth N. Rao
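The pipeline split in the abstract above can be pictured in three stages: the 3-D pipeline emits only sample locations, a compute pass performs the actual texture reads, and the 3-D pipeline then consumes the results for compositing. Nearest-neighbor sampling and every function name below are simplifying assumptions for illustration.

```python
texture = [[x * 10 + y for x in range(4)] for y in range(4)]  # 4x4 texels

def graphics_emit_locations():
    """3-D pipeline stage: produce normalized sample coordinates."""
    return [(0.0, 0.0), (0.5, 0.25), (0.99, 0.99)]

def compute_sample(locations, tex):
    """Compute pipeline stage: sample the texture at each location,
    external to the 3-D pipeline."""
    h, w = len(tex), len(tex[0])
    results = []
    for u, v in locations:
        x = min(int(u * w), w - 1)   # nearest-neighbor lookup
        y = min(int(v * h), h - 1)
        results.append(tex[y][x])
    return results

def graphics_composite(results):
    """3-D pipeline consumes the sample results to build the frame
    (reduced to a sum here for brevity)."""
    return sum(results)

samples = compute_sample(graphics_emit_locations(), texture)
print(samples)                    # [0, 21, 33]
print(graphics_composite(samples))
```

Because the sampling runs in the compute pipeline, a page fault raised there can be handled (or its context switched out) without stalling the 3-D pipeline, which is the scenario the claims address.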
  • Publication number: 20160203580
    Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
    Type: Application
    Filed: March 24, 2016
    Publication date: July 14, 2016
    Applicant: Intel Corporation
    Inventors: Jayanth N. Rao, Murali Sundaresan
  • Patent number: 9378572
    Abstract: A method and system for shared virtual memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a system memory. A CPU virtual address space may be created, and the surface may be mapped to the CPU virtual address space within a CPU page table. The method also includes creating a GPU virtual address space equivalent to the CPU virtual address space, mapping the surface to the GPU virtual address space within a GPU page table, and pinning the surface.
    Type: Grant
    Filed: August 17, 2012
    Date of Patent: June 28, 2016
    Assignee: Intel Corporation
    Inventors: Jayanth N. Rao, Ronald W. Silvas, Ankur N. Shah
  • Publication number: 20160180486
    Abstract: A mechanism is described for facilitating dynamic pipelining of workload executions at graphics processing units on computing devices. A method of embodiments, as described herein, includes generating a command buffer having a plurality of kernels relating to a plurality of workloads to be executed at a graphics processing unit (GPU), and pipelining the workloads to be processed at the GPU, where pipelining includes scheduling each kernel to be executed on the GPU based on at least one of availability of resource threads and status of one or more dependency events relating to each kernel in relation to other kernels of the plurality of kernels.
    Type: Application
    Filed: December 18, 2014
    Publication date: June 23, 2016
    Inventors: Jayanth N. Rao, Pavan K. Lanka