Patents by Inventor Abhishek R. Appu

Abhishek R. Appu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11675711
    Abstract: One embodiment provides for a graphics processor comprising a translation lookaside buffer (TLB) to cache a first page table entry for a virtual-to-physical address mapping for use by the graphics processor, the first page table entry to indicate that a first virtual page is a valid page that is cleared to a clear color, and graphics pipeline circuitry to bypass a memory access for the first virtual page based on the first page table entry in response to a determination that the first virtual page is cleared to the clear color and to determine a color associated with the first virtual page without performing a memory access to the first virtual page.
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: June 13, 2023
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Abhishek R. Appu, Kiran C. Veernapu
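    A toy software model of the clear-color bypass described in patent 11675711 above: a page table entry flagged as cleared lets a read return the clear color without touching memory. A minimal sketch only; the class and field names are assumptions, not Intel's hardware interfaces.
    ```python
    from dataclasses import dataclass

    @dataclass
    class PageTableEntry:
        physical_page: int
        valid: bool
        cleared: bool          # page is known to contain only the clear color
        clear_color: int = 0   # packed clear value

    class GraphicsMemory:
        def __init__(self, backing: dict[int, int]):
            self.backing = backing
            self.accesses = 0

        def read(self, tlb: dict[int, PageTableEntry], virtual_page: int, offset: int) -> int:
            pte = tlb[virtual_page]
            if pte.valid and pte.cleared:
                # Bypass: the color is known from the PTE alone, so no memory traffic is generated.
                return pte.clear_color
            self.accesses += 1
            return self.backing[pte.physical_page * 4096 + offset]

    tlb = {0: PageTableEntry(physical_page=7, valid=True, cleared=True, clear_color=0xFF000000)}
    mem = GraphicsMemory(backing={})
    assert mem.read(tlb, virtual_page=0, offset=128) == 0xFF000000
    assert mem.accesses == 0  # the cleared page never caused a memory access
    ```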
  • Publication number: 20230177817
    Abstract: A mechanism is described for facilitating recognition, reidentification, and security in machine learning at autonomous machines. A method of embodiments, as described herein, includes facilitating a camera to detect one or more objects within a physical vicinity, the one or more objects including a person, and the physical vicinity including a house, where detecting includes capturing one or more images of one or more portions of a body of the person. The method may further include extracting body features based on the one or more portions of the body, comparing the extracted body features with feature vectors stored at a database, and building a classification model based on the extracted body features over a period of time to facilitate recognition or reidentification of the person independent of facial recognition of the person.
    Type: Application
    Filed: October 14, 2022
    Publication date: June 8, 2023
    Applicant: Intel Corporation
    Inventors: Barnan Das, Mayuresh M. Varerkar, Narayan Biswal, Stanley J. Baran, Gokcen Cilingir, Nilesh V. Shah, Archie Sharma, Sherine Abdelhak, Praneetha Kotha, Neelay Pandit, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Abhishek R. Appu, Altug Koker, Joydeep Ray
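    A hedged sketch of the re-identification step in publication 20230177817 above: an extracted body-feature vector is matched against stored feature vectors by cosine similarity. The feature extractor is stubbed out, and the threshold and gallery values are invented for illustration.
    ```python
    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def reidentify(features: list[float], gallery: dict[str, list[float]], threshold: float = 0.9):
        """Return the best-matching identity, or None if nothing is close enough."""
        best_id, best_score = None, threshold
        for identity, stored in gallery.items():
            score = cosine_similarity(features, stored)
            if score >= best_score:
                best_id, best_score = identity, score
        return best_id

    gallery = {"resident_1": [0.9, 0.1, 0.4], "visitor_2": [0.1, 0.8, 0.2]}
    print(reidentify([0.88, 0.12, 0.41], gallery))  # -> resident_1
    ```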
  • Patent number: 11669932
    Abstract: A mechanism is described for facilitating sharing of data and compression expansion of models at autonomous machines. A method of embodiments, as described herein, includes detecting a first processor processing information relating to a neural network at a first computing device, where the first processor comprises a first graphics processor and the first computing device comprises a first autonomous machine. The method further includes facilitating the first processor to store one or more portions of the information in a library at a database, where the one or more portions are accessible to a second processor of a computing device.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: June 6, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Sara S. Baghsorkhi, Justin E. Gottschlich, Prasoonkumar Surti, Chandrasekaran Sakthivel, Joydeep Ray
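    A minimal sketch for patent 11669932 above, under the assumption that the "library at a database" behaves like a shared store of model portions that one autonomous machine publishes and another fetches; the class and key layout are hypothetical.
    ```python
    class ModelLibrary:
        """Shared library of neural-network portions, keyed by machine and layer."""

        def __init__(self):
            self._store: dict[str, bytes] = {}

        def publish(self, machine_id: str, layer_name: str, weights: bytes) -> None:
            self._store[f"{machine_id}/{layer_name}"] = weights

        def fetch(self, machine_id: str, layer_name: str) -> bytes:
            return self._store[f"{machine_id}/{layer_name}"]

    library = ModelLibrary()
    library.publish("machine_a", "conv1", b"\x01\x02\x03")  # first machine shares a portion
    print(library.fetch("machine_a", "conv1"))              # second machine reuses it
    ```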
  • Patent number: 11670044
    Abstract: One embodiment provides for a graphics processing unit comprising a processing cluster to perform coarse pixel shading and output shaded coarse pixels for processing by a pixel processing pipeline and a render cache to store coarse pixel data for input to or output from a pixel processing pipeline.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: June 6, 2023
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Abhishek R. Appu, Subhajit Dasgupta, Srivallaba Mysore, Michael J. Norris, Vasanth Ranganathan, Joydeep Ray
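    A toy model of the coarse pixel shading in patent 11670044 above: shade once per coarse pixel (a 2x2 block here, an assumed rate), cache the result, and broadcast it to the covered fine pixels. A software illustration only, not the hardware pipeline.
    ```python
    def shade(cx: int, cy: int) -> tuple[int, int, int]:
        # Stand-in for an expensive pixel-shader invocation.
        return (cx * 16 % 256, cy * 16 % 256, 128)

    def render(width: int, height: int, coarse: int = 2):
        render_cache: dict[tuple[int, int], tuple[int, int, int]] = {}
        image = [[None] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                key = (x // coarse, y // coarse)
                if key not in render_cache:       # shade at the coarse rate only
                    render_cache[key] = shade(*key)
                image[y][x] = render_cache[key]   # reuse for the fine pixels it covers
        return image, len(render_cache)

    image, shader_invocations = render(8, 8)
    print(shader_invocations)  # 16 coarse shades instead of 64 per-pixel shades
    ```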
  • Patent number: 11670041
    Abstract: Systems, apparatuses and methods may provide for technology that selects an anti-aliasing mode for a vertex of a primitive based on a parameter associated with the vertex and generates a coverage mask based on the selected anti-aliasing mode. Additionally, one or more pixels corresponding to the vertex may be shaded based at least partly on the coverage mask, wherein the selected anti-aliasing mode varies across a plurality of vertices in the primitive.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: June 6, 2023
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Abhishek R. Appu, Joydeep Ray
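    A hedged sketch of the idea in patent 11670041 above: an anti-aliasing mode is selected from a per-vertex parameter, and a coverage mask is built for that mode. The parameter-to-sample-count mapping is an assumption for illustration.
    ```python
    def select_aa_mode(vertex_param: float) -> int:
        """Map a per-vertex quality parameter to a sample count (the AA 'mode')."""
        if vertex_param > 0.75:
            return 8
        if vertex_param > 0.25:
            return 4
        return 1

    def coverage_mask(samples: int, covered: list[bool]) -> int:
        """Pack per-sample coverage into a bitmask sized to the selected mode."""
        mask = 0
        for i in range(samples):
            if covered[i]:
                mask |= 1 << i
        return mask

    # The mode can vary across a primitive's vertices, as the abstract notes.
    for param in (0.9, 0.5, 0.1):
        n = select_aa_mode(param)
        print(n, bin(coverage_mask(n, [True] * n)))
    ```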
  • Patent number: 11663774
    Abstract: Systems, apparatuses and methods may provide a way to render edges of an object defined by multiple tessellation triangles. More particularly, systems, apparatuses and methods may provide a way to perform anti-aliasing at the edges of the object based on a coarse pixel rate, where the coarse pixels may be based on a coarse Z value that indicates a resolution or granularity of detail of the coarse pixel. The systems, apparatuses and methods may use a shader dispatch engine to dispatch raster rules to a pixel shader to direct the pixel shader to include, in a tile and/or tessellation triangle, one or more finer coarse pixels based on a percent of coverage provided by a finer coarse pixel of a tessellation triangle at or along the edge of the object.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: May 30, 2023
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Karthik Vaidyanathan, Murali Ramadoss, Michael Apodaca, Abhishek Venkatesh, Joydeep Ray, Abhishek R. Appu
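    An illustrative take on the raster rule in patent 11663774 above: along the object edge, a coarse pixel whose triangle coverage falls below a threshold is shaded with finer coarse pixels. The 50% threshold and the halving rule are assumptions, not the patented values.
    ```python
    def refine_coarse_pixel(coverage_fraction: float, coarse_size: int, threshold: float = 0.5) -> int:
        """Return the pixel size to shade at: the full coarse size, or a finer size on edges."""
        if coverage_fraction < threshold:
            return max(1, coarse_size // 2)  # use finer coarse pixels along the edge
        return coarse_size                   # interior: keep the coarse rate

    print(refine_coarse_pixel(0.9, coarse_size=4))  # interior -> 4
    print(refine_coarse_pixel(0.3, coarse_size=4))  # edge     -> 2
    ```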
  • Patent number: 11663746
    Abstract: Embodiments described herein provide for an instruction and associated logic to enable a processing resource including a tensor accelerator to perform optimized computation of sparse submatrix operations. One embodiment provides hardware logic to apply a numerical transform to matrix data to increase the sparsity of the data. Increasing the sparsity may result in a higher compression ratio when the matrix data is compressed.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: May 30, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Prasoonkumar Surti, Jill Boyce, Subramaniam Maiyuran, Michael Apodaca, Adam T. Lake, James Holland, Vasanth Ranganathan, Altug Koker, Lidong Xu, Nikos Kaburlasos
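    One assumed example of a numerical transform that increases sparsity, for patent 11663746 above: delta-encoding each row of integer matrix data turns runs of equal values into zeros, which then compress better. The transform actually used by the hardware is not specified here.
    ```python
    def delta_rows(matrix: list[list[int]]) -> list[list[int]]:
        """Replace each element with its difference from the previous element in the row."""
        out = []
        for row in matrix:
            prev, new_row = 0, []
            for value in row:
                new_row.append(value - prev)
                prev = value
            out.append(new_row)
        return out

    def count_zeros(matrix: list[list[int]]) -> int:
        return sum(value == 0 for row in matrix for value in row)

    m = [[7, 7, 7, 7, 9, 9, 9, 9] for _ in range(4)]
    print(count_zeros(m), "->", count_zeros(delta_rows(m)), "zero elements")  # 0 -> 24
    ```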
  • Patent number: 11650928
    Abstract: A mechanism is described for facilitating optimization of cache associated with graphics processors at computing devices. A method of embodiments, as described herein, includes introducing coloring bits to contents of a cache associated with a processor including a graphics processor, wherein the coloring bits to represent a signal identifying one or more caches available for use, while avoiding explicit invalidations and flushes.
    Type: Grant
    Filed: April 7, 2022
    Date of Patent: May 16, 2023
    Assignee: Intel Corporation
    Inventors: Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
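    A sketch in the spirit of patent 11650928 above, assuming that cache lines tagged with a stale color simply behave as misses, so no explicit invalidation or flush walk is needed. The class and method names are illustrative.
    ```python
    class ColoredCache:
        def __init__(self):
            self.current_color = 0
            self.lines: dict[int, tuple[int, int]] = {}  # address -> (color, value)

        def write(self, address: int, value: int) -> None:
            self.lines[address] = (self.current_color, value)

        def read(self, address: int):
            entry = self.lines.get(address)
            if entry is None or entry[0] != self.current_color:
                return None  # a stale color behaves like a miss
            return entry[1]

        def recolor(self) -> None:
            # One color update retires every line of the old color at once,
            # instead of explicitly invalidating or flushing each line.
            self.current_color += 1

    cache = ColoredCache()
    cache.write(0x40, 123)
    print(cache.read(0x40))  # 123
    cache.recolor()
    print(cache.read(0x40))  # None: old-color data is no longer visible
    ```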
  • Publication number: 20230142472
    Abstract: In accordance with some embodiments, the render rate is varied across and/or up and down the display screen. This may be done based on where the user is looking in order to reduce power consumption and/or increase performance. Specifically, the screen display is separated into regions, such as quadrants. Each of these regions is rendered at a rate determined by at least one of what the user is currently looking at, what the user has looked at in the past, and/or what the user is predicted to look at next. Areas of less focus may be rendered at a lower rate, reducing power consumption in some embodiments.
    Type: Application
    Filed: October 4, 2022
    Publication date: May 11, 2023
    Inventors: Eric J. Asperheim, Subramaniam Maiyuran, Kiran C. Veernapu, Sanjeev S. Jahagirdar, Balaji Vembu, Devan Burke, Philip R. Laws, Kamal Sinha, Abhishek R. Appu, Elmoustapha Ould-Ahmed-Vall, Peter L. Doyle, Joydeep Ray, Travis T. Schluessler, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Altug Koker
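    A hypothetical sketch of the gaze-driven render rates in publication 20230142472 above: the screen is split into quadrants, the quadrant being looked at renders at full rate, and the others render at a reduced rate. The specific rates are placeholders.
    ```python
    def quadrant(x: float, y: float, width: int, height: int) -> int:
        return (0 if x < width / 2 else 1) + (0 if y < height / 2 else 2)

    def render_rates(gaze_x: float, gaze_y: float, width: int, height: int) -> dict[int, float]:
        focus = quadrant(gaze_x, gaze_y, width, height)
        # Full rate where the user is looking, lower rate elsewhere to save power.
        return {q: (1.0 if q == focus else 0.5) for q in range(4)}

    print(render_rates(gaze_x=1700, gaze_y=200, width=1920, height=1080))
    # {0: 0.5, 1: 1.0, 2: 0.5, 3: 0.5}
    ```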
  • Patent number: 11636831
    Abstract: Methods and apparatus relating to an adaptive multibit bus for energy optimization are described. In an embodiment, a 1-bit interconnect of a processor is caused to select between a plurality of operational modes. The plurality of operational modes comprises a first mode and a second mode. The first mode causes transmission of a single bit over the 1-bit interconnect at a first frequency and the second mode causes transmission of a plurality of bits over the 1-bit interconnect at a second frequency based at least in part on a determination that an operating voltage of the 1-bit interconnect is at a high voltage level and that the second frequency is lower than the first frequency. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: April 25, 2023
    Assignee: Intel Corporation
    Inventors: Sanjeev S. Jahagirdar, Tapan A. Ganpule, Anupama A. Thaploo, Abhishek R. Appu, Joydeep Ray, Altug Koker
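    A simple model of the mode-selection rule in patent 11636831 above: the 1-bit interconnect switches to a multi-bit, lower-frequency mode only when the operating voltage is high and the alternative frequency is lower. The voltage threshold is a placeholder.
    ```python
    HIGH_VOLTAGE_THRESHOLD = 0.9  # volts, assumed

    def select_mode(voltage: float, f_single_bit: float, f_multi_bit: float) -> str:
        if voltage >= HIGH_VOLTAGE_THRESHOLD and f_multi_bit < f_single_bit:
            return "multi-bit"   # several bits per transfer at the lower frequency
        return "single-bit"      # one bit per transfer at the higher frequency

    print(select_mode(voltage=1.0, f_single_bit=2.0e9, f_multi_bit=1.0e9))  # multi-bit
    print(select_mode(voltage=0.7, f_single_bit=2.0e9, f_multi_bit=1.0e9))  # single-bit
    ```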
  • Publication number: 20230123644
    Abstract: One embodiment provides a graphics processor comprising an interface to a system interconnect and a graphics processor coupled to the interface, the graphics processor comprising circuitry configured to compact sample data for multiple sample locations of a pixel, map the multiple sample locations to memory locations that store compacted sample data, the memory locations in a memory of the graphics processor, apply lossless compression to the compacted sample data, and update a compression control surface associated with the memory locations, the compression control surface to specify a compression status for the memory locations.
    Type: Application
    Filed: October 6, 2022
    Publication date: April 20, 2023
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Prasoonkumar Surti, Joydeep Ray, Michael J. Norris
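    A rough sketch of the sample compaction in publication 20230123644 above: store each distinct sample color of a pixel once, map the sample slots onto the stored entries, and record a status in a compression control surface. The data layout is an assumption.
    ```python
    def compact_samples(samples: list[int]) -> tuple[list[int], list[int]]:
        palette: list[int] = []   # distinct sample colors, stored once
        mapping: list[int] = []   # sample location -> palette entry
        for color in samples:
            if color not in palette:
                palette.append(color)
            mapping.append(palette.index(color))
        return palette, mapping

    control_surface: dict[int, str] = {}  # block address -> compression status

    samples = [0xFF0000, 0xFF0000, 0xFF0000, 0x00FF00]  # 4x MSAA, mostly one color
    palette, mapping = compact_samples(samples)
    control_surface[0x1000] = "compressed" if len(palette) < len(samples) else "uncompressed"
    print(palette, mapping, control_surface)
    ```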
  • Publication number: 20230104845
    Abstract: One embodiment provides a graphics processor including a processing resource including a register file, memory, a cache, and load/store/cache circuitry to process load, store, and prefetch messages from the processing resource. The circuitry will sort received memory access messages into address sorted lists of reads and writes. The circuitry schedules a first set of address sorted requests from a first request buffer for a first period of time, then schedules a second set of address sorted requests from a second request buffer for a second period of time.
    Type: Application
    Filed: September 24, 2021
    Publication date: April 6, 2023
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Abhishek R. Appu, Altug Koker, Aditya Navale, Varghese George, Vasanth Ranganathan, Fangwen Fu, Ben J. Ashbaugh, Vidhya Krishnan, Sabareesh Ganapathy, Prathamesh Raghunath Shinde
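    A sketch, with assumed names and slice length, of the scheduling idea in publication 20230104845 above: each request buffer is kept address-sorted, and one buffer is drained for a period before the scheduler switches to the other.
    ```python
    from collections import deque

    def schedule(requests_a: list[int], requests_b: list[int], slice_len: int = 4) -> list[int]:
        buffers = [deque(sorted(requests_a)), deque(sorted(requests_b))]  # address-sorted lists
        order, active = [], 0
        while any(buffers):
            for _ in range(slice_len):       # serve the active buffer for its period
                if not buffers[active]:
                    break
                order.append(buffers[active].popleft())
            active ^= 1                      # then switch to the other buffer
        return order

    print([hex(a) for a in schedule([0x40, 0x10, 0x30], [0x200, 0x180])])
    # ['0x10', '0x30', '0x40', '0x180', '0x200']
    ```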
  • Publication number: 20230101654
    Abstract: Dynamic routing of texture loads in graphics processing is described. An example of a processor includes one or more processing resources, the one or more processing resources to load a message including a texture load; a texture sampler and a data port; and a message router to route the texture load to a destination, wherein the destination may be either the texture sampler or the data port; wherein the message router includes arbitration circuitry to select the destination for the texture load, the arbitration circuitry to base selection of the destination at least in part on support by the data port for a format of a memory surface for the texture load; and a utilization metric for the data port representing availability of the data port.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Applicant: Intel Corporation
    Inventors: Carlos Nava Rodriguez, Yoav Harel, Joydeep Ray, Abhishek R. Appu, Vamsee Vardhan Chivukula, Benjamin R. Pletcher
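    A hedged sketch of the arbitration in publication 20230101654 above: a texture load is routed to the data port only when the data port supports the surface format and its utilization is low enough; otherwise it goes to the texture sampler. The format set and threshold are invented.
    ```python
    DATA_PORT_FORMATS = {"R8G8B8A8_UNORM", "R32_FLOAT"}  # illustrative set

    def route_texture_load(surface_format: str, data_port_utilization: float) -> str:
        if surface_format in DATA_PORT_FORMATS and data_port_utilization < 0.75:
            return "data_port"
        return "texture_sampler"

    print(route_texture_load("R32_FLOAT", data_port_utilization=0.2))  # data_port
    print(route_texture_load("BC7_UNORM", data_port_utilization=0.2))  # texture_sampler
    print(route_texture_load("R32_FLOAT", data_port_utilization=0.9))  # texture_sampler
    ```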
  • Publication number: 20230102538
    Abstract: Embodiments are directed to systems and methods for supporting generic pointers in hardware of a GPU. According to one embodiment, a GPU includes multiple sub-cores each having a processing resource and a load/store pipeline. The processing resource is operable to receive a memory access message including a pointer and a memory type identifier indicative of the pointer representing a generic pointer. The processing resource is further operable to output a load or store operation to the load/store pipeline based on the memory access message, including computing an address for the load or store operation by adding a base address of a named memory type of a plurality of named memory types referenced by the generic pointer to an offset into a memory of the named memory type. The load/store pipeline is operable to, responsive to receipt of the load or store operation, access the memory at the address.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Prathamesh Raghunath Shinde, Ben J. Ashbaugh, Wei-Yu Chen, Abhishek R. Appu, Vasanth Ranganathan, Dmitry Yurievich Babokin, Ankur N. Shah
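    A minimal model of the generic-pointer resolution described in publication 20230102538 above: the memory-type identifier selects a named memory type whose base address is added to the offset carried by the pointer. The base addresses here are made up.
    ```python
    MEMORY_BASES = {"global": 0x8000_0000, "shared": 0x0040_0000, "private": 0x0001_0000}

    def resolve_generic_pointer(memory_type: str, offset: int) -> int:
        """Compute the load/store address as named-type base plus offset."""
        return MEMORY_BASES[memory_type] + offset

    print(hex(resolve_generic_pointer("shared", 0x80)))  # 0x400080
    ```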
  • Publication number: 20230096188
    Abstract: Examples include techniques for a fast clear of a 3-dimensional (3D) surface. Examples include re-describing the 3D surface as a 2-dimensional (2D) surface, using various dimensions of the 3D surface as inputs to an algorithm that outputs the 2D surface as a re-description of the 3D surface. The algorithm also includes additional inputs associated with a tiling mode used to read or write the 3D surface to a graphics display and a bits-per-pixel format to output the 2D surface. The 2D surface width and height associated with the outputted 2D surface are included in a clear command to cause the 3D surface to be cleared.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Karol A. Szerszen, Prasoonkumar Surti, Abhishek R. Appu, John H. Feit
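    An illustrative re-description of a 3D surface as a 2D surface for a fast clear, in the spirit of publication 20230096188 above. The exact formula depends on the tiling mode and bits-per-pixel format; this flattening and the tile-row size are assumed.
    ```python
    def redescribe_3d_as_2d(width: int, height: int, depth: int,
                            bits_per_pixel: int, tile_row_bytes: int = 512) -> tuple[int, int]:
        """Flatten depth slices vertically and pad the width to a tile row."""
        tile_width_px = tile_row_bytes * 8 // bits_per_pixel
        width_2d = (width + tile_width_px - 1) // tile_width_px * tile_width_px
        height_2d = height * depth
        return width_2d, height_2d

    def fast_clear_command(width_2d: int, height_2d: int, clear_value: int) -> dict:
        # The 2D width and height go into the clear command, as the abstract notes.
        return {"op": "clear", "width": width_2d, "height": height_2d, "value": clear_value}

    w2, h2 = redescribe_3d_as_2d(width=200, height=64, depth=16, bits_per_pixel=32)
    print(fast_clear_command(w2, h2, clear_value=0))
    # {'op': 'clear', 'width': 256, 'height': 1024, 'value': 0}
    ```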
  • Publication number: 20230099093
    Abstract: A graphics processing apparatus includes graphics processors connected by a network connection, where the graphics processors pass compressed data. A first graphics processor stores data blocks as compressed data in a memory. The compressed data has data blocks of variable size, where a size of a block of compressed data depends on a compression ratio of the block of compressed data. A second graphics processor also stores data blocks as compressed data. The first graphics processor concatenates a variable number of blocks of compressed data into a packet of fixed size to send to the second graphics processor. The packet has a variable number of blocks of compressed data depending on the compression ratios of the multiple blocks of compressed data.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Karol A. Szerszen, Prasoonkumar Surti, Abhishek R. Appu
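    A sketch of packing variable-size compressed blocks into fixed-size packets for the GPU-to-GPU link described in publication 20230099093 above; the 256-byte packet size and the greedy policy are assumptions.
    ```python
    PACKET_SIZE = 256  # bytes, assumed

    def build_packets(block_sizes: list[int]) -> list[list[int]]:
        packets, current, used = [], [], 0
        for size in block_sizes:
            if used + size > PACKET_SIZE and current:
                packets.append(current)  # packet is full: send it
                current, used = [], 0
            current.append(size)         # concatenate the next compressed block
            used += size
        if current:
            packets.append(current)
        return packets

    # Higher compression ratios (smaller blocks) mean more blocks per packet.
    print(build_packets([64, 64, 32, 128, 200, 40, 40]))
    # [[64, 64, 32], [128], [200, 40], [40]]
    ```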
  • Publication number: 20230090973
    Abstract: One embodiment provides a graphics processor including a processing resource including a register file, memory, a cache memory, and load/store/cache circuitry to process load, store, and prefetch messages from the processing resource. The circuitry includes support for an immediate address offset that will be used to adjust the address supplied for a memory access to be requested by the circuitry. Including support for the immediate address offset removes the need to execute additional instructions to adjust the address to be accessed prior to execution of the memory access instruction.
    Type: Application
    Filed: September 21, 2021
    Publication date: March 23, 2023
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Abhishek R. Appu, Timothy R. Bauer, James Valerio, Weiyu Chen, Subramaniam Maiyuran, Prasoonkumar Surti, Karthik Vaidyanathan, Carsten Benthin, Sven Woop, Jiasheng Chen
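    A tiny sketch of the immediate-offset idea in publication 20230090973 above: the offset travels with the memory-access message and is added inside the load/store circuitry, so no separate address-adjusting instruction has to run first. Names are illustrative.
    ```python
    def issue_load(base_address: int, immediate_offset: int) -> int:
        """Model a load message that carries an immediate offset alongside its address."""
        effective_address = base_address + immediate_offset  # adjusted by the load/store circuitry
        return effective_address

    print(hex(issue_load(0x1000, 0x40)))  # 0x1040
    ```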
  • Patent number: 11610564
    Abstract: A mechanism is described for facilitating consolidated compression/de-compression of graphics data streams of varying types at computing devices. A method of embodiments, as described herein, includes generating a common sector cache relating to a graphics processor. The method may further include performing a consolidated compression of multiple types of graphics data streams associated with the graphics processor using the common sector cache.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: March 21, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Joydeep Ray, Prasoonkumar Surti, Altug Koker, Kiran C. Veernapu, Erik G. Liskay
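    A loose sketch (all names assumed) for patent 11610564 above: different types of graphics data streams are broken into sectors and compressed through one shared path backed by a common sector cache.
    ```python
    import zlib

    class CommonSectorCache:
        def __init__(self, sector_bytes: int = 64):
            self.sector_bytes = sector_bytes
            self.sectors: dict[tuple[str, int], bytes] = {}  # (stream type, sector index) -> compressed

        def store(self, stream_type: str, data: bytes) -> None:
            for i in range(0, len(data), self.sector_bytes):
                sector = data[i:i + self.sector_bytes]
                # The same compression path serves every stream type (color, depth, media, ...).
                self.sectors[(stream_type, i // self.sector_bytes)] = zlib.compress(sector)

    cache = CommonSectorCache()
    cache.store("color", bytes(256))
    cache.store("depth", bytes(range(128)) * 2)
    print(len(cache.sectors))  # 8 sectors, all through the one consolidated path
    ```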
  • Patent number: 11609856
    Abstract: In an example, an apparatus comprises a plurality of processing unit cores, a plurality of cache memory modules associated with the plurality of processing unit cores, and a machine learning model communicatively coupled to the plurality of processing unit cores, wherein the plurality of cache memory modules share cache coherency data with the machine learning model. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: March 21, 2023
    Assignee: Intel Corporation
    Inventors: Chandrasekaran Sakthivel, Prasoonkumar Surti, John C. Weast, Sara S. Baghsorkhi, Justin E. Gottschlich, Abhishek R. Appu, Nicolas C. Galoppo Von Borries, Joydeep Ray, Narayan Srinivasa, Feng Chen, Ben J. Ashbaugh, Rajkishore Barik, Tsung-Han Lin, Kamal Sinha, Eriko Nurvitadhi, Balaji Vembu, Altug Koker
  • Publication number: 20230061670
    Abstract: One embodiment provides an apparatus comprising a memory stack including multiple memory dies and a parallel processor including a plurality of multiprocessors. Each multiprocessor has a single instruction, multiple thread (SIMT) architecture, the parallel processor coupled to the memory stack via one or more memory interfaces. At least one multiprocessor comprises a multiply-accumulate circuit to perform multiply-accumulate operations on matrix data in a stage of a neural network implementation to produce a result matrix comprising a plurality of matrix data elements at a first precision, precision tracking logic to evaluate metrics associated with the matrix data elements and indicate if an optimization is to be performed for representing data at a second stage of the neural network implementation, and a numerical transform unit to dynamically perform a numerical transform operation on the matrix data elements based on the indication to produce transformed matrix data elements at a second precision.
    Type: Application
    Filed: November 1, 2022
    Publication date: March 2, 2023
    Applicant: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Sara S. Baghsorkhi, Anbang Yao, Kevin Nealis, Xiaoming Chen, Altug Koker, Abhishek R. Appu, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Ben J. Ashbaugh, Barath Lakshmanan, Liwei Ma, Joydeep Ray, Ping T. Tang, Michael S. Strickland
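    A hedged numerical sketch, using NumPy, of the idea in publication 20230061670 above: accumulate a matrix product at a first precision, track the element range, and narrow the result for the next stage when the values fit. The tracked metric and the precisions are assumptions, not the patented logic.
    ```python
    import numpy as np

    def matmul_with_precision_tracking(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        result = a.astype(np.float32) @ b.astype(np.float32)  # accumulate at the first precision
        max_abs = float(np.max(np.abs(result)))               # metric evaluated by the tracking logic
        if max_abs < np.finfo(np.float16).max:                # optimization indicated
            return result.astype(np.float16)                  # transform to the second precision
        return result

    a = np.random.rand(4, 8)
    b = np.random.rand(8, 4)
    print(matmul_with_precision_tracking(a, b).dtype)  # float16 when the values fit
    ```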