Patents by Inventor Kiran C. Veernapu

Kiran C. Veernapu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210034135
    Abstract: Described herein are various embodiments of reducing dynamic power consumption within a processor device. One embodiment provides a technique for dynamic link width reduction based on the instantaneous throughput demand for a client of an interconnect fabric. One embodiment provides for a parallel processor comprising an interconnect fabric including a dynamic bus module to configure a bus width for a client of the interconnect fabric based on throughput demand from the client.
    Type: Application
    Filed: August 13, 2020
    Publication date: February 4, 2021
    Applicant: Intel Corporation
    Inventors: Mohammed Tameem, Altug Koker, Kiran C. Veernapu, Abhishek R. Appu, Ankur N. Shah, Joydeep Ray, Travis T. Schluessler, Jonathan Kennedy
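
The dynamic link-width idea in the entry above lends itself to a small behavioral model: pick the narrowest supported bus width that still covers a client's instantaneous throughput demand. The Python sketch below is illustrative only; the width set, the demand units, and the function name are assumptions, not details from the patent.

    # Hypothetical behavioral model: choose the narrowest link width that meets
    # a client's instantaneous throughput demand (expressed in bytes per cycle).
    SUPPORTED_WIDTHS = (8, 16, 32, 64)  # assumed bus widths, in bytes

    def select_link_width(demand_bytes_per_cycle: float) -> int:
        """Return the smallest supported width that covers the demand."""
        for width in SUPPORTED_WIDTHS:
            if width >= demand_bytes_per_cycle:
                return width
        return SUPPORTED_WIDTHS[-1]  # saturate at the full link width

    if __name__ == "__main__":
        for demand in (3.0, 20.0, 100.0):
            print(demand, "->", select_link_width(demand), "byte lanes")
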
  • Patent number: 10908905
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a first number of threads to be scheduled for each context of a plurality of contexts in a multi-context processing system, allocate a second number of streaming multiprocessors (SMs) to the respective plurality of contexts, and dispatch threads from the plurality of contexts only to the streaming multiprocessor(s) allocated to the respective plurality of contexts. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: February 2, 2021
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, Balaji Vembu, Abhishek R. Appu, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu
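
The scheduling behavior described in the entry above (a fixed number of SMs per context, with threads dispatched only to a context's own SMs) can be modeled compactly. The sketch below is a simplified software analogy; the round-robin split and all names are assumptions rather than the patented scheduling policy.

    # Hypothetical sketch: partition streaming multiprocessors (SMs) among
    # contexts, then dispatch a context's threads only to the SMs it owns.
    from itertools import cycle

    def allocate_sms(num_sms: int, contexts: list[str]) -> dict[str, list[int]]:
        """Assign each SM index to exactly one context (round-robin)."""
        owned = {ctx: [] for ctx in contexts}
        ctx_cycle = cycle(contexts)
        for sm in range(num_sms):
            owned[next(ctx_cycle)].append(sm)
        return owned

    def dispatch(context: str, threads: list[int], owned: dict[str, list[int]]):
        """Yield (thread, sm) pairs, using only SMs owned by this context."""
        sms = cycle(owned[context])
        for t in threads:
            yield t, next(sms)

    if __name__ == "__main__":
        owned = allocate_sms(8, ["ctx0", "ctx1"])
        print(owned)   # {'ctx0': [0, 2, 4, 6], 'ctx1': [1, 3, 5, 7]}
        print(list(dispatch("ctx1", [0, 1, 2, 3], owned)))
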
  • Patent number: 10902546
    Abstract: A mechanism is described for facilitating selective skipping of compression cycles in computing devices. A method of embodiments, as described herein, includes facilitating determining a first current output relating to compression of a current set of data to be the same as a previous output from compression of a previous set of data, and turning off a compression engine to skip compression of the current set of data.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: January 26, 2021
    Assignee: Intel Corporation
    Inventors: Kiran C. Veernapu, Abhishek R. Appu, Prasoonkumar Surti, Arijit Mukhopadhyay, Altug Koker, Joydeep Ray
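
One plausible reading of the compression-skip mechanism above: if the current block would compress to the same output as the previous block, keep the compression engine off and reuse the previous result. The Python sketch below models that by detecting identical input blocks (a proxy for identical output); the class and function names are assumptions.

    # Hypothetical sketch: skip the compression engine when the current block
    # is known to produce the same output as the previous block.
    import hashlib
    import zlib

    class CompressionSkipper:
        def __init__(self, compress_fn):
            self._compress = compress_fn
            self._last_digest = None
            self._last_output = None

        def compress(self, block: bytes) -> bytes:
            digest = hashlib.sha256(block).digest()
            if digest == self._last_digest:
                # Same input as last cycle: output is already known, so the
                # compression engine can stay off for this cycle.
                return self._last_output
            self._last_digest = digest
            self._last_output = self._compress(block)
            return self._last_output

    if __name__ == "__main__":
        skipper = CompressionSkipper(zlib.compress)
        first = skipper.compress(b"\x00" * 4096)
        second = skipper.compress(b"\x00" * 4096)   # identical block: engine skipped
        print(first == second)
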
  • Publication number: 20200387399
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive a completion acknowledgment from the plurality of graphics processing units and in response to a determination that the workload is finished, to terminate one or more communication connections on the interconnect bridge. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: June 18, 2020
    Publication date: December 10, 2020
    Applicant: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Balaji Vembu
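
The entry above describes tearing down interconnect-bridge connections once every GPU reports that the workload is finished. A minimal sketch of that bookkeeping follows, with all class and method names assumed for illustration:

    # Hypothetical sketch: close interconnect bridge connections only after
    # every attached GPU has acknowledged completion of the workload.
    class BridgeManager:
        def __init__(self, gpu_ids):
            self.pending = set(gpu_ids)      # GPUs that have not yet acked
            self.connections_open = True

        def on_completion_ack(self, gpu_id):
            self.pending.discard(gpu_id)
            if not self.pending and self.connections_open:
                self.terminate_connections()

        def terminate_connections(self):
            self.connections_open = False
            print("workload finished: closing interconnect bridge links")

    if __name__ == "__main__":
        mgr = BridgeManager(["gpu0", "gpu1"])
        mgr.on_completion_ack("gpu0")
        mgr.on_completion_ack("gpu1")        # last ack triggers teardown
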
  • Patent number: 10852806
    Abstract: Methods and apparatus relating to techniques for a dual path sequential element to reduce toggles in data path are described. In an embodiment, switching logic causes signals for a single data path of a processor to be directed to at least two separate data paths. At least one of the two separate data paths is power gated to reduce signal toggles in the at least one data path. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Subramaniam Maiyuran, Sanjeev S. Jahagirdar, Kiran C. Veernapu, Eric J. Asperheim, Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
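
The dual-path sequential element above splits one logical data path into two physical paths and power-gates the idle one so its flops do not toggle. The behavioral model below only captures the steering decision; the real mechanism is circuit-level, and every name here is an assumption.

    # Hypothetical behavioral model: steer traffic to one of two paths and
    # treat the other path as power-gated (no toggling) while it is idle.
    class DualPathElement:
        def __init__(self):
            self.active = 0          # index of the currently powered path
            self.toggles = [0, 0]    # toggle counts per path

        def steer(self, value: int, use_path: int) -> int:
            if use_path != self.active:
                self.active = use_path      # swap which path is powered
            self.toggles[self.active] += 1  # only the active path toggles
            return value

    if __name__ == "__main__":
        elem = DualPathElement()
        for v in (1, 2, 3):
            elem.steer(v, use_path=0)
        elem.steer(4, use_path=1)
        print(elem.toggles)   # [3, 1]: the gated path saw no activity while idle
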
  • Patent number: 10831598
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive metadata from an application, wherein the metadata indicates one or more processing operations which can accommodate a predetermined level of bit errors in read operations from memory, determine, from the metadata, pixel data for which error correction code bypass is acceptable, generate one or more error correction code bypass hints for subsequent cache access to the pixel data for which error correction code bypass is acceptable, and transmit the one or more error correction code bypass hints to a graphics processing pipeline. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: November 10, 2020
    Assignee: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray
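
The flow above turns application metadata (which reads can tolerate bit errors) into ECC-bypass hints for later cache accesses. A compact Python sketch of that flow follows; the metadata key 'error_tolerant' and both function names are assumptions for illustration.

    # Hypothetical sketch: derive ECC-bypass hints from application metadata
    # marking surfaces whose reads can tolerate a level of bit errors.
    def build_ecc_bypass_hints(metadata: dict[str, dict]) -> set[str]:
        """Return surface names for which error correction may be bypassed."""
        return {surface for surface, info in metadata.items()
                if info.get("error_tolerant", False)}

    def cache_read(surface: str, bypass_hints: set[str]) -> str:
        if surface in bypass_hints:
            return f"read {surface} (ECC check bypassed)"
        return f"read {surface} (ECC checked)"

    if __name__ == "__main__":
        meta = {"albedo": {"error_tolerant": True},
                "depth":  {"error_tolerant": False}}
        hints = build_ecc_bypass_hints(meta)
        print(cache_read("albedo", hints))
        print(cache_read("depth", hints))
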
  • Publication number: 20200348897
    Abstract: In accordance with some embodiments, the render rate is varied across and/or up and down the display screen. This may be done based on where the user is looking in order to reduce power consumption and/or increase performance. Specifically, the screen display is separated into regions, such as quadrants. Each of these regions is rendered at a rate determined by at least one of what the user is currently looking at, what the user has looked at in the past and/or what it is predicted that the user will look at next. Areas of less focus may be rendered at a lower rate, reducing power consumption in some embodiments.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 5, 2020
    Inventors: Eric J. Asperheim, Subramaniam M. Maiyuran, Kiran C. Veernapu, Sanjeev S. Jahagirdar, Balaji Vembu, Devan Burke, Philip R. Laws, Kamal Sinha, Abhishek R. Appu, Elmoustapha Ould-Ahmed-Vall, Peter L. Doyle, Joydeep Ray, Travis T. Schluessler, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Altug Koker
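
The region-based render-rate idea above (render where the user is looking at a higher rate than the periphery) reduces to a small amount of arithmetic once the gaze position is known. The sketch below uses fixed quadrants and two example rates; the resolution, rates, and function names are assumptions.

    # Hypothetical sketch: split the screen into quadrants and render the
    # quadrant under the user's gaze at a higher rate than the rest.
    def quadrant_of(gaze_x: float, gaze_y: float, width: int, height: int) -> int:
        """Quadrants: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right."""
        col = 1 if gaze_x >= width / 2 else 0
        row = 1 if gaze_y >= height / 2 else 0
        return row * 2 + col

    def render_rates(gaze_x, gaze_y, width=1920, height=1080,
                     focus_hz=90, periphery_hz=30):
        focused = quadrant_of(gaze_x, gaze_y, width, height)
        return [focus_hz if q == focused else periphery_hz for q in range(4)]

    if __name__ == "__main__":
        print(render_rates(1700, 200))   # gaze in the top-right quadrant
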
  • Patent number: 10817296
    Abstract: In an example, an apparatus comprises a plurality of execution units, and logic, at least partially including hardware logic, to assemble a general register file (GRF) message and hold the GRF message in storage in a data port until all data for the GRF message is received. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: October 27, 2020
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Ramkumar Ravikumar, Kiran C. Veernapu, Prasoonkumar Surti, Vasanth Ranganathan
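
The data-port behavior above (hold a general register file message until every piece of it has arrived) maps naturally onto a small reassembly buffer. In the sketch below, the fragment count, indices, and class name are assumptions used only to illustrate the hold-until-complete behavior.

    # Hypothetical sketch: buffer GRF message fragments in the data port and
    # release the assembled message only when all fragments have arrived.
    class GrfMessageAssembler:
        def __init__(self, expected_fragments: int):
            self.expected = expected_fragments
            self.fragments: dict[int, bytes] = {}

        def receive(self, index: int, payload: bytes):
            """Store one fragment; return the full message once complete."""
            self.fragments[index] = payload
            if len(self.fragments) == self.expected:
                message = b"".join(self.fragments[i] for i in range(self.expected))
                self.fragments.clear()
                return message
            return None    # still held in the data port

    if __name__ == "__main__":
        port = GrfMessageAssembler(expected_fragments=3)
        print(port.receive(0, b"AA"))      # None: incomplete, held
        print(port.receive(2, b"CC"))      # None: incomplete, held
        print(port.receive(1, b"BB"))      # b'AABBCC': complete, released
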
  • Publication number: 20200327068
    Abstract: One embodiment provides for a graphics processor comprising a translation lookaside buffer (TLB) to cache a first page table entry for a virtual to physical address mapping for use by the graphics processor, the first page table entry to indicate that a first virtual page is cleared to a clear color and a graphics pipeline to bypass a memory access for the first virtual page based on the first page table entry, wherein the graphics pipeline is to read a field in the first page table entry to determine a value of the clear color.
    Type: Application
    Filed: March 26, 2020
    Publication date: October 15, 2020
    Applicant: Intel Corporation
    Inventors: Prasoonkumar Surti, Abhishek R. Appu, Kiran C. Veernapu
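
The page-table mechanism above stores a "cleared to clear color" indication (and the color itself) in the page table entry so reads of a cleared page never touch memory. A minimal model of that read path, with the PTE layout and packed-color encoding assumed for illustration:

    # Hypothetical sketch: a page table entry records that a virtual page is
    # cleared to a clear color, so reads can return the color without a
    # memory access.
    from dataclasses import dataclass

    @dataclass
    class PageTableEntry:
        physical_page: int
        cleared: bool = False
        clear_color: int = 0            # packed RGBA (assumed encoding)

    def read_pixel(pte: PageTableEntry, offset: int, memory: dict) -> int:
        if pte.cleared:
            return pte.clear_color                  # memory access bypassed
        return memory[(pte.physical_page, offset)]  # normal path

    if __name__ == "__main__":
        cleared_page = PageTableEntry(physical_page=7, cleared=True,
                                      clear_color=0xFF000000)
        print(hex(read_pixel(cleared_page, offset=128, memory={})))
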
  • Patent number: 10783084
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: September 22, 2020
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
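
The sectored-cache organization above (each sector groups at least two cache lines under one tag) is easiest to see in the address decomposition. The line size, sector size, and set count in the sketch below are assumed example values, not figures from the patent.

    # Hypothetical sketch of sectored-cache addressing: adjacent lines in the
    # same sector share one tag, so tag storage is amortized across lines.
    LINE_BYTES = 64
    LINES_PER_SECTOR = 2
    SECTOR_BYTES = LINE_BYTES * LINES_PER_SECTOR
    NUM_SETS = 64

    def decompose(address: int):
        """Split an address into (tag, set index, line within sector, byte offset)."""
        offset = address % LINE_BYTES
        line = (address // LINE_BYTES) % LINES_PER_SECTOR
        set_index = (address // SECTOR_BYTES) % NUM_SETS
        tag = address // (SECTOR_BYTES * NUM_SETS)
        return tag, set_index, line, offset

    if __name__ == "__main__":
        print(decompose(0x12300))   # (9, 6, 0, 0)
        print(decompose(0x12340))   # (9, 6, 1, 0): other line of the same sector
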
  • Patent number: 10769072
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive, in a read/modify/write (RMW) pipeline, a cache access request from a requestor, wherein the cache request comprises a cache set identifier associated with requested data in the cache set, determine whether the cache set associated with the cache set identifier is in an inaccessible or invalid state, and in response to a determination that the cache set is in an inaccessible state or an invalid state, to terminate the cache access request. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Prasoonkumar Surti, Kamal Sinha, Kiran C. Veernapu, Balaji Vembu
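
The early-out described above (drop a cache access when the addressed set is inaccessible or invalid, before any read/modify/write work is done) is a simple state check. The state encoding and names below are assumptions.

    # Hypothetical sketch: terminate a cache access early when the addressed
    # set is in an inaccessible or invalid state.
    ACCESSIBLE, INACCESSIBLE, INVALID = "accessible", "inaccessible", "invalid"

    def handle_cache_access(set_id: int, set_state: dict[int, str]) -> str:
        state = set_state.get(set_id, INVALID)
        if state in (INACCESSIBLE, INVALID):
            return "terminated"        # no lookup or RMW work is performed
        return "proceed to read/modify/write"

    if __name__ == "__main__":
        states = {0: ACCESSIBLE, 1: INACCESSIBLE}
        print(handle_cache_access(0, states))
        print(handle_cache_access(1, states))
        print(handle_cache_access(9, states))   # unknown set treated as invalid
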
  • Patent number: 10769818
    Abstract: A mechanism is described for facilitating smart compression/decompression schemes at computing devices. A method of embodiments, as described herein, includes unifying a first compression scheme relating to three-dimensional (3D) content and a second compression scheme relating to media content into a unified compression scheme to perform compression of one or more of the 3D content and the media content relating to a processor including a graphics processor.
    Type: Grant
    Filed: April 9, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Kiran C. Veernapu, Prasoonkumar Surti, Joydeep Ray, Altug Koker, Eric G. Liskay
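
The unification described above replaces separate 3D and media compression schemes with one shared scheme. The sketch below simply routes both stream classes through a single codec, with zlib standing in for the real hardware scheme; everything about it is illustrative.

    # Hypothetical sketch: one compression path serves both 3D and media
    # content, with the stream type kept only as a tag.
    import zlib   # stands in for the unified hardware compression scheme

    def unified_compress(stream_type: str, payload: bytes) -> bytes:
        header = b"3D" if stream_type == "3d" else b"MD"
        return header + zlib.compress(payload)

    if __name__ == "__main__":
        print(len(unified_compress("3d", b"\x00" * 1024)))
        print(len(unified_compress("media", bytes(range(256)) * 4)))
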
  • Patent number: 10761589
    Abstract: Described herein are various embodiments of reducing dynamic power consumption within a processor device. One embodiment provides a technique for dynamic link width reduction based on the instantaneous throughput demand for a client of an interconnect fabric. One embodiment provides for a parallel processor comprising an interconnect fabric including a dynamic bus module to configure a bus width for a client of the interconnect fabric based on throughput demand from the client.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Mohammed Tameem, Altug Koker, Kiran C. Veernapu, Abhishek R. Appu, Ankur N. Shah, Joydeep Ray, Travis T. Schluessler, Jonathan Kennedy
  • Publication number: 20200272215
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to collect user information for a user of a data processing device, generate a user profile for the user of the data processing device from the user information, and set a power profile for a processor in the data processing device using the user profile. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: February 28, 2020
    Publication date: August 27, 2020
    Applicant: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Balaji Vembu, Prasoonkumar Surti, Kamal Sinha, Eric J. Hoekstra, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Travis T. Schluessler, Ankur N. Shah, Jonathan Kennedy
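
The entry above derives a power profile for the processor from a user profile built out of collected usage information. The classification rule and the profile table in the sketch below are invented for illustration; the patent record does not define concrete profiles.

    # Hypothetical sketch: classify usage into a user profile, then map that
    # profile to processor power limits.
    from collections import Counter

    def build_user_profile(app_usage_minutes: dict[str, int]) -> str:
        """Return the dominant usage category, e.g. 'gaming' or 'media'."""
        return Counter(app_usage_minutes).most_common(1)[0][0]

    def power_profile_for(user_profile: str) -> dict[str, int]:
        table = {   # assumed example limits
            "gaming":       {"gpu_max_mhz": 1800, "cpu_max_mhz": 4500},
            "media":        {"gpu_max_mhz": 900,  "cpu_max_mhz": 2400},
            "productivity": {"gpu_max_mhz": 600,  "cpu_max_mhz": 3000},
        }
        return table.get(user_profile, table["productivity"])

    if __name__ == "__main__":
        usage = {"media": 340, "gaming": 55, "productivity": 120}
        print(power_profile_for(build_user_profile(usage)))
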
  • Publication number: 20200241622
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive data for a current write operation to a memory, determine a number of bits in the received data for the current write operation to the memory which have changed from a previous write operation to the memory, and, in response to a determination that the number of bits in the received data for the current write operation to the memory which have changed from a previous write operation to the memory exceeds a threshold, to toggle a plurality of bits in the data for the current write operation to create an encoded data set and set an indicator bit to a value which indicates that the plurality of bits have been toggled. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: February 5, 2020
    Publication date: July 30, 2020
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Eric J. Hoekstra, Kiran C. Veernapu, Prasoonkumar Surti, Vasanth Ranganathan, Kamal Sinha, Balaji Vembu, Eric J. Asperheim, Sanjeev S. Jahagirdar, Joydeep Ray
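
The encoding above is similar in spirit to bus-invert coding: if more than a threshold number of bits would change relative to the previous write, store the inverted data and set an indicator bit so the reader can undo the inversion. The bus width and threshold in the sketch below are assumed example values.

    # Hypothetical sketch: invert the payload when too many bits would toggle,
    # and record that choice in an indicator bit.
    BUS_WIDTH = 32

    def encode_write(current: int, previous: int, threshold: int = BUS_WIDTH // 2):
        mask = (1 << BUS_WIDTH) - 1
        changed = bin((current ^ previous) & mask).count("1")
        if changed > threshold:
            return (~current) & mask, 1      # inverted payload, indicator set
        return current & mask, 0             # payload as-is, indicator clear

    def decode_write(payload: int, indicator: int) -> int:
        mask = (1 << BUS_WIDTH) - 1
        return (~payload) & mask if indicator else payload

    if __name__ == "__main__":
        prev, data = 0x00000000, 0xFFFFFF0F  # 28 of 32 bits differ -> invert
        payload, flag = encode_write(data, prev)
        assert decode_write(payload, flag) == data
        print(hex(payload), flag)            # 0xf0 1
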
  • Publication number: 20200210238
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising at least a first type of execution unit and a second type of execution unit, and logic, at least partially including hardware logic, to analyze a workload and assign the workload to one of the first type of execution unit or the second type of execution unit. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: December 24, 2019
    Publication date: July 2, 2020
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Balaji Vembu, Joydeep Ray, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu, Subramaniam Maiyuran, Sanjeev S. Jahagirdar, Eric J. Asperheim, Guei-Yuan Lueh, David Puffer, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Josh B. Mastronarde, Linda L. Hurd, Travis T. Schluessler, Tomasz Janczak, Abhishek Venkatesh, Kai Xiao, Slawomir Grajewski
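
The assignment logic above analyzes a workload and picks between two execution unit (EU) types. The analysis features and the heuristic in the sketch below are assumptions; they only illustrate the analyze-then-assign shape of the mechanism.

    # Hypothetical sketch: route a workload to one of two EU types based on a
    # simple analysis of its characteristics.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        simd_width: int                 # assumed analysis outputs
        uses_double_precision: bool

    def assign_eu_type(w: Workload) -> str:
        if w.uses_double_precision or w.simd_width >= 16:
            return "EU_TYPE_A"          # larger, higher-throughput units
        return "EU_TYPE_B"              # smaller, lower-power units

    if __name__ == "__main__":
        print(assign_eu_type(Workload("physics", simd_width=32, uses_double_precision=True)))
        print(assign_eu_type(Workload("ui_blit", simd_width=8, uses_double_precision=False)))
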
  • Patent number: 10691392
    Abstract: In accordance with some embodiments, the render rate is varied across and/or up and down the display screen. This may be done based on where the user is looking in order to reduce power consumption and/or increase performance. Specifically, the screen display is separated into regions, such as quadrants. Each of these regions is rendered at a rate determined by at least one of what the user is currently looking at, what the user has looked at in the past and/or what it is predicted that the user will look at next. Areas of less focus may be rendered at a lower rate, reducing power consumption in some embodiments.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: June 23, 2020
    Assignee: Intel Corporation
    Inventors: Eric J. Asperheim, Subramaniam M. Maiyuran, Kiran C. Veernapu, Sanjeev S. Jahagirdar, Balaji Vembu, Devan Burke, Philip R. Laws, Kamal Sinha, Abhishek R. Appu, Elmoustapha Ould-Ahmed-Vall, Peter L. Doyle, Joydeep Ray, Travis T. Schluessler, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Altug Koker
  • Patent number: 10691497
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive a completion acknowledgment from the plurality of graphics processing units and in response to a determination that the workload is finished, to terminate one or more communication connections on the interconnect bridge. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: June 23, 2020
    Assignee: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Balaji Vembu
  • Publication number: 20200183849
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: December 3, 2019
    Publication date: June 11, 2020
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
  • Publication number: 20200175948
    Abstract: A mechanism is described for facilitating consolidated compression/de-compression of graphics data streams of varying types at computing devices. A method of embodiments, as described herein, includes generating a common sector cache relating to a graphics processor. The method may further include performing a consolidated compression of multiple types of graphics data streams associated with the graphics processor using the common sector cache.
    Type: Application
    Filed: September 30, 2019
    Publication date: June 4, 2020
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Joydeep Ray, Prasoonkumar Surti, Altug Koker, Kiran C. Veernapu, Eric G. Liskay
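
The consolidation above compresses several kinds of graphics data streams through one common sector cache instead of per-stream structures. In the sketch below the sector size is an assumed value and zlib stands in for the hardware compressor; the point is only that every stream type shares the same sector store.

    # Hypothetical sketch: multiple graphics stream types are compressed
    # sector-by-sector into one shared ("common") sector cache.
    import zlib

    class CommonSectorCache:
        def __init__(self, sector_bytes: int = 128):
            self.sector_bytes = sector_bytes
            self.sectors: dict[tuple[str, int], bytes] = {}

        def compress_stream(self, stream_type: str, data: bytes) -> int:
            """Compress one stream into the shared cache; return stored bytes."""
            stored = 0
            for i in range(0, len(data), self.sector_bytes):
                sector = zlib.compress(data[i:i + self.sector_bytes])
                self.sectors[(stream_type, i // self.sector_bytes)] = sector
                stored += len(sector)
            return stored

    if __name__ == "__main__":
        cache = CommonSectorCache()
        print(cache.compress_stream("color", b"\x10" * 512))
        print(cache.compress_stream("media", bytes(range(256)) * 2))
        print(len(cache.sectors), "sectors shared across stream types")
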