Patents by Inventor Kiran C. Veernapu

Kiran C. Veernapu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200150741
    Abstract: Methods and apparatus relating to techniques for a dual path sequential element to reduce toggles in data path are described. In an embodiment, switching logic causes signals for a single data path of a processor to be directed to at least two separate data paths. At least one of the two separate data paths is power gated to reduce signal toggles in the at least one data path. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: October 23, 2019
    Publication date: May 14, 2020
    Applicant: Intel Corporation
    Inventors: Subramaniam Maiyuran, Sanjeev S. Jahagirdar, Kiran C. Veernapu, Eric J. Asperheim, Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
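    Example (illustrative sketch): a small Python model of the routing idea in the abstract above, under the assumption that "power gating" can be represented as a flag that freezes one of two latches; the DualPathElement class and its fields are hypothetical, not the patented circuit.
      class DualPathElement:
          """Model of one sequential element whose input can land on either of two paths."""
          def __init__(self):
              self.paths = [None, None]   # last latched value on each physical path
              self.gated = [False, True]  # path 1 starts power-gated (no toggles)

          def write(self, value, select):
              """Latch value on the selected path; keep the other path gated and static."""
              other = 1 - select
              self.gated[select] = False  # ungate the active path
              self.gated[other] = True    # gated path holds its state, so it never toggles
              self.paths[select] = value
              return self.paths[select]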
  • Publication number: 20200117455
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a first number of threads to be scheduled for each context of a plurality of contexts in a multi-context processing system, allocate a second number of streaming multiprocessors (SMs) to the respective plurality of contexts, and dispatch threads from the plurality of contexts only to the streaming multiprocessor(s) allocated to the respective plurality of contexts. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: October 11, 2019
    Publication date: April 16, 2020
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, Balaji Vembu, Abhishek R. Appu, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu
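    Example (illustrative sketch): one way to model the per-context SM allocation described above in Python, assuming an even split of SM ids across contexts; allocate_sms, dispatch and the round-robin counter are hypothetical names, not the claimed implementation.
      def allocate_sms(contexts, total_sms):
          """Partition SM ids across contexts (any remainder goes to the first contexts)."""
          base, extra = divmod(total_sms, len(contexts))
          alloc, next_sm = {}, 0
          for i, ctx in enumerate(contexts):
              count = base + (1 if i < extra else 0)
              alloc[ctx] = list(range(next_sm, next_sm + count))
              next_sm += count
          return alloc

      def dispatch(thread, ctx, alloc, rr_state):
          """Round-robin a context's threads over only the SMs allocated to that context."""
          sms = alloc[ctx]
          idx = rr_state.setdefault(ctx, 0)
          rr_state[ctx] = idx + 1
          return thread, sms[idx % len(sms)]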
  • Patent number: 10621109
    Abstract: One embodiment provides for a graphics processor comprising a translation lookaside buffer (TLB) to cache a first page table entry for a virtual to physical address mapping for use by the graphics processor, the first page table entry to indicate that a first virtual page is cleared to a clear color and a graphics pipeline to bypass a memory access for the first virtual page based on the first page table entry, wherein the graphics pipeline is to read a field in the first page table entry to determine a value of the clear color.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Abhishek R. Appu, Kiran C. Veernapu
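    Example (illustrative sketch): the bypass idea above expressed in Python, assuming the page table entry carries a cleared flag and a packed clear-color field; PageTableEntry and read_pixel are hypothetical names used only for illustration.
      from dataclasses import dataclass

      @dataclass
      class PageTableEntry:
          phys_addr: int
          cleared: bool = False     # page is known to hold only the clear color
          clear_color: int = 0      # clear-color value stored in a PTE field

      def read_pixel(pte, offset, memory):
          if pte.cleared:                        # bypass the memory access entirely
              return pte.clear_color
          return memory[pte.phys_addr + offset]  # normal path: fetch from memory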
  • Publication number: 20200110721
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to monitor a thread switching overhead parameter for an application executing in a processing system and in response to a determination that the thread switching overhead parameter exceeds a threshold, to activate a thread management algorithm to reduce thread switching in the processing system. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: October 11, 2019
    Publication date: April 9, 2020
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kiran C. Veernapu, Balaji Vembu, Vasanth Ranganathan, Prasoonkumar Surti
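    Example (illustrative sketch): the threshold-triggered behavior described above, sketched in Python; the 10% threshold, ThreadPolicy object and sample_overhead callback are assumptions for the example.
      class ThreadPolicy:
          """Stand-in for a thread-management algorithm that can be switched on or off."""
          def __init__(self):
              self.active = False
          def activate(self):       # e.g. batch more work per thread, limit preemption
              self.active = True
          def deactivate(self):
              self.active = False

      def monitor_thread_switching(sample_overhead, policy, threshold=0.10):
          """sample_overhead() returns the fraction of recent time spent switching threads."""
          overhead = sample_overhead()
          if overhead > threshold and not policy.active:
              policy.activate()
          elif overhead <= threshold and policy.active:
              policy.deactivate()
          return overhead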
  • Publication number: 20200104166
    Abstract: A mechanism is described to facilitate microcontroller-based flexible thread scheduling and launching in computing environments. An apparatus of embodiments, as described herein, includes facilitating a graphics processor hosting a microcontroller having a thread scheduling unit, and detection and observation logic to detect a scheduling algorithm associated with an application at the apparatus. The apparatus may further include reading and dispatching logic to facilitate the microcontroller to prepare a flexible dispatch routine based on the scheduling algorithm. The apparatus may further include scheduling and launching logic to facilitate the thread scheduling unit to dynamically schedule and launch threads based on the flexible dispatch routine, where the threads are hosted by the graphics processor.
    Type: Application
    Filed: July 9, 2019
    Publication date: April 2, 2020
    Applicant: Intel Corporation
    Inventors: Kiran C. Veernapu, Kamlesh Pillai, James Valerio, Joydeep Ray, Abhishek Appu
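    Example (illustrative sketch): one way to picture a "flexible dispatch routine" chosen per application, in Python; the algorithm names (round_robin, fill_first) and functions are hypothetical, not the microcontroller firmware.
      def make_dispatch_routine(algorithm):
          """Return a dispatch routine matched to the application's scheduling algorithm."""
          if algorithm == "round_robin":
              return lambda threads, units: [(t, units[i % len(units)])
                                             for i, t in enumerate(threads)]
          if algorithm == "fill_first":
              return lambda threads, units: [(t, units[0]) for t in threads]
          raise ValueError(f"unknown scheduling algorithm: {algorithm}")

      def schedule_and_launch(app_hint, threads, execution_units):
          dispatch = make_dispatch_routine(app_hint)   # per-application routine
          return dispatch(threads, execution_units)    # list of (thread, unit) pairs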
  • Patent number: 10579121
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to collect user information for a user of a data processing device, generate a user profile for the user of the data processing device from the user information, and set a power profile for a processor in the data processing device using the user profile. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Balaji Vembu, Prasoonkumar Surti, Kamal Sinha, Eric J. Hoekstra, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Travis T. Schluessler, Ankur N. Shah, Jonathan Kennedy
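    Example (illustrative sketch): deriving a coarse user profile and mapping it to a power profile, in Python; the sampled fields, categories and thresholds are invented for the example, not taken from the patent.
      def build_user_profile(samples):
          """samples: list of dicts such as {'gpu_busy': 0.7, 'on_battery': True}."""
          avg_busy = sum(s["gpu_busy"] for s in samples) / len(samples)
          mostly_battery = sum(s["on_battery"] for s in samples) > len(samples) / 2
          return {"avg_gpu_busy": avg_busy, "mostly_battery": mostly_battery}

      def choose_power_profile(profile):
          if profile["mostly_battery"] and profile["avg_gpu_busy"] < 0.3:
              return "power_saver"
          if profile["avg_gpu_busy"] > 0.7:
              return "performance"
          return "balanced"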
  • Patent number: 10558254
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive data for a current write operation to a memory, determine a number of bits in the received data for the current write operation to the memory which have changed from a previous write operation to the memory and in response to a determination that the number of bits in the received data for the current write operation to the memory which have changed from a previous write operation to the memory exceeds a threshold, to toggle a plurality of bits in the data for the current write operation to create an encoded data set and set an indicator bit to a value which indicates that the plurality of bits have been toggled. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: February 11, 2020
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Eric J. Hoekstra, Kiran C. Veernapu, Prasoonkumar Surti, Vasanth Ranganathan, Kamal Sinha, Balaji Vembu, Eric J. Asperheim, Sanjeev S. Jahagirdar, Joydeep Ray
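    Example (illustrative sketch): the toggle-and-flag scheme in the abstract resembles bus-invert coding; a minimal Python version follows, assuming a 32-bit word and a half-width threshold, with hypothetical function names.
      def encode_write(prev_word, new_word, width=32):
          """Invert the word and set the indicator bit when too many bits changed."""
          mask = (1 << width) - 1
          changed = bin((prev_word ^ new_word) & mask).count("1")
          if changed > width // 2:              # threshold: half the bus width
              return (~new_word) & mask, 1      # toggled data, indicator set
          return new_word & mask, 0             # unchanged data, indicator clear

      def decode_read(stored_word, indicator, width=32):
          mask = (1 << width) - 1
          return (~stored_word) & mask if indicator else stored_word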
  • Publication number: 20200042417
    Abstract: Methods and apparatus relating to techniques for power management. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to generate a voltage/frequency curve for at least one of a core or a sub-core in a processor and manage an operating voltage level of the at least one of a core or a sub-core using the voltage/frequency curve. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: July 30, 2019
    Publication date: February 6, 2020
    Applicant: Intel Corporation
    Inventors: Nikos Kaburlasos, Balaji Vembu, Josh B. Mastronarde, Altug Koker, Eric C. Samson, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Vasanth Ranganathan, Sanjeev S. Jahagirdar
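    Example (illustrative sketch): a voltage/frequency curve as a lookup in Python; the sample points and function name are made up for illustration, not silicon characterization data.
      VF_CURVE = [(300, 0.65), (600, 0.75), (900, 0.85), (1200, 1.00)]  # (MHz, V), hypothetical

      def voltage_for_frequency(freq_mhz, curve=VF_CURVE):
          """Pick the lowest curve voltage that supports the requested frequency."""
          for f, v in curve:           # curve is sorted by ascending frequency
              if freq_mhz <= f:
                  return v
          raise ValueError("requested frequency is above the top of the curve")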
  • Publication number: 20200027192
    Abstract: A mechanism is described for facilitating dynamic cache allocation in computing devices. A method of embodiments, as described herein, includes facilitating monitoring one or more bandwidth consumptions of one or more clients accessing a cache associated with a processor; computing one or more bandwidth requirements of the one or more clients based on the one or more bandwidth consumptions; and allocating one or more portions of the cache to the one or more clients in accordance with the one or more bandwidth requirements.
    Type: Application
    Filed: August 5, 2019
    Publication date: January 23, 2020
    Applicant: Intel Corporation
    Inventors: Kiran C. Veernapu, Mohammed Tameem, Altug Koker, Abhishek R. Appu
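    Example (illustrative sketch): bandwidth-proportional cache allocation in Python, assuming a way-granular cache of 16 ways; the client names and numbers are placeholders.
      def allocate_cache_ways(bandwidth_by_client, total_ways=16):
          """bandwidth_by_client: e.g. {'3d': 12.0, 'media': 4.0, 'display': 2.0} in GB/s."""
          total_bw = sum(bandwidth_by_client.values())
          alloc = {client: max(1, round(total_ways * bw / total_bw))
                   for client, bw in bandwidth_by_client.items()}
          while sum(alloc.values()) > total_ways:   # trim rounding overshoot
              biggest = max(alloc, key=alloc.get)
              alloc[biggest] -= 1
          return alloc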
  • Publication number: 20200026514
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a first general register file (GRF) communicatively coupled to the plurality of execution units, wherein the first GRF is shared by the plurality of execution units. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: July 30, 2019
    Publication date: January 23, 2020
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kamal Sinha, Kiran C. Veernapu, Subramaniam Maiyuran, Prasoonkumar Surti, Guei-Yuan Lueh, David Puffer, Supratim Pal, Eric J. Hoekstra, Travis T. Schluessler, Linda L. Hurd
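    Example (illustrative sketch): the structural idea of one register file shared by several execution units, modeled in Python; sizes, the ownership map and the allocation policy are assumptions, not the hardware design.
      class SharedGRF:
          def __init__(self, num_registers=256):
              self.registers = [0] * num_registers
              self.owner = {}                      # register index -> execution unit id

          def allocate(self, eu_id, count):
              """Hand `count` currently free registers to one execution unit."""
              free = [i for i in range(len(self.registers)) if i not in self.owner]
              if len(free) < count:
                  raise MemoryError("shared GRF exhausted")
              grant = free[:count]
              for reg in grant:
                  self.owner[reg] = eu_id
              return grant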
  • Patent number: 10521271
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising at least a first type of execution unit and a second type of execution unit and logic, at least partially including hardware logic, to analyze a workload and assign the workload to one of the first type of execution unit or the second type of execution unit. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: December 31, 2019
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Balaji Vembu, Joydeep Ray, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu, Subramaniam Maiyuran, Sanjeev S. Jahagirdar, Eric J. Asperheim, Guei-Yuan Lueh, David Puffer, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Josh B. Mastronarde, Linda L. Hurd, Travis T. Schluessler, Tomasz Janczak, Abhishek Venkatesh, Kai Xiao, Slawomir Grajewski
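    Example (illustrative sketch): routing a workload to one of two execution-unit types in Python; the workload features and the decision rule are invented stand-ins for whatever analysis the logic actually performs.
      def assign_execution_unit(workload):
          """workload: dict such as {'alu_ops': 900, 'texture_ops': 50, 'branches': 10}."""
          if workload["texture_ops"] > 0.2 * workload["alu_ops"]:
              return "type_b"   # e.g. units tuned for sampler-heavy graphics work
          return "type_a"       # e.g. ALU-dense units for compute-style work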
  • Publication number: 20190391871
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive metadata from an application, wherein the metadata indicates one or more processing operations which can accommodate a predetermined level of bit errors in read operations from memory, determine, from the metadata, pixel data for which error correction code bypass is acceptable, generate one or more error correction code bypass hints for subsequent cache access to the pixel data for which error correction code bypass is acceptable, and transmit the one or more error correction code bypass hints to a graphics processing pipeline. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: February 25, 2019
    Publication date: December 26, 2019
    Applicant: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray
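    Example (illustrative sketch): turning application metadata into ECC-bypass hints, in Python; the metadata shape, hint string and cache object are hypothetical.
      def build_ecc_bypass_hints(metadata, surfaces):
          """metadata: {'error_tolerant_ops': ['blur']}; surfaces: op name -> surface id."""
          tolerant = set(metadata.get("error_tolerant_ops", []))
          return {surface: "ECC_BYPASS_OK"
                  for op, surface in surfaces.items() if op in tolerant}

      def read_pixels(surface, hints, cache):
          skip_ecc = hints.get(surface) == "ECC_BYPASS_OK"  # hint rides with the access
          return cache.read(surface, check_ecc=not skip_ecc)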
  • Patent number: 10503652
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: December 10, 2019
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
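    Example (illustrative sketch): address decomposition for a sectored cache in Python, assuming 64-byte lines, two lines per sector and 1024 sectors; all constants are placeholders.
      LINE_BYTES = 64
      LINES_PER_SECTOR = 2
      SECTOR_BYTES = LINE_BYTES * LINES_PER_SECTOR

      def decompose_address(addr, num_sectors=1024):
          """Split an address into (tag, sector index, line within sector, byte offset)."""
          offset = addr % LINE_BYTES
          line_in_sector = (addr // LINE_BYTES) % LINES_PER_SECTOR
          sector_index = (addr // SECTOR_BYTES) % num_sectors
          tag = addr // (SECTOR_BYTES * num_sectors)
          return tag, sector_index, line_in_sector, offset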
  • Patent number: 10459509
    Abstract: Methods and apparatus relating to techniques for a dual path sequential element to reduce toggles in data path are described. In an embodiment, switching logic causes signals for a single data path of a processor to be directed to at least two separate data paths. At least one of the two separate data paths is power gated to reduce signal toggles in the at least one data path. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: October 29, 2019
    Assignee: Intel Corporation
    Inventors: Subramaniam Maiyuran, Sanjeev S. Jahagirdar, Kiran C. Veernapu, Eric J. Asperheim, Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
  • Patent number: 10452586
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to monitor a thread switching overhead parameter for an application executing in a processing system and in response to a determination that the thread switching overhead parameter exceeds a threshold, to activate a thread management algorithm to reduce thread switching in the processing system. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: October 22, 2019
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kiran C. Veernapu, Balaji Vembu, Vasanth Ranganathan, Prasoonkumar Surti
  • Patent number: 10452397
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a first number of threads to be scheduled for each context of a plurality of contexts in a multi-context processing system, allocate a second number of streaming multiprocessors (SMs) to the respective plurality of contexts, and dispatch threads from the plurality of contexts only to the streaming multiprocessor(s) allocated to the respective plurality of contexts. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: October 22, 2019
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, Balaji Vembu, Abhishek R. Appu, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu
  • Patent number: 10438569
    Abstract: A mechanism is described for facilitating consolidated compression/decompression of graphics data streams of varying types at computing devices. A method of embodiments, as described herein, includes generating a common sector cache relating to a graphics processor. The method may further include performing a consolidated compression of multiple types of graphics data streams associated with the graphics processor using the common sector cache.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: October 8, 2019
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Joydeep Ray, Prasoonkumar Surti, Altug Koker, Kiran C. Veernapu, Eric G. Liskay
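    Example (illustrative sketch): several stream types funneled through one shared cache and one compression routine, in Python; zlib stands in for the real codec and the class is hypothetical.
      import zlib

      class CommonSectorCache:
          """One cache and one compression path shared by all graphics stream types."""
          def __init__(self):
              self.sectors = {}                  # key: (stream_type, sector_id)

          def store(self, stream_type, sector_id, data: bytes):
              self.sectors[(stream_type, sector_id)] = zlib.compress(data)

          def load(self, stream_type, sector_id) -> bytes:
              return zlib.decompress(self.sectors[(stream_type, sector_id)])

      cache = CommonSectorCache()                # color, depth and media share the path
      cache.store("color", 0, b"\x00" * 256)
      cache.store("depth", 0, b"\xff" * 256)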
  • Patent number: 10430310
    Abstract: Methods and apparatus relating to techniques for power management. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to generate a voltage/frequency curve for at least one of a core or a sub-core in a processor and manage an operating voltage level of the at least one of a core or a sub-core using the voltage/frequency curve. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: October 1, 2019
    Assignee: Intel Corporation
    Inventors: Nikos Kaburlasos, Balaji Vembu, Josh B. Mastronarde, Altug Koker, Eric C. Samson, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Vasanth Ranganathan, Sanjeev S. Jahagirdar
  • Patent number: 10423415
    Abstract: Disclosed herein is an apparatus which comprises a plurality of execution units, and a first general register file (GRF) communicatively coupled to the plurality of execution units, wherein the first GRF is shared by the plurality of execution units.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: September 24, 2019
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kamal Sinha, Kiran C. Veernapu, Subramaniam Maiyuran, Prasoonkumar Surti, Guei-Yuan Lueh, David Puffer, Supratim Pal, Eric J. Hoekstra, Travis T. Schluessler, Linda L. Hurd
  • Patent number: 10424040
    Abstract: A mechanism is described for facilitating dynamic cache allocation in computing devices. A method of embodiments, as described herein, includes facilitating monitoring one or more bandwidth consumptions of one or more clients accessing a cache associated with a processor; computing one or more bandwidth requirements of the one or more clients based on the one or more bandwidth consumptions; and allocating one or more portions of the cache to the one or more clients in accordance with the one or more bandwidth requirements.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: September 24, 2019
    Assignee: Intel Corporation
    Inventors: Kiran C. Veernapu, Mohammed Tameem, Altug Koker, Abhishek R. Appu