Patents by Inventor Kiran C. Veernapu

Kiran C. Veernapu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180308212
    Abstract: A mechanism is described for facilitating selective skipping of compression cycles in computing devices. A method of embodiments, as described herein, includes facilitating determining a first current output relating to compression of a current set of data to be the same as a previous output from compression of a previous set of data, and turning off a compression engine to skip compression of the current set of data. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 21, 2017
    Publication date: October 25, 2018
    Applicant: Intel Corporation
    Inventors: Kiran C. Veernapu, Abhishek R. Appu, Prasoonkumar Surti, Arijit Mukhopadhyay, Altug Koker, Joydeep Ray
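    Illustrative sketch (not from the application): a minimal C model of the compression-skip idea in the abstract above, assuming a software-visible compression hook; the hash choice, buffer size, and every identifier are hypothetical.

      #include <stdint.h>
      #include <stddef.h>
      #include <string.h>
      #include <stdbool.h>

      /* Hypothetical compression hook standing in for the hardware engine. */
      typedef size_t (*compress_fn)(const uint8_t *in, size_t len, uint8_t *out);

      /* FNV-1a, used here purely as a cheap change detector. */
      static uint64_t fnv1a(const uint8_t *p, size_t len) {
          uint64_t h = 1469598103934665603ULL;
          for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
          return h;
      }

      /* State for one compression stream; zero-initialize before first use. */
      typedef struct {
          uint64_t prev_hash;
          size_t   prev_out_len;
          uint8_t  prev_out[4096];
          bool     valid;
      } skip_state;

      /* Skip the compression call when the current block matches the previous
       * one and reuse the cached output, i.e. the engine stays "off". */
      size_t compress_or_skip(skip_state *st, compress_fn engine,
                              const uint8_t *in, size_t len, uint8_t *out) {
          uint64_t h = fnv1a(in, len);
          if (st->valid && h == st->prev_hash) {
              memcpy(out, st->prev_out, st->prev_out_len);
              return st->prev_out_len;
          }
          size_t out_len = engine(in, len, out);
          if (out_len <= sizeof(st->prev_out)) {
              st->prev_hash = h;
              st->prev_out_len = out_len;
              memcpy(st->prev_out, out, out_len);
              st->valid = true;
          }
          return out_len;
      }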
  • Publication number: 20180308215
    Abstract: A mechanism is described for facilitating dynamic cache allocation in computing devices. A method of embodiments, as described herein, includes facilitating monitoring one or more bandwidth consumptions of one or more clients accessing a cache associated with a processor; computing one or more bandwidth requirements of the one or more clients based on the one or more bandwidth consumptions; and allocating one or more portions of the cache to the one or more clients in accordance with the one or more bandwidth requirements. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 21, 2017
    Publication date: October 25, 2018
    Applicant: Intel Corporation
    Inventors: Kiran C. Veernapu, Mohammed Tameem, Altug Koker, Abhishek R. Appu
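    Illustrative sketch (not from the application): one way to turn measured per-client bandwidth into a cache-way split, written as a minimal C function; the client count, the way-based partitioning, and all names are assumptions.

      #include <stdint.h>

      #define NUM_CLIENTS 4

      /* Divide cache ways among clients in proportion to the bandwidth each one
       * consumed in the last monitoring window; remainder ways from integer
       * division go to the heaviest consumer. */
      void allocate_cache_ways(const uint64_t bytes_consumed[NUM_CLIENTS],
                               unsigned ways_total,
                               unsigned ways_out[NUM_CLIENTS]) {
          uint64_t total = 0;
          int heaviest = 0;
          for (int i = 0; i < NUM_CLIENTS; i++) {
              total += bytes_consumed[i];
              if (bytes_consumed[i] > bytes_consumed[heaviest])
                  heaviest = i;
          }
          if (total == 0) {                 /* idle: park all ways on client 0 */
              for (int i = 0; i < NUM_CLIENTS; i++)
                  ways_out[i] = (i == 0) ? ways_total : 0;
              return;
          }
          unsigned assigned = 0;
          for (int i = 0; i < NUM_CLIENTS; i++) {
              ways_out[i] = (unsigned)((bytes_consumed[i] * ways_total) / total);
              assigned += ways_out[i];
          }
          ways_out[heaviest] += ways_total - assigned;
      }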
  • Publication number: 20180307295
    Abstract: Described herein are various embodiments of reducing dynamic power consumption within a processor device. One embodiment provides a technique for dynamic link width reduction based on the instantaneous throughput demand of a client of an interconnect fabric. One embodiment provides for a parallel processor comprising an interconnect fabric including a dynamic bus module to configure a bus width for a client of the interconnect fabric based on throughput demand from the client. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 21, 2017
    Publication date: October 25, 2018
    Applicant: Intel Corporation
    Inventors: Mohammed Tameem, Altug Koker, Kiran C. Veernapu, Abhishek R. Appu, Ankur N. Shah, Joydeep Ray, Travis T. Schluessler, Jonathan Kennedy
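    Illustrative sketch (not from the application): a minimal C helper that picks the narrowest link width able to cover a client's instantaneous demand, so the remaining lanes can be power-gated; the width ladder and identifiers are assumptions.

      #include <stdint.h>
      #include <stddef.h>

      /* Candidate link widths in lanes, narrowest to widest. */
      static const unsigned widths[] = { 1, 2, 4, 8, 16 };

      /* Return the narrowest width whose bandwidth still meets the demand;
       * saturate at the widest configuration otherwise. */
      unsigned select_link_width(uint64_t demand_bytes_per_s,
                                 uint64_t bytes_per_s_per_lane) {
          for (size_t i = 0; i < sizeof(widths) / sizeof(widths[0]); i++)
              if ((uint64_t)widths[i] * bytes_per_s_per_lane >= demand_bytes_per_s)
                  return widths[i];
          return widths[sizeof(widths) / sizeof(widths[0]) - 1];
      }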
  • Publication number: 20180300130
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to monitor a thread switching overhead parameter for an application executing in a processing system and in response to a determination that the thread switching overhead parameter exceeds a threshold, to activate a thread management algorithm to reduce thread switching in the processing system. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kiran C. Veernapu, Balaji Vembu, Vasanth Ranganathan, Prasoonkumar Surti
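    Illustrative sketch (not from the application): a minimal C model of the threshold check described above; the counters, the 10% threshold, and the idea of switching to a coarser scheduling mode are assumptions made for illustration.

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical counters sampled once per scheduling window. */
      typedef struct {
          uint64_t cycles_switching;   /* cycles spent in thread switches   */
          uint64_t cycles_total;       /* total cycles in the window        */
          bool     coarse_scheduling;  /* the "thread management algorithm" */
      } thread_stats;

      #define SWITCH_OVERHEAD_THRESHOLD_PCT 10

      /* Enable a coarser scheduling mode (batching work per thread to reduce
       * switches) whenever switching overhead exceeds the threshold. */
      void update_thread_policy(thread_stats *s) {
          if (s->cycles_total == 0)
              return;
          uint64_t overhead_pct = (s->cycles_switching * 100) / s->cycles_total;
          s->coarse_scheduling = (overhead_pct > SWITCH_OVERHEAD_THRESHOLD_PCT);
      }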
  • Publication number: 20180300251
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive, in a read/modify/write (RMW) pipeline, a cache access request from a requestor, wherein the cache access request comprises a cache set identifier associated with requested data in the cache set, determine whether the cache set associated with the cache set identifier is in an inaccessible or invalid state, and in response to a determination that the cache set is in an inaccessible state or an invalid state, to terminate the cache access request. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Prasoonkumar Surti, Kamal Sinha, Kiran C. Veernapu, Balaji Vembu
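    Illustrative sketch (not from the application): a minimal C check for the early-out described above, assuming a per-set state array maintained by the cache controller; the state names and structure are hypothetical.

      #include <stdbool.h>
      #include <stdint.h>

      typedef enum { SET_ACCESSIBLE, SET_INACCESSIBLE, SET_INVALID } set_state;

      typedef struct {
          uint32_t         num_sets;
          const set_state *state;   /* per-set state, e.g. from power gating */
      } cache_dir;

      /* If the target set is powered off (inaccessible) or has never been
       * filled (invalid), terminate the request instead of spending a tag
       * lookup that cannot hit. */
      bool cache_request_allowed(const cache_dir *dir, uint32_t set_id) {
          if (set_id >= dir->num_sets)
              return false;
          return dir->state[set_id] == SET_ACCESSIBLE;
      }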
  • Publication number: 20180300096
    Abstract: In accordance with some embodiments, the render rate is varied across and/or up and down the display screen. This may be done based on where the user is looking in order to reduce power consumption and/or increase performance. Specifically, the screen display is separated into regions, such as quadrants. Each of these regions is rendered at a rate determined by at least one of what the user is currently looking at, what the user has looked at in the past, and/or what the user is predicted to look at next. Areas of less focus may be rendered at a lower rate, reducing power consumption in some embodiments. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Inventors: Eric J. Asperheim, Subramaniam M. Maiyuran, Kiran C. Veernapu, Sanjeev S. Jahagirdar, Balaji Vembu, Devan Burke, Philip R. Laws, Kamal Sinha, Abhishek R. Appu, Elmoustapha Ould-Ahmed-Vall, Peter L. Doyle, Joydeep Ray, Travis T. Schluessler, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Altug Koker
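    Illustrative sketch (not from the application): a minimal C example of per-region render rates driven by the current gaze point, using a 2x2 quadrant split; the rates and region scheme are assumptions.

      /* Map a gaze point to one of four screen quadrants (0..3). */
      typedef struct { int x, y; } point;

      int region_of(point gaze, int width, int height) {
          int col = gaze.x >= width / 2;
          int row = gaze.y >= height / 2;
          return row * 2 + col;
      }

      /* Full rate where the user looks, half rate for quadrants sharing a row
       * or column with the gaze quadrant, quarter rate for the diagonal one. */
      int render_rate_hz(int region, int gaze_region) {
          if (region == gaze_region) return 60;
          int same_row = (region / 2) == (gaze_region / 2);
          int same_col = (region % 2) == (gaze_region % 2);
          return (same_row || same_col) ? 30 : 15;
      }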
  • Publication number: 20180300238
    Abstract: Briefly, in accordance with one or more embodiments, an apparatus comprises a processor to monitor cache utilization of an application during execution of the application for a workload; and a memory to store cache utilization statistics responsive to the monitored cache utilization. The processor is to determine an optimal cache configuration for the application based at least in part on the cache utilization statistics for the workload such that a smallest amount of cache is turned on for subsequent executions of the workload by the application. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Inventors: Balaji Vembu, Altug Koker, Josh B. Mastronarde, Nikos Kaburlasos, Abhishek R. Appu, Sanjeev S. Jahagirdar, Eric J. Asperheim, Subramaniam Maiyuran, Kiran C. Veernapu, Pattabhiraman K, Kamal Sinha, Bhushan M. Borole, Wenyin Fu, Joydeep Ray, Prasoonkumar Surti, Eric J. Hoekstra, Travis T. Schluessler, Linda L. Hurd
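    Illustrative sketch (not from the application): a minimal C helper that turns a recorded working-set size into the smallest cache allocation that covers it for the next run of the same workload; the profile fields, way granularity, and names are assumptions.

      #include <stdint.h>

      /* Hypothetical profile recorded during one execution of a workload. */
      typedef struct {
          uint64_t peak_working_set_bytes;   /* largest footprint observed */
      } cache_profile;

      /* Smallest power-of-two number of ways that still covers the observed
       * working set, so the remaining ways can stay powered down next time. */
      unsigned ways_for_workload(const cache_profile *p,
                                 unsigned total_ways,
                                 uint64_t bytes_per_way) {
          unsigned ways = 1;
          while (ways < total_ways &&
                 (uint64_t)ways * bytes_per_way < p->peak_working_set_bytes)
              ways *= 2;
          return ways < total_ways ? ways : total_ways;
      }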
  • Publication number: 20180301123
    Abstract: A mechanism is described for facilitating consolidated compression/de-compression of graphics data streams of varying types at computing devices. A method of embodiments, as described herein, includes generating a common sector cache relating to a graphics processor. The method may further include performing a consolidated compression of multiple types of graphics data streams associated with the graphics processor using the common sector cache.
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Joydeep Ray, Prasoonkumar Surti, Altug Koker, Kiran C. Veernapu, Eric G. Liskay
  • Publication number: 20180300201
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive metadata from an application, wherein the metadata indicates one or more processing operations which can accommodate a predetermined level of bit errors in read operations from memory, determine, from the metadata, pixel data for which error correction code bypass is acceptable, generate one or more error correction code bypass hints for subsequent cache access to the pixel data for which error correction code bypass is acceptable, and transmit the one or more error correction code bypass hints to a graphics processing pipeline. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Applicant: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray
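    Illustrative sketch (not from the application): a minimal C lookup that turns application-supplied error-tolerance metadata into a per-access ECC-bypass hint; the surface-level granularity and all identifiers are assumptions.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Hypothetical per-surface metadata supplied by the application. */
      typedef struct {
          uint32_t surface_id;
          bool     error_tolerant;   /* e.g. already-lossy pixel data */
      } surface_meta;

      /* Hint ECC bypass only for surfaces the application declared tolerant;
       * unknown surfaces always keep full error correction. */
      bool ecc_bypass_hint(const surface_meta *meta, size_t n, uint32_t surface_id) {
          for (size_t i = 0; i < n; i++)
              if (meta[i].surface_id == surface_id)
                  return meta[i].error_tolerant;
          return false;
      }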
  • Publication number: 20180300177
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive a completion acknowledgment from the plurality of graphics processing units and in response to a determination that the workload is finished, to terminate one or more communication connections on the interconnect bridge. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Applicant: Intel Corporation
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Balaji Vembu
  • Publication number: 20180293702
    Abstract: In accordance with one embodiment, each page table entry maps a variable page size (per entry) if multiple contiguous virtual pages map to contiguous physical pages. This may drastically reduce the number of translation lookaside buffer (TLB) entries needed since each entry can potentially map a larger chunk of memory, in some embodiments. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 10, 2017
    Publication date: October 11, 2018
    Inventors: Abhishek R. Appu, Joydeep Ray, Altug Koker, Balaji Vembu, Prasoonkumar P. Surti, Kamal Sinha, Vasanth Ranganathan, Kiran C. Veernapu, Bhushan M. Borole, Wenyin Fu
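    Illustrative sketch (not from the application): a minimal C routine that merges contiguous virtual-to-physical runs into single coalesced entries, which is the effect a variable-page-size entry would have on TLB footprint; the flat input arrays and names are assumptions.

      #include <stdint.h>
      #include <stddef.h>

      /* One coalesced translation covering `count` contiguous base pages. */
      typedef struct {
          uint64_t vpn_start;   /* first virtual page number  */
          uint64_t pfn_start;   /* first physical page number */
          uint64_t count;       /* run length in base pages   */
      } coalesced_entry;

      /* Merge per-page translations (vpn[i] -> pfn[i]) into runs where both
       * sides are contiguous; `out` needs room for up to n entries. Returns
       * the number of coalesced entries written. */
      size_t coalesce(const uint64_t *vpn, const uint64_t *pfn, size_t n,
                      coalesced_entry *out) {
          size_t m = 0;
          for (size_t i = 0; i < n; i++) {
              if (m > 0 &&
                  vpn[i] == out[m - 1].vpn_start + out[m - 1].count &&
                  pfn[i] == out[m - 1].pfn_start + out[m - 1].count) {
                  out[m - 1].count++;      /* extend the current run */
              } else {
                  out[m].vpn_start = vpn[i];
                  out[m].pfn_start = pfn[i];
                  out[m].count = 1;
                  m++;
              }
          }
          return m;
      }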
  • Publication number: 20180293778
    Abstract: A mechanism is described for facilitating smart compression/decompression schemes at computing devices. A method of embodiments, as described herein, includes unifying a first compression scheme relating to three-dimensional (3D) content and a second compression scheme relating to media content into a unified compression scheme to perform compression of one or more of the 3D content and the media content relating to a processor including a graphics processor.
    Type: Application
    Filed: April 9, 2017
    Publication date: October 11, 2018
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Kiran C. Veernapu, Prasoonkumar Surti, Joydeep Ray, Altug Koker, Eric G. Liskay
  • Publication number: 20180293780
    Abstract: Methods and apparatus relating to techniques for a dual path sequential element to reduce toggles in data path are described. In an embodiment, switching logic causes signals for a single data path of a processor to be directed to at least two separate data paths. At least one of the two separate data paths is power gated to reduce signal toggles in the at least one data path. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: April 10, 2017
    Publication date: October 11, 2018
    Applicant: Intel Corporation
    Inventors: Subramaniam Maiyuran, Sanjeev S. Jahagirdar, Kiran C. Veernapu, Eric J. Asperheim, Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
  • Publication number: 20180285106
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a first general register file (GRF) communicatively coupled to the plurality of execution units, wherein the first GRF is shared by the plurality of execution units. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kamal Sinha, Kiran C. Veernapu, Subramaniam Maiyuran, Prasoonkumar Surti, Guei-Yuan Lueh, David Puffer, Supratim Pal, Eric J. Hoekstra, Travis T. Schluessler, Linda L. Hurd
  • Publication number: 20180285278
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
  • Publication number: 20180285191
    Abstract: Methods and apparatus relating to techniques for reference voltage control based on error detection are described. In an embodiment, modification to a reference voltage (to be supplied to one or more components of a processor) is based at least in part on error detection to be detected for a reference circuit. In another embodiment, modification is made to a power characteristic of a processor in response to a determination that the processor is to execute a safety critical application. The modification may include adjustment to an operating frequency and/or an operating voltage of the processor. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Inventors: Sanjeev S. Jahagirdar, Eric J. Asperheim, Subramaniam Maiyuran, Kiran C. Veernapu, Eric C. Samson, Joydeep Ray, Travis T. Schluessler, Jacek Kwiatkowski, Abhishek R. Appu, Ankur N. Shah, Altug Koker
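    Illustrative sketch (not from the application): a minimal C controller that raises the reference voltage after the reference circuit reports an error and relaxes it on clean samples, holding extra guard-band for safety-critical work; all limits, step sizes, and names are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      #define VREF_MIN_MV  550   /* hypothetical limits, not silicon values */
      #define VREF_MAX_MV  900
      #define VREF_STEP_MV  10

      /* One control step: add margin on error, remove it on a clean sample
       * unless a safety-critical workload needs the guard-band kept. */
      uint32_t adjust_vref(uint32_t vref_mv, bool ref_circuit_error,
                           bool safety_critical) {
          if (ref_circuit_error)
              vref_mv += VREF_STEP_MV;
          else if (!safety_critical)
              vref_mv -= VREF_STEP_MV;
          if (vref_mv < VREF_MIN_MV) vref_mv = VREF_MIN_MV;
          if (vref_mv > VREF_MAX_MV) vref_mv = VREF_MAX_MV;
          return vref_mv;
      }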
  • Publication number: 20180284868
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to collect user information for a user of a data processing device, generate a user profile for the user of the data processing device from the user information, and set a power profile for a processor in the data processing device using the user profile. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Balaji Vembu, Prasoonkumar Surti, Kamal Sinha, Eric J. Hoekstra, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Travis T. Schluessler, Ankur N. Shah, Jonathan Kennedy
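    Illustrative sketch (not from the application): a minimal C mapping from coarse user statistics to a processor power profile; the statistics, thresholds, and profile names are assumptions.

      #include <stdint.h>

      typedef enum { POWER_SAVER, BALANCED, PERFORMANCE } power_profile;

      /* Hypothetical usage statistics collected for one user. */
      typedef struct {
          uint32_t pct_time_on_battery;   /* 0..100 */
          uint32_t pct_time_gpu_heavy;    /* 0..100, e.g. games or 3D apps */
      } user_stats;

      /* Mostly-on-battery users lean toward saving power, mostly-GPU-heavy
       * users lean toward performance, everyone else stays balanced. */
      power_profile pick_power_profile(const user_stats *u) {
          if (u->pct_time_on_battery > 70)
              return POWER_SAVER;
          if (u->pct_time_gpu_heavy > 50)
              return PERFORMANCE;
          return BALANCED;
      }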
  • Publication number: 20180285272
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive a first frame in a graphics processing pipeline, divide the first frame into a first plurality of tiles, wherein each tile in the plurality of tiles comprises an (M×N) array of pixel data, compute a first hash function of each tile of a set of the first plurality of tiles, store the first hash function of each of the tiles of the set of the first plurality of tiles as meta-data associated with the respective tiles, and use the meta-data in memory to manage memory write operations for a second frame which follows the first frame in the graphics processing pipeline. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Prasoonkumar Surti, Joydeep Ray, Kiran C. Veernapu
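    Illustrative sketch (not from the application): a minimal C loop that compares each tile's hash against the metadata stored for the previous frame and writes back only tiles that changed; the hash, tile layout, and callback are assumptions.

      #include <stdint.h>
      #include <stddef.h>

      /* FNV-1a over a tile's pixel bytes, used only as a change detector. */
      static uint64_t tile_hash(const uint8_t *pixels, size_t bytes) {
          uint64_t h = 1469598103934665603ULL;
          for (size_t i = 0; i < bytes; i++) { h ^= pixels[i]; h *= 1099511628211ULL; }
          return h;
      }

      /* `meta` holds one hash per tile from the previous frame and is updated
       * in place; returns the number of tiles actually written back. */
      size_t writeback_changed_tiles(const uint8_t *const *tiles, size_t tile_bytes,
                                     size_t num_tiles, uint64_t *meta,
                                     void (*write_tile)(size_t idx, const uint8_t *p)) {
          size_t written = 0;
          for (size_t t = 0; t < num_tiles; t++) {
              uint64_t h = tile_hash(tiles[t], tile_bytes);
              if (h != meta[t]) {
                  write_tile(t, tiles[t]);   /* tile changed: write it out   */
                  meta[t] = h;               /* metadata for the next frame  */
                  written++;
              }
          }
          return written;
      }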
  • Publication number: 20180285158
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising at least a first type of execution unit and a second type of execution unit and logic, at least partially including hardware logic, to analyze a workload and assign the workload to one of the first type of execution unit or the second type of execution unit. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Inventors: Abhishek R. Appu, Altug Koker, Balaji Vembu, Joydeep Ray, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu, Subramaniam Maiyuran, Sanjeev S. Jahagirdar, Eric J. Asperheim, Guei-Yuan Lueh, David Puffer, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Josh B. Mastronarde, Linda L. Hurd, Travis T. Schluessler, Tomasz Janczak, Abhishek Venkatesh, Kai Xiao, Slawomir Grajewski
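    Illustrative sketch (not from the application): a minimal C classifier that routes a workload to one of two execution-unit types from a few coarse features; the feature set, the 4:1 compute-to-memory threshold, and all names are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      typedef enum { EU_TYPE_SMALL, EU_TYPE_LARGE } eu_type;

      /* Hypothetical per-workload features gathered by a driver or compiler. */
      typedef struct {
          uint32_t alu_ops;     /* arithmetic instruction count                */
          uint32_t mem_ops;     /* memory instruction count                    */
          bool     uses_fp64;   /* needs capabilities only the large EU offers */
      } workload_info;

      /* Compute-dense or fp64 work goes to the larger execution units; memory
       * bound work goes to the smaller, lower-power ones. */
      eu_type pick_execution_unit(const workload_info *w) {
          if (w->uses_fp64 || w->mem_ops == 0)
              return EU_TYPE_LARGE;
          return (w->alu_ops / w->mem_ops >= 4) ? EU_TYPE_LARGE : EU_TYPE_SMALL;
      }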
  • Publication number: 20180285110
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a first number of threads to be scheduled for each context of a plurality of contexts in a multi-context processing system, allocate a second number of streaming multiprocessors (SMs) to the respective plurality of contexts, and dispatch threads from the plurality of contexts only to the streaming multiprocessor(s) allocated to the respective plurality of contexts. Other embodiments are also disclosed and claimed. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, Balaji Vembu, Abhishek R. Appu, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu
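    Illustrative sketch (not from the application): a minimal C partitioner that reserves streaming multiprocessors for each context in proportion to its thread count, so each context's threads are dispatched only to its own SMs; the counts and names are assumptions.

      #include <stdint.h>

      #define NUM_SMS      8
      #define NUM_CONTEXTS 3

      /* Split SMs among contexts in proportion to pending threads; remainder
       * SMs from integer division go to the busiest context. */
      void partition_sms(const uint32_t threads[NUM_CONTEXTS],
                         uint32_t sms_out[NUM_CONTEXTS]) {
          uint64_t total = 0;
          int busiest = 0;
          for (int c = 0; c < NUM_CONTEXTS; c++) {
              total += threads[c];
              if (threads[c] > threads[busiest])
                  busiest = c;
          }
          uint32_t assigned = 0;
          for (int c = 0; c < NUM_CONTEXTS; c++) {
              sms_out[c] = total ? (uint32_t)(((uint64_t)threads[c] * NUM_SMS) / total)
                                 : 0;
              assigned += sms_out[c];
          }
          if (total)
              sms_out[busiest] += NUM_SMS - assigned;
      }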