Patents by Inventor Jonathan Pearce
Jonathan Pearce has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12117962
Abstract: Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of workload instructions for a graphics workload from a host complex, determine a first subset of operations in the set of operations that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of operations that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: August 16, 2023
Date of Patent: October 15, 2024
Assignee: INTEL CORPORATION
Inventors: Joydeep Ray, Aravindh Anantaraman, Abhishek R. Appu, Altug Koker, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Subramaniam Maiyuran, Nicolas Galoppo Von Borries, Varghese George, Mike Macpherson, Ben Ashbaugh, Murali Ramadoss, Vikranth Vemulapalli, William Sadler, Jonathan Pearce, Sungye Kim
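The abstract describes partitioning a graphics workload into scalar-suitable and vector-suitable operations and dispatching each subset to a different execution complex. The following minimal Python sketch illustrates that partition-and-dispatch idea only; all names, the classification criterion, and the execution stubs are invented for illustration and are not the claimed implementation.

```python
# Illustrative sketch (not the patented design): split a workload into
# scalar-friendly and vector-friendly operations and hand each subset to a
# different execution "complex". All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    data_parallel: bool  # True if the op maps naturally onto SIMD lanes

def partition(ops):
    """Split a workload into (scalar_subset, vector_subset)."""
    scalar = [op for op in ops if not op.data_parallel]
    vector = [op for op in ops if op.data_parallel]
    return scalar, vector

def run_scalar(ops):
    # Stand-in for the scalar processor complex.
    return [f"scalar:{op.name}" for op in ops]

def run_vector(ops):
    # Stand-in for the vector processor complex.
    return [f"vector:{op.name}" for op in ops]

if __name__ == "__main__":
    workload = [Op("build_command_list", False), Op("vertex_transform", True),
                Op("state_setup", False), Op("pixel_shade", True)]
    scalar_ops, vector_ops = partition(workload)
    print(run_scalar(scalar_ops) + run_vector(vector_ops))
```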
-
Publication number: 20240256456
Abstract: Embodiments are generally directed to data prefetching for graphics data processing. An embodiment of an apparatus includes one or more processors including one or more graphics processing units (GPUs); and a plurality of caches to provide storage for the one or more GPUs, the plurality of caches including at least an L1 cache and an L3 cache, wherein the apparatus to provide intelligent prefetching of data by a prefetcher of a first GPU of the one or more GPUs including measuring a hit rate for the L1 cache; upon determining that the hit rate for the L1 cache is equal to or greater than a threshold value, limiting a prefetch of data to storage in the L3 cache, and upon determining that the hit rate for the L1 cache is less than a threshold value, allowing the prefetch of data to the L1 cache.
Type: Application
Filed: December 20, 2023
Publication date: August 1, 2024
Applicant: Intel Corporation
Inventors: Vikranth Vemulapalli, Lakshminarayanan Striramassarma, Mike MacPherson, Aravindh Anantaraman, Ben Ashbaugh, Murali Ramadoss, William B. Sadler, Jonathan Pearce, Scott Janus, Brent Insko, Vasanth Ranganathan, Kamal Sinha, Arthur Hunter, Jr., Prasoonkumar Surti, Nicolas Galoppo von Borries, Joydeep Ray, Abhishek R. Appu, ElMoustapha Ould-Ahmed-Vall, Altug Koker, Sungye Kim, Subramaniam Maiyuran, Valentin Andrei
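The abstract describes steering prefetches based on the measured L1 hit rate: when L1 is already effective, prefetched data is limited to L3; otherwise prefetches may fill L1. A minimal sketch of that decision follows, assuming a made-up threshold value and a trivial hit-rate calculation; it illustrates the described policy, not the actual prefetcher.

```python
# Minimal sketch of the prefetch-throttling idea in the abstract. The
# threshold value and the cache model are assumptions for illustration.
L1_HIT_RATE_THRESHOLD = 0.90  # hypothetical threshold

def choose_prefetch_target(l1_hits: int, l1_accesses: int) -> str:
    """Return the cache level a prefetch should fill, given L1 hit statistics."""
    hit_rate = l1_hits / l1_accesses if l1_accesses else 0.0
    if hit_rate >= L1_HIT_RATE_THRESHOLD:
        return "L3"   # L1 is already effective; avoid polluting it
    return "L1"       # L1 is missing often; let prefetches fill it

if __name__ == "__main__":
    print(choose_prefetch_target(l1_hits=980, l1_accesses=1000))  # -> "L3"
    print(choose_prefetch_target(l1_hits=600, l1_accesses=1000))  # -> "L1"
```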
-
Publication number: 20240045830
Abstract: Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of workload instructions for a graphics workload from a host complex, determine a first subset of operations in the set of operations that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of operations that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed.
Type: Application
Filed: August 16, 2023
Publication date: February 8, 2024
Applicant: Intel Corporation
Inventors: Joydeep RAY, Aravindh ANANTARAMAN, Abhishek R. APPU, Altug KOKER, Elmoustapha OULD-AHMED-VALL, Valentin ANDREI, Subramaniam MAIYURAN, Nicolas GALOPPO VON BORRIES, Varghese GEORGE, Mike MACPHERSON, Ben ASHBAUGH, Murali RAMADOSS, Vikranth VEMULAPALLI, William SADLER, Jonathan PEARCE, Sungye KIM
-
Patent number: 11892950
Abstract: Embodiments are generally directed to data prefetching for graphics data processing. An embodiment of an apparatus includes one or more processors including one or more graphics processing units (GPUs); and a plurality of caches to provide storage for the one or more GPUs, the plurality of caches including at least an L1 cache and an L3 cache, wherein the apparatus to provide intelligent prefetching of data by a prefetcher of a first GPU of the one or more GPUs including measuring a hit rate for the L1 cache; upon determining that the hit rate for the L1 cache is equal to or greater than a threshold value, limiting a prefetch of data to storage in the L3 cache, and upon determining that the hit rate for the L1 cache is less than a threshold value, allowing the prefetch of data to the L1 cache.
Type: Grant
Filed: July 15, 2022
Date of Patent: February 6, 2024
Assignee: INTEL CORPORATION
Inventors: Vikranth Vemulapalli, Lakshminarayanan Striramassarma, Mike MacPherson, Aravindh Anantaraman, Ben Ashbaugh, Murali Ramadoss, William B. Sadler, Jonathan Pearce, Scott Janus, Brent Insko, Vasanth Ranganathan, Kamal Sinha, Arthur Hunter, Jr., Prasoonkumar Surti, Nicolas Galoppo von Borries, Joydeep Ray, Abhishek R. Appu, ElMoustapha Ould-Ahmed-Vall, Altug Koker, Sungye Kim, Subramaniam Maiyuran, Valentin Andrei
-
Publication number: 20240028404
Abstract: Embodiments are generally directed to thread group scheduling for graphics processing. An embodiment of an apparatus includes a plurality of processors including a plurality of graphics processors to process data; a memory; and one or more caches for storage of data for the plurality of graphics processors, wherein the one or more processors are to schedule a plurality of groups of threads for processing by the plurality of graphics processors, the scheduling of the plurality of groups of threads including the plurality of processors to apply a bias for scheduling the plurality of groups of threads according to a cache locality for the one or more caches.
Type: Application
Filed: June 2, 2023
Publication date: January 25, 2024
Applicant: Intel Corporation
Inventors: Ben Ashbaugh, Jonathan Pearce, Murali Ramadoss, Vikranth Vemulapalli, William B. Sadler, Sungye Kim, Marian Alin Petre
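The abstract describes biasing thread-group scheduling toward cache locality. The short Python sketch below shows one plausible reading of that bias, assigning each group to the processor whose cache already overlaps most with the group's working set; the data structures and scoring rule are invented for illustration.

```python
# Illustrative sketch of cache-locality-biased thread-group scheduling.
# All data structures here are hypothetical stand-ins.
def schedule_groups(groups, caches):
    """groups: {group_id: set of addresses}; caches: {gpu_id: set of addresses}."""
    assignment = {}
    for gid, working_set in groups.items():
        # Bias toward the GPU whose cache overlaps most with this group's data.
        best_gpu = max(caches, key=lambda g: len(working_set & caches[g]))
        assignment[gid] = best_gpu
        caches[best_gpu] |= working_set  # model the group's data becoming resident
    return assignment

if __name__ == "__main__":
    caches = {"gpu0": {0x100, 0x140}, "gpu1": {0x800}}
    groups = {"tg0": {0x100, 0x180}, "tg1": {0x800, 0x840}}
    print(schedule_groups(groups, caches))  # tg0 -> gpu0, tg1 -> gpu1
```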
-
Publication number: 20230315572
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
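The abstract describes an instruction that switches a logical processor into a mode where a SyMT co-processor runs many lightweight microthreads, each with its own register state. The Python sketch below models only that idea at a very high level; the class and function names are invented, and a real co-processor would of course execute the microthreads in hardware rather than in a loop.

```python
# Rough sketch, under assumptions, of the "enter SyMT mode" idea: a kernel is
# handed to a co-processor model that runs lightweight microthreads, each
# carrying independent register state. All names are hypothetical.
class Microthread:
    def __init__(self, tid: int):
        self.tid = tid
        self.registers = {"r0": tid, "r1": 0}  # independent per-microthread state

def enter_symt_mode(kernel, num_microthreads: int):
    """Model of an instruction that switches into microthreaded execution."""
    microthreads = [Microthread(i) for i in range(num_microthreads)]
    for mt in microthreads:
        kernel(mt)  # a real co-processor would run these concurrently
    return [mt.registers["r1"] for mt in microthreads]

if __name__ == "__main__":
    def kernel(mt):
        mt.registers["r1"] = mt.registers["r0"] * 2  # per-microthread work

    print(enter_symt_mode(kernel, num_microthreads=4))  # [0, 2, 4, 6]
```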
-
Publication number: 20230315455
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
-
Publication number: 20230315445
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
-
Publication number: 20230315459
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
-
Publication number: 20230315460
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
-
Publication number: 20230315462
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
-
Publication number: 20230315458
Abstract: Techniques for using soft-barrier hints are described. An example includes a synchronous microthreading (SyMT) co-processor coupled to a logical processor to execute a plurality of microthreads, with each microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode, wherein the SyMT co-processor is further to support a soft-barrier hint instruction in code which when processed by a microthread is to pause execution of the microthread to be resumed based at least in part on a data structure having at least one entry, the entry to include an instruction pointer of the soft-barrier hint instruction and a count of microthreads that have encountered the soft-barrier hint instruction at the instruction pointer.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: Shreesha SRINATH, Jonathan PEARCE, David B. SHEFFIELD, Ching-Kai LIANG, Jeffrey COOK
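The abstract describes a table whose entries pair the instruction pointer of a soft-barrier hint with a count of microthreads that have reached it, and resuming paused microthreads based on that table. The sketch below models only that bookkeeping; the resume policy (all microthreads arrived) and the names are assumptions for illustration.

```python
# Minimal sketch of the soft-barrier-hint bookkeeping described in the
# abstract: an entry keyed by the hint's instruction pointer counts arrivals.
class SoftBarrierTable:
    def __init__(self, total_microthreads: int):
        self.total = total_microthreads
        self.entries = {}  # instruction pointer -> count of arrived microthreads

    def hint(self, ip: int) -> bool:
        """Record arrival at a soft-barrier hint; return True when paused
        microthreads may resume (here: once all have arrived)."""
        self.entries[ip] = self.entries.get(ip, 0) + 1
        return self.entries[ip] >= self.total

if __name__ == "__main__":
    table = SoftBarrierTable(total_microthreads=3)
    print(table.hint(0x40))  # False: first microthread pauses
    print(table.hint(0x40))  # False: second microthread pauses
    print(table.hint(0x40))  # True: all arrived, paused microthreads resume
```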
-
Publication number: 20230315444
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
-
Publication number: 20230315461
Abstract: Techniques for synchronous microthreaded execution are described. An example includes a logical processor to execute one or more threads in a first mode; and a synchronous microthreading (SyMT) co-processor coupled to the logical processor to execute lightweight microthreads, with each lightweight microthread having an independent register state, upon an execution of an instruction to enter into SyMT mode.
Type: Application
Filed: April 2, 2022
Publication date: October 5, 2023
Inventors: David B. SHEFFIELD, Erich BOLEYN, Jonathan PEARCE, Sofia PEDIADITAKI, Jeffrey COOK, Shreesha SRINATH, Ching-Kai LIANG, Tyler SONDAG
-
Patent number: 11762804
Abstract: Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of workload instructions for a graphics workload from a host complex, determine a first subset of operations in the set of operations that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of operations that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: July 19, 2022
Date of Patent: September 19, 2023
Assignee: INTEL CORPORATION
Inventors: Joydeep Ray, Aravindh Anantaraman, Abhishek R. Appu, Altug Koker, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Subramaniam Maiyuran, Nicolas Galoppo Von Borries, Varghese George, Mike MacPherson, Ben Ashbaugh, Murali Ramadoss, Vikranth Vemulapalli, William Sadler, Jonathan Pearce, Sungye Kim
-
Patent number: 11709714
Abstract: Embodiments are generally directed to thread group scheduling for graphics processing. An embodiment of an apparatus includes a plurality of processors including a plurality of graphics processors to process data; a memory; and one or more caches for storage of data for the plurality of graphics processors, wherein the one or more processors are to schedule a plurality of groups of threads for processing by the plurality of graphics processors, the scheduling of the plurality of groups of threads including the plurality of processors to apply a bias for scheduling the plurality of groups of threads according to a cache locality for the one or more caches.
Type: Grant
Filed: March 3, 2022
Date of Patent: July 25, 2023
Assignee: INTEL CORPORATION
Inventors: Ben Ashbaugh, Jonathan Pearce, Murali Ramadoss, Vikranth Vemulapalli, William B. Sadler, Sungye Kim, Marian Alin Petre
-
Patent number: 11620256
Abstract: Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache coupled to the processing resources. The cache controller is configured to control cache priority by determining whether default settings or an instruction will control cache operations for the cache.
Type: Grant
Filed: April 28, 2022
Date of Patent: April 4, 2023
Assignee: Intel Corporation
Inventors: Altug Koker, Joydeep Ray, Ben Ashbaugh, Jonathan Pearce, Abhishek Appu, Vasanth Ranganathan, Lakshminarayanan Striramassarma, Elmoustapha Ould-Ahmed-Vall, Aravindh Anantaraman, Valentin Andrei, Nicolas Galoppo Von Borries, Varghese George, Yoav Harel, Arthur Hunter, Jr., Brent Insko, Scott Janus, Pattabhiraman K, Mike Macpherson, Subramaniam Maiyuran, Marian Alin Petre, Murali Ramadoss, Shailesh Shah, Kamal Sinha, Prasoonkumar Surti, Vikranth Vemulapalli
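The abstract describes a cache controller that decides whether default settings or a per-instruction directive governs cache operations. The short sketch below illustrates that decision in Python with hypothetical policy fields; it is not the controller's actual interface.

```python
# Illustrative sketch of the cache-priority decision: use a per-instruction
# cache-control hint when present, otherwise fall back to default settings.
# The policy field names are hypothetical.
from typing import Optional

DEFAULT_POLICY = {"allocate_l1": True, "priority": "normal"}

def resolve_cache_policy(instruction_hint: Optional[dict]) -> dict:
    """Pick the effective cache policy for one access."""
    if instruction_hint:
        # The instruction overrides the defaults for this access.
        return {**DEFAULT_POLICY, **instruction_hint}
    return dict(DEFAULT_POLICY)

if __name__ == "__main__":
    print(resolve_cache_policy(None))                                      # defaults
    print(resolve_cache_policy({"allocate_l1": False, "priority": "low"})) # override
```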
-
Publication number: 20230051190
Abstract: Embodiments are generally directed to data prefetching for graphics data processing. An embodiment of an apparatus includes one or more processors including one or more graphics processing units (GPUs); and a plurality of caches to provide storage for the one or more GPUs, the plurality of caches including at least an L1 cache and an L3 cache, wherein the apparatus to provide intelligent prefetching of data by a prefetcher of a first GPU of the one or more GPUs including measuring a hit rate for the L1 cache; upon determining that the hit rate for the L1 cache is equal to or greater than a threshold value, limiting a prefetch of data to storage in the L3 cache, and upon determining that the hit rate for the L1 cache is less than a threshold value, allowing the prefetch of data to the L1 cache.
Type: Application
Filed: July 15, 2022
Publication date: February 16, 2023
Applicant: Intel Corporation
Inventors: Vikranth Vemulapalli, Lakshminarayanan Striramassarma, Mike MacPherson, Aravindh Anantaraman, Ben Ashbaugh, Murali Ramadoss, William B. Sadler, Jonathan Pearce, Scott Janus, Brent Insko, Vasanth Ranganathan, Kamal Sinha, Arthur Hunter, JR., Prasoonkumar Surti, Nicolas Galoppo von Borries, Joydeep Ray, Abhishek R. Appu, ElMoustapha Ould-Ahmed-Vall, Altug Koker, Sungye Kim, Subramaniam Maiyuran, Valentin Andrei
-
Publication number: 20230049657
Abstract: An electronic sports betting system and method includes a betting extension in a browser of a client computer, communicating with an odds modeling engine of a betting platform. The betting platform defines a tournament structure of an electronic sports tournament comprising teams performing the electronic sports tournament on a computer network, and generates a module for each of a number of distinct parts of the tournament structure. The betting platform successively simulates each module to generate a model for the electronic sports tournament, the model representing possible actions and outcomes by the teams performing each of a number of distinct parts of the tournament structure. The betting platform then models team behavior of each of the teams based on the model, and generates odds of the possible actions and outcomes of the electronic sports tournament based on the successive simulation of each module.
Type: Application
Filed: August 16, 2022
Publication date: February 16, 2023
Inventors: Jonathan Pearce, Bradley Cole
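The abstract describes splitting a tournament into modules, simulating each module in succession, and deriving odds from the simulated outcomes. The sketch below shows one generic way such a module-by-module Monte Carlo simulation can yield decimal odds; the bracket shape, team ratings, and win model are invented and are not the patented odds engine.

```python
# Rough sketch, not the patented system: a two-module bracket (semifinals,
# then final) is simulated repeatedly and outcome frequencies are converted
# into fair decimal odds. Ratings and the win model are illustrative only.
import random

def simulate_match(rating_a: float, rating_b: float) -> bool:
    """Return True if team A wins, with probability proportional to its rating."""
    return random.random() < rating_a / (rating_a + rating_b)

def simulate_tournament(ratings: dict, trials: int = 10_000) -> dict:
    """Estimate each team's championship odds for a four-team bracket."""
    a, b, c, d = list(ratings)
    wins = {t: 0 for t in ratings}
    for _ in range(trials):
        # Module 1: semifinals.
        f1 = a if simulate_match(ratings[a], ratings[b]) else b
        f2 = c if simulate_match(ratings[c], ratings[d]) else d
        # Module 2: final.
        champ = f1 if simulate_match(ratings[f1], ratings[f2]) else f2
        wins[champ] += 1
    # Fair decimal odds = trials / wins (no bookmaker margin applied).
    return {t: trials / w if w else float("inf") for t, w in wins.items()}

if __name__ == "__main__":
    print(simulate_tournament({"A": 1600, "B": 1450, "C": 1500, "D": 1400}))
```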
-
Publication number: 20230029176
Abstract: Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of workload instructions for a graphics workload from a host complex, determine a first subset of operations in the set of operations that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of operations that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed.
Type: Application
Filed: July 19, 2022
Publication date: January 26, 2023
Applicant: Intel Corporation
Inventors: JOYDEEP RAY, ARAVINDH ANANTARAMAN, ABHISHEK R. APPU, ALTUG KOKER, ELMOUSTAPHA OULD-AHMED-VALL, VALENTIN ANDREI, SUBRAMANIAM MAIYURAN, NICOLAS GALOPPO VON BORRIES, VARGHESE GEORGE, MIKE MACPHERSON, BEN ASHBAUGH, MURALI RAMADOSS, VIKRANTH VEMULAPALLI, WILLIAM SADLER, JONATHAN PEARCE, SUNGYE KIM