Patents by Inventor Chunhui MEI
Chunhui MEI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240111609
Abstract: Low-latency synchronization utilizing local team barriers for thread team processing is described. An example of an apparatus includes one or more processors including a graphics processor, the graphics processor including a plurality of processing resources; and memory for storage of data including data for graphics processing, wherein the graphics processor is to receive a request for establishment of a local team barrier for a thread team, the thread team being allocated to a first processing resource, the thread team including multiple threads; determine requirements and designated threads for the local team barrier; and establish the local team barrier in a local register of the first processing resource based at least in part on the requirements and designated threads for the local team barrier.
Type: Application
Filed: September 30, 2022
Publication date: April 4, 2024
Applicant: Intel Corporation
Inventors: Biju George, Supratim Pal, James Valerio, Vasanth Ranganathan, Fangwen Fu, Chunhui Mei
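The team-barrier idea above can be illustrated with a minimal software analogy. The class name, thread IDs, and use of `threading.Barrier` are illustrative assumptions; the patent describes a hardware barrier held in a local register of a processing resource, not a software construct:

```python
import threading

class LocalTeamBarrier:
    """Software analogy of a local team barrier: only threads that were
    designated when the barrier was established synchronize on it; all
    other threads pass through without waiting."""

    def __init__(self, designated_thread_ids):
        self.designated = set(designated_thread_ids)
        self._barrier = threading.Barrier(len(self.designated))

    def wait(self, thread_id):
        # Designated threads block until the whole team arrives.
        if thread_id in self.designated:
            self._barrier.wait()
        # Non-designated threads are unaffected by the barrier.
```

A thread outside the designated set can call `wait` at any time without blocking, mirroring the idea that the barrier is local to one thread team.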
-
Publication number: 20240111534
Abstract: Embodiments described herein provide a technique to enable a broadcast load from an L1 cache or shared local memory to register files associated with hardware threads of a graphics core. One embodiment provides a graphics processor comprising a cache memory and a graphics core coupled with the cache memory. The graphics core includes a plurality of hardware threads and memory access circuitry to facilitate access to memory by the plurality of hardware threads. The graphics core is configurable to process a plurality of load requests from the plurality of hardware threads, detect duplicate load requests within the plurality of load requests, perform a single read from the cache memory in response to the duplicate load requests, and transmit data associated with the duplicate load requests to requesting hardware threads.
Type: Application
Filed: September 30, 2022
Publication date: April 4, 2024
Applicant: Intel Corporation
Inventors: Fangwen Fu, Chunhui Mei, Maxim Kazakov, Biju George, Jorge Parra, Supratim Pal
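The duplicate-load detection described above can be sketched in a few lines: coalesce requests by address, read each unique address once, and broadcast the result to every requester. The function and data shapes are illustrative assumptions, not the hardware interface:

```python
def service_loads(cache, requests):
    """Coalesce duplicate load requests. `requests` is a list of
    (thread_id, address) pairs; each unique address is read from the
    cache exactly once and the data is broadcast to all requesters."""
    by_addr = {}
    for thread_id, addr in requests:
        by_addr.setdefault(addr, []).append(thread_id)

    results = {}
    reads = 0
    for addr, thread_ids in by_addr.items():
        data = cache[addr]      # single read services all duplicates
        reads += 1
        for tid in thread_ids:
            results[tid] = data
    return results, reads
```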
-
Publication number: 20240112295
Abstract: Shared local registers for thread team processing are described. An example of an apparatus includes one or more processors including a graphics processor having multiple processing resources; and memory for storage of data, the graphics processor to allocate a first thread team to a first processing resource, the first thread team including hardware threads to be executed solely by the first processing resource; allocate to the first processing resource a shared local register (SLR) space that may be directly referenced in ISA instructions, the SLR space being accessible to the threads of the thread team and inaccessible to threads outside of the thread team; and allocate individual register spaces to the thread team, each of the individual register spaces being accessible to a respective thread of the thread team.
Type: Application
Filed: September 30, 2022
Publication date: April 4, 2024
Applicant: Intel Corporation
Inventors: Biju George, Fangwen Fu, Supratim Pal, Jorge Parra, Chunhui Mei, Maxim Kazakov, Joydeep Ray
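The access rule in the abstract (SLR space visible to team threads, invisible to all others) can be modeled as a simple access-checked register file. This is a hypothetical software model; in the patent the check is a property of the hardware register allocation, not a runtime permission test:

```python
class SharedLocalRegisters:
    """Model of an SLR space: readable and writable only by threads
    belonging to the owning thread team."""

    def __init__(self, team_thread_ids, size):
        self.team = set(team_thread_ids)
        self.regs = [0] * size

    def write(self, thread_id, index, value):
        if thread_id not in self.team:
            raise PermissionError("thread outside team cannot access SLR")
        self.regs[index] = value

    def read(self, thread_id, index):
        if thread_id not in self.team:
            raise PermissionError("thread outside team cannot access SLR")
        return self.regs[index]
```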
-
Publication number: 20240104025
Abstract: A prefetch-aware LRU cache replacement policy is described. An example of an apparatus includes one or more processors including a graphics processor, the graphics processor including a load store cache having multiple cache lines (CLs), each including bits for a cache line level (CL level) and one or more sectors for data storage; wherein the graphics processor is to receive one or more data elements for storage in the cache; set a CL level to track each CL receiving data, including setting CL level 1 for a CL receiving data in response to a miss in the cache and setting CL level 2 for a CL receiving prefetched data in response to a prefetch request; and, upon determining that space is required in the cache to store data, apply a cache replacement policy, the policy being based at least in part on the set CL levels for the CLs.
Type: Application
Filed: September 23, 2022
Publication date: March 28, 2024
Applicant: Intel Corporation
Inventors: Biju George, Zamshed I. Chowdhury, Prathamesh Raghunath Shinde, Chunhui Mei, Fangwen Fu
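A plausible reading of the two CL levels is that prefetched-but-not-yet-demanded lines (level 2) make better eviction victims than demand-fetched lines (level 1). The sketch below encodes that assumption; the actual policy in the patent is only stated as "based at least in part on" the CL levels:

```python
from collections import OrderedDict

DEMAND, PREFETCH = 1, 2   # CL levels 1 and 2 from the description

class PrefetchAwareCache:
    """LRU cache that preferentially evicts prefetched lines that were
    never promoted by a demand access. Victim-selection order is an
    illustrative assumption."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # addr -> CL level, in LRU order

    def _evict(self):
        # Prefer the least-recently-used prefetched line; fall back to
        # plain LRU if every line was demand-fetched.
        for addr, level in self.lines.items():
            if level == PREFETCH:
                del self.lines[addr]
                return
        self.lines.popitem(last=False)

    def access(self, addr, prefetch=False):
        if addr in self.lines:
            if not prefetch:
                self.lines[addr] = DEMAND    # promote on demand hit
            self.lines.move_to_end(addr)
            return True                      # hit
        if len(self.lines) >= self.capacity:
            self._evict()
        self.lines[addr] = PREFETCH if prefetch else DEMAND
        return False                         # miss
```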
-
Publication number: 20230205587
Abstract: Thread group dispatch in a clustered graphics architecture is described. An example of an apparatus includes compute front end (CFE) clusters to receive dispatched thread groups, the CFE clusters including at least a first CFE cluster and a second CFE cluster; processing resources coupled with the CFE clusters to execute threads within thread groups; and cache clusters to cache data including thread groups, wherein the apparatus is to receive thread groups for dispatch, and to dispatch the thread groups to the CFE clusters according to a dispatch operation, the dispatch operation including dispatching multiple thread groups to each of multiple CFEs in the first CFE cluster and multiple thread groups to each of multiple CFEs in the second CFE cluster.
Type: Application
Filed: December 23, 2021
Publication date: June 29, 2023
Applicant: Intel Corporation
Inventors: Zamshed Iqbal Chowdhury, Joydeep Ray, Chunhui Mei, Yongsheng Liu, Vasanth Ranganathan, Abhishek R. Appu, Aravindh Anantaraman
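The dispatch operation (multiple thread groups per CFE, across clusters) can be sketched as a simple batching loop. The batching factor `groups_per_cfe` and the data shapes are assumptions for illustration; the patent does not fix them:

```python
def dispatch(thread_groups, clusters, groups_per_cfe=2):
    """Assign `groups_per_cfe` thread groups to each CFE of each cluster
    in turn. `clusters` is a list of lists of CFE names; returns a map of
    (cluster_index, cfe) -> assigned thread groups."""
    assignment = {}
    it = iter(thread_groups)
    for cluster_id, cfes in enumerate(clusters):
        for cfe in cfes:
            # Take the next batch of thread groups for this CFE.
            batch = [tg for _, tg in zip(range(groups_per_cfe), it)]
            if batch:
                assignment[(cluster_id, cfe)] = batch
    return assignment
```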
-
Publication number: 20230153176
Abstract: An apparatus to facilitate a forward progress guarantee using single-level synchronization at individual thread granularity is disclosed. The apparatus includes a processor comprising barrier synchronization hardware circuitry to assign a set of global named barrier identifiers (IDs) to individual execution threads of a plurality of execution threads and synchronize execution of the individual execution threads on a single level via the set of global named barrier IDs; and a plurality of processing resources to execute the plurality of execution threads, the processing resources comprising divergent barrier scheduling hardware circuitry to facilitate execution flow switching from a first divergent branch executed by a first thread to a second divergent branch executed by a second thread, the execution flow switching performed responsive to the first thread stalling to wait on a named barrier of the set of global named barrier IDs.
Type: Application
Filed: November 17, 2021
Publication date: May 18, 2023
Applicant: Intel Corporation
Inventors: Chunhui Mei, James Valerio, Supratim Pal, Guei-Yuan Lueh, Hong Jiang
-
Publication number: 20220414054
Abstract: A processing apparatus described herein includes a general-purpose parallel processing engine comprising a systolic array having multiple pipelines, each of the multiple pipelines including multiple pipeline stages, wherein the multiple pipelines include a first pipeline, a second pipeline, and a common input shared between the first pipeline and the second pipeline.
Type: Application
Filed: June 25, 2021
Publication date: December 29, 2022
Applicant: Intel Corporation
Inventors: Jorge Parra, Jiasheng Chen, Supratim Pal, Fangwen Fu, Sabareesh Ganapathy, Chandra Gurram, Chunhui Mei, Yue Qi
-
Patent number: 11494163
Abstract: An apparatus to facilitate computer number format conversion is disclosed. The apparatus comprises a control unit to receive data format information indicating a first precision data format in which input data is to be received, and converter hardware to receive the input data and convert the first precision data format to a second precision data format based on the data format information.
Type: Grant
Filed: September 6, 2019
Date of Patent: November 8, 2022
Assignee: Intel Corporation
Inventors: Naveen Mellempudi, Dipankar Das, Chunhui Mei, Kristopher Wong, Dhiraj D. Kalamkar, Hong H. Jiang, Subramaniam Maiyuran, Varghese George
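One concrete instance of precision format conversion is FP32 to bfloat16, sketched below in software. bfloat16 is only an example target format chosen for illustration; the patent covers format conversion generally, and the round-to-nearest-even choice here is an assumption:

```python
import struct

def fp32_to_bf16_bits(x):
    """Convert an FP32 value to its 16-bit bfloat16 encoding by rounding
    away the low 16 mantissa bits (round-to-nearest-even)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding = 0x7FFF + ((bits >> 16) & 1)   # nearest-even tie-break
    return (bits + rounding) >> 16

def bf16_bits_to_fp32(b):
    """Widen a bfloat16 encoding back to FP32 (exact, no rounding)."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]
```

Values exactly representable in bfloat16 round-trip unchanged; others land on the nearest bfloat16 value, with a relative error of at most about 2^-8.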
-
Publication number: 20220309124
Abstract: Matrix multiply units can take advantage of input sparsity by zero-gating ALUs, which saves power, but compute throughput does not increase. To improve compute throughput from sparsity, processing resources in a matrix accelerator can skip computation in which a zero is involved in the input or output. If zeros in the input can be skipped, the processing units can focus calculations on generating meaningful non-zero output.
Type: Application
Filed: March 24, 2021
Publication date: September 29, 2022
Applicant: Intel Corporation
Inventors: Chunhui Mei, Hong Jiang, Jiasheng Chen, Yongsheng Liu, Yan Li
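The distinction the abstract draws, skipping zero work rather than merely gating it, can be shown with a trivial matrix-vector sketch. In hardware the skipped cycles would be reassigned to other non-zero products; here the skip simply avoids the multiply-accumulate:

```python
def sparse_matvec(matrix, vector):
    """Matrix-vector product that skips multiply-accumulate steps when
    either input operand is zero, instead of computing and discarding
    (zero-gating) them."""
    result = [0.0] * len(matrix)
    for i, row in enumerate(matrix):
        for a, x in zip(row, vector):
            if a == 0.0 or x == 0.0:
                continue            # zero input: skip the MAC entirely
            result[i] += a * x
    return result
```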
-
Publication number: 20210081201
Abstract: An apparatus to facilitate utilizing structured sparsity in systolic arrays is disclosed. The apparatus includes a processor comprising a systolic array to receive data from a plurality of source registers, the data comprising unpacked source data, structured source data that is packed based on sparsity, and metadata corresponding to the structured source data; identify portions of the unpacked source data to multiply with the structured source data, the portions of the unpacked source data identified based on the metadata; and output, to a destination register, a result of multiplication of the portions of the unpacked source data and the structured source data.
Type: Application
Filed: November 30, 2020
Publication date: March 18, 2021
Applicant: Intel Corporation
Inventors: Subramaniam Maiyuran, Jorge Parra, Ashutosh Garg, Chandra Gurram, Chunhui Mei, Durgesh Borkar, Shubra Marwaha, Supratim Pal, Varghese George, Wei Xiong, Yan Li, Yongsheng Liu, Dipankar Das, Sasikanth Avancha, Dharma Teja Vooturi, Naveen K. Mellempudi
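The packed-data-plus-metadata scheme can be illustrated with a 2:4-style structured-sparse dot product: the packed operand holds only the nonzeros, and the metadata records each nonzero's position so the matching dense elements can be selected. The 2-of-4 ratio and layout here are assumptions drawn from common structured-sparsity schemes, not details stated in the abstract:

```python
def structured_sparse_dot(packed_vals, metadata, dense, group=4):
    """Dot product of a structured-sparse vector with a dense vector.
    `packed_vals` holds the kept nonzeros (group//2 per group of `group`
    dense elements); `metadata[k]` is the position of packed_vals[k]
    within its group. Only matching dense elements are multiplied."""
    nnz_per_group = group // 2
    total = 0.0
    for k, (val, pos) in enumerate(zip(packed_vals, metadata)):
        group_idx = k // nnz_per_group
        dense_idx = group_idx * group + pos   # metadata selects the operand
        total += val * dense[dense_idx]
    return total
```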
-
Publication number: 20210072955
Abstract: An apparatus to facilitate computer number format conversion is disclosed. The apparatus comprises a control unit to receive data format information indicating a first precision data format in which input data is to be received, and converter hardware to receive the input data and convert the first precision data format to a second precision data format based on the data format information.
Type: Application
Filed: September 6, 2019
Publication date: March 11, 2021
Applicant: Intel Corporation
Inventors: Naveen MELLEMPUDI, Dipankar DAS, Chunhui MEI, Kristopher WONG, Dhiraj D. KALAMKAR, Hong H. JIANG, Subramaniam Maiyuran, Varghese George
-
Patent number: 10504462
Abstract: A device for graphics processing includes a memory and at least one processor. The at least one processor is configured to generate image data for an image, fetch, for each two-dimensional matrix of multiple two-dimensional matrices of units of the image, a respective portion of the image data, and process each two-dimensional matrix of the multiple two-dimensional matrices based on the respective portion of the image data to generate pixel data for the image. To process each two-dimensional matrix of the multiple two-dimensional matrices, the at least one processor is configured to process multiple units arranged in a first two-dimensional matrix of the multiple two-dimensional matrices and process, after processing the multiple units arranged in the first two-dimensional matrix, multiple units arranged in a second two-dimensional matrix of the multiple two-dimensional matrices.
Type: Grant
Filed: January 25, 2018
Date of Patent: December 10, 2019
Assignee: QUALCOMM Incorporated
Inventor: Chunhui Mei
-
Publication number: 20190228723
Abstract: A device for graphics processing includes a memory and at least one processor. The at least one processor is configured to generate image data for an image, fetch, for each two-dimensional matrix of multiple two-dimensional matrices of units of the image, a respective portion of the image data, and process each two-dimensional matrix of the multiple two-dimensional matrices based on the respective portion of the image data to generate pixel data for the image. To process each two-dimensional matrix of the multiple two-dimensional matrices, the at least one processor is configured to process multiple units arranged in a first two-dimensional matrix of the multiple two-dimensional matrices and process, after processing the multiple units arranged in the first two-dimensional matrix, multiple units arranged in a second two-dimensional matrix of the multiple two-dimensional matrices.
Type: Application
Filed: January 25, 2018
Publication date: July 25, 2019
Inventor: Chunhui Mei
-
Patent number: 10089708
Abstract: A texture unit of a graphics processing unit (GPU) may receive texture data from memory. The texture unit may multiply, by a multiplier circuit of the texture unit, the texture data by at least one constant, where the constant is not associated with a filtering operation, and where the texture data comprises at least one texel. The texture unit may also output a result of multiplying the texture data by the at least one constant.
Type: Grant
Filed: April 28, 2016
Date of Patent: October 2, 2018
Assignee: QUALCOMM Incorporated
Inventors: Andrew Evan Gruber, Lin Chen, Liang Li, Chunhui Mei
-
Patent number: 10026145
Abstract: Techniques are described for allowing concurrent execution of multiple different tasks and preempted, prioritized execution of tasks on a shader processor. In an example operation, a driver executed by a central processing unit (CPU) configures GPU resources based on the needs of a first “host” shader to allow the first shader to execute “normally” on the GPU. The GPU may observe two sets of tasks: “host” tasks and “guest” tasks. Based on, for example, detecting an availability of resources, the GPU may determine that a “guest” task may be run while the “host” task is running. A second “guest” shader executes on the GPU by using resources that were configured for the first “host” shader if there are available resources and, in some examples, additional resources are obtained through software-programmable means.
Type: Grant
Filed: December 13, 2016
Date of Patent: July 17, 2018
Assignee: QUALCOMM Incorporated
Inventors: Alexei Vladimirovich Bourd, Maxim Kazakov, Chunhui Mei, Sumesh Udayakumaran
-
Publication number: 20180165786
Abstract: Techniques are described for allowing concurrent execution of multiple different tasks and preempted, prioritized execution of tasks on a shader processor. In an example operation, a driver executed by a central processing unit (CPU) configures GPU resources based on the needs of a first “host” shader to allow the first shader to execute “normally” on the GPU. The GPU may observe two sets of tasks: “host” tasks and “guest” tasks. Based on, for example, detecting an availability of resources, the GPU may determine that a “guest” task may be run while the “host” task is running. A second “guest” shader executes on the GPU by using resources that were configured for the first “host” shader if there are available resources and, in some examples, additional resources are obtained through software-programmable means.
Type: Application
Filed: December 13, 2016
Publication date: June 14, 2018
Inventors: Alexei Vladimirovich Bourd, Maxim Kazakov, Chunhui Mei, Sumesh Udayakumaran
-
Publication number: 20170316540
Abstract: A texture unit of a graphics processing unit (GPU) may receive texture data from memory. The texture unit may multiply, by a multiplier circuit of the texture unit, the texture data by at least one constant, where the constant is not associated with a filtering operation, and where the texture data comprises at least one texel. The texture unit may also output a result of multiplying the texture data by the at least one constant.
Type: Application
Filed: April 28, 2016
Publication date: November 2, 2017
Inventors: Andrew Evan Gruber, Lin Chen, Liang Li, Chunhui Mei
-
Patent number: 9679347
Abstract: A graphics processing unit (GPU) may allocate a shared data channel in on-chip graphics memory of the GPU that is shared by at least two stages of a graphics processing pipeline. Shader units in the GPU may execute the at least two stages of the graphics processing pipeline. The GPU may store, in the shared data channel in on-chip graphics memory, data produced by each of the at least two stages of the graphics processing pipeline executing on the shader units.
Type: Grant
Filed: February 18, 2014
Date of Patent: June 13, 2017
Assignee: QUALCOMM Incorporated
Inventors: Chunhui Mei, Vineet Goel, Donghyun Kim
-
Patent number: 9652284
Abstract: A device includes a memory, and at least one programmable processor configured to determine, for each warp of a plurality of warps, whether a Boolean expression is true for a corresponding thread of each warp, pause execution of each warp having a corresponding thread for which the expression is true, determine a number of active threads for each of the plurality of warps for which the expression is true, sort the plurality of warps for which the expression is true based on the number of active threads in each of the plurality of warps, swap thread data of an active thread of a first warp of the plurality of warps with thread data of an inactive thread of a second warp of the plurality of warps, and resume execution of the at least one of the plurality of warps for which the expression is true.
Type: Grant
Filed: October 1, 2013
Date of Patent: May 16, 2017
Assignee: QUALCOMM Incorporated
Inventors: Chunhui Mei, Alexei Vladimirovich Bourd, Lin Chen
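The sort-and-swap step above amounts to warp compaction: packing active threads into as few warps as possible. The sketch below models each warp as a list of active flags and elides the actual thread-data movement, so the function name and representation are illustrative assumptions:

```python
def compact_warps(warps):
    """Warps are lists of booleans (True = active lane). Sort warps by
    active count, then swap active lanes from the least-populated warps
    into free lanes of the most-populated ones, so fewer warps end up
    holding all the active threads."""
    order = sorted(range(len(warps)),
                   key=lambda i: sum(warps[i]), reverse=True)
    dense, sparse = 0, len(order) - 1
    while dense < sparse:
        w_dense, w_sparse = warps[order[dense]], warps[order[sparse]]
        if False not in w_dense:      # dense warp is full: move on
            dense += 1
            continue
        if True not in w_sparse:      # sparse warp is empty: move on
            sparse -= 1
            continue
        # Swap one active lane of the sparse warp into a free dense lane.
        w_dense[w_dense.index(False)] = True
        w_sparse[w_sparse.index(True)] = False
    return warps
```

After compaction, emptied warps need not be scheduled at all, which is the throughput benefit the abstract is after.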
-
Patent number: 9626762
Abstract: Techniques are described for stochastic rasterization. A graphics processing unit (GPU) may discard samples of bounding polygons that together indicate movement of one or more primitives before a pixel shader processes the samples. The GPU may leverage a stencil buffer and stencil test for discarding such samples.
Type: Grant
Filed: April 14, 2015
Date of Patent: April 18, 2017
Assignee: QUALCOMM Incorporated
Inventors: Chunhui Mei, Tao Wang, Young In Yeo, Vineet Goel