Patents by Inventor Peinan Zhang

Peinan Zhang is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Short illustrative code sketches for several of the described techniques follow the listing.

  • Publication number: 20220391248
    Abstract: Examples relate to a monitoring apparatus, a monitoring device, a monitoring method, and a corresponding computer program and system. The monitoring apparatus is configured to obtain a first compute kernel to be monitored and to obtain one or more second compute kernels. The monitoring apparatus is configured to provide instructions, using interface circuitry, to control circuitry of a computing device comprising a plurality of execution units. The instructions instruct the control circuitry to execute the first compute kernel using a first slice of the plurality of execution units, to execute the one or more second compute kernels concurrently with the first compute kernel using one or more second slices of the plurality of execution units, and to provide information on a change of a status of at least one hardware counter associated with the first slice that is caused by the execution of the first compute kernel.
    Type: Application
    Filed: August 12, 2022
    Publication date: December 8, 2022
    Inventors: Prashant Chaudhari, Stanislav Bratanov, Peinan Zhang, Jeffery S. Boles
  • Patent number: 9588826
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: March 7, 2017
    Assignee: Intel Corporation
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Patent number: 9400702
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Grant
    Filed: July 1, 2014
    Date of Patent: July 26, 2016
    Assignee: Intel Corporation
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Publication number: 20150123978
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: December 10, 2014
    Publication date: May 7, 2015
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Patent number: 8997114
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Grant
    Filed: February 5, 2014
    Date of Patent: March 31, 2015
    Assignee: Intel Corporation
    Inventors: Xiaocheng Zhou, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Publication number: 20140375662
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: July 1, 2014
    Publication date: December 25, 2014
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Publication number: 20140306972
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Application
    Filed: February 5, 2014
    Publication date: October 16, 2014
    Inventors: Xiaocheng Zhou, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Patent number: 8683487
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: March 25, 2014
    Assignee: Intel Corporation
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Gao Ying, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Publication number: 20140049550
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: September 4, 2013
    Publication date: February 20, 2014
    Inventors: Hu Chen, Gao Ying, Zhou Xiaocheng, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Patent number: 8531471
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: September 10, 2013
    Assignee: Intel Corporation
    Inventors: Hu Chen, Ying Gao, Zhou Xiaocheng, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Publication number: 20130187936
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Application
    Filed: March 11, 2013
    Publication date: July 25, 2013
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Gao Ying, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Patent number: 8397241
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: March 12, 2013
    Assignee: Intel Corporation
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Patent number: 8296748
    Abstract: A method to provide effective control and data flow information in an Intermediate Representation (IR) form. A Path Sensitive Single Assignment (PSA) IR form with effective and explicit control and data path information supports control-flow-sensitive optimizations such as path sensitive symbolic substitution, array privatization and speculative multithreading. In the definition of PSA form, besides defining new versioned variables, the gamma functions keep control path information. The gamma function in PSA form keeps the basic attribute of SSA IR form, and only one definition exists for each use. Therefore, all existing Static Single Assignment (SSA) IR form based analyses can be applied in PSA form. The gamma function in PSA form keeps all essential control flow information and eliminates unnecessary predicates at the same time.
    Type: Grant
    Filed: July 24, 2008
    Date of Patent: October 23, 2012
    Assignee: Intel Corporation
    Inventors: Buqi Cheng, Tin-Fook Ngai, Zhaohui Du, PeiNan Zhang
  • Publication number: 20100118041
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: December 30, 2008
    Publication date: May 13, 2010
    Inventors: Hu Chen, Ying Gao, Zhou Xiaocheng, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Publication number: 20100122264
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Application
    Filed: December 30, 2008
    Publication date: May 13, 2010
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Publication number: 20100023931
    Abstract: A method to provide effective control and data flow information in an Intermediate Representation (IR) form. A Path Sensitive Single Assignment (PSA) IR form with effective and explicit control and data path information supports control-flow-sensitive optimizations such as path sensitive symbolic substitution, array privatization and speculative multithreading. In the definition of PSA form, besides defining new versioned variables, the gamma functions keep control path information. The gamma function in PSA form keeps the basic attribute of SSA IR form, and only one definition exists for each use. Therefore, all existing Static Single Assignment (SSA) IR form based analyses can be applied in PSA form. The gamma function in PSA form keeps all essential control flow information and eliminates unnecessary predicates at the same time.
    Type: Application
    Filed: July 24, 2008
    Publication date: January 28, 2010
    Inventors: Buqi Cheng, Tin-Fook Ngai, Zhaohui Du, PeiNan Zhang
  • Patent number: 7363211
    Abstract: The invention relates to a method for modeling a device in a topology, including defining a managed object corresponding to the device, defining a managed resource using the managed object, and creating a node in the topology, wherein creating the node comprises associating the node with the managed resource.
    Type: Grant
    Filed: March 3, 2004
    Date of Patent: April 22, 2008
    Assignee: Sun Microsystems, Inc.
    Inventors: Narayani Naganathan, Peinan Zhang
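
Illustrative code sketches

The sketches below are editorial illustrations of the general techniques described in the abstracts above. They are CPU-only simulations written against assumed, hypothetical interfaces; none of the class, function, or parameter names come from the patents themselves or from any shipped product.

For publication 20220391248, a minimal sketch of the monitoring idea, assuming the execution units and per-slice hardware counters can be stood in for by host threads and atomic counters: the monitored kernel runs on its own slice while a second kernel runs concurrently on a different slice, and only the monitored slice's counter delta is reported.

    // Minimal CPU-side simulation of the idea in publication 20220391248: the
    // monitored kernel runs on its own slice of execution units while another
    // kernel runs concurrently on a separate slice, and only the counter of the
    // monitored slice is read before and after execution. Slice,
    // run_kernel_on_slice, and the counter itself are illustrative assumptions.
    #include <atomic>
    #include <cstdint>
    #include <iostream>
    #include <thread>

    struct Slice {
        int first_eu;                           // first execution unit in this slice
        int num_eus;                            // number of execution units in the slice
        std::atomic<std::uint64_t> counter{0};  // stand-in for a per-slice hardware counter
    };

    // A "kernel" here is just work that bumps the counter of the slice it runs on.
    void run_kernel_on_slice(Slice& slice, std::uint64_t work_items) {
        for (std::uint64_t i = 0; i < work_items; ++i) {
            slice.counter.fetch_add(1, std::memory_order_relaxed);
        }
    }

    int main() {
        // Partition 16 execution units into a monitored slice and a background slice.
        Slice monitored{0, 8};
        Slice background{8, 8};

        // Snapshot the monitored slice's counter before the run.
        std::uint64_t before = monitored.counter.load();

        // Execute the monitored kernel and a background kernel concurrently,
        // each confined to its own slice.
        std::thread t1(run_kernel_on_slice, std::ref(monitored), 1000000);
        std::thread t2(run_kernel_on_slice, std::ref(background), 5000000);
        t1.join();
        t2.join();

        // The reported change is attributable to the monitored kernel only,
        // because the background kernel never touched this slice's counter.
        std::uint64_t after = monitored.counter.load();
        std::cout << "counter delta on monitored slice: " << (after - before) << "\n";
        return 0;
    }

Running the simulation prints a delta of 1000000 on the monitored slice even though far more work executed concurrently elsewhere, which is the attribution property the abstract describes.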
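
For the CPU-GPU shared memory programming model described in patent 8531471 and the related filings above (patents 9588826 and 9400702 and their published applications), a rough sketch, assuming the shared portion of the virtual address space can be modeled as a fixed arena and the GPU as a second host thread: allocations drawn from the arena are usable through the same pointers on both sides, while everything else stays private. SharedArena and shared_alloc are invented names for illustration.

    // Conceptual sketch of the shared-memory model: rather than sharing the
    // entire virtual address space, CPU and GPU share only a designated region,
    // and pointers into that region are meaningful on both sides. The GPU is
    // simulated by a second host thread.
    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // A fixed region standing in for the portion of the virtual address space
    // mapped into both the CPU and the GPU. Everything outside it is private.
    class SharedArena {
    public:
        explicit SharedArena(std::size_t bytes) : storage_(bytes), offset_(0) {}

        // Bump allocator handing out memory from the shared region only.
        void* shared_alloc(std::size_t bytes) {
            if (offset_ + bytes > storage_.size()) return nullptr;
            void* p = storage_.data() + offset_;
            offset_ += bytes;
            return p;
        }

    private:
        std::vector<unsigned char> storage_;
        std::size_t offset_;
    };

    // Stand-in for a GPU kernel: it receives a pointer into the shared region
    // and dereferences it directly, because the region is mapped on both sides.
    void gpu_kernel(int* shared_data, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) shared_data[i] *= 2;
    }

    int main() {
        SharedArena arena(1 << 20);  // 1 MiB shared region

        // CPU side: allocate from the shared region and fill it.
        const std::size_t n = 8;
        int* data = static_cast<int*>(arena.shared_alloc(n * sizeof(int)));
        for (std::size_t i = 0; i < n; ++i) data[i] = static_cast<int>(i);

        // "Offload" work on the shared data; private CPU state is never exposed.
        std::thread gpu(gpu_kernel, data, n);
        gpu.join();

        // CPU reads results through the very same pointers, with no explicit copies.
        for (std::size_t i = 0; i < n; ++i) std::printf("%d ", data[i]);
        std::printf("\n");
        return 0;
    }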
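
For the language support described in patent 8397241 and the related filings above (patents 8683487 and 8997114 and their published applications), a sketch under the same simulation assumptions: an offloaded kernel calls back into a preexisting CPU routine, and an object with a virtual function is shared so that the correct override is invoked from either side. The offload helper, cpu_log, and the Shape hierarchy are hypothetical names.

    // Conceptual sketch: an offloaded kernel may call back into preexisting CPU
    // functions, and objects with virtual functions can be shared so that the
    // correct override is invoked regardless of which side makes the call.
    // The offload is simulated with a host thread.
    #include <cstdio>
    #include <functional>
    #include <thread>

    // A preexisting "CPU library" routine that was never recompiled for the GPU.
    void cpu_log(const char* msg, double value) {
        std::printf("[cpu library] %s = %.2f\n", msg, value);
    }

    // A shared object type with a virtual function; both sides dispatch through it.
    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };
    struct Square : Shape {
        double side;
        explicit Square(double s) : side(s) {}
        double area() const override { return side * side; }
    };

    // Stand-in for offloading a kernel: the "GPU" runs the callable concurrently.
    void offload(const std::function<void()>& kernel) {
        std::thread gpu(kernel);
        gpu.join();
    }

    int main() {
        Square sq(3.0);           // shared object with a virtual function
        const Shape& shape = sq;  // used polymorphically on both sides

        // The offloaded kernel calls the virtual function on the shared object
        // and then calls back into the preexisting CPU routine.
        offload([&shape] {
            double a = shape.area();  // correct override (Square::area) is invoked
            cpu_log("area computed in offloaded kernel", a);
        });

        // The CPU can make the same virtual call on the same object.
        cpu_log("area computed on the CPU", shape.area());
        return 0;
    }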
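
For the Path Sensitive Single Assignment form of patent 8296748, a toy illustration of a gamma function, assuming path predicates can be represented as callable conditions: like an SSA phi, the gamma merges versioned definitions, but each incoming definition carries the predicate of its control path, so exactly one definition reaches each use while the path information stays explicit. The Def and Gamma structures are editorial inventions, not the patented IR.

    // Tiny illustration of a path-aware merge: each incoming definition of a
    // versioned variable is tagged with the predicate of the control path that
    // produced it, and the merged version resolves to the unique definition
    // whose predicate holds.
    #include <functional>
    #include <iostream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // One incoming definition, tagged with the predicate of the control path
    // on which that definition reaches the merge point.
    struct Def {
        std::string name;                 // e.g. "x_1" or "x_2"
        int value;                        // the value of that version
        std::function<bool()> predicate;  // path condition under which it reaches here
    };

    // A gamma node: the merged version is the unique definition whose path
    // predicate holds (paths are mutually exclusive by construction).
    struct Gamma {
        std::string merged_name;          // e.g. "x_3"
        std::vector<Def> incoming;

        int evaluate() const {
            for (const Def& d : incoming) {
                if (d.predicate()) {
                    std::cout << merged_name << " takes " << d.name
                              << " (its path predicate holds)\n";
                    return d.value;
                }
            }
            throw std::logic_error("no path predicate held: paths must cover the merge");
        }
    };

    int main() {
        // Source being modeled:
        //   if (c) x = 10;   // defines x_1 on the path where c is true
        //   else   x = 20;   // defines x_2 on the path where c is false
        //   use(x);          // x_3 = gamma(c: x_1, !c: x_2)
        bool c = true;

        Gamma x3{"x_3", {
            {"x_1", 10, [&] { return c; }},
            {"x_2", 20, [&] { return !c; }},
        }};

        std::cout << "use(x_3) sees " << x3.evaluate() << "\n";
        return 0;
    }

Because the predicates of the incoming paths are mutually exclusive, evaluate() selects exactly one definition, mirroring the single-definition-per-use property the abstract attributes to the gamma function.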
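
For patent 7363211, a small sketch of the three modeling steps named in the abstract, with invented class names: a managed object corresponds to the device, a managed resource is defined using that managed object, and creating a node in the topology associates the node with the managed resource.

    // Minimal sketch of the modeling steps: define a managed object for a
    // device, define a managed resource using that managed object, and create
    // a topology node associated with the managed resource. All names here are
    // illustrative assumptions only.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Step 1: a managed object corresponding to a physical or logical device.
    struct ManagedObject {
        std::string device_id;
        std::string device_type;
    };

    // Step 2: a managed resource defined in terms of the managed object.
    struct ManagedResource {
        std::string resource_name;
        std::shared_ptr<ManagedObject> object;
    };

    // Step 3: a topology node associated with the managed resource.
    struct Node {
        std::string label;
        std::shared_ptr<ManagedResource> resource;
    };

    // The topology owns nodes; creating a node means adding it here with its
    // managed resource attached.
    struct Topology {
        std::vector<Node> nodes;

        void create_node(const std::string& label,
                         std::shared_ptr<ManagedResource> resource) {
            nodes.push_back(Node{label, std::move(resource)});
        }
    };

    int main() {
        Topology topology;

        auto device   = std::make_shared<ManagedObject>(ManagedObject{"switch-42", "network-switch"});
        auto resource = std::make_shared<ManagedResource>(ManagedResource{"rack-1/switch-42", device});

        topology.create_node("edge-switch", resource);

        for (const Node& n : topology.nodes) {
            std::cout << n.label << " -> " << n.resource->resource_name
                      << " (" << n.resource->object->device_type << ")\n";
        }
        return 0;
    }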