Patents by Inventor Shoumeng Yan

Shoumeng Yan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150019825
    Abstract: A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU) and a shared virtual memory supported by a physical private memory space of at least one heterogeneous processor or a physical shared memory shared by the heterogeneous processors. The CPU (producer) may create shared multi-version data and store such shared multi-version data in the physical private memory space or the physical shared memory. The GPU (consumer) may acquire or access the shared multi-version data.
    Type: Application
    Filed: October 1, 2014
    Publication date: January 15, 2015
    Inventors: Ying Gao, Hu Chen, Shoumeng Yan, Xiaocheng Zhou, Sai Luo, Bratin Saha
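
The abstract above (20150019825) describes a producer/consumer arrangement in which the CPU publishes multi-version data into shared memory and the GPU acquires a specific version. The following C++ sketch only illustrates that multi-versioning idea on ordinary host threads: a std::thread stands in for the GPU consumer, and all types and names (VersionedStore, release, acquire) are invented for this example rather than taken from the filing.

```cpp
// Hypothetical, simplified illustration of producer/consumer multi-version sharing.
// A std::thread stands in for the GPU; all names are invented for this sketch.
#include <condition_variable>
#include <cstdio>
#include <map>
#include <mutex>
#include <thread>
#include <vector>

struct VersionedStore {
    std::mutex m;
    std::condition_variable cv;
    std::map<int, std::vector<int>> versions;  // version number -> published snapshot

    // Producer ("CPU") publishes a new immutable snapshot under a version number.
    void release(int version, std::vector<int> data) {
        std::lock_guard<std::mutex> lock(m);
        versions.emplace(version, std::move(data));
        cv.notify_all();
    }

    // Consumer ("GPU") blocks until the requested version is visible, then reads it.
    std::vector<int> acquire(int version) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return versions.count(version) != 0; });
        return versions.at(version);
    }
};

int main() {
    VersionedStore store;
    std::thread consumer([&] {
        std::vector<int> v1 = store.acquire(1);   // sees exactly the published version
        std::printf("consumer saw version 1 with %zu elements\n", v1.size());
    });
    store.release(1, {10, 20, 30});               // producer publishes version 1
    consumer.join();
    return 0;
}
```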
  • Publication number: 20150012281
    Abstract: An audio accelerator includes a decoder to decode first and second sets of data blocks, a processor to process the first and second sets of decoded data blocks, a storage area to store the first and second sets of processed data blocks, and a controller to generate interrupt signals for controlling operation of the decoder. The controller may control a rate at which data blocks are to be decoded by the decoder to reduce a time gap between outputting adjacent ones of the data blocks from the first and second sets in the storage area.
    Type: Application
    Filed: September 29, 2012
    Publication date: January 8, 2015
    Inventors: Xiaocheng Zhou, Shoumeng Yan
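
Publication 20150012281 describes a controller that paces the decoder so decoded blocks reach the storage area with little or no gap between adjacent blocks. Below is a minimal, purely software model of that pacing idea; the decode function, the watermark value, and the queue used as the storage area are assumptions made for illustration, not details from the filing.

```cpp
// Hypothetical software model of pacing a block decoder so the output buffer never
// drains between adjacent blocks. All names and values are invented.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <queue>
#include <thread>

struct Block { int id; };

static Block decode(int id) {           // stands in for the hardware decoder
    std::this_thread::sleep_for(std::chrono::milliseconds(2));
    return Block{id};
}

int main() {
    std::queue<Block> storage;          // "storage area" holding decoded blocks
    const std::size_t low_watermark = 2;
    int next_block = 0, total_blocks = 8;

    // Output loop: whenever the storage area runs low, more blocks are decoded
    // (the role the controller's interrupts play), so adjacent blocks are output
    // without a time gap between them.
    while (next_block < total_blocks || !storage.empty()) {
        while (storage.size() < low_watermark && next_block < total_blocks) {
            storage.push(decode(next_block++));   // paced refill
        }
        std::printf("output block %d (buffered: %zu)\n",
                    storage.front().id, storage.size());
        storage.pop();
    }
    return 0;
}
```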
  • Publication number: 20140375662
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: July 1, 2014
    Publication date: December 25, 2014
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
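
The programming model in 20140375662 shares only a part of the virtual address space between the CPU and GPU rather than the whole space. The sketch below pictures that partial-sharing idea in plain C++ by carving shared allocations out of one dedicated arena; the SharedArena class and its bump allocator are hypothetical stand-ins, not the mechanism claimed in the publication.

```cpp
// Minimal sketch of "share only part of the address space": shared data is carved out
// of one reserved arena instead of sharing every allocation. Purely hypothetical.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

class SharedArena {                       // stands in for the CPU/GPU-visible window
    std::vector<std::uint8_t> storage_;
    std::size_t used_ = 0;
public:
    explicit SharedArena(std::size_t bytes) : storage_(bytes) {}

    void* alloc(std::size_t bytes) {      // bump allocation inside the shared window
        if (used_ + bytes > storage_.size()) return nullptr;
        void* p = storage_.data() + used_;
        used_ += bytes;
        return p;
    }
    bool contains(const void* p) const {  // "is this pointer in the shared region?"
        auto addr = reinterpret_cast<std::uintptr_t>(p);
        auto base = reinterpret_cast<std::uintptr_t>(storage_.data());
        return addr >= base && addr < base + storage_.size();
    }
};

int main() {
    SharedArena shared(1 << 20);          // 1 MiB pretend-shared region
    int* shared_counter = static_cast<int*>(shared.alloc(sizeof(int)));
    int private_counter = 0;              // ordinary, CPU-private allocation
    std::printf("shared? %d / %d\n",
                shared.contains(shared_counter), shared.contains(&private_counter));
    return 0;
}
```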
  • Publication number: 20140379946
    Abstract: An apparatus may include a processor arranged to receive an input signal from an input device and a first event conversion module. The first event conversion module may receive an input event from the input device as an operating system (OS)-specific event arranged in a format operable by a first operating system, convert the OS-specific event into a converted event having an OS-independent format, and dispatch the converted event for processing. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: November 3, 2011
    Publication date: December 25, 2014
    Inventors: Danf Zhang, Shoumeng Yan, Peng Guo, Gansha Wu
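
Publication 20140379946 describes converting an OS-specific input event into an OS-independent format before dispatching it. The following sketch shows one way such a conversion layer could look; the event layouts, the enum, and the function names are all invented for this example.

```cpp
// Hypothetical sketch of converting an OS-specific input event into a neutral,
// OS-independent form before dispatching it; all fields and names are invented.
#include <cstdio>
#include <functional>

struct NativeEventA { int keycode; bool down; };   // pretend "OS A" event layout

enum class EventKind { KeyDown, KeyUp };
struct PortableEvent { EventKind kind; int key; }; // OS-independent format

// First event conversion module for "OS A": native -> portable.
PortableEvent convert_from_os_a(const NativeEventA& e) {
    return PortableEvent{e.down ? EventKind::KeyDown : EventKind::KeyUp, e.keycode};
}

// The dispatcher only ever sees the portable form.
void dispatch(const PortableEvent& e,
              const std::function<void(const PortableEvent&)>& sink) {
    sink(e);
}

int main() {
    NativeEventA raw{42, true};
    dispatch(convert_from_os_a(raw), [](const PortableEvent& e) {
        std::printf("portable event: key=%d down=%d\n",
                    e.key, e.kind == EventKind::KeyDown);
    });
    return 0;
}
```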
  • Patent number: 8868848
    Abstract: A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU) and a shared virtual memory supported by a physical private memory space of at least one heterogeneous processor or a physical shared memory shared by the heterogeneous processors. The CPU (producer) may create shared multi-version data and store such shared multi-version data in the physical private memory space or the physical shared memory. The GPU (consumer) may acquire or access the shared multi-version data.
    Type: Grant
    Filed: December 21, 2009
    Date of Patent: October 21, 2014
    Assignee: Intel Corporation
    Inventors: Ying Gao, Hu Chen, Shoumeng Yan, Xiaocheng Zhou, Sai Luo, Bratin Saha
  • Publication number: 20140306972
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Application
    Filed: February 5, 2014
    Publication date: October 16, 2014
    Inventors: Xiaocheng Zhou, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
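
The language support described in 20140306972 lets CPU code offload a kernel that can call back into preexisting CPU functions and operate on shared objects with virtual functions. The CPU-only sketch below imitates that flow with a host thread standing in for the GPU kernel; no real offload or GPU runtime is involved, and every name in it is illustrative.

```cpp
// CPU-only sketch of the offload idea: a "kernel" (here just a thread) operates on a
// shared object with a virtual function and calls back into a preexisting "CPU
// library" routine. Purely illustrative; no real GPU is involved.
#include <cmath>
#include <cstdio>
#include <thread>

double cpu_library_scale(double x) { return std::sqrt(x); }  // preexisting CPU code

struct Filter {                       // shared object with a virtual function
    virtual double apply(double x) const = 0;
    virtual ~Filter() = default;
};
struct Halve : Filter {
    double apply(double x) const override { return 0.5 * x; }
};

int main() {
    Halve shared_filter;
    double out = 0.0;

    // "Offload the kernel": the kernel both dispatches the virtual call and calls
    // back into the preexisting CPU library without recompiling that library.
    std::thread kernel([&] { out = cpu_library_scale(shared_filter.apply(16.0)); });
    kernel.join();

    std::printf("kernel result: %f\n", out);  // 0.5 * 16 = 8, sqrt(8) ~ 2.83
    return 0;
}
```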
  • Publication number: 20140137137
    Abstract: Systems and methods may provide for using audio output device driver logic to maintain one or more states of an audio accelerator in a memory store, detect a suspend event, and deactivate the audio accelerator in response to the suspend event. In addition, firmware logic of the audio accelerator may be used to detect a resume event with respect to the audio accelerator, and retrieve one or more states of the audio accelerator directly from the memory store in response to the resume event. Thus, the retrieval of the one or more states can bypass the driver logic.
    Type: Application
    Filed: December 30, 2011
    Publication date: May 15, 2014
    Inventors: Shoumeng Yan, Xiaocheng Zhou, Lomesh Agarwal
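
Publication 20140137137 has the driver save accelerator state to a memory store on suspend, while the accelerator's firmware restores that state directly from the store on resume, bypassing the driver. The sketch below models only the data flow of that idea; the state fields and the in-process MemoryStore are assumptions made for illustration.

```cpp
// Hypothetical model of the suspend/resume flow: the driver saves accelerator state
// into a memory store on suspend, and "firmware" restores it directly from that store
// on resume without going back through the driver. All types here are invented.
#include <cstdio>
#include <optional>

struct AcceleratorState { int sample_rate; int volume; };

struct MemoryStore {                    // persistent store both sides can reach
    std::optional<AcceleratorState> saved;
};

// Driver-side path: runs when a suspend event is detected.
void driver_on_suspend(const AcceleratorState& live, MemoryStore& store) {
    store.saved = live;                 // snapshot the state, then power down
}

// Firmware-side path: runs on resume and bypasses the driver entirely.
AcceleratorState firmware_on_resume(const MemoryStore& store) {
    return store.saved.value_or(AcceleratorState{48000, 50});  // defaults if no snapshot
}

int main() {
    MemoryStore store;
    AcceleratorState live{44100, 80};
    driver_on_suspend(live, store);
    AcceleratorState restored = firmware_on_resume(store);
    std::printf("restored: %d Hz, volume %d\n", restored.sample_rate, restored.volume);
    return 0;
}
```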
  • Patent number: 8719839
    Abstract: A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU), for example. The GPU may be coupled to a GPU compiler and a GPU linker/loader, and the CPU may be coupled to a CPU compiler and a CPU linker/loader. The user may create a shared object in an object-oriented language, and the shared object may include virtual functions. The shared object may be fine-grain partitioned between the heterogeneous processors. The GPU compiler may allocate the shared object to the CPU and may create a first and a second enabling path to allow the GPU to invoke virtual functions of the shared object. Thus, the shared object that may include virtual functions may be shared seamlessly between the CPU and the GPU.
    Type: Grant
    Filed: October 30, 2009
    Date of Patent: May 6, 2014
    Assignee: Intel Corporation
    Inventors: Shoumeng Yan, Xiaocheng Zhou, Ying Gao, Mohan Rajagopalan, Rajiv Deodhar, David Putzolu, Clark Nelson, Milind Girkar, Robert Geva, Tiger Chen, Sai Luo, Stephen Junkins, Bratin Saha, Ravi Narayanaswamy, Patrick Xi
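
Patent 8719839 describes enabling paths created by the GPU compiler so the GPU can invoke virtual functions of a shared object allocated on the CPU. As a loose analogy only, the sketch below hand-rolls per-side dispatch tables keyed by a type index; this is not the patented mechanism, just one way to picture side-specific resolution of the same virtual call.

```cpp
// Hand-rolled illustration of per-side dispatch for a shared object's virtual call:
// each "side" keeps its own table of function pointers, and the shared object stores
// only a type index, so either side resolves the call through its own table. Invented.
#include <array>
#include <cstdio>

struct SharedObj { int type_index; double value; };   // what both sides see

using Method = double (*)(const SharedObj&);

// "CPU-side" and "GPU-side" implementations of the same virtual method, by type.
double cpu_scale(const SharedObj& o) { return o.value * 2.0; }
double gpu_scale(const SharedObj& o) { return o.value * 2.0; }  // same semantics

std::array<Method, 1> cpu_table = {cpu_scale};        // per-side dispatch tables
std::array<Method, 1> gpu_table = {gpu_scale};

double invoke(const SharedObj& o, const std::array<Method, 1>& side_table) {
    return side_table[o.type_index](o);               // "enabling path" to the method
}

int main() {
    SharedObj obj{0, 21.0};
    std::printf("cpu: %f, gpu: %f\n", invoke(obj, cpu_table), invoke(obj, gpu_table));
    return 0;
}
```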
  • Patent number: 8683487
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: March 25, 2014
    Assignee: Intel Corporation
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Gao Ying, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Publication number: 20140049550
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: September 4, 2013
    Publication date: February 20, 2014
    Inventors: Hu Chen, Gao Ying, Zhou Xiaocheng, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Publication number: 20130346978
    Abstract: Disclosed is a method that may include hosting, by a virtual machine manager of a local machine, a virtual machine having a device driver. The method may include obtaining, by the virtual machine manager, from a stub driver on a remote machine, information about an I/O device on the remote machine. The I/O device on the remote machine may be bound to the stub driver on the remote machine. The method may include instantiating, by the virtual machine manager, a virtual I/O device on the local machine corresponding to the I/O device on the remote machine. The method may include collaborating, by the virtual machine manager, with the stub driver on the remote machine to effectuate a real access to the I/O device on the remote machine for an access to the virtual I/O device by the device driver on behalf of a program on the local machine. Other embodiments may be described and claimed.
    Type: Application
    Filed: March 30, 2012
    Publication date: December 26, 2013
    Inventors: Zhefu Jiang, Shoumeng Yan, Gansha Wu
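
In 20130346978, a virtual machine manager instantiates a local virtual I/O device and collaborates with a stub driver on the remote machine to carry out the real device access. The sketch below compresses that into two in-process classes purely for illustration; the class names and the register-read interface are invented, and the "remote" side runs in the same process.

```cpp
// Minimal sketch of the remote-I/O idea: a local "virtual device" forwards each access
// to a stub driver that fronts the real device on the remote machine. Everything here
// (names, in-process "remote" stub) is a stand-in for illustration only.
#include <cstdio>
#include <string>

struct DeviceInfo { std::string model; };

class StubDriver {                        // runs on the remote machine, owns the device
public:
    DeviceInfo describe() const { return DeviceInfo{"example-usb-disk"}; }
    int read_register(int reg) const { return reg * 10; }  // pretend real device access
};

class VirtualIoDevice {                   // instantiated locally by the VMM
    const StubDriver& remote_;
public:
    explicit VirtualIoDevice(const StubDriver& remote) : remote_(remote) {}
    int read_register(int reg) const {
        return remote_.read_register(reg); // collaborate with the stub to do real I/O
    }
};

int main() {
    StubDriver remote_stub;                               // "remote machine"
    std::printf("remote device: %s\n", remote_stub.describe().model.c_str());
    VirtualIoDevice vdev(remote_stub);                    // VMM-side virtual device
    std::printf("guest read of reg 3 -> %d\n", vdev.read_register(3));
    return 0;
}
```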
  • Patent number: 8531471
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: September 10, 2013
    Assignee: Intel Corporation
    Inventors: Hu Chen, Ying Gao, Zhou Xiaocheng, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Patent number: 8516220
    Abstract: A page table entry dirty bit system may be utilized to record dirty information for a software distributed shared memory system. In some embodiments, this may improve performance without substantially increasing overhead because the dirty bit recording system is already available in certain processors. By providing extra bits, coherence can be obtained with respect to all the other uses of the existing page table entry dirty bits.
    Type: Grant
    Filed: May 11, 2010
    Date of Patent: August 20, 2013
    Assignee: Intel Corporation
    Inventors: Shoumeng Yan, Ying Gao, Xiaocheng Zhou, Hu Chen, Sai Luo, Bratin Saha
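
Patent 8516220 records dirty information for a software distributed shared memory system using page table entry dirty bits. The sketch below shows only the underlying bookkeeping pattern, with an ordinary per-page bitset standing in for the hardware-managed PTE dirty bits; it is a conceptual model, not the patented hardware/OS interaction.

```cpp
// Software-only illustration of per-page dirty tracking: writes mark a per-page dirty
// flag, and a sync step ships only dirty pages and clears the flags. Real PTE dirty
// bits are hardware-managed; this bitset is just a stand-in for the concept.
#include <bitset>
#include <cstddef>
#include <cstdio>
#include <vector>

constexpr std::size_t kPageSize = 4096;
constexpr std::size_t kNumPages = 8;

std::vector<char> memory(kPageSize * kNumPages);
std::bitset<kNumPages> dirty;             // stand-in for per-page dirty bits

void write_byte(std::size_t addr, char v) {
    memory[addr] = v;
    dirty.set(addr / kPageSize);          // hardware would set the PTE dirty bit here
}

void sync_dirty_pages() {                 // ship only modified pages, then clear bits
    for (std::size_t p = 0; p < kNumPages; ++p) {
        if (dirty.test(p)) {
            std::printf("syncing page %zu\n", p);
            dirty.reset(p);
        }
    }
}

int main() {
    write_byte(100, 'a');                 // touches page 0
    write_byte(3 * kPageSize + 5, 'b');   // touches page 3
    sync_dirty_pages();                   // only pages 0 and 3 are synchronized
    return 0;
}
```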
  • Publication number: 20130187936
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Application
    Filed: March 11, 2013
    Publication date: July 25, 2013
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Gao Ying, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Publication number: 20130173894
    Abstract: A computing platform may include heterogeneous processors (e.g., a CPU and a GPU) to support sharing of virtual functions between such processors. In one embodiment, a CPU-side vtable pointer used to access a shared object from the CPU may be used to determine a GPU vtable if a GPU-side table exists. In another embodiment, a shared non-coherent region, which may not maintain data consistency, may be created within the shared virtual memory. The CPU-side and GPU-side data stored within the shared non-coherent region may have the same address as seen from the CPU side and the GPU side. However, the contents of the CPU-side data may be different from that of the GPU-side data, as the shared virtual memory may not maintain coherency at run-time. In one embodiment, the vtable pointer (vptr) may be modified to point to the CPU vtable and GPU vtable stored in the shared virtual memory.
    Type: Application
    Filed: September 24, 2010
    Publication date: July 4, 2013
    Inventors: Shoumeng Yan, Sai Luo, Xiaocheng Zhou, Ying Gao, Hu Chen, Bratin Saha
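
Publication 20130173894 uses the CPU-side vtable pointer of a shared object to determine the corresponding GPU vtable when one exists. As a rough analogy, the sketch below keeps a registry mapping a "CPU" function-pointer table to a "GPU" one; the registry, the table layout, and all names are invented for this illustration.

```cpp
// Simplified sketch of mapping a CPU-side vtable to a GPU-side one: a registry keyed
// by the CPU table locates the matching "GPU" table (here just another function-pointer
// table), so the same object dispatches correctly on either side. Names are invented.
#include <cstdio>
#include <map>

using Fn = double (*)(double);
struct VTable { Fn apply; };

double cpu_apply(double x) { return x + 1.0; }
double gpu_apply(double x) { return x + 1.0; }   // same semantics, "GPU" copy

VTable cpu_vtable{cpu_apply};
VTable gpu_vtable{gpu_apply};

// Registry consulted when an object crosses sides: CPU table -> GPU table, if any.
std::map<const VTable*, const VTable*> gpu_table_for = {{&cpu_vtable, &gpu_vtable}};

struct SharedObject { const VTable* vptr; double value; };

double call_on_gpu_side(const SharedObject& o) {
    auto it = gpu_table_for.find(o.vptr);         // determine GPU vtable from CPU vptr
    const VTable* table = (it != gpu_table_for.end()) ? it->second : o.vptr;
    return table->apply(o.value);
}

int main() {
    SharedObject obj{&cpu_vtable, 41.0};          // object created with CPU-side vptr
    std::printf("gpu-side call: %f\n", call_on_gpu_side(obj));
    return 0;
}
```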
  • Patent number: 8397241
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: March 12, 2013
    Assignee: Intel Corporation
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Publication number: 20130061240
    Abstract: A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU), for example. The GPU may be coupled to a GPU compiler and a GPU linker/loader, and the CPU may be coupled to a CPU compiler and a CPU linker/loader. The user may create a shared object in an object-oriented language, and the shared object may include virtual functions. The shared object may be fine-grain partitioned between the heterogeneous processors. The GPU compiler may allocate the shared object to the CPU and may create a first and a second enabling path to allow the GPU to invoke virtual functions of the shared object. Thus, the shared object that may include virtual functions may be shared seamlessly between the CPU and the GPU.
    Type: Application
    Filed: October 30, 2009
    Publication date: March 7, 2013
    Inventors: Shoumeng Yan, Xiaocheng Zhou, Ying Gao, Mohan Rajagopalan, Rajiv Deodhar, David Putzolu, Clark Nelson, Milind Girkar, Robert Geva, Tiger Chen, Sai Luo, Stephen Junkins, Bratin Saha, Ravi Narayanaswamy, Patrick Xi
  • Publication number: 20120023296
    Abstract: A page table entry dirty bit system may be utilized to record dirty information for a software distributed shared memory system. In some embodiments, this may improve performance without substantially increasing overhead because the dirty bit recording system is already available in certain processors. By providing extra bits, coherence can be obtained with respect to all the other uses of the existing page table entry dirty bits.
    Type: Application
    Filed: May 11, 2010
    Publication date: January 26, 2012
    Inventors: Shoumeng Yan, Ying Gao, Xiaocheng Zhou, Hu Chen, Sai Luo, Bratin Saha
  • Publication number: 20110153957
    Abstract: A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU) and a shared virtual memory supported by a physical private memory space of at least one heterogeneous processor or a physical shared memory shared by the heterogeneous processors. The CPU (producer) may create shared multi-version data and store such shared multi-version data in the physical private memory space or the physical shared memory. The GPU (consumer) may acquire or access the shared multi-version data.
    Type: Application
    Filed: December 21, 2009
    Publication date: June 23, 2011
    Inventors: Ying Gao, Hu Chen, Shoumeng Yan, Xiaocheng Zhou, Sai Luo, Bratin Saha
  • Publication number: 20100122264
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Application
    Filed: December 30, 2008
    Publication date: May 13, 2010
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha