Patents by Inventor Kun Tian

Kun Tian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200294183
    Abstract: Systems and methods for container access to graphics processing unit (GPU) resources are disclosed herein. In some embodiments, a computing system may include a physical GPU and kernel-mode driver circuitry, to communicatively couple with the physical GPU to create a plurality of emulated GPUs and a corresponding plurality of device nodes. Each device node may be associated with a single corresponding user-side container to enable communication between the user-side container and the corresponding emulated GPU. Other embodiments may be disclosed and/or claimed.
    Type: Application
    Filed: February 14, 2020
    Publication date: September 17, 2020
    Inventors: Kun Tian, Yao Zu Dong, Zhiyuan Lv
  • Patent number: 10769078
    Abstract: An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Niranjan L. Cooray, Abhishek R. Appu, Altug Koker, Joydeep Ray, Balaji Vembu, Pattabhiraman K, David Puffer, David J. Cowperthwaite, Rajesh M. Sankaran, Satyeshwar Singh, Sameer Kp, Ankur N. Shah, Kun Tian
  • Publication number: 20200267557
    Abstract: Embodiments of the present disclosure relate to a method, a device, and a computer-readable storage medium for allocating system resources among a plurality of network slices in a communication network. In the communication network, a network device determines utility values of two network slices. A network slice corresponds to a set of network functions of a set of services served thereby. A utility value of the network slice indicates the level of demand for system resources by the set of services associated with the network slice. The network device determines a difference between the utility values of the two network slices. If the difference exceeds a threshold utility value, the network device re-allocates system resources among the plurality of network slices in the communication network.
    Type: Application
    Filed: September 30, 2017
    Publication date: August 20, 2020
    Applicant: Nokia Shanghai Bell Co., Ltd.
    Inventors: Kun ZHAO, Lingfeng LIN, Ling LV, Shaoshuai FAN, Hui TIAN, Pengtao ZHAO, Zhiqian HUANG, Gang LING, Bi WANG
  • Publication number: 20200258191
    Abstract: An apparatus to facilitate data prefetching is disclosed. The apparatus includes a memory, one or more execution units (EUs) to execute a plurality of processing threads and prefetch logic to prefetch pages of data from the memory to assist in the execution of the plurality of processing threads.
    Type: Application
    Filed: February 14, 2020
    Publication date: August 13, 2020
    Applicant: Intel Corporation
    Inventors: Adam T. Lake, Guei-Yuan Lueh, Balaji Vembu, Murali Ramadoss, Prasoonkumar Surti, Abhishek R. Appu, Altug Koker, Subramaniam M. Maiyuran, Eric C. Samson, David J. Cowperthwaite, Zhi Wang, Kun Tian, David Puffer, Brian T. Lewis
  • Publication number: 20200254426
    Abstract: In embodiments, metal or metal alloy nanowires are assembled into a nanoporous membrane that can be used in methods for catalyzing various reactions under low pressures while achieving a high flow rate of the reactions. In embodiments, the membranes of the disclosure can catalyze CuAAC reactions with high efficiency and minimal leaching of active Cu species.
    Type: Application
    Filed: February 13, 2019
    Publication date: August 13, 2020
    Inventors: Xiao-Min Lin, Jiangwei Wen, Kun Wu, Jun Tian
  • Publication number: 20200192687
    Abstract: Examples may include determining a policy for primary and secondary virtual machines based on output-packet similarities. The output-packet similarities may be based on a comparison of the time intervals over which the content of packets output from the primary and secondary virtual machines matched. A mode may then be selected based, at least in part, on the determined policy.
    Type: Application
    Filed: February 26, 2020
    Publication date: June 18, 2020
    Applicant: INTEL CORPORATION
    Inventors: KUN TIAN, YAO ZU DONG
  • Publication number: 20200174819
    Abstract: Methods and apparatus to process commands from virtual machines are disclosed. Said methods include: accessing, by a virtual nonvolatile memory device in a virtual machine monitor executing on one or more processors, a first command submitted to a guest queue by a native nonvolatile memory driver executing in a guest virtual machine; generating, by the virtual nonvolatile memory device, a translated command based on the first command by translating a virtual parameter of the first command to a physical parameter associated with a physical nonvolatile memory device; submitting, by the virtual nonvolatile memory device, the translated command to a shadow queue to be processed by the physical nonvolatile memory device based on the physical parameter; and submitting, by the virtual nonvolatile memory device, a completion status entry to the guest queue, the completion status entry indicative of completion of a direct memory access operation that copies data between the physical nonvolatile memory device and a guest memory buffer.
    Type: Application
    Filed: September 26, 2017
    Publication date: June 4, 2020
    Inventors: Yao Zu Dong, Yuankai Guo, Haozhong Zhang, Kun Tian
  • Publication number: 20200117624
    Abstract: Implementations of the disclosure provide a processing device comprising an interrupt managing circuit to receive an interrupt message directed to an application container from an assignable interface (AI) of an input/output (I/O) device. The interrupt message comprises an address space identifier (ASID), an interrupt handle and a flag to distinguish the interrupt message from a direct memory access (DMA) message. Responsive to receiving the interrupt message, a data structure associated with the interrupt managing circuit is identified. An interrupt entry from the data structure is selected based on the interrupt handle. It is determined that the ASID associated with the interrupt message matches an ASID in the interrupt entry. Thereupon, an interrupt in the interrupt entry is forwarded to the application container.
    Type: Application
    Filed: March 31, 2017
    Publication date: April 16, 2020
    Inventors: Sanjay KUMAR, Rajesh M. SANKARAN, Philip R. LANTZ, Utkarsh Y. KAKAIYA, Kun TIAN
  • Patent number: 10621692
    Abstract: An apparatus and method are described for performing virtualization using virtual machine (VM) sets. For example, one embodiment of an apparatus comprises: a graphics processing unit (GPU) to process graphics commands and responsively render a plurality of image frames; a hypervisor to virtualize the GPU to share the GPU among a plurality of virtual machines (VMs); and VM set management logic to establish a plurality of VM sets, each set comprising a plurality of VMs, the VM set management logic to partition graphics memory address (GMADR) space across each of the VM sets but to share the GMADR space between VMs within each VM set.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian
  • Patent number: 10620979
    Abstract: Examples may include determining a checkpointing/delivery policy for primary and secondary virtual machines based on output-packet similarities. The output-packet similarities may be based on a comparison of the time intervals over which the content of packets output from the primary and secondary virtual machines matched. A checkpointing/delivery mode may then be selected based, at least in part, on the determined checkpointing/delivery policy.
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: April 14, 2020
    Assignee: INTEL CORPORATION
    Inventors: Kun Tian, Yao Zu Dong
  • Patent number: 10580108
    Abstract: An apparatus and method for best effort quality of service scheduling in a graphics processing architecture. For example, one embodiment of an apparatus comprises: a graphics processing unit (GPU) to perform graphics processing operations for a plurality of guests; a plurality of buffers to store one or more graphics commands associated with each guest to be executed by the GPU; and a scheduler to evaluate commands in the buffers of a first guest to estimate a cost of executing the commands, the scheduler to select all or a subset of the buffers of the first guest for execution on the GPU based on a determination that the selected buffers can be executed by the GPU within a remaining time slice allocated to the first guest.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian, Tian Zhang, Yulei Zhang
  • Patent number: 10580105
    Abstract: Systems and methods for container access to graphics processing unit (GPU) resources are disclosed herein. In some embodiments, a computing system may include a physical GPU and kernel-mode driver circuitry, to communicatively couple with the physical GPU to create a plurality of emulated GPUs and a corresponding plurality of device nodes. Each device node may be associated with a single corresponding user-side container to enable communication between the user-side container and the corresponding emulated GPU. Other embodiments may be disclosed and/or claimed.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Kun Tian, Yao Zu Dong, Zhiyuan Lv
  • Patent number: 10572288
    Abstract: An apparatus and method are described for efficient inter-virtual machine (VM) communication. For example, an apparatus comprises inter-VM communication logic to map a first specified set of device virtual memory addresses of a first VM to a first set of physical memory addresses in a shared system memory and to further map a second specified set of device virtual memory addresses of a second VM to the first set of physical memory addresses in the shared system memory.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventors: Kun Tian, Yao Zu Dong
  • Patent number: 10565676
    Abstract: An apparatus to facilitate data prefetching is disclosed. The apparatus includes a memory, one or more execution units (EUs) to execute a plurality of processing threads and prefetch logic to prefetch pages of data from the memory to assist in the execution of the plurality of processing threads.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: February 18, 2020
    Assignee: INTEL CORPORATION
    Inventors: Adam T. Lake, Guei-Yuan Lueh, Balaji Vembu, Murali Ramadoss, Prasoonkumar Surti, Abhishek R. Appu, Altug Koker, Subramaniam M. Maiyuran, Eric C. Samson, David J. Cowperthwaite, Zhi Wang, Kun Tian, David Puffer, Brian T. Lewis
  • Patent number: 10565127
    Abstract: An apparatus and method are described for managing a virtual graphics processor unit (GPU). For example, one embodiment of an apparatus comprises: a dynamic addressing module to map portions of an address space required by the virtual machine to matching free address spaces of a host if such matching free address spaces are available, and to select non-matching address spaces for those portions of the address space required by the virtual machine which cannot be matched with free address spaces of the host; a balloon module to perform address space ballooning (ASB) techniques for those portions of the address space required by the virtual machine which have been mapped to matching address spaces of the host; and address remapping logic to perform address remapping techniques for those portions of the address space required by the virtual machine which have not been mapped to matching address spaces of the host.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian
  • Publication number: 20200047205
    Abstract: Disclosed is a method for calculating instantaneous sprinkler strength comprising: ensuring that a translational sprinkler (1) maintains a stable operating state, placing b rain barrels (3) at a distance of a metres from the translational sprinkler (1), and moving the translational sprinkler (1) to obtain measurement data; calculating movement time, and the average sprayed water depth received by the rain barrels (3); assuming the distribution form of the amount of water of the translational sprinkler (1), establishing a function relationship between an instantaneous sprinkler strength ht and the movement time t, and calculating a variable in the function relationship; and substituting into the established function relationship a specific numerical value of an instantaneous point in time t of the movement of the translational sprinkler (1), so that the value of ht obtained is a numerical value of the instantaneous sprinkler strength of the translational sprinkler (1).
    Type: Application
    Filed: November 22, 2016
    Publication date: February 13, 2020
    Inventors: Xingye ZHU, Jungping LIU, Shouqi YUAN, Kun TIAN, Jinghong WAN
  • Publication number: 20200012530
    Abstract: Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.
    Type: Application
    Filed: March 12, 2019
    Publication date: January 9, 2020
    Inventors: Utkarsh Y. KAKAIYA, Rajesh SANKARAN, Sanjay KUMAR, Kun TIAN, Philip LANTZ
  • Patent number: 10521354
    Abstract: Apparatuses, methods and storage medium associated with computing that include usage and backup of persistent memory are disclosed herein. In embodiments, an apparatus for computing may comprise one or more processors and persistent memory to host operation of one or more virtual machines; and one or more page tables to store a plurality of mappings to map a plurality of virtual memory pages of a virtualization of the persistent memory of the one or more virtual machines to a plurality of physical memory pages of the persistent memory allocated to the one or more virtual machines. The apparatus may further include a memory manager to manage accesses of the persistent memory that includes a copy-on-write mechanism to service write instructions that address virtual memory pages mapped to physical memory pages that are marked as read-only. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: December 31, 2019
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian
  • Publication number: 20190391937
    Abstract: An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
    Type: Application
    Filed: June 26, 2019
    Publication date: December 26, 2019
    Inventors: NIRANJAN L. COORAY, ABHISHEK R. APPU, ALTUG KOKER, JOYDEEP RAY, BALAJI VEMBU, PATTABHIRAMAN K, DAVID PUFFER, DAVID J. COWPERTHWAITE, RAJESH M. SANKARAN, SATYESHWAR SINGH, SAMEER KP, ANKUR N. SHAH, KUN TIAN
  • Patent number: D886772
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: June 9, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Tao Xiong, Zhe Wang, Xiaodong Li, Yuan Tian, Xingfei Ge, Kun Jing, Kaihua Zhu
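
Several of the abstracts above describe mechanisms that are easier to see with a small example. The sketches below are illustrative only: every structure name, device path, threshold, and number in them is an assumption made for this listing, not text taken from the patents, and none of them should be read as the claimed implementations.

Publication 20200294183 and patent 10580105 describe kernel-mode driver circuitry that creates several emulated GPUs, each exposed through its own device node and bound to a single user-side container. A minimal sketch of that one-to-one bookkeeping, assuming a /dev/vgpuN naming scheme and simple numeric container IDs:

```c
/* Minimal user-space model of per-container emulated GPU device nodes.
 * Names (vgpu_node, /dev/vgpu%d, container ids) are illustrative only. */
#include <stdio.h>

#define NUM_VGPUS 4

struct vgpu_node {
    char dev_path[32];   /* device node exposed to exactly one container */
    int  container_id;   /* the single container allowed to use it       */
};

static struct vgpu_node nodes[NUM_VGPUS];

/* Emulate the driver creating one device node per emulated GPU and
 * binding it to a single user-side container. */
static void create_vgpu_nodes(void)
{
    for (int i = 0; i < NUM_VGPUS; i++) {
        snprintf(nodes[i].dev_path, sizeof(nodes[i].dev_path), "/dev/vgpu%d", i);
        nodes[i].container_id = 100 + i;   /* pretend container ids */
    }
}

/* A container may only talk to the emulated GPU behind its own node. */
static const char *node_for_container(int container_id)
{
    for (int i = 0; i < NUM_VGPUS; i++)
        if (nodes[i].container_id == container_id)
            return nodes[i].dev_path;
    return NULL;
}

int main(void)
{
    create_vgpu_nodes();
    int cid = 102;
    const char *path = node_for_container(cid);
    if (path)
        printf("container %d -> %s\n", cid, path);
    else
        printf("container %d has no emulated GPU\n", cid);
    return 0;
}
```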
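
Patent 10769078 and publication 20190391937 describe a master/slave MMU arrangement in which the slave MMU sends its memory transactions to the master, and the master either services them or passes them on to the system-level MMU. A toy model of that routing, assuming small per-MMU translation tables:

```c
/* Toy model of a master/slave MMU chain: a slave MMU hands its memory
 * transactions to the master; the master either services them from its
 * own translations or forwards them to the system-level MMU.
 * Table sizes and addresses are assumptions. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct mapping { uint64_t va, pa; };

struct mmu {
    const char    *name;
    struct mapping table[4];
    int            count;
    struct mmu    *upstream;   /* master for a slave, system MMU for master */
};

static bool mmu_lookup(const struct mmu *m, uint64_t va, uint64_t *pa)
{
    for (int i = 0; i < m->count; i++)
        if (m->table[i].va == va) { *pa = m->table[i].pa; return true; }
    return false;
}

/* Walk up the chain: slave -> master -> system-level MMU. */
static bool translate(struct mmu *m, uint64_t va, uint64_t *pa)
{
    for (; m != NULL; m = m->upstream) {
        if (mmu_lookup(m, va, pa)) {
            printf("%s serviced VA 0x%llx -> PA 0x%llx\n",
                   m->name, (unsigned long long)va, (unsigned long long)*pa);
            return true;
        }
        printf("%s missed VA 0x%llx, forwarding upstream\n",
               m->name, (unsigned long long)va);
    }
    return false;
}

int main(void)
{
    struct mmu system_mmu = { "system MMU", {{0x3000, 0x93000}}, 1, NULL };
    struct mmu master     = { "master MMU", {{0x1000, 0x81000}}, 1, &system_mmu };
    struct mmu slave      = { "slave MMU",  {{0, 0}},            0, &master };

    uint64_t pa;
    translate(&slave, 0x1000, &pa);   /* serviced by the master MMU   */
    translate(&slave, 0x3000, &pa);   /* forwarded to the system MMU  */
    return 0;
}
```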
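
Patent 10620979 and publication 20200192687 select a checkpointing/delivery mode for primary and secondary virtual machines from how similar their output packets were over time. A minimal sketch of one way such a decision could look, assuming an invented 80% similarity threshold and two made-up mode names:

```c
/* Sketch of choosing a checkpointing/delivery mode from how long the
 * primary and secondary VMs kept producing matching output packets.
 * The 80% threshold and the two mode names are invented for illustration. */
#include <stdio.h>

enum mode { MODE_LAZY_CHECKPOINT, MODE_FREQUENT_CHECKPOINT };

/* One observation window: how long outputs matched vs. total time. */
struct window { double matched_ms; double total_ms; };

static enum mode pick_mode(const struct window *w, int n, double threshold)
{
    double matched = 0.0, total = 0.0;
    for (int i = 0; i < n; i++) {
        matched += w[i].matched_ms;
        total   += w[i].total_ms;
    }
    double similarity = (total > 0.0) ? matched / total : 0.0;
    printf("output-packet similarity: %.2f\n", similarity);
    /* Highly similar outputs -> checkpoints can be deferred;
     * diverging outputs -> checkpoint (and deliver) more often. */
    return (similarity >= threshold) ? MODE_LAZY_CHECKPOINT
                                     : MODE_FREQUENT_CHECKPOINT;
}

int main(void)
{
    struct window history[] = {
        { 90.0, 100.0 }, { 85.0, 100.0 }, { 95.0, 100.0 },
    };
    enum mode m = pick_mode(history, 3, 0.80);
    printf("selected mode: %s\n",
           m == MODE_LAZY_CHECKPOINT ? "lazy checkpoint" : "frequent checkpoint");
    return 0;
}
```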
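
Publication 20200174819 describes a virtual nonvolatile memory device that picks a command up from a guest queue, translates its virtual parameters into physical ones, submits the result to a shadow queue for the physical device, and later posts a completion entry back to the guest queue. A simplified sketch of that flow, assuming a single offset-based address translation:

```c
/* Sketch of the guest-queue / shadow-queue flow: a virtual NVMe device
 * picks up a guest command, rewrites its virtual parameters into
 * physical ones, queues the result for the physical device, and later
 * posts a completion back to the guest queue. The offset-based address
 * translation is an assumption for brevity. */
#include <stdio.h>
#include <stdint.h>

#define GUEST_MEM_BASE 0x40000000ULL   /* where guest RAM sits in host PA space */

struct nvme_cmd   { uint64_t buffer_addr; unsigned lba; unsigned opcode; };
struct completion { unsigned cid; int status; };

/* Translate one virtual parameter: guest buffer address -> host address. */
static struct nvme_cmd translate_cmd(struct nvme_cmd guest_cmd)
{
    struct nvme_cmd shadow = guest_cmd;
    shadow.buffer_addr = GUEST_MEM_BASE + guest_cmd.buffer_addr;
    return shadow;
}

int main(void)
{
    /* 1. Guest driver submits a command with guest-visible parameters. */
    struct nvme_cmd guest_cmd = { .buffer_addr = 0x1000, .lba = 42, .opcode = 0x02 };

    /* 2. Virtual device translates it and submits to the shadow queue. */
    struct nvme_cmd shadow_cmd = translate_cmd(guest_cmd);
    printf("shadow queue: opcode 0x%02x lba %u buffer 0x%llx\n",
           shadow_cmd.opcode, shadow_cmd.lba,
           (unsigned long long)shadow_cmd.buffer_addr);

    /* 3. After the physical device finishes the DMA, post a completion
     *    entry back on the guest queue. */
    struct completion done = { .cid = 1, .status = 0 };
    printf("guest queue: completion cid %u status %d\n", done.cid, done.status);
    return 0;
}
```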
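
Publication 20200117624 describes an interrupt managing circuit that uses the interrupt handle in a message to select an entry from a data structure, checks that the message's ASID matches the entry, and only then forwards the interrupt to the application container. A sketch of that check, with an assumed table layout:

```c
/* Sketch of interrupt-message handling: the handle indexes an interrupt
 * table, the message's ASID must match the entry's ASID, and only then
 * is the interrupt forwarded to the container. Table layout and field
 * names are assumptions. */
#include <stdio.h>
#include <stdbool.h>

struct irq_entry { unsigned asid; int target_container; };
struct irq_msg   { unsigned asid; unsigned handle; bool is_interrupt; };

static struct irq_entry irq_table[] = {
    { .asid = 0x10, .target_container = 1 },
    { .asid = 0x20, .target_container = 2 },
};

static void handle_message(struct irq_msg m)
{
    if (!m.is_interrupt) {                       /* DMA messages go elsewhere */
        printf("not an interrupt message, ignoring\n");
        return;
    }
    if (m.handle >= sizeof(irq_table) / sizeof(irq_table[0])) {
        printf("bad interrupt handle %u\n", m.handle);
        return;
    }
    struct irq_entry *e = &irq_table[m.handle];
    if (m.asid != e->asid) {                     /* ASID must match the entry */
        printf("ASID mismatch (0x%x vs 0x%x), dropping\n", m.asid, e->asid);
        return;
    }
    printf("forwarding interrupt to container %d\n", e->target_container);
}

int main(void)
{
    handle_message((struct irq_msg){ .asid = 0x20, .handle = 1, .is_interrupt = true });
    handle_message((struct irq_msg){ .asid = 0x99, .handle = 1, .is_interrupt = true });
    return 0;
}
```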
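
Publication 20200012530 splits guest requests to a virtual device into fast-path operations, passed directly to an assignable interface (AI) instance, and slow-path operations that are at least partially serviced by host software. A sketch of that dispatch, in which the classification of request types is an assumption:

```c
/* Sketch of a fast-path / slow-path split: work submissions go straight
 * to an assignable interface (AI) instance, while control-plane accesses
 * are emulated in software. Which operations count as fast-path here is
 * an assumption. */
#include <stdio.h>

enum req_kind { REQ_SUBMIT_WORK, REQ_READ_CONFIG, REQ_RESET_DEVICE };

struct request { enum req_kind kind; int ai_instance; };

static void handle_guest_request(struct request r)
{
    switch (r.kind) {
    case REQ_SUBMIT_WORK:
        /* Fast path: hand the request directly to the backing AI instance. */
        printf("fast path: forwarded to AI instance %d\n", r.ai_instance);
        break;
    case REQ_READ_CONFIG:
    case REQ_RESET_DEVICE:
        /* Slow path: serviced (at least partially) by host software. */
        printf("slow path: emulated in software for AI instance %d\n",
               r.ai_instance);
        break;
    }
}

int main(void)
{
    handle_guest_request((struct request){ REQ_SUBMIT_WORK, 0 });
    handle_guest_request((struct request){ REQ_READ_CONFIG, 0 });
    return 0;
}
```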
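
Patent 10580108 describes a scheduler that estimates the cost of a guest's command buffers and submits only those that fit within the guest's remaining time slice. A sketch of such a greedy selection, with made-up cost figures:

```c
/* Sketch of best-effort scheduling: estimate the cost of each of a
 * guest's command buffers and submit only as many as fit in the guest's
 * remaining time slice. The cost figures are invented. */
#include <stdio.h>

struct cmd_buffer { int id; double estimated_cost_us; };

/* Greedily pick buffers, in order, while they still fit the time slice. */
static int select_buffers(const struct cmd_buffer *bufs, int n,
                          double remaining_slice_us)
{
    int selected = 0;
    double used = 0.0;
    for (int i = 0; i < n; i++) {
        if (used + bufs[i].estimated_cost_us > remaining_slice_us)
            break;                      /* would overrun the slice, stop here */
        used += bufs[i].estimated_cost_us;
        selected++;
        printf("scheduling buffer %d (%.0f us, %.0f us used)\n",
               bufs[i].id, bufs[i].estimated_cost_us, used);
    }
    return selected;
}

int main(void)
{
    struct cmd_buffer pending[] = {
        { 1, 300.0 }, { 2, 450.0 }, { 3, 500.0 },
    };
    int n = select_buffers(pending, 3, 1000.0);
    printf("%d of 3 buffers selected for this slice\n", n);
    return 0;
}
```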
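
Patent 10521354 uses a copy-on-write mechanism so that writes to persistent-memory pages marked read-only are redirected to a fresh copy, leaving the original page intact for backup. A sketch of that mechanism, with fixed-size arrays standing in for real page tables:

```c
/* Sketch of copy-on-write: virtual memory pages of the VM's persistent
 * memory are mapped read-only; a write to such a page first copies the
 * backing physical page, remaps the entry to the copy, and only then
 * performs the write. The fixed-size arrays stand in for real page tables. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 64
#define NUM_PAGES 2

static char phys_pages[NUM_PAGES * 2][PAGE_SIZE];  /* originals + room for copies */
static int  next_free_page = NUM_PAGES;

struct pte { int phys_page; int read_only; };
static struct pte page_table[NUM_PAGES] = { {0, 1}, {1, 1} };

static void vm_write(int vpage, int offset, char value)
{
    struct pte *e = &page_table[vpage];
    if (e->read_only) {
        /* Copy-on-write: duplicate the physical page, point the PTE at
         * the copy, and make it writable before servicing the write. */
        int copy = next_free_page++;
        memcpy(phys_pages[copy], phys_pages[e->phys_page], PAGE_SIZE);
        printf("COW: vpage %d, phys %d -> %d\n", vpage, e->phys_page, copy);
        e->phys_page = copy;
        e->read_only = 0;
    }
    phys_pages[e->phys_page][offset] = value;
}

int main(void)
{
    phys_pages[0][0] = 'A';      /* original (backup) contents stay intact */
    vm_write(0, 0, 'B');         /* triggers copy-on-write                 */
    printf("original page: %c, VM's view: %c\n",
           phys_pages[0][0], phys_pages[page_table[0].phys_page][0]);
    return 0;
}
```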