Patents by Inventor Kun Tian

Kun Tian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10620979
    Abstract: Examples may include determining a checkpointing/delivery policy for primary and secondary virtual machines based on output-packet similarities. The output-packet similarities may be based on a comparison of the time intervals over which content matched for packets outputted from the primary and secondary virtual machines. A checkpointing/delivery mode may then be selected based, at least in part, on the determined checkpointing/delivery policy.
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: April 14, 2020
    Assignee: INTEL CORPORATION
    Inventors: Kun Tian, Yao Zu Dong
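A minimal sketch of the policy idea in the entry above, under stated assumptions: it measures how long the primary and secondary VMs keep emitting matching output packets and picks a checkpointing/delivery mode from that. The function name, the threshold, and the two mode labels are illustrative, not taken from the patent.

```python
# Hypothetical illustration: choose a checkpointing/delivery mode from the
# lengths of the intervals during which the two VMs' outputs matched.

def select_checkpoint_mode(match_intervals, threshold_s=0.5):
    """match_intervals: durations (seconds) during which outputs matched.

    Returns "on-divergence" (lazy, COLO-style checkpointing) when outputs
    stay similar for long stretches, otherwise "periodic".
    """
    if not match_intervals:
        return "periodic"
    avg_match = sum(match_intervals) / len(match_intervals)
    return "on-divergence" if avg_match >= threshold_s else "periodic"


if __name__ == "__main__":
    print(select_checkpoint_mode([0.9, 1.2, 0.7]))    # -> on-divergence
    print(select_checkpoint_mode([0.05, 0.1, 0.02]))  # -> periodic
```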
  • Patent number: 10621692
    Abstract: An apparatus and method are described for performing virtualization using virtual machine (VM) sets. For example, one embodiment of an apparatus comprises: a graphics processing unit (GPU) to process graphics commands and responsively render a plurality of image frames; a hypervisor to virtualize the GPU to share the GPU among a plurality of virtual machines (VMs); and VM set management logic to establish a plurality of VM sets, each set comprising a plurality of VMs, the VM set management logic to partition graphics memory address (GMADR) space across each of the VM sets but to share the GMADR space between VMs within each VM set.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian
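A minimal sketch of the GMADR layout described in the entry above: the graphics memory address space is split into one partition per VM set, and every VM inside a set is assigned the same shared partition. The sizes, set names, and equal-partition policy are assumptions made purely for illustration.

```python
# Hypothetical sketch: partition the GMADR space across VM sets, but let all
# VMs within a set share the same range.

def partition_gmadr(total_size, vm_sets):
    """vm_sets: dict mapping set name -> list of VM names."""
    part_size = total_size // len(vm_sets)
    layout = {}
    for i, (set_name, vms) in enumerate(vm_sets.items()):
        base = i * part_size
        for vm in vms:                       # VMs in one set share one range
            layout[vm] = (base, base + part_size)
    return layout


if __name__ == "__main__":
    sets = {"setA": ["vm0", "vm1"], "setB": ["vm2", "vm3"]}
    for vm, (lo, hi) in partition_gmadr(1 << 32, sets).items():
        print(vm, hex(lo), hex(hi))
```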
  • Patent number: 10580105
    Abstract: Systems and methods for container access to graphics processing unit (GPU) resources are disclosed herein. In some embodiments, a computing system may include a physical GPU and kernel-mode driver circuitry, to communicatively couple with the physical GPU to create a plurality of emulated GPUs and a corresponding plurality of device nodes. Each device node may be associated with a single corresponding user-side container to enable communication between the user-side container and the corresponding emulated GPU. Other embodiments may be disclosed and/or claimed.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Kun Tian, Yao Zu Dong, Zhiyuan Lv
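A minimal sketch of the arrangement in the entry above: one emulated GPU and one device node are created per container, and each container is bound to exactly one node. The device-node paths and naming are assumptions, not the real driver interface.

```python
# Hypothetical sketch: provision one emulated GPU and device node per
# user-side container so each container gets its own GPU endpoint.

def provision_emulated_gpus(containers):
    """Return (device_nodes, bindings) for the given container names."""
    device_nodes = {}
    bindings = {}
    for i, container in enumerate(containers):
        node = f"/dev/vgpu{i}"           # assumed device node naming
        device_nodes[node] = f"emulated-gpu-{i}"
        bindings[container] = node       # one node per user-side container
    return device_nodes, bindings


if __name__ == "__main__":
    nodes, binds = provision_emulated_gpus(["web", "render", "ml"])
    print(binds)   # each container talks to its own emulated GPU node
```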
  • Patent number: 10580108
    Abstract: An apparatus and method for best effort quality of service scheduling in a graphics processing architecture. For example, one embodiment of an apparatus comprises: a graphics processing unit (GPU) to perform graphics processing operations for a plurality of guests; a plurality of buffers to store one or more graphics commands associated with each guest to be executed by the GPU; and a scheduler to evaluate commands in the buffers of a first guest to estimate a cost of executing the commands, the scheduler to select all or a subset of the buffers of the first guest for execution on the GPU based on a determination that the selected buffers can be executed by the GPU within a remaining time slice allocated to the first guest.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian, Tian Zhang, Yulei Zhang
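A minimal sketch of the scheduling idea in the entry above: estimate the cost of each command buffer queued by a guest and submit only as many buffers as are expected to fit in the guest's remaining time slice. The per-command cost constant is an assumed stand-in for whatever cost model the scheduler actually uses.

```python
# Hypothetical sketch: greedy selection of a guest's command buffers so that
# their estimated cost fits in the guest's remaining time slice.

COST_PER_CMD_US = 5  # assumed average execution cost per command (microseconds)

def select_buffers(buffers, remaining_slice_us):
    """buffers: list of lists of commands. Returns the buffers to submit."""
    selected, used = [], 0
    for buf in buffers:
        est = len(buf) * COST_PER_CMD_US
        if used + est > remaining_slice_us:
            break                       # would overrun the guest's time slice
        selected.append(buf)
        used += est
    return selected


if __name__ == "__main__":
    bufs = [["draw"] * 100, ["blit"] * 400, ["draw"] * 1000]
    print(len(select_buffers(bufs, remaining_slice_us=3000)))  # -> 2
```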
  • Patent number: 10572288
    Abstract: An apparatus and method are described for efficient inter-virtual machine (VM) communication. For example, an apparatus comprises inter-VM communication logic to map a first specified set of device virtual memory addresses of a first VM to a first set of physical memory addresses in a shared system memory and to further map a second specified set of device virtual memory addresses of a second VM to the first set of physical memory addresses in the shared system memory.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventors: Kun Tian, Yao Zu Dong
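A minimal sketch of the mapping in the entry above: two VMs' device virtual address ranges are both pointed at the same physical pages of a shared region, so data written by one VM is visible to the other without copying. The page size, addresses, and dictionary page tables are illustrative only.

```python
# Hypothetical sketch: map two VMs' device virtual ranges onto one shared
# set of physical pages.

PAGE = 4096

def map_shared(page_table, dev_va_base, shared_pa_base, num_pages):
    """Record dev-virtual -> physical translations for one VM."""
    for i in range(num_pages):
        page_table[dev_va_base + i * PAGE] = shared_pa_base + i * PAGE


if __name__ == "__main__":
    vm1_pt, vm2_pt = {}, {}
    shared_base = 0x8000_0000             # assumed physical base of shared memory
    map_shared(vm1_pt, 0x1000_0000, shared_base, 4)
    map_shared(vm2_pt, 0x2000_0000, shared_base, 4)
    # Both VMs' first shared page resolves to the same physical frame.
    assert vm1_pt[0x1000_0000] == vm2_pt[0x2000_0000]
    print(hex(vm1_pt[0x1000_0000]))
```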
  • Patent number: 10565676
    Abstract: An apparatus to facilitate data prefetching is disclosed. The apparatus includes a memory, one or more execution units (EUs) to execute a plurality of processing threads and prefetch logic to prefetch pages of data from the memory to assist in the execution of the plurality of processing threads.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: February 18, 2020
    Assignee: INTEL CORPORATION
    Inventors: Adam T. Lake, Guei-Yuan Lueh, Balaji Vembu, Murali Ramadoss, Prasoonkumar Surti, Abhishek R. Appu, Altug Koker, Subramaniam M. Maiyuran, Eric C. Samson, David J. Cowperthwaite, Zhi Wang, Kun Tian, David Puffer, Brian T. Lewis
  • Patent number: 10565127
    Abstract: An apparatus and method are described for managing a virtual graphics processor unit (GPU). For example, one embodiment of an apparatus comprises: a dynamic addressing module to map portions of an address space required by the virtual machine to matching free address spaces of a host if such matching free address spaces are available, and to select non-matching address spaces for those portions of the address space required by the virtual machine which cannot be matched with free address spaces of the host; and a balloon module to perform address space ballooning (ASB) techniques for those portions of the address space required by the virtual machine which have been mapped to matching address spaces of the host; and address remapping logic to perform address remapping techniques for those portions of the address space required by the virtual machine which have not been mapped to matching address spaces of the host.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian
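A minimal sketch of the two paths in the entry above: guest address ranges that happen to be free on the host are handled with address space ballooning (kept at their original addresses), while conflicting ranges fall back to remapping into some other free host range. The ranges and first-fit fallback are assumptions for illustration.

```python
# Hypothetical sketch: decide per guest range whether to balloon (identity
# mapping into a matching free host range) or remap to another free range.

def plan_vgpu_mappings(guest_ranges, host_free_ranges):
    """Return (ballooned, remapped) lists of (guest_base, size, host_base)."""
    free = list(host_free_ranges)
    ballooned, remapped = [], []
    for base, size in guest_ranges:
        if (base, size) in free:                    # matching free host range
            free.remove((base, size))
            ballooned.append((base, size, base))    # identity mapping (ASB)
        else:
            host_base, host_size = free.pop(0)      # any other free range
            remapped.append((base, size, host_base))
    return ballooned, remapped


if __name__ == "__main__":
    guest = [(0x0000, 0x1000), (0x4000, 0x1000)]
    host_free = [(0x0000, 0x1000), (0x9000, 0x1000)]
    print(plan_vgpu_mappings(guest, host_free))
```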
  • Publication number: 20200047205
    Abstract: Disclosed is a method for calculating instantaneous sprinkler strength comprising: ensuring that a translational sprinkler (1) maintains a stable operating state, placing b rain barrels (3) at a distance of a metres from the translational sprinkler (1), and moving the translational sprinkler (1) to obtain measurement data; calculating movement time, and the average sprayed water depth received by the rain barrels (3); assuming the distribution form of the amount of water of the translational sprinkler (1), establishing a function relationship between an instantaneous sprinkler strength ht and the movement time t, and calculating a variable in the function relationship; and substituting into the established function relationship a specific numerical value of an instantaneous point in time t of the movement of the translational sprinkler (1), so that the value of ht obtained is a numerical value of the instantaneous sprinkler strength of the translational sprinkler (1).
    Type: Application
    Filed: November 22, 2016
    Publication date: February 13, 2020
    Inventors: Xingye ZHU, Jungping LIU, Shouqi YUAN, Kun TIAN, Jinghong WAN
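A hypothetical worked example of the calculation outlined in the entry above. The abstract only says "assuming the distribution form of the amount of water"; an elliptical distribution over the pass time is assumed here purely for illustration, so the formula and numbers below are not the patent's actual model.

```python
# Assumed elliptical distribution over the pass time T:
#   h(t) = h_max * sqrt(1 - (2t/T - 1)^2),  0 <= t <= T
# Integrating over the pass gives average depth D = h_max * pi * T / 4,
# so h_max = 4 * D / (pi * T), and h(t) follows for any instant t.
import math

def instantaneous_strength(avg_depth_mm, pass_time_h, t_h):
    """avg_depth_mm: depth caught by the rain barrels over one pass.
    pass_time_h: time the sprinkler takes to pass over the barrels.
    t_h: instant (hours from the start of the pass) to evaluate."""
    h_max = 4 * avg_depth_mm / (math.pi * pass_time_h)   # mm per hour
    x = 2 * t_h / pass_time_h - 1
    return h_max * math.sqrt(max(0.0, 1 - x * x))


if __name__ == "__main__":
    # 12 mm caught over a 0.5 h pass; strength at mid-pass (t = 0.25 h):
    print(round(instantaneous_strength(12, 0.5, 0.25), 2))  # ~30.56 mm/h
```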
  • Publication number: 20200012530
    Abstract: Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.
    Type: Application
    Filed: March 12, 2019
    Publication date: January 9, 2020
    Inventors: Utkarsh Y. KAKAIYA, Rajesh SANKARAN, Sanjay KUMAR, Kun TIAN, Philip LANTZ
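A minimal sketch of the dispatch logic in the entry above: an intercepted guest access to the composed virtual device is either passed straight to an assignable interface (fast path) or serviced by host software (slow path). Which registers fall on which path is an assumption made for illustration.

```python
# Hypothetical sketch: route an intercepted guest access to the fast path
# (hardware assignable interface) or the slow path (software emulation).

class AI:                     # stand-in for a hardware assignable interface
    def submit(self, desc):
        return f"submitted {desc} to hardware"

class Emulator:               # stand-in for host-software emulation
    def emulate(self, reg, value):
        return f"emulated write of {value} to {reg}"

FAST_PATH_REGS = {"work_submit"}                  # assumed direct-mapped
SLOW_PATH_REGS = {"config", "reset", "interrupt_mask"}

def handle_guest_access(reg, value, ai, emu):
    """Route a trapped guest access to the fast or slow path."""
    if reg in FAST_PATH_REGS:
        return ai.submit(value)         # fast path: straight to the AI instance
    if reg in SLOW_PATH_REGS:
        return emu.emulate(reg, value)  # slow path: software emulation
    raise ValueError(f"unknown register: {reg}")


if __name__ == "__main__":
    print(handle_guest_access("work_submit", "desc0", AI(), Emulator()))
    print(handle_guest_access("config", 0x1, AI(), Emulator()))
```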
  • Patent number: 10521354
    Abstract: Apparatuses, methods and storage medium associated with computing that include usage and backup of persistent memory are disclosed herein. In embodiments, an apparatus for computing may comprise one or more processors and persistent memory to host operation of one or more virtual machines; and one or more page tables to store a plurality of mappings to map a plurality of virtual memory pages of a virtualization of the persistent memory of the one or more virtual machines to a plurality of physical memory pages of the persistent memory allocated to the one or more virtual machines. The apparatus may further include a memory manager to manage accesses of the persistent memory that includes a copy-on-write mechanism to service write instructions that address virtual memory pages mapped to physical memory pages that are marked as read-only. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: December 31, 2019
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian
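A minimal sketch of the copy-on-write mechanism in the entry above: pages backing a VM's persistent memory are left read-only after a backup snapshot, and the first write to such a page redirects the VM's mapping to a private page while the snapshot stays intact. The class and field names are illustrative.

```python
# Hypothetical sketch: copy-on-write over snapshotted persistent-memory pages.

class CowMemory:
    """Toy model: the snapshot (backup) pages stay untouched; the first write
    to a page after the snapshot redirects the VM's mapping to a new page."""

    def __init__(self, pages):
        self.backup = dict(pages)      # snapshot copy, never modified
        self.live = dict(pages)        # what the VM currently sees
        self.read_only = set(pages)    # pages still backed by the snapshot

    def write(self, pno, data):
        if pno in self.read_only:
            self.read_only.discard(pno)   # break the share on first write
        self.live[pno] = data             # VM now maps a private page


if __name__ == "__main__":
    mem = CowMemory({0: b"old", 1: b"keep"})
    mem.write(0, b"new")
    print(mem.live[0], mem.backup[0])     # b'new' b'old' -> backup preserved
```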
  • Publication number: 20190391937
    Abstract: An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
    Type: Application
    Filed: June 26, 2019
    Publication date: December 26, 2019
    Inventors: NIRANJAN L. COORAY, ABHISHEK R. APPU, ALTUG KOKER, JOYDEEP RAY, BALAJI VEMBU, PATTABHIRAMAN K, DAVID PUFFER, DAVID J. COWPERTHWAITE, RAJESH M. SANKARAN, SATYESHWAR SINGH, SAMEER KP, ANKUR N. SHAH, KUN TIAN
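A minimal sketch of the master/slave MMU arrangement in the entry above: the slave MMU forwards any translation it cannot resolve locally to the master MMU, which either services it from its own tables or passes it on to the system-level MMU. TLB contents and addresses are illustrative only.

```python
# Hypothetical sketch: chained MMUs where misses are forwarded upstream.

class MMU:
    def __init__(self, name, tlb=None, upstream=None):
        self.name, self.tlb, self.upstream = name, tlb or {}, upstream

    def translate(self, va):
        if va in self.tlb:                        # serviced locally
            return self.tlb[va], self.name
        if self.upstream is not None:             # forward toward the master
            return self.upstream.translate(va)
        raise KeyError(f"no translation for {hex(va)}")


if __name__ == "__main__":
    system_mmu = MMU("system", {0x3000: 0xC000})
    master = MMU("master", {0x2000: 0xB000}, upstream=system_mmu)
    slave = MMU("slave", {0x1000: 0xA000}, upstream=master)
    for va in (0x1000, 0x2000, 0x3000):
        print(hex(va), "->", slave.translate(va))
```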
  • Publication number: 20190378238
    Abstract: Embodiments described herein provide techniques that enable a compute unit to continue processing operations when all dispatched threads are blocked. One embodiment provides for a graphics processor comprising a compute unit to execute multiple concurrent threads and a memory coupled with and on a same package as the compute unit. The memory can store thread state for a suspended thread, and the compute unit can detect that multiple concurrent threads of the compute unit are blocked from execution. Upon detection, the compute unit can select a victim thread from the multiple concurrent threads, suspend the victim thread, store the thread state of the victim thread to the memory, and replace the victim thread with an additional thread to be executed. The additional thread to be executed can be selected based on a blocking event for the additional thread.
    Type: Application
    Filed: August 20, 2019
    Publication date: December 12, 2019
    Applicant: Intel Corporation
    Inventors: Murali Ramadoss, Balaji Vembu, Eric C. Samson, Kun Tian, David J. Cowperthwaite, Altug Koker, Zhi Wang, Joydeep Ray, Subramaniam M. Maiyuran, Abhishek R. Appu
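A minimal sketch of the behaviour in the entry above (and in related patent 10460417 later in this list): when every thread on the compute unit is blocked, one victim is chosen, its state is spilled to the on-package memory, and a runnable replacement takes its slot. The longest-blocked victim-selection policy is an assumption for illustration.

```python
# Hypothetical sketch: suspend a victim thread when all threads are blocked
# and swap in a ready replacement.

def swap_in_when_blocked(threads, spill_memory, ready_queue):
    """threads: dict name -> {'blocked': bool, 'blocked_for': int, 'state': dict}."""
    if not threads or not all(t["blocked"] for t in threads.values()):
        return None                                      # still making progress
    victim = max(threads, key=lambda n: threads[n]["blocked_for"])
    spill_memory[victim] = threads.pop(victim)["state"]  # save victim state
    replacement = ready_queue.pop(0)
    threads[replacement] = {"blocked": False, "blocked_for": 0, "state": {}}
    return victim, replacement


if __name__ == "__main__":
    ts = {"t0": {"blocked": True, "blocked_for": 40, "state": {"pc": 1}},
          "t1": {"blocked": True, "blocked_for": 10, "state": {"pc": 2}}}
    print(swap_in_when_blocked(ts, {}, ["t2"]))   # -> ('t0', 't2')
```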
  • Publication number: 20190370050
    Abstract: A processing device comprises an address translation circuit to intercept a work request from an I/O device. The work request comprises a first ASID to map to a work queue. A second ASID of a host is allocated for the first ASID based on the work queue. The second ASID is allocated to at least one of: an ASID register for a dedicated work queue (DWQ) or an ASID translation table for a shared work queue (SWQ). Responsive to receiving a work submission from the SVM client to the I/O device, the first ASID of the application container is translated to the second ASID of the host machine for submission to the I/O device using at least one of: the ASID register for the DWQ or the ASID translation table for the SWQ based on the work queue associated with the I/O device.
    Type: Application
    Filed: February 22, 2017
    Publication date: December 5, 2019
    Inventors: Sanjay KUMAR, Rajesh M. SANKARAN, Gilbert NEIGER, Philip R. LANTZ, Jason W. BRANDT, Vedvyas SHANBHOGUE, Utkarsh Y. KAKAIYA, Kun TIAN
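A minimal sketch of the ASID translation in the entry above: for a dedicated work queue (DWQ) the host ASID lives in a per-queue register, while a shared work queue (SWQ) uses a table keyed by the guest ASID. The class, field names, and data structures are illustrative only.

```python
# Hypothetical sketch: translate a guest ASID to a host ASID via a per-DWQ
# register or a per-SWQ translation table.

class AsidTranslator:
    def __init__(self):
        self.dwq_register = {}     # work queue id -> host ASID
        self.swq_table = {}        # (work queue id, guest ASID) -> host ASID

    def allocate(self, wq, guest_asid, host_asid, shared):
        if shared:
            self.swq_table[(wq, guest_asid)] = host_asid
        else:
            self.dwq_register[wq] = host_asid

    def translate(self, wq, guest_asid, shared):
        return (self.swq_table[(wq, guest_asid)] if shared
                else self.dwq_register[wq])


if __name__ == "__main__":
    tr = AsidTranslator()
    tr.allocate(wq=0, guest_asid=5, host_asid=42, shared=False)   # DWQ
    tr.allocate(wq=1, guest_asid=5, host_asid=77, shared=True)    # SWQ
    print(tr.translate(0, 5, shared=False), tr.translate(1, 5, shared=True))
```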
  • Publication number: 20190361728
    Abstract: Techniques for transferring virtual machines and resource management in a virtualized computing environment are described. In one embodiment, for example, an apparatus may include at least one memory, at least one processor, and logic for transferring a virtual machine (VM), at least a portion of the logic comprised in hardware coupled to the at least one memory and the at least one processor, the logic to generate a plurality of virtualized capability registers for a virtual device (VDEV) by virtualizing a plurality of device-specific capability registers of a physical device to be virtualized by the VM, the plurality of virtualized capability registers comprising a plurality of device-specific capabilities of the physical device, determine a version of the physical device to support via a virtual machine monitor (VMM), and expose a subset of the virtualized capability registers associated with the version to the VM. Other embodiments are described and claimed.
    Type: Application
    Filed: March 31, 2017
    Publication date: November 28, 2019
    Applicant: INTEL CORPORATION
    Inventors: SANJAY KUMAR, PHILIP R. LANTZ, KUN TIAN, UTKARSH Y. KAKAIYA, RAJESH M. SANKARAN
  • Patent number: 10482567
    Abstract: An apparatus and method are described for intelligent resource provisioning for shadow structures. For example, one embodiment of an apparatus comprises: graphics processing unit (GPU) to process graphics commands and responsively render a plurality of image frames in a graphics memory address space; shadow structure management logic to reserve one or more shadow slots in the graphics memory address space in which to store shadow instances of different GPU contexts; and the shadow structure management logic to implement a partial shadowing policy for shadowing GPU contexts in the shadow slots, the partial shadowing policy based on characteristics of pages of the GPU contexts.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: November 19, 2019
    Assignee: Intel Corporation
    Inventors: Zhiyuan Lv, Kun Tian
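A minimal sketch in the spirit of the partial shadowing policy in the entry above: only pages of a GPU context whose characteristics call for shadowing are copied into the reserved shadow slot, and the rest are used in place. Which characteristic triggers shadowing (a "privileged" flag here) is purely an assumption for illustration.

```python
# Hypothetical sketch: shadow only the pages of a GPU context whose
# characteristics warrant it, leaving other pages unshadowed.

def build_partial_shadow(context_pages):
    """context_pages: dict page number -> {'privileged': bool, 'data': bytes}."""
    shadow_slot = {}
    for pno, page in context_pages.items():
        if page["privileged"]:                        # characteristic-based policy
            shadow_slot[pno] = bytes(page["data"])    # shadow copy in reserved slot
    return shadow_slot


if __name__ == "__main__":
    ctx = {0: {"privileged": True,  "data": b"ring buffer"},
           1: {"privileged": False, "data": b"vertex data"}}
    print(sorted(build_partial_shadow(ctx)))   # -> [0]
```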
  • Patent number: 10474489
    Abstract: Examples may include techniques to run one or more containers on a virtual machine (VM). Examples include cloning a first VM to result in a second VM. The cloned first VM may run at least a set of containers capable of separately executing one or more applications. In some examples, some cloned containers are stopped at either the first or second VMs to allow for at least some resources provisioned to support the first or second VMs to be reused or recycled at a hosting node. In other examples, the second VM is migrated from the hosting node to a destination hosting node to further enable resources to be reused or recycled at the hosting node.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: November 12, 2019
    Assignee: INTEL CORPORATION
    Inventors: Yao Zu Dong, Kun Tian
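A minimal sketch of the flow in the entry above: a first VM running a set of containers is cloned, and afterwards each container is kept running on only one of the two VMs and stopped on the other so that the resources backing the stopped copies can be reclaimed. The alternating placement policy is an assumption for illustration.

```python
# Hypothetical sketch: clone a VM carrying containers, then stop redundant
# container copies so each container runs on exactly one VM.

import copy

def clone_and_trim(vm1):
    """vm1: dict with a 'containers' dict of name -> {'running': bool}."""
    vm2 = copy.deepcopy(vm1)                        # clone the first VM
    vm2["name"] = vm1["name"] + "-clone"
    for i, name in enumerate(sorted(vm1["containers"])):
        keep_on_vm1 = (i % 2 == 0)                  # assumed alternating placement
        vm1["containers"][name]["running"] = keep_on_vm1
        vm2["containers"][name]["running"] = not keep_on_vm1
    return vm1, vm2


if __name__ == "__main__":
    vm = {"name": "vm1", "containers": {"app-a": {"running": True},
                                        "app-b": {"running": True}}}
    v1, v2 = clone_and_trim(vm)
    print([c for c, s in v1["containers"].items() if s["running"]])
    print([c for c, s in v2["containers"].items() if s["running"]])
```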
  • Patent number: 10467048
    Abstract: Examples may include techniques for virtual machine (VM) migration. Examples may include selecting a VM for live migration from a source node to a destination node, predicting a time period associated with the live migration, and selecting another VM from which allocated source node bandwidth may be borrowed to facilitate the live migration within the predicted time.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: November 5, 2019
    Assignee: INTEL CORPORATION
    Inventors: Yao Zu Dong, Kun Tian
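A minimal sketch of the idea in the entry above: predict how long the live migration would take with the bandwidth currently allocated, and if that exceeds a target, pick another VM whose spare bandwidth can be borrowed for the migration window. The numbers, the linear time model, and the first-fit donor choice are assumptions for illustration.

```python
# Hypothetical sketch: pick a donor VM to borrow bandwidth from so a live
# migration finishes within a target time.

def plan_migration(dirty_bytes, own_bw, donors, target_s):
    """donors: dict VM name -> spare bandwidth (bytes/s)."""
    predicted = dirty_bytes / own_bw
    if predicted <= target_s:
        return predicted, None                       # no borrowing needed
    for vm, spare_bw in donors.items():
        new_time = dirty_bytes / (own_bw + spare_bw)
        if new_time <= target_s:
            return new_time, vm                      # borrow from this VM
    return predicted, None                           # cannot meet the target


if __name__ == "__main__":
    print(plan_migration(dirty_bytes=8e9, own_bw=1e9,
                         donors={"vmA": 0.5e9, "vmB": 2e9}, target_s=4))
```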
  • Patent number: 10467033
    Abstract: Embodiments of the invention enable dynamic level boosting of operations across virtualization layers to enable efficient nested virtualization. Embodiments of the invention execute a first virtual machine monitor (VMM) to virtualize system hardware. A nested virtualization environment is created by executing a plurality of upper level VMMs via virtual machines (VMs). These upper level VMMs are used to execute an upper level virtualization layer including an operating system (OS). During operation of the above described nested virtualization environment, a privileged instruction issued from an OS is trapped and emulated via the respective upper level VMM (i.e., the VMM that creates the VM for that OS). Embodiments of the invention enable the emulation of the privileged instruction via a lower level VMM. In some embodiments, the emulated instruction is executed via the first VMM with little to no involvement of any intermediate virtualization layers residing between the first and upper level VMMs.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: November 5, 2019
    Assignee: INTEL CORPORATION
    Inventors: Kun Tian, Yao Zu Dong
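A minimal sketch of the "level boosting" described in the entry above: a privileged instruction trapped from a guest OS running under a stack of nested VMMs is emulated directly by the lowest-level VMM when possible, instead of being reflected through every intermediate VMM. Which instructions are boostable, and the string-based modeling, are assumptions for illustration.

```python
# Hypothetical sketch: emulate boostable privileged instructions directly in
# the L0 VMM; otherwise reflect the trap through the nested VMM stack.

def handle_privileged_trap(instr, vmm_stack, boostable):
    """vmm_stack: list of VMM names from L0 (outermost) upward."""
    if instr in boostable:
        return f"{instr}: emulated directly by {vmm_stack[0]} (boosted)"
    # Fall back: reflect the trap up through the intermediate VMMs.
    path = " -> ".join(vmm_stack)
    return f"{instr}: reflected through {path}"


if __name__ == "__main__":
    stack = ["L0-VMM", "L1-VMM", "L2-VMM"]
    print(handle_privileged_trap("cpuid", stack, boostable={"cpuid", "rdmsr"}))
    print(handle_privileged_trap("vmlaunch", stack, boostable={"cpuid"}))
```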
  • Patent number: 10460417
    Abstract: Embodiments described herein provide techniques that enable a compute unit to continue processing operations when all dispatched threads are blocked. One embodiment provides for an apparatus comprising a thread dispatcher to dispatch a thread for execution; a compute unit having a single instruction, multiple thread architecture, the compute unit to execute multiple concurrent threads; and a memory coupled with the compute unit, the memory to store thread state for a suspended thread, wherein the compute unit is to: detect that all threads on the compute unit are blocked from execution, select a victim thread from the multiple concurrent threads, suspend the victim thread, store thread state of the victim thread to the memory, and replace the victim thread with an additional thread to be executed.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: October 29, 2019
    Assignee: Intel Corporation
    Inventors: Murali Ramadoss, Balaji Vembu, Eric C. Samson, Kun Tian, David J. Cowperthwaite, Altug Koker, Zhi Wang, Joydeep Ray, Subramaniam M. Maiyuran, Abhishek R. Appu
  • Patent number: 10452495
    Abstract: Examples include techniques to provide reliable primary and secondary containers arranged to separately execute an application that receives request packets for processing by the application. The request packets may be received from a client coupled with a server arranged to host the primary container or the secondary container; the client is coupled with the server through a network. Coarse-grained lock-stepping (COLO) methods may be utilized to facilitate providing the reliable primary and secondary containers.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: October 22, 2019
    Assignee: INTEL CORPORATION
    Inventors: Yao Zu Dong, Yunhong Jiang, Kun Tian