Patents by Inventor Tsung-Yuan C. Tai

Tsung-Yuan C. Tai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180164868
    Abstract: A computer-implemented method can include receiving a queue depth for a receive queue of a network interface controller (NIC), determining whether a power state of a central processing unit (CPU) core mapped to the receive queue should be adjusted based on the queue depth, and adjusting the power state of the CPU core responsive to a determination that the power state of the CPU core should be adjusted.
    Type: Application
    Filed: December 12, 2016
    Publication date: June 14, 2018
    Applicant: Intel Corporation
    Inventors: Brian J. Skerry, Ira Weiny, Patrick Connor, Tsung-Yuan C. Tai, Alexander W. Min
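
A minimal sketch of the queue-depth-driven power-state idea described in publication 20180164868. The names (rx_queue, pick_pstate) and the depth thresholds are illustrative assumptions, not the patented implementation.

```c
#include <stdio.h>

enum pstate { PSTATE_LOW, PSTATE_MID, PSTATE_HIGH };

struct rx_queue {
    unsigned depth;       /* packets currently enqueued on the NIC RX queue */
    int mapped_core;      /* CPU core that services this queue */
};

/* Choose a target power state for the core from the observed queue depth. */
static enum pstate pick_pstate(unsigned depth)
{
    if (depth > 512) return PSTATE_HIGH;   /* deep backlog: raise performance */
    if (depth > 64)  return PSTATE_MID;
    return PSTATE_LOW;                     /* near-empty queue: save power */
}

int main(void)
{
    struct rx_queue q = { .depth = 700, .mapped_core = 3 };
    enum pstate target = pick_pstate(q.depth);

    /* A real driver would program the core's P-state/C-state here. */
    printf("core %d -> target power state %d (queue depth %u)\n",
           q.mapped_core, target, q.depth);
    return 0;
}
```
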
  • Patent number: 9992299
    Abstract: Technologies for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache based on a determination that the network packet is not needed subsequent to processing the network packet, and provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: June 5, 2018
    Assignee: Intel Corporation
    Inventors: Ren Wang, Sameh Gobriel, Christian Maciocco, Tsung-Yuan C. Tai, Ben-Zion Friedman, Hang T. Nguyen, Namakkal N. Venkatesan, Michael A. O'Hanlon, Shrikant M. Shah, Sanjeev Jain
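
An illustrative sketch of the eviction-hint idea in patent 9992299: after a packet is processed, a controller-supplied policy decides whether the packet's cache lines can be flagged for eviction. The struct layout and the should_evict() helper are assumptions made for illustration only.

```c
#include <stdbool.h>
#include <stdio.h>

struct eviction_policy {
    bool evict_forwarded;   /* forwarded packets won't be touched again */
    bool evict_dropped;     /* dropped packets certainly won't be */
};

enum verdict { VERDICT_FORWARD, VERDICT_DROP, VERDICT_CONSUME_LOCALLY };

static bool should_evict(const struct eviction_policy *p, enum verdict v)
{
    switch (v) {
    case VERDICT_FORWARD: return p->evict_forwarded;
    case VERDICT_DROP:    return p->evict_dropped;
    default:              return false;  /* a local consumer may still read it */
    }
}

int main(void)
{
    struct eviction_policy policy = { .evict_forwarded = true,
                                      .evict_dropped   = true };
    enum verdict v = VERDICT_FORWARD;

    if (should_evict(&policy, v))
        printf("hint: packet cache lines may be evicted\n");
    else
        printf("keep packet cache lines resident\n");
    return 0;
}
```
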
  • Publication number: 20180109460
    Abstract: The present disclosure describes a process and apparatus for improving insertions of entries into a hash table. A large number of smaller virtual buckets may be combined together and associated with buckets used for hash table entry lookups and/or entry insertion. On insertion of an entry, hash table entries associated with a hashed-to virtual bucket may be moved between groups of buckets associated with the virtual bucket, to better distribute entries across the available buckets, reducing the number of entries in the largest buckets and the standard deviation of bucket sizes across the entire hash table.
    Type: Application
    Filed: March 29, 2017
    Publication date: April 19, 2018
    Inventors: Byron Marohn, Christian Maciocco, Sameh Gobriel, Ren Wang, Tsung-Yuan C. Tai
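
A toy sketch of the virtual-bucket balancing idea in publication 20180109460: many small virtual buckets each map to one bucket out of a fixed group, and on insertion a virtual bucket (with its entries) can be remapped to the least-loaded bucket in its group. The table sizes, hash, and two-bucket group are illustrative assumptions.

```c
#include <stdio.h>

#define NUM_BUCKETS   8
#define NUM_VBUCKETS  64
#define GROUP_SIZE    2   /* each virtual bucket may live in one of two buckets */

static unsigned bucket_load[NUM_BUCKETS];
static unsigned vb_count[NUM_VBUCKETS];    /* entries hashed to each vbucket */
static unsigned vb_home[NUM_VBUCKETS];     /* bucket currently holding them  */

/* the GROUP_SIZE candidate buckets for a virtual bucket */
static unsigned candidate(unsigned vb, unsigned i)
{
    return (vb + i * 31u) % NUM_BUCKETS;
}

static void insert(unsigned key)
{
    unsigned vb = key % NUM_VBUCKETS;
    unsigned best = candidate(vb, 0);

    for (unsigned i = 1; i < GROUP_SIZE; i++) {
        unsigned c = candidate(vb, i);
        if (bucket_load[c] < bucket_load[best])
            best = c;
    }

    if (vb_count[vb] && best != vb_home[vb]) {
        /* move the whole virtual bucket to the lighter physical bucket */
        bucket_load[vb_home[vb]] -= vb_count[vb];
        bucket_load[best]        += vb_count[vb];
    }
    vb_home[vb] = best;
    vb_count[vb]++;
    bucket_load[best]++;
}

int main(void)
{
    for (unsigned k = 0; k < 1000; k++)
        insert(k * 2654435761u);           /* scramble the key before hashing */
    for (unsigned b = 0; b < NUM_BUCKETS; b++)
        printf("bucket %u: %u entries\n", b, bucket_load[b]);
    return 0;
}
```
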
  • Publication number: 20180083866
    Abstract: Methods and apparatus for facilitating efficient Quality of Service (QoS) support for software-based packet processing by offloading QoS rate-limiting to NIC hardware. Software-based packet processing is performed on packet flows received at a compute platform, such as a general purpose server, and/or packet flows generated by local applications running on the compute platform. The packet processing includes packet classification that associates packets with packet flows using flow IDs, and identifying a QoS class for the packet and packet flow. NIC Tx queues are dynamically configured or pre-configured to effect rate limiting for forwarding packets enqueued in the NIC Tx queues. New packet flows are detected, and mapping data is created to map flow IDs associated with flows to the NIC Tx queues used to forward the packets associated with the flows.
    Type: Application
    Filed: September 20, 2016
    Publication date: March 22, 2018
    Inventors: Sameh Gobriel, Ren Wang, Eric K. Mann, Christian Maciocco, Tsung-Yuan C. Tai
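
A sketch of the flow-to-rate-limited-queue mapping described in publication 20180083866: packets are classified to a flow and QoS class, and each new flow is mapped once to a NIC Tx queue whose hardware rate limit matches that class. The fixed queue rates, the classify() rule, and the tiny linear flow table are illustrative assumptions, not the patented design.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_TXQ   3
#define MAX_FLOWS 16

/* NIC Tx queues pre-configured with hardware rate limits (Mb/s) */
static const unsigned txq_rate_mbps[NUM_TXQ] = { 100, 1000, 10000 };

struct flow_map { uint32_t flow_id; int txq; };
static struct flow_map flows[MAX_FLOWS];
static int nflows;

/* QoS class 0..2 picks a Tx queue; here class == queue for simplicity */
static int classify(uint32_t flow_id) { return (int)(flow_id % NUM_TXQ); }

static int txq_for_flow(uint32_t flow_id)
{
    for (int i = 0; i < nflows; i++)
        if (flows[i].flow_id == flow_id)
            return flows[i].txq;

    /* new flow detected: create the mapping once, reuse it afterwards */
    int q = classify(flow_id);
    if (nflows < MAX_FLOWS)
        flows[nflows++] = (struct flow_map){ flow_id, q };
    return q;
}

int main(void)
{
    uint32_t sample[] = { 7, 12, 7, 33 };
    for (unsigned i = 0; i < sizeof sample / sizeof sample[0]; i++) {
        int q = txq_for_flow(sample[i]);
        printf("flow %u -> txq %d (%u Mb/s cap)\n",
               (unsigned)sample[i], q, txq_rate_mbps[q]);
    }
    return 0;
}
```
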
  • Patent number: 9866498
    Abstract: Technologies for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache based on a determination that the network packet is not needed subsequent to processing the network packet, and provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: January 9, 2018
    Assignee: Intel Corporation
    Inventors: Ren Wang, Sameh Gobriel, Christian Maciocco, Tsung-Yuan C. Tai, Ben-Zion Friedman, Hang T. Nguyen, Namakkal N. Venkatesan, Michael A. O'Hanlon, Shrikant M. Shah, Sanjeev Jain
  • Patent number: 9866479
    Abstract: Technologies for supporting concurrency of a flow lookup table at a network device. The flow lookup table includes a plurality of candidate buckets that each includes one or more entries. The network device includes a flow lookup table write module configured to perform a displacement operation of a key/value pair to move the key/value pair from one bucket to another bucket via an atomic instruction and increment a version counter associated with the buckets affected by the displacement operation. The network device additionally includes a flow lookup table read module to check the version counters during a lookup operation on the flow lookup table to determine whether a displacement operation is affecting the presently read value of the buckets. Other embodiments are described herein and claimed.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: January 9, 2018
    Assignee: Intel Corporation
    Inventors: Ren Wang, Dong Zhou, Bruce Richardson, George W. Kennedy, Christian Maciocco, Sameh Gobriel, Tsung-Yuan C. Tai
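
A seqlock-style sketch of the version-counter protocol in patent 9866479: a writer bumps a per-bucket version around a displacement, and a reader retries its lookup if the version was odd or changed while it was reading. The structure names and single-bucket demo are assumptions for illustration, not the patented flow table.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct bucket {
    _Atomic uint32_t version;   /* odd while a displacement is in flight */
    uint64_t key, value;
};

static void writer_displace(struct bucket *b, uint64_t key, uint64_t value)
{
    atomic_fetch_add_explicit(&b->version, 1, memory_order_release); /* odd  */
    b->key = key;
    b->value = value;
    atomic_fetch_add_explicit(&b->version, 1, memory_order_release); /* even */
}

static uint64_t reader_lookup(struct bucket *b, uint64_t key)
{
    uint32_t v1, v2;
    uint64_t value;
    do {
        v1 = atomic_load_explicit(&b->version, memory_order_acquire);
        value = (b->key == key) ? b->value : 0;
        v2 = atomic_load_explicit(&b->version, memory_order_acquire);
    } while (v1 != v2 || (v1 & 1));   /* retry if a writer interfered */
    return value;
}

int main(void)
{
    struct bucket b = { .version = 0 };
    writer_displace(&b, 42, 4242);
    printf("lookup(42) = %llu\n", (unsigned long long)reader_lookup(&b, 42));
    return 0;
}
```
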
  • Publication number: 20180007011
    Abstract: In an embodiment, a method includes registering applications and network services for notification of an out-of-band introduction, and using the out-of-band introduction to bootstrap secure in-band provisioning of credentials and policies that are used to control subsequent access and resource sharing on an in-band channel. In another embodiment, an apparatus implements the method.
    Type: Application
    Filed: March 20, 2017
    Publication date: January 4, 2018
    Inventors: Victor B. Lortz, Jesse R. Walker, Shriharsha S. Hegde, Amol A. Kulkarni, Tsung-Yuan C. Tai
  • Patent number: 9829949
    Abstract: Methods and apparatus relating to adaptive interrupt coalescing for energy efficient mobile platforms are discussed herein. In one embodiment, one or more interrupts are buffered based on communication throughput. At least one of the one or more interrupts is released in response to expiration of an interrupt coalescing time period. Other embodiments are also claimed and described.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: November 28, 2017
    Assignee: Intel Corporation
    Inventors: Alexander W. Min, Ren Wang, Jr-Shian Tsai, Mesut A. Ergin, Tsung-Yuan C. Tai
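
An illustrative sketch of the adaptive interrupt-coalescing idea in patent 9829949: interrupts are held back and delivered in a batch when a coalescing window expires, with the window adapted to the observed throughput. The timing model and the scaling rule are assumptions, not the patented algorithm.

```c
#include <stdio.h>

struct coalescer {
    unsigned pending;        /* buffered (not yet delivered) interrupts */
    unsigned window_us;      /* current coalescing period               */
    unsigned elapsed_us;
};

/* Lower throughput tolerates a longer window (more coalescing, less power). */
static unsigned pick_window_us(unsigned throughput_mbps)
{
    return throughput_mbps > 500 ? 50 : 500;
}

static void on_interrupt(struct coalescer *c) { c->pending++; }

static void on_tick(struct coalescer *c, unsigned us, unsigned throughput_mbps)
{
    c->window_us = pick_window_us(throughput_mbps);
    c->elapsed_us += us;
    if (c->elapsed_us >= c->window_us && c->pending) {
        printf("deliver %u coalesced interrupt(s) after %u us\n",
               c->pending, c->elapsed_us);
        c->pending = 0;
        c->elapsed_us = 0;
    }
}

int main(void)
{
    struct coalescer c = { 0, 0, 0 };
    for (int t = 0; t < 10; t++) {          /* simulate 10 x 100 us ticks */
        on_interrupt(&c);
        on_tick(&c, 100, 200 /* Mb/s */);
    }
    return 0;
}
```
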
  • Patent number: 9830676
    Abstract: In accordance with some embodiments, a continuous thread is operated on the graphics processing unit. A continuous thread is launched once from the central processing unit and then runs continuously until an application on the central processing unit decides to terminate it. For example, the application may decide to terminate the thread in any of a variety of situations that may be programmed in advance, such as error detection, a desire to change the way the thread on the graphics processing unit operates, or power off. Unless actively terminated by the central processing unit, the continuous thread generally runs uninterrupted.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: November 28, 2017
    Assignee: Intel Corporation
    Inventors: Janet Tseng, Felix J. Degrood, Alexander W. Min, Jr-Shian Tsai, Tsung-Yuan C. Tai
  • Patent number: 9817684
    Abstract: In the present disclosure, functions associated with the central office of an evolved packet core network are co-located onto a computer platform or sub-components through virtualized function instances. This reduces and/or eliminates the physical interfaces between equipment and permits functional operation of the evolved packet core to occur at a network edge.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: November 14, 2017
    Assignee: Intel Corporation
    Inventors: Ashok Sunder Rajan, Richard A. Uhlig, Rajendra S. Yavatkar, Tsung-Yuan C. Tai, Christian Maciocco, Jeffrey R. Jackson, Daniel J. Dahle
  • Publication number: 20170286295
    Abstract: An apparatus and method are described for a triggered prefetch operation. For example, one embodiment of a processor comprises: a first core comprising a first cache to store a first set of cache lines; a second core comprising a second cache to store a second set of cache lines; a cache management circuit to maintain coherency between one or more cache lines in the first cache and the second cache, the cache management circuit to allocate a lock on a first cache line to the first cache; a prefetch circuit comprising a prefetch request buffer to store a plurality of prefetch request entries including a first prefetch request entry associated with the first cache line, the prefetch circuit to cause the first cache line to be prefetched to the second cache in response to an invalidate command detected for the first cache line.
    Type: Application
    Filed: April 1, 2016
    Publication date: October 5, 2017
    Inventors: Christopher B. Wilkerson, Ren Wang, Antoine Kaufmann, Anil Vasudevan, Robert G. Blankenship, Venkata Krishnan, Tsung-Yuan C. Tai
  • Patent number: 9740635
    Abstract: Computer-readable storage media, computing devices and methods associated with file cache management are discussed herein. In embodiments, a computing device may include a file cache and a file cache manager coupled with the file cache. The file cache manager may be configured to implement a context-aware eviction policy to identify a candidate file for deletion from the file cache, from a plurality of individual files contained within the file cache, based at least in part on file-level context information associated with the individual files. In embodiments, the file-level context information may include an indication of access recency and access frequency associated with the individual files. In such embodiments, identifying the candidate file for deletion from the file cache may be based, at least in part, on both the access recency and the access frequency of the individual files. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 12, 2015
    Date of Patent: August 22, 2017
    Assignee: Intel Corporation
    Inventors: Ren Wang, Weishuang Zhao, Wei Shen, Michael P. Mesnier, Tsung-Yuan C. Tai, Mesut A. Ergin
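
A sketch of the context-aware eviction scoring suggested by patent 9740635: the candidate for deletion is chosen by combining how recently and how frequently each cached file was accessed. The linear scoring formula and its weights are assumptions, not the patented policy.

```c
#include <stdio.h>

struct cached_file {
    const char *name;
    unsigned last_access_age_s;   /* seconds since last access (recency)  */
    unsigned access_count;        /* accesses while cached (frequency)    */
};

/* Higher score = more valuable to keep; evict the minimum. */
static double score(const struct cached_file *f)
{
    return (double)f->access_count - 0.01 * (double)f->last_access_age_s;
}

int main(void)
{
    struct cached_file files[] = {
        { "thumbnails.db", 30,   2 },
        { "index.sqlite",  600, 40 },
        { "trace.log",     3600, 1 },
    };
    int victim = 0;
    for (int i = 1; i < 3; i++)
        if (score(&files[i]) < score(&files[victim]))
            victim = i;
    printf("evict candidate: %s\n", files[victim].name);
    return 0;
}
```
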
  • Publication number: 20170206177
    Abstract: Embodiments of an invention for interrupts between virtual machines are disclosed. In an embodiment, a processor includes an instruction unit and an execution unit, both implemented at least partially in hardware of the processor. The instruction unit is to receive an instruction to send an interrupt to a target virtual machine. The execution unit is to execute the instruction on a sending virtual machine without exiting the sending virtual machine. Execution of the instruction includes using a handle specified by the instruction to find a posted interrupt descriptor.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 20, 2017
    Inventors: Jr-Shian Tsai, Ravi L. Sahita, Mesut A. Ergin, Rajesh M. Sankaran, Gilbert Neiger, Jun Nakajima, Edwin Verplanke, Barry E. Huntley, Tsung-Yuan C. Tai
  • Patent number: 9710380
    Abstract: Systems and methods for managing shared cache by multi-core processor. An example processing system comprises: a plurality of processing cores, each processing core communicatively coupled to a last level cache (LLC) slice; and a cache control logic coupled to the plurality of processing cores, the cache control logic configured to perform one of: making an LLC slice of an inactive processing core available to an active processing core or power gating the LLC slice, based on estimating cache requirements by active processing cores.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: July 18, 2017
    Assignee: Intel Corporation
    Inventors: Ren Wang, Kevin B. Theobald, Zeshan A. Chishti, Zhaojuan Bian, Aamer Jaleel, Tsung-Yuan C. Tai
  • Publication number: 20170192921
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache ("LLC"), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: January 4, 2016
    Publication date: July 6, 2017
    Inventors: Ren Wang, Yipeng Wang, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
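
A software model of the credit-style rate control hinted at by the resource management system in publication 20170192921: a core may only submit a transfer request to the queue management device while it holds credits, and credits are returned as the device drains requests. The credit counts, the two-core model, and the retry loop are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define CREDITS_PER_CORE 4

struct qmd {                 /* stand-in for the hardware queue manager */
    unsigned inflight;
    unsigned credits[2];     /* two producer cores in this toy model */
};

static bool submit(struct qmd *d, int core, int payload)
{
    if (d->credits[core] == 0)
        return false;                /* core must back off instead of stalling */
    d->credits[core]--;
    d->inflight++;
    printf("core %d enqueued payload %d\n", core, payload);
    return true;
}

static void device_drain(struct qmd *d, int core)
{
    if (d->inflight) {
        d->inflight--;
        d->credits[core]++;          /* credit returned to the producer */
    }
}

int main(void)
{
    struct qmd d = { 0, { CREDITS_PER_CORE, CREDITS_PER_CORE } };
    for (int i = 0; i < 6; i++)
        if (!submit(&d, 0, i)) {
            device_drain(&d, 0);     /* wait for the device, then retry */
            submit(&d, 0, i);
        }
    return 0;
}
```
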
  • Patent number: 9681332
    Abstract: Apparatuses, methods and storage media associated with file compression and transmission, or file reception and decompression, are described herein. Specifically, one or more compression/decompression or transmission/reception parameters associated with transmission or reception may be identified. Based on the identified parameters, energy consumption of compression and transmission, or reception and decompression, of the data over a wireless communication link may be predicted. Based on that prediction, a compression configuration may be identified. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: June 13, 2017
    Assignee: Intel Corporation
    Inventors: Alexander W. Min, Guan-Yu Lin, Tsung-Yuan C. Tai, Jr-Shian James Tsai
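
A sketch of the energy-prediction decision in patent 9681332: estimate the energy of compressing-then-sending versus sending uncompressed, and pick the cheaper option. All constants (energy per byte, compression ratio) are made-up illustrative inputs, not measured parameters from the patent.

```c
#include <stdio.h>

struct link_params {
    double tx_nj_per_byte;        /* radio energy to transmit one byte   */
    double compress_nj_per_byte;  /* CPU energy to compress one byte     */
    double compression_ratio;     /* compressed size / original size     */
};

static int should_compress(const struct link_params *p, double bytes)
{
    double raw      = bytes * p->tx_nj_per_byte;
    double compress = bytes * p->compress_nj_per_byte
                    + bytes * p->compression_ratio * p->tx_nj_per_byte;
    return compress < raw;
}

int main(void)
{
    struct link_params wifi = { 5.0, 1.5, 0.4 };
    double size = 4.0 * 1024 * 1024;     /* 4 MiB payload */
    printf("compress before sending? %s\n",
           should_compress(&wifi, size) ? "yes" : "no");
    return 0;
}
```
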
  • Publication number: 20170163575
    Abstract: Methods and apparatus to support multiple-writer/multiple-reader concurrency for software flow/packet classification on general purpose multi-core systems. A flow table with rows mapped to respective hash buckets with multiple entry slots is implemented in memory of a host platform with multiple cores, with each bucket being associated with a version counter. Multiple writer and reader threads are run on the cores, with writers providing updates to the flow table data. In connection with inserting new key data, a determination is made to which buckets will be changed, and access rights to those buckets are acquired prior to making any changes. For example, under a flow table employing cuckoo hashing, access rights are acquired to buckets along a full cuckoo path.
    Type: Application
    Filed: December 7, 2015
    Publication date: June 8, 2017
    Inventors: Ren Wang, Christian Maciocco, Namakkal N. Venkatesan, Tsung-Yuan C. Tai
  • Publication number: 20170149926
    Abstract: Technologies for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache based on a determination that the network packet is not needed subsequent to processing the network packet, and provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
    Type: Application
    Filed: February 7, 2017
    Publication date: May 25, 2017
    Inventors: Ren Wang, Sameh Gobriel, Christian Maciocco, Tsung-Yuan C. Tai, Ben-Zion Friedman, Hang T. Nguyen, Namakkal N. Venkatesan, Michael A. O'Hanlon, Shrikant M. Shah, Sanjeev Jain
  • Patent number: 9619006
    Abstract: A method and apparatus for selectively parking routers used for routing traffic in mesh interconnects. Various router parking (RP) algorithms are disclosed, including an aggressive RP algorithm where a minimum number of routers are kept active to ensure adequate network connectivity between active nodes and/or intercommunicating nodes, leading to a maximum reduction in static power consumption, and a conservative RP algorithm that favors network latency considerations over static power consumption while also reducing power. An adaptive RP algorithm is also disclosed that implements aspects of the aggressive and conservative RP algorithms to balance power consumption and latency considerations in response to ongoing node utilization and associated traffic. The techniques may be implemented in internal network structures, such as for single chip computers, as well as external network structures, such as computing clusters and massively parallel computer architectures.
    Type: Grant
    Filed: January 10, 2012
    Date of Patent: April 11, 2017
    Assignee: Intel Corporation
    Inventors: Ahmad Samih, Ren Wang, Christian Maciocco, Tsung-Yuan C. Tai
  • Patent number: 9602471
    Abstract: In an embodiment, a method includes registering applications and network services for notification of an out-of-band introduction, and using the out-of-band introduction to bootstrap secure in-band provisioning of credentials and policies that are used to control subsequent access and resource sharing on an in-band channel. In another embodiment, an apparatus implements the method.
    Type: Grant
    Filed: March 26, 2012
    Date of Patent: March 21, 2017
    Assignee: Intel Corporation
    Inventors: Victor B. Lortz, Jesse R. Walker, Shriharsha S. Hegde, Amol A. Kulkarni, Tsung-Yuan C. Tai