Patents by Inventor Tsung-Yuan C. Tai

Tsung-Yuan C. Tai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11943340
    Abstract: In some examples, for process-to-process communication, such as in function linking, a virtual channel can be provisioned to provide virtual machine to virtual machine communications. In response to a transmit request from a source virtual machine, the virtual channel can cause a data copy from a source buffer associated with the source virtual machine without decryption or encryption. The virtual channel provisions a key identifier for the copied data. The destination virtual machine can receive an indication that data is available and can cause the data to be decrypted using a key accessed using the key identifier and source address of the copied data. In addition, the data can be encrypted using a second, different key for storage in a destination buffer associated with the destination virtual machine. In some examples, the key identifier and source address are managed by the virtual channel and are not visible to the virtual machine or hypervisor.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: March 26, 2024
    Assignee: Intel Corporation
    Inventors: Bo Cui, Cunming Liang, Jr-Shian Tsai, Ping Yu, Xiaobing Qian, Xuekun Hu, Lin Luo, Shravan Nagraj, Xiaowen Zhang, Mesut A. Ergin, Tsung-Yuan C. Tai, Andrew J. Herdrich
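
The flow this patent describes can be pictured as a small simulation. The sketch below is only illustrative: a toy XOR keystream stands in for real per-VM memory encryption, and the key identifiers, addresses, and buffer contents are made up. What it shows is that the channel copies ciphertext untouched, and only the receiver decrypts with the key named by the key identifier before re-encrypting with a second, different key.

    # Illustrative sketch of the virtual-channel flow: the channel copies
    # ciphertext as-is and only hands the destination a key identifier plus the
    # source address; decryption and re-encryption happen at the receiver.
    # The XOR "cipher" is a stand-in for real per-VM memory encryption.
    import itertools

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

    KEY_TABLE = {                      # key ID -> key, owned by the channel, not by VMs
        "kid-src": b"source-vm-key",
        "kid-dst": b"dest-vm-key",
    }

    class VirtualChannel:
        def __init__(self):
            self.pending = []          # (key_id, source_address, ciphertext)

        def transmit(self, key_id, source_buffer, source_address):
            # Copy ciphertext as-is: no decryption or encryption on the copy path.
            self.pending.append((key_id, source_address, bytes(source_buffer)))

        def receive(self):
            return self.pending.pop(0) if self.pending else None

    # Source VM: data already encrypted in its buffer with its own key.
    plaintext = b"packet payload"
    src_buffer = xor_cipher(plaintext, KEY_TABLE["kid-src"])

    chan = VirtualChannel()
    chan.transmit("kid-src", src_buffer, source_address=0x1000)

    # Destination VM: decrypt with the key named by the key ID, then re-encrypt
    # with a second, different key before storing in its own buffer.
    key_id, src_addr, copied = chan.receive()
    recovered = xor_cipher(copied, KEY_TABLE[key_id])
    dst_buffer = xor_cipher(recovered, KEY_TABLE["kid-dst"])

    assert recovered == plaintext
    print(f"copied from {hex(src_addr)} under {key_id}; re-encrypted for destination")
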
  • Patent number: 11829789
    Abstract: In the present disclosure, functions associated with the central office of an evolved packet core network are co-located onto a computer platform or sub-components through virtualized function instances. This reduces and/or eliminates the physical interfaces between equipment and permits functional operation of the evolved packet core to occur at a network edge.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: November 28, 2023
    Assignee: Intel Corporation
    Inventors: Ashok Sunder Rajan, Richard A. Uhlig, Rajendra S. Yavatkar, Tsung-Yuan C. Tai, Christian Maciocco, Jeffrey R. Jackson, Daniel J. Dahle
  • Publication number: 20230379271
    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies a predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
    Type: Application
    Filed: July 31, 2023
    Publication date: November 23, 2023
    Applicant: Intel Corporation
    Inventors: Ren Wang, Mia Primorac, Tsung-Yuan C. Tai, Saikrishna Edupuganti, John J. Browne
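
As a rough illustration of the batching logic in this abstract (and in the related grants further down), the Python sketch below drains the queue only once it holds a full batch, samples throughput per interval, and adjusts the batch size when throughput falls off. The watermarks, the 90%-of-best condition, and the halve/step-up rule are assumptions, not taken from the patent.

    # Minimal sketch of a batch-size feedback loop: packets are drained only
    # when the queue holds at least `batch_size` of them, and the batch size is
    # adjusted when measured throughput drops. All constants are illustrative.
    import random

    batch_size = 32                      # threshold number of queued packets
    queue = []
    best_throughput = 0.0

    for interval in range(20):
        queue.extend(object() for _ in range(random.randint(10, 60)))   # arrivals
        processed = 0
        while len(queue) >= batch_size:                 # drain only full batches
            del queue[:batch_size]
            processed += batch_size
        throughput = processed * random.uniform(900, 1100)   # stand-in measurement, pps
        best_throughput = max(best_throughput, throughput)
        # Predefined condition (illustrative): throughput fell below 90% of the best seen.
        if best_throughput and throughput < 0.9 * best_throughput:
            batch_size = max(1, batch_size // 2)        # smaller batches, lower latency
        else:
            batch_size = min(256, batch_size + 8)       # larger batches, better efficiency
        print(f"interval {interval}: processed={processed}, batch_size={batch_size}")
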
  • Patent number: 11811660
    Abstract: Apparatus, methods, and systems for tuple space search-based flow classification using cuckoo hash tables and unmasked packet headers are described herein. A device can communicate with one or more hardware switches. The device can include memory to store hash table entries of a hash table. The device can include processing circuitry to perform a hash lookup in the hash table. The lookup can be based on an unmasked key included in a packet header corresponding to a received data packet. The processing circuitry can retrieve an index pointing to a sub-table, the sub-table including a set of rules for handling the data packet. Other embodiments are also described.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: November 7, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Yipeng Wang, Sameh Gobriel
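
A minimal sketch of the two-stage classification this abstract describes: a cuckoo-style table keyed by the unmasked header returns an index into a sub-table of rules. The two hash functions, bucket count, and rule format below are illustrative assumptions.

    # Cuckoo-style lookup keyed by the unmasked 5-tuple; the value is an index
    # into a sub-table that holds the masked rules for handling the packet.
    BUCKETS = 8

    def h1(key): return hash(key) % BUCKETS
    def h2(key): return hash((key, "alt")) % BUCKETS

    table = [None] * BUCKETS            # each slot holds (unmasked_key, subtable_index)

    def cuckoo_insert(key, subtable_index, max_kicks=16):
        entry = (key, subtable_index)
        slot = h1(key)
        for _ in range(max_kicks):
            if table[slot] is None:
                table[slot] = entry
                return True
            table[slot], entry = entry, table[slot]              # displace the occupant
            slot = h2(entry[0]) if slot == h1(entry[0]) else h1(entry[0])
        return False                                             # table needs a rehash

    def cuckoo_lookup(key):
        for slot in (h1(key), h2(key)):
            if table[slot] and table[slot][0] == key:
                return table[slot][1]
        return None

    subtables = [
        [("10.0.0.0/8", "drop")],       # sub-table 0: rules for one mask pattern
        [("0.0.0.0/0", "forward")],     # sub-table 1: default rules
    ]

    flow = ("10.0.0.1", "192.168.1.5", 6, 1234, 80)   # unmasked 5-tuple from the header
    cuckoo_insert(flow, 0)
    idx = cuckoo_lookup(flow)
    print("rules for flow:", subtables[idx] if idx is not None else subtables[1])
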
  • Patent number: 11797076
    Abstract: A computer-implemented method can include receiving a queue depth for a receive queue of a network interface controller (NIC), determining whether a power state of a central processing unit (CPU) core mapped to the receive queue should be adjusted based on the queue depth, and adjusting the power state of the CPU core responsive to a determination that the power state of the CPU core should be adjusted.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: October 24, 2023
    Assignee: Intel Corporation
    Inventors: Brian J. Skerry, Ira Weiny, Patrick Connor, Tsung-Yuan C. Tai, Alexander W. Min
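
A schematic Python rendering of the queue-depth-driven power management described here. The watermark values, the read_queue_depth() stub, and set_core_power_state() are hypothetical stand-ins; a real implementation would talk to the NIC driver and to cpufreq/idle interfaces.

    # The receive-queue depth reported by the NIC decides whether the CPU core
    # mapped to that queue is stepped up or down, with a hysteresis band.
    import random

    HIGH_WATERMARK = 512     # packets queued: core should run at full speed
    LOW_WATERMARK = 64       # nearly idle: core can drop to a lower power state

    def read_queue_depth(rx_queue_id: int) -> int:
        return random.randint(0, 1024)          # stand-in for a NIC register read

    def set_core_power_state(core_id: int, state: str) -> None:
        print(f"core {core_id}: requesting power state {state}")

    def adjust(core_id: int, rx_queue_id: int, current_state: str) -> str:
        depth = read_queue_depth(rx_queue_id)
        if depth > HIGH_WATERMARK and current_state != "performance":
            set_core_power_state(core_id, "performance")
            return "performance"
        if depth < LOW_WATERMARK and current_state != "powersave":
            set_core_power_state(core_id, "powersave")
            return "powersave"
        return current_state                     # inside the hysteresis band: no change

    state = "powersave"
    for _ in range(5):
        state = adjust(core_id=3, rx_queue_id=0, current_state=state)
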
  • Patent number: 11757802
    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies a predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: September 12, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Mia Primorac, Tsung-Yuan C. Tai, Saikrishna Edupuganti, John J. Browne
  • Patent number: 11709774
    Abstract: Examples described herein relate to a network interface apparatus that includes packet processing circuitry and a bus interface. In some examples, the packet processing circuitry is to: process a received packet that includes data, a request to perform a write operation to write the data to a cache, and an indicator that the data is to be durable, and, based at least on the received packet including the request and the indicator, cause the data to be written to the cache and to non-volatile memory. In some examples, the packet processing circuitry is to issue a command to an input output (IO) controller to cause the IO controller to write the data to the cache and the non-volatile memory.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: July 25, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yifan Yuan, Yipeng Wang, Tsung-Yuan C. Tai, Tony Hurson
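
The durable-write path can be sketched at a very high level as follows; the packet layout, the dict used as a cache, and the file used as non-volatile memory are illustrative assumptions, not the patent's wire format.

    # When a received packet carries both a write request and a durability
    # indicator, the payload is placed in the cache and also persisted,
    # rather than being left only in volatile cache.
    import os
    import struct
    import tempfile

    FLAG_WRITE = 0x1
    FLAG_DURABLE = 0x2

    cache = {}                                               # stand-in for the CPU cache
    nvm_path = os.path.join(tempfile.gettempdir(), "nvm_region.bin")

    def handle_packet(packet: bytes) -> None:
        flags, addr = struct.unpack_from("!BQ", packet)      # 1-byte flags, 8-byte address
        payload = packet[9:]
        if flags & FLAG_WRITE:
            cache[addr] = payload                            # write allocated into cache
            if flags & FLAG_DURABLE:
                with open(nvm_path, "ab") as nvm:            # also push to non-volatile memory
                    nvm.write(struct.pack("!Q", addr) + payload)

    pkt = struct.pack("!BQ", FLAG_WRITE | FLAG_DURABLE, 0x1000) + b"log record"
    handle_packet(pkt)
    print("cached:", cache, "| persisted to:", nvm_path)
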
  • Patent number: 11698929
    Abstract: A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware-accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and progress tracking. The offload engine can be associated with a last level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare signatures against the generated signature, retrieve a key associated with the signature, and apply the comparator to compare the key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the results queue. Acceleration of operations in tree traversal and tuple search can also occur.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: July 11, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Andrew J. Herdrich, Tsung-Yuan C. Tai, Yipeng Wang, Raghu Kondapalli, Alexander Bachmutsky, Yifan Yuan
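
The lookup pipeline the offload engine accelerates can be outlined in a few lines: queue a request, hash the key to a short signature, filter bucket entries by signature, confirm with a full key compare, and post the data pointer to a results queue. The queue and bucket shapes below are assumptions for illustration.

    # Software model of the offloaded hash-table lookup: signature filter first,
    # full key comparison second, result posted to a results queue.
    from collections import deque

    NUM_BUCKETS = 16

    def signature(key: str) -> int:
        return hash(key) & 0xFFFF                    # short signature for cheap compares

    buckets = [[] for _ in range(NUM_BUCKETS)]       # each entry: (signature, key, data_ptr)
    instruction_queue = deque()
    results_queue = deque()

    def table_insert(key: str, data_ptr: int) -> None:
        buckets[hash(key) % NUM_BUCKETS].append((signature(key), key, data_ptr))

    def engine_step() -> None:
        key = instruction_queue.popleft()
        sig = signature(key)
        for entry_sig, entry_key, data_ptr in buckets[hash(key) % NUM_BUCKETS]:
            if entry_sig == sig and entry_key == key:    # signature filter, then full compare
                results_queue.append((key, data_ptr))
                return
        results_queue.append((key, None))                # miss

    table_insert("flow-42", data_ptr=0xDEAD)
    instruction_queue.append("flow-42")
    engine_step()
    print(results_queue.popleft())
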
  • Publication number: 20230070309
    Abstract: Examples disclosed herein include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data.
    Type: Application
    Filed: October 28, 2022
    Publication date: March 9, 2023
    Applicant: Tahoe Research, Ltd.
    Inventors: Ren Wang, Zhonghong Ou, Arvind Kumar, Kristoffer Fleming, Tsung-Yuan C. Tai, Timothy J. Gresham, John C. Weast, Corey Kukis
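
As a toy example of choosing a route from crowdsourced network-condition data, the sketch below scores each candidate route by its worst segment; the segment data and the min-based scoring are illustrative assumptions, not the application's method.

    # Each candidate route is a list of segments; each segment has crowdsourced
    # network-condition data (here just an average throughput in Mbps), and the
    # route with the best aggregate condition is chosen.
    segment_conditions = {            # segment id -> reported average throughput, Mbps
        "A": 45.0, "B": 3.5, "C": 30.0, "D": 28.0,
    }

    routes = {
        "via-highway": ["A", "B"],    # fast road, poor coverage on segment B
        "via-downtown": ["C", "D"],
    }

    def route_score(segments):
        # Worst segment dominates: a dead zone hurts the whole trip.
        return min(segment_conditions[s] for s in segments)

    best = max(routes, key=lambda name: route_score(routes[name]))
    print("chosen route:", best, "with worst-segment throughput", route_score(routes[best]), "Mbps")
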
  • Publication number: 20230030060
    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies a predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
    Type: Application
    Filed: June 13, 2022
    Publication date: February 2, 2023
    Inventors: Ren Wang, Mia Primorac, Tsung-Yuan C. Tai, Saikrishna Edupuganti, John J. Browne
  • Patent number: 11570123
    Abstract: In an embodiment, an apparatus is provided that may include circuitry to generate, at least in part, and/or receive, at least in part, at least one request that at least one network node generate, at least in part, information. The information may be to permit selection, at least in part, of (1) at least one power consumption state of the at least one network node, and (2) at least one time period. The at least one time period may be to elapse, after receipt by at least one other network node of at least one packet, prior to requesting at least one change in the at least one power consumption state. The at least one packet may be to be transmitted to the at least one network node. Of course, many alternatives, modifications, and variations are possible without departing from this embodiment.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: January 31, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Jr-Shian Tsai
  • Patent number: 11513957
    Abstract: Methods and apparatus implementing hardware/software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus includes multi-core processors with multi-level cache hierarchies including an L1 and L2 cache for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to an LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: November 29, 2022
    Assignee: Intel Corporation
    Inventors: Ren Wang, Andrew J. Herdrich, Yen-Cheng Liu, Herbert H. Hum, Jong Soo Park, Christopher J. Hughes, Namakkal N. Venkatesan, Adrian C. Moga, Aamer Jaleel, Zeshan A. Chishti, Mesut A. Ergin, Jr-Shian Tsai, Alexander W. Min, Tsung-Yuan C. Tai, Christian Maciocco, Rajesh Sankaran
  • Patent number: 11486719
    Abstract: Examples disclosed herein include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: November 1, 2022
    Assignee: Tahoe Research, Ltd.
    Inventors: Ren Wang, Zhonghong Ou, Arvind Kumar, Kristoffer Fleming, Tsung-Yuan C. Tai, Timothy J. Gresham, John C. Weast, Corey Kukis
  • Patent number: 11431600
    Abstract: Technologies for monitoring network traffic include a computing device that monitors network traffic at a graphics processing unit (GPU) of the computing device. The computing device manages computing resources of the computing device based on results of the monitored network traffic. The computing resources may include one or more virtual machines to process network traffic that is to be monitored at the GPU of the computing device. Other embodiments are described and claimed.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: August 30, 2022
    Assignee: Intel Corporation
    Inventors: Alexander W. Min, Jr-Shian Tsai, Janet Tseng, Kapil Sood, Tsung-Yuan C. Tai
  • Patent number: 11409506
    Abstract: Examples may include a method of compiling a declarative language program for a virtual switch. The method includes parsing the declarative language program, the program defining a plurality of match-action tables (MATs), translating the plurality of MATs into intermediate code, and parsing a core identifier (ID) assigned to each one of the plurality of MATs. When the core IDs of the plurality of MATs are the same, the method includes connecting intermediate code of the plurality of MATs using function calls, and translating the intermediate code of the plurality of MATs into machine code to be executed by a core identified by the core IDs.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Tsung-Yuan C. Tai, Jr-Shian Tsai, Xiangyang Guo
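
A toy rendering of this compilation flow, with Python closures standing in for the intermediate-code and machine-code stages: match-action tables that share a core ID are chained with direct function calls rather than going back through a dispatcher. The MAT format and pipeline shape are assumptions for illustration.

    # Group MATs by their assigned core ID, then "link" same-core stages so one
    # stage calls into the next directly.
    from collections import defaultdict

    mats = [  # declarative MAT definitions: name, core ID, match field, rules
        {"name": "acl", "core": 0, "field": "dst_ip", "table": {"10.0.0.1": "drop"}},
        {"name": "fwd", "core": 0, "field": "dst_ip", "table": {"10.0.0.2": "port1"}},
        {"name": "nat", "core": 1, "field": "src_ip", "table": {"192.168.0.5": "rewrite"}},
    ]

    def compile_mat(mat):
        def stage(packet, verdicts):
            action = mat["table"].get(packet.get(mat["field"]))
            if action:
                verdicts.append((mat["name"], action))
            return verdicts
        return stage

    per_core = defaultdict(list)                 # intermediate code grouped by core ID
    for mat in mats:
        per_core[mat["core"]].append(compile_mat(mat))

    def link(stages):
        def pipeline(packet):
            verdicts = []
            for stage in stages:                 # same core: plain function calls, no hand-off
                stage(packet, verdicts)
            return verdicts
        return pipeline

    core_pipelines = {core: link(stages) for core, stages in per_core.items()}
    print(core_pipelines[0]({"dst_ip": "10.0.0.2", "src_ip": "1.2.3.4"}))
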
  • Patent number: 11392298
    Abstract: Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Sameh Gobriel, Tsung-Yuan C. Tai
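
A rough sketch of the insertion-ratio control described here: the cache inserts on only a fraction of misses, and the miss ratios of consecutive time windows decide whether that fraction moves up or down. All constants and the direction of adjustment are illustrative assumptions.

    # Probabilistic insertion on miss; the insertion ratio is tuned by comparing
    # the miss ratio of the current window against the previous window.
    import random

    insertion_ratio = 0.5          # insert on roughly half of all cache misses
    prev_miss_ratio = None
    cache, capacity = {}, 64

    def run_window(accesses=1000):
        global prev_miss_ratio, insertion_ratio
        hits = misses = 0
        for _ in range(accesses):
            key = random.randint(0, 200)
            if key in cache:
                hits += 1
            else:
                misses += 1
                if random.random() < insertion_ratio:    # probabilistic insertion on miss
                    if len(cache) >= capacity:
                        cache.pop(next(iter(cache)))      # crude FIFO eviction stand-in
                    cache[key] = True
        miss_ratio = misses / (hits + misses)
        if prev_miss_ratio is not None:
            if miss_ratio > prev_miss_ratio:              # got worse: try inserting more
                insertion_ratio = min(1.0, insertion_ratio + 0.1)
            else:                                         # improved or flat: insert less
                insertion_ratio = max(0.1, insertion_ratio - 0.1)
        prev_miss_ratio = miss_ratio
        return miss_ratio

    for window in range(5):
        ratio = run_window()
        print(f"window {window}: miss_ratio={ratio:.2f}, insertion_ratio={insertion_ratio:.1f}")
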
  • Patent number: 11379342
    Abstract: There is disclosed in one example a computing apparatus, including: a processor; a multilevel cache including a plurality of cache levels; a peripheral device configured to write data directly to a selected cache level; and a cache monitoring circuit, including a cache counter to track cache lines evicted from the selected cache level without being processed, and logic to provide a direct write policy according to the cache counter.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: July 5, 2022
    Assignee: Intel Corporation
    Inventors: Ren Wang, Bin Li, Andrew J. Herdrich, Tsung-Yuan C. Tai, Ramakrishna Huggahalli
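
The monitoring logic can be modeled schematically as follows: a device writes lines directly into a small cache, a counter tracks lines that are evicted before any core touches them, and the direct-write policy is disabled once that counter crosses a limit. The cache size and threshold below are made up for the example.

    # Device writes go straight into a selected cache level until the counter of
    # "evicted before processed" lines indicates the cache is being polluted.
    from collections import OrderedDict

    CACHE_WAYS = 8
    EVICT_UNTOUCHED_LIMIT = 4

    cache = OrderedDict()          # address -> touched?; insertion order approximates LRU
    evicted_untouched = 0          # the "cache counter" from the abstract
    direct_write_enabled = True

    def device_write(addr):
        global evicted_untouched, direct_write_enabled
        if not direct_write_enabled:
            return "written to DRAM"                    # policy: bypass the cache
        if len(cache) >= CACHE_WAYS:
            _, was_touched = cache.popitem(last=False)  # evict the oldest line
            if not was_touched:
                evicted_untouched += 1
                if evicted_untouched > EVICT_UNTOUCHED_LIMIT:
                    direct_write_enabled = False        # send future writes to memory
        cache[addr] = False                             # new line, not yet processed
        return "written to cache"

    def core_read(addr):
        if addr in cache:
            cache[addr] = True                          # line was actually consumed

    for addr in range(20):                              # device outpaces the consumer
        print(addr, device_write(addr), "| direct writes on:", direct_write_enabled)
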
  • Patent number: 11362968
    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies a predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: June 14, 2022
    Assignee: Intel Corporation
    Inventors: Ren Wang, Mia Primorac, Tsung-Yuan C. Tai, Saikrishna Edupuganti, John J. Browne
  • Publication number: 20220171643
    Abstract: In the present disclosure, functions associated with the central office of an evolved packet core network are co-located onto a computer platform or sub-components through virtualized function instances. This reduces and/or eliminates the physical interfaces between equipment and permits functional operation of the evolved packet core to occur at a network edge.
    Type: Application
    Filed: February 14, 2022
    Publication date: June 2, 2022
    Applicant: Intel Corporation
    Inventors: Ashok Sunder Rajan, Richard A. Uhlig, Rajendra S. Yavatkar, Tsung-Yuan C. Tai, Christian Maciocco, Jeffrey R. Jackson, Daniel J. Dahle
  • Publication number: 20220150055
    Abstract: In some examples, for process-to-process communication, such as in function linking, a virtual channel can be provisioned to provide virtual machine to virtual machine communications. In response to a transmit request from a source virtual machine, the virtual channel can cause a data copy from a source buffer associated with the source virtual machine without decryption or encryption. The virtual channel provisions a key identifier for the copied data. The destination virtual machine can receive an indication that data is available and can cause the data to be decrypted using a key accessed using the key identifier and source address of the copied data. In addition, the data can be encrypted using a second, different key for storage in a destination buffer associated with the destination virtual machine. In some examples, the key identifier and source address are managed by the virtual channel and are not visible to the virtual machine or hypervisor.
    Type: Application
    Filed: April 19, 2019
    Publication date: May 12, 2022
    Inventors: Bo Cui, Cunming Liang, Jr-Shian Tsai, Ping Yu, Xiaobing Qian, Xuekun Hu, Lin Luo, Shravan Nagraj, Xiaowen Zhang, Mesut A. Ergin, Tsung-Yuan C. Tai, Andrew J. Herdrich