Patents by Inventor Tsung-Yuan C. Tai

Tsung-Yuan C. Tai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190104150
    Abstract: A computing apparatus for providing a node within a distributed network function, including: a hardware platform; a network interface to communicatively couple to at least one other peer node of the distributed network function; a distributor function including logic to operate on the hardware platform, including a hashing module configured to receive an incoming network packet via the network interface and perform on the incoming network packet a first-level hash of a two-level hash, the first-level hash being a lightweight hash with respect to a second-level hash, the first-level hash to deterministically direct a packet to one of the nodes of the distributed network function as a directed packet; and a denial of service (DoS) mitigation engine to receive notification of a DoS attack, identify a DoS packet via the first-level hash, and prevent the DoS packet from reaching the second-level hash.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: Intel Corporation
    Inventors: Sameh Gobriel, Christian Maciocco, Byron Marohn, Ren Wang, Tsung-Yuan C. Tai
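    Illustrative sketch: the following minimal Python fragment is one possible reading of the two-level hash and DoS mitigation flow described in publication 20190104150 above. The choice of CRC32 as the lightweight first-level hash, SHA-256 as the heavier second-level hash, and the blocked-hash set are assumptions for illustration, not the claimed implementation.
      import hashlib
      import zlib

      NUM_NODES = 4
      blocked_hashes = set()   # first-level hash values flagged by DoS mitigation (assumption)

      def first_level_hash(packet: bytes) -> int:
          # Lightweight hash (CRC32 here) computed on every incoming packet.
          return zlib.crc32(packet)

      def second_level_hash(packet: bytes) -> str:
          # Heavier hash performed only by the node the packet is directed to.
          return hashlib.sha256(packet).hexdigest()

      def on_dos_notification(attack_packet: bytes) -> None:
          # Identify attack traffic by its first-level hash so later copies
          # never reach the expensive second-level hash.
          blocked_hashes.add(first_level_hash(attack_packet))

      def distribute(packet: bytes):
          h1 = first_level_hash(packet)
          if h1 in blocked_hashes:
              return None                        # drop the suspected DoS packet
          node = h1 % NUM_NODES                  # deterministically direct to a peer node
          return node, second_level_hash(packet)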
  • Publication number: 20190102303
    Abstract: Apparatus, method, and system for implementing a software-transparent hardware predictor for core-to-core data communication optimization are described herein. An embodiment of the apparatus includes a plurality of hardware processor cores each including a private cache; a shared cache that is communicatively coupled to and shared by the plurality of hardware processor cores; and a predictor circuit. The predictor circuit is to track activities relating to a plurality of monitored cache lines in the private cache of a producer hardware processor core (producer core) and to enable a cache line push operation upon determining a target hardware processor core (target core) based on the tracked activities. An execution of the cache line push operation is to cause a plurality of unmonitored cache lines in the private cache of the producer core to be moved to the private cache of the target core.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Ren Wang, Joseph Nuzman, Samantika S. Sury, Andrew J. Herdrich, Namakkal N. Venkatesan, Anil Vasudevan, Tsung-Yuan C. Tai, Niall D. McDonnell
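    Illustrative sketch: a minimal software analogue of the predictor behavior described in publication 20190102303 above, assuming the predictor counts which core reads a small set of monitored lines and then pushes the remaining (unmonitored) lines to that core's private cache. The names, thresholds, and the software representation of hardware structures are assumptions.
      from collections import Counter

      monitored_lines = {0x100, 0x140}    # producer cache lines whose accesses are tracked
      reads_by_core = Counter()           # consumer core id -> reads of monitored lines

      def on_remote_read(core_id: int, line_addr: int) -> None:
          # Track which core keeps pulling the producer's monitored lines.
          if line_addr in monitored_lines:
              reads_by_core[core_id] += 1

      def maybe_push(unmonitored_lines, push_line, min_samples=8):
          # Once a clear consumer (target core) emerges, enable the push operation
          # and move the unmonitored lines into that core's private cache.
          if sum(reads_by_core.values()) < min_samples:
              return
          target_core, _ = reads_by_core.most_common(1)[0]
          for line in unmonitored_lines:
              push_line(line, target_core)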
  • Publication number: 20190102346
    Abstract: A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware-accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and progress tracking. The offload engine can be associated with a last level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare signatures against the generated signature, retrieve a key associated with the signature, and apply the comparator to compare the key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the results queue. Acceleration of operations in tree traversal and tuple search can also occur.
    Type: Application
    Filed: November 30, 2018
    Publication date: April 4, 2019
    Inventors: Ren WANG, Andrew J. HERDRICH, Tsung-Yuan C. TAI, Yipeng WANG, Raghu KONDAPALLI, Alexander BACHMUTSKY, Yifan YUAN
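    Illustrative sketch: a toy model of the signature-then-key lookup sequence the abstract of publication 20190102346 describes for the offload engine. The bucket layout, the use of Python's built-in hash, and the 16-bit signature width are assumptions made only to keep the example self-contained.
      class OffloadHashTable:
          """Software stand-in for the offloaded hash-table lookup steps."""

          def __init__(self, num_buckets: int = 64):
              # Each bucket holds (signature, key, data_pointer) entries.
              self.buckets = [[] for _ in range(num_buckets)]

          @staticmethod
          def _signature(key: bytes) -> int:
              # Hashing function applied to the key to generate a short signature.
              return hash(key) & 0xFFFF

          def insert(self, key: bytes, data_pointer: int) -> None:
              idx = hash(key) % len(self.buckets)
              self.buckets[idx].append((self._signature(key), key, data_pointer))

          def lookup(self, key: bytes):
              idx = hash(key) % len(self.buckets)
              sig = self._signature(key)
              for stored_sig, stored_key, data_pointer in self.buckets[idx]:
                  if stored_sig != sig:        # comparator pass 1: cheap signature check
                      continue
                  if stored_key == key:        # comparator pass 2: full key comparison
                      return data_pointer      # would be placed in the results queue
              return None

      table = OffloadHashTable()
      table.insert(b"flow-1", data_pointer=0x7F00)
      print(table.lookup(b"flow-1"))           # -> 32512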
  • Patent number: 10237171
    Abstract: Methods and apparatus for facilitating efficient Quality of Service (QoS) support for software-based packet processing by offloading QoS rate-limiting to NIC hardware. Software-based packet processing is performed on packet flows received at a compute platform, such as a general purpose server, and/or packet flows generated by local applications running on the compute platform. The packet processing includes packet classification that associates packets with packet flows using flow IDs, and identifying a QoS class for the packet and packet flow. NIC Tx queues are dynamically configured or pre-configured to effect rate limiting for forwarding packets enqueued in the NIC Tx queues. New packet flows are detected, and mapping data is created to map flow IDs associated with flows to the NIC Tx queues used to forward the packets associated with the flows.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: March 19, 2019
    Assignee: Intel Corporation
    Inventors: Sameh Gobriel, Ren Wang, Eric K. Mann, Christian Maciocco, Tsung-Yuan C. Tai
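    Illustrative sketch: one way to express the flow-to-queue mapping described in patent 10237171 above, in which packet classification assigns a flow ID and QoS class in software while rate limiting itself is enforced by pre-configured NIC Tx queues. The queue rates, class names, and packet fields are assumptions.
      # Tx queues assumed to be pre-configured in NIC hardware with these rate limits (Mbps).
      tx_queue_rate_mbps = {0: 10_000, 1: 1_000, 2: 100}
      qos_class_to_queue = {"best-effort": 0, "video": 1, "background": 2}
      flow_to_queue = {}   # mapping data created when a new flow is detected

      def classify(packet: dict):
          # Toy classifier: the flow ID is the 5-tuple, the QoS class rides in metadata.
          flow_id = (packet["src"], packet["dst"], packet["proto"],
                     packet["sport"], packet["dport"])
          return flow_id, packet.get("qos_class", "best-effort")

      def enqueue(packet: dict) -> int:
          flow_id, qos_class = classify(packet)
          if flow_id not in flow_to_queue:                 # new packet flow detected
              flow_to_queue[flow_id] = qos_class_to_queue[qos_class]
          queue = flow_to_queue[flow_id]
          # The NIC, not software, enforces tx_queue_rate_mbps[queue] on this queue.
          return queue

      print(enqueue({"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6,
                     "sport": 1234, "dport": 80, "qos_class": "video"}))   # -> 1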
  • Patent number: 10218647
    Abstract: Methods and apparatus to support multiple-writer/multiple-reader concurrency for software flow/packet classification on general purpose multi-core systems. A flow table with rows mapped to respective hash buckets with multiple entry slots is implemented in memory of a host platform with multiple cores, with each bucket being associated with a version counter. Multiple writer and reader threads are run on the cores, with writers providing updates to the flow table data. In connection with inserting new key data, a determination is made to which buckets will be changed, and access rights to those buckets are acquired prior to making any changes. For example, under a flow table employing cuckoo hashing, access rights are acquired to buckets along a full cuckoo path.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: February 26, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Christian Maciocco, Namakkal N. Venkatesan, Tsung-Yuan C. Tai
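    Illustrative sketch: a simplified software model of the per-bucket version counters described in patent 10218647 above, assuming a seqlock-style scheme in which readers retry when a version changes and a writer acquires every bucket on a cuckoo path before moving entries. Slot counts and the single writer lock are assumptions.
      import threading

      class Bucket:
          def __init__(self, slots: int = 4):
              self.version = 0                  # even = stable, odd = write in progress
              self.entries = [None] * slots

      class FlowTable:
          def __init__(self, num_buckets: int = 8):
              self.buckets = [Bucket() for _ in range(num_buckets)]
              self.writer_lock = threading.Lock()    # serializes writers only

          def read(self, bucket_idx: int, slot: int):
              b = self.buckets[bucket_idx]
              while True:
                  v1 = b.version
                  entry = b.entries[slot]
                  v2 = b.version
                  if v1 == v2 and v1 % 2 == 0:  # no writer was active, value is consistent
                      return entry

          def write_cuckoo_path(self, path, new_entries):
              # Acquire access rights to every bucket on the full cuckoo path
              # before changing anything, so readers never observe a partial move.
              with self.writer_lock:
                  touched = [self.buckets[i] for i, _ in path]
                  for b in touched:
                      b.version += 1            # mark busy (odd)
                  for (i, slot), entry in zip(path, new_entries):
                      self.buckets[i].entries[slot] = entry
                  for b in touched:
                      b.version += 1            # mark stable again (even)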
  • Publication number: 20190056232
    Abstract: Technologies for providing information to a user while traveling include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data. Other embodiments are described and claimed.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Inventors: Ren Wang, Zhonghong Ou, Arvind Kumar, Kristoffer Fleming, Tsung-Yuan C. Tai, Timothy J. Gresham, John C. Weast, Corey Kukis
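    Illustrative sketch: a toy route scoring pass along the lines of the network-condition-aware route determination in publication 20190056232 above (granted as patent 10145694 below). Scoring a route by its worst segment, and the segment names and throughput figures, are assumptions for illustration only.
      # Crowdsourced network condition (average Mbps) per route segment; values are made up.
      segment_throughput = {"A": 50.0, "B": 5.0, "C": 30.0, "D": 25.0}

      candidate_routes = {
          "route-1": ["A", "B"],    # each candidate route is a list of segments
          "route-2": ["C", "D"],
      }

      def route_score(segments):
          # Score a route by its worst segment so a single dead zone dominates.
          return min(segment_throughput.get(s, 0.0) for s in segments)

      best = max(candidate_routes, key=lambda r: route_score(candidate_routes[r]))
      print(best, route_score(candidate_routes[best]))   # -> route-2 25.0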
  • Publication number: 20190052719
    Abstract: Technologies for flow rule aware exact match cache compression include multiple computing devices in communication over a network. A computing device reads a network packet from a network port and extracts one or more key fields from the packet to generate a lookup key. The key fields are identified by a key field specification of an exact match flow cache. The computing device may dynamically configure the key field specification based on an active flow rule set. The computing device may compress the key field specification to match a union of non-wildcard fields of the active flow rule set. The computing device may expand the key field specification in response to insertion of a new flow rule. The computing device looks up the lookup key in the exact match flow cache and, if a match is found, applies the corresponding action. Other embodiments are described and claimed.
    Type: Application
    Filed: January 4, 2018
    Publication date: February 14, 2019
    Inventors: Yipeng Wang, Ren Wang, Antonio Fischetti, Sameh Gobriel, Tsung-Yuan C. Tai
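    Illustrative sketch: a minimal version of the key field specification compression and expansion described in publication 20190052719 above, assuming the compressed specification is simply the union of non-wildcard fields across the active rules and that the cache is flushed when the specification changes. Field names and rules are made up.
      ALL_FIELDS = ("in_port", "eth_type", "ip_src", "ip_dst", "l4_dst")

      # Active flow rules: field -> required value; fields not listed are wildcards.
      active_rules = [
          {"ip_dst": "10.0.0.1", "l4_dst": 80},
          {"ip_dst": "10.0.0.2"},
      ]

      def compress_key_spec(rules):
          # Compressed spec = union of the non-wildcard fields of the active rule set.
          used = set()
          for rule in rules:
              used.update(rule)
          return tuple(f for f in ALL_FIELDS if f in used)

      def lookup_key(packet, key_spec):
          # Extract only the fields named by the key field specification.
          return tuple(packet[f] for f in key_spec)

      key_spec = compress_key_spec(active_rules)        # ('ip_dst', 'l4_dst')
      exact_match_cache = {}                            # lookup key -> cached action

      def on_new_rule(rule):
          global key_spec, exact_match_cache
          active_rules.append(rule)
          new_spec = compress_key_spec(active_rules)
          if new_spec != key_spec:                      # expand the spec, invalidate cache
              key_spec, exact_match_cache = new_spec, {}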
  • Publication number: 20190042388
    Abstract: There is disclosed in one example a computing apparatus, including: a processor; a multilevel cache including a plurality of cache levels; a peripheral device configured to write data directly to a directly writable cache; and a cache monitoring circuit, including cache counters La to be incremented when a cache line is allocated into the directly writable cache, Lp to be incremented when a cache line is processed by the processor and deallocated from the directly writable cache, and Le to be incremented when a cache line is evicted from the directly writable cache to the memory, wherein the cache monitoring circuit is to determine a direct write policy according to the cache counters.
    Type: Application
    Filed: June 28, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Ren Wang, Bin Li, Andrew J. Herdrich, Tsung-Yuan C. Tai, Ramakrishna Huggahalli
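    Illustrative sketch: a small software model of the La/Lp/Le counters from publication 20190042388 above and one plausible policy built on them; the eviction-ratio rule and its threshold are assumptions, not the disclosed policy.
      class CacheMonitor:
          def __init__(self):
              self.la = 0   # La: lines allocated into the directly writable cache
              self.lp = 0   # Lp: lines processed by the processor and deallocated
              self.le = 0   # Le: lines evicted to memory before being processed

          def on_allocate(self):  self.la += 1
          def on_processed(self): self.lp += 1
          def on_evicted(self):   self.le += 1

          def direct_write_enabled(self, eviction_threshold: float = 0.5) -> bool:
              # If too many device-written lines are evicted unprocessed, direct
              # writes are polluting the cache; fall back to writing memory instead.
              if self.la == 0:
                  return True
              return (self.le / self.la) < eviction_threshold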
  • Publication number: 20190012200
    Abstract: A computing platform, including: an execution unit to execute a program, the program including a first phase and a second phase; and a quick response module (QRM) to: receive a program phase signature for the first phase; store the program phase signature in a pattern match action (PMA) table; identify entry of the program into the first phase via the PMA table; and apply an optimization to the computing platform.
    Type: Application
    Filed: July 10, 2017
    Publication date: January 10, 2019
    Applicant: INTEL CORPORATION
    Inventors: Christopher B. Wilkerson, Karl I. Taht, Ren Wang, James J. Greensky, Tsung-Yuan C. Tai
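    Illustrative sketch: a dictionary standing in for the pattern match action (PMA) table of publication 20190012200 above; the signature value and the optimization name are hypothetical.
      pma_table = {}   # program phase signature -> optimization to apply

      def register_phase(signature: int, optimization: str) -> None:
          # The QRM stores the signature of a known phase with its tuning action.
          pma_table[signature] = optimization

      def on_phase_entry(observed_signature: int):
          # On entry into a phase whose signature matches a PMA entry,
          # apply the recorded optimization to the platform.
          return pma_table.get(observed_signature)

      register_phase(0xA5A5, "raise-uncore-frequency")
      print(on_phase_entry(0xA5A5))   # -> raise-uncore-frequency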
  • Publication number: 20190007349
    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies the predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Ren Wang, Mia Primorac, Tsung-Yuan C. Tai, Saikrishna Edupuganti, John J. Browne
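    Illustrative sketch: one possible batch-size adjustment loop for publication 20190007349 above. Growing the batch when throughput falls below a target (and shrinking it otherwise), and all thresholds shown, are assumptions; the abstract only requires that the batch size change when a predefined throughput condition is met.
      def adjust_batch_size(batch_size, throughput_mpps, target_mpps,
                            min_batch=1, max_batch=256):
          # Batch size = number of queued packets required before processing starts.
          if throughput_mpps < target_mpps:
              return min(batch_size * 2, max_batch)   # amortize per-batch overhead
          return max(batch_size // 2, min_batch)      # otherwise favor latency

      queue, batch_size = [], 32

      def on_packet(packet, process_batch):
          global batch_size
          queue.append(packet)
          if len(queue) >= batch_size:                # wait until a full batch is queued
              measured_mpps = process_batch(queue)    # assumed to report measured throughput
              batch_size = adjust_batch_size(batch_size, measured_mpps, target_mpps=10.0)
              queue.clear()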
  • Publication number: 20180373633
    Abstract: Method and apparatus for per-agent control and quality of service of shared resources in a chip multiprocessor platform is described herein. One embodiment of a system includes: a plurality of core and non-core requestors of shared resources, the shared resources to be provided by one or more resource providers, each of the plurality of core and non-core requestors to be associated with a resource-monitoring tag and a resource-control tag; a mapping table to store the resource monitoring and control tags associated with each non-core requestor; and a tagging circuitry to receive a resource request sent from a non-core requestor to a resource provider, the tagging circuitry to responsively modify the resource request to include the resource-monitoring and resource-control tags associated with the non-core requestor in accordance with the mapping table and send the modified resource request to the resource provider.
    Type: Application
    Filed: June 27, 2017
    Publication date: December 27, 2018
    Inventors: Andrew J. Herdrich, Edwin Verplanke, Stephen R. Van Doren, Ravishankar Iyer, Eric R. Wehage, Rupin H. Vakharwala, Rajesh M. Sankaran, Jeffrey D. Chamberlain, Julius Mandelblat, Yen-Cheng Liu, Stephen T. Palermo, Tsung-Yuan C. Tai
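    Illustrative sketch: a mapping-table lookup in the spirit of publication 20180373633 above, using RDT-style RMID/CLOS labels as assumed names for the resource-monitoring and resource-control tags; the requestor names and tag values are made up.
      # Mapping table: non-core requestor -> (resource-monitoring tag, resource-control tag).
      mapping_table = {
          "nic-dma": {"rmid": 3, "clos": 1},
          "gpu":     {"rmid": 4, "clos": 2},
      }

      def tag_request(requestor: str, request: dict) -> dict:
          # The tagging logic annotates the request with the requestor's tags before
          # forwarding it to the resource provider (for example, a shared cache).
          tags = mapping_table[requestor]
          tagged = dict(request)
          tagged.update(rmid=tags["rmid"], clos=tags["clos"])
          return tagged

      print(tag_request("nic-dma", {"op": "read", "addr": 0x1000}))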
  • Publication number: 20180373632
    Abstract: An apparatus and method are described for a triggered prefetch operation. For example, one embodiment of a processor comprises: a first core comprising a first cache to store a first set of cache lines; a second core comprising a second cache to store a second set of cache lines; a cache management circuit to maintain coherency between one or more cache lines in the first cache and the second cache, the cache management circuit to allocate a lock on a first cache line to the first cache; a prefetch circuit comprising a prefetch request buffer to store a plurality of prefetch request entries including a first prefetch request entry associated with the first cache line, the prefetch circuit to cause the first cache line to be prefetched to the second cache in response to an invalidate command detected for the first cache line.
    Type: Application
    Filed: August 6, 2018
    Publication date: December 27, 2018
    Inventors: Christopher WILKERSON, Ren WANG, Antoine KAUFMANN, Anil VASUDEVAN, Robert G. BLANKENSHIP, Venkata KRISHNAN, Tsung-Yuan C. TAI
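    Illustrative sketch: a minimal event-driven model of the prefetch-on-invalidate idea in publication 20180373632 above (and the corresponding grant 10073775 below); the buffer layout and the callback interface are assumptions.
      prefetch_request_buffer = {}   # cache line address -> consumer core that requested it

      def arm_prefetch(line_addr: int, consumer_core: int) -> None:
          # The consumer registers a prefetch request entry for a line it expects
          # the producer to write.
          prefetch_request_buffer[line_addr] = consumer_core

      def on_invalidate(line_addr: int, push_line) -> None:
          # An invalidate command means the producer took the line for writing;
          # trigger a prefetch of that line into the registered consumer's cache.
          consumer = prefetch_request_buffer.get(line_addr)
          if consumer is not None:
              push_line(line_addr, consumer)

      arm_prefetch(0x80, consumer_core=2)
      on_invalidate(0x80, push_line=lambda addr, core: print(f"prefetch {addr:#x} -> core {core}"))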
  • Publication number: 20180375773
    Abstract: Technologies for efficient network flow classification include a computing device that receives a network packet that includes a header. The computing device generates a vector Bloom filter (VBF) key as a function of the header and searches multiple VBFs for a VBF that matches the VBF key. Each VBF is associated with a flow sub-table that includes one or more flow rules. Each flow sub-table is associated with a mask length. If a matching VBF is found, the computing device searches the corresponding flow sub-table for a flow rule that matches a masked header of the network packet. If no matching VBF is found or if no matching flow rule is found, the computing device searches all of the flow sub-tables for a flow rule that matches the header. The computing device applies a flow action of a matching flow rule. Other embodiments are described and claimed.
    Type: Application
    Filed: June 26, 2017
    Publication date: December 27, 2018
    Inventors: Sameh Gobriel, Wei Shen, Tsung-Yuan C. Tai, Ren Wang
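    Illustrative sketch: a simplified Bloom-filter-guided sub-table search in the spirit of publication 20180375773 above. The sketch collapses the vector Bloom filter to one ordinary Bloom filter per sub-table and masks headers by byte prefix; the filter sizing, rules, and addresses are all made up.
      import hashlib

      class BloomFilter:
          def __init__(self, bits: int = 256, hashes: int = 3):
              self.bits, self.hashes, self.array = bits, hashes, 0

          def _positions(self, key: bytes):
              for i in range(self.hashes):
                  digest = hashlib.sha256(bytes([i]) + key).digest()
                  yield int.from_bytes(digest[:4], "big") % self.bits

          def add(self, key: bytes):
              for p in self._positions(key):
                  self.array |= 1 << p

          def may_contain(self, key: bytes) -> bool:
              return all(self.array & (1 << p) for p in self._positions(key))

      # Each flow sub-table holds rules of one mask length; its filter is built
      # from the masked headers of those rules.
      sub_tables = [
          {"mask_len": 24, "vbf": BloomFilter(), "rules": {bytes([10, 0, 0]): "fwd-1"}},
          {"mask_len": 16, "vbf": BloomFilter(), "rules": {bytes([10, 1]): "fwd-2"}},
      ]
      for st in sub_tables:
          for masked in st["rules"]:
              st["vbf"].add(masked)

      def classify(header: bytes):
          for st in sub_tables:                            # filter-guided search first
              masked = header[: st["mask_len"] // 8]       # toy masking by prefix bytes
              if st["vbf"].may_contain(masked) and masked in st["rules"]:
                  return st["rules"][masked]
          for st in sub_tables:                            # fallback: search every sub-table
              masked = header[: st["mask_len"] // 8]
              if masked in st["rules"]:
                  return st["rules"][masked]
          return None

      print(classify(bytes([10, 0, 0, 5])))   # -> fwd-1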
  • Patent number: 10145694
    Abstract: Technologies for providing information to a user while traveling include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: December 4, 2018
    Assignee: Intel Corporation
    Inventors: Ren Wang, Zhonghong Ou, Arvind Kumar, Kristoffer Fleming, Tsung-Yuan C. Tai, Timothy J. Gresham, John C. Weast, Corey Kukis
  • Patent number: 10133336
    Abstract: Systems and methods may provide for identifying runtime information associated with an active workload of a platform, and making an active idle state determination for the platform based at least in part on the runtime information. In addition, a low power state of a shared resource on the platform may be controlled concurrently with an execution of the active workload based at least in part on the active idle state determination.
    Type: Grant
    Filed: November 27, 2012
    Date of Patent: November 20, 2018
    Assignee: Intel Corporation
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Jr-Shian Tsai, Bruce L. Fleming, Rajeev D. Muralidhar, Mesut A. Ergin, Prakash N. Iyer, Harinarayanan Seshadri
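    Illustrative sketch: a trivial decision helper in the spirit of the active idle determination in patent 10133336 above, assuming the runtime information exposes per-resource utilization and that a fixed threshold marks a shared resource as idle enough for a low power state while the workload keeps running.
      def active_idle_decision(runtime_info, util_threshold=0.10):
          # runtime_info: shared resource name -> utilization (0.0 to 1.0) under the active workload.
          return {resource: util < util_threshold for resource, util in runtime_info.items()}

      decision = active_idle_decision({"memory_bus": 0.04, "gpu": 0.62, "audio_dsp": 0.0})
      for resource, can_sleep in decision.items():
          if can_sleep:
              print(f"move {resource} to a low power state while the workload keeps running")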
  • Publication number: 20180285151
    Abstract: A network interface card (NIC) can be configured to monitor a first central processing unit (CPU) core mapped to a first receive queue having a receive queue length. The NIC can also be configured to determine whether the first CPU core is overloaded based on the receive queue length. The NIC can also be configured to redirect data packets that were targeted to the first CPU core through the first receive queue to another CPU core, responsive to a determination that the first CPU core is overloaded.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Ren Wang, Daniel P. Daly, Antoine Kaufmann, Saikrishna Edupuganti, Tsung-Yuan C. Tai
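    Illustrative sketch: a queue-length-based redirect along the lines of publication 20180285151 above; the overload threshold, the least-loaded fallback choice, and the queue-to-core mapping are assumptions.
      QUEUE_LENGTH_THRESHOLD = 512     # illustrative overload threshold

      receive_queues = {0: [], 1: []}  # receive queue id -> queued packets
      queue_to_core = {0: 0, 1: 1}     # each receive queue is mapped to one CPU core

      def core_overloaded(queue_id: int) -> bool:
          # The NIC infers core overload from the length of that core's receive queue.
          return len(receive_queues[queue_id]) > QUEUE_LENGTH_THRESHOLD

      def steer(packet, target_queue: int) -> int:
          if core_overloaded(target_queue):
              # Redirect to the least-loaded queue, and hence to another CPU core.
              target_queue = min(receive_queues, key=lambda q: len(receive_queues[q]))
          receive_queues[target_queue].append(packet)
          return queue_to_core[target_queue]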
  • Patent number: 10091063
    Abstract: Technologies to monitor and manage platform, device, processor, and power characteristics throughout a system utilizing a remote entity such as a controller node. By remotely monitoring and managing system operation and performance over time, future system performance requirements may be anticipated, allowing system parameters to be adjusted proactively in a more coordinated way. The controller node may monitor, control, and predict traffic flows in the system and provide performance modification instructions to any of the computer nodes and a network switch to better optimize performance. The target systems collaborate with the controller node by respectively monitoring internal resources, such as resource availability and performance requirements, to provide the resources needed to optimize the operating parameters of the system.
    Type: Grant
    Filed: December 27, 2014
    Date of Patent: October 2, 2018
    Assignee: INTEL CORPORATION
    Inventors: Alexander W. Min, Ira Weiny, Patrick Connor, Jr-Shian Tsai, Tsung-Yuan C. Tai, Brian J. Skerry, Jr., Iosif Gasparakis, Steven R. Carbonari, Daniel J. Dahle, Thomas M. Slaight, Nrupal R. Jani
  • Patent number: 10073775
    Abstract: An apparatus and method are described for a triggered prefetch operation. For example, one embodiment of a processor comprises: a first core comprising a first cache to store a first set of cache lines; a second core comprising a second cache to store a second set of cache lines; a cache management circuit to maintain coherency between one or more cache lines in the first cache and the second cache, the cache management circuit to allocate a lock on a first cache line to the first cache; a prefetch circuit comprising a prefetch request buffer to store a plurality of prefetch request entries including a first prefetch request entry associated with the first cache line, the prefetch circuit to cause the first cache line to be prefetched to the second cache in response to an invalidate command detected for the first cache line.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: September 11, 2018
    Assignee: Intel Corporation
    Inventors: Christopher B. Wilkerson, Ren Wang, Antoine Kaufmann, Anil Vasudevan, Robert G. Blankenship, Venkata Krishnan, Tsung-Yuan C. Tai
  • Publication number: 20180225136
    Abstract: In the present disclosure, functions associated with the central office of an evolved packet core network are co-located onto a computer platform or sub-components through virtualized function instances. This reduces and/or eliminates the physical interfaces between equipment and permits functional operation of the evolved packet core to occur at a network edge.
    Type: Application
    Filed: November 14, 2017
    Publication date: August 9, 2018
    Applicant: INTEL CORPORATION
    Inventors: Ashok Sunder Rajan, Richard A. Uhlig, Rajendra S. Yavatkar, Tsung-Yuan C. Tai, Christian Maciocco, Jeffrey R. Jackson, Daniel J. Dahle
  • Publication number: 20180205653
    Abstract: Apparatus, methods, and systems for tuple space search-based flow classification using cuckoo hash tables and unmasked packet headers are described herein. A device can communicate with one or more hardware switches. The device can include memory to store hash table entries of a hash table. The device can include processing circuitry to perform a hash lookup in the hash table. The lookup can be based on an unmasked key included in a packet header corresponding to a received data packet. The processing circuitry can retrieve an index pointing to a sub-table, the sub-table including a set of rules for handling the data packet. Other embodiments are also described.
    Type: Application
    Filed: June 29, 2017
    Publication date: July 19, 2018
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Yipeng Wang, Sameh Gobriel
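    Illustrative sketch: a two-choice cuckoo-style lookup on an unmasked key that returns the index of a rule sub-table, roughly as publication 20180205653 above describes; the hash functions, table sizes, and the absence of displacement on insert are simplifications and assumptions.
      import zlib

      NUM_BUCKETS, SLOTS = 8, 2

      def h1(key: bytes) -> int:
          return zlib.crc32(key) % NUM_BUCKETS

      def h2(key: bytes) -> int:
          return zlib.crc32(b"alt" + key) % NUM_BUCKETS    # second candidate bucket

      # Each bucket slot holds (unmasked key, index of the sub-table holding its rules).
      table = [[None] * SLOTS for _ in range(NUM_BUCKETS)]
      sub_tables = [{"rules": ["drop broadcast"]}, {"rules": ["forward to port 2"]}]

      def insert(key: bytes, sub_table_index: int) -> bool:
          for bucket in (h1(key), h2(key)):
              for slot in range(SLOTS):
                  if table[bucket][slot] is None:
                      table[bucket][slot] = (key, sub_table_index)
                      return True
          return False            # a full implementation would cuckoo-displace an entry

      def lookup(unmasked_key: bytes):
          for bucket in (h1(unmasked_key), h2(unmasked_key)):
              for entry in table[bucket]:
                  if entry and entry[0] == unmasked_key:
                      return sub_tables[entry[1]]          # rules for handling the packet
          return None

      insert(b"\x0a\x00\x00\x01\x00\x50", 1)
      print(lookup(b"\x0a\x00\x00\x01\x00\x50"))   # -> {'rules': ['forward to port 2']}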