Patents by Inventor Yipeng Wang

Yipeng Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12293231
    Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part, on a packet receive rate of the packets associated with the heavy flow.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: May 6, 2025
    Assignee: Intel Corporation
    Inventors: Chenmin Sun, Yipeng Wang, Rahul R. Shah, Ren Wang, Sameh Gobriel, Hongjun Ni, Mrittika Ganguli, Edwin Verplanke
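As a rough illustration of the mechanism this abstract describes, the sketch below models one stage that counts packets per flow to detect heavy flows, and a second stage whose workers receive the heavy-flow packets. All names, the threshold value, and the least-loaded-queue policy are invented for the sketch; the patent's load balancer considers packet receive rate.

```python
from collections import defaultdict

HEAVY_THRESHOLD = 100  # packets; illustrative value, not from the patent

class HeavyFlowBalancer:
    """Toy model: a detection stage counts flows, and packets of flows
    classified as heavy are queued toward a second set of workers."""

    def __init__(self, num_workers):
        self.counts = defaultdict(int)
        self.queues = [[] for _ in range(num_workers)]

    def on_packet(self, flow_id, packet):
        self.counts[flow_id] += 1
        if self.counts[flow_id] > HEAVY_THRESHOLD:
            # Stand-in for a rate-aware load balancer: pick the
            # least-loaded worker queue for this heavy-flow packet.
            q = min(self.queues, key=len)
            q.append((flow_id, packet))
            return "heavy"
        return "normal"
```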
  • Patent number: 12210434
    Abstract: An apparatus and method for closed loop dynamic resource allocation.
    Type: Grant
    Filed: June 27, 2020
    Date of Patent: January 28, 2025
    Assignee: Intel Corporation
    Inventors: Bin Li, Ren Wang, Kshitij Arun Doshi, Francesc Guim Bernat, Yipeng Wang, Ravishankar Iyer, Andrew Herdrich, Tsung-Yuan Tai, Zhu Zhou, Rasika Subramanian
  • Patent number: 12197601
    Abstract: Examples described herein relate to offload circuitry comprising one or more compute engines that are configurable to perform a workload offloaded from a process executed by a processor based on a descriptor particular to the workload. In some examples, the offload circuitry is configurable to perform the workload, among multiple different workloads. In some examples, the multiple different workloads include one or more of: data transformation (DT) for data format conversion, Locality Sensitive Hashing (LSH) for neural network (NN), similarity search, sparse general matrix-matrix multiplication (SpGEMM) acceleration of hash based sparse matrix multiplication, data encode, data decode, or embedding lookup.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: January 14, 2025
    Assignee: Intel Corporation
    Inventors: Ren Wang, Sameh Gobriel, Somnath Paul, Yipeng Wang, Priya Autee, Abhirupa Layek, Shaman Narayana, Edwin Verplanke, Mrittika Ganguli, Jr-Shian Tsai, Anton Sorokin, Suvadeep Banerjee, Abhijit Davare, Desmond Kirkpatrick, Rajesh M. Sankaran, Jaykant B. Timbadiya, Sriram Kabisthalam Muthukumar, Narayan Ranganathan, Nalini Murari, Brinda Ganesh, Nilesh Jain
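A minimal sketch of the descriptor-driven dispatch the abstract describes: a descriptor names one of several workloads and carries its arguments, and the offload selects the matching engine. The operation names, table contents, and functions below are all hypothetical stand-ins.

```python
# Hypothetical descriptor-driven dispatch among multiple workloads.

def data_transform(payload):
    # Stand-in for data format conversion.
    return payload.upper()

def embedding_lookup(payload):
    # Toy embedding table standing in for an embedding-lookup engine.
    table = {"cat": [0.1, 0.2], "dog": [0.3, 0.4]}
    return table[payload]

DISPATCH = {
    "dt": data_transform,
    "embed": embedding_lookup,
}

def offload(descriptor):
    """Run the compute engine named by the descriptor's 'op' field."""
    op = DISPATCH[descriptor["op"]]
    return op(descriptor["payload"])
```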
  • Patent number: 12186400
    Abstract: The present disclosure discloses a micromolecular compound specifically degrading tau protein, and an application thereof. The chemical structure of the micromolecular compound specifically degrading tau protein is TBM-L-ULM or a pharmaceutically acceptable salt, enantiomer, stereoisomer, solvate, polymorph or N-oxide thereof, TBM being a tau protein-binding moiety, L being a linking group, and ULM being a ubiquitin ligase-binding moiety, the tau protein-binding moiety and the ubiquitin ligase-binding moiety being connected by means of the linking group. The micromolecular compound specifically degrading tau protein may increase tau protein degradation in a cell, thereby decreasing tau protein content.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: January 7, 2025
    Inventor: Yipeng Wang
  • Publication number: 20240220262
    Abstract: Techniques for conditional test or comparison using a single instruction are described. An example instruction includes a prefix, one or more fields to identify a first source operand location, one or more fields to identify a second source operand location, and an opcode to indicate execution circuitry is to conditionally perform a comparison of data from the identified first source operand to the identified second source operand based at least in part on an evaluation of a source condition code and update a flags register, wherein a payload of the prefix is to provide most significant bits to identify at least one of the first and second source operand locations.
    Type: Application
    Filed: December 30, 2022
    Publication date: July 4, 2024
    Inventors: Jason AGRON, Ching-Tsun CHOU, Sebastian WINKEL, Tyler SONDAG, David SHEFFIELD, Leela Kamalesh YADLAPALLI, Yipeng WANG
  • Publication number: 20240220260
    Abstract: Techniques for accessing 32 general purpose registers, suppressing flags, and/or using a new data destination for an instance of a single instruction are described. An example single instruction at least includes a prefix and an opcode to indicate execution circuitry is to perform a particular operation, wherein the prefix comprises at least two bytes and the second of the two bytes is to provide most significant bits for at least one register identifier.
    Type: Application
    Filed: December 30, 2022
    Publication date: July 4, 2024
    Inventors: Jason AGRON, Ching-Tsun CHOU, Sebastian WINKEL, Tyler SONDAG, David SHEFFIELD, Leela Kamalesh YADLAPALLI, Yipeng WANG, Jonathan COMBS, Jeff WIEDEMEIER
  • Publication number: 20240220261
    Abstract: Techniques for conditional move operations using a single instruction are described. An example instruction at least includes a prefix, one or more fields to identify a first source operand location, one or more fields to identify a destination operand location, and an opcode to indicate execution circuitry is to conditionally move data from the identified first source operand to the identified destination operand based at least in part on evaluation of a condition code, wherein a payload of the prefix is to provide most significant bits to identify at least one of the first source and destination operand locations.
    Type: Application
    Filed: December 30, 2022
    Publication date: July 4, 2024
    Inventors: Jason AGRON, Ching-Tsun CHOU, Sebastian WINKEL, Tyler SONDAG, David SHEFFIELD, Leela Kamalesh YADLAPALLI, Yipeng WANG
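To make the condition-code-gated move concrete, here is a toy emulation: the move takes effect only when the flags satisfy the instruction's condition code. The flag names and condition mnemonics below follow common x86-style conventions for illustration only; they are not the architectural encoding from the application.

```python
# Illustrative condition codes evaluated over a flags dictionary.
CONDITIONS = {
    "e":  lambda f: f["ZF"],            # equal / zero
    "ne": lambda f: not f["ZF"],        # not equal
    "l":  lambda f: f["SF"] != f["OF"], # signed less-than
}

def cmov(cond, flags, dest, src):
    """Return the new destination value: src if the condition holds,
    otherwise the destination is left unchanged."""
    return src if CONDITIONS[cond](flags) else dest
```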
  • Publication number: 20240220257
    Abstract: Techniques for push or pop operations using a single instruction are described. An example instruction at least includes a prefix, one or more fields to identify a first source operand location, one or more fields to identify a second source operand location, and an opcode to indicate execution circuitry is to push data from the identified first source operand and the identified second source operand onto a stack, wherein a payload of the prefix is to provide most significant bits to identify at least one of the first and second source operand locations.
    Type: Application
    Filed: December 30, 2022
    Publication date: July 4, 2024
    Inventors: Jason AGRON, Ching-Tsun CHOU, Sebastian WINKEL, Tyler SONDAG, David SHEFFIELD, Leela Kamalesh YADLAPALLI, Yipeng WANG
  • Publication number: 20240212703
    Abstract: A method of processing audio data, which relates to the field of speech synthesis technology. The method includes: decomposing original audio data to obtain voice audio data and background audio data; performing electroacoustic processing on the voice audio data to obtain electroacoustic voice data; and combining the electroacoustic voice data and the background audio data to obtain target audio data. An electronic device and a storage medium are further provided.
    Type: Application
    Filed: March 22, 2022
    Publication date: June 27, 2024
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Yipeng WANG, Yunfeng LIU
  • Publication number: 20240038251
    Abstract: An audio data processing method is provided. The method includes: obtaining human voice audio data to be adjusted and reference human voice audio data; performing framing on the human voice audio data to be adjusted and the reference human voice audio data respectively so as to obtain a first audio frame set and a second audio frame set respectively; recognizing a pronunciation unit corresponding to each audio frame respectively; determining, based on a timestamp of each audio frame, a timestamp of each pronunciation unit in the human voice audio data to be adjusted and the reference human voice audio data respectively; and adjusting the timestamp of at least one pronunciation unit to make the timestamp of the pronunciation unit in the human voice audio data to be adjusted to be consistent with the timestamp of the corresponding pronunciation unit in the reference human voice audio data.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventor: Yipeng WANG
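The alignment step this abstract describes can be sketched as shifting each pronunciation unit of the audio being adjusted so that it lines up with the corresponding unit in the reference. The tuple layout and integer-millisecond timestamps below are assumptions for illustration.

```python
# Units are (label, start_ms, end_ms) tuples; assumed one-to-one
# correspondence between the adjusted and reference sequences.

def align_units(to_adjust, reference):
    """Shift each unit so its start matches the reference unit's start,
    preserving the unit's duration."""
    aligned = []
    for (label, start, end), (_, ref_start, _) in zip(to_adjust, reference):
        shift = ref_start - start
        aligned.append((label, start + shift, end + shift))
    return aligned
```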
  • Patent number: 11824534
    Abstract: A transmit driver architecture with a test mode (e.g., a JTAG configuration mode), extended equalization range, and/or multiple power supply domains. One example transmit driver circuit generally includes one or more driver unit cells having a differential input node pair configured to receive an input data signal and having a differential output node pair configured to output an output data signal; a plurality of power switches coupled between the differential output node pair and one or more power supply rails; a first set of one or more drivers coupled between a first test node of a differential test data path and a first output node of the differential output node pair; and a second set of one or more drivers coupled between a second test node of the differential test data path and a second output node of the differential output node pair.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: November 21, 2023
    Assignee: XILINX, INC.
    Inventors: Nakul Narang, Siok Wei Lim, Luhui Chen, Yipeng Wang, Kee Hian Tan
  • Patent number: 11811660
    Abstract: Apparatus, methods, and systems for tuple space search-based flow classification using cuckoo hash tables and unmasked packet headers are described herein. A device can communicate with one or more hardware switches. The device can include memory to store hash table entries of a hash table. The device can include processing circuitry to perform a hash lookup in the hash table. The lookup can be based on an unmasked key included in a packet header corresponding to a received data packet. The processing circuitry can retrieve an index pointing to a sub-table, the sub-table including a set of rules for handling the data packet. Other embodiments are also described.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: November 7, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Yipeng Wang, Sameh Gobriel
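A minimal cuckoo-style table helps make the lookup path concrete: each key has two candidate buckets, a lookup probes both, and the stored value stands in for the index of the rule sub-table. The table size, hash choices, and eviction bound below are illustrative, not the patent's parameters.

```python
class CuckooTable:
    """Toy cuckoo hash: two candidate buckets per key, insertion by
    eviction ("kicking") when both candidates are occupied."""

    def __init__(self, size=8):
        self.size = size
        self.slots = [None] * size  # each entry is a (key, value) pair

    def _buckets(self, key):
        h1 = hash(key) % self.size
        h2 = hash((key, "alt")) % self.size  # illustrative second hash
        return h1, h2

    def insert(self, key, value, max_kicks=16):
        for _ in range(max_kicks):
            for b in self._buckets(key):
                if self.slots[b] is None or self.slots[b][0] == key:
                    self.slots[b] = (key, value)
                    return True
            # Both candidates full: evict one occupant and re-place it.
            b = self._buckets(key)[0]
            (key, value), self.slots[b] = self.slots[b], (key, value)
        return False  # table too full

    def lookup(self, key):
        """Probe both candidate buckets; return the stored value
        (e.g. a sub-table index) or None on a miss."""
        for b in self._buckets(key):
            entry = self.slots[b]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None
```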
  • Patent number: 11709774
    Abstract: Examples described herein relate to a network interface apparatus that includes packet processing circuitry and a bus interface. In some examples, the packet processing circuitry is to: process a received packet that includes data, a request to perform a write operation to write the data to a cache, and an indicator that the data is to be durable and, based at least on the received packet including the request and the indicator, cause the data to be written to the cache and non-volatile memory. In some examples, the packet processing circuitry is to issue a command to an input output (IO) controller to cause the IO controller to write the data to the cache and the non-volatile memory.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: July 25, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yifan Yuan, Yipeng Wang, Tsung-Yuan C. Tai, Tony Hurson
  • Patent number: 11698929
    Abstract: A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and progress tracking. The offload engine can be associated with a last level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare signatures against the generated signature, retrieve a key associated with the signature, and apply the comparator to compare the key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the result queue. Acceleration of operations in tree traversal and tuple search can also occur.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: July 11, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Andrew J. Herdrich, Tsung-Yuan C. Tai, Yipeng Wang, Raghu Kondapalli, Alexander Bachmutsky, Yifan Yuan
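The two-stage match the abstract walks through can be sketched in software: compare a short signature first and, only on a signature hit, compare the full key before returning the stored data pointer. The signature width and entry layout below are assumptions for the sketch.

```python
def make_signature(key):
    # Short hash acting as the signature; 8 bits chosen arbitrarily.
    return hash(key) & 0xFF

class LookupEngine:
    """Toy model of signature-then-key matching in a lookup offload."""

    def __init__(self):
        self.entries = []  # (signature, key, data_pointer) triples

    def insert(self, key, data_pointer):
        self.entries.append((make_signature(key), key, data_pointer))

    def lookup(self, key):
        sig = make_signature(key)
        for entry_sig, entry_key, ptr in self.entries:
            # Cheap signature compare first, full key compare second.
            if entry_sig == sig and entry_key == key:
                return ptr
        return None
```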
  • Publication number: 20230155591
    Abstract: A transmit driver architecture with a test mode (e.g., a JTAG configuration mode), extended equalization range, and/or multiple power supply domains. One example transmit driver circuit generally includes one or more driver unit cells having a differential input node pair configured to receive an input data signal and having a differential output node pair configured to output an output data signal; a plurality of power switches coupled between the differential output node pair and one or more power supply rails; a first set of one or more drivers coupled between a first test node of a differential test data path and a first output node of the differential output node pair; and a second set of one or more drivers coupled between a second test node of the differential test data path and a second output node of the differential output node pair.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 18, 2023
    Inventors: Nakul NARANG, Siok Wei LIM, Luhui CHEN, Yipeng WANG, Kee Hian TAN
  • Publication number: 20230082780
    Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part, on a packet receive rate of the packets associated with the heavy flow.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Inventors: Chenmin SUN, Yipeng WANG, Rahul R. SHAH, Ren WANG, Sameh GOBRIEL, Hongjun NI, Mrittika GANGULI, Edwin VERPLANKE
  • Patent number: 11601531
    Abstract: One embodiment provides a network system. The network system includes an application layer to execute one or more networking applications to generate or receive data packets having flow identification (ID) information; and a packet processing layer having profiling circuitry to generate a sketch table indicative of packet flow count data; the sketch table having a plurality of buckets, each bucket includes a first section including a plurality of data fields, each data field of the first section to store flow ID and packet count data, each bucket also having a second section having a plurality of data fields, each data field of the second section to store packet count data.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: March 7, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Tsung-Yuan Tai
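The two-section bucket the abstract describes can be approximated in a few lines: a small number of slots per bucket track (flow ID, packet count) pairs exactly, and flows that do not fit fall back to count-only fields in the same bucket. Bucket count, slot count, and the fallback policy below are invented for the sketch.

```python
NUM_BUCKETS = 4       # illustrative sizes, not the patent's
SLOTS_PER_BUCKET = 2

class SketchTable:
    """Toy sketch table: per-bucket exact (flow ID, count) slots plus
    a count-only overflow field for flows beyond the slot budget."""

    def __init__(self):
        self.buckets = [
            {"tracked": {}, "overflow": 0} for _ in range(NUM_BUCKETS)
        ]

    def count(self, flow_id):
        b = self.buckets[hash(flow_id) % NUM_BUCKETS]
        if flow_id in b["tracked"]:
            b["tracked"][flow_id] += 1
        elif len(b["tracked"]) < SLOTS_PER_BUCKET:
            b["tracked"][flow_id] = 1
        else:
            b["overflow"] += 1  # packet counted, flow ID not kept
```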
  • Patent number: 11500825
    Abstract: Techniques and apparatus for dynamic data access mode processes are described. In one embodiment, for example, an apparatus may include a processor and at least one memory coupled to the processor, the at least one memory comprising an indication of a database and instructions that, when executed by the processor, cause the processor to determine a database utilization value for the database, perform a comparison of the database utilization value to at least one utilization threshold, and set an active data access mode to one of a low-utilization data access mode or a high-utilization data access mode based on the comparison. Other embodiments are described.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: November 15, 2022
    Assignee: INTEL CORPORATION
    Inventors: Ren Wang, Bruce Richardson, Tsung-Yuan Tai, Yipeng Wang, Pablo De Lara Guarch
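The threshold comparison at the core of this abstract reduces to a few lines; the threshold value and mode names below are illustrative.

```python
UTILIZATION_THRESHOLD = 0.75  # illustrative threshold

def select_access_mode(utilization):
    """Set the active data access mode from a measured utilization
    value by comparing it against the threshold."""
    if utilization >= UTILIZATION_THRESHOLD:
        return "high-utilization"
    return "low-utilization"
```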
  • Patent number: 11409506
    Abstract: Examples may include a method of compiling a declarative language program for a virtual switch. The method includes parsing the declarative language program, the program defining a plurality of match-action tables (MATs), translating the plurality of MATs into intermediate code, and parsing a core identifier (ID) assigned to each one of the plurality of MATs. When the core IDs of the plurality of MATs are the same, the method includes connecting intermediate code of the plurality of MATs using function calls, and translating the intermediate code of the plurality of MATs into machine code to be executed by a core identified by the core IDs.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Tsung-Yuan C. Tai, Jr-Shian Tsai, Xiangyang Guo
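The "connect MATs with function calls" step can be sketched as compiling each match-action table to a function and chaining tables that share a core ID by direct calls instead of inter-core queues. The table logic and packet fields below are toy examples.

```python
# Two toy match-action tables compiled as plain functions.

def mat_classify(pkt):
    # Match on protocol number; action: tag the packet class.
    pkt["class"] = "tcp" if pkt.get("proto") == 6 else "other"
    return pkt

def mat_forward(pkt):
    # Match on class; action: choose an output port.
    pkt["port"] = 1 if pkt["class"] == "tcp" else 0
    return pkt

def compile_pipeline(tables):
    """Fuse MATs assigned to the same core: successive tables are
    connected by direct function calls."""
    def run(pkt):
        for table in tables:
            pkt = table(pkt)
        return pkt
    return run
```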
  • Patent number: 11392298
    Abstract: Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Sameh Gobriel, Tsung-Yuan C. Tai
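The window-to-window comparison the abstract describes can be sketched as follows: compare the miss ratios of the previous and current time windows and nudge the insertion ratio accordingly. The step size, bounds, and the increase-on-rising-misses policy are assumptions for the sketch.

```python
def adjust_insertion_ratio(ratio, prev_miss_ratio, curr_miss_ratio,
                           step=0.1):
    """Adjust the cache insertion ratio based on how the miss ratio
    changed between two observation windows."""
    if curr_miss_ratio > prev_miss_ratio:
        # Misses rising: admit more new entries into the cache.
        ratio = min(1.0, ratio + step)
    elif curr_miss_ratio < prev_miss_ratio:
        # Misses falling: current contents are working; insert less.
        ratio = max(0.0, ratio - step)
    return ratio
```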