Patents by Inventor Yipeng Wang

Yipeng Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240159132
    Abstract: An electrically-driven fracturing system, which includes a main power generation device, a first auxiliary power generation device, a switch device, and an electrically-driven fracturing device; the switch device includes a low-voltage switch group and a high-voltage switch group; the electrically-driven fracturing device includes a fracturing motor and a fracturing auxiliary device; a rated generation power of the main power generation device is greater than a rated generation power of the first auxiliary power generation device, a rated output voltage of the main power generation device is greater than a rated output voltage of the first auxiliary power generation device, the high-voltage switch group includes an input end and an output end, and the low-voltage switch group includes an input end and an output end, the input end of the high-voltage switch group is connected to the main power generation device, the output end of the high-voltage switch group is connected to the fracturing motor, the input end
    Type: Application
    Filed: February 16, 2022
    Publication date: May 16, 2024
    Inventors: Jifeng ZHONG, Jihua WANG, Liang LV, Shouzhe LI, Yipeng WU, Xincheng LI
  • Publication number: 20240142583
    Abstract: This application relates to a galvanometer-based laser synchronization controlling method, calibration method and apparatus, and a LiDAR. The galvanometer-based laser synchronization controlling method includes obtaining a fast-axis feedback signal when a galvanometer scans; obtaining a first phase difference between a fast-axis drive signal and the fast-axis feedback signal and obtaining a second phase difference between an emission period of a laser beam and the fast-axis drive signal; and setting a phase for the fast-axis drive signal based on the first phase difference and the second phase difference.
    Type: Application
    Filed: October 9, 2023
    Publication date: May 2, 2024
    Applicant: SUTENG INNOVATION TECHNOLOGY CO., LTD.
    Inventors: Hankui ZHANG, Yipeng LI, Peng WANG, Guanxing PEI, Xiao SHEN, Feng ZHANG
  • Patent number: 11925067
    Abstract: Disclosed are a display panel and a display device. The display panel includes a base substrate including a plurality of sub-pixels, at least one of the plurality of sub-pixels including a pixel circuit; a first conductive layer located on a side, facing away from the base substrate, of a first insulating layer; a second insulating layer located on a side, facing away from the base substrate, of the first conductive layer; a second conductive layer located on a side, facing away from the base substrate, of the second insulating layer; a fourth insulating layer located on a side, facing away from the base substrate, of the second conductive layer; and a third conductive layer located on a side, facing away from the base substrate, of the fourth insulating layer, the third conductive layer including a plurality of data wires arranged at intervals.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: March 5, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Yang Yu, Yipeng Chen, Ling Shi, Jingquan Wang
  • Publication number: 20240038251
    Abstract: An audio data processing method is provided. The method includes: obtaining human voice audio data to be adjusted and reference human voice audio data; performing framing on the human voice audio data to be adjusted and the reference human voice audio data respectively so as to obtain a first audio frame set and a second audio frame set respectively; recognizing a pronunciation unit corresponding to each audio frame respectively; determining, based on a timestamp of each audio frame, a timestamp of each pronunciation unit in the human voice audio data to be adjusted and the reference human voice audio data respectively; and adjusting the timestamp of at least one pronunciation unit to make the timestamp of the pronunciation unit in the human voice audio data to be adjusted consistent with the timestamp of the corresponding pronunciation unit in the reference human voice audio data.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventor: Yipeng WANG
  • Patent number: 11824534
    Abstract: A transmit driver architecture with a test mode (e.g., a JTAG configuration mode), extended equalization range, and/or multiple power supply domains. One example transmit driver circuit generally includes one or more driver unit cells having a differential input node pair configured to receive an input data signal and having a differential output node pair configured to output an output data signal; a plurality of power switches coupled between the differential output node pair and one or more power supply rails; a first set of one or more drivers coupled between a first test node of a differential test data path and a first output node of the differential output node pair; and a second set of one or more drivers coupled between a second test node of the differential test data path and a second output node of the differential output node pair.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: November 21, 2023
    Assignee: XILINX, INC.
    Inventors: Nakul Narang, Siok Wei Lim, Luhui Chen, Yipeng Wang, Kee Hian Tan
  • Patent number: 11811660
    Abstract: Apparatus, methods, and systems for tuple space search-based flow classification using cuckoo hash tables and unmasked packet headers are described herein. A device can communicate with one or more hardware switches. The device can include memory to store hash table entries of a hash table. The device can include processing circuitry to perform a hash lookup in the hash table. The lookup can be based on an unmasked key included in a packet header corresponding to a received data packet. The processing circuitry can retrieve an index pointing to a sub-table, the sub-table including a set of rules for handling the data packet. Other embodiments are also described.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: November 7, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Yipeng Wang, Sameh Gobriel
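    The lookup path this abstract describes — hashing an unmasked packet-header key into a cuckoo hash table to retrieve the index of a sub-table of rules — can be sketched in a few lines. This is a minimal illustrative sketch, not the patented implementation; all class and function names are assumptions.

```python
# Hypothetical sketch: a cuckoo-style hash table mapping an unmasked
# packet-header key to the index of a sub-table of flow rules.

def _candidate_buckets(key: bytes, num_buckets: int):
    """Two candidate bucket indices per key, as in cuckoo hashing."""
    return hash(key) % num_buckets, hash(b"alt" + key) % num_buckets

class CuckooIndexTable:
    def __init__(self, num_buckets: int = 64, slots_per_bucket: int = 4):
        self.num_buckets = num_buckets
        self.slots_per_bucket = slots_per_bucket
        # Each bucket holds (key, sub_table_index) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key: bytes, sub_table_index: int) -> bool:
        for b in _candidate_buckets(key, self.num_buckets):
            if len(self.buckets[b]) < self.slots_per_bucket:
                self.buckets[b].append((key, sub_table_index))
                return True
        return False  # a full cuckoo table would relocate an entry here

    def lookup(self, key: bytes):
        """Return the sub-table index for this key, or None on a miss."""
        for b in _candidate_buckets(key, self.num_buckets):
            for k, idx in self.buckets[b]:
                if k == key:
                    return idx
        return None
```

    Looking up an unmasked header thus yields the index of the sub-table whose rule set then handles the packet.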
  • Patent number: 11709774
    Abstract: Examples described herein relate to a network interface apparatus that includes packet processing circuitry and a bus interface. In some examples, the packet processing circuitry to: process a received packet that includes data, a request to perform a write operation to write the data to a cache, and an indicator that the data is to be durable and based at least on the received packet including the request and the indicator, cause the data to be written to the cache and non-volatile memory. In some examples, the packet processing circuitry is to issue a command to an input output (IO) controller to cause the IO controller to write the data to the cache and the non-volatile memory.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: July 25, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yifan Yuan, Yipeng Wang, Tsung-Yuan C. Tai, Tony Hurson
  • Patent number: 11698929
    Abstract: A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and progress tracking. The offload engine can be associated with a last level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare signatures against the generated signature, retrieve a key associated with the signature, and apply the comparator to compare the key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the result queue. Acceleration of operations in tree traversal and tuple search can also occur.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: July 11, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Andrew J. Herdrich, Tsung-Yuan C. Tai, Yipeng Wang, Raghu Kondapalli, Alexander Bachmutsky, Yifan Yuan
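    The two-stage compare in the hash-lookup path above — a cheap signature match first, then a full key comparison to confirm — could look like the following. This is a simplified software sketch of the idea, with all names invented for illustration.

```python
def _signature(key: bytes) -> int:
    """Short signature derived from the key; cheap to compare in bulk."""
    return hash(key) & 0xFFFF

class LookupEngine:
    def __init__(self):
        self.entries = []  # (signature, key, data_pointer) triples

    def insert(self, key: bytes, data_pointer: int):
        self.entries.append((_signature(key), key, data_pointer))

    def lookup(self, key: bytes):
        sig = _signature(key)
        for s, k, ptr in self.entries:
            # Stage 1: compare signatures; Stage 2: confirm on the full key.
            if s == sig and k == key:
                return ptr  # would be placed in the results queue
        return None
```

    The signature filter lets most non-matching entries be rejected without touching the (larger) stored keys, which is what makes the comparator stage worth accelerating.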
  • Publication number: 20230155591
    Abstract: A transmit driver architecture with a test mode (e.g., a JTAG configuration mode), extended equalization range, and/or multiple power supply domains. One example transmit driver circuit generally includes one or more driver unit cells having a differential input node pair configured to receive an input data signal and having a differential output node pair configured to output an output data signal; a plurality of power switches coupled between the differential output node pair and one or more power supply rails; a first set of one or more drivers coupled between a first test node of a differential test data path and a first output node of the differential output node pair; and a second set of one or more drivers coupled between a second test node of the differential test data path and a second output node of the differential output node pair.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 18, 2023
    Inventors: Nakul NARANG, Siok Wei LIM, Luhui CHEN, Yipeng WANG, Kee Hian TAN
  • Publication number: 20230082780
    Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part on a packet receive rate of the packets associated with the heavy flow.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Inventors: Chenmin SUN, Yipeng WANG, Rahul R. SHAH, Ren WANG, Sameh GOBRIEL, Hongjun NI, Mrittika GANGULI, Edwin VERPLANKE
  • Patent number: 11601531
    Abstract: One embodiment provides a network system. The network system includes an application layer to execute one or more networking applications to generate or receive data packets having flow identification (ID) information; and a packet processing layer having profiling circuitry to generate a sketch table indicative of packet flow count data; the sketch table having a plurality of buckets, each bucket includes a first section including a plurality of data fields, each data field of the first section to store flow ID and packet count data, each bucket also having a second section having a plurality of data fields, each data field of the second section to store packet count data.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: March 7, 2023
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Tsung-Yuan Tai
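    A toy version of such a two-section sketch bucket — a first section of (flow ID, count) fields backed by a second section of count-only fields — might look like this. Sizes and names are illustrative assumptions, not taken from the patent.

```python
NUM_BUCKETS = 16
TRACKED_SLOTS = 2   # first section: (flow ID, packet count) fields
EXTRA_SLOTS = 2     # second section: packet-count-only fields

class SketchTable:
    def __init__(self):
        self.tracked = [dict() for _ in range(NUM_BUCKETS)]  # flow_id -> count
        self.extra = [[0] * EXTRA_SLOTS for _ in range(NUM_BUCKETS)]

    def record(self, flow_id: str):
        b = hash(flow_id) % NUM_BUCKETS
        slots = self.tracked[b]
        if flow_id in slots or len(slots) < TRACKED_SLOTS:
            slots[flow_id] = slots.get(flow_id, 0) + 1
        else:
            # First section full: fall back to a count-only field,
            # sacrificing the flow's identity but keeping its count.
            self.extra[b][hash(flow_id) % EXTRA_SLOTS] += 1

    def estimate(self, flow_id: str) -> int:
        b = hash(flow_id) % NUM_BUCKETS
        if flow_id in self.tracked[b]:
            return self.tracked[b][flow_id]
        return self.extra[b][hash(flow_id) % EXTRA_SLOTS]
```

    The first section gives exact per-flow counts for the flows it can hold; the second section degrades gracefully to approximate counts once a bucket overflows.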
  • Patent number: 11500825
    Abstract: Techniques and apparatus for dynamic data access mode processes are described. In one embodiment, for example, an apparatus may include a processor, at least one memory coupled to the processor, the at least one memory comprising an indication of a database and instructions, the instructions, when executed by the processor, to cause the processor to determine a database utilization value for a database, perform a comparison of the database utilization value to at least one utilization threshold, and set an active data access mode to one of a low-utilization data access mode or a high-utilization data access mode based on the comparison. Other embodiments are described.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: November 15, 2022
    Assignee: INTEL CORPORATION
    Inventors: Ren Wang, Bruce Richardson, Tsung-Yuan Tai, Yipeng Wang, Pablo De Lara Guarch
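    The mode-selection step described above reduces to a threshold comparison. A minimal sketch follows; the threshold value and all names are assumptions for illustration.

```python
LOW_UTILIZATION_MODE = "low"
HIGH_UTILIZATION_MODE = "high"

def select_access_mode(db_utilization: float, threshold: float = 0.7) -> str:
    """Compare a database utilization value against a utilization
    threshold and pick the active data access mode accordingly."""
    if db_utilization >= threshold:
        return HIGH_UTILIZATION_MODE
    return LOW_UTILIZATION_MODE
```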
  • Patent number: 11409506
    Abstract: Examples may include a method of compiling a declarative language program for a virtual switch. The method includes parsing the declarative language program, the program defining a plurality of match-action tables (MATs), translating the plurality of MATs into intermediate code, and parsing a core identifier (ID) assigned to each one of the plurality of MATs. When the core IDs of the plurality of MATs are the same, the method includes connecting intermediate code of the plurality of MATs using function calls, and translating the intermediate code of the plurality of MATs into machine code to be executed by a core identified by the core IDs.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Tsung-Yuan C. Tai, Jr-Shian Tsai, Xiangyang Guo
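    The core-ID grouping step above — connecting match-action tables that share a core with direct function calls — can be mimicked in a few lines, with Python callables standing in for the generated machine code. All names here are illustrative assumptions.

```python
from collections import defaultdict

def compile_mats(mats):
    """Group match-action tables (MATs) by their assigned core ID and
    chain same-core tables together with direct function calls."""
    groups = defaultdict(list)
    for mat in mats:
        groups[mat["core_id"]].append(mat["fn"])

    def chain(fns):
        def pipeline(packet):
            for fn in fns:   # same-core MATs run as one call chain
                packet = fn(packet)
            return packet
        return pipeline

    return {core_id: chain(fns) for core_id, fns in groups.items()}
```

    Chaining with plain calls (rather than handing packets between cores through queues) is what lets same-core tables run as a single pipeline.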
  • Patent number: 11392298
    Abstract: Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Sameh Gobriel, Tsung-Yuan C. Tai
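    The control loop in this abstract — comparing miss ratios across two time windows to steer the insertion ratio — might be sketched as follows. The step size and adjustment policy are assumptions, not the patented scheme.

```python
def adjust_insertion_ratio(insertion_ratio: float,
                           prev_window_miss_ratio: float,
                           curr_window_miss_ratio: float,
                           step: float = 0.05) -> float:
    """If the cache miss ratio rose since the previous window, insert
    more aggressively; if it fell or held steady, back off insertions."""
    if curr_window_miss_ratio > prev_window_miss_ratio:
        return min(1.0, insertion_ratio + step)
    return max(step, insertion_ratio - step)
```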
  • Publication number: 20220222118
    Abstract: Methods, apparatus, and systems for adaptive collaborative memory with the assistance of programmable networking devices. Under one example, the programmable networking device is a switch that is deployed in a system or cluster of servers comprising a plurality of nodes. The switch selects one or more nodes to be remote memory server nodes and allocates one or more portions of memory on those nodes to be used as remote memory for one or more remote memory client nodes. The switch receives memory access request messages originating from remote memory client nodes containing indicia identifying memory to be accessed, determines which remote memory server node is to be used for servicing a given memory access request, and sends a memory access request message containing indicia identifying memory to be accessed to the remote memory server node that is determined. The switch also facilitates return of messages containing remote memory access responses to the client nodes.
    Type: Application
    Filed: March 31, 2022
    Publication date: July 14, 2022
    Inventors: Ren WANG, Christian MACIOCCO, Yipeng WANG, Kshitij A. DOSHI, Vesh Raj SHARMA BANJADE, Satish C. JHA, S M Iftekharul ALAM, Srikathyayani SRIKANTESWARA, Alexander BACHMUTSKY
  • Publication number: 20220114270
    Abstract: Examples described herein relate to offload circuitry comprising one or more compute engines that are configurable to perform a workload offloaded from a process executed by a processor based on a descriptor particular to the workload. In some examples, the offload circuitry is configurable to perform the workload, among multiple different workloads. In some examples, the multiple different workloads include one or more of: data transformation (DT) for data format conversion, Locality Sensitive Hashing (LSH) for neural network (NN), similarity search, sparse general matrix-matrix multiplication (SpGEMM) acceleration of hash based sparse matrix multiplication, data encode, data decode, or embedding lookup.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Ren WANG, Sameh GOBRIEL, Somnath PAUL, Yipeng WANG, Priya AUTEE, Abhirupa LAYEK, Shaman NARAYANA, Edwin VERPLANKE, Mrittika GANGULI, Jr-Shian TSAI, Anton SOROKIN, Suvadeep BANERJEE, Abhijit DAVARE, Desmond KIRKPATRICK
  • Publication number: 20220001017
    Abstract: The present disclosure discloses a micromolecular compound specifically degrading tau protein, and an application thereof. The chemical structure of the micromolecular compound specifically degrading tau protein is TBM-L-ULM or a pharmaceutically acceptable salt, enantiomer, stereoisomer, solvate, polymorph or N-oxide thereof, TBM being a tau protein-binding moiety, L being a linking group, and ULM being a ubiquitin ligase-binding moiety, the tau protein-binding moiety and the ubiquitin ligase-binding moiety being connected by means of the linking group. The micromolecular compound specifically degrading tau protein may increase tau protein degradation in a cell, thereby decreasing tau protein content.
    Type: Application
    Filed: November 9, 2018
    Publication date: January 6, 2022
    Applicant: Shanghai Qiangrui Biotech Co., Ltd.
    Inventor: Yipeng WANG
  • Publication number: 20210406147
    Abstract: An apparatus and method for closed loop dynamic resource allocation.
    Type: Application
    Filed: June 27, 2020
    Publication date: December 30, 2021
    Inventors: BIN LI, REN WANG, KSHITIJ ARUN DOSHI, FRANCESC GUIM BERNAT, YIPENG WANG, RAVISHANKAR IYER, ANDREW HERDRICH, TSUNG-YUAN TAI, ZHU ZHOU, RASIKA SUBRAMANIAN
  • Patent number: 11201940
    Abstract: Technologies for flow rule aware exact match cache compression include multiple computing devices in communication over a network. A computing device reads a network packet from a network port and extracts one or more key fields from the packet to generate a lookup key. The key fields are identified by a key field specification of an exact match flow cache. The computing device may dynamically configure the key field specification based on an active flow rule set. The computing device may compress the key field specification to match a union of non-wildcard fields of the active flow rule set. The computing device may expand the key field specification in response to insertion of a new flow rule. The computing device looks up the lookup key in the exact match flow cache and, if a match is found, applies the corresponding action. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: December 14, 2021
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Antonio Fischetti, Sameh Gobriel, Tsung-Yuan C. Tai
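    Compressing the key field specification to the union of non-wildcard fields, as described above, is straightforward to sketch. The rule representation here is an assumption: each rule maps a field name to a value, with `None` standing for a wildcard.

```python
def compress_key_spec(rules):
    """Key field specification = union of all non-wildcard fields in the
    active flow rule set; fields wildcarded by every rule drop out."""
    spec = set()
    for rule in rules:  # rule: field name -> value, None = wildcard
        spec |= {field for field, value in rule.items() if value is not None}
    return spec

def extract_lookup_key(packet_fields, spec):
    """Build the exact-match lookup key from just the specified fields."""
    return tuple(sorted((f, packet_fields[f]) for f in spec))
```

    Inserting a rule that matches on a field outside the current specification would force the specification to expand, as the abstract notes.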
  • Publication number: 20210367887
    Abstract: Apparatus, methods, and systems for tuple space search-based flow classification using cuckoo hash tables and unmasked packet headers are described herein. A device can communicate with one or more hardware switches. The device can include memory to store hash table entries of a hash table. The device can include processing circuitry to perform a hash lookup in the hash table. The lookup can be based on an unmasked key included in a packet header corresponding to a received data packet. The processing circuitry can retrieve an index pointing to a sub-table, the sub-table including a set of rules for handling the data packet. Other embodiments are also described.
    Type: Application
    Filed: August 6, 2021
    Publication date: November 25, 2021
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Yipeng Wang, Sameh Gobriel