Patents by Inventor Zhihua Wu

Zhihua Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250128249
    Abstract: Methods and apparatus are described for the regeneration of catalysts used in the catalytic pyrolysis of waste plastics, polymers, and other waste materials into useful chemical and fuel products such as paraffins, olefins, and aromatics (e.g., BTX), in which minerals are removed by washing to restore catalytic activity and selectivity. A catalytic pyrolysis process is also described that includes regenerating a portion of the catalyst used for the catalytic pyrolysis with a washing process.
    Type: Application
    Filed: October 22, 2023
    Publication date: April 24, 2025
    Inventors: Zhihua Wu, Torren Carlson, Omar Basha, Leslaw Mleczko
  • Publication number: 20250070579
    Abstract: A method for calibrating a battery level, an electronic device, and a storage medium are provided. The method includes: after the electronic device is turned on, obtaining the turn-off type from when the electronic device was last turned off; and, when the turn-off type is a first type, obtaining the current battery level of the electronic device based on the turn-off time and the remaining battery level recorded at the last turn-off. The first type indicates that, after the electronic device is turned off, the first battery supplies power to the charging management module but not to the peripheral circuit module or the coulometer. Compared with the prior-art approach of obtaining the current battery level from an OCV curve, the battery level obtained from the turn-off time and the remaining battery level at the last turn-off is more accurate.
    Type: Application
    Filed: October 18, 2022
    Publication date: February 27, 2025
    Inventors: Chenghe YANG, Zhihua WU
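The calibration step described in the abstract can be sketched in Python. The linear self-discharge rate and all names below are illustrative assumptions, not details taken from the application:

```python
from datetime import datetime

# Hypothetical self-discharge rate while powered off (percent per day).
SELF_DISCHARGE_PCT_PER_DAY = 0.5

def calibrated_level(turn_off_level: float, turn_off_time: datetime,
                     now: datetime) -> float:
    """Estimate the current battery level from the level and timestamp
    recorded at the last turn-off, assuming linear self-discharge."""
    days_off = (now - turn_off_time).total_seconds() / 86400.0
    estimate = turn_off_level - SELF_DISCHARGE_PCT_PER_DAY * days_off
    return max(0.0, estimate)

# 10 days off at 0.5 %/day: 80 % recorded at turn-off -> 75 % now.
level = calibrated_level(80.0, datetime(2024, 1, 1), datetime(2024, 1, 11))
```

The point of the record-at-turn-off approach is that the stored level and timestamp give a direct starting point, rather than inferring state of charge from an open-circuit-voltage curve after wake-up.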
  • Patent number: 12229667
    Abstract: A method and an apparatus for generating a shared encoder are provided, which belong to the fields of computer technology and deep learning. The method includes: sending, by a master node, a shared encoder training instruction to child nodes, so that each child node obtains training samples based on a type of a target shared encoder included in the training instruction; sending an initial parameter set of the target shared encoder to be trained to each child node after obtaining a confirmation message returned by each child node; obtaining an updated parameter set of the target shared encoder returned by each child node; and determining a target parameter set corresponding to the target shared encoder based on a first preset rule and the updated parameter sets returned by the child nodes.
    Type: Grant
    Filed: March 23, 2021
    Date of Patent: February 18, 2025
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD
    Inventors: Daxiang Dong, Wenhui Zhang, Zhihua Wu, Dianhai Yu, Yanjun Ma, Haifeng Wang
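The master node's final aggregation step could look like the following sketch, which assumes the "first preset rule" is an element-wise mean of the child updates (the patent does not fix the rule; this is one common choice):

```python
def aggregate_parameters(child_updates):
    """Combine the updated parameter sets returned by the child nodes
    into one target parameter set via an element-wise mean."""
    n = len(child_updates)
    keys = child_updates[0].keys()
    return {k: sum(u[k] for u in child_updates) / n for k in keys}

# Two child nodes return updated parameter sets for the shared encoder.
updates = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]
target = aggregate_parameters(updates)
```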
  • Publication number: 20250036920
    Abstract: The present disclosure provides a mixture-of-experts (MoE) model implementation method and system, an electronic device, and a storage medium, and relates to the field of artificial intelligence (AI), in particular to deep learning and distributed storage. The method includes: constructing a communication group, the communication group including a tensor-parallelism communication group, the tensor-parallelism communication group including at least two computing devices, with tensor-parallelism segmentation adopted for the sparse parameters of each computing device in the same tensor-parallelism communication group; and training an MoE model based on the communication group. With the solutions of the present disclosure, normal operation of model training can be guaranteed.
    Type: Application
    Filed: September 20, 2022
    Publication date: January 30, 2025
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Liang SHEN, Haifeng WANG, Huachao WU, Weibao GONG, Zhihua WU, Dianhai YU
  • Publication number: 20240394190
    Abstract: The present application provides a method of training a deep learning model. A specific implementation solution of the method of training the deep learning model includes: determining, according to first training data for a current training round, a first target parameter required to be written into a target memory in a first network parameter required by an embedding of the first training data, wherein the target memory is a memory contained in a target processor; determining a remaining storage slot in the target memory according to a first mapping relationship between a storage slot of the target memory and a network parameter; and writing, in response to the remaining storage slot meeting a storage requirement of the first target parameter, the first target parameter into the target memory so that a computing core contained in the target processor adjusts the first network parameter according to the first training data.
    Type: Application
    Filed: September 27, 2022
    Publication date: November 28, 2024
    Inventors: Minxu ZHANG, Haifeng WANG, Fan ZHANG, Xinxuan WU, Xuefeng YAO, Danlei FENG, Zhihua WU, Zhipeng TAN, Jie DING, Dianhai YU
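The slot-mapping check described in the abstract — write the target parameters into the target memory only if enough free slots remain — can be illustrated with a toy model. All class and method names here are hypothetical:

```python
class TargetMemory:
    """Toy model of a fixed-slot memory on a target processor that
    caches embedding parameters by slot index."""

    def __init__(self, num_slots):
        self.slots = {}            # slot index -> parameter id
        self.capacity = num_slots

    def remaining_slots(self):
        return self.capacity - len(self.slots)

    def try_write(self, param_ids):
        """Write only if the remaining slots meet the storage
        requirement of all target parameters; otherwise refuse."""
        if len(param_ids) > self.remaining_slots():
            return False
        free = (i for i in range(self.capacity) if i not in self.slots)
        for pid, slot in zip(param_ids, free):
            self.slots[slot] = pid
        return True

mem = TargetMemory(4)
ok_first = mem.try_write(["p0", "p1", "p2"])   # fits: 3 of 4 slots
ok_second = mem.try_write(["p3", "p4"])        # refused: only 1 slot left
```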
  • Publication number: 20240354987
    Abstract: Disclosed are methods, apparatuses, and computer readable media for localization. An example apparatus may include at least one processor and at least one memory. The at least one memory may include computer program code, and the at least one memory and the computer program code may be configured to, with the at least one processor, cause the apparatus to perform determining positions of a plurality of objects of a first type to form a first polygon, determining distances between devices of a plurality of devices to form at least one second polygon, and matching the first polygon with the at least one second polygon to determine one of the at least one second polygon as a third polygon corresponding to the first polygon.
    Type: Application
    Filed: July 16, 2021
    Publication date: October 24, 2024
    Inventors: Xiaobing LENG, Fei GAO, Zhihua WU, Pengfei GUI, Nan HU
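The polygon-matching step can be sketched by comparing sorted multisets of pairwise distances; the matching criterion below (minimum total absolute difference) is an assumption for illustration, not necessarily the patented rule:

```python
from itertools import combinations
import math

def pairwise_distances(points):
    """Sorted multiset of pairwise distances between 2-D points."""
    return sorted(math.dist(a, b) for a, b in combinations(points, 2))

def match_polygon(object_positions, candidate_device_distances):
    """Return the index of the candidate second polygon whose distance
    multiset best matches the first polygon formed by the objects."""
    target = pairwise_distances(object_positions)

    def score(cand):
        return sum(abs(a - b) for a, b in zip(target, sorted(cand)))

    return min(range(len(candidate_device_distances)),
               key=lambda i: score(candidate_device_distances[i]))

objs = [(0, 0), (3, 0), (0, 4)]               # pairwise distances 3, 4, 5
cands = [[3.1, 4.0, 5.0], [1.0, 1.0, 1.4]]    # per-candidate device distances
best = match_polygon(objs, cands)
```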
  • Publication number: 20240276432
    Abstract: There is provided a method comprising receiving at least one measured signal characteristic from a user equipment, the user equipment being located at a user equipment location; comparing the at least one measured signal characteristic to at least one of a plurality of signal characteristics, each signal characteristic being associated with a respective measurement point; and determining, based on the comparing, a probability that the user equipment location is a first location.
    Type: Application
    Filed: April 3, 2024
    Publication date: August 15, 2024
    Inventors: Jun WANG, Gang SHEN, Liuhai LI, Liang CHEN, Kan LIN, Zhihua WU, Chaojun XU, Jiexing GAO
  • Publication number: 20240275848
    Abstract: The present disclosure provides a content initialization method and apparatus, an electronic device and a storage medium, which relate to the field of computer technology, in particular to deep learning and distributed computing. The content initialization method is applied to any one of a plurality of devices included in a distributed system. A specific implementation scheme of the method is: determining, according to size information of a resource space for the distributed system and identification information of the device, space information of a first sub-space for the device in the resource space, wherein the space information includes position information of the first sub-space within the resource space; and determining an initialization content for the first sub-space according to a random seed and the position information.
    Type: Application
    Filed: August 1, 2022
    Publication date: August 15, 2024
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Guoxia WANG, Long LI, Zhihua WU
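The seed-plus-position idea can be sketched as follows: each device derives its sub-space offset from its rank, then seeds an RNG with (seed, position) so any device reproduces the same content for the same slice. The sharding scheme and names are illustrative assumptions:

```python
import random

def init_subspace(seed, position, size):
    """Deterministically initialize one device's sub-space: seeding
    with (seed, position) makes the content a pure function of the
    slice's location in the global resource space."""
    rng = random.Random(f"{seed}:{position}")
    return [rng.random() for _ in range(size)]

# Four device ranks each own a contiguous sub-space of a 16-element space.
space_size, num_devices = 16, 4
sub_size = space_size // num_devices
shards = [init_subspace(42, rank * sub_size, sub_size)
          for rank in range(num_devices)]
```

Re-running `init_subspace` with the same seed and position reproduces a shard exactly, which is what lets every device initialize independently yet consistently.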
  • Patent number: 12041571
    Abstract: There is provided a method comprising receiving at least one measured signal characteristic from a user equipment, the user equipment being located at a user equipment location; comparing the at least one measured signal characteristic to at least one of a plurality of signal characteristics, each signal characteristic being associated with a respective measurement point; and determining, based on the comparing, a probability that the user equipment location is a first location.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: July 16, 2024
    Assignee: NOKIA SOLUTIONS AND NETWORKS OY
    Inventors: Jun Wang, Gang Shen, Liuhai Li, Liang Chen, Kan Lin, Zhihua Wu, Chaojun Xu, Jiexing Gao
  • Publication number: 20230206080
    Abstract: A model training system includes at least one first cluster and a second cluster communicating with the at least one first cluster. The at least one first cluster is configured to acquire a sample data set, generate training data according to the sample data set, and send the training data to the second cluster; and the second cluster is configured to train a pre-trained model according to the training data sent by the at least one first cluster.
    Type: Application
    Filed: March 7, 2023
    Publication date: June 29, 2023
    Inventors: Shuohuan WANG, Weibao GONG, Zhihua WU, Yu SUN, Siyu DING, Yaqian HAN, Yanbin ZHAO, Yuang LIU, Dianhai YU
  • Publication number: 20230206024
    Abstract: A resource allocation method, including: determining a neural network model to be allocated resources, and determining a set of devices capable of providing resources for the neural network model; determining, based on the set of devices and the neural network model, a first set of evaluation points including a first number of evaluation points, each of which corresponds to one resource allocation scheme and the resource use cost corresponding to that scheme; updating and iterating the first set of evaluation points to obtain a second set of evaluation points including a second number of evaluation points, each of which corresponds to one resource allocation scheme and the resource use cost corresponding to that scheme, the second number being greater than the first number; and selecting the resource allocation scheme with the minimum resource use cost from the second set of evaluation points as the resource allocation scheme for allocating resources to the neural network model.
    Type: Application
    Filed: August 19, 2022
    Publication date: June 29, 2023
    Inventors: Ji Liu, Zhihua Wu, Danlei Feng, Chendi Zhou, Minxu Zhang, Xinxuan Wu, Xuefeng Yao, Dejing Dou, Dianhai Yu, Yanjun Ma
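The evaluate-expand-select loop can be sketched generically: grow the candidate set by perturbing existing schemes, then keep the minimum-cost one. The perturbation rule and cost function below are placeholders; the patented update rule is not specified here:

```python
import random

def search_allocation(initial, cost, expand_to, seed=0):
    """Expand a first set of evaluation points into a larger second set
    by perturbing existing schemes, then select the minimum-cost one."""
    rng = random.Random(seed)
    pool = [list(s) for s in initial]
    while len(pool) < expand_to:
        new = list(rng.choice(pool))
        i = rng.randrange(len(new))
        new[i] = max(1, new[i] + rng.choice([-1, 1]))  # tweak one share
        pool.append(new)
    return min(pool, key=cost)

# Toy cost: imbalance between two devices' resource shares.
cost = lambda s: abs(s[0] - s[1])
best = search_allocation([[7, 1], [6, 2]], cost, expand_to=30)
```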
  • Publication number: 20230206075
    Abstract: A method for distributing network layers in a neural network model includes: acquiring a to-be-processed neural network model and a computing device set; generating a target number of distribution schemes according to the network layers in the to-be-processed neural network model and the computing devices in the computing device set, the distribution schemes including corresponding relationships between the network layers and the computing devices; according to device types of the computing devices, combining the network layers corresponding to the same device type in each distribution scheme into one stage, to obtain a combination result of each distribution scheme; obtaining an adaptive value of each distribution scheme according to its combination result; and determining a target distribution scheme from the distribution schemes according to the respective adaptive values, and taking the target distribution scheme as the distribution result of the network layers in the to-be-processed neural network model.
    Type: Application
    Filed: November 21, 2022
    Publication date: June 29, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ji LIU, Zhihua WU, Danlei FENG, Minxu ZHANG, Xinxuan WU, Xuefeng YAO, Beichen MA, Dejing DOU, Dianhai YU, Yanjun MA
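The stage-combination step — merging consecutive layers assigned to the same device type into one stage — has a compact sketch using `itertools.groupby`; the layer and device names are hypothetical:

```python
from itertools import groupby

def stages_from_scheme(scheme):
    """Combine consecutive (layer, device type) assignments with the
    same device type into one stage, per the combination step."""
    return [[layer for layer, _ in group]
            for _, group in groupby(scheme, key=lambda pair: pair[1])]

# One candidate distribution scheme: layer -> device type.
scheme = [("conv1", "gpu"), ("conv2", "gpu"), ("fc1", "cpu"), ("fc2", "cpu")]
stages = stages_from_scheme(scheme)
```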
  • Publication number: 20230169351
    Abstract: A distributed training method based on end-to-end adaptation, a device and a storage medium are provided. The method includes: obtaining slicing results by slicing a model to be trained; obtaining an attribute of the computing resources allocated to the model for training by parsing the computing resources, in which the computing resources are determined based on the computing resource requirement of the model, the computing resources occupied by other models being trained, and the idle computing resources, and the attribute of the computing resources is configured to represent at least one of a topology relation and a task processing capability of the computing resources; determining a distribution strategy of each of the slicing results over the computing resources based on the attribute of the computing resources; and performing distributed training on the model using the computing resources based on the distribution strategy.
    Type: Application
    Filed: December 1, 2022
    Publication date: June 1, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Haifeng Wang, Zhihua Wu, Dianhai Yu, Yanjun Ma, Tian Wu
  • Publication number: 20220382441
    Abstract: A method and apparatus for constructing a virtual assembly, and a computer-readable storage medium, are provided. The method includes: receiving a cutting operation instruction input on a substrate by a user; displaying, on the substrate, a cutting path indicated by the cutting operation instruction and cutting the substrate into at least two parts; and, upon receiving an assembly instruction input by the user, assembling the at least two parts and displaying the virtual assembly formed by the assembly. Since the parts are determined by the cutting path indicated by the user's cutting operation instruction, the "parts" in the method are not limited by material or shape, and the virtual assembly formed by assembling them is not limited by materials, parts, space, etc. The construction of a virtual assembly is therefore highly flexible, improving the user experience.
    Type: Application
    Filed: August 10, 2022
    Publication date: December 1, 2022
    Inventors: Xin HUANG, Zhihua Wu, Jiaqi Fan, Shentao Wang
  • Publication number: 20220374704
    Abstract: The disclosure provides a neural network training method and apparatus, an electronic device, a medium and a program product, and relates to the field of artificial intelligence, in particular to the fields of deep learning and distributed learning.
    Type: Application
    Filed: December 21, 2021
    Publication date: November 24, 2022
    Applicant: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Danlei FENG, Long LIAN, Dianhai YU, Xuefeng YAO, Xinxuan WU, Zhihua WU, Yanjun MA
  • Publication number: 20220374713
    Abstract: The present disclosure provides a method and apparatus for performing distributed training on a deep learning model. The method may include: generating a distributed computation view based on data information of a to-be-trained deep learning model; generating a cluster resource view based on property information of a cluster hardware resource corresponding to the to-be-trained deep learning model; determining a target segmentation strategy of a distributed training task based on the distributed computation view and the cluster resource view; and performing distributed training on the to-be-trained deep learning model based on the target segmentation strategy.
    Type: Application
    Filed: August 3, 2022
    Publication date: November 24, 2022
    Inventors: Zhihua WU, Dianhai YU, Yulong AO, Weibao GONG
  • Publication number: 20220058222
    Abstract: The present disclosure provides a method of processing information, an apparatus of processing information, a method of recommending information, an electronic device, and a storage medium. The method includes: obtaining a tree structure parameter of a tree structure, wherein the tree structure is configured to index an object set used for recommendation; obtaining a classifier parameter of a classifier, wherein the classifier is configured to sequentially predict, from a top layer of the tree structure to a bottom layer of the tree structure, a preference node set whose probability of being preferred by a user is ranked higher in each layer, and a preference node set of each layer subsequent to the top layer of the tree structure is determined based on a preference node set of a previous layer of the each layer; and constructing a recalling model based on the tree structure parameter and the classifier parameter.
    Type: Application
    Filed: November 3, 2021
    Publication date: February 24, 2022
    Inventors: Mo CHENG, Dianhai YU, Lin MA, Zhihua WU, Daxiang DONG, Wei TANG
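The layer-by-layer prediction over the index tree amounts to a beam search: at each layer keep the top-k nodes by preference score, then expand only their children. The sketch below assumes a dict-based tree and a scalar score per node, both illustrative:

```python
def beam_recall(children, score, root, beam_width):
    """Beam search from the top layer of an index tree to the bottom,
    keeping only the preference node set (top-k) at each layer."""
    frontier = [root]
    while True:
        expanded = [c for node in frontier for c in children.get(node, [])]
        if not expanded:
            return frontier            # bottom layer reached: item nodes
        frontier = sorted(expanded, key=score, reverse=True)[:beam_width]

# Toy two-level tree over four items, with per-node preference scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
prefs = {"a": 0.9, "b": 0.4, "a1": 0.8, "a2": 0.1, "b1": 0.7, "b2": 0.2}
top = beam_recall(tree, lambda n: prefs[n], "root", beam_width=2)
```

Because each layer's preference set is derived only from the previous layer's set, the search cost grows with tree depth and beam width rather than with the full item count.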
  • Publication number: 20220061013
    Abstract: There is provided a method comprising receiving at least one measured signal characteristic from a user equipment, the user equipment being located at a user equipment location; comparing the at least one measured signal characteristic to at least one of a plurality of signal characteristics, each signal characteristic being associated with a respective measurement point; and determining, based on the comparing, a probability that the user equipment location is a first location.
    Type: Application
    Filed: September 17, 2018
    Publication date: February 24, 2022
    Inventors: Jun WANG, Gang SHEN, Liuhai LI, Liang CHEN, Kan LIN, Zhihua WU, Chaojun XU, Jiexing GAO
  • Patent number: D954502
    Type: Grant
    Filed: May 16, 2021
    Date of Patent: June 14, 2022
    Inventor: Zhihua Wu
  • Patent number: D1024494
    Type: Grant
    Filed: December 6, 2022
    Date of Patent: April 30, 2024
    Inventor: Zhihua Wu