Patents by Inventor Zhihua Wu

Zhihua Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230206024
    Abstract: A resource allocation method, including: determining a neural network model to which resources are to be allocated, and determining a set of devices capable of providing resources for the neural network model; determining, based on the set of devices and the neural network model, a first set of evaluation points including a first number of evaluation points, each of which corresponds to one resource allocation scheme and the resource use cost of that scheme; updating and iterating the first set of evaluation points to obtain a second set of evaluation points including a second number of evaluation points, each of which likewise corresponds to one resource allocation scheme and its resource use cost, the second number being greater than the first number; and selecting the resource allocation scheme with the minimum resource use cost from the second set of evaluation points as the scheme for allocating resources to the neural network model.
    Type: Application
    Filed: August 19, 2022
    Publication date: June 29, 2023
    Inventors: Ji Liu, Zhihua Wu, Danlei Feng, Chendi Zhou, Minxu Zhang, Xinxuan Wu, Xuefeng Yao, Dejing Dou, Dianhai Yu, Yanjun Ma
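
Illustrative only: a minimal Python sketch of the evaluation-point search that the 20230206024 abstract describes — start from a small first set of candidate allocation schemes, grow it into a larger second set by iterating on low-cost points, and return the minimum-cost scheme. The cost model, the `mutate` step, and the device capacities are hypothetical stand-ins; the patent does not specify them.

```python
import random

# Toy stand-in for the patent's resource use cost: makespan-style load
# imbalance across devices. The real cost model is not given in the abstract.
def cost(allocation, capacities):
    loads = [0.0] * len(capacities)
    for dev in allocation:
        loads[dev] += 1.0 / capacities[dev]
    return max(loads)

def mutate(allocation, n_devices):
    # Derive a new evaluation point by reassigning one randomly chosen layer.
    child = list(allocation)
    child[random.randrange(len(child))] = random.randrange(n_devices)
    return child

def allocate(n_layers, capacities, first_number=8, second_number=32):
    n_dev = len(capacities)
    # First set: `first_number` random evaluation points.
    points = [[random.randrange(n_dev) for _ in range(n_layers)]
              for _ in range(first_number)]
    # Update and iterate until the larger second set is reached.
    while len(points) < second_number:
        parent = min(points, key=lambda a: cost(a, capacities))
        points.append(mutate(parent, n_dev))
    # Select the scheme with minimum resource use cost.
    return min(points, key=lambda a: cost(a, capacities))

print(allocate(n_layers=10, capacities=[1.0, 2.0, 4.0]))
```
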
  • Publication number: 20230206080
    Abstract: A model training system includes at least one first cluster and a second cluster communicating with the at least one first cluster. The at least one first cluster is configured to acquire a sample data set, generate training data according to the sample data set, and send the training data to the second cluster; and the second cluster is configured to train a pre-trained model according to the training data sent by the at least one first cluster.
    Type: Application
    Filed: March 7, 2023
    Publication date: June 29, 2023
    Inventors: Shuohuan WANG, Weibao GONG, Zhihua WU, Yu SUN, Siyu DING, Yaqian HAN, Yanbin ZHAO, Yuang LIU, Dianhai YU
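
Illustrative only: a minimal sketch of the two-cluster split in 20230206080, with an in-process queue standing in for the link between clusters. The toy feature/label generation and the SGD update are hypothetical; the abstract does not describe either.

```python
from queue import Queue

channel = Queue()  # stand-in for the first-to-second cluster transport

def first_cluster(sample_data_set):
    # Generate training data from the sample data set and send it onward.
    for sample in sample_data_set:
        channel.put((float(sample), 2.0 * sample))  # toy feature/label pair
    channel.put(None)  # end-of-stream marker

def second_cluster(model):
    # Train the (pre-trained) model on the received training data.
    while (item := channel.get()) is not None:
        x, y = item
        model["w"] += 0.01 * (y - model["w"] * x) * x  # toy SGD step

model = {"w": 0.0}
first_cluster(range(1, 6))
second_cluster(model)
print(model)
```
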
  • Publication number: 20230206075
    Abstract: A method for distributing network layers in a neural network model includes: acquiring a to-be-processed neural network model and a computing device set; generating a target number of distribution schemes according to the network layers in the to-be-processed neural network model and the computing devices in the computing device set, the distribution schemes including correspondences between the network layers and the computing devices; combining, according to the device types of the computing devices, the network layers corresponding to the same device type in each distribution scheme into one stage, to obtain a combination result for each distribution scheme; obtaining an adaptive value of each distribution scheme according to its combination result; and determining a target distribution scheme from the distribution schemes according to the respective adaptive values, and taking the target distribution scheme as the distribution result for the network layers in the to-be-processed neural network model.
    Type: Application
    Filed: November 21, 2022
    Publication date: June 29, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ji LIU, Zhihua WU, Danlei FENG, Minxu ZHANG, Xinxuan WU, Xuefeng YAO, Beichen MA, Dejing DOU, Dianhai YU, Yanjun MA
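
Illustrative only: a sketch of the scheme generation, stage combination, and adaptive-value selection steps in 20230206075. The device set, the balance-based fitness, and the random scheme generator are hypothetical; the patent does not define the adaptive value.

```python
import random

DEVICE_TYPES = {0: "gpu", 1: "gpu", 2: "cpu"}  # hypothetical computing device set

def combine_into_stages(scheme):
    # Combine the network layers assigned to the same device type into one stage.
    stages = {}
    for layer, dev in enumerate(scheme):
        stages.setdefault(DEVICE_TYPES[dev], []).append(layer)
    return stages

def adaptive_value(scheme):
    # Toy fitness derived from the combination result: prefer balanced stages.
    sizes = [len(layers) for layers in combine_into_stages(scheme).values()]
    return -(max(sizes) - min(sizes))

def target_distribution_scheme(n_layers, target_number=20):
    schemes = [[random.randrange(len(DEVICE_TYPES)) for _ in range(n_layers)]
               for _ in range(target_number)]
    return max(schemes, key=adaptive_value)

print(target_distribution_scheme(n_layers=8))
```
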
  • Publication number: 20230169351
    Abstract: A distributed training method based on end-to-end adaptation, a device and a storage medium. The method includes: obtaining slicing results by slicing a model to be trained; obtaining an attribute of the computing resources allocated to the model for training by parsing the computing resources, in which the computing resources are determined based on the computing resource requirement of the model, the computing resources occupied by other models being trained, and the idle computing resources, and the attribute of the computing resources represents at least one of a topology relation and a task processing capability of the computing resources; determining a distribution strategy for each of the slicing results over the computing resources based on the attribute of the computing resources; and performing distributed training on the model using the computing resources based on the distribution strategy.
    Type: Application
    Filed: December 1, 2022
    Publication date: June 1, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Haifeng Wang, Zhihua Wu, Dianhai Yu, Yanjun Ma, Tian Wu
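
Illustrative only: a sketch of the slice-then-place flow in 20230169351 — slice the model, read an attribute of the allocated resources, and derive a distribution strategy. The greedy earliest-finish placement and the throughput attribute are hypothetical simplifications.

```python
def slice_model(layer_sizes, n_slices):
    # Slice the model (here, a list of per-layer costs) into contiguous chunks.
    k = len(layer_sizes) // n_slices or 1
    return [layer_sizes[i:i + k] for i in range(0, len(layer_sizes), k)]

def distribution_strategy(slices, throughput):
    # `throughput` plays the role of the parsed resource attribute (task
    # processing capability). Greedily place each slice on the device that
    # would finish it earliest.
    finish = {dev: 0.0 for dev in throughput}
    plan = {}
    for idx, s in enumerate(slices):
        work = sum(s)
        dev = min(finish, key=lambda d: finish[d] + work / throughput[d])
        finish[dev] += work / throughput[dev]
        plan[idx] = dev
    return plan

slices = slice_model([64, 64, 128, 128, 256, 256], n_slices=3)
print(distribution_strategy(slices, {"gpu0": 4.0, "gpu1": 2.0}))
```
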
  • Publication number: 20220382441
    Abstract: A method and apparatus for constructing a virtual assembly, and a computer-readable storage medium, are provided. The method includes: receiving a cutting operation instruction input on a substrate by a user; displaying, on the substrate, the cutting path indicated by the cutting operation instruction and cutting the substrate into at least two parts; and, upon receiving an assembly instruction input by the user, assembling the at least two parts and displaying the virtual assembly thus formed. Since the parts are determined by the cutting path indicated by the user's cutting operation instruction, the “parts” in the method are not limited by material or shape, and the virtual assembly formed by assembling them is not limited by materials, parts, space, etc. The construction of a virtual assembly is therefore highly flexible, improving the user experience.
    Type: Application
    Filed: August 10, 2022
    Publication date: December 1, 2022
    Inventors: Xin HUANG, Zhihua Wu, Jiaqi Fan, Shentao Wang
  • Publication number: 20220374704
    Abstract: The disclosure provides a neural network training method and apparatus, an electronic device, a medium and a program product, and relates to the field of artificial intelligence, in particular to the fields of deep learning and distributed learning.
    Type: Application
    Filed: December 21, 2021
    Publication date: November 24, 2022
    Applicant: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Danlei FENG, Long LIAN, Dianhai YU, Xuefeng YAO, Xinxuan WU, Zhihua WU, Yanjun MA
  • Publication number: 20220374713
    Abstract: The present disclosure provides a method and apparatus for performing distributed training on a deep learning model. The method may include: generating a distributed computation view based on data information of a to-be-trained deep learning model; generating a cluster resource view based on property information of a cluster hardware resource corresponding to the to-be-trained deep learning model; determining a target segmentation strategy of a distributed training task based on the distributed computation view and the cluster resource view; and performing distributed training on the to-be-trained deep learning model based on the target segmentation strategy.
    Type: Application
    Filed: August 3, 2022
    Publication date: November 24, 2022
    Inventors: Zhihua WU, Dianhai YU, Yulong AO, Weibao GONG
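
Illustrative only: a sketch of picking a target segmentation strategy from a computation view and a resource view, as in 20220374713. The candidate space (data-parallel × model-parallel degrees) and the time estimate are hypothetical stand-ins for whatever the patent actually searches.

```python
from itertools import product

comp_view = [3.0, 1.0, 2.0, 2.0]    # computation view: per-layer cost (made up)
resource_view = {"dp": 2, "mp": 2}  # resource view: available parallel degrees

def estimated_time(strategy):
    # Toy estimate: perfect compute scaling plus a communication penalty.
    dp, mp = strategy
    return sum(comp_view) / (dp * mp) + 0.1 * (dp - 1) + 0.2 * (mp - 1)

candidates = product(range(1, resource_view["dp"] + 1),
                     range(1, resource_view["mp"] + 1))
target = min(candidates, key=estimated_time)
print("target segmentation strategy (dp, mp):", target)
```
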
  • Publication number: 20220061013
    Abstract: There is provided a method comprising receiving at least one measured signal characteristic from a user equipment, the user equipment being located at a user equipment location; comparing the at least one measured signal characteristic to at least one of a plurality of signal characteristics, each signal characteristic being associated with a respective measurement point; and determining, based on the comparing, a probability that the user equipment location is a first location.
    Type: Application
    Filed: September 17, 2018
    Publication date: February 24, 2022
    Inventors: Jun WANG, Gang SHEN, Liuhai LI, Liang CHEN, Kan LIN, Zhihua WU, Chaojun XU, Jiexing GAO
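
Illustrative only: a sketch of the comparison step in 20220061013 — match a measured signal characteristic against stored characteristics at known measurement points and turn the differences into location probabilities. The Gaussian weighting and the RSSI values are hypothetical; the patent only says a probability is determined from the comparison.

```python
import math

# Hypothetical stored characteristics (RSSI in dBm) per measurement point.
fingerprints = {"room_a": -48.0, "room_b": -63.0, "hall": -71.0}

def location_probabilities(measured, sigma=5.0):
    # Weight each candidate location by how closely its stored
    # characteristic matches the measurement, then normalize.
    weights = {loc: math.exp(-((measured - ref) ** 2) / (2 * sigma ** 2))
               for loc, ref in fingerprints.items()}
    total = sum(weights.values())
    return {loc: w / total for loc, w in weights.items()}

probs = location_probabilities(measured=-50.0)
print(max(probs, key=probs.get), probs)
```
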
  • Publication number: 20220058222
    Abstract: The present disclosure provides a method of processing information, an apparatus of processing information, a method of recommending information, an electronic device, and a storage medium. The method includes: obtaining a tree structure parameter of a tree structure, wherein the tree structure is configured to index an object set used for recommendation; obtaining a classifier parameter of a classifier, wherein the classifier is configured to predict sequentially, from the top layer of the tree structure to the bottom layer, the set of nodes in each layer ranked highest by the probability of being preferred by a user, and the preference node set of each layer below the top layer is determined based on the preference node set of the layer above it; and constructing a recall model based on the tree structure parameter and the classifier parameter.
    Type: Application
    Filed: November 3, 2021
    Publication date: February 24, 2022
    Inventors: Mo CHENG, Dianhai YU, Lin MA, Zhihua WU, Daxiang DONG, Wei TANG
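
Illustrative only: a sketch of the layer-by-layer prediction in 20220058222 — at each layer of the tree index, keep the nodes the classifier ranks highest, and expand only their children in the next layer. The random scores stand in for the classifier; the tree uses a 1-indexed binary-heap layout.

```python
import random

random.seed(0)
DEPTH, TOP_K = 4, 2
_score_cache = {}

def preference(node):
    # Stand-in for the classifier's per-node preference probability.
    if node not in _score_cache:
        _score_cache[node] = random.random()
    return _score_cache[node]

frontier = [1]  # root of the tree
for _ in range(DEPTH):
    # The preference node set of each layer is determined from the
    # preference node set of the layer above it.
    children = [c for n in frontier for c in (2 * n, 2 * n + 1)]
    frontier = sorted(children, key=preference, reverse=True)[:TOP_K]

print("recalled nodes at the bottom layer:", frontier)
```
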
  • Publication number: 20220036241
    Abstract: The present disclosure discloses a method, an apparatus and a storage medium for training a deep learning framework, and relates to artificial intelligence fields such as deep learning and big data processing. The specific implementation solution is: acquiring, when a target task meets a training start condition, at least one task node in the current task node cluster that meets a preset opening condition; judging whether the number of such task nodes is greater than or equal to a preset number; synchronously training the deep learning framework of the target task on the at least one task node according to sample data if the number of nodes is greater than or equal to the preset number; and acquiring the synchronously trained target deep learning framework when the target task meets a training completion condition.
    Type: Application
    Filed: October 14, 2021
    Publication date: February 3, 2022
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Tianjian He, Dianhai Yu, Zhihua Wu, Daxiang Dong, Yanjun Ma
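
Illustrative only: a sketch of the node-count gate in 20220036241 — poll for task nodes that meet an opening condition and start synchronous training only once a preset number are available. The polling loop and the readiness simulation are hypothetical.

```python
import time

def ready_nodes(cluster):
    # Task nodes in the cluster that meet the preset opening condition.
    return [n for n in cluster if n["ready"]]

def wait_and_train(cluster, preset_number, poll_seconds=0.01):
    while True:
        ready = ready_nodes(cluster)
        if len(ready) >= preset_number:
            return f"synchronous training started on {len(ready)} nodes"
        time.sleep(poll_seconds)
        cluster[len(ready)]["ready"] = True  # simulate a node coming up

cluster = [{"ready": False} for _ in range(4)]
print(wait_and_train(cluster, preset_number=3))
```
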
  • Publication number: 20210374542
    Abstract: The invention discloses a method and an apparatus for updating parameters of a multi-task model. The method includes: obtaining a training sample set, in which the training sample set comprises a plurality of samples and the task to which each sample belongs; putting each sample sequentially into a corresponding sample queue according to the task to which it belongs; when the number of samples in a sample queue reaches the training data requirement, training the shared network layer of the multi-task model and the target sub-network layer of the tasks associated with that sample queue on those samples, so as to generate a model parameter update gradient corresponding to the tasks associated with the sample queue; and updating the parameters of the shared network layer and the target sub-network layer in a parameter server according to the model parameter update gradient.
    Type: Application
    Filed: August 9, 2021
    Publication date: December 2, 2021
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Wenhui ZHANG, Dianhai YU, Zhihua WU
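
Illustrative only: a sketch of the per-task sample queues in 20210374542 — route each sample to its task's queue, and when a queue reaches the batch requirement, produce an update for the shared layer and that task's sub-network. The counters stand in for real gradients.

```python
from collections import defaultdict

BATCH_SIZE = 3  # hypothetical training data requirement
queues = defaultdict(list)
updates = {"shared": 0, "task_a": 0, "task_b": 0}

def put_sample(task, sample):
    queues[task].append(sample)
    if len(queues[task]) >= BATCH_SIZE:
        batch = queues[task][:BATCH_SIZE]
        del queues[task][:BATCH_SIZE]
        updates["shared"] += len(batch)  # gradient for the shared network layer
        updates[task] += len(batch)      # gradient for the task's sub-network layer

for i in range(7):
    put_sample("task_a" if i % 2 == 0 else "task_b", i)
print(updates)
```
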
  • Publication number: 20210357814
    Abstract: The present disclosure provides a method and apparatus for distributed training of a model, an electronic device, and a computer readable storage medium. The method may include: performing, for each batch of training samples acquired by a distributed first trainer, model training through a distributed second trainer to obtain gradient information; updating a target parameter in a distributed built-in parameter server according to the gradient information; and performing, in response to determining that training for a preset number of training samples is completed, a parameter exchange between the distributed built-in parameter server and a distributed parameter server through the distributed first trainer, so as to update the parameters of the initial model being trained until its training is completed.
    Type: Application
    Filed: June 29, 2021
    Publication date: November 18, 2021
    Inventors: Xinxuan WU, Xuefeng YAO, Dianhai YU, Zhihua WU, Yanjun MA, Tian WU, Haifeng WANG
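
Illustrative only: a sketch of the built-in/global parameter-server split in 20210357814 — every step updates a local built-in server, and after a preset number of samples the parameters are exchanged with the global server. The averaging exchange and the linear model are hypothetical.

```python
builtin_ps = {"w": 0.0}  # distributed built-in parameter server (local, fast)
global_ps = {"w": 1.0}   # distributed parameter server (global, slower)
EXCHANGE_EVERY = 4       # hypothetical preset number of training samples

def train_step(x, y, lr=0.05):
    grad = (builtin_ps["w"] * x - y) * x
    builtin_ps["w"] -= lr * grad  # update the target parameter locally

for i, (x, y) in enumerate([(1.0, 2.0), (2.0, 4.0)] * 4, start=1):
    train_step(x, y)
    if i % EXCHANGE_EVERY == 0:
        # Parameter exchange: reconcile the two servers (toy average).
        merged = 0.5 * (builtin_ps["w"] + global_ps["w"])
        builtin_ps["w"] = global_ps["w"] = merged

print(builtin_ps, global_ps)
```
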
  • Publication number: 20210326762
    Abstract: The present disclosure discloses an apparatus and method for distributedly training a model, an electronic device, and a computer readable storage medium. The apparatus may include: a distributed reader, a distributed trainer and a distributed parameter server that are mutually independent. A reader in the distributed reader is configured to acquire a training sample, and load the acquired training sample to a corresponding trainer in the distributed trainer; the trainer in the distributed trainer is configured to perform model training based on the loaded training sample to obtain gradient information; and a parameter server in the distributed parameter server is configured to update a parameter of an initial model based on the gradient information of the distributed trainer to obtain a trained target model.
    Type: Application
    Filed: June 17, 2021
    Publication date: October 21, 2021
    Inventors: Zhihua Wu, Dianhai Yu, Xuefeng Yao, Wei Tang, Xinxuan Wu, Mo Cheng, Lin Ma, Yanjun Ma, Tian Wu, Haifeng Wang
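
Illustrative only: a sketch of the three mutually independent roles in 20210326762 — a reader that loads samples, a trainer that produces gradient information, and a parameter server that applies it. The linear model and learning rate are hypothetical.

```python
class Reader:
    # Acquires training samples and loads them to a trainer.
    def __init__(self, data):
        self.data = data
    def load(self):
        yield from self.data

class Trainer:
    # Performs model training on loaded samples to obtain gradient information.
    def gradient(self, w, x, y):
        return (w * x - y) * x

class ParameterServer:
    # Updates the model parameter based on the trainer's gradient information.
    def __init__(self):
        self.w = 0.0
    def update(self, grad, lr=0.05):
        self.w -= lr * grad

reader = Reader([(1.0, 3.0), (2.0, 6.0)] * 10)
trainer, ps = Trainer(), ParameterServer()
for x, y in reader.load():
    ps.update(trainer.gradient(ps.w, x, y))
print("trained parameter:", round(ps.w, 3))
```
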
  • Publication number: 20210209417
    Abstract: A method and an apparatus for generating a shared encoder are provided, which belong to the fields of computer technology and deep learning. The method includes: sending, by a master node, a shared encoder training instruction to child nodes, so that each child node obtains training samples based on the type of target shared encoder included in the training instruction; sending an initial parameter set of the target shared encoder to be trained to each child node after obtaining a confirmation message returned by each child node; obtaining the updated parameter set of the target shared encoder returned by each child node; and determining a target parameter set corresponding to the target shared encoder based on a first preset rule and the updated parameter sets returned by the child nodes.
    Type: Application
    Filed: March 23, 2021
    Publication date: July 8, 2021
    Inventors: Daxiang DONG, Wenhui ZHANG, Zhihua WU, Dianhai YU, Yanjun MA, Haifeng WANG
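
Illustrative only: a sketch of the master/child rounds in 20210209417 — the master broadcasts an initial parameter set, each child returns an updated set, and the master aggregates them. Element-wise averaging stands in for the unspecified "first preset rule"; the child update is a toy.

```python
def child_update(params, child_id):
    # Stand-in for a child node's local training on its own samples.
    return [p + 0.1 * (child_id + 1) for p in params]

def master_round(initial_params, n_children=3):
    # Broadcast the initial parameter set, collect updated sets, aggregate.
    returned = [child_update(list(initial_params), c) for c in range(n_children)]
    return [sum(vals) / n_children for vals in zip(*returned)]

target_parameter_set = master_round([0.0, 0.5, 1.0])
print("target parameter set:", target_parameter_set)
```
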
  • Publication number: 20210199287
    Abstract: Processes and apparatus are described for removing impurities from solid biomass while preserving hydrogen and carbon content. Examples are provided of processes using acidified aqueous solutions in a countercurrent extraction process that includes the pneumatic transport of slurries between process units, or a mechanical dewatering step, or both, to produce a washed biomass suitable for various upgrading and conversion processes. Compositions related to the processes are also described.
    Type: Application
    Filed: August 13, 2020
    Publication date: July 1, 2021
    Inventors: Gregory Coil, Charles M. Sorensen, William McDonald, William Igoe, Zhihua Wu, Robert McIntire, Steven Striziver
  • Patent number: 10674249
    Abstract: An external speaker assembly and an audio apparatus are disclosed. The external speaker assembly comprises a housing, a speaker unit and a driving circuit; the external speaker assembly is detachably connected to an audio device; when the external speaker assembly is connected to the audio device, a sealed cavity is formed by the housing and a rear cover of the audio device; the speaker unit and the driving circuit are received in the sealed cavity; the driving circuit is configured to connect to the audio device, and receive an audio signal from the audio device to drive the speaker unit to produce sounds.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: June 2, 2020
    Assignee: JRD COMMUNICATION (SHENZHEN) LTD
    Inventors: Zhihua Wu, Xiulu Jin, Linfang Li, Wenfei Wu, Siqin Feng
  • Publication number: 20200092635
    Abstract: An external speaker assembly and an audio apparatus are disclosed. The external speaker assembly comprises a housing, a speaker unit and a driving circuit; the external speaker assembly is detachably connected to an audio device; when the external speaker assembly is connected to the audio device, a sealed cavity is formed by the housing and a rear cover of the audio device; the speaker unit and the driving circuit are received in the sealed cavity; the driving circuit is configured to connect to the audio device, and receive an audio signal from the audio device to drive the speaker unit to produce sounds.
    Type: Application
    Filed: April 7, 2017
    Publication date: March 19, 2020
    Applicant: JRD COMMUNICATION (SHENZHEN) LTD
    Inventors: Zhihua Wu, Xiulu Jin, Linfang Li, Wenfei Wu, Siqin Feng
  • Patent number: D954502
    Type: Grant
    Filed: May 16, 2021
    Date of Patent: June 14, 2022
    Inventor: Zhihua Wu
  • Patent number: D1016710
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: March 5, 2024
    Assignee: CITIC Dicastal Co., Ltd.
    Inventors: Zhichong Liu, Zuo Xu, Hanqi Wu, Zhihua Zhu
  • Patent number: D1016711
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: March 5, 2024
    Assignee: CITIC Dicastal Co., Ltd.
    Inventors: Zhichong Liu, Zuo Xu, Hanqi Wu, Zhihua Zhu