Patents by Inventor Hongsheng Wang

Hongsheng Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230351212
    Abstract: The disclosure provides a semi-supervised method and apparatus for public opinion text analysis. The semi-supervised method includes: first acquiring a public opinion data set, and preprocessing the data set; performing a data augmentation algorithm on preprocessed samples to generate data augmented samples; generating category labels for the unlabeled samples in the data set in an unsupervised extraction and clustering manner; calculating similarities of word vector latent semantic spaces and performing linear interpolation operation to generate, according to an operation result, similarity interpolation samples; constructing a final training sample set; adopting a semi-supervised method, inputting the final training sample set into a pre-trained language model to train the model to obtain a classification model; and predicting the test set by using the classification model to obtain a classification result.
    Type: Application
    Filed: June 10, 2022
    Publication date: November 2, 2023
    Inventors: Hongsheng WANG, Qing LIAO, Hujun BAO, Guang CHEN
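The interpolation step in the entry above resembles mixup applied in a word-vector latent space. Below is a minimal sketch under that assumption; the function name, the Beta-distribution mixing coefficient, and the 300-dimensional vectors are illustrative choices, not details from the patent.

```python
import numpy as np

def interpolate_samples(x_a, x_b, y_a, y_b, alpha=0.75):
    """Mixup-style linear interpolation in the word-vector latent space."""
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)               # keep the result closer to sample A
    x_mix = lam * x_a + (1.0 - lam) * x_b   # interpolated representation
    y_mix = lam * y_a + (1.0 - lam) * y_b   # interpolated (soft) category label
    return x_mix, y_mix

# Blend a labeled sample with a semantically similar unlabeled one
x_mix, y_mix = interpolate_samples(np.random.rand(300), np.random.rand(300),
                                   np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```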
  • Patent number: 11805025
    Abstract: The present disclosure provides a neural network computing-oriented modeling method and apparatus for distributed data routing. The method includes the following steps: S1, designing the distributed attribute of a physical tensor: abstracting a mapping relationship between a logic tensor and the physical tensor into three distributed attributes including a broadcast attribute, a scatter attribute and a local reduction attribute; S2, deducing the distributed attribute of an output tensor: specifying the distributed attribute of an input tensor, and then deducing the legal distributed attribute of the output tensor according to the known distributed attribute of the input tensor; and S3, judging, according to the distributed attribute situation, whether an intermediate communication primitive needs to be inserted to obtain the distributed attribute of a local physical tensor.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: October 31, 2023
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Shuibing He, Hujun Bao, Guang Chen
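The three distributed attributes in this abstract behave like the split/broadcast/partial-sum notation used by some distributed deep-learning frameworks. The toy rule table below shows one way the deduction of step S2 and the primitive-insertion check of step S3 could fit together for a matrix multiplication; the attribute strings and rules are assumptions for illustration.

```python
# Toy deduction for Y = X @ W.
# "S(d)" = scatter along dim d, "B" = broadcast, "P" = local (partial) reduction.

def deduce_matmul(attr_x, attr_w):
    """Return (output attribute, communication primitive to insert or None)."""
    if attr_x == "S(0)" and attr_w == "B":
        return "S(0)", None         # row-scattered X: output rows scattered
    if attr_x == "B" and attr_w == "S(1)":
        return "S(1)", None         # column-scattered W: output columns scattered
    if attr_x == "S(1)" and attr_w == "S(0)":
        return "P", "all-reduce"    # split contraction dim: partial sums need reducing
    raise ValueError(f"no deduction rule for ({attr_x}, {attr_w})")

print(deduce_matmul("S(1)", "S(0)"))    # ('P', 'all-reduce'): insert a primitive
```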
  • Publication number: 20230334334
    Abstract: The disclosure discloses a method of executing a dynamic graph for neural network computation and an apparatus thereof. The method of executing the dynamic graph includes the following steps: S1: constructing and distributing an operator and a tensor; S2: deducing an operator executing process by an operator interpreter; S3: constructing an instruction of a virtual machine at runtime by the operator interpreter; S4: sending the instruction to the virtual machine at runtime by the operator interpreter; S5: scheduling the instruction by the virtual machine; and S6: releasing an executed instruction by the virtual machine. According to the method of executing a dynamic graph for neural network computation and the apparatus thereof provided by the disclosure, the runtime is abstracted as a virtual machine; the virtual machine acquires, through the interpreter, the sub-graph constructed by the user at each step in real time, and then schedules, issues, and executes each sub-graph.
    Type: Application
    Filed: June 6, 2022
    Publication date: October 19, 2023
    Inventors: Hongsheng WANG, Hujun BAO, Guang CHEN
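Steps S2-S6 above describe an interpreter that lowers each dispatched operator to a runtime instruction, which a virtual machine then schedules and releases. A minimal sketch of that control flow follows; the class and function names are hypothetical, not the patented implementation.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Instruction:
    op_name: str
    inputs: list = field(default_factory=list)
    done: bool = False

def interpret(op_name, *inputs):
    """Operator interpreter: build a runtime instruction (S2-S3)."""
    return Instruction(op_name, list(inputs))

class VirtualMachine:
    def __init__(self):
        self.queue = deque()

    def receive(self, instr):
        self.queue.append(instr)        # S4: interpreter sends the instruction

    def run(self):
        while self.queue:               # S5: schedule in dispatch order
            instr = self.queue.popleft()
            print(f"executing {instr.op_name}")
            instr.done = True           # S6: release the executed instruction

vm = VirtualMachine()
vm.receive(interpret("matmul", "x", "w"))
vm.receive(interpret("relu", "y"))
vm.run()
```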
  • Patent number: 11790264
    Abstract: The present disclosure is directed to methods and systems for knowledge distillation.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: October 17, 2023
    Assignee: GOOGLE LLC
    Inventors: Thomas J. Duerig, Hongsheng Wang, Scott Alexander Rudkin
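The abstract is a single sentence, so for orientation only: the textbook knowledge-distillation objective (Hinton-style soft targets with temperature) is sketched below. This is the generic technique, not necessarily the specific method this Google patent claims.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend temperature-softened teacher targets with ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                          # T^2 keeps gradient scale comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10),
                         torch.randint(0, 10, (8,)))
```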
  • Patent number: 11782723
    Abstract: Disclosed are an intermediate representation method and apparatus for parallel execution of graph computation. The method includes the following steps: S1: compiling a neural network into a computational graph on a computer; S2: defining branch states of tensor variables in the computational graph; S3: defining a data dependency relationship of the tensor variables in the computational graph; S4: defining a control dependency relationship of the tensor variables in the computational graph; S5: building a data dependency relationship graph of the tensor variables in the computational graph; S6: building a control dependency relationship graph of the tensor variables in the computational graph; and S7: transforming control dependencies into data dependencies. The present application derives, based on the dependency relationship, a parallel computing method that can execute the branch threads in parallel in the global computational graph, and optimizes the compilation efficiency of the computational graph.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: October 10, 2023
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen, Lingfang Zeng, Aimin Pan
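Step S7, transforming control dependencies into data dependencies, can be pictured as threading a zero-size token tensor from the controlling node into the controlled node, so an ordinary dataflow scheduler preserves the ordering. The sketch below illustrates that idea with invented names.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.data_deps = []    # edges that carry tensors
        self.ctrl_deps = []    # pure ordering (control) edges

def lower_control_deps(nodes):
    """Rewrite each control edge as a data edge carrying a zero-size token."""
    for node in nodes:
        for src in node.ctrl_deps:
            node.data_deps.append(f"{src.name}:token")  # ordering now visible to dataflow
        node.ctrl_deps.clear()
    return nodes

branch, merge = Node("cond_branch"), Node("merge")
merge.ctrl_deps.append(branch)
lower_control_deps([branch, merge])
print(merge.data_deps)    # ['cond_branch:token']
```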
  • Publication number: 20230274129
    Abstract: The present disclosure discloses a method for execution of a computational graph in a neural network model and an apparatus thereof, including: creating task execution bodies on a native machine according to a physical computational graph compiled and generated by a deep learning framework, and designing a solution for allocating a plurality of idle memory blocks to each task execution body, so that the entire computational graph participates in deep learning training tasks of different batches of data in a pipelining and parallelizing manner.
    Type: Application
    Filed: March 29, 2022
    Publication date: August 31, 2023
    Inventors: Hongsheng WANG, Hujun BAO, Guang CHEN, Lingfang ZENG, Hongcai CHENG, Yong LI, Jian ZHU, Huanbo ZHENG
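The idle-memory-block scheme above lets one task execution body start producing batch N+1 while downstream bodies still hold the blocks for batch N; when the free list is empty, the producer stalls until a consumer returns a block. A toy free-list sketch under that reading (all names are assumptions):

```python
from collections import deque

class BlockPool:
    """Fixed set of reusable output buffers owned by one task execution body."""
    def __init__(self, num_blocks, block_size):
        self.free = deque(bytearray(block_size) for _ in range(num_blocks))

    def acquire(self):
        return self.free.popleft() if self.free else None   # None: body must stall

    def release(self, block):
        self.free.append(block)     # consumer done: producer can resume

pool = BlockPool(num_blocks=2, block_size=1024)
b0 = pool.acquire()                 # batch 0 in flight downstream
b1 = pool.acquire()                 # batch 1 already being produced (pipelining)
assert pool.acquire() is None       # batch 2 waits for a released block
pool.release(b0)
```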
  • Publication number: 20230259774
    Abstract: The disclosure discloses a method of neural network model computation-oriented intermediate representation and apparatus thereof. The method includes the following steps: S1, parsing an input model file so as to acquire topological structure information of a neural network; S2, constructing a logical computation graph; S21, inferring physical layout information of each operator in the logical computation graph; S22, inferring meta attributes of each operator in the logical computation graph; S23, inferring description information of input and output logical tensors of each operator in the logical computation graph; S3, constructing a physical computation graph; S31, generating a physical computation graph, etc.
    Type: Application
    Filed: April 6, 2022
    Publication date: August 17, 2023
    Inventors: Hongsheng WANG, Wei HUA, Weiqiang JIA, Hujun BAO
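Steps S21-S23 amount to propagating layout and tensor descriptions operator by operator in topological order. A schematic shape/dtype inference pass is shown below; the rule table and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TensorDesc:
    shape: tuple
    dtype: str = "float32"

def infer_matmul(inputs):
    (m, k), (k2, n) = inputs[0].shape, inputs[1].shape
    assert k == k2, "contraction dims must agree"
    return TensorDesc((m, n), inputs[0].dtype)

INFER_RULES = {"matmul": infer_matmul}

def infer_logical_graph(ops, descs):
    """S23: infer output tensor descriptions in topological order.

    ops: list of (op kind, input tensor names, output tensor name)."""
    for kind, in_names, out_name in ops:
        descs[out_name] = INFER_RULES[kind]([descs[n] for n in in_names])
    return descs

descs = {"x": TensorDesc((32, 128)), "w": TensorDesc((128, 10))}
print(infer_logical_graph([("matmul", ["x", "w"], "y")], descs)["y"])
```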
  • Patent number: 11714995
    Abstract: Disclosed are a method and apparatus for distributed training adaptation in a deep learning framework and an AI accelerator card. The method includes the following steps: S1: the deep learning framework supports single-card configuration in a newly added AI accelerator card, and sub-steps thereof are as follows: S11: the deep learning framework supports new hardware; S12: the deep learning framework supports a device thread of the new hardware; S13: the deep learning framework supports a memory operation of the new hardware; and S14: the deep learning framework supports an operator kernel function of the new hardware; S2: the deep learning framework supports multi-card configuration in the newly added AI accelerator card; S3: the deep learning framework supports tensor segmentation and multi-card distribution; and S4: the deep learning framework supports multi-card collective communication in the newly added AI accelerator card.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: August 1, 2023
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Wei Hua, Weiqiang Jia
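Sub-steps S11-S14 follow the familiar backend-registration pattern: a framework exposes hooks for a new device type's threads, memory operations, and operator kernels. The registry sketch below illustrates the pattern generically; it is not the actual API of any framework or of this patent.

```python
DEVICE_REGISTRY = {}

def register_device(name, *, alloc, free, launch_kernel):
    """Register a new card's memory ops (S13) and kernel launcher (S14)."""
    DEVICE_REGISTRY[name] = {"alloc": alloc, "free": free, "launch": launch_kernel}

register_device(
    "new_ai_card",                                           # hypothetical device
    alloc=lambda size: bytearray(size),                      # stand-in device malloc
    free=lambda buf: None,                                   # toy no-op free
    launch_kernel=lambda kernel, *bufs: print(f"run {kernel} on new_ai_card"),
)

dev = DEVICE_REGISTRY["new_ai_card"]
buf = dev["alloc"](4096)             # S13: memory operation on the new hardware
dev["launch"]("relu_forward", buf)   # S14: operator kernel function
```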
  • Patent number: 11699290
    Abstract: Disclosed are a pedestrian re-identification method and apparatus based on local feature attention. The method includes the following steps: S1: obtaining an original surveillance video image data set, and dividing the original surveillance video image data set into a training set and a test set in proportion; and S2: performing image enhancement on the original surveillance video image training set to obtain enhanced images, and converting the enhanced images into sequence data.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: July 11, 2023
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen
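Step S2's conversion of enhanced images into sequence data matches the common patch-embedding step of attention-based re-identification models. A minimal numpy sketch under that assumption; the 16-pixel patch size and image dimensions are illustrative.

```python
import numpy as np

def image_to_sequence(img, patch=16):
    """Split an HxWxC image into a sequence of flattened patches (S2)."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    rows, cols = h // patch, w // patch
    return (img.reshape(rows, patch, cols, patch, c)
               .transpose(0, 2, 1, 3, 4)          # group pixels by patch
               .reshape(rows * cols, patch * patch * c))

tokens = image_to_sequence(np.zeros((256, 128, 3)))   # typical re-ID crop size
print(tokens.shape)                                   # (128, 768): ready for attention
```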
  • Publication number: 20230177312
    Abstract: Disclosed are a method and apparatus for distributed training adaptation in a deep learning framework and an AI accelerator card. The method includes the following steps: S1: the deep learning framework supports single-card configuration in a newly added AI accelerator card, and sub-steps thereof are as follows: S11: the deep learning framework supports new hardware; S12: the deep learning framework supports a device thread of the new hardware; S13: the deep learning framework supports a memory operation of the new hardware; and S14: the deep learning framework supports an operator kernel function of the new hardware; S2: the deep learning framework supports multi-card configuration in the newly added AI accelerator card; S3: the deep learning framework supports tensor segmentation and multi-card distribution; and S4: the deep learning framework supports multi-card collective communication in the newly added AI accelerator card.
    Type: Application
    Filed: May 9, 2022
    Publication date: June 8, 2023
    Inventors: Hongsheng WANG, Hujun BAO, Wei HUA, Weiqiang JIA
  • Patent number: 11669741
    Abstract: Disclosed are a method and platform for meta-knowledge fine-tuning based on domain-invariant features. According to the method, highly transferable common knowledge, i.e., domain-invariant features, is learnt from different data sets of the same kind of tasks; the common domain features learnt in the network set across the domains corresponding to those data sets are then fine-tuned so as to adapt quickly to any different domain. According to the present application, the parameter initialization ability and generalization ability of the universal language model for the same kind of tasks are improved, and finally a common compression framework of the universal language model for the same kind of downstream tasks is obtained through fine-tuning. In the meta-knowledge fine-tuning network, a loss function over the domain-invariant features is designed in the present application, and domain-independent universal knowledge is learnt.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: June 6, 2023
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Haijun Shan, Shengjian Hu
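The abstract names a loss function over domain-invariant features without giving its form. A generic stand-in is a maximum mean discrepancy (MMD) penalty that pulls per-domain feature batches toward a shared representation; the sketch below uses that stand-in with invented names.

```python
import torch

def mmd_loss(feat_a, feat_b):
    """Linear-kernel MMD between two domains' feature batches."""
    return (feat_a.mean(dim=0) - feat_b.mean(dim=0)).pow(2).sum()

def invariance_penalty(feats_by_domain, lam=0.1):
    """Pairwise domain-invariance penalty over all domains' features."""
    return lam * sum(
        mmd_loss(feats_by_domain[i], feats_by_domain[j])
        for i in range(len(feats_by_domain))
        for j in range(i + 1, len(feats_by_domain)))

task_loss = torch.tensor(0.7)        # placeholder downstream-task loss
total = task_loss + invariance_penalty([torch.randn(8, 64), torch.randn(8, 64)])
```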
  • Patent number: 11615247
    Abstract: Disclosed are a labeling method and apparatus for named entity recognition of a legal instrument. The method includes the steps: step S1: acquiring a legal text, and transforming the legal text into an index table; step S2: outputting a sentence feature encoding result; step S3: performing training and prediction; step S4: obtaining a set; step S5: obtaining a multi-head score transfer matrix; step S6: obtaining a score transfer matrix corresponding to the legal text; step S7: determining a recognized nested entity; and S8: constructing an entity labeling template by using the recognized nested entity. According to the present disclosure, a user can complete recognition of nested entity labels by changing the input of the BERT model, and the multi-head selection matrix labeling approach of the present disclosure largely alleviates the difficulty of recognizing long texts and nested entities in an NER task.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: March 28, 2023
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Guang Chen, Chao Ma, Qing Liao
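The multi-head score transfer matrix can be pictured as scoring every (start, end) token pair per entity type, which is what lets nested spans be recognized independently of one another. A schematic biaffine-style scorer is sketched below; the hidden size, head dimension, and class names are assumptions, not the patented design.

```python
import torch
import torch.nn as nn

class MultiHeadSpanScorer(nn.Module):
    """Score every (start, end) pair per entity type; nested spans stay separable."""
    def __init__(self, hidden=768, num_types=5, head_dim=64):
        super().__init__()
        self.start_proj = nn.Linear(hidden, num_types * head_dim)
        self.end_proj = nn.Linear(hidden, num_types * head_dim)
        self.num_types, self.head_dim = num_types, head_dim

    def forward(self, h):                 # h: (seq_len, hidden), e.g. BERT output
        L = h.size(0)
        s = self.start_proj(h).view(L, self.num_types, self.head_dim)
        e = self.end_proj(h).view(L, self.num_types, self.head_dim)
        # scores[t, i, j]: token i starts and token j ends an entity of type t
        return torch.einsum("itd,jtd->tij", s, e) / self.head_dim ** 0.5

scores = MultiHeadSpanScorer()(torch.randn(40, 768))
print(scores.shape)    # (5, 40, 40); spans above a threshold become entities
```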
  • Publication number: 20230062827
    Abstract: A system that includes: a force sensor assembly adapted to monitor a load as applied on a subject’s knee joint when the force sensor assembly remains in direct contact with the subject’s lower extremity and the load is monitored from inside a main magnet of an MRI scanner; a mobile unit comprising tracks configured to adjust a position of the force sensor assembly; a stationary base on which the mobile unit and the force sensor assembly are located, the mobile unit translatable solely axially on the stationary base; and a processor coupled to the force sensor assembly and programmed to read information encoding the load being monitored by the force sensor assembly, wherein an MRI scan of the knee joint is initiated only when a predetermined load has been applied to the subject’s knee joint for a pre-determined period of time.
    Type: Application
    Filed: August 22, 2022
    Publication date: March 2, 2023
    Inventors: Suzanne Maher, Scott A. Rodeo, Russell F. Warren, Hollis Potter, Matthew F. Koff, Hongsheng Wang
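The trigger condition, scanning only after a predetermined load has been held for a predetermined time, reduces to a simple monitoring loop. The sketch below invents a sensor-reading callback for illustration; the thresholds are placeholders, not values from the patent.

```python
import time

def wait_for_stable_load(read_load, target_n=400.0, hold_s=30.0, poll_s=0.1):
    """Return once read_load() has stayed at or above target_n for hold_s seconds."""
    held_since = None
    while True:
        if read_load() >= target_n:
            held_since = held_since or time.monotonic()
            if time.monotonic() - held_since >= hold_s:
                return                  # condition met: the MRI scan may start
        else:
            held_since = None           # load dipped below target: restart the clock
        time.sleep(poll_s)

loads = iter([0.0, 450.0, 450.0, 450.0])                      # fake sensor trace
wait_for_stable_load(lambda: next(loads), target_n=400.0, hold_s=0.2, poll_s=0.1)
```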
  • Patent number: 11526774
    Abstract: Disclosed is a method for automatically compressing multi-task oriented pre-trained language model and a platform thereof. According to the method, a meta-network of a structure generator is designed, a knowledge distillation coding vector is constructed based on a knowledge distillation method of Transformer layer sampling, and a distillation structure model corresponding to a currently input coding vector is generated by using the structure generator; at the same time, a Bernoulli distribution sampling method is provided for training the structure generator; in each iteration, each encoder unit is transferred by Bernoulli distribution sampling to form a corresponding coding vector; by changing the coding vector input to the structure generator and a small batch of training data, the structure generator and the corresponding distillation structure are jointly trained, and a structure generator capable of generating weights for different distillation structures can be acquired.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: December 13, 2022
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Haijun Shan, Jiaqing Fu
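The knowledge-distillation coding vector amounts to a 0/1 mask over Transformer encoder layers drawn from a Bernoulli distribution; the structure generator then emits weights for the student that keeps the sampled layers. The sampling step is easy to sketch (the generator itself is omitted, and all names are illustrative):

```python
import random

def sample_coding_vector(num_layers=12, p=0.5):
    """Bernoulli-sample which encoder layers the distilled student keeps."""
    return [1 if random.random() < p else 0 for _ in range(num_layers)]

def build_student(teacher_layers, coding_vector):
    """Distillation structure: the teacher layers selected by the coding vector."""
    return [layer for layer, keep in zip(teacher_layers, coding_vector) if keep]

teacher = [f"encoder_{i}" for i in range(12)]
code = sample_coding_vector()           # a new coding vector each iteration
student = build_student(teacher, code)
print(code, "->", len(student), "layers kept")
```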
  • Patent number: 11501171
    Abstract: Disclosed are an automatic compression method and platform for a pre-trained language model based on multilevel knowledge distillation. The method includes the following steps: step 1, constructing multilevel knowledge distillation, and distilling a knowledge structure of a large model at three different levels: a self-attention unit, a hidden layer state and an embedded layer; step 2, training a knowledge distillation network of meta-learning to generate a general compression architecture of a plurality of pre-trained language models; and step 3, searching for an optimal compression structure based on an evolutionary algorithm.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: November 15, 2022
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Enping Wang, Zailiang Yu
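Distillation at the three levels named in step 1 is commonly realized as three aligned losses: one on attention maps, one on hidden states, one on embeddings. The combination below is the generic pattern with placeholder weights, not necessarily the patented formulation.

```python
import torch
import torch.nn.functional as F

def multilevel_kd_loss(student, teacher, w=(1.0, 1.0, 1.0)):
    """student/teacher: dicts of aligned 'attn', 'hidden', 'embed' tensors."""
    attn = F.mse_loss(student["attn"], teacher["attn"])        # self-attention unit
    hidden = F.mse_loss(student["hidden"], teacher["hidden"])  # hidden layer state
    embed = F.mse_loss(student["embed"], teacher["embed"])     # embedded layer
    return w[0] * attn + w[1] * hidden + w[2] * embed

s = {k: torch.randn(8, 12, 64) for k in ("attn", "hidden", "embed")}
t = {k: torch.randn(8, 12, 64) for k in ("attn", "hidden", "embed")}
print(multilevel_kd_loss(s, t))
```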
  • Patent number: 11432734
    Abstract: A system that includes: a force sensor assembly adapted to monitor a load as applied on a subject's knee joint when the force sensor assembly remains in direct contact with the subject's lower extremity and the load is monitored from inside a main magnet of an MRI scanner; a mobile unit comprising tracks configured to adjust a position of the force sensor assembly; a stationary base on which the mobile unit and the force sensor assembly are located, the mobile unit translatable solely axially on the stationary base; and a processor coupled to the force sensor assembly and programmed to read information encoding the load being monitored by the force sensor assembly, wherein an MRI scan of the knee joint is initiated only when a pre-determined load has been applied to the subject's knee joint for a pre-determined period of time.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: September 6, 2022
    Assignee: NEW YORK SOCIETY FOR THE RELIEF OF THE RUPTURED AND CRIPPLED, MAINTAINING THE HOSPITAL FOR SPECIAL SURGERY
    Inventors: Suzanne Maher, Scott A. Rodeo, Russell F. Warren, Hollis Potter, Matthew F. Koff, Hongsheng Wang
  • Publication number: 20220222529
    Abstract: Disclosed are a method and platform for meta-knowledge fine-tuning based on domain-invariant features. According to the method, highly transferable common knowledge, i.e., domain-invariant features, is learnt from different data sets of the same kind of tasks; the common domain features learnt in the network set across the domains corresponding to those data sets are then fine-tuned so as to adapt quickly to any different domain. According to the present application, the parameter initialization ability and generalization ability of the universal language model for the same kind of tasks are improved, and finally a common compression framework of the universal language model for the same kind of downstream tasks is obtained through fine-tuning. In the meta-knowledge fine-tuning network, a loss function over the domain-invariant features is designed in the present application, and domain-independent universal knowledge is learnt.
    Type: Application
    Filed: February 18, 2022
    Publication date: July 14, 2022
    Inventors: Hongsheng WANG, Haijun SHAN, Shengjian HU
  • Publication number: 20220198276
    Abstract: Disclosed are an automatic compression method and platform for a pre-trained language model based on multilevel knowledge distillation. The method includes the following steps: step 1, constructing multilevel knowledge distillation, and distilling a knowledge structure of a large model at three different levels: a self-attention unit, a hidden layer state and an embedded layer; step 2, training a knowledge distillation network of meta-learning to generate a general compression architecture of a plurality of pre-trained language models; and step 3, searching for an optimal compression structure based on an evolutionary algorithm.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 23, 2022
    Inventors: Hongsheng WANG, Enping WANG, Zailiang YU
  • Publication number: 20220188658
    Abstract: Disclosed is a method for automatically compressing multi-task oriented pre-trained language model and a platform thereof. According to the method, a meta-network of a structure generator is designed, a knowledge distillation coding vector is constructed based on a knowledge distillation method of Transformer layer sampling, and a distillation structure model corresponding to a currently input coding vector is generated by using the structure generator; at the same time, a Bernoulli distribution sampling method is provided for training the structure generator; in each iteration, each encoder unit is transferred by Bernoulli distribution sampling to form a corresponding coding vector; by changing the coding vector input to the structure generator and a small batch of training data, the structure generator and the corresponding distillation structure are jointly trained, and a structure generator capable of generating weights for different distillation structures can be acquired.
    Type: Application
    Filed: December 28, 2021
    Publication date: June 16, 2022
    Inventors: Hongsheng WANG, Haijun SHAN, Jiaqing FU
  • Patent number: 11354499
    Abstract: Disclosed is a meta-knowledge fine tuning method and platform for a multi-task language model. The method obtains highly transferable shared knowledge, that is, meta-knowledge, on different data sets of tasks of the same category, and makes the learning processes of the tasks of the same category that correspond to different data sets and are in different domains interrelate and mutually reinforce, so as to improve the fine tuning effect of downstream tasks of the same category on data sets of different domains in the application of the language model, and improve the parameter initialization ability and the generalization ability of a general language model for the tasks of the same category.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: June 7, 2022
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Haijun Shan, Shengjian Hu
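The cross-dataset "interrelation and mutual reinforcement" described above can be approximated by a Reptile-style meta-update: fine-tune a copy of the shared weights briefly on each same-category dataset, then move the shared weights toward each adapted copy. This is a generic stand-in for the patented procedure, with toy quadratic tasks playing the role of different-domain datasets.

```python
import numpy as np

def meta_fine_tune(shared, grad_fns, inner_steps=3, inner_lr=0.1, meta_lr=0.5):
    """Pull shared weights toward weights adapted on each same-category dataset."""
    for grad_fn in grad_fns:                    # one gradient oracle per domain
        adapted = shared.copy()
        for _ in range(inner_steps):
            adapted -= inner_lr * grad_fn(adapted)      # task-specific fine-tuning
        shared = shared + meta_lr * (adapted - shared)  # meta-knowledge update
    return shared

# Toy quadratics with different optima stand in for different-domain datasets
grad_fns = [lambda w, c=c: 2.0 * (w - c) for c in (np.array([1.0]), np.array([-1.0]))]
print(meta_fine_tune(np.zeros(1), grad_fns))
```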