Patents by Inventor Dongchao Wen

Dongchao Wen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12249183
    Abstract: The present disclosure discloses an apparatus and a method for detecting a facial pose, an image processing system, and a storage medium. The apparatus comprises: an obtaining unit to obtain at least three keypoints of at least one face from an input image based on a pre-generated neural network, wherein coordinates of the keypoints obtained via a layer in the neural network for obtaining coordinates are three-dimensional coordinates; and a determining unit to determine, for the at least one face, a pose of the face based on the obtained keypoints, wherein the determined facial pose includes at least an angle. According to the present disclosure, the accuracy of the three-dimensional coordinates of the facial keypoints can be improved, and thus the detection precision of a facial pose can be improved.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: March 11, 2025
    Assignee: Canon Kabushiki Kaisha
    Inventors: Qiao Wang, Deyu Wang, Kotaro Kitajima, Naoko Watazawa, Tsewei Chen, Wei Tao, Dongchao Wen
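    Illustrative sketch: a minimal Python/NumPy illustration, not the patented method, of how a pose angle can be read off three 3-D facial keypoints (here assumed to be the two eye centers and the nose tip); the keypoint choice and the angle conventions are assumptions.

        import numpy as np

        def pose_from_keypoints(left_eye, right_eye, nose_tip):
            """Estimate (yaw, pitch, roll) in degrees from three 3-D keypoints."""
            left_eye, right_eye, nose_tip = (np.asarray(p, dtype=float)
                                             for p in (left_eye, right_eye, nose_tip))

            # x-axis of the face frame: from the left eye toward the right eye
            x_axis = right_eye - left_eye
            x_axis /= np.linalg.norm(x_axis)

            # rough "down" direction: from the midpoint of the eyes toward the nose tip
            down = nose_tip - (left_eye + right_eye) / 2.0
            down /= np.linalg.norm(down)

            # face normal, pointing out of the face
            normal = np.cross(x_axis, down)
            normal /= np.linalg.norm(normal)

            yaw = np.degrees(np.arctan2(normal[0], normal[2]))              # turn about the vertical axis
            pitch = np.degrees(np.arcsin(np.clip(-normal[1], -1.0, 1.0)))   # nod up/down
            roll = np.degrees(np.arctan2(x_axis[1], x_axis[0]))             # in-plane head tilt
            return yaw, pitch, roll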
  • Publication number: 20250013870
    Abstract: The present disclosure provides a training and application method of a multi-layer neural network model, apparatus and a storage medium. In a forward propagation of the multi-layer neural network model, the number of input feature maps is expanded and a data computation is performed by using the expanded input feature maps.
    Type: Application
    Filed: September 23, 2024
    Publication date: January 9, 2025
    Inventors: Hongxing Gao, Wei Tao, Tse-Wei Chen, Dongchao Wen, Junjie Liu
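    Illustrative sketch: a minimal illustration of the idea in the abstract above, under assumptions; the number of input feature maps is expanded (here simply by repeating each channel) before the convolution consumes them. The repeat-based expansion rule and the PyTorch layer are illustrative choices, not the patented scheme.

        import torch
        import torch.nn as nn

        class ExpandedInputConv(nn.Module):
            """Convolution whose input feature maps are expanded in the forward pass."""

            def __init__(self, in_channels, out_channels, expand_factor=2, kernel_size=3):
                super().__init__()
                self.expand_factor = expand_factor
                # the convolution is sized for the expanded number of input feature maps
                self.conv = nn.Conv2d(in_channels * expand_factor, out_channels,
                                      kernel_size, padding=kernel_size // 2)

            def forward(self, x):
                # (N, C, H, W) -> (N, C * expand_factor, H, W)
                x = x.repeat_interleave(self.expand_factor, dim=1)
                return self.conv(x)

        # y = ExpandedInputConv(16, 32)(torch.randn(1, 16, 56, 56))  # -> shape (1, 32, 56, 56)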
  • Patent number: 12165398
    Abstract: The present disclosure relates to a training method and apparatus for an object recognition model. Provided is a training sample optimization apparatus for a neural network model for object recognition, comprising: for each training sample in a training sample database, a fluctuation determination unit configured to determine a fluctuation of the model prediction of the training sample relative to a corresponding labeled identity of the training sample in a case of training the neural network model; and an optimization unit configured to determine whether the training sample can be used for training of the neural network model in the next training epoch, based on the fluctuation of the training sample.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: December 10, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventors: Dongyue Zhao, Dongchao Wen, Xian Li, Weihong Deng, Jiani Hu
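    Illustrative sketch: one simple way to realize the fluctuation idea above (the measure, window length and threshold are assumptions, not values from the patent); a sample's prediction/label agreement is recorded each epoch, and samples whose agreement flips too often are excluded from the next training epoch.

        from collections import defaultdict, deque

        class FluctuationSampleSelector:
            def __init__(self, window=5, max_fluctuation=0.6):
                self.history = defaultdict(lambda: deque(maxlen=window))
                self.max_fluctuation = max_fluctuation

            def record(self, sample_id, predicted_identity, labeled_identity):
                # 1 if the model agreed with the labeled identity this epoch, else 0
                self.history[sample_id].append(int(predicted_identity == labeled_identity))

            def fluctuation(self, sample_id):
                h = list(self.history[sample_id])
                if len(h) < 2:
                    return 0.0
                # fraction of epoch-to-epoch flips in the agreement signal
                return sum(a != b for a, b in zip(h, h[1:])) / (len(h) - 1)

            def use_in_next_epoch(self, sample_id):
                return self.fluctuation(sample_id) <= self.max_fluctuation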
  • Patent number: 12147901
    Abstract: The present disclosure provides a training and application method of a multi-layer neural network model, apparatus and a storage medium. In a forward propagation of the multi-layer neural network model, the number of input feature maps is expanded and a data computation is performed by using the expanded input feature maps.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: November 19, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hongxing Gao, Wei Tao, Tsewei Chen, Dongchao Wen, Junjie Liu
  • Patent number: 12026974
    Abstract: The present invention relates to a method and apparatus for training a neural network for object recognition. The training method includes inputting a training image set containing an object to be recognized, dividing the image samples in the training image set into simple samples and hard samples, for each kind of the image sample and the variation image sample, performing a transitive transfer, calculating a distillation loss of the transferred student feature of the image sample relative to a teacher feature extracted from the corresponding image sample of the other kind, classifying the image sample, and calculating a classification loss of the image sample, calculating a total loss related to the training image set, and updating parameters of the neural network according to the calculated total loss.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: July 2, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventors: Dongyue Zhao, Dongchao Wen, Xian Li, Weihong Deng, Jiani Hu
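    Illustrative sketch: a minimal illustration of the loss composition described above, with assumptions; the student feature of a sample of one kind (e.g. a hard sample) is passed through a transfer module and compared against the teacher feature of the corresponding sample of the other kind, and this distillation term is added to the classification loss. The MSE distillation term and the 0.5 weighting are illustrative.

        import torch.nn as nn
        import torch.nn.functional as F

        def sample_loss(student_feat, teacher_feat_other_kind, logits, labels,
                        transfer: nn.Module, distill_weight=0.5):
            # transitive transfer of the student feature before comparison with the teacher
            transferred = transfer(student_feat)
            distill_loss = F.mse_loss(transferred, teacher_feat_other_kind.detach())
            classification_loss = F.cross_entropy(logits, labels)
            return classification_loss + distill_weight * distill_loss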
  • Patent number: 11847569
    Abstract: The present disclosure provides a training and application method of a multi-layer neural network model, apparatus and storage medium. A number of channels of a filter in at least one convolutional layer in the multi-layer neural network model is expanded, and a convolution computation is performed by using the filter after expanding the number of channels, so that the performance of the network model does not degrade while simplifying the network model.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: December 19, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Wei Tao, Hongxing Gao, Tsewei Chen, Dongchao Wen, Junjie Liu
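    Illustrative sketch: one possible reading of the channel expansion above (the tiling-and-rescaling rule is an assumption); the filter's channel dimension is expanded by tiling, the input channels are repeated to match, and the tiled weights are rescaled so the convolution output is preserved.

        import torch
        import torch.nn.functional as F

        def conv_with_expanded_filter(x, weight, bias=None, expand_factor=2):
            # weight: (C_out, C_in, kH, kW) -> (C_out, C_in * expand_factor, kH, kW)
            expanded_w = weight.repeat(1, expand_factor, 1, 1) / expand_factor
            # input: (N, C_in, H, W) -> (N, C_in * expand_factor, H, W)
            expanded_x = x.repeat(1, expand_factor, 1, 1)
            return F.conv2d(expanded_x, expanded_w, bias=bias,
                            padding=weight.shape[-1] // 2)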
  • Patent number: 11755880
    Abstract: A method and an apparatus for optimizing and applying a multilayer neural network model, and a storage medium are provided. The optimization method includes dividing out at least one sub-structure from the multilayer neural network model to be optimized, wherein a tail layer of the divided sub-structure is a quantization layer, and, for each of the divided sub-structures, transferring operation parameters in layers other than the quantization layer to the quantization layer and updating quantization threshold parameters in the quantization layer based on the transferred operation parameters. When a multilayer neural network model optimized based on the optimization method is operated, the necessary processor resources can be reduced.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: September 12, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hongxing Gao, Wei Tao, Tsewei Chen, Dongchao Wen
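    Illustrative sketch: a minimal illustration of the parameter-transfer idea above under one assumption about its meaning; a positive per-channel scale from a layer preceding the quantization layer is folded into the quantization thresholds, so the scale multiplication can be dropped at run time.

        import numpy as np

        def fold_scale_into_thresholds(thresholds, channel_scale):
            """Quantizing scale * x against threshold t equals quantizing x against t / scale."""
            thresholds = np.asarray(thresholds, dtype=np.float64)
            channel_scale = np.asarray(channel_scale, dtype=np.float64)  # assumed positive
            return thresholds / channel_scale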
  • Publication number: 20220366259
    Abstract: Provided are a method, an apparatus and a system for training a neural network, and a storage medium storing instructions. The neural network comprises a first neural network and a second neural network; training of the first neural network has not yet been completed and training of the second neural network has not yet started. The method comprises: obtaining a first output by subjecting a sample image to the current first neural network, and obtaining a second output by subjecting the sample image to the current second neural network; and updating the current first neural network according to a first loss function value, and updating the current second neural network according to a second loss function value. The performance of the second neural network can be improved, and the overall training time of the first neural network and the second neural network can be reduced.
    Type: Application
    Filed: October 30, 2020
    Publication date: November 17, 2022
    Inventors: Deyu Wang, Tse-wei Chen, Dongchao Wen, Junjie Liu, Wei Tao
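    Illustrative sketch: a minimal illustration of the joint update above, with assumed losses (a task loss for the first network; a task loss plus an agreement term with the first network's detached output for the second).

        import torch.nn.functional as F

        def train_step(net1, net2, opt1, opt2, images, labels):
            out1 = net1(images)   # first output from the current first neural network
            out2 = net2(images)   # second output from the current second neural network

            loss1 = F.cross_entropy(out1, labels)
            opt1.zero_grad(); loss1.backward(); opt1.step()

            loss2 = F.cross_entropy(out2, labels) + F.mse_loss(out2, out1.detach())
            opt2.zero_grad(); loss2.backward(); opt2.step()
            return loss1.item(), loss2.item()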
  • Publication number: 20220309779
    Abstract: The invention provides a neural network training and application method, device and storage medium. The training method comprises: an obtaining step of obtaining a processing result and a loss function value of the processing result for at least one task after a sample image is processed in a neural network, wherein the neural network comprises at least one network structure; a determination step of determining the importance of the processing result based on the obtained loss function value; an adjustment step of adjusting a weight of the loss function for obtaining the loss function value based on the determined importance; and an update step of updating the neural network according to the loss function after the weight is adjusted.
    Type: Application
    Filed: March 24, 2022
    Publication date: September 29, 2022
    Inventors: Deyu Wang, Dongchao Wen, Wei Tao, Lingxiao Yin
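    Illustrative sketch: importance-based loss weighting, assuming one simple importance rule (a larger current loss marks a more important task); the patent leaves the importance measure open.

        import torch

        def weighted_total_loss(task_losses):
            """task_losses: list of scalar loss tensors, one per task."""
            losses = torch.stack(task_losses)
            # importance derived from the detached loss values, so the weights themselves
            # do not receive gradients
            weights = torch.softmax(losses.detach(), dim=0)
            return (weights * losses).sum()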
  • Publication number: 20220292878
    Abstract: The present disclosure discloses an apparatus and a method for detecting a facial pose, an image processing system, and a storage medium. The apparatus comprises: an obtaining unit to obtain at least three keypoints of at least one face from an input image based on a pre-generated neural network, wherein coordinates of the keypoints obtained via a layer in the neural network for obtaining coordinates are three-dimensional coordinates; and a determining unit to determine, for the at least one face, a pose of the face based on the obtained keypoints, wherein the determined facial pose includes at least an angle. According to the present disclosure, the accuracy of the three-dimensional coordinates of the facial keypoints can be improved, and thus the detection precision of a facial pose can be improved.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 15, 2022
    Inventors: Qiao Wang, Deyu Wang, Kotaro Kitajima, Naoko Watazawa, Tsewei Chen, Wei Tao, Dongchao Wen
  • Publication number: 20220180627
    Abstract: The present disclosure relates to a training method and apparatus for an object recognition model. Provided is a training sample optimization apparatus for a neural network model for object recognition, comprising: for each training sample in a training sample database, a fluctuation determination unit configured to determine a fluctuation of the model prediction of the training sample relative to a corresponding labeled identity of the training sample in a case of training the neural network model; and an optimization unit configured to determine whether the training sample can be used for training of the neural network model in the next training epoch, based on the fluctuation of the training sample.
    Type: Application
    Filed: December 6, 2021
    Publication date: June 9, 2022
    Inventors: Dongyue Zhao, Dongchao Wen, Xian Li, Weihong Deng, Jiani Hu
  • Publication number: 20220138454
    Abstract: The present invention relates to a method and apparatus for training a neural network for object recognition. The training method includes inputting a training image set containing an object to be recognized, dividing the image samples in the training image set into simple samples and hard samples, for each kind of the image sample and the variation image sample, performing a transitive transfer, calculating a distillation loss of the transferred student feature of the image sample relative to a teacher feature extracted from the corresponding image sample of the other kind, classifying the image sample, and calculating a classification loss of the image sample, calculating a total loss related to the training image set, and updating parameters of the neural network according to the calculated total loss.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 5, 2022
    Inventors: Dongyue Zhao, Dongchao Wen, Xian Li, Weihong Deng, Jiani Hu
  • Patent number: 11270108
    Abstract: An object tracking apparatus for a sequence of images is provided, wherein a plurality of tracks have been obtained for the sequence of images, and each of the plurality of tracks is obtained by detecting an object in several images included in the sequence of images. The apparatus comprises a matching track pair determining unit configured to determine a matching track pair from the plurality of tracks, wherein the matching track pair comprises a previous track and a subsequent track which correspond to the same object and are discontinuous, and a combining unit configured to combine the previous track and the subsequent track included in the matching track pair.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: March 8, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Shiting Wang, Qi Hu, Dongchao Wen
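    Illustrative sketch: combining a discontinuous matching track pair; the matching test used here (a small frame gap plus spatial overlap between the previous track's last box and the subsequent track's first box) is an illustrative stand-in for the apparatus's actual criterion.

        def iou(a, b):
            # boxes as (x1, y1, x2, y2)
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            union = ((a[2] - a[0]) * (a[3] - a[1]) +
                     (b[2] - b[0]) * (b[3] - b[1]) - inter)
            return inter / (union + 1e-9)

        def combine_tracks(tracks, max_gap=30, min_iou=0.3):
            """tracks: list of {'start': int, 'end': int, 'boxes': [box, ...]}."""
            merged = []
            for track in sorted(tracks, key=lambda t: t['start']):
                for prev in merged:
                    gap = track['start'] - prev['end']
                    if 0 < gap <= max_gap and iou(prev['boxes'][-1], track['boxes'][0]) >= min_iou:
                        # matching pair: append the subsequent track to the previous one
                        prev['end'] = track['end']
                        prev['boxes'].extend(track['boxes'])
                        break
                else:
                    merged.append({'start': track['start'], 'end': track['end'],
                                   'boxes': list(track['boxes'])})
            return merged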
  • Publication number: 20210334622
    Abstract: A method for generating a multilayer neural network includes: acquiring a multilayer neural network, wherein the multilayer neural network includes at least convolutional layers and quantization layers; generating, for each of the quantization layers in the multilayer neural network, quantization threshold parameters based on a quantization bit parameter and a learnable quantization interval parameter in the quantization layer; and updating the multilayer neural network to obtain a fixed-point neural network based on the generated quantization threshold parameters and operation parameters for each layer in the multilayer neural network.
    Type: Application
    Filed: April 14, 2021
    Publication date: October 28, 2021
    Inventors: Wei Tao, Tsewei Chen, Dongchao Wen, Junjie Liu, Deyu Wang
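    Illustrative sketch: deriving quantization thresholds from a bit-width parameter and a learnable interval, assuming uniform levels with thresholds placed at the midpoints between adjacent levels (the placement rule is an assumption).

        import numpy as np

        def quantization_thresholds(bits, interval):
            levels = np.arange(2 ** bits) * interval        # quantization levels 0, s, 2s, ...
            return (levels[:-1] + levels[1:]) / 2.0         # thresholds between adjacent levels

        # quantization_thresholds(2, 0.5) -> array([0.25, 0.75, 1.25])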
  • Publication number: 20210279574
    Abstract: A method of generating a quantized neural network comprises: determining, based on floating-point weights in a neural network to be quantized, networks which correspond to the floating-point weights and are used for directly outputting quantized weights, respectively; quantizing, using the determined networks, the floating-point weights corresponding to the networks to obtain a quantized neural network; and updating, based on a loss function value obtained via the quantized neural network, the determined networks, the floating-point weights and the quantized weights in the quantized neural network.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 9, 2021
    Inventors: Junjie Liu, Tsewei Chen, Dongchao Wen, Wei Tao, Deyu Wang
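    Illustrative sketch: a small learnable module that directly outputs a quantized weight, using a per-layer scale, rounding, and a straight-through estimator so the scale and the floating-point weights can still be updated from the task loss; this structure is an assumption, not the patented design.

        import torch
        import torch.nn as nn

        class WeightQuantizer(nn.Module):
            def __init__(self, bits=4):
                super().__init__()
                self.scale = nn.Parameter(torch.tensor(0.1))
                self.qmax = 2 ** (bits - 1) - 1

            def forward(self, w):
                q = torch.clamp(torch.round(w / self.scale), -self.qmax - 1, self.qmax)
                w_q = q * self.scale
                # straight-through estimator: forward pass uses w_q, gradients flow to w
                return w + (w_q - w).detach()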
  • Patent number: 11106945
    Abstract: A training and application method for a neural network model is provided. The training method determines the first network model to be trained and sets a downscaling layer for at least one layer in the first network model, wherein the number of filters and filter kernel of the downscaling layer are identical to those of layers to be trained in the second network model. Filter parameters of the downscaling layer are transmitted to the second network model as training information. By this training method, training can also be performed even when the scale of the layer for training in the first network model is different from that of the layers to be trained in the second network model, and the amount of lost data is small.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: August 31, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Junjie Liu, Tsewei Chen, Dongchao Wen, Hongxing Gao, Wei Tao
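    Illustrative sketch: a minimal illustration under an assumption about how the downscaling layer is used; a convolution with the same number of filters and kernel size as the student layer to be trained is attached to the first model, and its output serves as training information for the student via a feature-matching loss.

        import torch.nn as nn
        import torch.nn.functional as F

        def make_downscaling_layer(first_model_channels, student_layer: nn.Conv2d) -> nn.Conv2d:
            # same number of filters and kernel size as the layer to be trained
            return nn.Conv2d(first_model_channels, student_layer.out_channels,
                             kernel_size=student_layer.kernel_size,
                             padding=student_layer.padding)

        def matching_loss(downscaling_layer, first_model_feat, student_feat):
            # pull the student layer's output toward the downscaling layer's output
            return F.mse_loss(student_feat, downscaling_layer(first_model_feat).detach())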
  • Publication number: 20210241097
    Abstract: A training method and device for an object recognition model are provided. An apparatus for optimizing a neural network model for object recognition includes a loss determination unit configured to determine loss data for features extracted from a training image set using the neural network model and a loss function with a weight function, and an updating unit configured to perform an updating operation on parameters of the neural network model based on the loss data and an updating function, wherein the updating function is derived based on the loss function with the weight function of the neural network model, and the weight function and the loss function change monotonically in a specific value interval in the same direction.
    Type: Application
    Filed: November 4, 2020
    Publication date: August 5, 2021
    Inventors: Dongyue Zhao, Dongchao Wen, Xian Li, Weihong Deng, Jiani Hu
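    Illustrative sketch: a loss whose weight function increases together with the per-sample loss over an interval (so harder samples are emphasized); the specific weight function below is an illustrative choice, not the patented one.

        import torch
        import torch.nn.functional as F

        def weighted_recognition_loss(logits, labels, cap=4.0):
            per_sample = F.cross_entropy(logits, labels, reduction='none')
            # weight grows monotonically with the per-sample loss and saturates at `cap`
            weights = torch.clamp(per_sample.detach(), max=cap) / cap
            return (weights * per_sample).mean()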
  • Publication number: 20210065011
    Abstract: A training and application method, apparatus, system and storage medium of a neural network model are provided. The training method comprises: determining a constraint threshold range according to the number of training iterations and a calculation accuracy of the neural network model, and constraining a gradient of a weight to be within the constraint threshold range, so that when the gradient of a low-accuracy weight is distorted due to a quantization error, the distortion of the gradient is corrected by the constraint of the gradient, thereby making the trained network model achieve the expected performance.
    Type: Application
    Filed: August 26, 2020
    Publication date: March 4, 2021
    Inventors: Junjie Liu, Tsewei Chen, Dongchao Wen, Wei Tao, Deyu Wang
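    Illustrative sketch: constraining weight gradients to a threshold range derived from the iteration count and the model's numerical accuracy (bit width); the particular schedule below is an assumption, not the patented formula.

        def constrain_gradients(parameters, iteration, bits=8, base=1.0):
            # coarser quantization (fewer bits) -> larger quantization step -> wider bound
            quant_step = base / (2 ** (bits - 1))
            bound = quant_step * (1.0 + 1.0 / (1.0 + iteration))
            for p in parameters:
                if p.grad is not None:
                    p.grad.clamp_(-bound, bound)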
  • Publication number: 20200210844
    Abstract: The present disclosure provides a training and application method of a multi-layer neural network model, apparatus and a storage medium. In a forward propagation of the multi-layer neural network model, the number of input feature maps is expanded and a data computation is performed by using the expanded input feature maps.
    Type: Application
    Filed: December 19, 2019
    Publication date: July 2, 2020
    Inventors: Hongxing Gao, Wei Tao, Tse-Wei Chen, Dongchao Wen, Junjie Liu
  • Publication number: 20200210843
    Abstract: The present disclosure provides a training and application method of a multi-layer neural network model, apparatus and storage medium. A number of channels of a filter in at least one convolutional layer in the multi-layer neural network model is expanded, and a convolution computation is performed by using the filter after expanding the number of channels, so that the performance of the network model does not degrade while simplifying the network model.
    Type: Application
    Filed: December 19, 2019
    Publication date: July 2, 2020
    Inventors: Wei Tao, Hongxing Gao, Tse-Wei Chen, Dongchao Wen, Junjie Liu