Patents by Inventor Ziqiang SHI

Ziqiang SHI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240331378
    Abstract: The embodiments of the present disclosure provide an apparatus for identifying items, a method for identifying items and an electronic device. The apparatus includes: a detector configured to detect one or more items in a reference area in one or more image frames in video data; a tracker configured to track an item detected in multiple image frames, wherein a multi-hierarchy decision is performed on the item in the multiple image frames by using different time windows; and a classifier configured to identify the item according to a decision result of the tracker. As a result, even if an item is moved briefly in some scenarios, it will not be identified as two different items, which reduces repeated identification of the same item and improves the accuracy and robustness of item detection. (See the sketch after this entry.)
    Type: Application
    Filed: March 27, 2024
    Publication date: October 3, 2024
    Applicant: Fujitsu Limited
    Inventors: Ziqiang SHI, Liu LIU, Zhongling LIU, Rujie LIU
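
A minimal sketch of the idea in the entry above, assuming a toy tracker that votes over a short and a long window of per-frame labels; the class name, window sizes, and majority threshold are illustrative choices, not the patented implementation.

```python
# Hypothetical sketch (not the patented implementation): deciding an item's
# identity by voting over two time windows so that a brief move or occlusion
# does not create a second identity.
from collections import Counter, deque

class MultiWindowTracker:
    """Toy tracker that votes over a short and a long window of frame labels."""

    def __init__(self, short_window=5, long_window=30):
        self.short = deque(maxlen=short_window)   # reacts quickly to new items
        self.long = deque(maxlen=long_window)     # smooths over brief gaps

    def update(self, frame_label):
        """frame_label: item label detected in the current frame, or None."""
        if frame_label is not None:
            self.short.append(frame_label)
            self.long.append(frame_label)

    def decide(self):
        """Prefer agreement in the short window; fall back to the long one."""
        for window in (self.short, self.long):
            if window:
                label, count = Counter(window).most_common(1)[0]
                if count / len(window) > 0.6:     # simple majority threshold
                    return label
        return None

# Example: a cup is briefly picked up (frames with no detection) and put back;
# it is still decided to be the same single item.
tracker = MultiWindowTracker()
for label in ["cup", "cup", None, None, "cup", "cup"]:
    tracker.update(label)
print(tracker.decide())  # -> "cup"
```
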
  • Patent number: 11556824
    Abstract: The present disclosure relates to methods for estimating the accuracy and robustness of a model and devices thereof. According to an embodiment of the present disclosure, the method comprises: calculating a parameter representing the possibility that a sample in a first dataset appears in a second dataset; calculating an accuracy score of the model with respect to the sample in the first dataset; calculating a weighted accuracy score of the model with respect to the sample in the first dataset, based on the accuracy score, by taking the parameter as a weight; and calculating, as the estimated accuracy of the model with respect to the second dataset, an adjusted accuracy of the model with respect to the first dataset according to the weighted accuracy score. (See the sketch after this entry.)
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: January 17, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Chaoliang Zhong, Wensheng Xia, Ziqiang Shi, Jun Sun
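
The weighted-accuracy calculation in the entry above can be illustrated with a small function; how the per-sample weights (the possibility of a sample from the first dataset appearing in the second dataset) are obtained is left abstract here, and the function name and example numbers are assumptions.

```python
# Illustrative only: estimating a model's accuracy on a second (target) dataset
# by re-weighting its per-sample accuracy scores on a first (source) dataset.
import numpy as np

def weighted_accuracy_estimate(correct, weights):
    """correct: per-sample 0/1 accuracy scores of the model on the first dataset.
    weights: per-sample likelihood of the sample appearing in the second dataset.
    Returns the weight-adjusted accuracy, used as an estimate of the model's
    accuracy with respect to the second dataset."""
    correct = np.asarray(correct, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * correct) / np.sum(weights))

# Example: samples that resemble the second dataset (higher weight) dominate.
correct = [1, 1, 0, 1, 0]            # model right/wrong on first-dataset samples
weights = [0.9, 0.8, 0.1, 0.7, 0.2]  # resemblance to the second dataset
print(weighted_accuracy_estimate(correct, weights))  # ~0.89, vs. plain 0.6
```
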
  • Patent number: 11556735
    Abstract: A training device and a training method for training a multi-goal model based on goals in a goal space are provided. The training device includes a memory and a processor coupled to the memory. The processor is configured to: set the goal space to acquire a plurality of sub-goal spaces of different levels of difficulty; change a sub-goal space to be processed from a current sub-goal space to a next sub-goal space of a higher level of difficulty; select, as sampling goals, goals at least from the current sub-goal space, and acquire transitions related to the sampling goals by executing actions; and train the multi-goal model based on the transitions and evaluate it by calculating a success rate for achieving goals in the current sub-goal space. (See the sketch after this entry.)
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: January 17, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Chaoliang Zhong, Wensheng Xia, Ziqiang Shi, Jun Sun
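
A rough, hypothetical curriculum loop in the spirit of the entry above: sub-goal spaces are ordered by difficulty, goals are sampled from the current space, transitions are gathered by acting, and training advances only once the evaluated success rate clears a threshold. The toy skill variable, success model, and thresholds are stand-ins rather than the patented training method.

```python
# A toy curriculum over sub-goal spaces of increasing difficulty; the "skill"
# scalar stands in for the multi-goal model and the success model is invented
# purely so that the loop runs end to end.
import random

def train_with_curriculum(sub_goal_spaces, episodes_per_eval=100,
                          advance_at=0.8, max_rounds=50):
    skill = 0.0                                    # toy proxy for the model
    for level, goals in enumerate(sub_goal_spaces):
        rate = 0.0
        for _ in range(max_rounds):
            successes = 0
            for _ in range(episodes_per_eval):
                goal = random.choice(goals)        # sample a goal from this space
                # "Execute actions" to get one transition: success is more
                # likely when skill is high relative to the goal's difficulty.
                success = random.random() < min(0.95, 0.4 + skill - goal)
                successes += success
                skill += 0.002 * success           # "train" on the transition
            rate = successes / episodes_per_eval   # evaluate on current space
            if rate >= advance_at:                 # ready for harder goals
                break
        print(f"sub-goal space {level}: success rate {rate:.2f}")
    return skill

# Goals are encoded as difficulty values and grouped into sub-goal spaces
# of increasing difficulty.
train_with_curriculum([[0.1, 0.2], [0.4, 0.5], [0.7, 0.8]])
```
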
  • Publication number: 20210073665
    Abstract: The present disclosure relates to methods for estimating the accuracy and robustness of a model and devices thereof. According to an embodiment of the present disclosure, the method comprises: calculating a parameter representing the possibility that a sample in a first dataset appears in a second dataset; calculating an accuracy score of the model with respect to the sample in the first dataset; calculating a weighted accuracy score of the model with respect to the sample in the first dataset, based on the accuracy score, by taking the parameter as a weight; and calculating, as the estimated accuracy of the model with respect to the second dataset, an adjusted accuracy of the model with respect to the first dataset according to the weighted accuracy score.
    Type: Application
    Filed: June 16, 2020
    Publication date: March 11, 2021
    Applicant: FUJITSU LIMITED
    Inventors: Chaoliang Zhong, Wensheng Xia, Ziqiang Shi, Jun Sun
  • Publication number: 20210073591
    Abstract: A robustness estimation method, a data processing method, and an information processing apparatus are provided.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 11, 2021
    Applicant: FUJITSU LIMITED
    Inventors: Chaoliang ZHONG, Ziqiang SHI, Wensheng XIA, Jun SUN
  • Publication number: 20200356807
    Abstract: A training device and a training method for training a multi-goal model based on goals in a goal space are provided. The training device includes a memory and a processor coupled to the memory. The processor is configured to: set the goal space to acquire a plurality of sub-goal spaces of different levels of difficulty; change a sub-goal space to be processed from a current sub-goal space to a next sub-goal space of a higher level of difficulty; select, as sampling goals, goals at least from the current sub-goal space, and acquire transitions related to the sampling goals by executing actions; and train the multi-goal model based on the transitions and evaluate it by calculating a success rate for achieving goals in the current sub-goal space.
    Type: Application
    Filed: May 7, 2020
    Publication date: November 12, 2020
    Applicant: FUJITSU LIMITED
    Inventors: Chaoliang Zhong, Wensheng Xia, Ziqiang Shi, Jun Sun
  • Patent number: 10796205
    Abstract: A multi-view vector processing method and a multi-view vector processing device are provided. A multi-view vector x represents an object containing information on at least two non-discrete views. A model of the multi-view vector is established, where the model includes at least the following components: a population mean μ of the multi-view vector, a view component for each view of the multi-view vector, and noise. The population mean μ, the parameters of each view component and the parameters of the noise are obtained by using training data of the multi-view vector x. The device includes a processor and a storage medium storing program code, and the program code implements the aforementioned method when executed by the processor. (See the sketch after this entry.)
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: October 6, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Ziqiang Shi, Liu Liu, Rujie Liu
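
One way to read the model in the entry above is the additive generative form x = μ + V1·h1 + V2·h2 + ε (population mean plus one component per view plus noise). The sketch below only samples from such a model under assumed dimensions and loading matrices; the parameter estimation from training data, which the patent concerns, is omitted.

```python
# Sampling from an assumed additive multi-view model x = mu + V1 @ h1 + V2 @ h2 + eps.
# Dimensions, loading matrices, and noise level are invented for illustration;
# estimating these parameters from training data is not shown.
import numpy as np

rng = np.random.default_rng(0)
dim, d1, d2 = 8, 3, 2            # observation dim and per-view latent dims

mu = rng.normal(size=dim)        # population mean of the multi-view vector
V1 = rng.normal(size=(dim, d1))  # loading matrix for the first view
V2 = rng.normal(size=(dim, d2))  # loading matrix for the second view

def sample_multi_view_vector(h1, h2, noise_std=0.1):
    """Draw one observation given the latent factors of the two views."""
    eps = rng.normal(scale=noise_std, size=dim)
    return mu + V1 @ h1 + V2 @ h2 + eps

# Two observations that share the first-view factor but differ in the second:
# their difference is explained by the second view component plus noise only.
h1 = rng.normal(size=d1)
x_a = sample_multi_view_vector(h1, rng.normal(size=d2))
x_b = sample_multi_view_vector(h1, rng.normal(size=d2))
print(np.round(x_a - x_b, 2))
```
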
  • Patent number: 10657969
    Abstract: An identity verification method and an identity verification apparatus based on a voiceprint are provided. The identity verification method based on a voiceprint includes: receiving an unknown voice; extracting a voiceprint of the unknown voice using a pre-trained, neural network-based voiceprint extractor; concatenating the extracted voiceprint with a pre-stored voiceprint to obtain a concatenated voiceprint; and performing judgment on the concatenated voiceprint using a pre-trained classification model, to verify whether the extracted voiceprint and the pre-stored voiceprint are from the same person. With the identity verification method and the identity verification apparatus, a holographic voiceprint of the speaker can be extracted from a short voice segment, such that the verification result is more robust. (See the sketch after this entry.)
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: May 19, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Ziqiang Shi, Liu Liu, Rujie Liu
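
A sketch of the verification data flow in the entry above, with a random projection standing in for the pre-trained neural voiceprint extractor and a fixed-weight logistic scorer standing in for the pre-trained classification model; only the extract, concatenate, and classify structure mirrors the abstract, and every name and dimension below is an assumption.

```python
# Stand-in pipeline: a random projection plays the role of the pre-trained
# voiceprint extractor and a fixed-weight logistic scorer plays the role of
# the pre-trained classifier; only the extract -> concatenate -> classify
# structure follows the abstract above.
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 16
W_extract = rng.normal(size=(EMBED_DIM, 64))   # stand-in "extractor" weights
w_clf = rng.normal(size=2 * EMBED_DIM)         # stand-in "classifier" weights

def extract_voiceprint(waveform):
    """Map a raw audio segment to a fixed-size voiceprint embedding."""
    feats = np.resize(np.asarray(waveform, dtype=float), 64)   # crude features
    return np.tanh(W_extract @ feats)

def verify(unknown_waveform, enrolled_voiceprint, threshold=0.5):
    """Return True if the unknown voice is judged to match the enrolled voiceprint."""
    probe = extract_voiceprint(unknown_waveform)
    pair = np.concatenate([probe, enrolled_voiceprint])        # concatenated voiceprint
    score = 1.0 / (1.0 + np.exp(-w_clf @ pair))                # same/different score
    return score > threshold

enrolled = extract_voiceprint(rng.normal(size=4000))           # enrollment audio
print(verify(rng.normal(size=4000), enrolled))
```
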
  • Publication number: 20180336438
    Abstract: A multi-view vector processing method and a multi-view vector processing device are provided. A multi-view vector x represents an object containing information on at least two non-discrete views. A model of the multi-view vector is established, where the model includes at least the following components: a population mean μ of the multi-view vector, a view component for each view of the multi-view vector, and noise. The population mean μ, the parameters of each view component and the parameters of the noise are obtained by using training data of the multi-view vector x. The device includes a processor and a storage medium storing program code, and the program code implements the aforementioned method when executed by the processor.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 22, 2018
    Applicant: FUJITSU LIMITED
    Inventors: Ziqiang SHI, Liu LIU, Rujie LIU
  • Publication number: 20180197547
    Abstract: An identity verification method and an identity verification apparatus based on a voiceprint are provided. The identity verification method based on a voiceprint includes: receiving an unknown voice; extracting a voiceprint of the unknown voice using a pre-trained, neural network-based voiceprint extractor; concatenating the extracted voiceprint with a pre-stored voiceprint to obtain a concatenated voiceprint; and performing judgment on the concatenated voiceprint using a pre-trained classification model, to verify whether the extracted voiceprint and the pre-stored voiceprint are from the same person. With the identity verification method and the identity verification apparatus, a holographic voiceprint of the speaker can be extracted from a short voice segment, such that the verification result is more robust.
    Type: Application
    Filed: January 9, 2018
    Publication date: July 12, 2018
    Applicant: Fujitsu Limited
    Inventors: Ziqiang SHI, Liu LIU, Rujie LIU
  • Publication number: 20170294191
    Abstract: The present invention discloses a method for speaker recognition and an apparatus for speaker recognition. The method for speaker recognition comprises: extracting, from a speaker-to-be-recognized corpus, voice characteristics of a speaker to be recognized; obtaining a speaker-to-be-recognized model based on the extracted voice characteristics of the speaker to be recognized, a universal background model (UBM) reflecting the distribution of the voice characteristics in a characteristic space, a gradient universal speaker model (GUSM) reflecting statistical values of changes in the distribution of the voice characteristics in the characteristic space, and a total change matrix reflecting environmental changes; and comparing the speaker-to-be-recognized model with known speaker models, to determine whether or not the speaker to be recognized is one of the known speakers. (See the sketch after this entry.)
    Type: Application
    Filed: April 3, 2017
    Publication date: October 12, 2017
    Inventors: Ziqiang SHI, Liu LIU, Rujie LIU
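
A highly simplified sketch of the final comparison step in the entry above: once a fixed-length speaker model has been derived (the UBM, GUSM, and total change matrix steps are omitted entirely), it is scored against the known speaker models. Cosine scoring and the acceptance threshold are assumptions made for illustration.

```python
# Scoring an already-derived speaker model against enrolled speaker models.
# Cosine similarity and the 0.7 threshold are assumptions; deriving the
# vectors via the UBM, GUSM, and total change matrix is not shown.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(unknown_model, known_models, threshold=0.7):
    """Return the best-matching known speaker, or None if no score passes."""
    best_name, best_score = None, -1.0
    for name, model in known_models.items():
        score = cosine(unknown_model, model)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

rng = np.random.default_rng(2)
alice = rng.normal(size=32)
known = {"alice": alice, "bob": rng.normal(size=32)}
probe = alice + 0.1 * rng.normal(size=32)    # a noisy new recording of "alice"
print(identify(probe, known))                # -> alice
```
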
  • Patent number: D1037019
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: July 30, 2024
    Inventors: Ziqiang Shi, Jiali Lin
  • Patent number: D1051734
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: November 19, 2024
    Inventors: Ziqiang Shi, Jiali Lin
  • Patent number: D1051754
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: November 19, 2024
    Inventors: Ziqiang Shi, Jiali Lin
  • Patent number: D1055704
    Type: Grant
    Filed: May 15, 2024
    Date of Patent: December 31, 2024
    Assignee: Jinjiang Longhu town Yuansheng electronic watch factory
    Inventors: Xiaosen Sun, Ziqiang Shi
  • Patent number: D1058381
    Type: Grant
    Filed: October 23, 2024
    Date of Patent: January 21, 2025
    Assignee: Jinjiang Dongsheng Watch and Electronics Trading Co., Ltd.
    Inventor: Ziqiang Shi