Patents by Inventor Zhenhua Guo

Zhenhua Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11941023
    Abstract: Provided are a system and method for implementing an incremental data comparison. The system includes: a synchronization (T) environment, a simulation (F) environment, a simulation tool, a memory database, a comparison tool and a result database. The T environment includes a synchronization environment core application, a traditional commercial database, an incremental synchronization tool and a synchronization environment distributed database. The F environment includes a simulation environment core application and a simulation environment distributed database. The simulation tool is configured to play back a T environment service and an F environment service. The memory database is configured to store a mapping relationship between message values of the T and F environments. The comparison tool is configured to compare data in the T and F environments according to the mapping relationship between the message values.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: March 26, 2024
    Assignee: ZTE CORPORATION
    Inventors: Jianhua Mai, Zhiwen Liu, Yan Ding, Longbo Guo, Peng Zhang, Zhenhua Xu
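
The comparison scheme in the entry above amounts to keying records from the two environments by a shared message value and diffing the mapped pairs. The following is a minimal, hypothetical Python sketch of that idea; the names (t_records, f_records, message_map) and the use of plain dictionaries in place of the memory database and distributed databases are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch: compare T-environment and F-environment records
# that are linked by a mapping between their message values.

def compare_environments(t_records, f_records, message_map):
    """t_records / f_records: dicts keyed by each environment's message value.
    message_map: dict mapping a T message value to the corresponding F value
    (standing in for the memory database described in the abstract)."""
    mismatches = []
    for t_key, f_key in message_map.items():
        t_row = t_records.get(t_key)
        f_row = f_records.get(f_key)
        if t_row != f_row:
            mismatches.append((t_key, f_key, t_row, f_row))
    return mismatches  # would be written to the result database


if __name__ == "__main__":
    t = {"msg-001": {"amount": 100}, "msg-002": {"amount": 250}}
    f = {"sim-A": {"amount": 100}, "sim-B": {"amount": 260}}
    mapping = {"msg-001": "sim-A", "msg-002": "sim-B"}
    print(compare_environments(t, f, mapping))  # reports the msg-002 / sim-B mismatch
```
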
  • Patent number: 11868817
    Abstract: A load balancing method, apparatus and device for a parallel model training task, and a computer-readable storage medium, are provided. The method includes: acquiring data traffic and a theoretical computational amount of each network layer in a target model; determining a theoretical computing capability of each computing device, and obtaining an initial computational amount corresponding to each computing device according to the theoretical computing capability and the theoretical computational amount; performing a load balancing operation according to the initial computational amount by using a multi-device critical-layer position division rule, so as to obtain a plurality of initial balancing schemes; compiling statistics on time performance parameters corresponding to the initial balancing schemes, and determining an intermediate balancing scheme from the initial balancing schemes according to the time performance parameters; and adjusting the intermediate balancing scheme according to the data traffic, so as to obtain a final balancing scheme.
    Type: Grant
    Filed: February 20, 2021
    Date of Patent: January 9, 2024
    Assignee: INSPUR ELECTRONIC INFORMATION INDUSTRY CO., LTD.
    Inventors: Li Wang, Kai Gao, Fang Cao, Zhenhua Guo
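
One way to read the first two steps of the entry above is that each device initially receives a share of the model's total theoretical computation proportional to its theoretical computing capability, and candidate layer boundaries (the critical layer positions) follow from those shares. The sketch below is a hypothetical greedy partitioner along those lines; the function and variable names are illustrative, and it omits the time-measurement and traffic-adjustment stages described in the abstract.

```python
# Hypothetical sketch: assign contiguous network layers to devices so each
# device's assigned computation roughly matches its capability share.

def initial_partition(layer_flops, device_capability):
    total_flops = sum(layer_flops)
    total_cap = sum(device_capability)
    # Target computation per device, proportional to its capability.
    targets = [total_flops * cap / total_cap for cap in device_capability]

    scheme, current, dev = [], [], 0
    budget = targets[0]
    for i, flops in enumerate(layer_flops):
        current.append(i)
        budget -= flops
        # Close this device's segment once its budget is spent
        # (keep the last device open so every layer gets assigned).
        if budget <= 0 and dev < len(device_capability) - 1:
            scheme.append(current)
            current, dev = [], dev + 1
            budget = targets[dev]
    scheme.append(current)
    return scheme


if __name__ == "__main__":
    layers = [4, 4, 8, 8, 2, 2, 6]   # theoretical computation per layer
    devices = [10, 10, 20]           # theoretical capability per device
    print(initial_partition(layers, devices))  # -> [[0, 1, 2], [3, 4], [5, 6]]
```
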
  • Publication number: 20240005633
    Abstract: Disclosed are a person Re-identification (Re-ID) method, system, and device, and a computer-readable storage medium. The method includes: acquiring a sample set to be trained (S101); training a pre-constructed person Re-ID model by data re-sampling and cross-validation methods based on the sample set to be trained, to obtain a trained person Re-ID model (S102); and performing person Re-ID based on the trained person Re-ID model (S103), wherein persons in any two groups are of different classes after the sample set to be trained is grouped according to the cross-validation method.
    Type: Application
    Filed: September 24, 2020
    Publication date: January 4, 2024
    Inventors: Runze ZHANG, Liang JIN, Zhenhua GUO
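
The grouping constraint in the entry above (persons in any two groups belong to different classes) matches a group-aware cross-validation split in which each person identity appears in exactly one fold. Below is a minimal sketch of such a split using scikit-learn's GroupKFold; the variable names and the choice of GroupKFold are illustrative assumptions, not the patented training procedure.

```python
# Hypothetical sketch: split a Re-ID sample set so that no person identity
# is shared between any two folds (identity-disjoint cross-validation).
import numpy as np
from sklearn.model_selection import GroupKFold

features = np.random.rand(12, 128)                             # toy image embeddings
person_ids = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5])    # identity per sample

splitter = GroupKFold(n_splits=3)
for fold, (train_idx, val_idx) in enumerate(
        splitter.split(features, groups=person_ids)):
    train_people = set(person_ids[train_idx])
    val_people = set(person_ids[val_idx])
    assert train_people.isdisjoint(val_people)   # identities never overlap across folds
    print(f"fold {fold}: train ids {sorted(train_people)}, val ids {sorted(val_people)}")
```
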
  • Publication number: 20230316089
    Abstract: An nGraph-based graphics processing unit (GPU) backend distributed training method and system, a computer-readable storage medium, and an electronic device. The method includes: receiving a training request, and obtaining corresponding training data; obtaining an NVIDIA Collective Communications Library (NCCL) file by means of a system path of the NCCL file linked to the nGraph framework; invoking an NCCL communication interface configuration according to the training request to obtain a training model, wherein the NCCL communication interface is an NCCL file-based communication operation interface located at a GPU backend of the nGraph framework; and performing GPU backend training on the training data using the training model. The present application can satisfy an urgent need of users to perform neural network distributed training on the basis of an nGraph GPU backend, thus further improving the performance of deep learning network training.
    Type: Application
    Filed: July 29, 2021
    Publication date: October 5, 2023
    Applicant: Inspur Suzhou Intelligent Technology Co., Ltd.
    Inventors: Li WANG, Fang CAO, Zhiyong QIU, Zhenhua GUO
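
The entry above concerns wiring NCCL-based collective communication into the nGraph GPU backend. As a rough illustration of the kind of multi-GPU collective that an NCCL communication interface provides, the sketch below uses PyTorch's torch.distributed with its NCCL backend rather than nGraph; that substitution is purely an assumption for illustration, since the listing does not show nGraph's internal interfaces.

```python
# Hypothetical sketch: a gradient all-reduce over NCCL, the kind of collective
# operation the patented nGraph GPU backend routes through its NCCL interface.
# PyTorch's torch.distributed is used here only as a stand-in for illustration.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Each GPU holds its own local gradient tensor.
    grad = torch.full((4,), float(rank + 1), device=f"cuda:{rank}")
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)  # NCCL all-reduce across GPUs
    print(f"rank {rank}: reduced gradient {grad.tolist()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    if world_size >= 2:
        mp.spawn(worker, args=(world_size,), nprocs=world_size)
```
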
  • Patent number: 11762721
    Abstract: Disclosed are a method for realizing an nGraph framework supporting an FPGA backend device, and a related apparatus. The method includes: integrating an OpenCL standard API library into an nGraph framework; creating, in the nGraph framework, an FPGA backend device creation module for registering an FPGA backend device, initializing an OpenCL environment and acquiring the FPGA backend device; creating, in the nGraph framework, an FPGA buffer space processing module for opening up an FPGA buffer space and for reading and writing an FPGA cache; creating, in the nGraph framework, an OP kernel implementation module for creating an OP kernel and compiling the OP kernel; and creating, in the nGraph framework, an FPGA compiling execution module for registering, scheduling and executing the OP kernel.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: September 19, 2023
    Assignee: INSPUR ELECTRONIC INFORMATION INDUSTRY CO., LTD.
    Inventors: Fang Cao, Zhenhua Guo, Li Wang, Kai Gao
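
The entry above enumerates four modules added to the nGraph framework. The plain-Python skeleton below is a hypothetical structural illustration of those four roles only; all class and method names are invented for this sketch, and a real implementation would call the OpenCL standard API (e.g. clCreateBuffer, clBuildProgram, clEnqueueNDRangeKernel) inside each method instead of the placeholders used here.

```python
# Hypothetical skeleton of the four modules described in the abstract.

class FPGABackendDevice:
    """Registers the FPGA backend and initializes the OpenCL environment."""
    def __init__(self, device_name="fpga:0"):
        self.device_name = device_name
        self.context = None            # would hold an OpenCL context for the device


class FPGABufferSpace:
    """Opens up FPGA buffer space and reads/writes the FPGA cache."""
    def __init__(self, device):
        self.device = device
        self.buffers = {}
    def allocate(self, name, nbytes):
        self.buffers[name] = bytearray(nbytes)   # placeholder for an OpenCL buffer
    def write(self, name, data):
        self.buffers[name][:len(data)] = data
    def read(self, name):
        return bytes(self.buffers[name])


class OPKernel:
    """Creates and compiles an OP kernel for a single graph operation."""
    def __init__(self, op_name, source):
        self.op_name, self.source, self.binary = op_name, source, None
    def compile(self):
        self.binary = f"compiled({self.op_name})"  # stand-in for clBuildProgram


class FPGACompileExecute:
    """Registers, schedules and executes compiled OP kernels in graph order."""
    def __init__(self):
        self.kernels = []
    def register(self, kernel):
        self.kernels.append(kernel)
    def run(self, buffer_space):
        for k in self.kernels:
            if k.binary is None:
                k.compile()
            print(f"executing {k.binary} against {len(buffer_space.buffers)} buffers")


if __name__ == "__main__":
    dev = FPGABackendDevice()
    bufs = FPGABufferSpace(dev)
    bufs.allocate("input", 16)
    exe = FPGACompileExecute()
    exe.register(OPKernel("MatMul", "/* kernel source */"))
    exe.run(bufs)
```
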
  • Publication number: 20230267024
    Abstract: Disclosed are a method for realizing an nGraph framework supporting an FPGA backend device, and a related apparatus. The method includes: integrating an OpenCL standard API library into an nGraph framework; creating, in the nGraph framework, an FPGA backend device creation module for registering an FPGA backend device, initializing an OpenCL environment and acquiring the FPGA backend device; creating, in the nGraph framework, an FPGA buffer space processing module for opening up an FPGA buffer space and for reading and writing an FPGA cache; creating, in the nGraph framework, an OP kernel implementation module for creating an OP kernel and compiling the OP kernel; and creating, in the nGraph framework, an FPGA compiling execution module for registering, scheduling and executing the OP kernel.
    Type: Application
    Filed: October 27, 2020
    Publication date: August 24, 2023
    Inventors: Fang CAO, Zhenhua GUO, Li WANG, Kai GAO
  • Publication number: 20230195537
    Abstract: A load balancing method, apparatus and device for a parallel model training task, and a computer-readable storage medium, are provided. The method includes: acquiring data traffic and a theoretical computational amount of each network layer in a target model; determining a theoretical computing capability of each computing device, and obtaining an initial computational amount corresponding to each computing device according to the theoretical computing capability and the theoretical computational amount; performing a load balancing operation according to the initial computational amount by using a multi-device critical-layer position division rule, so as to obtain a plurality of initial balancing schemes; compiling statistics on time performance parameters corresponding to the initial balancing schemes, and determining an intermediate balancing scheme from the initial balancing schemes according to the time performance parameters; and adjusting the intermediate balancing scheme according to the data traffic, so as to obtain a final balancing scheme.
    Type: Application
    Filed: February 20, 2021
    Publication date: June 22, 2023
    Inventors: Li WANG, Kai GAO, Fang CAO, Zhenhua GUO
  • Patent number: 11614964
    Abstract: An image processing method is provided, which is applied to a deep learning model. A cache queue is provided in front of each layer of the deep learning model. A plurality of computation tasks are preset for each layer of the deep learning model in advance, and are configured for computing weight parameters and corresponding to-be-processed data in a plurality of channels in each corresponding layer in parallel, and for storing a computation result into the cache queue behind the corresponding layer. In addition, as long as the cache queue in front of a layer includes the computation result stored by the previous layer, the layer can obtain the to-be-processed data from the computation result and perform subsequent computation, so that a parallel pipeline computation mode is formed between the layers.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: March 28, 2023
    Assignee: INSPUR ELECTRONIC INFORMATION INDUSTRY CO., LTD.
    Inventors: Kai Gao, Zhenhua Guo, Fang Cao
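
The layer pipelining described in the entry above can be pictured with ordinary bounded queues between worker threads: each layer consumes results from the queue in front of it and pushes its own results into the queue behind it, so different inputs occupy different layers concurrently. The sketch below is an illustrative assumption in plain Python (queue.Queue and threading), not the patented model runtime.

```python
# Hypothetical sketch: a per-layer cache queue feeding worker threads, so the
# layers process different inputs in parallel (pipeline parallelism).
import queue
import threading

SENTINEL = None  # signals the end of the input stream


def layer_worker(name, weight, in_q, out_q):
    while True:
        item = in_q.get()
        if item is SENTINEL:
            out_q.put(SENTINEL)
            break
        out_q.put(item * weight)   # stand-in for the layer's computation
        print(f"{name} processed one item")


def run_pipeline(inputs, weights):
    queues = [queue.Queue(maxsize=4) for _ in range(len(weights) + 1)]
    threads = [
        threading.Thread(target=layer_worker,
                         args=(f"layer{i}", w, queues[i], queues[i + 1]))
        for i, w in enumerate(weights)
    ]
    for t in threads:
        t.start()
    for x in inputs:
        queues[0].put(x)
    queues[0].put(SENTINEL)

    results = []
    while True:
        item = queues[-1].get()
        if item is SENTINEL:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results


if __name__ == "__main__":
    print(run_pipeline([1, 2, 3, 4], weights=[2, 3, 5]))  # [30, 60, 90, 120]
```
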
  • Publication number: 20230083518
    Abstract: Disclosed is an image segmentation method, including: obtaining an original image set; performing feature extraction on the original image set by using a backbone network to obtain a feature map set; performing channel extraction fusion processing on the feature map set by using a channel extraction fusion model to obtain an enhanced feature map set; and segmenting the enhanced feature map set by using a preset convolutional neural network to obtain an image segmentation result. In addition, the present application also provides an image segmentation system and device, and a readable storage medium, which have the beneficial effects described above.
    Type: Application
    Filed: October 27, 2020
    Publication date: March 16, 2023
    Inventors: Li WANG, Zhenhua GUO, Nan WU, Yaqian ZHAO
  • Publication number: 20220343622
    Abstract: Provided in the present invention are an image segmentation method and apparatus.
    Type: Application
    Filed: December 30, 2019
    Publication date: October 27, 2022
    Inventors: Li WANG, Zhenhua GUO, Yaqian ZHAO
  • Publication number: 20220343166
    Abstract: A computing method and apparatus for a convolutional neural network model. The method comprises: acquiring a computing model of a training task of the convolutional neural network model (S101); splitting a multiply-accumulate operation in the computing model of the training task into a plurality of multiply-add operation tasks (S102); determining a computing device corresponding to each multiply-add operation task according to a preset correlation between the computing model and the computing devices (S103); and finally, respectively computing each multiply-add operation task by using the computing device corresponding to that task (S104). This improves the flexibility of migrating a CNN model training task across different computing devices or of cooperative computing by different processors, and improves the computing speed.
    Type: Application
    Filed: November 27, 2019
    Publication date: October 27, 2022
    Inventors: Zhenhua Guo, Baoyu Fan, Li Wang, Kai Gao
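
A minimal way to picture steps S102-S104 of the entry above is to split one multiply-accumulate over its reduction dimension into independent multiply-add chunks and sum the partial results, with each chunk nominally assigned to a different device. The sketch below runs entirely on the CPU; the chunking scheme and the device assignment are illustrative assumptions, not the patented mapping.

```python
# Hypothetical sketch: split a multiply-accumulate sum(a[k] * b[k]) into
# per-device multiply-add tasks and combine the partial results.

def split_mac(a, b, num_devices):
    chunk = (len(a) + num_devices - 1) // num_devices
    tasks = []
    for d in range(num_devices):
        lo, hi = d * chunk, min((d + 1) * chunk, len(a))
        if lo < hi:
            tasks.append((d, a[lo:hi], b[lo:hi]))   # (assigned device, operands)
    return tasks


def run_tasks(tasks):
    partials = []
    for device, a_part, b_part in tasks:
        # On real hardware this partial sum would execute on `device`.
        partials.append(sum(x * y for x, y in zip(a_part, b_part)))
    return sum(partials)


if __name__ == "__main__":
    a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    b = [6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
    print(run_tasks(split_mac(a, b, num_devices=3)))  # 56.0
```
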
  • Publication number: 20220326989
    Abstract: An image processing method is provided, which is applied to a deep learning model. A cache queue is provided in front of each layer of the deep learning model. A plurality of computation tasks are preset for each layer of the deep learning model in advance, and are configured for computing weight parameters and corresponding to-be-processed data in a plurality of channels in each corresponding layer in parallel, and for storing a computation result into the cache queue behind the corresponding layer. In addition, as long as the cache queue in front of a layer includes the computation result stored by the previous layer, the layer can obtain the to-be-processed data from the computation result and perform subsequent computation, so that a parallel pipeline computation mode is formed between the layers.
    Type: Application
    Filed: December 30, 2019
    Publication date: October 13, 2022
    Inventors: Kai GAO, Zhenhua GUO, Fang CAO
  • Patent number: 11353442
    Abstract: The invention provides a physical simulation experimental device and method for water invasion and drainage gas recovery in a gas reservoir. The experimental device includes: a heterogeneous reservoir model having a first core holder, a second core holder, a third core holder and a fourth core holder, wherein the third core holder is connected between the first core holder and the second core holder, and the fourth core holder is connected between an outlet end of the first core holder and an outlet end of the second core holder; a gas injection mechanism having a gas injection bottle and a gas injection cylinder; and a water body simulation mechanism having a water storage tank and a water injection pump. The invention can simulate and reveal different drainage gas recovery modes, timings and scales, and their influences on the recovery ratio of the gas reservoir.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: June 7, 2022
    Assignee: PETROCHINA COMPANY LIMITED
    Inventors: Xuan Xu, Xizhe Li, Yong Hu, Yongxin Han, Yunsheng Wei, Yujin Wan, Chunyan Jiao, Zhenhua Guo, Haifa Tang, Weigang Huang, Guangzhen Chu, Yunhe Su
  • Publication number: 20210157038
    Abstract: The prism-coupling systems and methods include using a prism-coupling system to collect initial TM and TE mode spectra of a chemically strengthened article having a refractive index profile with a near-surface spike region and a deep region. The prism-coupling system has a light source configured to generate sequential measurement light beams or reflected light beams having different measurement wavelengths. The different measurement wavelengths generate different TM and TE mode spectra. The light source can include multiple light-emitting elements and optical filters or a broadband light source and optical filters. The optical filters can be sequentially inserted into either the input optical path or the output optical path of the prism-coupling system.
    Type: Application
    Filed: November 25, 2020
    Publication date: May 27, 2021
    Inventors: Ryan Claude Andrews, Pierre Michel Bouzi, Zhenhua Guo, Jacob Immerman, Jeremiah Robert Jacobson, Michael David Moon, Babak Robert Raj, Nathaniel David Wetmore, Xiupu Wang
  • Publication number: 20200326265
    Abstract: A method for measuring the volume of a pig two-cell embryo. The pig is an important economic animal: studying the regulation of its reproduction not only provides a reference for research on human reproductive physiology and pathology, but also provides a theoretical basis for improving the reproductive performance of pigs. Low fertility, however, remains a problem plaguing the pig-raising industry. In the method, an image of a pig two-cell embryo is obtained with a microscope, measurements are taken from the image, and a regression curve and regression equation are established. A simple camera thus suffices to obtain a more accurate measurement of embryo volume, which supports further research into improving the developmental potential of pig embryos and raising production efficiency, and offers a reliable way to identify high-quality embryos. The invention is applied to the field of embryo engineering technology.
    Type: Application
    Filed: April 15, 2019
    Publication date: October 15, 2020
    Inventor: ZhenHua GUO
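
The entry above describes measuring two-cell embryos in microscope images and fitting a regression curve relating the measured dimension to volume. The sketch below is a generic illustration of that regression-fitting step only; the diameter values are synthetic placeholders derived from a spherical approximation, not measurements or results from the patent.

```python
# Hypothetical sketch: fit a regression equation relating a measured embryo
# diameter (from a microscope image) to embryo volume.
import numpy as np

# Synthetic placeholder diameters in micrometres (not data from the patent).
diameters_um = np.array([140.0, 145.0, 150.0, 155.0, 160.0])
# Reference volumes from a spherical approximation, in cubic micrometres.
volumes_um3 = 4.0 / 3.0 * np.pi * (diameters_um / 2.0) ** 3

# Regression equation of the form V = a * d^3 + b (linear fit on d^3).
a, b = np.polyfit(diameters_um ** 3, volumes_um3, deg=1)
print(f"regression equation: V = {a:.4f} * d^3 + {b:.2f}")
print("predicted volume at d = 152 um:", a * 152.0 ** 3 + b, "um^3")
```
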
  • Publication number: 20200132656
    Abstract: The invention provides a physical simulation experimental device and method for water invasion and drainage gas recovery in a gas reservoir. The experimental device includes: a heterogeneous reservoir model having a first core holder, a second core holder, a third core holder and a fourth core holder, wherein the third core holder is connected between the first core holder and the second core holder, and the fourth core holder is connected between an outlet end of the first core holder and an outlet end of the second core holder; a gas injection mechanism having a gas injection bottle and a gas injection cylinder; and a water body simulation mechanism having a water storage tank and a water injection pump. The invention can simulate and reveal different drainage gas recovery modes, timings and scales, and their influences on the recovery ratio of the gas reservoir.
    Type: Application
    Filed: October 25, 2019
    Publication date: April 30, 2020
    Inventors: Xuan Xu, Xizhe Li, Yong Hu, Yongxin Han, Yunsheng Wei, Yujin Wan, Chunyan Jiao, Zhenhua Guo, Haifa Tang, Weigang Huang, Guangzhen Chu, Yunhe Su
  • Publication number: 20190251890
    Abstract: Disclosed herein are devices comprising a receive (RX) sensor layer, a transmit (TX) sensor layer, a cover glass, a polarizer, and at least one conductive element disposed on at least one surface of the cover glass, at least one surface of the polarizer, or both. Also disclosed herein are methods for reducing mura in a touch-display device.
    Type: Application
    Filed: October 31, 2017
    Publication date: August 15, 2019
    Inventors: Xiaoju Guo, Zhenhua Guo, Jr-Nan Hu, Vitor Marino Schneider, Elena Streltsova, Ljerka Ukrainczyk, Sujanto Widjaja
  • Patent number: 9906589
    Abstract: The disclosure is related to a shard manager that manages assignment of shards (data partitions) to application servers. An application service ("app service") provides a specific service to clients and can execute on multiple application servers. The dataset managed by the app service can be divided into multiple shards, and the shards can be assigned to different app servers. The shard manager can manage the assignment of shards to different app servers based on an assignment policy. The shard assignments can be published to a configuration service. A client can request the configuration service to provide identification information of the app server to which a particular shard that the client intends to access is assigned. The shard manager can also provide dynamic load balancing solutions. The shard manager can poll the app servers at runtime to determine the load information and per-shard resource usage, and balance the load by reassigning the shards accordingly.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: February 27, 2018
    Assignee: Facebook, Inc.
    Inventors: Vishal Kathuria, Vikas Mehta, Muthukaruppan Annamalai, Zhenhua Guo
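
As a rough picture of the per-shard load rebalancing described in the entry above, the sketch below moves the hottest shard off the most loaded server when the load spread exceeds a threshold. The data structures, threshold, and function names are illustrative assumptions, not the shard manager of the patent.

```python
# Hypothetical sketch: rebalance shard assignments by moving the hottest shard
# off the most loaded app server when the load imbalance exceeds a threshold.

def rebalance(assignments, shard_load, imbalance_threshold=1.5):
    """assignments: {server: set of shard ids}; shard_load: {shard id: load}."""
    def load_of(server):
        return sum(shard_load[s] for s in assignments[server])

    busiest = max(assignments, key=load_of)
    idlest = min(assignments, key=load_of)
    if load_of(busiest) > imbalance_threshold * max(load_of(idlest), 1):
        hottest = max(assignments[busiest], key=lambda s: shard_load[s])
        assignments[busiest].remove(hottest)
        assignments[idlest].add(hottest)
    # The new assignment would then be published to the configuration service.
    return assignments


if __name__ == "__main__":
    assignments = {"app1": {"s1", "s2"}, "app2": {"s3"}}
    shard_load = {"s1": 80, "s2": 30, "s3": 20}
    print(rebalance(assignments, shard_load))  # moves s1 from app1 to app2
```
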
  • Publication number: 20170206148
    Abstract: The disclosure is directed to a failover mechanism for failing over an application service, e.g., a messaging service, from servers in a first region to servers in a second region. Data is stored as shards, and each shard contains data associated with a subset of the users. Data access requests for a shard are served by the shard's primary region. A global shard manager manages failing over the application service from a shard's current primary region to its secondary region. The current primary region determines whether a criterion for failing over is satisfied, e.g., whether the replication lag between the primary and secondary regions is within a threshold; if it is, the failover process waits until the lag reaches zero. After the replication lag is zero, the application service is failed over to the second region, which then becomes the primary region for the shard.
    Type: Application
    Filed: January 20, 2016
    Publication date: July 20, 2017
    Inventors: Vikas Mehta, Haobo Xu, Jason Curtis Jenks, Hairong Kuang, Pierre-Luc Bertrand, Andrei Lutsenko, Zhenhua Guo, Jun Ying
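
The failover criterion in the entry above (begin failover only when replication lag is within a threshold, then wait for it to drain to zero before promoting the secondary) can be sketched as the small gate below. The function names and the lag source are hypothetical stand-ins rather than the patented system's interfaces.

```python
# Hypothetical sketch of the failover gate: proceed only if the replication lag
# is within a threshold, then wait for it to reach zero before switching the
# shard's primary region.
import time


def failover_shard(shard, get_replication_lag, promote_secondary,
                   lag_threshold_s=5.0, poll_interval_s=0.5):
    lag = get_replication_lag(shard)
    if lag > lag_threshold_s:
        return False                       # criterion not met; do not fail over yet
    while get_replication_lag(shard) > 0:  # drain remaining lag
        time.sleep(poll_interval_s)
    promote_secondary(shard)               # secondary region becomes primary
    return True


if __name__ == "__main__":
    lags = iter([3.0, 1.0, 0.0, 0.0])
    ok = failover_shard("shard-42",
                        get_replication_lag=lambda s: next(lags),
                        promote_secondary=lambda s: print(f"{s}: promoted secondary"),
                        poll_interval_s=0.0)
    print("failover performed:", ok)
```
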
  • Patent number: 9551725
    Abstract: A system for monitoring cleanliness of a material handling system is disclosed. The system includes a measuring device, a first signal device, a second signal device, and a measuring host. The measuring device, installed in the cartridges, conducts the cleanliness measurements and obtains the measured results. The first signal device, installed in the cartridges, transforms the measured results into wireless signals. The second signal device, installed in a predetermined location outside of the cartridges, receives the wireless signals. The measuring host transforms the received wireless signals back into the measured results. The cartridges of the material handling system are also disclosed. The disclosed cleanliness monitoring system and cartridge can reduce cost.
    Type: Grant
    Filed: September 5, 2012
    Date of Patent: January 24, 2017
    Assignee: Shenzhen China Star Optoelectronics Technology Co., Ltd
    Inventors: Yongqiang Wang, Chunhao Wu, Kunhsien Lin, Xiande Li, Minghu Qi, Weibing Yang, Zenghong Chen, Zhenhua Guo, Yunshao Jiang, Zhiyou Shu