Patents by Inventor Shangong Wang

Shangong Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11964308
    Abstract: The present invention discloses a molten salt ultrasonic cleaning machine. The molten salt ultrasonic cleaning machine includes a tank body, a molten salt heating system, an ultrasonic application system and a stirring system, wherein the tank body is configured to accommodate molten salt and a to-be-cleaned workpiece; the tank body includes a bottom wall and a side wall arranged circumferentially around it; the molten salt heating system is configured to heat the molten salt in the tank body; the ultrasonic application system is configured to apply ultrasonic impact to the to-be-cleaned workpiece in the tank body; and the stirring system includes a stirring rod which is rotatably arranged in the tank body. When the molten salt ultrasonic cleaning machine provided by the present invention cleans a workpiece, the stirring rod rotates to improve the flowability of the molten salt.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: April 23, 2024
    Assignee: JIANGSU XCMG CONSTRUCTION MACHINERY RESEARCH INSTITUTE LTD.
    Inventors: Shangong Wang, Xuemei Zong, Guangcun Wang
  • Publication number: 20200026988
    Abstract: Methods and systems are disclosed for improved training and learning in deep neural networks. In one example, a deep neural network includes a plurality of layers, and each layer has a plurality of nodes. For each L layer in the plurality of layers, the nodes of each L layer are randomly connected to nodes in an L+1 layer. For each L+1 layer in the plurality of layers, the nodes of each L+1 layer are connected to nodes in a subsequent L layer in a one-to-one manner. Parameters related to the nodes of each L layer are fixed. Parameters related to the nodes of each L+1 layer are updated, and L is an integer starting with 1. In another example, a deep neural network includes an input layer, an output layer, and a plurality of hidden layers. Inputs for the input layer and labels for the output layer are determined for a first sample. Similarity between the input-label pairs of a second sample and those of the first sample is estimated using Gaussian process regression. (An illustrative sketch of the fixed-random connectivity scheme follows this entry.)
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Yiwen Guo, Anbang Yao, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Shangong Wang, Wenhua Cheng, Yurong Chen
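The alternating connectivity this abstract describes, a randomly connected stage from layer L to layer L+1 whose parameters stay fixed, followed by one-to-one connections whose parameters are updated, can be sketched in a few lines. The sketch below is an assumption-laden illustration rather than the claimed method: the sparse-mask construction, the use of a per-node scale and bias as the one-to-one stage, and all names (RandomFixedBlock, connect_prob) are hypothetical.

```python
import torch
import torch.nn as nn

class RandomFixedBlock(nn.Module):
    """One L -> L+1 stage: a randomly connected, frozen linear layer
    followed by a trainable one-to-one (per-node) connection."""

    def __init__(self, in_features, out_features, connect_prob=0.1):
        super().__init__()
        # Nodes of layer L are randomly connected to nodes of layer L+1;
        # the resulting parameters are kept fixed during training.
        self.random_linear = nn.Linear(in_features, out_features, bias=False)
        mask = (torch.rand(out_features, in_features) < connect_prob).float()
        with torch.no_grad():
            self.random_linear.weight.mul_(mask)
        self.random_linear.weight.requires_grad = False

        # One-to-one connection to the subsequent layer: a trainable
        # per-node scale and bias, the only parameters that get updated.
        self.scale = nn.Parameter(torch.ones(out_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        h = torch.relu(self.random_linear(x))
        return h * self.scale + self.bias

# Stacking two such stages (L = 1, 2) gives a small network in which only
# the one-to-one parameters are trainable.
net = nn.Sequential(RandomFixedBlock(784, 256), RandomFixedBlock(256, 64))
x = torch.randn(8, 784)
print(net(x).shape)  # torch.Size([8, 64])
```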
  • Publication number: 20200026499
    Abstract: Described herein is hardware acceleration of random number generation for machine learning and deep learning applications. An apparatus includes a uniform random number generator (URNG) circuit to generate uniform random numbers and an adder circuit that is coupled to the URNG circuit. The adder circuit accelerates generation of Gaussian random numbers for machine learning in hardware. (An illustrative sketch of the uniform-to-Gaussian accumulation idea follows this entry.)
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Yiwen Guo, Anbang Yao, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Shangong Wang, Wenhua Cheng
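The pairing of a uniform random number generator with an adder suggests the classic central-limit-theorem construction, in which several independent uniform samples are summed to approximate a Gaussian sample. The twelve-draw software analogue below is an assumption made for illustration; it is not the circuit claimed in the publication, and the function name is hypothetical.

```python
import numpy as np

def gaussian_from_uniform(n_samples, rng=None):
    """Approximate standard-normal samples by accumulating uniform draws.

    Summing 12 independent U(0, 1) draws gives a value with mean 6 and
    variance 1; subtracting 6 yields an approximately N(0, 1) sample.
    In the hardware setting of the abstract, the URNG would supply the
    uniform draws and the adder would perform the accumulation.
    """
    rng = rng or np.random.default_rng()
    uniform_draws = rng.random((n_samples, 12))   # URNG output
    return uniform_draws.sum(axis=1) - 6.0        # adder plus offset

samples = gaussian_from_uniform(100_000)
print(samples.mean(), samples.std())  # close to 0 and 1, respectively
```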
  • Publication number: 20200026999
    Abstract: Methods and systems are disclosed for boosting deep neural networks in deep learning. In one example, in a deep neural network including a first shallow network and a second shallow network, a first training sample is processed by the first shallow network using equal weights. A loss for the first shallow network is determined based on the processed training sample using equal weights. Weights for the second shallow network are adjusted based on the determined loss for the first shallow network. A second training sample is processed by the second shallow network using the adjusted weights. In another example, in a deep neural network including a first weak network and a second weak network, a first subset of training samples is processed by the first weak network using initialized weights. A classification error for the first weak network on the first subset of training samples is determined. (An illustrative boosting sketch follows this entry.)
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Libin Wang, Yiwen Guo, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shangong Wang, Wenhua Cheng, Yurong Chen
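The two-stage flow in this abstract, a first shallow (or weak) network trained with equal sample weights and a second network trained with weights adjusted according to the first network's loss or classification error, resembles boosting with neural base learners. The sketch below uses a one-layer logistic-regression "shallow network" and an AdaBoost-style re-weighting rule as stand-ins; the specific update formula and all names are assumptions for illustration, not taken from the publication.

```python
import numpy as np

def train_shallow_net(X, y, sample_weights, epochs=300, lr=2.0):
    """A one-layer 'shallow network' (logistic regression) trained with a
    weighted cross-entropy gradient; returns its weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (sample_weights * (p - y)))
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# First shallow network: trained with equal sample weights.
weights = np.full(len(y), 1.0 / len(y))
w1 = train_shallow_net(X, y, weights)
pred1 = (X @ w1 > 0).astype(float)

# The first network's classification error drives the weight adjustment:
# misclassified samples receive more weight before the second network trains.
err = np.sum(weights * (pred1 != y)) / np.sum(weights)
alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
weights *= np.exp(alpha * (pred1 != y))
weights /= weights.sum()

# Second shallow network: trained with the adjusted weights.
w2 = train_shallow_net(X, y, weights)
print("first-network error:", err)
```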
  • Publication number: 20200026965
    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. (An illustrative attention sketch follows this entry.)
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shangong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
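The local attention mechanism in this abstract first applies hard attention, selecting a subset of the CNN feature maps, and then soft attention, weighting the selected maps. In the sketch below, scoring maps by mean activation, keeping the top-k, and deriving softmax weights from the same scores are all illustrative assumptions; the publication's actual mechanism sits between a CNN and an LSTM inside an RDQN and is not reproduced here.

```python
import numpy as np

def local_attention(feature_maps, k=4):
    """Hard attention (keep the top-k feature maps by mean activation)
    followed by soft attention (softmax weights over the kept maps).

    feature_maps: array of shape (num_maps, height, width), e.g. CNN output.
    Returns weighted feature maps of shape (k, height, width).
    """
    # Hard attention: score each map and select a subset.
    scores = feature_maps.mean(axis=(1, 2))
    top_k = np.argsort(scores)[-k:]
    selected = feature_maps[top_k]

    # Soft attention: softmax over the selected maps' scores gives weights.
    logits = scores[top_k]
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    return selected * weights[:, None, None]

maps = np.random.default_rng(1).normal(size=(16, 7, 7))  # stand-in CNN feature maps
weighted = local_attention(maps, k=4)
print(weighted.shape)  # (4, 7, 7)
```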