Patents by Inventor Dongqi Cai

Dongqi Cai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210201078
    Abstract: Methods and systems for advanced and augmented training of deep neural networks (DNNs) using synthetic data and innovative generative networks. A method includes training a DNN using synthetic data, training a plurality of DNNs using context data, associating features of the DNNs trained using context data with features of the DNN trained with synthetic data, and generating an augmented DNN using the associated features.
    Type: Application
    Filed: April 7, 2017
    Publication date: July 1, 2021
    Inventors: Anbang Yao, Shandong Wang, Wenhua Cheng, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Yiwen Guo, Liu Yang, Yuqing Hou, Zhou Su, Yurong Chen
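
The abstract above is high level; as a minimal sketch of the core idea, one network trained on synthetic data and several networks trained on context data can have their features associated (here simply concatenated) into an augmented model. All module names and sizes below are illustrative assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

class SmallBackbone(nn.Module):
    """Tiny stand-in for a DNN feature extractor."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim))
    def forward(self, x):
        return self.net(x)

synthetic_dnn = SmallBackbone()                       # would be trained on synthetic data
context_dnns = [SmallBackbone() for _ in range(3)]    # each would be trained on context data

class AugmentedDNN(nn.Module):
    """Associates (concatenates) context features with synthetic-data features."""
    def __init__(self, synthetic_dnn, context_dnns, num_classes=10):
        super().__init__()
        self.synthetic_dnn = synthetic_dnn
        self.context_dnns = nn.ModuleList(context_dnns)
        self.classifier = nn.Linear(64 * (1 + len(context_dnns)), num_classes)
    def forward(self, x):
        feats = [self.synthetic_dnn(x)] + [m(x) for m in self.context_dnns]
        return self.classifier(torch.cat(feats, dim=1))

logits = AugmentedDNN(synthetic_dnn, context_dnns)(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```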
  • Publication number: 20210133911
    Abstract: Described herein are advanced artificial intelligence agents for modeling physical interactions. An apparatus to provide an active artificial intelligence (AI) agent includes at least one database to store physical interaction data and a compute cluster coupled to the at least one database. The compute cluster automatically obtains physical interaction data from a data collection module without manual interaction, stores the physical interaction data in the at least one database, and automatically trains diverse sets of machine learning program units to simulate physical interactions, with each individual program unit having a different model based on the applied physical interaction data.
    Type: Application
    Filed: April 7, 2017
    Publication date: May 6, 2021
    Inventors: Anbang YAO, Dongqi CAI, Libin WANG, Lin XU, Ping HU, Shandong WANG, Wenhua CHENG, Yiwen GUO, Liu YANG, Yuqing HOU, Zhou SU
  • Publication number: 20210004572
    Abstract: Methods and apparatus for multi-task recognition using neural networks are disclosed. An example apparatus includes a filter engine to generate a facial identifier feature map based on image data, the facial identifier feature map to identify a face within the image data. The example apparatus also includes a sibling semantic engine to process the facial identifier feature map to generate an attribute feature map associated with a facial attribute. The example apparatus also includes a task loss engine to calculate a probability factor for the attribute, the probability factor identifying the facial attribute. The example apparatus also includes a report generator to generate a report indicative of a classification of the facial attribute.
    Type: Application
    Filed: March 26, 2018
    Publication date: January 7, 2021
    Inventors: Ping Hu, Anbang Yao, Yurong Chen, Dongqi Cai, Shandong Wang
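
A minimal sketch of the multi-task layout described in the abstract above: a shared "filter engine" backbone produces a facial feature map, sibling branches derive per-attribute feature maps, and each branch outputs a probability per attribute. All names, layer sizes, and attribute choices are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class FilterEngine(nn.Module):
    """Shared backbone producing the facial-identifier feature map."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.conv(x)

class SiblingBranch(nn.Module):
    """One branch per facial attribute, producing class logits."""
    def __init__(self, num_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes))
    def forward(self, fmap):
        return self.head(fmap)

backbone = FilterEngine()
branches = nn.ModuleDict({"smile": SiblingBranch(2), "glasses": SiblingBranch(2)})
fmap = backbone(torch.randn(1, 3, 64, 64))
probs = {name: torch.softmax(b(fmap), dim=1) for name, b in branches.items()}
print({k: v.shape for k, v in probs.items()})
```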
  • Publication number: 20200285879
    Abstract: A semiconductor package apparatus may include technology to apply a trained scene text detection network to an image to identify a core text region, a supportive text region, and a background region of the image, and detect text in the image based on the identified core text region and supportive text region. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: November 8, 2017
    Publication date: September 10, 2020
    Applicant: INTEL CORPORATION
    Inventors: Wenhua Cheng, Anbang Yao, Libin Wang, Dongqi Cai, Jianguo Li, Yurong Chen
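
The abstract above describes labeling an image into core text, supportive text, and background regions, then detecting text from the first two. A minimal per-pixel sketch of that three-way labeling follows; the network and the region-merging rule are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class SceneTextSegmenter(nn.Module):
    """Tiny stand-in network predicting three per-pixel classes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 1))           # core / supportive / background logits
    def forward(self, x):
        return self.net(x)                 # (N, 3, H, W)

CORE, SUPPORTIVE, BACKGROUND = 0, 1, 2
logits = SceneTextSegmenter()(torch.randn(1, 3, 128, 128))
labels = logits.argmax(dim=1)
text_mask = (labels == CORE) | (labels == SUPPORTIVE)   # detection uses both text regions
print(text_mask.float().mean())  # fraction of pixels predicted as text
```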
  • Publication number: 20200279156
    Abstract: A system to perform multi-modal analysis has at least three distinct characteristics: an early abstraction layer for each data modality integrating homogeneous feature cues coming from different deep learning architectures for that data modality, a late abstraction layer for further integrating heterogeneous features extracted from different models or data modalities and output from the early abstraction layer, and a propagation-down strategy for joint network training in an end-to-end manner. The system is thus able to consider correlations among homogeneous features and correlations among heterogeneous features at different levels of abstraction. The system further extracts and fuses discriminative information contained in these models and modalities for high-performance emotion recognition.
    Type: Application
    Filed: October 9, 2017
    Publication date: September 3, 2020
    Inventors: Dongqi Cai, Anbang Yao, Ping Hu, Shandong Wang, Yurong Chen
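
A minimal sketch of the two-stage fusion described above: an early abstraction layer per modality fuses homogeneous features from several models of that modality, and a late abstraction layer fuses the heterogeneous per-modality outputs, with the whole stack trainable end to end. Feature dimensions and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class EarlyAbstraction(nn.Module):
    """Fuses homogeneous features of one modality from several models."""
    def __init__(self, in_dims, out_dim=64):
        super().__init__()
        self.fc = nn.Linear(sum(in_dims), out_dim)
    def forward(self, feats):              # feats: list of tensors from different models
        return torch.relu(self.fc(torch.cat(feats, dim=1)))

class LateAbstraction(nn.Module):
    """Fuses heterogeneous features across modalities into emotion logits."""
    def __init__(self, num_modalities, dim=64, num_emotions=7):
        super().__init__()
        self.fc = nn.Linear(num_modalities * dim, num_emotions)
    def forward(self, modality_feats):
        return self.fc(torch.cat(modality_feats, dim=1))

audio_early = EarlyAbstraction([128, 40])     # e.g. two audio models' feature sizes
video_early = EarlyAbstraction([256, 512])    # e.g. two visual models' feature sizes
late = LateAbstraction(num_modalities=2)

audio = audio_early([torch.randn(4, 128), torch.randn(4, 40)])
video = video_early([torch.randn(4, 256), torch.randn(4, 512)])
emotion_logits = late([audio, video])          # jointly trainable by backpropagation
print(emotion_logits.shape)  # torch.Size([4, 7])
```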
  • Publication number: 20200242734
    Abstract: Methods and systems are disclosed using improved Convolutional Neural Networks (CNN) for image processing. In one example, an input image is down-sampled into smaller images with a lower resolution than the input image. The down-sampled smaller images are processed by a CNN whose last layer has fewer nodes than the last layer of a full CNN used to process the input image at full resolution. A result is output based on the down-sampled smaller images processed by the CNN with the reduced last layer. In another example, shallow CNNs are built randomly. The randomly built shallow CNNs are combined to imitate a trained deep neural network (DNN).
    Type: Application
    Filed: April 7, 2017
    Publication date: July 30, 2020
    Inventors: Shandong WANG, Yiwen GUO, Anbang YAO, Dongqi CAI, Libin WANG, Lin XU, Ping HU, Wenhua CHENG, Yurong CHEN
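
A minimal sketch of the down-sampling idea in the abstract above: the input image is reduced in resolution and fed to a CNN whose last layer is narrower than the full-resolution network's. Resolutions and node counts below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_cnn(last_layer_nodes):
    """Tiny CNN whose final layer width is configurable."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, last_layer_nodes))

full_cnn = make_cnn(last_layer_nodes=1024)      # would process the full-resolution image
reduced_cnn = make_cnn(last_layer_nodes=128)    # processes the down-sampled image

image = torch.randn(1, 3, 224, 224)
small = F.interpolate(image, scale_factor=0.25, mode="bilinear", align_corners=False)
result = reduced_cnn(small)                     # cheaper forward pass on the smaller image
print(result.shape)  # torch.Size([1, 128])
```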
  • Publication number: 20200234411
    Abstract: Methods and systems are disclosed using camera devices for deep channel and Convolutional Neural Network (CNN) images and formats. In one example, image values are captured by a color sensor array in an image capturing device or camera. The image values provide color channel data. The image values captured by the color sensor array are input to a CNN having at least one CNN layer. The CNN provides CNN channel data for each layer. The color channel data and CNN channel data together form a deep channel image that is stored in a memory. In another example, image values are captured by a sensor array. The image values captured by the sensor array are input to a CNN having a first CNN layer. An output is generated at the first CNN layer using the image values captured by the color sensor array. The output of the first CNN layer is stored as a feature map of the captured image.
    Type: Application
    Filed: April 7, 2017
    Publication date: July 23, 2020
    Inventors: Lin XU, Liu YANG, Anbang YAO, Dongqi CAI, Libin WANG, Ping HU, Shandong WANG, Wenhua CHENG, Yiwen GUO, Yurong CHEN
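
A minimal sketch of a "deep channel" image as described above: the raw color channels and the channels produced by a CNN layer share the same spatial size, so they can be stacked into one tensor, and the first layer's output is kept as a feature map. Channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

first_cnn_layer = nn.Conv2d(3, 8, 3, padding=1)      # one CNN layer applied to the sensor data

color_channels = torch.rand(1, 3, 64, 64)            # color channel data from the sensor array
cnn_channels = first_cnn_layer(color_channels)       # CNN channel data for that layer

# Deep channel image: concatenate color channels and CNN channels along the channel axis.
deep_channel_image = torch.cat([color_channels, cnn_channels], dim=1)
feature_map = cnn_channels.detach()                   # stored feature map of the captured image
print(deep_channel_image.shape)  # torch.Size([1, 11, 64, 64])
```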
  • Publication number: 20200226362
    Abstract: Techniques are provided for neural-network-based human attribute recognition, guided by anatomical key-points and statistical correlation models. Attributes include characteristics that can be visibly identified or inferred from an image, such as gender, hairstyle, clothing style, etc. A methodology implementing the techniques according to an embodiment includes applying an attribute feature extraction (AFE) convolutional neural network (CNN) to an image of a human to generate attribute feature maps based on the image. The method further includes applying a key-point guided proposal (KPG) CNN to the image of the human to generate proposed hierarchical regions of the image based on associated anatomical key-points.
    Type: Application
    Filed: December 27, 2017
    Publication date: July 16, 2020
    Applicant: INTEL CORPORATION
    Inventors: Ping Hu, Anbang Yao, Jia Wei, Dongqi Cai, Yurong Chen
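
A minimal sketch of the two components named in the abstract above: an AFE CNN that produces attribute feature maps, and a key-point guided step that turns anatomical key-points into hierarchical (nested) regions. The region construction below is a simple placeholder, not the patented KPG network.

```python
import torch
import torch.nn as nn

# Attribute feature extraction (AFE) stand-in: a small conv stack.
afe_cnn = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())

def keypoint_guided_regions(keypoints, margins=(10, 30, 60)):
    """Build nested boxes around the key-points, one region per margin (hierarchy)."""
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    return [(xs.min() - m, ys.min() - m, xs.max() + m, ys.max() + m) for m in margins]

image = torch.randn(1, 3, 128, 128)
attribute_feature_maps = afe_cnn(image)
keypoints = torch.tensor([[40., 50.], [80., 52.], [60., 90.]])  # e.g. eyes and mouth
regions = keypoint_guided_regions(keypoints)
print(attribute_feature_maps.shape, len(regions))
```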
  • Publication number: 20200026965
    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by weighting the selected feature maps to obtain weighted feature maps.
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Yiwen GUO, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
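
A minimal sketch of the local attention step described above: hard attention selects a subset of the CNN feature maps, then soft attention weights the selected maps before they are passed on (e.g. to an LSTM). The scoring and selection rules here are illustrative assumptions.

```python
import torch

feature_maps = torch.randn(16, 7, 7)               # 16 feature maps produced by the CNN

# Hard attention: keep the k maps with the largest average activation.
scores = feature_maps.mean(dim=(1, 2))
topk = scores.topk(k=4).indices
selected = feature_maps[topk]                      # selected subset of the feature maps

# Soft attention: weight the selected maps by a softmax over their scores.
weights = torch.softmax(scores[topk], dim=0)
weighted_maps = selected * weights[:, None, None]  # weighted feature maps
print(weighted_maps.shape)  # torch.Size([4, 7, 7])
```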
  • Publication number: 20200027015
    Abstract: Described herein are systems and methods for providing deeply stacked automated program synthesis. In one embodiment, an apparatus to perform automated program synthesis includes a memory to store instructions for automated program synthesis and a compute cluster coupled to the memory. The compute cluster supports the instructions for performing the automated program synthesis, including partitioning sketched data into partitions, training diverse sets of individual program synthesis units, each having different capabilities, with the partitioned sketched data, applying respective transformations for each partition, and generating sketched baseline data for each individual program synthesis unit.
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Anbang YAO, Dongqi CAI, Libin WANG, Lin XU, Ping HU, Shandong WANG, Wenhua CHENG, Yiwen GUO, Liu YANG, Yurong CHEN, Yuqing HOU, Zhou SU
  • Publication number: 20200026499
    Abstract: Described herein is hardware acceleration of random number generation for machine learning and deep learning applications. An apparatus (700) includes a uniform random number generator (URNG) circuit (710) to generate uniform random numbers and an adder circuit (750) that is coupled to the URNG circuit (710). The adder circuit (750) accelerates, in hardware, the generation of Gaussian random numbers for machine learning.
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Yiwen Guo, Anbang Yao, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng
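
A minimal sketch of the principle a URNG plus an adder can exploit: by the central limit theorem, summing several uniform random numbers yields an approximately Gaussian value. The choice of 12 addends (which gives unit variance via the Irwin-Hall distribution) is an assumption, not a detail taken from the patent.

```python
import numpy as np

def approx_gaussian(n_samples, n_addends=12, rng=np.random.default_rng(0)):
    """Approximate standard normal samples by summing uniform random numbers."""
    # The sum of 12 U(0,1) variables has mean 6 and variance 1,
    # so subtracting 6 yields an approximately standard normal value.
    u = rng.uniform(0.0, 1.0, size=(n_samples, n_addends))   # URNG output
    return u.sum(axis=1) - n_addends / 2.0                    # adder output

samples = approx_gaussian(100_000)
print(samples.mean(), samples.std())  # both close to 0 and 1 respectively
```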
  • Publication number: 20200026988
    Abstract: Methods and systems are disclosed using improved training and learning for deep neural networks. In one example, a deep neural network includes a plurality of layers, and each layer has a plurality of nodes. For each L layer in the plurality of layers, the nodes of each L layer are randomly connected to nodes in a L+1 layer. For each L+1 layer in the plurality of layers, the nodes of each L+1 layer are connected to nodes in a subsequent L layer in a one-to-one manner. Parameters related to the nodes of each L layer are fixed. Parameters related to the nodes of each L+1 layer are updated, and L is an integer starting with 1. In another example, a deep neural network includes an input layer, an output layer, and a plurality of hidden layers. Inputs for the input layer and labels for the output layer are determined for a first sample. Similarity between pairs of inputs and labels of a second sample and the first sample is estimated using Gaussian process regression.
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Yiwen Guo, Anbang Yao, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen
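
A minimal sketch of the layer-wiring scheme in the first example above: every L layer is randomly (sparsely) connected to layer L+1 and kept fixed, while every L+1 layer is connected one-to-one and is trainable. Layer sizes and the sparsity level are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RandomFixedLayer(nn.Module):
    """Layer with a fixed, random, sparse connection pattern (parameters not trained)."""
    def __init__(self, dim, keep_prob=0.3):
        super().__init__()
        weight = torch.randn(dim, dim) * (torch.rand(dim, dim) < keep_prob).float()
        self.register_buffer("weight", weight)       # buffer, so it is never updated
    def forward(self, x):
        return torch.relu(x @ self.weight.t())

class OneToOneLayer(nn.Module):
    """Element-wise (one-to-one) layer whose parameters are updated during training."""
    def __init__(self, dim):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
    def forward(self, x):
        return x * self.scale + self.bias

model = nn.Sequential(RandomFixedLayer(32), OneToOneLayer(32),
                      RandomFixedLayer(32), OneToOneLayer(32))
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # trainable params only
```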
  • Publication number: 20200026999
    Abstract: Methods and systems are disclosed for boosting deep neural networks for deep learning. In one example, in a deep neural network including a first shallow network and a second shallow network, a first training sample is processed by the first shallow network using equal weights. A loss for the first shallow network is determined based on the processed training sample using equal weights. Weights for the second shallow network are adjusted based on the determined loss for the first shallow network. A second training sample is processed by the second shallow network using the adjusted weights. In another example, in a deep neural network including a first weak network and a second weak network, a first subset of training samples is processed by the first weak network using initialized weights. A classification error for the first weak network on the first subset of training samples is determined.
    Type: Application
    Filed: April 7, 2017
    Publication date: January 23, 2020
    Inventors: Libin Wang, Yiwen Guo, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen
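
A minimal sketch of the boosting idea described above, in the spirit of AdaBoost: the first weak learner sees equally weighted samples, its weighted error determines new sample weights, and the second weak learner trains on the re-weighted samples. Decision stumps stand in for the shallow networks here; all data and weighting details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.3 * X[:, 1] > 0, 1, -1)

def train_stump(X, y, w):
    """Pick the single-feature threshold with the lowest weighted error."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = np.where(X[:, f] > t, 1, -1)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, f, t)
    return best

weights = np.full(len(y), 1.0 / len(y))              # equal weights for the first learner
err1, f1, t1 = train_stump(X, y, weights)

pred1 = np.where(X[:, f1] > t1, 1, -1)
alpha = 0.5 * np.log((1 - err1) / max(err1, 1e-12))
weights *= np.exp(-alpha * y * pred1)                # adjust weights from the first learner's loss
weights /= weights.sum()

err2, f2, t2 = train_stump(X, y, weights)            # second learner sees re-weighted samples
print(err1, err2)
```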
  • Publication number: 20190325203
    Abstract: An apparatus for dynamic emotion recognition in unconstrained scenarios is described herein. The apparatus comprises a controller to pre-process image data and a phase-convolution mechanism to build lower levels of a CNN such that the filters form pairs in phase. The apparatus also comprises a phase-residual mechanism configured to build middle layers of the CNN via a plurality of residual functions and an inception-residual mechanism to build top layers of the CNN by introducing multi-scale feature extraction. Further, the apparatus comprises a fully connected mechanism to classify the extracted features.
    Type: Application
    Filed: January 20, 2017
    Publication date: October 24, 2019
    Applicant: INTEL CORPORATION
    Inventors: Anbang Yao, Dongqi Cai, Ping Hu, Shandong Wang, Yurong Chen
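
A minimal sketch of one common reading of "filters form pairs in phase" from the abstract above: each learned lower-layer filter w is paired with its negated copy -w, so the layer responds to both phases of a pattern before the rectifier. This interpretation and the sizes used are assumptions for illustration, not the patented design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhaseConv2d(nn.Module):
    """Convolution whose output channels come in (w, -w) phase pairs."""
    def __init__(self, in_ch, out_pairs, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_pairs, in_ch, kernel_size, kernel_size) * 0.1)
    def forward(self, x):
        w = torch.cat([self.weight, -self.weight], dim=0)   # filters in phase pairs
        return F.relu(F.conv2d(x, w, padding=1))

layer = PhaseConv2d(in_ch=3, out_pairs=8)        # produces 16 feature maps (8 pairs)
out = layer(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 16, 64, 64])
```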