Patents by Inventor Patrick Z. Dong

Patrick Z. Dong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200304831
    Abstract: Methods and systems for using feature encoding for storing a video stream without redundant frames are disclosed. A video stream containing a plurality of frames is received in a computing system. Each frame is divided into one or more sub-frames, with each sub-frame at a resolution suitable as an input image to a deep learning model based on VGG-16, ResNet, or MobileNet. Respective vectors of feature encoding values of all sub-frames of the current and immediately prior frames are obtained by performing computations of the deep learning model. A difference metric between the current frame and the immediately prior frame is obtained by comparing the respective vectors using a difference measurement technique. The current frame is stored in a to-be-kept video file only when the difference metric indicates that the current frame and the immediately prior frame are different in accordance with a predefined criterion.
    Type: Application
    Filed: April 9, 2019
    Publication date: September 24, 2020
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
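
A minimal sketch of the frame-gating idea described in the abstract above, assuming a placeholder feature encoder in place of the VGG-16/ResNet/MobileNet model, a mean-centered cosine distance as the difference measurement technique, and an illustrative 0.05 threshold as the predefined criterion; none of these specific choices are prescribed by the patent:

```python
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Placeholder feature encoder. The described system runs a deep learning
    model (VGG-16, ResNet, or MobileNet) on each sub-frame; a coarse 8x8 block
    average stands in here so the sketch runs without a trained model."""
    h, w = frame.shape[:2]
    blocks = frame[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8, -1)
    return blocks.mean(axis=(1, 3)).ravel()

def difference_metric(a: np.ndarray, b: np.ndarray) -> float:
    """Mean-centered cosine distance, one possible difference measurement technique."""
    a, b = a - a.mean(), b - b.mean()
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def deduplicate(frames, threshold=0.05):
    """Keep a frame only when its encoding differs enough from the prior frame."""
    kept, prev = [], None
    for frame in frames:
        vec = extract_features(frame)
        if prev is None or difference_metric(vec, prev) > threshold:
            kept.append(frame)
        prev = vec
    return kept

# Ten nearly identical frames plus one unrelated frame -> 2 frames kept.
rng = np.random.default_rng(0)
base = rng.random((64, 64, 3))
frames = [base + rng.normal(0.0, 0.001, base.shape) for _ in range(10)]
frames.append(rng.random((64, 64, 3)))
print(len(deduplicate(frames)))
```
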
  • Patent number: 10482374
    Abstract: An ensemble learning based image classification system contains multiple cellular neural networks (CNN) based integrated circuits (ICs) operatively coupled together as a set of base learners of an ensemble for an image classification task. Each CNN based IC is configured with at least one distinct deep learning model in the form of filter coefficients. The ensemble learning based image classification system further contains a controller configured as a meta learner of the ensemble and a memory-based data buffer for holding various data used in the ensemble by the controller and the CNN based ICs. Such data may include input imagery data to be classified, as well as extracted feature vectors or image classification outputs from the set of base learners. The extracted feature vectors or image classification outputs are then used by the meta learner to further perform the image classification task.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: November 19, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Jason Z. Dong, Michael Lin, Baohua Sun
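
A hedged sketch of the stacking pattern the abstract above describes, with scikit-learn classifiers standing in for the CNN based ICs (base learners) and for the controller (meta learner); the dataset, the models, and the practice of fitting the meta learner directly on training-set outputs are illustrative simplifications, not the patented implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for the imagery data held in the memory-based buffer.
X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners: each plays the role of one CNN based IC loaded with a
# distinct deep learning model; here, two ordinary classifiers.
base_learners = [LogisticRegression(max_iter=1000),
                 DecisionTreeClassifier(max_depth=5, random_state=0)]
for learner in base_learners:
    learner.fit(X_train, y_train)

def stacked_outputs(X):
    """Concatenate the base learners' class-probability outputs."""
    return np.hstack([learner.predict_proba(X) for learner in base_learners])

# Meta learner (the controller's role): trained on the base learners' outputs.
meta_learner = LogisticRegression(max_iter=1000)
meta_learner.fit(stacked_outputs(X_train), y_train)
print("ensemble accuracy:", meta_learner.score(stacked_outputs(X_test), y_test))
```

A production stacking setup would fit the meta learner on out-of-fold base-learner outputs to avoid leakage; that detail is skipped here for brevity.
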
  • Patent number: 10445568
    Abstract: Two-dimensional symbols, each containing multiple ideograms for facilitating machine learning, are disclosed. The two-dimensional symbol comprises a matrix of N×N pixels of data representing a “super-character”. The matrix is divided into M×M sub-matrices, with each of the sub-matrices containing (N/M)×(N/M) pixels. N and M are positive integers or whole numbers, and N is preferably a multiple of M. Each of the sub-matrices represents one ideogram defined in an ideogram collection set. The “super-character” represents at least one meaning, each formed with a specific combination of a plurality of ideograms. The ideogram collection set includes, but is not limited to, pictograms, logosyllabic characters, Japanese characters, Korean characters, punctuation marks, numerals, and special characters. Logosyllabic characters may contain one or more of Chinese, Japanese, or Korean characters. Features of each ideogram can be represented by more than one layer of the two-dimensional symbol.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: October 15, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
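
A small sketch of assembling an N×N “super-character” from M×M ideogram sub-matrices as described in the entry above; the glyph renderer is a placeholder (a real system would rasterize embedded fonts), and N = 224, M = 4 are illustrative values only:

```python
import numpy as np

N, M = 224, 4            # N x N pixel symbol, divided into M x M sub-matrices
GLYPH = N // M           # each ideogram occupies (N/M) x (N/M) = 56 x 56 pixels

def render_ideogram(char: str) -> np.ndarray:
    """Placeholder: a real system would rasterize the character from an
    embedded font; a seeded random bitmap stands in here."""
    rng = np.random.default_rng(ord(char))
    return (rng.random((GLYPH, GLYPH)) > 0.5).astype(np.uint8) * 255

def build_super_character(ideograms):
    """Tile up to M*M ideograms into one N x N two-dimensional symbol."""
    symbol = np.zeros((N, N), dtype=np.uint8)
    for index, char in enumerate(ideograms[: M * M]):
        row, col = divmod(index, M)
        symbol[row * GLYPH:(row + 1) * GLYPH,
               col * GLYPH:(col + 1) * GLYPH] = render_ideogram(char)
    return symbol

symbol = build_super_character(list("深度學習芯片分類"))
print(symbol.shape)      # (224, 224), ready for a CNN-based image classifier
```
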
  • Patent number: 10417342
    Abstract: A local processing device contains a bus, an input interface and at least one cellular neural networks (CNN) based integrated circuit (IC). The input interface receives a 2-D symbol representing Chinese poetry or verse. The 2-D symbol is a matrix of N×N pixels of K-bit data that contains a “super-character”. The matrix is divided into M×M sub-matrices each containing (N/M)×(N/M) pixels. Each of the sub-matrices represents an ideogram. K, N and M are positive integers, and N is a multiple of M. The CNN based IC is configured for understanding the semantic meaning of the Chinese poetry or verse within the “super-character” contained in the 2-D symbol. The ideogram is created by embedded fonts of all of the characters contained in a corresponding phrase of the Chinese poetry or verse, or is a pictogram representing the artistic conception of each sentence of the Chinese poetry or verse.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: September 17, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
  • Patent number: 10402628
    Abstract: An image classification system contains a CNN based IC configured for extracting features out of input data by performing convolution operations using filter coefficients of ordered convolutional layers, and a classifier IC configured for classifying the input data using a reduced set of the extracted features based on a light-weight classifier. The light-weight classifier is derived by: training filter coefficients of the ordered convolutional layers using a dataset containing N labeled data, where the trained filter coefficients are for the CNN based IC; outputting respective extracted features of the N labeled data after performing convolution operations of the ordered convolutional layers using the trained filter coefficients, where each labeled data contains X features; creating the reduced set of the extracted features by eliminating those of the X features that contain zeros in at least M of the N labeled data; and adjusting M until the light-weight classifier achieves satisfactory results using the reduced set.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: September 3, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Jason Dong, Wenhan Zhang, Baohua Sun
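
A brief numpy sketch of the pruning rule in the abstract above: drop every extracted feature that is zero in at least M of the N labeled samples, then sweep M. The synthetic feature matrix and the chosen M values are illustrative, and training the light-weight classifier on the reduced set is omitted:

```python
import numpy as np

def keep_mask(features: np.ndarray, M: int) -> np.ndarray:
    """features: N x X matrix of extracted features for N labeled samples.
    Drop every feature (column) that is zero in at least M of the N samples."""
    zero_counts = (features == 0).sum(axis=0)
    return zero_counts < M

# Synthetic extracted features with varying per-feature sparsity, mimicking
# mostly-zero activations after the ordered convolutional layers.
rng = np.random.default_rng(1)
N, X = 1000, 512
sparsity = rng.uniform(0.3, 0.95, X)                    # per-feature zero rate
features = rng.random((N, X)) * (rng.random((N, X)) > sparsity)

# Sweep M; in the patent, M is adjusted until the light-weight classifier
# trained on the reduced set performs satisfactorily (training omitted here).
for M in (900, 700, 500):
    mask = keep_mask(features, M)
    print(f"M={M}: keep {mask.sum()} of {X} features")
```
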
  • Patent number: 10387772
    Abstract: An ensemble learning based image classification system contains multiple cellular neural networks (CNN) based integrated circuits (ICs) operatively coupled together as a set of base learners of an ensemble for an image classification task. Each CNN based IC is configured with at least one distinct deep learning model in the form of filter coefficients. The ensemble learning based image classification system further contains a controller configured as a meta learner of the ensemble and a memory-based data buffer for holding various data used in the ensemble by the controller and the CNN based ICs. Such data may include input imagery data to be classified, as well as extracted feature vectors or image classification outputs from the set of base learners. The extracted feature vectors or image classification outputs are then used by the meta learner to further perform the image classification task.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: August 20, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Jason Z. Dong, Michael Lin, Baohua Sun
  • Patent number: 10387740
    Abstract: A deep learning object detection and recognition system contains a number of cellular neural networks (CNN) based integrated circuits (ICs) operatively coupled together via a network bus. The system is configured for detecting and then recognizing one or more objects out of two-dimensional (2-D) imagery data. The 2-D imagery data is divided into N sets of distinct sub-regions in accordance with respective N partition schemes. CNN based ICs are dynamically allocated for extracting features out of each sub-region for detecting and then recognizing an object potentially contained therein. Any two of the N sets of sub-regions overlap each other. N is a positive integer. Object detection is achieved with a two-category classification using a deep learning model based on approximated fully-connected layers, while object recognition is performed using a local database storing feature vectors of known objects.
    Type: Grant
    Filed: May 19, 2018
    Date of Patent: August 20, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Jason Z. Dong, Wenhan Zhang, Baohua Sun
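
A rough sketch of the partition-and-recognize idea above under strong simplifications: several overlapping grid partition schemes generate sub-regions, a placeholder histogram extractor stands in for the CNN based IC features, and recognition is a nearest-neighbor match against a local database of known-object feature vectors; the detection step and dynamic IC allocation are omitted:

```python
import numpy as np

def partition(image: np.ndarray, grid: int):
    """One partition scheme: divide the image into a grid x grid set of sub-regions."""
    h, w = image.shape[:2]
    for r in range(grid):
        for c in range(grid):
            yield image[r * h // grid:(r + 1) * h // grid,
                        c * w // grid:(c + 1) * w // grid]

def extract_features(region: np.ndarray) -> np.ndarray:
    """Placeholder for the CNN-extracted feature vector: a normalized histogram."""
    hist, _ = np.histogram(region, bins=32, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def recognize(region, database, threshold=0.01):
    """Nearest-neighbor match against the local database of known-object features."""
    vec = extract_features(region)
    name, ref = min(database.items(), key=lambda kv: np.linalg.norm(vec - kv[1]))
    return name if np.linalg.norm(vec - ref) < threshold else None

rng = np.random.default_rng(2)
image = rng.random((128, 128))
# The database holds the features of one known object; here it happens to be the
# top-left quadrant of the image, so the 2x2 scheme recovers it exactly.
database = {"known-object": extract_features(image[:64, :64])}

# Three partition schemes (2x2, 3x3, 4x4); any two schemes overlap each other.
for grid in (2, 3, 4):
    hits = [recognize(region, database) for region in partition(image, grid)]
    print(f"{grid}x{grid} scheme:", [h for h in hits if h])
```
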
  • Patent number: 10366302
    Abstract: A CNN based integrated circuit is configured with a set of pre-trained filter coefficients or weights as a feature extractor of input data. Multiple fully-connected networks (FCNs) are trained for use in a hierarchical category classification scheme. Each FCN is capable of classifying the input data via the extracted features in a specific level of the hierarchical category classification scheme. First, a root level FCN is used for classifying the input data among a set of top level categories. Then, a relevant next level FCN is used in conjunction with the same extracted features for further classifying the input data among a set of subcategories of the most probable category identified using the previous level FCN. The hierarchical category classification scheme continues for further detailed subcategories if desired.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: July 30, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
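
A minimal sketch of the hierarchical routing described above, assuming placeholder linear "FCNs" with random weights; only the control flow, a root-level classification followed by the relevant next-level classifier reusing the same extracted features, reflects the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
FEATURE_DIM = 128

top_categories = ["animal", "vehicle"]
subcategories = {"animal": ["cat", "dog", "bird"], "vehicle": ["car", "truck"]}

def make_fcn(n_classes):
    """Placeholder 'fully-connected network': a random linear layer plus argmax.
    Real weights would come from training on the extracted features."""
    weights = rng.normal(size=(FEATURE_DIM, n_classes))
    return lambda features: int(np.argmax(features @ weights))

root_fcn = make_fcn(len(top_categories))
sub_fcns = {name: make_fcn(len(subs)) for name, subs in subcategories.items()}

def classify_hierarchically(features: np.ndarray):
    """Root-level FCN picks the top category; the relevant next-level FCN then
    refines it, reusing the same extracted feature vector."""
    top = top_categories[root_fcn(features)]
    sub = subcategories[top][sub_fcns[top](features)]
    return top, sub

features = rng.normal(size=FEATURE_DIM)   # stand-in for CNN IC extracted features
print(classify_hierarchically(features))
```
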
  • Patent number: 10366328
    Abstract: Multiple 3×3 convolutional filter kernels are used for approximating operations of fully-connected (FC) layers. The image classification task is entirely performed within a CNN based integrated circuit. Output at the end of the ordered convolutional layers contains P feature maps with F×F pixels of data per feature map. The 3×3 filter kernels comprise L layers, each organized in an array of R×Q of 3×3 filter kernels, where Q and R are the respective numbers of input and output feature maps of a particular layer of the L layers. Each input feature map of the particular layer comprises F×F pixels of data with one-pixel padding added around its perimeter. Each output feature map of the particular layer comprises (F−2)×(F−2) pixels of useful data. Output of the last layer of the L layers contains Z classes. L equals (F−1)/2 if F is an odd number. P, F, Q, R and Z are positive integers.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: July 30, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Jason Z. Dong, Baohua Sun
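
A short PyTorch illustration of the layer-count arithmetic above: with F = 7, the abstract's L = (F−1)/2 gives three 3×3 layers, each shrinking the useful area by two pixels per side (7×7 → 5×5 → 3×3 → 1×1), so the final 1×1×Z output acts like a fully-connected classifier over Z classes. The channel widths and the use of unpadded convolutions (which give the same shrinkage of the useful region) are illustrative choices:

```python
import torch

P, F, Z = 8, 7, 10                     # P feature maps of F x F pixels, Z classes
layers_needed = (F - 1) // 2           # L = (F - 1) / 2 for odd F -> 3 layers

# Each unpadded 3x3 convolution shrinks the useful area from F x F to
# (F - 2) x (F - 2), so three layers take 7x7 -> 5x5 -> 3x3 -> 1x1.
approx_fc = torch.nn.Sequential(
    torch.nn.Conv2d(P, 16, kernel_size=3),    # 7x7 -> 5x5
    torch.nn.Conv2d(16, 16, kernel_size=3),   # 5x5 -> 3x3
    torch.nn.Conv2d(16, Z, kernel_size=3),    # 3x3 -> 1x1, one value per class
)

feature_maps = torch.randn(1, P, F, F)        # output of the ordered conv layers
logits = approx_fc(feature_maps)
print(layers_needed, logits.shape)            # 3 torch.Size([1, 10, 1, 1])
```
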
  • Publication number: 20190228219
    Abstract: Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. A plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. Particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural networks model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
    Type: Application
    Filed: April 4, 2019
    Publication date: July 25, 2019
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
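
A toy end-to-end sketch of the pipeline in the abstract above, with every model replaced by an obvious placeholder: classify each selected frame to a pose category, embed the categories as ideogram tiles in a 2-D symbol, then classify the whole symbol to a motion. Only the three-stage structure comes from the patent; the classifiers and glyphs here are stand-ins:

```python
import numpy as np

POSE_CATEGORIES = ["stand", "crouch", "leap", "land"]
GLYPH = 32                      # pixels per ideogram tile; 2 x 2 tiles per symbol

def classify_pose(frame: np.ndarray) -> str:
    """Placeholder per-frame classifier over pose categories."""
    index = min(int(frame.mean() * len(POSE_CATEGORIES)), len(POSE_CATEGORIES) - 1)
    return POSE_CATEGORIES[index]

def pose_ideogram(pose: str) -> np.ndarray:
    """Placeholder glyph for a text category (a real system renders an ideogram)."""
    rng = np.random.default_rng(POSE_CATEGORIES.index(pose))
    return rng.random((GLYPH, GLYPH))

def frames_to_symbol(frames) -> np.ndarray:
    """Embed each selected frame's text category as an ideogram in a 2-D symbol."""
    symbol = np.zeros((2 * GLYPH, 2 * GLYPH))
    for i, frame in enumerate(frames[:4]):
        r, c = divmod(i, 2)
        symbol[r * GLYPH:(r + 1) * GLYPH,
               c * GLYPH:(c + 1) * GLYPH] = pose_ideogram(classify_pose(frame))
    return symbol

def classify_motion(symbol: np.ndarray) -> str:
    """Placeholder for the trained CNN that maps the whole symbol to a motion."""
    return "jump" if symbol.std() > 0.1 else "idle"

frames = [np.full((64, 64), v) for v in (0.1, 0.4, 0.7, 0.9)]
print(classify_motion(frames_to_symbol(frames)))
```
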
  • Patent number: 10360470
    Abstract: Methods and systems of replacing operations of depthwise separable filters with first and second replacement convolutional layers are disclosed. Depthwise separable filters contain a combination of a depthwise convolutional layer followed by a pointwise convolutional layer, with input of P feature maps and output of Q feature maps. The first replacement convolutional layer contains P×P of 3×3 filter kernels formed by placing each of the P×1 of 3×3 filter kernels of the depthwise convolutional layer on the respective P diagonal locations, and zero-value 3×3 filter kernels in all off-diagonal locations. The second replacement convolutional layer contains Q×P of 3×3 filter kernels formed by placing the Q×P of 1×1 filter coefficients of the pointwise convolutional layer in the center position of the respective Q×P of 3×3 filter kernels, and the numerical value zero in the eight perimeter positions.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: July 23, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Jason Z. Dong, Baohua Sun
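
A PyTorch check of the construction described above, under the assumption that standard zero-padded 3×3 convolutions model the IC's layers: the first replacement layer puts the P depthwise 3×3 kernels on the diagonal of a P×P kernel array, the second puts the Q×P pointwise 1×1 coefficients at the center of 3×3 kernels, and the replaced pair reproduces the depthwise-separable output:

```python
import torch

P, Q, H, W = 4, 6, 8, 8
x = torch.randn(1, P, H, W)

# Original depthwise separable pair (random weights stand in for a trained model).
depthwise = torch.nn.Conv2d(P, P, 3, padding=1, groups=P, bias=False)
pointwise = torch.nn.Conv2d(P, Q, 1, bias=False)

# First replacement layer: P x P of 3x3 kernels, depthwise kernels on the
# diagonal, zero-value kernels everywhere else.
rep1 = torch.nn.Conv2d(P, P, 3, padding=1, bias=False)
rep1.weight.data.zero_()
for p in range(P):
    rep1.weight.data[p, p] = depthwise.weight.data[p, 0]   # depthwise weight is (P, 1, 3, 3)

# Second replacement layer: Q x P of 3x3 kernels with the 1x1 pointwise
# coefficients at the center position and zeros in the eight perimeter positions.
rep2 = torch.nn.Conv2d(P, Q, 3, padding=1, bias=False)
rep2.weight.data.zero_()
rep2.weight.data[:, :, 1, 1] = pointwise.weight.data[:, :, 0, 0]

with torch.no_grad():
    y_original = pointwise(depthwise(x))
    y_replaced = rep2(rep1(x))
print(torch.allclose(y_original, y_replaced, atol=1e-4))   # True
```
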
  • Patent number: 10339445
    Abstract: Operations of a combination of first and second original convolutional layers followed by a shortcut path are replaced by operations of a set of three particular convolutional layers. The first contains 2N×N filter kernels formed by placing the N×N filter kernels of the first original convolutional layer in the left side and N×N filter kernels of an identity-value convolutional layer in the right side. The second contains 2N×2N filter kernels formed by placing the N×N filter kernels of the second original convolutional layer in the upper left corner, N×N filter kernels of an identity-value convolutional layer in the lower right corner, and N×N filter kernels of two zero-value convolutional layers in either off-diagonal corner. The third contains N×2N filter kernels formed by placing N×N filter kernels of a first identity-value convolutional layer and N×N filter kernels of a second identity-value convolutional layer in a vertical stack. Each filter kernel contains 3×3 filter coefficients.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: July 2, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Baohua Sun
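
A PyTorch sketch of the three-layer replacement above, again assuming zero-padded 3×3 convolutions and using identity-value kernels (1 at the center, 0 elsewhere); the block placement follows the abstract's description, and the final comparison confirms the replacement equals conv2(conv1(x)) + x, i.e. the original pair plus its shortcut:

```python
import torch

N, H, W = 4, 8, 8
x = torch.randn(1, N, H, W)

# Original pair plus shortcut: y = conv2(conv1(x)) + x.
conv1 = torch.nn.Conv2d(N, N, 3, padding=1, bias=False)
conv2 = torch.nn.Conv2d(N, N, 3, padding=1, bias=False)

def identity_kernels(n):
    """Identity-value 3x3 kernels: 1 at the center of each diagonal kernel."""
    w = torch.zeros(n, n, 3, 3)
    for i in range(n):
        w[i, i, 1, 1] = 1.0
    return w

zeros = torch.zeros(N, N, 3, 3)

# First layer (2N outputs, N inputs): conv1 kernels stacked with identity kernels.
layer1 = torch.nn.Conv2d(N, 2 * N, 3, padding=1, bias=False)
layer1.weight.data = torch.cat([conv1.weight.data, identity_kernels(N)], dim=0)

# Second layer (2N x 2N): conv2 in the upper-left block, identity in the
# lower-right block, zero-value kernels in both off-diagonal blocks.
layer2 = torch.nn.Conv2d(2 * N, 2 * N, 3, padding=1, bias=False)
layer2.weight.data = torch.cat([
    torch.cat([conv2.weight.data, zeros], dim=1),
    torch.cat([zeros, identity_kernels(N)], dim=1),
], dim=0)

# Third layer (N outputs, 2N inputs): two identity blocks, i.e. a channel-wise sum.
layer3 = torch.nn.Conv2d(2 * N, N, 3, padding=1, bias=False)
layer3.weight.data = torch.cat([identity_kernels(N), identity_kernels(N)], dim=1)

with torch.no_grad():
    y_shortcut = conv2(conv1(x)) + x
    y_replaced = layer3(layer2(layer1(x)))
print(torch.allclose(y_shortcut, y_replaced, atol=1e-4))   # True
```
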
  • Publication number: 20190197301
    Abstract: Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. A plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. Particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural networks model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
    Type: Application
    Filed: March 2, 2019
    Publication date: June 27, 2019
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
  • Publication number: 20190197300
    Abstract: Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. A plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. Particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural networks model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
    Type: Application
    Filed: March 2, 2019
    Publication date: June 27, 2019
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
  • Patent number: 10331983
    Abstract: An artificial intelligence inference computing device contains a printed circuit board (PCB) and a number of electronic components mounted thereon. Electronic components include a wireless communication module, a controller module, a memory module, a storage module and at least one cellular neural networks (CNN) based integrated circuit (IC) configured for performing convolutional operations in a deep learning model for extracting features out of input data. Each CNN based IC includes a number of CNN processing engines operatively coupled to at least one input/output data bus. CNN processing engines are connected in a loop with a clock-skew circuit. Wireless communication module is configured for transmitting pre-trained filter coefficients of the deep learning model, input data and classification results.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: June 25, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Jason Z. Dong, Dan Bin Liu, Baohua Sun
  • Patent number: 10325147
    Abstract: Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. A plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. Particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural networks model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
    Type: Grant
    Filed: March 2, 2019
    Date of Patent: June 18, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
  • Patent number: 10311294
    Abstract: Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. A plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. Particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural networks model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
    Type: Grant
    Filed: March 2, 2019
    Date of Patent: June 4, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
  • Patent number: 10311149
    Abstract: A natural language translation device contains a bus and an input interface connecting to the bus for receiving a source sentence in a first natural language to be translated, one word at a time in sequential order, into a target sentence in a second natural language. A two-dimensional (2-D) symbol containing a super-character characterizing the i-th word of the target sentence based on the received source sentence is formed in accordance with a set of 2-D symbol creation rules. The i-th word of the target sentence is obtained by classifying the 2-D symbol via a deep learning model that contains multiple ordered convolution layers in a Cellular Neural Networks or Cellular Nonlinear Networks (CNN) based integrated circuit.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: June 4, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Catherine Chi, Charles Jin Young, Jason Z Dong, Baohua Sun
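
A heavily simplified sketch of the word-at-a-time decoding loop described above; the 2-D symbol creation rules and the CNN based classifier are replaced by throwaway placeholders, so only the sequential control flow (form a symbol for the i-th word, classify it, stop at an end token) reflects the abstract:

```python
import numpy as np

TARGET_VOCAB = ["<end>", "machine", "learning", "chip"]

def form_symbol(source: str, produced: list) -> np.ndarray:
    """Placeholder for the 2-D symbol creation rules: encode the source sentence
    plus the target words produced so far as an N x N image."""
    rng = np.random.default_rng(sum(ord(c) for c in source) + 7 * len(produced))
    return rng.random((224, 224))

def classify_symbol(symbol: np.ndarray) -> str:
    """Placeholder for the CNN based IC that classifies the symbol to the i-th word."""
    return TARGET_VOCAB[int(symbol.sum()) % len(TARGET_VOCAB)]

def translate(source: str, max_words: int = 10) -> list:
    """Produce the target sentence one word at a time in sequential order."""
    produced = []
    for _ in range(max_words):
        word = classify_symbol(form_symbol(source, produced))
        if word == "<end>":
            break
        produced.append(word)
    return produced

print(translate("机器学习芯片"))
```
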
  • Patent number: 10296817
    Abstract: An apparatus for recognition of handwritten Chinese characters contains a bus, an input means connecting to the bus for receiving input imagery data created from a handwritten Chinese character, a Cellular Neural Networks or Cellular Nonlinear Networks (CNN) based integrated circuit operatively connecting to the bus for extracting features out of the input imagery data using pre-trained filter coefficients of a plurality of ordered convolutional layers stored therein, a memory connecting to the bus, the memory being configured for storing weight coefficients of fully-connected (FC) layers, a processing unit connecting to the bus for performing computations of the FC layers to classify the extracted features from the CNN based integrated circuit as a particular Chinese character in a predefined Chinese character set, and a display unit connecting to the bus for displaying the particular Chinese character.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: May 21, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Charles Jin Young, Jason Dong, Wenhan Zhang, Baohua Sun
  • Patent number: 10275646
    Abstract: Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. A plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. Particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural networks model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: April 30, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun