Patents Assigned to Gyrfalcon Technology, Inc.
  • Publication number: 20210019606
    Abstract: An integrated circuit may include multiple cellular neural networks (CNN) processing engines coupled to at least one input/output data bus and a clock-skew circuit in a loop circuit. Each CNN processing engine includes multiple convolution layers, a first memory buffer to store imagery data and a second memory buffer to store filter coefficients. Each of the CNN processing engines is configured to perform convolution operations over an input image simultaneously in a first clock cycle to generate output to be fed to an immediate neighbor CNN processing engine for performing convolution operations in a next clock cycle. The second memory buffer may store a first subset of filter coefficients for a first convolution layer of the CNN processing engine and store a reference location to the first subset of filter coefficients for a second convolution layer, where the filter coefficients for the first and second convolution layers are duplicates.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 21, 2021
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Baohua Sun, Yongxiong Ren, Wenhan Zhang
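The two publications above both describe storing duplicate filter coefficients once and keeping only a reference location for the layer that reuses them. A minimal sketch of that idea is below; the class name, buffer layout, and offset-based reference scheme are assumptions for illustration, not the patent's design.

```python
# Hypothetical sketch of a coefficient buffer that stores duplicate filter
# sets only once and keeps a reference (offset) for layers that reuse them.

class CoefficientBuffer:
    def __init__(self):
        self._storage = []      # flat list of stored coefficient blocks
        self._layer_refs = {}   # layer index -> offset into _storage

    def load_layer(self, layer_idx, coefficients):
        """Store coefficients for a layer, deduplicating identical blocks."""
        for offset, block in enumerate(self._storage):
            if block == coefficients:
                # Duplicate weights: record only a reference location.
                self._layer_refs[layer_idx] = offset
                return
        self._storage.append(coefficients)
        self._layer_refs[layer_idx] = len(self._storage) - 1

    def fetch_layer(self, layer_idx):
        """Resolve the reference location back to the shared coefficients."""
        return self._storage[self._layer_refs[layer_idx]]


buf = CoefficientBuffer()
shared = [1, -1, 1, 0, 1, -1, 0, 1, 1]    # toy 3x3 filter, flattened
buf.load_layer(0, shared)
buf.load_layer(1, shared)                 # duplicate: stored as a reference only
assert buf.fetch_layer(1) == shared
print("blocks stored:", len(buf._storage))  # 1, not 2
```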
  • Publication number: 20210019602
    Abstract: An integrated circuit may include multiple cellular neural networks (CNN) processing engines coupled in a loop circuit and configured to perform an AI task. Each CNN processing engine includes multiple convolution layers, a first memory buffer to store imagery data and a second memory buffer to store filter coefficients. The CNN processing engines are configured to perform convolution operations over an input image simultaneously in one or more iterations. In each iteration, various sub-images of the input image are loaded to the first memory buffer circularly. A portion of the filter coefficients corresponding to each sub-image is loaded to the second memory buffer in a cyclic order. Data may be arranged in the second memory buffer to facilitate loading of duplicate filter coefficients among at least two convolution layers without requiring duplicate memory space. Methods of training a CNN model having duplicate weights are also provided.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 21, 2021
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Baohua Sun, Yongxiong Ren, Wenhan Zhang
  • Publication number: 20200380263
    Abstract: A system for detecting key frames in a video may include a feature extractor configured to extract feature descriptors for each of the multiple image frames in the video. The feature extractor may be an embedded cellular neural network of an artificial intelligence (AI) chip. The system may also include a key frame extractor configured to determine one or more key frames in the multiple image frames based on the corresponding feature descriptors of the image frames. The key frame extractor may determine the key frames based on distance values between a first set of feature descriptors corresponding to a first subset of image frames and a second set of feature descriptors corresponding to a second subset of image frames. The system may output an alert based on determining the key frames and/or display the key frames. The system may also compress the video by removing the non-key frames.
    Type: Application
    Filed: May 29, 2019
    Publication date: December 3, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Bin Yang, Qi Dong, Xiaochun Li, Wenhan Zhang, Yinbo Shi, Yequn Zhang
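A hedged sketch of the key-frame selection described in the abstract above, assuming the distance is taken between each frame's descriptor and that of the most recent key frame, with a fixed threshold. The distance metric, threshold, and function names are illustrative choices, not taken from the patent.

```python
import numpy as np

def select_key_frames(descriptors, threshold=0.5):
    """Mark a frame as a key frame when its descriptor is far from the
    descriptor of the most recent key frame.

    descriptors: array of shape (num_frames, descriptor_dim), e.g. produced
    by a feature extractor such as an embedded CNN.
    """
    descriptors = np.asarray(descriptors, dtype=float)
    key_frames = [0]  # keep the first frame as a key frame
    for i in range(1, len(descriptors)):
        dist = np.linalg.norm(descriptors[i] - descriptors[key_frames[-1]])
        if dist > threshold:
            key_frames.append(i)
    return key_frames

# Toy example: the descriptors change abruptly at frame 3.
feats = np.array([[0.0, 0.0], [0.05, 0.0], [0.1, 0.0], [2.0, 2.0], [2.05, 2.0]])
print(select_key_frames(feats, threshold=0.5))  # [0, 3]
```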
  • Publication number: 20200320385
    Abstract: A system for training an artificial intelligence (AI) model for an AI chip may include a forward network and a backward propagation network. The AI model may be a convolution neural network (CNN). The forward network may infer the output of the AI chip based on the training data. The backward network may use the output of the AI chip and the ground truth data to train the weights of the AI model. In some examples, the system may train the AI model using a gradient descent method. The system may quantize the weights and update the weights during the training. In some examples, the system may perform a uniform quantization over the weights. The system may also determine the distribution of the weights. If the weight distribution is not symmetric, the system may group the weights and quantize the weights based on the grouping.
    Type: Application
    Filed: April 30, 2019
    Publication date: October 8, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Baohua Sun, Yongxiong Ren, Wenhan Zhang, Patrick Zeng Dong
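A minimal sketch of the quantization step described above: uniform quantization over the weight range, falling back to a grouped quantization when the weight distribution is not symmetric. The symmetry test and the sign-based grouping rule here are illustrative guesses, not the patent's method.

```python
import numpy as np

def uniform_quantize(weights, num_levels=256):
    """Map weights onto num_levels evenly spaced values over their range."""
    w = np.asarray(weights, dtype=float)
    lo, hi = w.min(), w.max()
    if hi == lo:
        return w.copy()
    step = (hi - lo) / (num_levels - 1)
    return lo + np.round((w - lo) / step) * step

def grouped_quantize(weights, num_levels=256, balance_tol=0.2):
    """Quantize uniformly in one pass when the weights are roughly balanced
    around zero; otherwise quantize negative and non-negative weights as
    separate groups (an assumed grouping rule)."""
    w = np.asarray(weights, dtype=float)
    frac_neg = np.mean(w < 0)
    if abs(frac_neg - 0.5) < balance_tol:
        return uniform_quantize(w, num_levels)
    out = np.empty_like(w)
    neg, pos = w < 0, w >= 0
    if neg.any():
        out[neg] = uniform_quantize(w[neg], num_levels // 2)
    if pos.any():
        out[pos] = uniform_quantize(w[pos], num_levels // 2)
    return out

rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(-0.3, 0.05, 900), rng.normal(0.05, 0.02, 100)])
print(np.unique(grouped_quantize(w, num_levels=16)).size)  # at most 16 levels
```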
  • Publication number: 20200302288
    Abstract: A system for training an artificial intelligence (AI) model for an AI chip may include an AI training unit to train weights of an AI model in floating point, and one or more quantization units for updating the weights of the AI model while accounting for the hardware constraints in the AI chip. The system may also include a customization unit for performing one or more linear transformations on the updated weights. The system may also perform output equalization for one or more convolution layers of the AI model to equalize the inputs and/or outputs of each layer of the AI model to within the range allowed in the physical AI chip. The system may further update the weights by performing shift-based quantization that mimics the characteristics of a hardware chip. The updated weights may be stored in fixed point and uploadable to an AI chip implementing an AI task.
    Type: Application
    Filed: September 27, 2019
    Publication date: September 24, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yongxiong Ren, Yi Fan, Yequn Zhang, Tianran Chen, Yinbo Shi, Xiaochun Li, Lin Yang
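A hedged sketch of shift-based quantization as it is commonly understood, i.e. snapping each weight to a signed power of two so that multiplication reduces to a bit shift. The exponent range is an assumed hardware constraint, not a value from the patent.

```python
import numpy as np

def shift_quantize(weights, min_exp=-7, max_exp=0):
    """Quantize each weight to sign * 2**k with k clipped to [min_exp, max_exp].

    Zero-valued weights stay zero. The exponent bounds model a hardware chip
    that only supports a limited set of shift amounts (an assumption here).
    """
    w = np.asarray(weights, dtype=float)
    out = np.zeros_like(w)
    nz = w != 0
    exps = np.clip(np.round(np.log2(np.abs(w[nz]))), min_exp, max_exp)
    out[nz] = np.sign(w[nz]) * np.power(2.0, exps)
    return out

print(shift_quantize([0.3, -0.07, 0.0, 1.4]))  # [0.25, -0.0625, 0.0, 1.0]
```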
  • Publication number: 20200302289
    Abstract: A system for training an artificial intelligence (AI) model for an AI chip to implement an AI task may include an AI training unit to train weights of an AI model in floating point, a convolution quantization unit for quantizing the trained weights to a number of quantization levels, and an activation quantization unit for updating the weights of the AI model so that the output of the AI model based at least on the updated weights is within the range of the activation layers of the AI chip. The updated weights may be stored in fixed point and uploadable to the AI chip. The various units may be configured to account for the hardware constraints in the AI chip to minimize performance degradation when the trained weights are uploaded to the AI chip and expedite training convergence. Forward propagation and backward propagation may be combined in training the AI model.
    Type: Application
    Filed: September 27, 2019
    Publication date: September 24, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yongxiong Ren, Yi Fan, Yequn Zhang, Baohua Sun, Bin Yang, Xiaochun Li, Lin Yang
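A minimal sketch of activation quantization, under the assumption that the chip's activation layers only represent values within a fixed range at a fixed number of levels; both parameters are hypothetical.

```python
import numpy as np

def quantize_activation(x, act_min=0.0, act_max=31.0, num_levels=32):
    """Clip activations to the range supported by the chip's activation
    layers, then snap them to the nearest representable level."""
    x = np.clip(np.asarray(x, dtype=float), act_min, act_max)
    step = (act_max - act_min) / (num_levels - 1)
    return act_min + np.round((x - act_min) / step) * step

# During training, passing convolution outputs through this function keeps
# the floating-point model's activations within the hardware range.
print(quantize_activation([-3.0, 0.4, 17.8, 99.0]))  # [ 0.  0. 18. 31.]
```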
  • Publication number: 20200302276
    Abstract: An artificial intelligence (AI) semiconductor having an embedded convolution neural network (CNN) may include a first convolution layer and a second convolution layer, in which the weights of the first layer and the weights of the second layer are quantized in different bit-widths, thus at different compression ratios. In a VGG neural network, the weights of a first group of convolution layers may have a different compression ratio than the weights of a second group of convolution layers. The weights of the CNN may be obtained in a training system including convolution quantization and/or activation quantization. Depending on the compression ratio, the weights of a convolution layer may be trained with or without re-training. An AI task, such as image retrieval, may be implemented in the AI semiconductor having the CNN described above.
    Type: Application
    Filed: September 27, 2019
    Publication date: September 24, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Bin Yang, Hua Zhou, Xiaochun Li, Wenhan Zhang, Qi Dong, Yequn Zhang, Yongxiong Ren, Patrick Dong
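A hedged sketch of per-group mixed bit-width quantization, assuming the earlier layers of a VGG-style convolution stack receive a higher bit-width (lower compression ratio) than the later ones; the split point and bit-widths below are illustrative, not the patent's.

```python
import numpy as np

def quantize_to_bits(w, bits):
    """Symmetric uniform quantization of a weight tensor to 2**bits levels."""
    w = np.asarray(w, dtype=float)
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w.copy()
    return np.round(w / scale) * scale

def quantize_model(layer_weights, split=8, early_bits=8, late_bits=4):
    """Quantize the first `split` convolution layers at early_bits and the
    remaining layers at late_bits (a higher compression ratio)."""
    return [
        quantize_to_bits(w, early_bits if i < split else late_bits)
        for i, w in enumerate(layer_weights)
    ]

rng = np.random.default_rng(1)
layers = [rng.normal(0, 0.1, 64) for _ in range(13)]  # toy VGG-16 conv stack
quantized = quantize_model(layers)
# The later, more heavily compressed layers end up with far fewer distinct values.
print(len(np.unique(quantized[0])), len(np.unique(quantized[-1])))
```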
  • Publication number: 20200293856
    Abstract: A cellular neural network architecture may include a processor and an embedded cellular neural network (CeNN) executable in an artificial intelligence (AI) integrated circuit and configured to perform certain AI functions. The CeNN may include multiple convolution layers, such as first, second, and third layers, each layer having multiple binary weights. In some examples, a method may configure the multiple layers in the CeNN to produce a residual connection. In configuring the second and third layers, the method may use an identity matrix.
    Type: Application
    Filed: March 14, 2019
    Publication date: September 17, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Bowei Liu, Yinbo Shi, Yequn Zhang, Xiaochun Li
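A minimal sketch of producing a residual (skip) connection by configuring an extra convolution with identity weights, so a layer's input passes through unchanged and can be summed with a later output. The 1x1 identity-kernel construction is an assumed illustration of the identity-matrix idea, not the patent's exact configuration.

```python
import numpy as np

def identity_conv_weights(channels):
    """1x1 convolution kernel that copies each input channel to the same
    output channel, i.e. an identity matrix over the channel dimension."""
    w = np.zeros((channels, channels, 1, 1))
    for c in range(channels):
        w[c, c, 0, 0] = 1.0
    return w

def conv1x1(x, w):
    """Apply a 1x1 convolution: x is (channels, H, W), w is (out, in, 1, 1)."""
    return np.einsum("oi,ihw->ohw", w[:, :, 0, 0], x)

x = np.random.default_rng(2).normal(size=(4, 5, 5))
passthrough = conv1x1(x, identity_conv_weights(4))   # identity branch
assert np.allclose(passthrough, x)
# A residual connection then sums this branch with the main branch's output:
main_branch = conv1x1(x, np.random.default_rng(3).normal(size=(4, 4, 1, 1)))
residual_output = main_branch + passthrough
print(residual_output.shape)  # (4, 5, 5)
```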
  • Publication number: 20200293865
    Abstract: A cellular neural network architecture may include a processor and an embedded cellular neural network (CeNN) executable in an artificial intelligence (AI) integrated circuit and configured to perform certain AI functions. The CeNN may include multiple convolution layers, each having multiple binary weights. In some examples, a method may configure a given layer of the CeNN and one or more additional layers of the CeNN to retrieve the output of the given layer for debugging or training the CeNN. In configuring the one or more additional layers, the method may use an identity layer.
    Type: Application
    Filed: March 14, 2019
    Publication date: September 17, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Bowei Liu, Yinbo Shi, Yequn Zhang, Xiaochun Li
  • Publication number: 20200250523
    Abstract: In some examples, given an AI model in floating point, a system may use one or more artificial intelligence (AI) chips to train a global gain vector for use in converting the AI model in floating point to an AI model in fixed point for uploading to a physical AI chip. The system may determine initial gain vectors, and in each of multiple iterations, obtain the performance values of the AI chips based on the gain vectors and update the gain vectors for the next iteration. The gain vectors are updated based on a velocity of gain. The performance value may be based on feature maps of an AI model before and after the conversion. The performance value may also be based on inference over a test dataset. Upon completion of the iterations, the system determines the global gain vector that resulted in the best performance value during the iterations.
    Type: Application
    Filed: February 5, 2019
    Publication date: August 6, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yongxiong Ren, Yequn Zhang, Baohua Sun, Xiaochun Li, Qi Dong, Lin Yang
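A hedged sketch of an iterative gain-vector search with a velocity term, in the spirit of the abstract above. The velocity update rule and the surrogate performance function are assumptions; a real system would score each candidate gain vector by running the converted model on an AI chip.

```python
import numpy as np

def search_global_gain(performance_fn, dim=4, num_candidates=8, iters=50, seed=0):
    """Iteratively update gain vectors with a velocity of gain and keep the
    best-performing vector seen so far (higher performance_fn is better)."""
    rng = np.random.default_rng(seed)
    gains = rng.uniform(0.5, 2.0, size=(num_candidates, dim))
    velocity = np.zeros_like(gains)
    best_gain, best_score = gains[0].copy(), -np.inf
    for _ in range(iters):
        scores = np.array([performance_fn(g) for g in gains])
        if scores.max() > best_score:
            best_score = scores.max()
            best_gain = gains[scores.argmax()].copy()
        # Velocity pulls each gain vector toward the current global best,
        # with a small random exploration term (an assumed update rule).
        velocity = (0.7 * velocity + 0.3 * (best_gain - gains)
                    + 0.05 * rng.normal(size=gains.shape))
        gains = np.clip(gains + velocity, 0.1, 4.0)
    return best_gain, best_score

# Toy stand-in for "feature-map agreement before and after the conversion".
target = np.array([1.0, 0.5, 2.0, 1.5])
score = lambda g: -np.sum((g - target) ** 2)
gain, _ = search_global_gain(score)
print(np.round(gain, 2))
```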
  • Patent number: 10733039
    Abstract: This disclosure relates to testing of integrated artificial intelligence (AI) circuit with embedded memory to improve effective chip yield and to mapping addressable memory segments of the embedded memory to multilayer AI networks at the network level, layer level, parameter level, and bit level based on bit error rate (BER) of the addressable memory segments. The disclosed methods and systems allow for deployment of one or more multilayer AI networks in an AI circuit with sufficient model accuracy even when the embedded memory has an overall BER higher than a preferred overall threshold.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: August 4, 2020
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Chyu-Jiuh Torng, Daniel H. Liu, Wenhan Zhang, Hualiang Yu
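A minimal sketch of mapping model parameters to embedded-memory segments by bit error rate, assuming the most error-sensitive layers should land in the lowest-BER segments. The sensitivity scores and the greedy ordering are illustrative assumptions.

```python
def map_layers_to_segments(layer_sensitivity, segment_ber):
    """Assign each layer to a memory segment so that layers whose accuracy is
    most sensitive to bit errors get the segments with the lowest BER.

    layer_sensitivity: dict layer_name -> sensitivity score (higher = more sensitive)
    segment_ber: dict segment_id -> measured bit error rate
    """
    layers = sorted(layer_sensitivity, key=layer_sensitivity.get, reverse=True)
    segments = sorted(segment_ber, key=segment_ber.get)
    return dict(zip(layers, segments))

sensitivity = {"conv1": 0.9, "conv2": 0.4, "fc": 0.7}
ber = {"seg_a": 1e-3, "seg_b": 1e-6, "seg_c": 1e-4}
print(map_layers_to_segments(sensitivity, ber))
# {'conv1': 'seg_b', 'fc': 'seg_c', 'conv2': 'seg_a'}
```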
  • Publication number: 20200234118
    Abstract: A system may include multiple client devices and a processing device communicatively coupled to the client devices. One or more client devices may implement a greedy approach in searching for an optimal artificial intelligence (AI) model. For example, a client device may use a training dataset to perform an AI task, and update its AI model. The client device may verify the performance of the AI task and determine whether to accept or reject its updated AI model. Upon rejection, the client device may repeat updating its AI model until the updated AI model is accepted, or until a stopping criterion is met. The processing device may be configured to update the initial AI models based on the accepted updated AI models obtained by the multiple client devices. Training data for each of the client devices may contain a subset shuffled from a larger training dataset.
    Type: Application
    Filed: December 3, 2019
    Publication date: July 23, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yinbo Shi, Yequn Zhang, Xiaochun Li, Bowei Liu
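A hedged sketch of the greedy accept/reject loop on a client device: propose an updated model, keep it only if verified performance does not drop, and stop on a stopping criterion. The proposal and evaluation functions are placeholders, not the patent's training procedure.

```python
import random

def greedy_train(model, propose_update, evaluate, max_iters=500, max_rejections=20):
    """Greedily accept an updated model only if it performs at least as well;
    stop after max_iters proposals or max_rejections consecutive rejections."""
    best_score = evaluate(model)
    rejections = 0
    for _ in range(max_iters):
        candidate = propose_update(model)
        score = evaluate(candidate)
        if score >= best_score:        # accept the updated model
            model, best_score = candidate, score
            rejections = 0
        else:                          # reject and try another update
            rejections += 1
            if rejections >= max_rejections:
                break
    return model, best_score

# Toy example: the "model" is a single weight and the task is to approach 3.0.
random.seed(0)
propose = lambda w: w + random.uniform(-0.5, 0.5)
evaluate = lambda w: -abs(w - 3.0)
final_w, _ = greedy_train(0.0, propose, evaluate)
print(round(final_w, 2))
```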
  • Publication number: 20200234119
    Abstract: A system may include multiple client devices and a processing device communicatively coupled to the client devices. A client device may receive an initial artificial intelligence (AI) model, use a training dataset to perform an AI task, and update its AI model. The client device may verify the performance of the AI task to determine whether to accept or reject its updated AI model. Upon rejection, the client device may repeat updating its AI model until the updated AI model is accepted, or until a stopping criterion is met. The processing device may be configured to update the initial AI models based on the accepted updated AI models obtained by the multiple client devices, and repeat the process for each client device using the updated initial AI models. Training data for each of the client devices may contain a subset shuffled from a larger training dataset.
    Type: Application
    Filed: December 3, 2019
    Publication date: July 23, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yinbo Shi, Yequn Zhang, Xiaochun Li, Bowei Liu
  • Patent number: 10713830
    Abstract: An image and the maximum number of tokens for a to-be-created image caption are received in a computing system. The font size of the graphical image of a token is calculated from the maximum number of tokens and the dimension of the desired input image for a prediction-style image classification technique. The desired input image is divided into first and second portions. A 2-D symbol is formed by placing a resized image derived from the received image with substantially similar contents in the first portion and by initializing the second portion with blank images. The next token of the image caption is predicted by classifying the 2-D symbol using the prediction-style image classification technique. The 2-D symbol is modified by appending the graphical image of the just-predicted token to the existing image caption in the second portion, if the termination condition for image caption creation is false. The next token is repeatedly predicted until the termination condition becomes true.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: July 14, 2020
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Baohua Sun
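A hedged sketch of the iterative captioning loop: a 2-D symbol holds the resized image in one portion and the caption-so-far rendered as token glyphs in the other, and a classifier repeatedly predicts the next token until the termination condition holds. The classifier and glyph rendering below are stubs, not the patent's implementation.

```python
def predict_caption(image, classify_symbol, render_token, max_tokens=8, end_token="<end>"):
    """Build a caption token by token. The 2-D symbol is modeled here as a
    pair (image_portion, caption_portion); classify_symbol predicts the next
    token from that symbol, and render_token stands in for drawing a token's
    glyph at the font size implied by max_tokens."""
    caption = []
    while len(caption) < max_tokens:
        symbol = (image, [render_token(t) for t in caption])  # the 2-D symbol
        token = classify_symbol(symbol)
        if token == end_token:            # termination condition
            break
        caption.append(token)
    return caption

# Toy stand-ins: the "classifier" simply replays a fixed caption.
fixed = ["a", "cat", "on", "a", "mat", "<end>"]
classify = lambda symbol: fixed[len(symbol[1])]
render = lambda token: f"[{token}]"
print(predict_caption("image-pixels", classify, render))  # ['a', 'cat', 'on', 'a', 'mat']
```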
  • Publication number: 20200201697
    Abstract: This disclosure relates to testing of integrated artificial intelligence (AI) circuit with embedded memory to improve effective chip yield and to mapping addressable memory segments of the embedded memory to multilayer AI networks at the network level, layer level, parameter level, and bit level based on bit error rate (BER) of the addressable memory segments. The disclosed methods and systems allow for deployment of one or more multilayer AI networks in an AI circuit with sufficient model accuracy even when the embedded memory has an overall BER higher than a preferred overall threshold.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Chyu-Jiuh Torng, Daniel H. Liu, Wenhan Zhang, Hualiang Yu
  • Publication number: 20200193280
    Abstract: This disclosure relates to artificial intelligence (AI) circuits with embedded memory for storing trained AI model parameters. The embedded memory cell structure, device profile, and/or fabrication process are designed to generate binary data access asymmetry and error rate asymmetry between writing binary zeros and binary ones that are adapted to and compatible with a binary data asymmetry of the trained model parameters and/or a bit-inversion tolerance asymmetry of the AI model between binary zeros and ones. The disclosed method and system improve predictive accuracy and memory error tolerance without significantly reducing an overall memory error rate and without relying on memory cell redundancy and error correction codes.
    Type: Application
    Filed: December 12, 2018
    Publication date: June 18, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Chyu-Jiuh Torng, Hualiang Yu, Wenhan Zhang, Daniel H. Liu
  • Patent number: 10672455
    Abstract: An integrated circuit includes an artificial intelligence (AI) logic and an embedded memory coupled to the AI logic and connectable to an external processor. The embedded memory includes multiple storage cells and multiple reference units. One or more reference units in the memory are selected for memory access through configuration at chip packaging level by the external processor. The external processor may execute a self-test process to select or update the one or more reference units for memory access so that the error rate of the memory is below a threshold. The self-test process may be performed, via a memory initialization controller in the memory, to test and reuse the reference cells in the memory at chip level. The embedded memory may be an STT-MRAM, SOT MRAM, OST MRAM, and/or MeRAM memory.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: June 2, 2020
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Chyu-Jiuh Torng, Lin Yang, Qi Dong, Daniel H. Liu
  • Publication number: 20200151551
    Abstract: A system may include multiple client devices and a processing device communicatively coupled to the client devices. Each client device includes an artificial intelligence (AI) chip and is configured to generate an AI model. The processing device may be configured to (i) receive a respective AI model and an associated performance value of the respective AI model from each of the plurality of client devices; (ii) determine an optimal AI model based on the performance values associated with the respective AI models from the plurality of client devices; and (iii) determine a global AI model based on the optimal AI model. The system may load the global AI model into an AI chip of a client device to cause the client device to perform an AI task based on the global AI model in the AI chip. The AI model may include a convolutional neural network.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yequn Zhang, Yongxiong Ren, Baohua Sun, Lin Yang, Qi Dong
  • Publication number: 20200151584
    Abstract: A device for obtaining a local optimal AI model may include an artificial intelligence (AI) chip and a processing device configured to receive an initial AI model from a host device. The device may load the initial AI model into the AI chip to determine a performance value of the AI model based on a dataset, and determine a probability that a current AI model should be replaced by the initial AI model. The device may determine, based on the probability, whether to replace the current AI model with the initial AI model. If it is determined that the current AI model should be replaced, the device may replace the current AI model with the initial AI model. The device may repeat the above processes and obtain a final current AI model. The device may transmit the final current AI model to the host device.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yequn Zhang, Yongxiong Ren, Baohua Sun, Lin Yang, Qi Dong
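A hedged sketch of the probabilistic replacement step described above: the device computes a probability that the current model should be replaced by the received model and replaces it based on that probability. The exponential acceptance rule is an assumption in the style of simulated annealing, not a rule taken from the patent.

```python
import math
import random

def maybe_replace(current_score, candidate_score, temperature=0.1):
    """Return True with probability 1 if the candidate performs at least as
    well, otherwise with a probability that decays with the performance gap."""
    if candidate_score >= current_score:
        return True
    prob = math.exp((candidate_score - current_score) / temperature)
    return random.random() < prob

random.seed(0)
current, candidate = 0.90, 0.88           # accuracies on a local dataset
print("replace current model:", maybe_replace(current, candidate))
```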
  • Publication number: 20200151558
    Abstract: A system may be configured to obtain a global artificial intelligence (AI) model for uploading into an AI chip to perform AI tasks. The system may implement a training process including receiving updated AI models from one or more client devices, determining a global AI model based on the received AI models from the client devices, and updating initial AI models for the client devices. Each client device may receive an initial AI model and train an updated AI model by training all of the AI model's parameters together, by training a subset of the parameters in a layer-by-layer fashion, or by training a subset of the parameters by parameter type. Each client device may include one or more AI chips configured to run an AI task to measure performance of an AI model. The AI model may include a convolutional neural network.
    Type: Application
    Filed: February 11, 2019
    Publication date: May 14, 2020
    Applicant: Gyrfalcon Technology Inc.
    Inventors: Yongxiong Ren, Yequn Zhang, Baohua Sun, Xiaochun Li, Qi Dong, Lin Yang
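A minimal sketch of the three client-side training modes mentioned in the abstract above, framed as selecting which parameters receive updates in one training round. The parameter naming convention and selection rules are assumptions for illustration.

```python
def trainable_params(params, mode, layer=None, param_type=None):
    """Select which parameters a client trains in one round.

    params: dict name -> tensor, with names like "conv1.weight", "conv1.bias".
    mode: "all" trains everything together, "layer" trains one layer at a time,
    "type" trains one parameter type (e.g. all biases) across layers.
    """
    if mode == "all":
        return dict(params)
    if mode == "layer":
        return {k: v for k, v in params.items() if k.startswith(layer + ".")}
    if mode == "type":
        return {k: v for k, v in params.items() if k.endswith("." + param_type)}
    raise ValueError(f"unknown mode: {mode}")

params = {"conv1.weight": ..., "conv1.bias": ..., "conv2.weight": ..., "conv2.bias": ...}
print(sorted(trainable_params(params, "layer", layer="conv2")))       # conv2 only
print(sorted(trainable_params(params, "type", param_type="bias")))    # all biases
```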