Patents by Inventor Youn-Long Lin

Youn-Long Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240086312
    Abstract: The invention provides a memory searching device and method. The memory searching device includes a memory, a lookup command processing circuit, and a lookup result processing circuit. The lookup command processing circuit reorders an original order of lookup commands in an original lookup command string into a new order based on an accessing characteristic of the memory, and provides a reordered lookup command string to the memory. The lookup result processing circuit is coupled to the memory to receive a lookup result string, and coupled to the lookup command processing circuit to receive mapping information between the original order and the new order. The lookup result string includes lookup results corresponding to the lookup commands of the reordered lookup command string. The lookup result processing circuit restores an order of the lookup results to the original order based on the mapping information.
    Type: Application
    Filed: October 20, 2022
    Publication date: March 14, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Huang-Chih Kuo, Youn-Long Lin
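
The reorder-then-restore flow lends itself to a short sketch. Below is a minimal Python illustration: lookups are grouped by a hypothetical DRAM row (the patent says only that reordering follows "an accessing characteristic of the memory"), and the saved permutation plays the role of the mapping information used to restore the original order. The row size and grouping key are assumptions for illustration.

```python
ROW_SIZE = 1024  # hypothetical DRAM row size, in table entries

def reorder_lookups(commands):
    """Sort lookup addresses so commands hitting the same row are adjacent.

    Returns the reordered command string and the mapping (new position ->
    original position) needed to restore the original order later.
    """
    order = sorted(range(len(commands)), key=lambda i: commands[i] // ROW_SIZE)
    reordered = [commands[i] for i in order]
    return reordered, order

def restore_results(results, order):
    """Put lookup results back into the original command order."""
    restored = [None] * len(results)
    for new_pos, orig_pos in enumerate(order):
        restored[orig_pos] = results[new_pos]
    return restored

# Example: the "memory" is a plain list; each lookup reads one entry.
table = list(range(4096))
commands = [3000, 12, 3001, 13, 2999]          # original lookup string
reordered, order = reorder_lookups(commands)    # row-local access pattern
results = [table[addr] for addr in reordered]   # lookups in the new order
print(restore_results(results, order))          # [3000, 12, 3001, 13, 2999]
```
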
  • Publication number: 20240005159
    Abstract: A simplification device and a simplification method for a neural network model are provided. The simplification method simplifies an original trained neural network model into a simplified trained neural network model that includes at most two linear operation layers. The simplification method includes: converting the original trained neural network model into an original mathematical function; performing an iterative analysis operation on the original mathematical function to simplify it to a simplified mathematical function, wherein the simplified mathematical function has a new weight; computing the new weight by using multiple original weights of the original trained neural network model; and converting the simplified mathematical function to the simplified trained neural network model.
    Type: Application
    Filed: August 22, 2022
    Publication date: January 4, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Po-Han Chen, Yi Lee, Kai-Chiang Wu, Youn-Long Lin, Juinn-Dar Huang
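
The weight-recomputation step rests on simple linear algebra: two stacked linear layers collapse into one whose weight and bias are computed from the originals. A minimal numpy sketch of that fusion, which ignores the patent's iterative analysis over a full model graph:

```python
import numpy as np

def fuse_linear(W1, b1, W2, b2):
    """y = W2 @ (W1 @ x + b1) + b2  ==  (W2 @ W1) @ x + (W2 @ b1 + b2)."""
    return W2 @ W1, W2 @ b1 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 16)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((4, 8)), rng.standard_normal(4)
x = rng.standard_normal(16)

W, b = fuse_linear(W1, b1, W2, b2)      # new weight from original weights
original = W2 @ (W1 @ x + b1) + b2
simplified = W @ x + b
assert np.allclose(original, simplified)
print("fused layer matches the original two-layer stack")
```
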
  • Patent number: 11782839
    Abstract: A feature map caching method of a convolutional neural network includes a connection analyzing step and a plurality of layer operation steps. The connection analyzing step is for analyzing a network to establish a convolutional neural network connection list. The convolutional neural network connection list includes a plurality of tensors and a plurality of layer operation coefficients. Each of the layer operation coefficients includes a step index, at least one input operand label and an output operand label. The step index serves as the processing order for the layer operation steps. At least one of the layer operation steps is for flushing at least one of the tensors in a cache according to a distance between the at least one of the layer operation steps and a future layer operation step of the layer operation steps. The distance is calculated according to the convolutional neural network connection list.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: October 10, 2023
    Assignee: NEUCHIPS CORPORATION
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
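
The flushing policy is essentially a farthest-next-use (Belady-style) eviction driven by the connection list. A minimal sketch, assuming a cache whose capacity is measured in whole tensors and a connection list of (input labels, output label) pairs per layer step; both simplifications are mine, not the patent's:

```python
def next_use(label, step, conn_list):
    """Distance from `step` to the next layer step that reads `label`."""
    for future in range(step + 1, len(conn_list)):
        if label in conn_list[future][0]:
            return future - step
    return float("inf")  # never read again: the ideal flush candidate

def run(conn_list, capacity):
    cache = set()
    for step, (inputs, output) in enumerate(conn_list):
        if len(cache) >= capacity:
            candidates = sorted(cache - set(inputs))  # keep tensors in use
            victim = max(candidates, key=lambda t: next_use(t, step, conn_list))
            cache.discard(victim)
            print(f"step {step}: flush {victim}")
        cache.add(output)

# (input labels, output label) per layer operation step of a small network
conn_list = [((), "t0"), (("t0",), "t1"), (("t1",), "t2"),
             (("t0", "t2"), "t3"), (("t3",), "t4")]
run(conn_list, capacity=3)   # flushes t1 first: it is never read again
```
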
  • Patent number: 11615286
    Abstract: A computing system and a compressing method for neural network parameters are provided. In the method, multiple neural network parameters are obtained. The neural network parameters are used for a neural network algorithm. The neural network parameters are grouped into encoding combinations of at least two parameters each. The number of neural network parameters in each encoding combination is the same. The encoding combinations are compressed with the same compression target bit number. Each encoding combination is compressed independently. The compression target bit number is not larger than a bit number of each encoding combination. Thereby, the storage space can be saved and excessive power consumption for accessing the parameters can be prevented.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: March 28, 2023
    Assignee: NEUCHIPS CORPORATION
    Inventors: Youn-Long Lin, Chao-Yang Kao, Huang-Chih Kuo, Chiung-Liang Lin
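
A short sketch of the fixed-budget idea: every group of the same number of weights compresses independently to the same number of bits, so each group can be fetched and decoded on its own. The shared power-of-two scale and 4-bit codes below are illustrative choices of mine, not the patent's specific encoding:

```python
import math

GROUP = 4   # parameters per encoding combination
BITS = 4    # signed 4-bit codes in [-8, 7]

def compress_group(ws):
    """Encode GROUP weights as one shared exponent plus 4-bit codes,
    giving every group the same fixed bit count."""
    peak = max(abs(w) for w in ws) or 1e-9
    exp = math.ceil(math.log2(peak / 7))   # scale = 2**exp keeps codes small
    codes = [max(-8, min(7, round(w / 2 ** exp))) for w in ws]
    return exp, codes

def decompress_group(exp, codes):
    return [c * 2 ** exp for c in codes]

weights = [0.11, -0.52, 0.30, 0.07, 2.1, -1.4, 0.9, 0.2]
for i in range(0, len(weights), GROUP):
    exp, codes = compress_group(weights[i:i + GROUP])
    print(decompress_group(exp, codes))   # lossy but independently decodable
```
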
  • Publication number: 20220358183
    Abstract: A matrix multiplier and an operation method thereof are provided. The matrix multiplier includes a plurality of first input lines, a plurality of second input lines and a computing array. The computing array includes a plurality of multiplication accumulation (MAC) cells. A first MAC cell of the plurality of MAC cells is coupled to a first corresponding input line of the plurality of first input lines and a second corresponding input line of the plurality of second input lines to receive a first input value and a second input value to perform a multiplication accumulation operation. When at least one of the first input value and the second input value is a specified value, the multiplication accumulation operation of the first MAC cell is disabled.
    Type: Application
    Filed: August 2, 2021
    Publication date: November 10, 2022
    Applicant: NEUCHIPS CORPORATION
    Inventors: Jian-Wen Chen, YuShan Ruan, Chih-Wei Chang, Youn-Long Lin
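
The disable condition is easy to illustrate: when either operand equals the specified value (zero here), the cell leaves its accumulator untouched and the multiply never happens. A minimal sketch, with an operation counter standing in for the power that gating saves in hardware:

```python
class MACCell:
    def __init__(self):
        self.acc = 0        # accumulator
        self.ops_done = 0   # multiplies actually performed

    def mac(self, a, b, skip_value=0):
        if a == skip_value or b == skip_value:
            return            # operation disabled: accumulator unchanged
        self.acc += a * b
        self.ops_done += 1

row = [0, 3, 0, 0, 5]     # sparse activations on the first input lines
col = [2, 4, 6, 8, 1]     # weights on the second input lines
cell = MACCell()
for a, b in zip(row, col):
    cell.mac(a, b)
print(cell.acc, cell.ops_done)   # 17, 2  -- three multiplies were skipped
```
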
  • Patent number: 11474937
    Abstract: A computing device and an operation method thereof are provided. The computing device includes multiple memories and an indexer circuit. The indexer circuit is separately coupled to the memories through multiple memory channels. The indexer circuit determines an assignment of at least one lookup table to at least one of the memories according to a characteristic of the at least one lookup table and a transmission bandwidth of the memory channels, so as to balance a transmission load of the memory channels.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: October 18, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Chao-Yang Kao, Youn-Long Lin
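
The balancing step behaves like a scheduling problem: model each table's traffic as a load and place tables greedily on the least-loaded channel. A minimal sketch assuming the load per table is known up front; the patent speaks only of a table "characteristic" and channel bandwidth, so the metric and greedy policy here are illustrative:

```python
import heapq

def place_tables(tables, n_channels):
    """tables: {name: load}. Greedy longest-processing-time placement:
    heaviest table first, always onto the currently least-loaded channel."""
    heap = [(0.0, ch) for ch in range(n_channels)]  # (current load, channel)
    heapq.heapify(heap)
    placement = {ch: [] for ch in range(n_channels)}
    for name, load in sorted(tables.items(), key=lambda kv: -kv[1]):
        ch_load, ch = heapq.heappop(heap)
        placement[ch].append(name)
        heapq.heappush(heap, (ch_load + load, ch))
    return placement

# e.g. embedding tables of a recommendation model, load in GB/s of lookups
tables = {"user": 6.0, "item": 5.0, "context": 2.0, "tiny": 0.5}
print(place_tables(tables, n_channels=2))
```
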
  • Patent number: 11467968
    Abstract: A memory-adaptive processing method for a convolutional neural network includes a feature map counting step, a size relation counting step and a convolution calculating step. The feature map counting step is for counting the number of input channels of a plurality of input feature maps, an input feature map tile size, the number of output channels of a plurality of output feature maps, and an output feature map tile size for a convolutional layer operation. The size relation counting step is for obtaining a cache free space size in a feature map cache and counting a size relation. The convolution calculating step is for performing the convolutional layer operation with the input feature maps to produce the output feature maps according to a memory-adaptive processing technique, and the memory-adaptive processing technique includes a dividing step and an output-group-first processing step.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: October 11, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
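
The dividing and output-group-first steps can be sketched as a planning function: when the input tile plus all output channels exceed the cache free space, split the output channels into groups sized to fit alongside the inputs and process one group at a time. Sizes below are abstract units and the group-size formula is my assumption, not the patent's:

```python
def plan_output_groups(in_size, out_channels, out_tile_size, cache_free):
    """Return the ordered list of output-channel groups to process."""
    budget = cache_free - in_size            # cache space left for outputs
    assert budget >= out_tile_size, "cache cannot hold even one output tile"
    per_group = budget // out_tile_size      # channels that fit per pass
    return [list(range(c, min(c + per_group, out_channels)))
            for c in range(0, out_channels, per_group)]

# 64 output channels, each tile 4 units; inputs occupy 100 of 148 free units
for group in plan_output_groups(in_size=100, out_channels=64,
                                out_tile_size=4, cache_free=148):
    print("convolve output channels", group[0], "..", group[-1])
```
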
  • Publication number: 20220237430
    Abstract: A harmonic densely connecting method includes an input step, a plurality of layer operation steps and an output step. The input step is for storing an original input tensor of the block into a memory. Each of the layer operation steps includes a layer-input tensor concatenating step and a convolution operation step. The layer-input tensor concatenating step is for selecting at least one layer-input element tensor of a layer-input set from the memory according to an input connection rule. When the number of layer-input element tensors is greater than 1, all of the layer-input element tensors are concatenated to produce a layer-input tensor. The convolution operation step is for calculating a convolution operation to produce at least one result tensor and then storing the at least one result tensor into the memory. The output step is for outputting a block output.
    Type: Application
    Filed: April 13, 2022
    Publication date: July 28, 2022
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
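
The input connection rule can be made concrete with the harmonic pattern the same inventors published as HarDNet, where layer k reads the output of layer k - 2**n whenever 2**n divides k. A minimal sketch with string stand-ins for tensors and a placeholder convolution, since only the select/concatenate/store pattern matters here:

```python
def input_sources(k):
    """Indices of earlier outputs that feed layer k (index 0 = block input),
    following the harmonic rule: connect to k - 2**n when 2**n divides k."""
    sources, n = [], 0
    while 2 ** n <= k:
        if k % (2 ** n) == 0:
            sources.append(k - 2 ** n)
        n += 1
    return sources

def run_block(num_layers):
    memory = {0: "x"}                       # input step: store block input
    for k in range(1, num_layers + 1):
        parts = [memory[s] for s in sorted(input_sources(k))]
        layer_in = "+".join(parts)          # concatenate if more than one
        memory[k] = f"conv{k}({layer_in})"  # convolve, store result tensor
    return memory[num_layers]               # output step: block output

print(run_block(4))
```
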
  • Publication number: 20220121565
    Abstract: A computing device and an operation method thereof are provided. The computing device includes multiple memories and an indexer circuit. The indexer circuit is separately coupled to the memories through multiple memory channels. The indexer circuit determines an assignment of at least one lookup table to at least one of the memories according to a characteristic of the at least one lookup table and a transmission bandwidth of the memory channels, so as to balance a transmission load of the memory channels.
    Type: Application
    Filed: November 20, 2020
    Publication date: April 21, 2022
    Applicant: NEUCHIPS CORPORATION
    Inventors: Chao-Yang Kao, Youn-Long Lin
  • Patent number: 11307853
    Abstract: A matrix multiplication device and an operation method thereof are provided. The matrix multiplication device includes calculation circuits, a control circuit, a multiplication circuit, and a routing circuit. The calculation circuits produce multiply-accumulate values. The control circuit receives a plurality of first element values of a first matrix. The control circuit classifies the first element values into at least one classification value. The multiplication circuit multiplies the classification value by a second element value of a second matrix in a low power mode to obtain at least one product value. The routing circuit transmits each of the product values to at least one corresponding calculation circuit in the calculation circuits in the low power mode.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: April 19, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Chiung-Liang Lin, Chao-Yang Kao, Youn-Long Lin, Huang-Chih Kuo, Jian-Wen Chen
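
The classify-multiply-route flow pays off when element values repeat, as they do in heavily quantized weights: each distinct (classified) value is multiplied once and the single product is routed to every position that needs it. A minimal sketch of one column-times-scalar step, with a multiply counter standing in for the power saved in the low power mode:

```python
def column_times_scalar(first_column, second_value):
    """Multiply each element of first_column by second_value, computing each
    distinct (classified) element value only once."""
    products = {}                           # classification value -> product
    for v in set(first_column):             # control circuit: classify
        products[v] = v * second_value      # multiplication circuit
    routed = [products[v] for v in first_column]   # routing circuit
    return routed, len(products)

# quantized weights repeat heavily, so few distinct values exist
column = [3, -1, 3, 0, -1, 3, 0, 3]
routed, n_mul = column_times_scalar(column, second_value=7)
print(routed)                                    # [21, -7, 21, 0, -7, 21, 0, 21]
print(n_mul, "multiplies instead of", len(column))   # 3 instead of 8
```
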
  • Patent number: 11210215
    Abstract: A computing device and an operation method thereof are provided. The computing device includes a plurality of memories and a processing circuit. The processing circuit is coupled to the memories. The processing circuit dynamically determines in which of the plurality of memories to store at least one lookup table, according to characteristics of the at least one lookup table. The processing circuit may then execute at least one algorithm by using the at least one lookup table.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: December 28, 2021
    Assignee: NEUCHIPS CORPORATION
    Inventors: Youn-Long Lin, Chao-Yang Kao, Huang-Chih Kuo
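
One way to read "characteristics" is hotness: tables accessed often relative to their size earn a slot in the small fast memory, while the rest live in the large slow one. A minimal sketch of such a placement policy; the hotness metric and the two-level hierarchy are my assumptions, since the patent leaves the characteristics and policy open:

```python
def place(tables, fast_capacity):
    """tables: {name: (size, accesses)}. Returns {name: 'fast' | 'slow'}."""
    placement, used = {}, 0
    hotness = lambda kv: kv[1][1] / kv[1][0]   # accesses per unit of size
    for name, (size, _) in sorted(tables.items(), key=hotness, reverse=True):
        if used + size <= fast_capacity:       # hottest tables first
            placement[name], used = "fast", used + size
        else:
            placement[name] = "slow"
    return placement

tables = {"emb_small": (64, 9000), "emb_big": (4096, 10000), "aux": (32, 50)}
print(place(tables, fast_capacity=512))
# {'emb_small': 'fast', 'emb_big': 'slow', 'aux': 'fast'}
```
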
  • Patent number: 11068239
    Abstract: A curve function device and an operation method thereof are provided. The curve function device includes a lookup table, a weight calculation circuit, and a linear function circuit. According to first partial bits of an input value, a bias value of a current segment and a bias value of a next segment can be extracted from the lookup table. The weight calculation circuit can calculate a weight value of the current segment according to the bias value of the current segment and the bias value of the next segment. The linear function circuit can calculate a linear function value by using the bias value of the current segment, the weight value of the current segment, and second partial bits of the input value. This linear function value can be used as an approximate value of the curve function.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: July 20, 2021
    Assignee: NEUCHIPS CORPORATION
    Inventors: Huang-Chih Kuo, Youn-Long Lin
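
The arithmetic is a compact piecewise-linear interpolation: the upper input bits pick a segment, the lower bits interpolate within it, and the segment's slope (weight) is derived from two adjacent bias entries instead of being stored. A minimal sketch approximating a sigmoid with 8-bit inputs; the bit split and the sigmoid target are illustrative choices:

```python
import math

SEGS, FRAC_BITS = 8, 5
# the lookup table holds only bias values: the curve at each segment boundary
bias = [1 / (1 + math.exp(-(8 * s / SEGS))) for s in range(SEGS + 1)]

def curve(x8):
    """Approximate sigmoid(x) for x = x8 / 32.0, with x8 an 8-bit integer."""
    seg = x8 >> FRAC_BITS                  # first partial bits: segment index
    frac = x8 & ((1 << FRAC_BITS) - 1)     # second partial bits: fraction
    weight = bias[seg + 1] - bias[seg]     # weight from the two bias values
    return bias[seg] + weight * frac / (1 << FRAC_BITS)

for x8 in (16, 48, 96, 200):
    x = x8 / 32.0
    print(f"x={x:4.2f}  approx={curve(x8):.4f}  exact={1/(1+math.exp(-x)):.4f}")
```
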
  • Publication number: 20210182204
    Abstract: A memory-adaptive processing method for a convolutional neural network includes a feature map counting step, a size relation counting step and a convolution calculating step. The feature map counting step is for counting the number of input channels of a plurality of input feature maps, an input feature map tile size, the number of output channels of a plurality of output feature maps, and an output feature map tile size for a convolutional layer operation. The size relation counting step is for obtaining a cache free space size in a feature map cache and counting a size relation. The convolution calculating step is for performing the convolutional layer operation with the input feature maps to produce the output feature maps according to a memory-adaptive processing technique, and the memory-adaptive processing technique includes a dividing step and an output-group-first processing step.
    Type: Application
    Filed: February 26, 2021
    Publication date: June 17, 2021
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
  • Publication number: 20210097368
    Abstract: A processing system includes at least one signal processing unit and at least one neural network layer. A first signal processing unit of the at least one signal processing unit performs signal processing with at least one first parameter. A first neural network layer of the at least one neural network layer has at least one second parameter. The at least one first parameter and the at least one second parameter are trained together.
    Type: Application
    Filed: February 12, 2020
    Publication date: April 1, 2021
    Inventors: Youn-Long Lin, Chao-Yang Kao, Huang-Chih Kuo
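
Joint training simply means both parameter sets receive gradients from the same loss. A minimal sketch in which the signal processing unit is a single learnable gain g feeding a one-weight layer w; the toy task (recover y = 6x) and plain SGD with hand-derived gradients are illustrative only:

```python
data = [(x, 6.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
g, w, lr = 1.0, 1.0, 0.02          # gain, layer weight, learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * (g * x)          # DSP stage feeds the NN layer
        err = pred - y
        # gradients of 0.5 * err**2 with respect to both parameter sets
        g -= lr * err * w * x       # signal-processing parameter updates...
        w -= lr * err * g * x       # ...together with the layer weight

print(f"g*w = {g * w:.3f}  (target 6.0)")
```
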
  • Publication number: 20210096987
    Abstract: A computing device and an operation method thereof are provided. The computing device includes a plurality of memories and a processing circuit. The processing circuit is coupled to the memories. The processing circuit dynamically determines in which of the plurality of memories to store at least one lookup table, according to characteristics of the at least one lookup table. The processing circuit may then execute at least one algorithm by using the at least one lookup table.
    Type: Application
    Filed: February 18, 2020
    Publication date: April 1, 2021
    Applicant: NEUCHIPS CORPORATION
    Inventors: Youn-Long Lin, Chao-Yang Kao, Huang-Chih Kuo
  • Patent number: 10963390
    Abstract: A memory-adaptive processing method for a convolutional neural network includes a feature map counting step, a size relation counting step and a convolution calculating step. The feature map counting step is for counting a plurality of input channels of an input feature map tile and a plurality of output channels of an output feature map tile for a convolutional layer operation of the convolutional neural network. The size relation counting step is for obtaining a cache free space size in a feature map cache and counting a size relation among a total input size, a total output size and the cache free space size of the feature map cache. The convolution calculating step is for performing the convolutional layer operation according to a memory-adaptive processing technique.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: March 30, 2021
    Assignee: NEUCHIPS CORPORATION
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
  • Publication number: 20210064373
    Abstract: A matrix multiplication device and an operation method thereof are provided. The matrix multiplication device includes calculation circuits, a control circuit, a multiplication circuit, and a routing circuit. The calculation circuits produce multiply-accumulate values. The control circuit receives a plurality of first element values of a first matrix. The control circuit classifies the first element values into at least one classification value. The multiplication circuit multiplies the classification value by a second element value of a second matrix in a low power mode to obtain at least one product value. The routing circuit transmits each of the product values to at least one corresponding calculation circuit in the calculation circuits in the low power mode.
    Type: Application
    Filed: October 29, 2019
    Publication date: March 4, 2021
    Applicant: NEUCHIPS CORPORATION
    Inventors: Chiung-Liang Lin, Chao-Yang Kao, Youn-Long Lin, Huang-Chih Kuo, Jian-Wen Chen
  • Publication number: 20210064341
    Abstract: A curve function device and an operation method thereof are provided. The curve function device includes a lookup table, a weight calculation circuit, and a linear function circuit. According to first partial bits of an input value, a bias value of a current segment and a bias value of a next segment can be extracted from the lookup table. The weight calculation circuit can calculate a weight value of the current segment according to the bias value of the current segment and the bias value of the next segment. The linear function circuit can calculate a linear function value by using the bias value of the current segment, the weight value of the current segment, and second partial bits of the input value. This linear function value can be used as an approximate value of the curve function.
    Type: Application
    Filed: November 5, 2019
    Publication date: March 4, 2021
    Applicant: NEUCHIPS CORPORATION
    Inventors: Huang-Chih Kuo, Youn-Long Lin
  • Patent number: 10908879
    Abstract: A fast vector multiplication and accumulation circuit is applied to an artificial neural network accelerator and configured to calculate an inner product of a multiplier vector and a multiplicand vector. A scheduler is configured to arrange a plurality of multiplicands of the multiplicand vector into a plurality of scheduled operands according to a plurality of multipliers of the multiplier vector, respectively. A self-accumulating adder is signally connected to the scheduler and includes a compressor, at least two delay elements and at least one shifter. The compressor is configured to add the scheduled operands to generate a plurality of compressed operands. The at least two delay elements are connected to the compressor. The shifter is configured to shift one of the compressed operands. An adder is signally connected to the output ports of the compressor so as to add the compressed operands to generate the inner product.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: February 2, 2021
    Assignee: NEUCHIPS CORPORATION
    Inventors: Youn-Long Lin, Tao-Yi Lee
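
The scheduler's job can be mimicked in software: group the multiplicands by multiplier bit plane, sum each group (the compressor's role, played here by sum()), and fold the planes together with a shift-and-accumulate loop in place of N full multiplies. A minimal sketch assuming unsigned multipliers:

```python
def inner_product(multipliers, multiplicands, bits=8):
    acc = 0
    for j in reversed(range(bits)):            # most-significant bit first
        # scheduler: pick multiplicands whose multiplier has bit j set
        scheduled = [m for q, m in zip(multipliers, multiplicands)
                     if (q >> j) & 1]
        acc = (acc << 1) + sum(scheduled)      # shifter + compressor/adder
    return acc

qs = [3, 5, 250, 9]     # multiplier vector (unsigned, fits in 8 bits)
ms = [7, -2, 4, 11]     # multiplicand vector
assert inner_product(qs, ms) == sum(q * m for q, m in zip(qs, ms))
print(inner_product(qs, ms))   # 1110
```
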
  • Publication number: 20200410353
    Abstract: A harmonic densely connecting method includes an input step, a plurality of layer operation steps and an output step. The input step is for storing an original input tensor of the block into a memory. Each of the layer operation steps includes a layer-input tensor concatenating step and a convolution operation step. The layer-input tensor concatenating step is for selecting at least one layer-input element tensor of a layer-input set from the memory according to an input connection rule. When the number of layer-input element tensors is greater than 1, all of the layer-input element tensors are concatenated to produce a layer-input tensor. The convolution operation step is for calculating a convolution operation to produce at least one result tensor and then storing the at least one result tensor into the memory. The output step is for outputting a block output.
    Type: Application
    Filed: June 25, 2019
    Publication date: December 31, 2020
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin