Patents Assigned to NEUCHIPS CORPORATION
  • Publication number: 20240134931
    Abstract: A matrix computing device and an operation method for the matrix computing device are provided. The matrix computing device includes a storage unit, a control circuit, and a computing circuit. The storage unit includes a weight matrix. The control circuit re-orders an arrangement order of weights in the weight matrix according to a shape of an output matrix to determine a weight readout order of the weights. The computing circuit receives the weights based on the weight readout order, and performs a matrix computation on the weights and an input matrix to generate a computing matrix. The control circuit performs a reshape transformation on the computing matrix to generate the output matrix, and writes the output matrix to the storage unit.
    Type: Application
    Filed: December 7, 2022
    Publication date: April 25, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Chiung-Liang Lin, YuShan Ruan, Huan Jan Chou
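    The reorder-then-reshape flow above can be sketched in plain Python; the particular readout order, the helper names, and modeling the reshape transformation as an inverse permutation are illustrative assumptions, not the claimed circuit behavior:

```python
def matmul(a, b):
    """Plain row-by-column matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matmul_reordered(x, w, readout_order):
    """Stream weight columns in `readout_order`, multiply to get the
    computing matrix, then undo the permutation (standing in for the
    reshape transformation) to produce the output matrix."""
    w_reordered = [[row[j] for j in readout_order] for row in w]
    computed = matmul(x, w_reordered)                       # computing matrix
    inverse = sorted(range(len(readout_order)), key=readout_order.__getitem__)
    return [[row[j] for j in inverse] for row in computed]  # output matrix

# Any readout order yields the same output matrix after the final step.
x = [[1, 2], [3, 4]]
w = [[5, 6, 7], [8, 9, 10]]
assert matmul_reordered(x, w, [2, 0, 1]) == matmul(x, w)
```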
  • Publication number: 20240111827
    Abstract: The present disclosure provides a matrix device and an operation method thereof. The matrix device includes a transpose circuit and a memory. The transpose circuit is configured to receive a first element string representing a native matrix from a matrix source, wherein all elements in the native matrix are arranged in the first element string in one of a “row-major manner” and a “column-major manner”. The transpose circuit transposes the first element string into a second element string, wherein the second element string is equivalent to an element string in which all elements of the native matrix are arranged in another one of the “row-major manner” and the “column-major manner”. The memory is coupled to the transpose circuit to receive the second element string.
    Type: Application
    Filed: November 2, 2022
    Publication date: April 4, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Huang-Chih Kuo, YuShan Ruan, Jian-Wen Chen, Tzu-Jen Lo
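    The core conversion performed by such a transpose circuit can be sketched in a few lines; treating the element strings as Python lists is, of course, an assumption for illustration:

```python
def transpose_stream(elements, rows, cols):
    """Turn the row-major element string of a rows x cols native matrix
    into the equivalent column-major element string (or vice versa, with
    rows and cols swapped)."""
    return [elements[r * cols + c] for c in range(cols) for r in range(rows)]
```

    Applying the conversion twice with swapped dimensions round-trips the element string, which matches the "one manner in, the other manner out" wording of the abstract.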
  • Publication number: 20240086312
    Abstract: The invention provides a memory searching device and method. The memory searching device includes a memory, a lookup command processing circuit, and a lookup result processing circuit. The lookup command processing circuit reorders an original order of lookup commands in an original lookup command string into a new order based on an accessing characteristic of the memory, and provides a reordered lookup command string to the memory. The lookup result processing circuit is coupled to the memory to receive a lookup result string, and coupled to the lookup command processing circuit to receive mapping information between the original order and the new order. The lookup result string includes lookup results corresponding to the lookup commands of the reordered lookup command string. The lookup result processing circuit restores an order of the lookup results to the original order based on the mapping information.
    Type: Application
    Filed: October 20, 2022
    Publication date: March 14, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Huang-Chih Kuo, Youn-Long Lin
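    The reorder/restore pairing can be sketched as below; using a sort key as a stand-in for the memory's accessing characteristic is an assumption for illustration:

```python
def reorder_lookups(commands, memory_friendly_key):
    """Reorder lookup commands for the memory and keep the mapping
    information between the original order and the new order."""
    mapping = sorted(range(len(commands)),
                     key=lambda i: memory_friendly_key(commands[i]))
    reordered = [commands[i] for i in mapping]  # reordered lookup command string
    return reordered, mapping

def restore_results(results, mapping):
    """Restore lookup results to the original command order."""
    restored = [None] * len(results)
    for new_pos, orig_pos in enumerate(mapping):
        restored[orig_pos] = results[new_pos]
    return restored
```

    Results produced in the memory-friendly order come back to the caller in the original command order.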
  • Publication number: 20240061793
    Abstract: A computing device and a data access method therefor are provided. The computing device includes a bus, a destination memory circuit, and a source memory circuit. The source memory circuit provides multiple pieces of data to the destination memory circuit through the bus based on a burst access instruction. In an embodiment, a source address in the burst access instruction is one of multiple consecutive addresses of a source memory, and a destination address in the burst access instruction is a virtual address. In another embodiment, a source address in the burst access instruction is a virtual address, and a destination address in the burst access instruction is one of multiple consecutive addresses in the destination memory circuit. In yet another embodiment, a source address in the burst access instruction is a first virtual address, and a destination address in the burst access instruction is a second virtual address.
    Type: Application
    Filed: October 12, 2022
    Publication date: February 22, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Cheng-Bing Wu, YuShan Ruan
  • Publication number: 20240012872
    Abstract: A total interaction method and device to compute an interaction relationship between multiple features in a recommendation system are provided. The total interaction method includes: adding a plurality of categorical feature vectors to a first matrix, wherein each of the categorical feature vectors includes a plurality of latent features; performing one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix; transposing the second matrix to generate a transposed matrix; and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result.
    Type: Application
    Filed: August 23, 2022
    Publication date: January 11, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Ching-Yun Kao, Wei-Hsiang Kuo, Juinn-Dar Huang
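    Read literally, the method applies one interaction pass, transposes, and applies the other pass. A minimal sketch, assuming (purely for illustration; the abstract does not fix the operation) that each interaction computation is the Gram product M·Mᵀ:

```python
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def transpose(m):
    return [list(col) for col in zip(*m)]

def interaction(m):
    """One pass through a single interaction engine: M @ M^T."""
    return matmul(m, transpose(m))

def total_interaction(features):
    second = interaction(features)     # first interaction pass (second matrix)
    transposed = transpose(second)     # transpose between the two passes
    return interaction(transposed)     # other pass -> total interaction result
```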
  • Publication number: 20240005159
    Abstract: A simplification device and a simplification method for a neural network model are provided. The simplification method may simplify an original trained neural network model to a simplified trained neural network model, wherein the simplified trained neural network model includes at most two linear operation layers. The simplification method includes: converting the original trained neural network model into an original mathematical function; performing an iterative analysis operation on the original mathematical function to simplify the original mathematical function to a simplified mathematical function, wherein the simplified mathematical function has a new weight; computing the new weight by using multiple original weights of the original trained neural network model; and converting the simplified mathematical function to the simplified trained neural network model.
    Type: Application
    Filed: August 22, 2022
    Publication date: January 4, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Po-Han Chen, Yi Lee, Kai-Chiang Wu, Youn-Long Lin, Juinn-Dar Huang
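    The identity that makes such simplification possible is that consecutive linear layers compose into one linear layer whose new weights are computed from the original weights. A minimal sketch of that computation (helper names are illustrative):

```python
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matvec(w, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def collapse(w2, b2, w1, b1):
    """W2 @ (W1 @ x + b1) + b2  ==  (W2 @ W1) @ x + (W2 @ b1 + b2),
    so the simplified layer's new weights follow from the originals."""
    w_new = matmul(w2, w1)
    b_new = [v + c for v, c in zip(matvec(w2, b1), b2)]
    return w_new, b_new
```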
  • Patent number: 11836214
    Abstract: A matrix calculation device including a storing unit, a multiply accumulate (MAC) circuit, a pre-fetch circuit, and a control circuit, and an operation method thereof are provided. The storing unit stores a first matrix and a second matrix. The MAC circuit is configured to execute MAC calculation. The pre-fetch circuit pre-fetches at least one column of the first matrix from the storing unit to act as pre-fetch data, pre-fetches at least one row of the second matrix from the storing unit to act as the pre-fetch data, or pre-fetches at least one column of the first matrix and at least one row of the second matrix from the storing unit to act as the pre-fetch data. The control circuit decides whether to perform the MAC calculation on a current column of the first matrix and a current row of the second matrix through the MAC circuit according to the pre-fetch data.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: December 5, 2023
    Assignee: NEUCHIPS CORPORATION
    Inventors: Chiung-Liang Lin, Chao-Yang Kao
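    One plausible use of the pre-fetched column/row data is to skip multiply-accumulate steps that cannot change the result, such as an all-zero operand; the skip criterion here is an assumption, not taken from the claims:

```python
def matmul_with_prefetch_skip(a_cols, b_rows):
    """Outer-product matrix multiply: result += column_k(A) x row_k(B).
    Each pre-fetched column/row pair is checked before the MAC runs."""
    n, m = len(a_cols[0]), len(b_rows[0])
    acc = [[0] * m for _ in range(n)]
    for col, row in zip(a_cols, b_rows):  # pre-fetch data for step k
        if not any(col) or not any(row):  # decision: skip this MAC step
            continue
        for i, ci in enumerate(col):
            for j, rj in enumerate(row):
                acc[i][j] += ci * rj
    return acc
```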
  • Publication number: 20230325374
    Abstract: A generation method and an index condensation method of an embedding table are disclosed. The generation method includes: establishing an initial structure of the embedding table corresponding to categorical data according to an initial index dimension; performing model training on the embedding table having the initial structure to generate an initial content; defining each initial index as one of an important index and a non-important index based on the initial content; keeping initial indices defined as the important index in a condensed index dimension; dividing, based on a preset compression rate, initial indices defined as the non-important index into at least one initial index group each mapped to a condensed index in the condensed index dimension; establishing a new structure of the embedding table according to the condensed index dimension; and performing the model training on the embedding table having the new structure to generate a condensed content.
    Type: Application
    Filed: May 17, 2022
    Publication date: October 12, 2023
    Applicant: NEUCHIPS CORPORATION
    Inventors: Yu-Da Chu, Ching-Yun Kao, Juinn-Dar Huang
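    The index mapping can be sketched as follows; the boolean importance flags and the fixed group size (standing in for the preset compression rate) are illustrative assumptions:

```python
def condense_indices(importance, group_size):
    """Map each original index to a condensed index: important indices
    keep their own condensed index, non-important indices are grouped
    and each group shares one condensed index."""
    mapping = {}
    next_idx = 0
    for i, important in enumerate(importance):
        if important:
            mapping[i] = next_idx
            next_idx += 1
    non_important = [i for i, imp in enumerate(importance) if not imp]
    for g in range(0, len(non_important), group_size):
        for i in non_important[g:g + group_size]:
            mapping[i] = next_idx   # whole group maps to one condensed index
        next_idx += 1
    return mapping, next_idx        # mapping and condensed index dimension
```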
  • Publication number: 20230325709
    Abstract: An embedding table generation method and an embedding table condensation method are provided. The embedding table generation method includes: building an initial architecture of an embedding table corresponding to categorical data according to an initial feature dimension; performing model training on the embedding table with the initial architecture to generate initial content of the embedding table; computing a condensed feature dimension based on the initial content of the embedding table; building a new architecture of the embedding table according to the condensed feature dimension; and performing the model training on the embedding table with the new architecture to generate condensed content of the embedding table.
    Type: Application
    Filed: May 19, 2022
    Publication date: October 12, 2023
    Applicant: NEUCHIPS CORPORATION
    Inventors: Ching-Yun Kao, Yu-Da Chu, Juinn-Dar Huang
  • Patent number: 11782839
    Abstract: A feature map caching method of a convolutional neural network includes a connection analyzing step and a plurality of layer operation steps. The connection analyzing step is for analyzing a network to establish a convolutional neural network connection list. The convolutional neural network connection list includes a plurality of tensors and a plurality of layer operation coefficients. Each of the layer operation coefficients includes a step index, at least one input operand label and an output operand label. The step index serves as a processing order for the layer operation steps. At least one of the layer operation steps is for flushing at least one of the tensors in a cache according to a distance between the at least one of the layer operation steps and a future layer operation step of the layer operation steps. The distance is calculated according to the convolutional neural network connection list.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: October 10, 2023
    Assignee: NEUCHIPS CORPORATION
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
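    Because the connection list fixes every future access in advance, the flush decision can look ahead, much like a Belady (farthest-next-use) policy. A sketch under that reading, with an assumed data layout:

```python
def pick_tensor_to_flush(cache, step, future_uses):
    """future_uses[t] = sorted step indices that will read tensor t.
    Flush the cached tensor whose next use is farthest from `step`."""
    def next_use(t):
        later = [s for s in future_uses.get(t, []) if s > step]
        return later[0] if later else float("inf")  # never used again: best victim
    return max(cache, key=next_use)
```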
  • Patent number: 11615286
    Abstract: A computing system and a compressing method for neural network parameters are provided. In the method, multiple neural network parameters are obtained. The neural network parameters are used for a neural network algorithm. The neural network parameters are grouped into encoding combinations of at least two parameters each. The number of neural network parameters in each encoding combination is the same. The encoding combinations are compressed with the same compression target bit number. Each encoding combination is compressed independently. The compression target bit number is not larger than a bit number of each encoding combination. Thereby, the storage space can be saved and excessive power consumption for accessing the parameters can be prevented.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: March 28, 2023
    Assignee: NEUCHIPS CORPORATION
    Inventors: Youn-Long Lin, Chao-Yang Kao, Huang-Chih Kuo, Chiung-Liang Lin
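    A minimal sketch of fixed-budget, per-group compression, using simple max-scaled quantization as a stand-in for the actual encoding (which the abstract does not specify):

```python
def compress_group(group, bits_per_value):
    """Compress one encoding combination independently: every group
    produces a fixed-size record (one scale plus fixed-width codes)."""
    scale = max(abs(v) for v in group) or 1.0
    qmax = (1 << (bits_per_value - 1)) - 1
    codes = [round(v / scale * qmax) for v in group]
    return scale, codes

def decompress_group(scale, codes, bits_per_value):
    qmax = (1 << (bits_per_value - 1)) - 1
    return [c * scale / qmax for c in codes]
```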
  • Publication number: 20220358183
    Abstract: A matrix multiplier and an operation method thereof are provided. The matrix multiplier includes a plurality of first input lines, a plurality of second input lines and a computing array. The computing array includes a plurality of multiplication accumulation (MAC) cells. A first MAC cell of the plurality of MAC cells is coupled to a first corresponding input line of the plurality of first input lines and a second corresponding input line of the plurality of second input lines to receive a first input value and a second input value to perform a multiplication accumulation operation. When at least one of the first input value and the second input value is a specified value, the multiplication accumulation operation of the first MAC cell is disabled.
    Type: Application
    Filed: August 2, 2021
    Publication date: November 10, 2022
    Applicant: NEUCHIPS CORPORATION
    Inventors: Jian-Wen Chen, YuShan Ruan, Chih-Wei Chang, Youn-Long Lin
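    The per-cell disable rule can be sketched as a small stateful model; taking zero as the specified value is an assumption consistent with power saving, though the abstract leaves it open:

```python
class MacCell:
    """One multiplication-accumulation cell with a disable rule: when
    either input equals the specified value, the MAC operation for that
    cycle is disabled and the accumulator is left untouched."""
    def __init__(self, specified_value=0):
        self.acc = 0
        self.specified = specified_value

    def step(self, a, b):
        if a == self.specified or b == self.specified:
            return False          # operation disabled this cycle
        self.acc += a * b
        return True
```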
  • Patent number: 11474937
    Abstract: A computing device and an operation method thereof are provided. The computing device includes multiple memories and an indexer circuit. The indexer circuit is separately coupled to the memories through multiple memory channels. The indexer circuit determines an arrangement of at least one lookup table to at least one of the memories according to a characteristic of the at least one lookup table and a transmission bandwidth of the memory channels, so as to balance a transmission load of the memory channels.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: October 18, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Chao-Yang Kao, Youn-Long Lin
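    One way to realize the balancing decision is greedy placement of tables onto channels, weighting each channel's load by its bandwidth; the per-table traffic figures standing in for the lookup table's "characteristic" are assumptions:

```python
def place_tables(table_traffic, channel_bandwidth):
    """Assign the heaviest tables first, each to the channel whose
    bandwidth-normalized load would stay lowest."""
    load = [0.0] * len(channel_bandwidth)
    placement = {}
    for table, traffic in sorted(table_traffic.items(), key=lambda kv: -kv[1]):
        ch = min(range(len(load)),
                 key=lambda c: (load[c] + traffic) / channel_bandwidth[c])
        load[ch] += traffic
        placement[table] = ch
    return placement
```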
  • Patent number: 11467968
    Abstract: A memory-adaptive processing method for a convolutional neural network includes a feature map counting step, a size relation counting step and a convolution calculating step. The feature map counting step is for counting a number of a plurality of input channels of a plurality of input feature maps, an input feature map tile size, a number of a plurality of output channels of a plurality of output feature maps and an output feature map tile size for a convolutional layer operation. The size relation counting step is for obtaining a cache free space size in a feature map cache and counting a size relation. The convolution calculating step is for performing the convolutional layer operation with the input feature maps to produce the output feature maps according to a memory-adaptive processing technique, and the memory-adaptive processing technique includes a dividing step and an output-group-first processing step.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: October 11, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
  • Patent number: 11456755
    Abstract: The disclosure provides a look-up table (LUT) compression method and a LUT reading method for computation equipment and its host and device. In a LUT compression phase, the host retrieves an original data from an original LUT by using an original table address, checks the original data according to a reconstruction condition to obtain a check result (bitmap), converts the original data into a reconstructed data according to the check result, writes the reconstructed data to a compressed LUT by using a compressed table address, writes a relationship among the original table address, the compressed table address, and the check result (bitmap) to a mapping table, and stores the compressed LUT to the device.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: September 27, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Tzu-Jen Lo, Huang-Chih Kuo
  • Patent number: 11387843
    Abstract: A method and an apparatus for encoding and decoding floating-point numbers are provided. The method for encoding is used to convert at least one original floating-point number to at least one encoded floating-point number. The method for encoding includes: determining a number of exponent bits of the at least one encoded floating-point number and calculating an exponent bias according to at least one original exponent value of the at least one original floating-point number; and converting an original exponent value of a current original floating-point number of the at least one original floating-point number to an encoded exponent value of a current encoded floating-point number of the at least one encoded floating-point number according to the exponent bias.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: July 12, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Juinn Dar Huang, Cheng Wei Huang, Tim Wei Chen, Chiung-Liang Lin
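    The exponent re-biasing idea can be sketched with `math.frexp`; the field-width rule and the helper names are illustrative assumptions:

```python
import math

def choose_encoding(values):
    """Pick the exponent bit width and bias from the data so that every
    original exponent fits the smaller encoded exponent field."""
    exps = [math.frexp(v)[1] for v in values if v != 0.0]
    lo, hi = min(exps), max(exps)
    exponent_bits = max(1, (hi - lo).bit_length())  # bits to span the range
    bias = lo                                       # encoded = original - bias
    return exponent_bits, bias

def encode_exponent(value, bias):
    return math.frexp(value)[1] - bias
```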
  • Patent number: 11379185
    Abstract: A matrix multiplication device and an operation method thereof are provided. The matrix multiplication device includes a plurality of unit circuits. Each of the unit circuits includes a multiplying-adding circuit, a first register, and a second register. A first input terminal and a second input terminal of the multiplying-adding circuit are respectively coupled to a corresponding first input line and a corresponding second input line. An input terminal and an output terminal of the first register are respectively coupled to an output terminal and a third input terminal of the multiplying-adding circuit. The second register is coupled to the first register to receive and temporarily store a multiplication accumulation result. Wherein, the second registers of the unit circuits output the multiplication accumulation results in a column direction in a first output mode, and output the multiplication accumulation results in a row direction in a second output mode.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: July 5, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Jian-Wen Chen, Chiung-Liang Lin
  • Publication number: 20220121565
    Abstract: A computing device and an operation method thereof are provided. The computing device includes multiple memories and an indexer circuit. The indexer circuit is separately coupled to the memories through multiple memory channels. The indexer circuit determines an arrangement of at least one lookup table to at least one of the memories according to a characteristic of the at least one lookup table and a transmission bandwidth of the memory channels, so as to balance a transmission load of the memory channels.
    Type: Application
    Filed: November 20, 2020
    Publication date: April 21, 2022
    Applicant: NEUCHIPS CORPORATION
    Inventors: Chao-Yang Kao, Youn-Long Lin
  • Patent number: 11307853
    Abstract: A matrix multiplication device and an operation method thereof are provided. The matrix multiplication device includes calculation circuits, a control circuit, a multiplication circuit, and a routing circuit. The calculation circuits produce multiply-accumulate values. The control circuit receives a plurality of first element values of a first matrix. The control circuit classifies the first element values into at least one classification value. The multiplication circuit multiplies the classification value by a second element value of a second matrix in a low power mode to obtain at least one product value. The routing circuit transmits each of the product values to at least one corresponding calculation circuit in the calculation circuits in the low power mode.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: April 19, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Chiung-Liang Lin, Chao-Yang Kao, Youn-Long Lin, Huang-Chih Kuo, Jian-Wen Chen
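    The classify-multiply-route flow can be sketched as follows (the data structures are illustrative): equal first-matrix element values share a single multiplication, and routing information records which calculation circuits each product is sent to:

```python
def classified_products(first_elements, second_value):
    """One multiplication per distinct classification value; each entry
    maps a value to (product, positions of the circuits it routes to)."""
    products = {}
    for pos, v in enumerate(first_elements):
        if v not in products:
            products[v] = (v * second_value, [])  # single multiply per value
        products[v][1].append(pos)                # route product to circuit `pos`
    return products
```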
  • Publication number: 20220066736
    Abstract: A matrix multiplication device and an operation method thereof are provided. The matrix multiplication device includes a plurality of unit circuits. Each of the unit circuits includes a multiplying-adding circuit, a first register, and a second register. A first input terminal and a second input terminal of the multiplying-adding circuit are respectively coupled to a corresponding first input line and a corresponding second input line. An input terminal and an output terminal of the first register are respectively coupled to an output terminal and a third input terminal of the multiplying-adding circuit. The second register is coupled to the first register to receive and temporarily store a multiplication accumulation result. Wherein, the second registers of the unit circuits output the multiplication accumulation results in a column direction in a first output mode, and output the multiplication accumulation results in a row direction in a second output mode.
    Type: Application
    Filed: September 21, 2020
    Publication date: March 3, 2022
    Applicant: NEUCHIPS CORPORATION
    Inventors: Jian-Wen Chen, Chiung-Liang Lin