Patents by Inventor Juinn-Dar Huang

Juinn-Dar Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240012872
    Abstract: A total interaction method and device to compute an interaction relationship between multiple features in a recommendation system are provided. The total interaction method includes: adding a plurality of categorical feature vectors to a first matrix, wherein each of the categorical feature vectors includes a plurality of latent features; performing one of categorical feature interaction computation and latent feature interaction computation on the first matrix to generate a second matrix; transposing the second matrix to generate a transposed matrix; and performing the other one of the categorical feature interaction computation and the latent feature interaction computation on the transposed matrix to generate a total interaction result.
    Type: Application
    Filed: August 23, 2022
    Publication date: January 11, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Ching-Yun Kao, Wei-Hsiang Kuo, Juinn-Dar Huang
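One way to read the pipeline above is that a single interaction unit computes pairwise dot products between matrix rows, and the transpose step lets that same unit cover both categorical-feature and latent-feature interactions. This is a hedged sketch: the abstract does not fix the interaction operator, so the row-wise dot-product form below is an assumption.

```python
import numpy as np

def row_interaction(m):
    # Pairwise dot products between the rows of m: one plausible
    # realization of an "interaction computation".
    return m @ m.T

# Hypothetical first matrix: 4 categorical feature vectors, 3 latent features each.
X = np.arange(12, dtype=float).reshape(4, 3)

cat = row_interaction(X)     # categorical-feature interactions (4 x 4)
lat = row_interaction(X.T)   # latent-feature interactions via transpose (3 x 3)
```

Under this reading, the transpose is what lets one interaction unit serve both computations, matching the abstract's "performing the other one ... on the transposed matrix" step.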
  • Publication number: 20240005159
    Abstract: A simplification device and a simplification method for a neural network model are provided. The simplification method may simplify an original trained neural network model to a simplified trained neural network model, wherein the simplified trained neural network model includes at most two linear operation layers. The simplification method includes: converting the original trained neural network model into an original mathematical function; performing an iterative analysis operation on the original mathematical function to simplify the original mathematical function to a simplified mathematical function, wherein the simplified mathematical function has a new weight; computing the new weight by using multiple original weights of the original trained neural network model; and converting the simplified mathematical function to the simplified trained neural network model.
    Type: Application
    Filed: August 22, 2022
    Publication date: January 4, 2024
    Applicant: NEUCHIPS CORPORATION
    Inventors: Po-Han Chen, Yi Lee, Kai-Chiang Wu, Youn-Long Lin, Juinn-Dar Huang
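The "at most two linear operation layers" result rests on the fact that a composition of linear layers is itself linear. A minimal sketch (the helper name is hypothetical) of folding two consecutive linear layers into one by recomputing the weights:

```python
import numpy as np

def fold_linear(W2, b2, W1, b1):
    # y = W2 @ (W1 @ x + b1) + b2  ==  (W2 @ W1) @ x + (W2 @ b1 + b2),
    # so two linear layers collapse into a single layer with new weights.
    return W2 @ W1, W2 @ b1 + b2

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)
W, b = fold_linear(W2, b2, W1, b1)

x = rng.standard_normal(3)
# The folded layer reproduces the two-layer computation exactly.
```

The patent's iterative analysis presumably generalizes this idea to whole model graphs; the two-matrix case above only shows why the new weight is computable from the original weights.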
  • Publication number: 20230325374
    Abstract: A generation method and an index condensation method of an embedding table are disclosed. The generation method includes: establishing an initial structure of the embedding table corresponding to categorical data according to an initial index dimension; performing model training on the embedding table having the initial structure to generate an initial content; defining each initial index as one of an important index and a non-important index based on the initial content; keeping initial indices defined as the important index in a condensed index dimension; dividing, based on a preset compression rate, initial indices defined as the non-important index into at least one initial index group each mapped to a condensed index in the condensed index dimension; establishing a new structure of the embedding table according to the condensed index dimension; and performing the model training on the embedding table having the new structure to generate a condensed content.
    Type: Application
    Filed: May 17, 2022
    Publication date: October 12, 2023
    Applicant: NEUCHIPS CORPORATION
    Inventors: Yu-Da Chu, Ching-Yun Kao, Juinn-Dar Huang
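As a hedged illustration of the index-condensation steps (the importance criterion and grouping policy below are assumptions, not the patent's), important indices keep private condensed rows while non-important indices are grouped by a preset compression rate so each group shares one condensed index:

```python
import numpy as np

def condense_indices(emb, keep_ratio=0.25, compression=4):
    # Rank initial indices by embedding-vector L2 norm (one possible
    # importance criterion; the abstract leaves the criterion open).
    norms = np.linalg.norm(emb, axis=1)
    order = np.argsort(-norms)
    n_keep = int(len(emb) * keep_ratio)
    important, others = order[:n_keep], order[n_keep:]

    remap = np.empty(len(emb), dtype=int)
    # Important indices each keep a private condensed index.
    for new, old in enumerate(important):
        remap[old] = new
    # Non-important indices are grouped; each group maps to one shared index.
    for pos, old in enumerate(others):
        remap[old] = n_keep + pos // compression
    return remap

emb = np.arange(16, dtype=float).reshape(8, 2)
remap = condense_indices(emb)
# 2 private rows + ceil(6/4) = 2 shared rows -> 4 condensed indices
```

The condensed embedding table then needs only `remap.max() + 1` rows, and retraining with the new structure produces the condensed content.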
  • Publication number: 20230325709
    Abstract: An embedding table generation method and an embedding table condensation method are provided. The embedding table generation method includes: building an initial architecture of an embedding table corresponding to categorical data according to an initial feature dimension; performing model training on the embedding table with the initial architecture to generate initial content of the embedding table; computing a condensed feature dimension based on the initial content of the embedding table; building a new architecture of the embedding table according to the condensed feature dimension; and performing the model training on the embedding table with the new architecture to generate condensed content of the embedding table.
    Type: Application
    Filed: May 19, 2022
    Publication date: October 12, 2023
    Applicant: NEUCHIPS CORPORATION
    Inventors: Ching-Yun Kao, Yu-Da Chu, Juinn-Dar Huang
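The abstract leaves open how the condensed feature dimension is computed from the initial content; one plausible (assumed, not the patent's) criterion is the smallest rank whose singular values retain a target fraction of the table's spectral energy:

```python
import numpy as np

def condensed_dim(emb, energy=0.9):
    # Hypothetical criterion: smallest rank keeping `energy` of the
    # embedding table's total squared singular-value mass.
    s = np.linalg.svd(emb, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

emb = np.outer(np.arange(1.0, 7.0), np.ones(8))  # rank-1 initial content
d = condensed_dim(emb)
```

A new table architecture of width `d` would then be rebuilt and retrained, per the abstract's final two steps.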
  • Patent number: 11387843
    Abstract: A method and apparatus for encoding and decoding of floating-point numbers are provided. The method for encoding is used to convert at least one original floating-point number to at least one encoded floating-point number. The method for encoding includes: determining a number of exponent bits of the at least one encoded floating-point number and calculating an exponent bias according to at least one original exponent value of the at least one original floating-point number; and converting an original exponent value of a current original floating-point number of the at least one original floating-point number to an encoded exponent value of a current encoded floating-point number of the at least one encoded floating-point number according to the exponent bias.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: July 12, 2022
    Assignee: NEUCHIPS CORPORATION
    Inventors: Juinn-Dar Huang, Cheng-Wei Huang, Tim-Wei Chen, Chiung-Liang Lin
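As a hedged sketch of the encoding idea (the exact bias and bit-width rules are not specified in the abstract; the min-exponent bias below is an assumption), a shared bias computed from the original exponent values shifts every exponent into a small non-negative range that fits in fewer exponent bits:

```python
import math

def encode_exponents(exps):
    # Assumed scheme: bias = smallest original exponent; the exponent
    # field then only needs enough bits for the observed exponent span.
    bias = min(exps)
    span = max(exps) - bias
    bits = math.ceil(math.log2(span + 1)) if span else 1
    return bias, bits, [e - bias for e in exps]

bias, bits, enc = encode_exponents([-3, 0, 4])
# span = 7 -> 3 exponent bits; encoded exponent values lie in 0..7
```

Decoding would add the bias back, which is why the abstract pairs the encoding method with a decoding counterpart.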
  • Publication number: 20210357758
    Abstract: A method for deep neural network compression is provided. The method includes: taking at least one weight of a deep neural network (DNN), setting a value of a parameter P, combining every P weights into groups, and performing branch pruning and retraining so that only one weight in each group is non-zero and the remaining weights are 0, wherein the weights are evenly divided into branches to adjust a compression rate of the DNN and a reduction rate of the DNN.
    Type: Application
    Filed: April 27, 2021
    Publication date: November 18, 2021
    Inventors: Juinn-Dar HUANG, Ya-Chu CHANG, Wei-Chen LIN
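A minimal sketch of the grouping-and-pruning step, assuming the surviving weight in each group of P is chosen by largest magnitude (the abstract does not specify the selection rule, and retraining is omitted here):

```python
import numpy as np

def group_prune(w, p):
    # Partition the flat weight vector into groups of P and zero out
    # all but the largest-magnitude weight in each group.
    w = w.reshape(-1, p)
    keep = np.abs(w).argmax(axis=1)
    mask = np.zeros_like(w)
    mask[np.arange(len(w)), keep] = 1
    return (w * mask).reshape(-1)

pruned = group_prune(np.array([0.1, -0.9, 0.3, 0.2, 0.05, -0.1, 0.7, 0.4]), p=4)
# exactly one non-zero survivor per group of 4
```

Larger P gives a higher compression rate (fewer survivors), which is how the single parameter tunes the trade-off described in the abstract.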
  • Publication number: 20210326697
    Abstract: A convolution operation module comprising a first memory element, a second memory element and a first operation unit is presented. The first memory element is configured to store a first part of first row data of array data. The second memory element is configured to store a second part of second row data of the array data, wherein the second row data is adjacent to the first row data in the array data, and the first part and the second part have the same amount of data. The first operation unit is coupled to the first memory element and the second memory element; it integrates the first part and the second part into a first operation matrix, and performs a convolution operation on the first operation matrix and a first kernel map to derive a first feature value.
    Type: Application
    Filed: August 27, 2020
    Publication date: October 21, 2021
    Applicant: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Juinn-Dar HUANG, Yi LU, Yi-Lin WU
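A software analogue (hypothetical; the patent describes hardware memory elements) of how the operation unit integrates the two stored row parts into an operation matrix and applies the kernel map as a multiply-accumulate:

```python
import numpy as np

def feature_value(first_part, second_part, kernel):
    # Stack the stored parts of two adjacent rows into one operation
    # matrix, then element-wise multiply by the kernel and accumulate.
    op = np.vstack([first_part, second_part])
    return float(np.sum(op * kernel))

row0 = np.array([1.0, 2.0, 3.0])  # first part of the first row data
row1 = np.array([4.0, 5.0, 6.0])  # second part of the second row data
k = np.ones((2, 3))               # hypothetical 2x3 kernel map
v = feature_value(row0, row1, k)
```

Keeping only matching-size parts of two adjacent rows in separate memories is what lets the unit form the operation matrix without buffering whole rows.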
  • Publication number: 20110153709
    Abstract: A compressor tree synthesis algorithm, named DOCT, guarantees delay-optimal implementation in LUT-based FPGAs. Given a targeted K-input LUT architecture, DOCT first derives a finite set of prime patterns as essential building blocks. It then shows that a delay-optimal compressor tree can always be constructed from those derived prime patterns via integer linear programming (ILP). Without loss of delay optimality, a post-processing procedure is invoked to reduce the number of LUTs demanded by the generated compressor tree design. DOCT has been evaluated over a broad set of benchmark circuits, and it reduces both the depth of the compressor tree and the number of LUTs on a modern 8-input LUT-based FPGA architecture.
    Type: Application
    Filed: March 4, 2010
    Publication date: June 23, 2011
    Inventors: Juinn-Dar HUANG, Jhih-Hong Lu, Bu-Ching Lin, Jing-Yang Jou
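DOCT's prime patterns and ILP formulation go beyond what the abstract states, but the role of the LUT input count K in bounding tree depth can be sketched with a rough, assumed reduction model (not DOCT itself): each LUT level reduces the live signal count by at most a factor of K.

```python
import math

def lut_tree_depth(n_signals, k=8):
    # Coarse lower-bound model: every level of K-input LUTs maps up to
    # K signals to one output, so each level divides the count by K.
    depth = 0
    while n_signals > 1:
        n_signals = math.ceil(n_signals / k)
        depth += 1
    return depth
```

This is only intuition for why larger K (e.g. the modern 8-input LUT architecture mentioned above) permits shallower compressor trees; DOCT's contribution is achieving the exact optimum, not this bound.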
  • Patent number: 7881241
    Abstract: Multiplexers are basic components widely used in VLSI designs. The switching activities of a multiplexer are one of the most important factors in its power consumption. A multiplexer may be composed of several sub-multiplexers. In the present invention, an extra dynamic controller is applied to reconfigure the control signals so as to decrease the switching activities of the constituent sub-multiplexers. Thus, the power consumption of the multiplexer is reduced, achieving higher power efficiency.
    Type: Grant
    Filed: June 6, 2007
    Date of Patent: February 1, 2011
    Assignee: National Chiao Tung University
    Inventors: Juinn-Dar Huang, Chia-I Chen
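The effect of reconfiguring control signals can be illustrated with a toy toggle counter (the encodings below are hypothetical; the patented controller chooses codes dynamically at run time):

```python
def toggle_count(selections, code):
    # Total select-line bit flips when driving the mux through the
    # sequence `selections` under the code assignment `code`.
    flips, prev = 0, None
    for s in selections:
        c = code[s]
        if prev is not None:
            flips += bin(prev ^ c).count("1")
        prev = c
    return flips

seq = [0, 1, 0, 1, 0]            # inputs 0 and 1 alternate frequently
naive = {0: 0b00, 1: 0b11}       # codes differ in both select bits
tuned = {0: 0b00, 1: 0b01}       # reconfigured codes differ in one bit
```

On this trace the tuned assignment halves the select-line switching activity, which is the kind of saving a dynamic control-signal reconfiguration targets.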
  • Patent number: 7577780
    Abstract: A fine-grained bandwidth control arbiter manages the shared-bus usage of requests from masters that have real-time and/or bandwidth requirements; in addition, each master is preassigned a number of tickets. The arbiter consists of three components: a real-time handler, a bandwidth regulator, and a lottery manager with tuned weights. The real-time handler grants the most urgent request. The bandwidth regulator handles bandwidth allocation and blocks requests from masters that have already met their bandwidth requirements. The lottery manager with tuned weights stochastically grants one of the contending masters according to the ticket assignment.
    Type: Grant
    Filed: February 28, 2007
    Date of Patent: August 18, 2009
    Assignee: National Chiao Tung University
    Inventors: Juinn-Dar Huang, Bu-Ching Lin, Geeng-Wei Lee, Jing-Yang Jou
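A minimal sketch of the lottery stage (assumed interface; the real arbiter first runs the real-time handler and bandwidth regulator, which are omitted here):

```python
import random

def lottery_grant(contenders, tickets, rng=random):
    # Stochastically grant one contending master, weighted by its
    # preassigned ticket count (tuned-weight lottery scheduling).
    weights = [tickets[m] for m in contenders]
    return rng.choices(contenders, weights=weights, k=1)[0]

# With all tickets held by one master, the grant is deterministic.
granted = lottery_grant(["cpu", "dma"], {"cpu": 3, "dma": 0})
```

Ticket counts set each master's expected share of grants, which is how the lottery manager converts a static ticket assignment into proportional bus bandwidth.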
  • Publication number: 20080209093
    Abstract: A fine-grained bandwidth control arbiter manages the shared-bus usage of requests from masters that have real-time and/or bandwidth requirements; in addition, each master is preassigned a number of tickets. The arbiter consists of three components: a real-time handler, a bandwidth regulator, and a lottery manager with tuned weights. The real-time handler grants the most urgent request. The bandwidth regulator handles bandwidth allocation and blocks requests from masters that have already met their bandwidth requirements. The lottery manager with tuned weights stochastically grants one of the contending masters according to the ticket assignment.
    Type: Application
    Filed: February 28, 2007
    Publication date: August 28, 2008
    Inventors: Juinn-Dar Huang, Bu-Ching Lin, Geeng-Wei Lee, Jing-Yang Jou
  • Publication number: 20080198784
    Abstract: Multiplexers are basic components widely used in VLSI designs. The switching activities of a multiplexer are one of the most important factors in its power consumption. A multiplexer may be composed of several sub-multiplexers. In the present invention, an extra dynamic controller is applied to reconfigure the control signals so as to decrease the switching activities of the constituent sub-multiplexers. Thus, the power consumption of the multiplexer is reduced, achieving higher power efficiency.
    Type: Application
    Filed: June 6, 2007
    Publication date: August 21, 2008
    Applicant: National Chiao Tung University
    Inventors: Juinn-Dar Huang, Chia-I Chen