Patents by Inventor Juin-Ming Lu
Juin-Ming Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230208639
Abstract: A neural network (NN) processing method is provided. An AI (artificial intelligence) compiler code of an AI compiler is transformed to a garbled circuit code by performing the following steps. A circuit graph of a garbled circuit having logic gates corresponding to the garbled circuit code is sent to an electrical device by a server. Key codebooks for candidate gates corresponding to each logic gate are created by the electrical device. Garbled truth tables for the candidate gates corresponding to each logic gate are generated and transmitted to the server by the electrical device using the OT (Oblivious Transfer) protocol. A target garbled truth table of each logic gate is generated by the server. Afterward, an NN model is encrypted according to the key codebooks by the electrical device, and a compiled NN model of the encrypted NN model is generated by the server.
Type: Application
Filed: December 27, 2021
Publication date: June 29, 2023
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Ming-Chih TUNG, Hsin-Lung WU, Juin-Ming LU, Bo-Xuan ZHU
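The garbled-truth-table step described above can be illustrated with a minimal, textbook Yao-style garbled gate. This is a hypothetical sketch for intuition only, not the patent's protocol: it garbles a single two-input gate with random wire labels and omits the OT transfer and codebook machinery entirely.

```python
import hashlib
import os
import random

def garble_gate(gate_fn):
    """Garble one two-input logic gate (simplified Yao construction).

    For each wire (two inputs 'a'/'b' and one output) pick a random
    16-byte label per bit value, then encrypt each truth-table row's
    output label under a hash of the two matching input labels.
    """
    keys = {w: {b: os.urandom(16) for b in (0, 1)} for w in ("a", "b", "out")}
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            pad = hashlib.sha256(keys["a"][a] + keys["b"][b]).digest()[:24]
            # Append a zero tag so the evaluator can recognize the right row.
            plain = keys["out"][gate_fn(a, b)] + b"\x00" * 8
            rows.append(bytes(x ^ y for x, y in zip(plain, pad)))
    random.shuffle(rows)  # hide which row encodes which input pair
    return keys, rows

def eval_garbled(rows, key_a, key_b):
    """Holding one label per input wire, exactly one row decrypts cleanly."""
    pad = hashlib.sha256(key_a + key_b).digest()[:24]
    for row in rows:
        plain = bytes(x ^ y for x, y in zip(row, pad))
        if plain.endswith(b"\x00" * 8):
            return plain[:16]  # the output wire's label
    raise ValueError("no row decrypted")
```

The evaluator learns only an opaque output label, never the plaintext bit, which is what lets one party evaluate the other party's circuit without seeing its inputs.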
-
Patent number: 11657273
Abstract: An adaptive learning power modeling method includes: sampling at least one of a plurality of network components to form a power consumption evaluation network according to at least one parameter within a parameter range; evaluating a predictive power consumption of a to-be-measured circuit by the power consumption evaluation network; training and evaluating an actual power consumption and the predictive power consumption of the to-be-measured circuit by the power consumption evaluation network to obtain an evaluation result; and performing training according to the evaluation result to determine whether to change the power consumption evaluation network.
Type: Grant
Filed: December 27, 2019
Date of Patent: May 23, 2023
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Yao-Hua Chen, Jing-Jia Liou, Chih-Tsun Huang, Juin-Ming Lu
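The sample-evaluate-adapt loop in this abstract can be sketched as follows. All names and the toy linear predictor are assumptions for illustration; the patent does not specify the network's form.

```python
import random

def sample_network(components, k, seed=None):
    """Sample k candidate components to form a power-evaluation network."""
    rng = random.Random(seed)
    return rng.sample(components, k)

def predict_power(network, activity):
    """Toy predictor: weighted sum of per-component coefficients."""
    return sum(coeff * activity for coeff in network)

def adapt(components, k, activity, actual_power, tolerance, max_rounds=20):
    """Resample the evaluation network until prediction is within tolerance.

    The comparison of predicted vs. actual power is the 'evaluation
    result' that decides whether the network should be changed.
    """
    for round_no in range(max_rounds):
        net = sample_network(components, k, seed=round_no)
        err = abs(predict_power(net, activity) - actual_power)
        if err <= tolerance:
            return net, err   # keep this network
    return net, err           # give up, return the last candidate
```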
-
Patent number: 11551066
Abstract: A DNN hardware accelerator and an operation method of the DNN hardware accelerator are provided. The DNN hardware accelerator includes: a network distributor for receiving an input data and distributing respective bandwidth of a plurality of data types of a target data amount based on a plurality of bandwidth ratios of the target data amount; and a processing element array coupled to the network distributor, for communicating data of the data types of the target data amount with the network distributor based on the distributed bandwidth of the data types.
Type: Grant
Filed: January 15, 2019
Date of Patent: January 10, 2023
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Yao-Hua Chen, Chun-Chen Chen, Chih-Tsun Huang, Jing-Jia Liou, Chun-Hung Lai, Juin-Ming Lu
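Splitting a total bandwidth across data types by ratio is straightforward; a minimal sketch follows. The data-type names are hypothetical examples (typical DNN accelerator traffic classes), not taken from the patent.

```python
def distribute_bandwidth(total_bw, ratios):
    """Split total bandwidth across data types according to their ratios.

    ratios: e.g. {"ifmap": 2, "filter": 1, "psum": 1} gives the input
    feature maps twice the share of the other two traffic types.
    """
    denom = sum(ratios.values())
    return {dtype: total_bw * r / denom for dtype, r in ratios.items()}
```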
-
Publication number: 20220207323
Abstract: A processing element architecture adapted to a convolution comprises a plurality of processing elements and a delayed queue circuit. The plurality of processing elements includes a first processing element and a second processing element, wherein the first processing element and the second processing element perform the convolution according to a shared datum at least. The delayed queue circuit connects to the first processing element and connects to the second processing element. The delayed queue circuit receives the shared datum sent by the first processing element, and sends the shared datum to the second processing element after receiving the shared datum and waiting for a time interval.
Type: Application
Filed: December 29, 2020
Publication date: June 30, 2022
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Yao-Hua CHEN, Yu-Xiang YEN, Wan-Shan HSIEH, Chih-Tsun HUANG, Juin-Ming LU, Jing-Jia LIOU
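The delayed queue circuit's behavior, holding a shared datum for a fixed interval before handing it to the next processing element, can be modeled with a small FIFO. This is an assumed cycle-based software model, not the hardware design itself.

```python
from collections import deque

class DelayedQueue:
    """Forward a shared datum from one PE to the next after a fixed delay.

    Each pushed datum is tagged with the cycle at which it becomes
    visible to the downstream processing element.
    """
    def __init__(self, delay):
        self.delay = delay
        self.fifo = deque()  # entries: (release_cycle, datum)

    def push(self, cycle, datum):
        """Called by the first processing element when it sends the datum."""
        self.fifo.append((cycle + self.delay, datum))

    def pop(self, cycle):
        """Polled by the second PE; returns the datum once the delay elapses."""
        if self.fifo and self.fifo[0][0] <= cycle:
            return self.fifo.popleft()[1]
        return None
```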
-
Patent number: 11113580
Abstract: An image classification system includes a storage device, a computing device and a first processing device. The storage device stores a plurality of pseudo-centroid datasets, wherein the pseudo-centroid datasets correspond to a plurality of units of first image dataset, and the number of pseudo-centroid data points of each of the pseudo-centroid datasets is much smaller than the number of data points of each of the units of first image dataset. The computing device receives the second image data and computes a plurality of feature values of the second image data. The first processing device receives the feature values and the pseudo-centroid datasets, and compares the feature values with the pseudo-centroid data points to identify and classify the second image data.
Type: Grant
Filed: December 30, 2019
Date of Patent: September 7, 2021
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Jiung-Yao Huang, Hsin-Lung Wu, Juin-Ming Lu
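The comparison step, matching an image's feature values against a small set of pseudo-centroids instead of the full dataset, amounts to nearest-centroid classification. A minimal sketch under that assumption (the distance metric and data layout are illustrative choices, not specified by the patent):

```python
import math

def classify(features, pseudo_centroids):
    """Return the label of the nearest pseudo-centroid.

    pseudo_centroids: {label: [centroid vectors]}. Each label keeps only
    a handful of centroids, far fewer than the original data points.
    """
    best_label, best_dist = None, math.inf
    for label, centroids in pseudo_centroids.items():
        for c in centroids:
            d = math.dist(features, c)  # Euclidean distance
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label
```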
-
Publication number: 20210201127
Abstract: An adaptive learning power modeling method includes: sampling at least one of a plurality of network components to form a power consumption evaluation network according to at least one parameter within a parameter range; evaluating a predictive power consumption of a to-be-measured circuit by the power consumption evaluation network; training and evaluating an actual power consumption and the predictive power consumption of the to-be-measured circuit by the power consumption evaluation network to obtain an evaluation result; and performing training according to the evaluation result to determine whether to change the power consumption evaluation network.
Type: Application
Filed: December 27, 2019
Publication date: July 1, 2021
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Yao-Hua CHEN, Jing-Jia LIOU, Chih-Tsun HUANG, Juin-Ming LU
-
Publication number: 20210201088
Abstract: An image classification system includes a storage device, a computing device and a first processing device. The storage device stores a plurality of pseudo-centroid datasets, wherein the pseudo-centroid datasets correspond to a plurality of units of first image dataset, and the number of pseudo-centroid data points of each of the pseudo-centroid datasets is much smaller than the number of data points of each of the units of first image dataset. The computing device receives the second image data and computes a plurality of feature values of the second image data. The first processing device receives the feature values and the pseudo-centroid datasets, and compares the feature values with the pseudo-centroid data points to identify and classify the second image data.
Type: Application
Filed: December 30, 2019
Publication date: July 1, 2021
Inventors: Jiung-Yao HUANG, Hsin-Lung WU, Juin-Ming LU
-
Publication number: 20210201118
Abstract: A deep neural network (DNN) hardware accelerator including a processing element array is disclosed. The processing element array includes a plurality of processing element groups, each of the processing element groups including a plurality of processing elements. A first network connection implementation between a first processing element group of the processing element groups and a second processing element group of the processing element groups is different from a second network connection implementation between the processing elements in the first processing element group.
Type: Application
Filed: December 26, 2019
Publication date: July 1, 2021
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Yao-Hua CHEN, Wan-Shan HSIEH, Juin-Ming LU
-
Patent number: 10896276
Abstract: Disclosed are a timing estimation method and a simulator. The method is applied to a function verification model. In the method, the model issues a first access at a first time point; receives a first response to the first access from the bus at a second time point; calculates a delay time between the first and second time points; determines whether the delay time is longer than or substantially equal to a transmission time corresponding to the first access; if so, issues a second access; and if not, issues the second access within a compensation time counted from the second time point. The compensation time is not longer than the difference between the transmission time and the delay time.
Type: Grant
Filed: December 15, 2017
Date of Patent: January 19, 2021
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Mei-Ling Chi, Yao-Hua Chen, Hsun-Lun Huang, Juin-Ming Lu
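The decision rule in this abstract reduces to simple arithmetic on the three time values. A sketch, assuming the compensation time is taken at its stated upper bound (the difference between transmission time and delay):

```python
def next_issue_time(t_issue, t_response, transmission_time):
    """Earliest time the model may issue its second access.

    If the observed bus delay already covers the transmission time,
    issue immediately at the response time; otherwise wait out a
    compensation time equal to the shortfall.
    """
    delay = t_response - t_issue
    if delay >= transmission_time:
        return t_response
    compensation = transmission_time - delay  # the stated upper bound
    return t_response + compensation
```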
-
Publication number: 20200193275Abstract: A DNN hardware accelerator and an operation method of the DNN hardware accelerator are provided. The DNN hardware accelerator includes: a network distributor for receiving an input data and distributing respective bandwidth of a plurality of data types of a target data amount based on a plurality of bandwidth ratios of the target data amount; and a processing element array coupled to the network distributor, for communicating data of the data types of the target data amount between the network distributor based on the distributed bandwidth of the data types.Type: ApplicationFiled: January 15, 2019Publication date: June 18, 2020Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTEInventors: Yao-Hua CHEN, Chun-Chen CHEN, Chih-Tsun HUANG, Jing-Jia LIOU, Chun-Hung LAI, Juin-Ming LU
-
Patent number: 10628627
Abstract: An embodiment of a thermal estimation device including a temperature model generator, a temperature gradient calculator, and a thermal sensing analyzer is disclosed. The temperature model generator generates a temperature model based on an initial power consumption, an initial area, and an initial coordinate of a circuit module. The temperature gradient calculator substitutes at least one of a testing area, a testing power, or a testing coordinate of the circuit module into the temperature model to estimate a corresponding temperature estimation function. The thermal sensing analyzer differentiates the temperature estimation function. When the absolute value of the differential result of the temperature estimation function for a constant is zero or closest to zero, the constant is output as an optimized parameter.
Type: Grant
Filed: December 28, 2017
Date of Patent: April 21, 2020
Assignee: Industrial Technology Research Institute
Inventors: Yeong-Jar Chang, Ya-Ting Shyu, Juin-Ming Lu, Yao-Hua Chen, Yen-Fu Chang, Jai-Ming Lin
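The analyzer's "differentiate and pick the constant where the derivative is zero or closest to zero" step can be sketched with a numerical derivative over a candidate set. The central-difference scheme and candidate list are assumptions for illustration; the patent does not specify how the differentiation is performed.

```python
def optimize_parameter(temp_fn, candidates, h=1e-6):
    """Return the candidate constant whose derivative of the temperature
    estimation function has the smallest absolute value (ideally zero)."""
    def deriv(x):
        # Central-difference numerical derivative.
        return (temp_fn(x + h) - temp_fn(x - h)) / (2 * h)
    return min(candidates, key=lambda c: abs(deriv(c)))
```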
-
Patent number: 10412331
Abstract: A power consumption estimation method is applied to an image with N rows of pixels, and comprises a pixel estimation procedure comprising performing an estimation sub-procedure pixel by pixel for each of a plurality of pixels in one row of the N rows of pixels to obtain a plurality of pixel energy consumption values respectively corresponding to the plurality of pixels in said one row of the N rows, and obtaining a row power consumption value corresponding to said one row of the N rows according to the plurality of pixel energy consumption values. The estimation sub-procedure comprises obtaining pixel content information corresponding to one of the plurality of pixels, and determining the pixel energy consumption value according to the pixel content information. The pixel energy consumption value indicates pixel energy consumption generated by performing a predetermined image processing procedure for said one of the plurality of pixels.
Type: Grant
Filed: December 22, 2017
Date of Patent: September 10, 2019
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Chun Wei Chen, Ming-Der Shieh, Juin-Ming Lu, Hsun-Lun Huang, Yao-Hua Chen
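The per-pixel, per-row accumulation described above is a straightforward double sum. A sketch, assuming row values are simply the sum of their pixels' energies (the abstract does not say how per-pixel values combine, so summation is an assumption) and using a caller-supplied energy model:

```python
def row_power(row_pixels, pixel_energy):
    """Row power value: sum of per-pixel energy estimates for one row.

    pixel_energy maps a pixel's content information (here, its value)
    to an energy cost for the predetermined processing procedure.
    """
    return sum(pixel_energy(p) for p in row_pixels)

def image_power(image, pixel_energy):
    """Apply the row estimate to all N rows of the image."""
    return [row_power(row, pixel_energy) for row in image]
```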
-
Patent number: 10365829
Abstract: A memory transaction-level modeling method and a memory transaction-level modeling system are provided. The memory transaction-level modeling method is used for simulating the operation of outputting at least one command to the memory. The memory includes a plurality of banks, each of which corresponds with a bank status table. The memory transaction-level modeling method includes the following steps: An event is received. Whether one of the bank status tables needs to be updated is determined. If so, that bank status table is recovered according to a TMP queue. A command is outputted to the memory according to a command queue. The outputted command is stored in the TMP queue. Some of the bank status tables are updated and the others are kept unchanged.
Type: Grant
Filed: December 27, 2016
Date of Patent: July 30, 2019
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Yao-Hua Chen, Che-Wei Hsu, Juin-Ming Lu, Wei-Shiang Lin, Jing-Jia Liou, Chih-Tsun Huang
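A minimal software model of the per-bank status tables plus a TMP queue might look as follows. The state tracked (an open-row field) and the recovery policy are assumptions made for the sketch; the patent does not detail the table contents.

```python
from collections import deque

class BankModel:
    """Transaction-level sketch: one status table per bank, plus a TMP
    queue of issued commands used to rewind a table when required."""
    def __init__(self, n_banks):
        self.status = [{"open_row": None} for _ in range(n_banks)]
        self.tmp = deque()  # (bank, previous open_row) per issued command

    def issue(self, bank, row):
        """Output a command: save the bank's prior state, then update
        that bank's table only; the other tables stay unchanged."""
        self.tmp.append((bank, self.status[bank]["open_row"]))
        self.status[bank]["open_row"] = row

    def recover(self, bank):
        """Restore the given bank's table from its most recent TMP entry."""
        for b, old_row in reversed(self.tmp):
            if b == bank:
                self.status[bank]["open_row"] = old_row
                return
```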
-
Publication number: 20190147135
Abstract: An embodiment of a thermal estimation device including a temperature model generator, a temperature gradient calculator, and a thermal sensing analyzer is disclosed. The temperature model generator generates a temperature model based on an initial power consumption, an initial area, and an initial coordinate of a circuit module. The temperature gradient calculator substitutes at least one of a testing area, a testing power, or a testing coordinate of the circuit module into the temperature model to estimate a corresponding temperature estimation function. The thermal sensing analyzer differentiates the temperature estimation function. When the absolute value of the differential result of the temperature estimation function for a constant is zero or closest to zero, the constant is output as an optimized parameter.
Type: Application
Filed: December 28, 2017
Publication date: May 16, 2019
Inventors: Yeong-Jar CHANG, Ya-Ting Shyu, Juin-Ming Lu, Yao-Hua Chen, Yen-Fu Chang, Jai-Ming Lin
-
Patent number: 10268519
Abstract: A scheduling method is provided. The method includes: recording a next instruction and a ready state of each thread group in a scoreboard; determining whether there is any ready thread group whose ready state is affirmative; determining whether a load/store unit is available, wherein the load/store unit is configured to access a data memory unit; when the load/store unit is available, determining whether the ready thread groups include a data access thread group, wherein the next instruction of the data access thread group is related to accessing the data memory unit; selecting a target thread group from the data access thread groups; and dispatching the target thread group to the load/store unit for execution.
Type: Grant
Filed: December 29, 2015
Date of Patent: April 23, 2019
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Heng-Yi Chen, Chung-Ho Chen, Chen-Chieh Wang, Juin-Ming Lu, Chun-Hung Lai, Hsun-Lun Huang
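The scoreboard-driven selection above can be condensed into a small policy function. The scoreboard representation and the first-match tie-break are illustrative assumptions; the patent leaves the selection among data-access groups open.

```python
def schedule(scoreboard, lsu_available):
    """Pick the next thread group per the described policy.

    scoreboard: list of entries with 'group', 'ready', and
    'next_is_mem_access' (whether the next instruction accesses the
    data memory unit). Prefer a ready data-access group when the
    load/store unit is free; otherwise fall back to any ready group.
    """
    ready = [e for e in scoreboard if e["ready"]]
    if not ready:
        return None
    if lsu_available:
        mem = [e for e in ready if e["next_is_mem_access"]]
        if mem:
            return mem[0]["group"]  # dispatch this one to the load/store unit
    return ready[0]["group"]
```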
-
Publication number: 20190068904
Abstract: A power consumption estimation method is applied to an image with N rows of pixels, and comprises a pixel estimation procedure comprising performing an estimation sub-procedure pixel by pixel for each of a plurality of pixels in one row of the N rows of pixels to obtain a plurality of pixel energy consumption values respectively corresponding to the plurality of pixels in said one row of the N rows, and obtaining a row power consumption value corresponding to said one row of the N rows according to the plurality of pixel energy consumption values. The estimation sub-procedure comprises obtaining pixel content information corresponding to one of the plurality of pixels, and determining the pixel energy consumption value according to the pixel content information. The pixel energy consumption value indicates pixel energy consumption generated by performing a predetermined image processing procedure for said one of the plurality of pixels.
Type: Application
Filed: December 22, 2017
Publication date: February 28, 2019
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Chun Wei CHEN, Ming-Der SHIEH, Juin-Ming LU, Hsun-Lun HUANG, Yao-Hua CHEN
-
Publication number: 20180357337
Abstract: Disclosed are a timing estimation method and a simulator. The method is applied to a function verification model. In the method, the model issues a first access at a first time point; receives a first response to the first access from the bus at a second time point; calculates a delay time between the first and second time points; determines whether the delay time is longer than or substantially equal to a transmission time corresponding to the first access; if so, issues a second access; and if not, issues the second access within a compensation time counted from the second time point. The compensation time is not longer than the difference between the transmission time and the delay time.
Type: Application
Filed: December 15, 2017
Publication date: December 13, 2018
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Mei-Ling Chi, Yao-Hua Chen, Hsun-Lun Huang, Juin-Ming Lu
-
Patent number: 9953393
Abstract: An analyzing method and an analyzing system for graphics processing are provided. The analyzing method includes the following steps. A graphics application program is provided and a plurality of graphics parameters of the graphics application program are obtained. The graphics application program is classified into at least one of a plurality of groups according to the graphics parameters. A plurality of weighting coefficients are obtained. A total loading of a graphics processing unit for performing the graphics application program is calculated according to the weighting coefficients and the graphics parameters.
Type: Grant
Filed: December 29, 2015
Date of Patent: April 24, 2018
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Arthur Marmin, Chun-Hung Lai, Hsun-Lun Huang, Juin-Ming Lu
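The final step, computing total GPU loading from weighting coefficients and graphics parameters, reads as a weighted sum. A sketch under that assumption (the parameter names are hypothetical examples):

```python
def total_loading(params, weights):
    """GPU load as a weighted sum of the program's graphics parameters.

    params:  {parameter name: measured value}
    weights: {parameter name: weighting coefficient for its group}
    """
    return sum(weights[name] * value for name, value in params.items())
```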
-
Publication number: 20180074702
Abstract: A memory transaction-level modeling method and a memory transaction-level modeling system are provided. The memory transaction-level modeling method is used for simulating the operation of outputting at least one command to the memory. The memory includes a plurality of banks, each of which corresponds with a bank status table. The memory transaction-level modeling method includes the following steps: An event is received. Whether one of the bank status tables needs to be updated is determined. If so, that bank status table is recovered according to a TMP queue. A command is outputted to the memory according to a command queue. The outputted command is stored in the TMP queue. Some of the bank status tables are updated and the others are kept unchanged.
Type: Application
Filed: December 27, 2016
Publication date: March 15, 2018
Inventors: Yao-Hua Chen, Che-Wei Hsu, Juin-Ming Lu, Wei-Shiang Lin, Jing-Jia Liou, Chih-Tsun Huang
-
Patent number: 9842180
Abstract: A NoC (network-on-chip) timing power estimating method includes: estimating a plurality of transmission timings of a plurality of transmission units of at least a packet, the transmission timings indicating respective time points at which the transmission units enter/leave a plurality of passing elements of the NoC; based on the transmission timings of the transmission units, estimating respective circuit states and respective power states of the passing elements of the NoC, the circuit state indicating an operation state of the passing element and the power state being related to the circuit state; and based on the power states of the passing elements of the NoC, estimating power consumption of the NoC.
Type: Grant
Filed: December 30, 2014
Date of Patent: December 12, 2017
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Ting-Shuo Hsu, Jing-Jia Liou, Jih-Sheng Shen, Juin-Ming Lu
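The three stages above (entry/exit timings → per-element activity states → total power) can be sketched with a simple two-state (active/idle) power model. The two-state simplification and the per-cycle power figures are assumptions for illustration; the patent's circuit and power states may be richer.

```python
def estimate_noc_power(transmissions, active_power, idle_power, total_cycles):
    """Estimate NoC power from entry/exit times of transmission units.

    transmissions: {element: [(t_enter, t_leave), ...]} for each passing
    element (router, link). An element is in the active circuit state
    while a unit occupies it, and idle otherwise.
    """
    total = 0.0
    for element, spans in transmissions.items():
        active = sum(t_leave - t_enter for t_enter, t_leave in spans)
        total += active * active_power + (total_cycles - active) * idle_power
    return total
```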