Patents by Inventor Chia-Yu Chen

Chia-Yu Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12261619
    Abstract: An apparatus includes multiple analog to digital converters. Each analog to digital converter is configured to produce a digital output from an analog input and to compute the least significant bit of that output by comparing its internal residual voltage against a residual voltage from another analog to digital converter. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: March 25, 2025
    Assignee: International Business Machines Corporation
    Inventors: Chia-Yu Chen, Ankur Agrawal, Andrea Fasoli, Kyu-hyoun Kim
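    Illustrative sketch: the residual-comparison mechanism above can be mimicked numerically. What follows is a minimal Python sketch under stated assumptions, not the patented circuit: two idealized 4-bit converters resolve their top three bits conventionally, and each derives its least significant bit by comparing its own residual voltage against the other's. The constants, function names, and comparison rule are all illustrative.

      # Assumption-laden model: the LSB is decided by comparing residual voltages.
      N_BITS = 4          # total resolution per ADC (illustrative)
      V_REF = 1.0         # full-scale reference voltage (illustrative)
      LSB = V_REF / 2**N_BITS

      def coarse_convert(v_in):
          """Quantize to the top N_BITS - 1 bits; return (code, residual)."""
          code = min(int(v_in / (2 * LSB)), 2**(N_BITS - 1) - 1)
          return code, v_in - code * 2 * LSB

      def convert_pair(v_a, v_b):
          """Each ADC's LSB comes from comparing its residual with the other's."""
          code_a, res_a = coarse_convert(v_a)
          code_b, res_b = coarse_convert(v_b)
          lsb_a = 1 if res_a > res_b else 0   # assumed comparison rule
          return (code_a << 1) | lsb_a, (code_b << 1) | (1 - lsb_a)

      print(convert_pair(0.40, 0.37))
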
  • Publication number: 20250077804
    Abstract: A first circuit is configured to split a first integer value into a first coarse value and a first fine value, and to split a second integer value into a second coarse value and a second fine value. A second circuit is configured to perform an analog multiply-and-accumulate (MAC) operation on the first and second coarse values to produce a first analog output, perform an analog MAC operation on the first coarse value and the second fine value to produce a second analog output, perform an analog MAC operation on the first fine value and the second coarse value to produce a third analog output, and perform an analog MAC operation on the first and second fine values to produce a fourth analog output. A third circuit is configured to perform analog-to-digital (A/D) conversion on the analog output signals and combine them to produce a reconstructed digital output signal. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: August 29, 2023
    Publication date: March 6, 2025
    Inventors: Ankur Agrawal, Andrea Fasoli, Monodeep Kar, Kyu-hyoun Kim, Sergey Rylov, Chia-Yu Chen, Xiao Sun
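    Illustrative sketch: the abstract describes ordinary radix splitting of the operands, x = 2^K·x_coarse + x_fine, so the full product is reconstructed from the four partial MACs with power-of-two shifts. A minimal Python sketch, assuming 8-bit operands split into 4-bit halves and modeling each analog MAC as an exact integer dot product:

      import numpy as np

      K = 4                                   # bits in the fine (low) half

      def split(x):
          return x >> K, x & ((1 << K) - 1)   # (coarse, fine)

      def mac(a, b):
          return int(np.dot(a, b))            # stands in for one analog MAC

      x = np.array([200, 17, 96], dtype=np.int64)
      w = np.array([33, 250, 5], dtype=np.int64)
      xc, xf = split(x)
      wc, wf = split(w)

      # The "third circuit": combine the four digitized partial results.
      y = (mac(xc, wc) << (2 * K)) + ((mac(xc, wf) + mac(xf, wc)) << K) + mac(xf, wf)
      assert y == int(np.dot(x, w))
      print(y)
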
  • Publication number: 20250081730
    Abstract: A display may include an array of pixels such as light-emitting diode pixels. The pixels may include multiple circuitry decks that each include one or more circuit components such as transistors, capacitors, and/or resistors. The circuitry decks may be vertically stacked. Each circuitry deck may include a planarization layer formed from a siloxane material that conforms to underlying components and provides a planar upper surface. In this way, circuit components may be vertically stacked to minimize each pixel's footprint. The circuit components may include capacitors having both a high-k dielectric layer and a low-k dielectric layer. The display pixel may include a via with a width of less than 1 micron.
    Type: Application
    Filed: June 26, 2024
    Publication date: March 6, 2025
    Inventors: Andrew Lin, Alper Ozgurluk, Chao Liang Chien, Cheuk Chi Lo, Chia-Yu Chen, Chien-Chung Wang, Chih Pang Chang, Chih-Hung Yu, Chih-Wei Chang, Chin Wei Hsu, ChinWei Hu, Chun-Kai Tzeng, Chun-Ming Tang, Chun-Yao Huang, Hung-Che Ting, Jung Yen Huang, Lungpao Hsin, Shih Chang Chang, Tien-Pei Chou, Wen Sheng Lo, Yu-Wen Liu, Yung Da Lai
  • Publication number: 20250048555
    Abstract: A circuit board module includes a circuit board and a plurality of capacitors. The circuit board has a plurality of standing feet for mounting it on a main board, and the capacitors are fixed symmetrically on a first surface of the circuit board and on a second surface opposite the first surface. An opening is formed in the circuit board and is located between the capacitors. An electronic device adopting the circuit board module is also disclosed.
    Type: Application
    Filed: December 7, 2023
    Publication date: February 6, 2025
    Inventors: Hung-Wen Chueh, Chia-Yu Chen, Yan-Ru Chen
  • Patent number: 12217158
    Abstract: An apparatus includes circuitry for a neural network that is configured to perform forward propagation neural network operations on floating point numbers having a first n-bit floating point format. The first n-bit floating point format has a configuration consisting of a sign bit, m exponent bits and p mantissa bits where m is greater than p. The circuitry is further configured to perform backward propagation neural network operations on floating point numbers having a second n-bit floating point format that is different than the first n-bit floating point format. The second n-bit floating point format has a configuration consisting of a sign bit, q exponent bits and r mantissa bits where q is greater than m and r is less than p. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: February 4, 2025
    Assignee: International Business Machines Corporation
    Inventors: Xiao Sun, Jungwook Choi, Naigang Wang, Chia-Yu Chen, Kailash Gopalakrishnan
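    Illustrative sketch: the constraints above (m greater than p, q greater than m, r less than p) match, for example, a 1-4-3 forward format and a 1-5-2 backward format: the forward pass gets more mantissa precision, the backward pass more exponent range for small gradients. The Python sketch below is a simplified quantizer (no subnormals, NaNs, or saturation handling), and the 1-4-3/1-5-2 choice is an assumption consistent with the abstract, not a statement of the patented formats.

      import math

      def quantize(x, exp_bits, man_bits):
          """Round x to a sign/exponent/mantissa bit budget (simplified)."""
          if x == 0.0:
              return 0.0
          bias = 2**(exp_bits - 1) - 1
          e = max(min(math.floor(math.log2(abs(x))), bias), 1 - bias)
          m = round(abs(x) / 2**e * 2**man_bits) / 2**man_bits
          return math.copysign(m * 2**e, x)

      fwd = lambda x: quantize(x, exp_bits=4, man_bits=3)   # 1-4-3: precision
      bwd = lambda x: quantize(x, exp_bits=5, man_bits=2)   # 1-5-2: range

      print(fwd(0.1), bwd(0.1))       # forward keeps more mantissa precision
      print(fwd(1e-4), bwd(1e-4))     # backward still represents tiny values
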
  • Patent number: 12182274
    Abstract: An adversarial robustness testing method, system, and computer program product include testing, via an accelerator, the robustness of a black-box system under different access settings, where the testing is broken down into subtasks of a predetermined size. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: October 20, 2023
    Date of Patent: December 31, 2024
    Assignee: International Business Machines Corporation
    Inventors: Pin-Yu Chen, Sijia Liu, Lingfei Wu, Chia-Yu Chen
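    Illustrative sketch: the abstract is deliberately high-level; one plausible reading is that a large batch of robustness probes is partitioned into subtasks of a fixed size, each small enough to dispatch to an accelerator. The Python sketch below follows that reading; the toy model, the probes, and the run_on_accelerator placeholder are all assumptions, not the patented method.

      SUBTASK_SIZE = 64   # the "predetermined size"

      def run_on_accelerator(model, probe_batch):
          # Placeholder for accelerated evaluation of the black box.
          return [model(p) for p in probe_batch]

      def robustness_test(model, probes):
          results = []
          for i in range(0, len(probes), SUBTASK_SIZE):
              results.extend(run_on_accelerator(model, probes[i:i + SUBTASK_SIZE]))
          return results

      # Toy black box: a probe "passes" while it stays below a threshold.
      model = lambda x: x < 0.5
      probes = [i / 1000 for i in range(1000)]
      print(sum(robustness_test(model, probes)), "of", len(probes), "probes pass")
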
  • Patent number: 12175359
    Abstract: An apparatus for training and inferencing a neural network includes circuitry that is configured to generate a first weight having a first format including a first number of bits based at least in part on a second weight having a second format including a second number of bits and a residual having a third format including a third number of bits. The second number of bits and the third number of bits are each less than the first number of bits. The circuitry is further configured to update the second weight based at least in part on the first weight and to update the residual based at least in part on the updated second weight and the first weight. The circuitry is further configured to update the first weight based at least in part on the updated second weight and the updated residual. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: December 24, 2024
    Assignee: International Business Machines Corporation
    Inventors: Xiao Sun, Jungwook Choi, Naigang Wang, Chia-Yu Chen, Kailash Gopalakrishnan
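    Illustrative sketch: the decomposition above keeps a wide weight as the sum of a narrow weight and a narrow residual. A minimal Python sketch, using float64 as the wide "first" format and a crude fixed-point rounder for the two narrow formats; the bit widths and the rounding scheme are illustrative assumptions.

      import numpy as np

      def to_narrow(x, frac_bits):
          """Round to a fixed-point format with frac_bits fraction bits."""
          scale = 2.0**frac_bits
          return np.round(x * scale) / scale

      w_first = np.array([0.123456, -0.654321, 0.987654])   # wide weight

      # The second weight is a narrow rounding; the residual captures the rest.
      w_second = to_narrow(w_first, frac_bits=4)
      residual = to_narrow(w_first - w_second, frac_bits=10)

      # The wide weight is regenerated from the two narrow pieces.
      w_rebuilt = w_second + residual
      print(np.max(np.abs(w_first - w_rebuilt)))   # small reconstruction error
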
  • Patent number: 12141513
    Abstract: A method for improving performance of a predefined Deep Neural Network (DNN) convolution processing on a computing device includes inputting parameters into a processor that formalizes a design-space exploration of a convolution mapping on a predefined computer architecture that will execute the convolution processing. The parameters are predefined as guided by a specification for the convolution processing to be implemented by the mapping and by a microarchitectural specification for the processor. The processor calculates performance metrics for executing the convolution processing on the computing device as functions of the predefined parameters, serving as proxy estimates of the performance of different possible design choices. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: November 12, 2024
    Assignee: International Business Machines Corporation
    Inventors: Chia-Yu Chen, Jungwook Choi, Kailash Gopalakrishnan, Vijayalakshmi Srinivasan, Swagath Venkataramani, Jintao Zhang
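    Illustrative sketch: proxy performance metrics for a convolution layer can be computed analytically from its parameters, which is the flavor of exploration the abstract describes. The Python sketch below uses standard convolution cost formulas (stride 1, no padding) as stand-ins; the parameter names and the metrics chosen are assumptions, not the patented mapping procedure.

      def conv_proxy_metrics(n, c, k, h, w, r, s, bytes_per_elem=2):
          """n=batch, c=in channels, k=out channels, h/w=output size, r/s=kernel."""
          macs = n * k * c * h * w * r * s
          input_bytes = n * c * (h + r - 1) * (w + s - 1) * bytes_per_elem
          weight_bytes = k * c * r * s * bytes_per_elem
          output_bytes = n * k * h * w * bytes_per_elem
          moved = input_bytes + weight_bytes + output_bytes
          return {"macs": macs, "bytes_moved": moved,
                  "arithmetic_intensity": macs / moved}

      # Compare two candidate layer shapes as a design-space explorer might.
      for cfg in [(1, 64, 64, 56, 56, 3, 3), (1, 256, 256, 14, 14, 3, 3)]:
          print(cfg, conv_proxy_metrics(*cfg))
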
  • Patent number: 12061991
    Abstract: Transfer learning in machine learning can include receiving a machine learning model. Target domain training data for reprogramming the machine learning model using transfer learning can be received. The target domain training data can be transformed by performing a transformation function on the target domain training data. Output labels of the machine learning model can be mapped to target labels associated with the target domain training data. The transformation function can be trained by optimizing a parameter of the transformation function. The machine learning model can be reprogrammed based on input data transformed by the transformation function and a mapping of the output labels to target labels. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: August 13, 2024
    Assignees: International Business Machines Corporation, National Tsing Hua University
    Inventors: Pin-Yu Chen, Sijia Liu, Chia-Yu Chen, I-Hsin Chung, Tsung-Yi Ho, Yun-Yun Tsai
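    Illustrative sketch: model reprogramming reuses a frozen source model on a new domain through a trainable input transformation plus a mapping from source output labels to target labels. The Python sketch below shows the inference path only (optimization of the transformation parameter is omitted); the linear "model", the additive transformation, and the label map are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(size=(10, 16))        # frozen source model: 10 classes

      def source_model(x):
          return W @ x                     # logits over the 10 source labels

      theta = np.zeros(16)                 # trainable transformation parameter

      def transform(x):
          return x + theta                 # simplest possible input "program"

      label_map = {0: 0, 1: 0, 2: 0, 3: 0, 4: 0,   # many-to-one mapping of 10
                   5: 1, 6: 1, 7: 1, 8: 1, 9: 1}   # source labels onto 2 targets

      def reprogrammed_predict(x_target):
          logits = source_model(transform(x_target))
          return label_map[int(np.argmax(logits))]

      print(reprogrammed_predict(rng.normal(size=16)))
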
  • Publication number: 20240204792
    Abstract: An apparatus includes multiple analog to digital converters. Each analog to digital converter is configured to produce a digital output from an analog input and to compute the least significant bit of that output by comparing its internal residual voltage against a residual voltage from another analog to digital converter.
    Type: Application
    Filed: December 20, 2022
    Publication date: June 20, 2024
    Inventors: Chia-Yu Chen, Ankur Agrawal, Andrea Fasoli, Kyu-hyoun Kim
  • Publication number: 20240176584
    Abstract: An apparatus comprising: a first plurality of inputs representing an activation input vector; a second plurality of inputs representing a weight input vector; an analog multiplier-and-accumulator to generate a first analog voltage representing a first multiply-and-accumulate result for the first inputs and the second inputs; a voltage multiplier that takes the first analog voltage and produces a second analog voltage representing a second multiply-and-accumulate result by multiplying the first analog voltage by at least one scaling factor; an analog to digital converter configured to convert the second analog voltage into a digital signal using a limited-precision operation during a neural network inference operation; and a hardware controller configured to determine the at least one scaling factor based on the first multiply-and-accumulate result, or a software controller configured to determine the at least one scaling factor based on the first multiply-and-accumulate result. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: November 29, 2022
    Publication date: May 30, 2024
    Inventors: Chia-Yu Chen, Andrea Fasoli, Ankur Agrawal, Kyu-hyoun Kim, Chi-Chun Liu, Mauricio J. Serrano, Monodeep Kar, Naigang Wang, Leland Chang
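    Illustrative sketch: the point of the scaling factor is to stretch a small analog MAC result so it fills the range of a limited-precision ADC, then undo the scaling digitally. The Python sketch below models this flow with an idealized 4-bit ADC; the controller policy and every constant are assumptions, not the patented circuit.

      import numpy as np

      ADC_BITS = 4
      V_FULL = 1.0

      def adc(v):
          """Idealized limited-precision ADC over [0, V_FULL)."""
          return int(np.clip(v / V_FULL * 2**ADC_BITS, 0, 2**ADC_BITS - 1))

      def choose_scale(first_result):
          """Controller: pick a power of two that lands near full scale."""
          k = 0
          while first_result * 2**(k + 1) < V_FULL and k < 8:
              k += 1
          return 2**k

      x = np.array([0.01, 0.02, 0.005])
      w = np.array([0.5, 0.25, 1.0])
      v1 = float(np.dot(x, w))         # first analog MAC result (small)

      scale = choose_scale(v1)         # controller decision from the first result
      code = adc(v1 * scale)           # digitize the scaled second voltage
      recovered = code / 2**ADC_BITS * V_FULL / scale
      print(v1, scale, code, recovered)
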
  • Patent number: 11977974
    Abstract: A system, having a memory that stores computer executable components and a processor that executes them, reduces data size in connection with training a neural network by exploiting spatial locality in weight matrices and effecting frequency transformation and compression. A receiving component receives neural network data in the form of a compressed frequency-domain weight matrix. A segmentation component segments the initial weight matrix into original sub-components, wherein respective original sub-components have spatial weights. A sampling component applies a generalized weight distribution to the respective original sub-components to generate respective normalized sub-components. A transform component applies a transform to the respective normalized sub-components. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: May 7, 2024
    Assignee: International Business Machines Corporation
    Inventors: Chia-Yu Chen, Jungwook Choi, Kailash Gopalakrishnan, Suyog Gupta, Pritish Narayanan
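    Illustrative sketch: the pipeline above segments a weight matrix into blocks, moves each block into the frequency domain, and compresses there. The Python sketch below uses a 2-D FFT and simple magnitude thresholding as stand-ins; the block size, transform, and keep ratio are illustrative, and the patent's sampling/normalization steps are omitted.

      import numpy as np

      rng = np.random.default_rng(1)
      W = rng.normal(size=(8, 8))
      BLOCK, KEEP = 4, 0.25            # 4x4 blocks, keep top 25% of coefficients

      def compress_block(b):
          F = np.fft.fft2(b)
          thresh = np.quantile(np.abs(F), 1 - KEEP)
          return np.where(np.abs(F) >= thresh, F, 0)   # sparse frequency block

      blocks = [compress_block(W[i:i + BLOCK, j:j + BLOCK])
                for i in range(0, 8, BLOCK) for j in range(0, 8, BLOCK)]

      # Reconstruct one block to inspect the approximation error.
      approx = np.real(np.fft.ifft2(blocks[0]))
      print(np.linalg.norm(W[:BLOCK, :BLOCK] - approx))
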
  • Patent number: 11941821
    Abstract: An image sleep analysis method and system are disclosed. During sleep, a plurality of visible-light images of a body are obtained. Positions of image differences are determined by comparing the visible-light images. A plurality of features of the visible-light images are identified and the positions of the features are determined. From the positions of the image differences and the features, the motion intensities of the features are determined. The variation of the motion intensities is then analyzed and recorded to provide an accurate measure of sleep quality. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: March 26, 2024
    Assignee: YUN YUN AI BABY CAMERA CO., LTD.
    Inventors: Bo-Zong Wu, Meng-Ta Chiang, Chia-Yu Chen, Shih-Yun Shen
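    Illustrative sketch: the core signal is frame differencing: positions where consecutive images differ mark motion, and summing the differences gives a per-frame motion intensity whose variation over the night is the record of interest. The Python sketch below uses synthetic frames and an arbitrary threshold; the feature identification and tracking steps of the abstract are omitted.

      import numpy as np

      rng = np.random.default_rng(2)
      frames = [rng.integers(0, 255, size=(48, 64)).astype(np.int16)]
      for _ in range(9):                       # synthesize a short clip
          nxt = frames[-1].copy()
          nxt[20:28, 30:38] += rng.integers(-40, 40, size=(8, 8))
          frames.append(nxt)

      THRESH = 15
      intensities = []
      for prev, cur in zip(frames, frames[1:]):
          diff = np.abs(cur - prev)
          moving = diff > THRESH               # positions of image differences
          intensities.append(float(diff[moving].sum()))

      print(intensities)                       # its variation tracks restlessness
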
  • Publication number: 20240045974
    Abstract: An adversarial robustness testing method, system, and computer program product include testing, via an accelerator, the robustness of a black-box system under different access settings, where the testing is broken down into subtasks of a predetermined size.
    Type: Application
    Filed: October 20, 2023
    Publication date: February 8, 2024
    Inventors: Pin-Yu Chen, Sijia Liu, Lingfei Wu, Chia-Yu Chen
  • Patent number: 11853713
    Abstract: Techniques that facilitate graph similarity analytics are provided. In one example, a system includes an information component and a similarity component. The information component generates a first information index indicative of a first entropy measure for a first graph-structured dataset associated with a machine learning system. The information component also generates a second information index indicative of a second entropy measure for a second graph-structured dataset associated with the machine learning system. The similarity component determines similarity between the first graph-structured dataset and the second graph-structured dataset based on a graph similarity computation associated with the first information index and the second information index. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: December 26, 2023
    Assignee: International Business Machines Corporation
    Inventors: Pin-Yu Chen, Lingfei Wu, Chia-Yu Chen, Yada Zhu
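    Illustrative sketch: the abstract leaves the entropy measure unspecified; one simple instantiation is the Shannon entropy of a graph's degree distribution, with similarity derived from the gap between the two indices. The Python sketch below uses that stand-in; it is an assumption, not the patented index.

      import numpy as np

      def degree_entropy(adj):
          """Shannon entropy of the degree distribution (illustrative index)."""
          _, counts = np.unique(adj.sum(axis=1), return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def similarity(adj_a, adj_b):
          ia, ib = degree_entropy(adj_a), degree_entropy(adj_b)
          return 1.0 / (1.0 + abs(ia - ib))    # 1.0 when the indices match

      ring = np.roll(np.eye(6, dtype=int), 1, axis=1)
      ring = ring + ring.T                     # 6-cycle: every degree equals 2
      star = np.zeros((6, 6), dtype=int)
      star[0, 1:] = star[1:, 0] = 1            # star: two distinct degrees

      print(similarity(ring, ring), similarity(ring, star))
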
  • Patent number: 11836256
    Abstract: An adversarial robustness testing method, system, and computer program product include testing the robustness of a black-box system under different access settings via an accelerator.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: December 5, 2023
    Assignee: International Business Machines Corporation
    Inventors: Pin-Yu Chen, Sijia Liu, Lingfei Wu, Chia-Yu Chen
  • Patent number: 11816549
    Abstract: Systems, computer-implemented methods, and computer program products to facilitate gradient weight compression are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a pointer component that can identify one or more compressed gradient weights not present in a first concatenated compressed gradient weight. The computer executable components can further comprise a compression component that can compute a second concatenated compressed gradient weight based on the one or more compressed gradient weights to update a weight of a learning entity of a machine learning system. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: November 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Wei Zhang, Chia-Yu Chen
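    Illustrative sketch: one loose reading of the abstract is that gradients are compressed to their largest entries and stored as a concatenation of (index, value) pairs, with the pointer step finding entries of a new compressed gradient that the first concatenation did not carry. The Python sketch below follows that reading; the top-k compressor and all data structures are assumptions, not the patented scheme.

      import numpy as np

      K = 3

      def compress(grad):
          idx = np.argsort(np.abs(grad))[-K:]          # top-k magnitudes
          return list(zip(idx.tolist(), grad[idx].tolist()))

      g1 = np.array([0.9, -0.1, 0.05, 0.8, -0.7, 0.02])
      g2 = np.array([0.1, -0.85, 0.6, 0.05, -0.75, 0.9])

      first = compress(g1)                             # first concatenation
      carried = {i for i, _ in first}

      # Pointer component: entries of g2's compression absent from the first.
      missing = [(i, v) for i, v in compress(g2) if i not in carried]

      # Compression component: the second concatenation adds them.
      second = first + missing
      print(sorted(second))
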
  • Patent number: 11797851
    Abstract: A Static Random Access Memory (SRAM) device in a binary neural network is provided. The SRAM device includes an SRAM inference engine having an SRAM computation architecture with a forward path that includes multiple SRAM cells forming a chain, such that the output of a given SRAM cell is the input to the following one. The SRAM computation architecture is configured to compute a prediction from an input, to store ternary data, and to perform local computations on the ternary data. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: October 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Chia-Yu Chen, Jui-Hsin Lai, Ko-Tao Lee, Li-Wen Hung
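    Illustrative sketch: the chained dataflow above can be modeled digitally: each "cell" holds a ternary weight, performs a local multiply-accumulate on its input, and passes the partial sum to the next cell; the sign of the final sum stands in for the prediction. The Python sketch below is that model only, not the SRAM circuit itself.

      weights = [1, -1, 0, 1, -1, 1]      # ternary data stored in the cells
      inputs = [1, 1, 0, 1, 0, 1]         # binary activations

      partial = 0
      for w, x in zip(weights, inputs):   # each cell's output feeds the next
          partial += w * x                # local computation inside one cell

      prediction = 1 if partial >= 0 else -1
      print(partial, prediction)
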
  • Publication number: 20230121677
    Abstract: A Static Random Access Memory (SRAM) device in a binary neural network is provided. The SRAM device includes an SRAM inference engine having an SRAM computation architecture with a forward path that includes multiple SRAM cells forming a chain, such that the output of a given SRAM cell is the input to the following one. The SRAM computation architecture is configured to compute a prediction from an input, to store ternary data, and to perform local computations on the ternary data.
    Type: Application
    Filed: December 20, 2022
    Publication date: April 20, 2023
    Inventors: Chia-Yu Chen, Jui-Hsin Lai, Ko-Tao Lee, Li-Wen Hung
  • Patent number: D1000922
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: October 10, 2023
    Inventor: Chia-Yu Chen