Patents by Inventor Jae-sun Seo
Jae-sun Seo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210327474
Abstract: In some embodiments, an in-memory-computing SRAM macro based on capacitive-coupling computing (C3) (which is referred to herein as "C3SRAM") is provided. In some embodiments, a C3SRAM macro can support array-level fully parallel computation, multi-bit outputs, and configurable multi-bit inputs. The macro can include circuits embedded in bitcells and peripherals to perform hardware acceleration for neural networks with binarized weights and activations in some embodiments. In some embodiments, the macro utilizes analog-mixed-signal capacitive-coupling computing to evaluate the main computations of binary neural networks: binary multiply-and-accumulate operations. Without needing to access the stored weights by individual row, the macro can assert all of its rows simultaneously and form an analog voltage at the read bitline node through capacitive voltage division, in some embodiments.
Type: Application
Filed: June 23, 2021
Publication date: October 21, 2021
Inventors: Mingoo Seok, Zhewei Jiang, Jae-sun Seo, Shihui Yin
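The binary multiply-and-accumulate described in this abstract can be modeled in a few lines. The sketch below is a hypothetical behavioral model, not the circuit: each bitcell capacitively couples its per-cell result onto the shared read bitline, which settles to a capacitive voltage division of the contributions. All function names and the 1.0 V supply are illustrative assumptions.

```python
def c3sram_bitline_voltage(weights, inputs, vdd=1.0):
    """Model one column's read-bitline voltage.

    weights, inputs: equal-length lists of +1/-1 values (one per row).
    A bitcell drives its coupling capacitor toward VDD when
    weight * input is +1, else toward 0 V; with equal capacitors the
    bitline averages the per-cell contributions (voltage division).
    """
    assert len(weights) == len(inputs)
    matches = sum(1 for w, x in zip(weights, inputs) if w * x == 1)
    return vdd * matches / len(weights)

def bitline_to_mac(voltage, n_rows, vdd=1.0):
    """Recover the signed MAC value (an ADC's job in hardware)."""
    matches = round(voltage / vdd * n_rows)
    return 2 * matches - n_rows

# All rows asserted simultaneously; 3 of the 4 products are +1.
v = c3sram_bitline_voltage([1, -1, 1, 1], [1, -1, -1, 1])
mac = bitline_to_mac(v, 4)
```

Note how the row-parallel evaluation falls out of the model: the MAC is read from a single settled voltage rather than from a row-by-row accumulation.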
-
Patent number: 11074498
Abstract: An information processing system, which includes a control system and an artificial neural network, is disclosed. The artificial neural network includes a group of neurons and a group of synapses, which includes a first portion and a second portion. The control system selects one of a group of operating modes. The group of neurons processes information. The group of synapses provides connectivity to each of the group of neurons. During a first operating mode of the group of operating modes, the first portion of the group of synapses is enabled and the second portion of the group of synapses is enabled. During a second operating mode of the group of operating modes, the first portion of the group of synapses is enabled and the second portion of the group of synapses is disabled.
Type: Grant
Filed: April 13, 2017
Date of Patent: July 27, 2021
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventor: Jae-sun Seo
-
Publication number: 20210082502
Abstract: A resistive random-access memory (RRAM) system includes an RRAM cell. The RRAM cell includes a first select line and a second select line, a word line, a bit line, a first resistive memory device, a first switching device, a second resistive memory device, a second switching device, and a comparator. The first resistive memory device is coupled between a first access node and the bit line. The first switching device is coupled between the first select line and the first access node. The second resistive memory device is coupled between a second access node and the bit line. The second switching device is coupled between the second select line and the second access node. The comparator includes a first input coupled to the bit line, a second input, and an output.
Type: Application
Filed: July 6, 2020
Publication date: March 18, 2021
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Shimeng Yu
-
Patent number: 10827952
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for: obtaining an electrocardiographic (ECG) signal of a user; obtaining a feature vector of the ECG signal of the user with neural-network-based feature extraction; comparing the feature vector of the ECG signal with a stored feature vector of a registered user; and authenticating the user in response to determining that a similarity of the feature vector of the ECG signal and the stored feature vector of the registered user exceeds a pre-defined threshold value.
Type: Grant
Filed: April 28, 2017
Date of Patent: November 10, 2020
Assignee: Arizona Board of Regents on behalf of Arizona State University
Inventors: Shihui Yin, Jae-sun Seo, Sang Joon Kim, Chisung Bae
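The authentication decision in this abstract reduces to a similarity comparison against a threshold. The sketch below is a minimal illustration, assuming cosine similarity as the metric and a 0.9 threshold; the patent does not specify either, so both are assumptions, as are the function names.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(ecg_features, enrolled_features, threshold=0.9):
    """Accept the user only if the fresh ECG feature vector is close
    enough to the registered user's stored feature vector."""
    return cosine_similarity(ecg_features, enrolled_features) > threshold
```

In this scheme the neural network produces `ecg_features` from the raw ECG signal; only the comparison step is shown here.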
-
Publication number: 20200349247
Abstract: A smart hardware security engine using biometric features and hardware-specific features is provided. The smart security engine can combine one or more entropy sources, including individually distinguishable biometric features, and hardware-specific features to perform secret key generation for user registration and authentication. Such hybrid signatures may be distinct from person-to-person (e.g., due to the biometric features) and from device-to-device (e.g., due to the hardware-specific features) while varying over time. Thus, embodiments described herein can be used for personal device authentication as well as secret random key generation, significantly reducing the scope of an attack.
Type: Application
Filed: May 1, 2020
Publication date: November 5, 2020
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Shihui Yin, Sai Kiran Cherupally
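To make the "combine entropy sources" idea concrete, here is a deliberately simplified sketch: quantized biometric features and a hardware-specific identifier (e.g., from a physically unclonable function) are concatenated and hashed into a key. The patent's actual key-generation scheme is not described in the abstract; the SHA-256 choice and every name below are illustrative assumptions only.

```python
import hashlib

def derive_key(biometric_features, hardware_id: bytes) -> bytes:
    """Hash person-specific and device-specific entropy into one key.

    biometric_features: small integers (0-255) quantized from the
    biometric signal; hardware_id: device-unique bits.
    """
    material = bytes(biometric_features) + hardware_id
    return hashlib.sha256(material).digest()

key = derive_key([12, 200, 7, 55], b"device-unique-bits")
```

The hybrid property follows directly: changing either the person (the features) or the device (the identifier) changes the derived key.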
-
Patent number: 10810487
Abstract: A reconfigurable neural network circuit is provided. The reconfigurable neural network circuit comprises an electronic synapse array including multiple synapses interconnecting a plurality of digital electronic neurons. Each neuron comprises an integrator that integrates input spikes and generates a signal when the integrated inputs exceed a threshold. The circuit further comprises a control module for reconfiguring the synapse array. The control module comprises a global final state machine that controls timing for operation of the circuit, and a priority encoder that allows spiking neurons to sequentially access the synapse array.
Type: Grant
Filed: August 22, 2016
Date of Patent: October 20, 2020
Assignee: International Business Machines Corporation
Inventors: Bernard V. Brezzo, Leland Chang, Steven K. Esser, Daniel J. Friedman, Yong Liu, Dharmendra S. Modha, Robert K. Montoye, Bipin Rajendran, Jae-sun Seo, Jose A. Tierno
-
Patent number: 10706923
Abstract: A resistive random-access memory (RRAM) system includes an RRAM cell. The RRAM cell includes a first select line and a second select line, a word line, a bit line, a first resistive memory device, a first switching device, a second resistive memory device, a second switching device, and a comparator. The first resistive memory device is coupled between a first access node and the bit line. The first switching device is coupled between the first select line and the first access node. The second resistive memory device is coupled between a second access node and the bit line. The second switching device is coupled between the second select line and the second access node. The comparator includes a first input coupled to the bit line, a second input, and an output.
Type: Grant
Filed: September 10, 2018
Date of Patent: July 7, 2020
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Jae-sun Seo, Shimeng Yu
-
Patent number: 10628732
Abstract: A reconfigurable neural network circuit is provided. The reconfigurable neural network circuit comprises an electronic synapse array including multiple synapses interconnecting a plurality of digital electronic neurons. Each neuron comprises an integrator that integrates input spikes and generates a signal when the integrated inputs exceed a threshold. The circuit further comprises a control module for reconfiguring the synapse array. The control module comprises a global final state machine that controls timing for operation of the circuit, and a priority encoder that allows spiking neurons to sequentially access the synapse array.
Type: Grant
Filed: June 14, 2016
Date of Patent: April 21, 2020
Assignee: International Business Machines Corporation
Inventors: Bernard V. Brezzo, Leland Chang, Steven K. Esser, Daniel J. Friedman, Yong Liu, Dharmendra S. Modha, Robert K. Montoye, Bipin Rajendran, Jae-sun Seo, Jose A. Tierno
-
Patent number: 10614798
Abstract: Aspects disclosed in the detailed description include memory compression in a deep neural network (DNN). To support a DNN application, a fully connected weight matrix associated with a hidden layer(s) of the DNN is divided into a plurality of weight blocks to generate a weight block matrix with a first number of rows and a second number of columns. A selected number of weight blocks are randomly designated as active weight blocks in each of the first number of rows and updated exclusively during DNN training. The weight block matrix is compressed to generate a sparsified weight block matrix including exclusively active weight blocks. The second number of columns is compressed to reduce memory footprint and computation power, while the first number of rows is retained to maintain accuracy of the DNN, thus providing the DNN in an efficient hardware implementation without sacrificing accuracy of the DNN application.
Type: Grant
Filed: July 27, 2017
Date of Patent: April 7, 2020
Assignee: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Deepak Kadetotad, Sairam Arunachalam, Chaitali Chakrabarti
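The block-wise sparsification this abstract describes can be sketched directly: tile the weight matrix into blocks, randomly mark a fixed number of blocks per block-row as active, and store (and train) only those. The sketch below is illustrative; block sizes, counts, and names are assumptions, and real training details are omitted.

```python
import random

def make_block_mask(n_block_rows, n_block_cols, active_per_row, seed=0):
    """Randomly designate `active_per_row` active blocks in each
    block-row of the weight block matrix."""
    rng = random.Random(seed)
    mask = []
    for _ in range(n_block_rows):
        active = set(rng.sample(range(n_block_cols), active_per_row))
        mask.append([col in active for col in range(n_block_cols)])
    return mask

def compressed_size(mask, block_elems):
    """Elements stored after sparsification: active blocks only."""
    return sum(row.count(True) for row in mask) * block_elems

# 8x8 block matrix, 2 of 8 blocks active per row -> 4x column compression.
mask = make_block_mask(n_block_rows=8, n_block_cols=8, active_per_row=2)
```

Keeping the same number of active blocks in every row is what preserves the row dimension (and hence accuracy) while shrinking the column dimension, per the abstract.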
-
Patent number: 10482372
Abstract: Systems and methods for an interconnection scheme for reconfigurable neuromorphic hardware are disclosed. A neuromorphic processor may include a plurality of corelets; each corelet may include a plurality of synapse arrays and a neuron array. Each synapse array may include a plurality of synapses and a synapse array router coupled to synapse outputs in a synapse array. Each synapse may include a synapse input, a synapse output, and a synapse memory. A neuron array may include a plurality of neurons; each neuron may include a neuron input and a neuron output. Each synapse array router may include a first logic to route one or more of the synapse outputs to one or more of the neuron inputs.
Type: Grant
Filed: December 23, 2015
Date of Patent: November 19, 2019
Assignee: Intel Corporation
Inventors: Gregory K. Chen, Jae-Sun Seo
-
Publication number: 20190244100
Abstract: Techniques are described for efficiently reducing the amount of total computation in convolutional neural networks (CNNs) without affecting the output result or classification accuracy. Computation redundancy in CNNs is reduced by exploiting the computing nature of the convolution and subsequent pooling (e.g., sub-sampling) operations. In some implementations, the input features may be divided into a group of precision values and the operation(s) may be cascaded. A maximum may be identified (e.g., with 90% probability) using a small number of bits in the input features, and the full-precision convolution may then be performed on the maximum input. Accordingly, the total number of bits used to perform the convolution is reduced without affecting the output features or the final classification accuracy.
Type: Application
Filed: September 21, 2017
Publication date: August 8, 2019
Inventors: Jae-sun Seo, Minkyu Kim
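The cascaded-precision idea above can be illustrated with integer truncation: compare the candidates in a pooling window using only a few most significant bits, then spend full precision only on the likely winner. This is a sketch under assumed parameters (8-bit values, 3 MSBs for the cheap stage), not the patent's exact procedure.

```python
def msb_truncate(value, keep_bits, total_bits=8):
    """Keep only the `keep_bits` most significant bits of an unsigned
    `total_bits`-wide integer; the rest are zeroed."""
    mask = ~((1 << (total_bits - keep_bits)) - 1)
    return value & mask

def cascaded_max(values, keep_bits=3, total_bits=8):
    """Two-stage max over a pooling window of candidate values."""
    # Stage 1: cheap low-precision comparison. This can occasionally
    # mispick when candidates share the same MSBs, which is why the
    # abstract hedges with "e.g., by 90% probability".
    approx = [msb_truncate(v, keep_bits, total_bits) for v in values]
    winner = max(range(len(values)), key=lambda i: approx[i])
    # Stage 2: full precision is spent only on the selected input.
    return values[winner]
```

In a real CNN, stage 2 would be the full-precision convolution for the winning position; here it simply returns the full-precision value to keep the sketch short.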
-
Publication number: 20190228289
Abstract: Embodiments of the invention relate to a time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network. One embodiment comprises maintaining neuron attributes for multiple neurons and maintaining incoming firing events for different time steps. For each time step, incoming firing events for said time step are integrated in a time-division multiplexing manner. Incoming firing events are integrated based on the neuron attributes maintained. For each time step, the neuron attributes maintained are updated in parallel based on the integrated incoming firing events for said time step.
Type: Application
Filed: March 29, 2019
Publication date: July 25, 2019
Inventors: John V. Arthur, Bernard V. Brezzo, Leland Chang, Daniel J. Friedman, Paul A. Merolla, Dharmendra S. Modha, Robert K. Montoye, Jae-sun Seo, Jose A. Tierno
-
Patent number: 10360496
Abstract: An apparatus and method are described for a neuromorphic processor design in which neuron timing information is duplicated on a neuromorphic core.
Type: Grant
Filed: April 1, 2016
Date of Patent: July 23, 2019
Assignee: Intel Corporation
Inventors: Gregory K. Chen, Jae-Sun Seo, Thomas C. Chen, Raghavan Kumar
-
Patent number: 10331998
Abstract: Embodiments of the invention relate to a time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network. One embodiment comprises maintaining neuron attributes for multiple neurons and maintaining incoming firing events for different time steps. For each time step, incoming firing events for said time step are integrated in a time-division multiplexing manner. Incoming firing events are integrated based on the neuron attributes maintained. For each time step, the neuron attributes maintained are updated in parallel based on the integrated incoming firing events for said time step.
Type: Grant
Filed: December 8, 2015
Date of Patent: June 25, 2019
Assignee: International Business Machines Corporation
Inventors: John V. Arthur, Bernard V. Brezzo, Leland Chang, Daniel J. Friedman, Paul A. Merolla, Dharmendra S. Modha, Robert K. Montoye, Jae-sun Seo, Jose A. Tierno
-
Publication number: 20190164538
Abstract: Aspects disclosed in the detailed description include memory compression in a deep neural network (DNN). To support a DNN application, a fully connected weight matrix associated with a hidden layer(s) of the DNN is divided into a plurality of weight blocks to generate a weight block matrix with a first number of rows and a second number of columns. A selected number of weight blocks are randomly designated as active weight blocks in each of the first number of rows and updated exclusively during DNN training. The weight block matrix is compressed to generate a sparsified weight block matrix including exclusively active weight blocks. The second number of columns is compressed to reduce memory footprint and computation power, while the first number of rows is retained to maintain accuracy of the DNN, thus providing the DNN in an efficient hardware implementation without sacrificing accuracy of the DNN application.
Type: Application
Filed: July 27, 2017
Publication date: May 30, 2019
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Deepak Kadetotad, Sairam Arunachalam, Chaitali Chakrabarti
-
Publication number: 20190150794
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for: obtaining an electrocardiographic (ECG) signal of a user; obtaining a feature vector of the ECG signal of the user with neural-network-based feature extraction; comparing the feature vector of the ECG signal with a stored feature vector of a registered user; and authenticating the user in response to determining that a similarity of the feature vector of the ECG signal and the stored feature vector of the registered user exceeds a pre-defined threshold value.
Type: Application
Filed: April 28, 2017
Publication date: May 23, 2019
Inventors: Sarma Vrudhula, Shihui Yin, Jae-sun Seo, Sang Joon Kim, Chisung Bae
-
Publication number: 20190087719
Abstract: A static random-access memory (SRAM) system includes SRAM cells configured to perform exclusive NOR operations between a stored binary weight value and a provided binary input value. In some embodiments, SRAM cells are configured to perform exclusive NOR operations between a stored binary weight value and a provided ternary input value. The SRAM cells are suitable for the efficient implementation of emerging deep neural network technologies such as binary neural networks and XNOR neural networks.
Type: Application
Filed: September 21, 2018
Publication date: March 21, 2019
Inventors: Jae-sun Seo, Shihui Yin, Zhewei Jiang, Mingoo Seok
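Why XNOR is the right bitcell operation for binary neural networks: with weights and activations encoded as {0 → -1, 1 → +1}, binary multiplication is exactly XNOR of the stored and input bits, and a column's multiply-and-accumulate is a popcount remapped to a signed sum. The sketch below illustrates this encoding argument in software; it is not a model of the SRAM circuit itself.

```python
def xnor_mac(weight_bits, input_bits):
    """Signed MAC over binary {0 -> -1, 1 -> +1} encoded operands.

    weight_bits, input_bits: equal-length lists of 0/1.
    """
    n = len(weight_bits)
    # Per-bitcell XNOR: true exactly when the +/-1 product is +1.
    popcount = sum(1 for w, x in zip(weight_bits, input_bits)
                   if not (w ^ x))
    # Each match contributes +1, each mismatch -1.
    return 2 * popcount - n
```

The ternary-input variant mentioned in the abstract would add a third input state (0) whose cells simply contribute nothing to the sum; that case is omitted here for brevity.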
-
Publication number: 20190080755
Abstract: A resistive random-access memory (RRAM) system includes an RRAM cell. The RRAM cell includes a first select line and a second select line, a word line, a bit line, a first resistive memory device, a first switching device, a second resistive memory device, a second switching device, and a comparator. The first resistive memory device is coupled between a first access node and the bit line. The first switching device is coupled between the first select line and the first access node. The second resistive memory device is coupled between a second access node and the bit line. The second switching device is coupled between the second select line and the second access node. The comparator includes a first input coupled to the bit line, a second input, and an output.
Type: Application
Filed: September 10, 2018
Publication date: March 14, 2019
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Shimeng Yu
-
Patent number: 9934463
Abstract: Neuromorphic computational circuitry is disclosed that includes a cross point resistive network and line control circuitry. The cross point resistive network includes variable resistive units. One set of the variable resistive units is configured to generate a correction line current on a conductive line while other sets of the variable resistive units generate resultant line currents on other conductive lines. The line control circuitry is configured to receive the line currents from the conductive lines and generate digital vector values. Each of the digital vector values is provided in accordance with a difference between the current level of a corresponding resultant line current and a current level of the correction line current. In this manner, the digital vector values are corrected by the current level of the correction line current in order to reduce errors resulting from finite on-to-off conductance state ratios.
Type: Grant
Filed: May 16, 2016
Date of Patent: April 3, 2018
Assignee: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Shimeng Yu, Yu Cao, Sarma Vrudhula
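A minimal numerical sketch of the correction mechanism described above: a cross-point column's current is the sum of conductance-times-voltage products, a dedicated correction column built from off-state units captures the baseline current caused by the finite (nonzero) off conductance, and that baseline is subtracted from each resultant column current before digitization. All conductance and voltage values below are illustrative assumptions.

```python
def column_current(conductances, input_voltages):
    """Current summed along one cross-point column: I = sum(G * V)."""
    return sum(g * v for g, v in zip(conductances, input_voltages))

def corrected_vector(resultant_currents, correction_current):
    """Subtract the correction column's current from each resultant
    column current, cancelling the off-state baseline."""
    return [i - correction_current for i in resultant_currents]

g_on, g_off = 1.0, 0.01          # finite on/off ratio: off is not zero
v_in = [1.0, 1.0, 1.0, 1.0]

# Correction column: all units in the off state.
correction = column_current([g_off] * 4, v_in)
# Resultant column: two on-state and two off-state units.
resultant = column_current([g_on, g_off, g_on, g_off], v_in)
corrected = corrected_vector([resultant], correction)
```

Without the correction, the off-state units' leakage (here 0.02) would be indistinguishable from signal; the subtraction removes that systematic offset before the line control circuitry quantizes the result.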
-
Patent number: 9887623
Abstract: An apparatus for providing on-chip voltage-regulated power includes a switched capacitor voltage conversion circuit that receives an elevated power demand signal and operates at a base rate when the elevated power demand signal is not active and at an elevated rate when it is active. The switched capacitor voltage conversion circuit comprises an auxiliary set of transistors that are disabled when the elevated power demand signal is not active and enabled when it is active. The apparatus may also include a droop detection circuit that monitors a monitored power signal and activates the elevated power demand signal in response to the monitored power signal dropping below a selected voltage level. The monitored power signal may be a voltage input provided by an input power supply for the switched capacitor voltage conversion circuit. A corresponding method is also disclosed herein.
Type: Grant
Filed: October 12, 2016
Date of Patent: February 6, 2018
Assignee: International Business Machines Corporation
Inventors: Leland Chang, Robert K. Montoye, Jae-sun Seo, Albert M. Young