Patents Examined by Ajay Ojha
-
Patent number: 11631455
Abstract: A compute-in-memory bitcell is provided that includes a pair of cross-coupled inverters for storing a stored bit. The compute-in-memory bitcell includes a logic gate for multiplying the stored bit with an input vector bit. An output node for the logic gate connects to a second plate of a capacitor. A first plate of the capacitor connects to a read bit line. A write driver controls a power supply voltage to the cross-coupled inverters, the first switch, and the second switch to capacitively write the stored bit to the pair of cross-coupled inverters.
Type: Grant
Filed: January 19, 2021
Date of Patent: April 18, 2023
Assignee: QUALCOMM INCORPORATED
Inventors: Seyed Arash Mirhaj, Xiaonan Chen, Ankit Srivastava, Sameer Wadhwa, Zhongze Wang
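Behaviorally, each such bitcell computes a one-bit product of its stored weight and an input bit, and the products accumulate as charge on the shared read bit line. A minimal sketch of that readout under assumed parameters (the AND multiply, the cell count, and the 0.9 V supply are illustrative, not from the patent):

```python
# Behavioral model of a compute-in-memory column: each bitcell multiplies
# its stored bit by an input bit (modeled here as a logical AND) and drives
# its capacitor plate with the product; the read bit line settles to a
# voltage proportional to the sum of products. Parameters are illustrative.

def cim_column_readout(stored_bits, input_bits, v_dd=0.9):
    assert len(stored_bits) == len(input_bits)
    # One-bit multiply per cell: product drives the capacitor's second plate.
    products = [w & x for w, x in zip(stored_bits, input_bits)]
    # Capacitive charge sharing: bit line settles to (popcount / N) * Vdd.
    return v_dd * sum(products) / len(products)

weights = [1, 0, 1, 1]   # bits held by the cross-coupled inverters
inputs  = [1, 1, 0, 1]   # input vector bits
print(cim_column_readout(weights, inputs))  # ~0.45 V: 2 of 4 products high
```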
-
Patent number: 11626146
Abstract: The present disclosure is drawn to, among other things, a method for accessing memory using dual standby modes, the method including receiving a first standby mode indication selecting a first standby mode from a first standby mode or a second standby mode, configuring a read bias system to provide a read bias voltage and a write bias system to provide approximately no voltage, or any voltage outside the necessary range for write operation, based on the first standby mode, receiving a second standby mode indication selecting the second standby mode, and configuring the read bias system to provide at least the read bias voltage and the write bias system to provide a write bias voltage based on the second standby mode, the read bias voltage being lower than the write bias voltage.
Type: Grant
Filed: November 17, 2021
Date of Patent: April 11, 2023
Assignee: EVERSPIN TECHNOLOGIES, INC.
Inventor: Syed M. Alam
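The policy is: keep the lower read bias available in both standby modes, and power the higher write bias only in the second mode. A compact model of that policy (mode names and voltage values are illustrative assumptions):

```python
from enum import Enum

class StandbyMode(Enum):
    READ_ONLY = 1   # first standby mode: read bias on, write bias off
    READ_WRITE = 2  # second standby mode: both bias systems powered

V_READ_BIAS = 0.6    # illustrative; must be lower than the write bias
V_WRITE_BIAS = 1.2   # illustrative

def configure_bias(mode):
    """Return (read_bias, write_bias) for the selected standby mode."""
    if mode is StandbyMode.READ_ONLY:
        # Write bias held at ~0 V (any voltage outside the write range).
        return V_READ_BIAS, 0.0
    return V_READ_BIAS, V_WRITE_BIAS

print(configure_bias(StandbyMode.READ_ONLY))   # (0.6, 0.0)
print(configure_bias(StandbyMode.READ_WRITE))  # (0.6, 1.2)
```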
-
Patent number: 11626157
Abstract: A memory device includes a memory cell array including a plurality of bit cells, each of the bit cells coupled to one of a plurality of bit lines and one of a plurality of word lines, respectively, wherein each of the plurality of bit cells is configured to: present an initial logic state during a random number generator (RNG) phase; and operate as a memory cell at a first voltage level during an SRAM phase; and a controller controlling bit line signals on the plurality of bit lines and word line signals on the plurality of word lines, wherein the controller is configured to: during the RNG phase, precharge the plurality of bit lines to a second voltage level, and determine the initial logic states of the plurality of bit cells to generate at least one random number, wherein the second voltage level is lower than the first voltage level.
Type: Grant
Filed: June 28, 2021
Date of Patent: April 11, 2023
Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
Inventors: Jui-Che Tsai, Chen-Lin Yang, Yu-Hao Hsu, Shih-Lien Linus Lu
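At the lowered precharge level each bit cell resolves toward a state set by its fixed device mismatch plus thermal noise, which is what makes the resolved pattern usable as a random number. A toy model of that behavior (mismatch and noise magnitudes are invented for illustration):

```python
import random

def rng_phase(n_cells, noise_sigma=0.3):
    """Model an SRAM RNG phase: each bit cell resolves to 0 or 1 according
    to its fixed process mismatch perturbed by fresh thermal noise."""
    # Static per-cell mismatch, frozen at "manufacture" time (illustrative).
    mismatch = [random.gauss(0.0, 1.0) for _ in range(n_cells)]
    def read_random_word():
        # Strongly skewed cells resolve the same way every time; cells with
        # small mismatch flip from read to read and contribute the entropy.
        return [1 if m + random.gauss(0.0, noise_sigma) > 0 else 0
                for m in mismatch]
    return read_random_word

read_word = rng_phase(16)
print(read_word())  # e.g. [1, 0, 0, 1, ...] -- raw bits before conditioning
```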
-
Patent number: 11625588
Abstract: A neuron circuit and an artificial neural network chip are provided. The neuron circuit includes a memristor and an integrator. The memristor generates a pulse train having an oscillation frequency when an applied voltage exceeds a predetermined threshold. The integrator is connected in parallel to the memristor for receiving and accumulating input pulses transmitted by a previous layer network at different times, and driving the memristor to transmit the pulse train to a next layer network when a voltage of the accumulated input pulses exceeds the predetermined threshold.
Type: Grant
Filed: March 4, 2020
Date of Patent: April 11, 2023
Assignee: Industrial Technology Research Institute
Inventors: Tuo-Hung Hou, Shyh-Shyuan Sheu, Jeng-Hua Wei, Heng-Yuan Lee, Ming-Hung Wu
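Functionally this is an integrate-and-fire neuron: the integrator accumulates incoming pulses, and when the accumulated voltage crosses the memristor's threshold the circuit emits a pulse train to the next layer. A discrete-time sketch (the threshold, pulse height, burst length, and reset-to-zero are simplifying assumptions):

```python
def integrate_and_fire(input_pulses, threshold=1.0, pulse_height=0.3,
                       train_length=4):
    """Accumulate input pulses; when the integrated voltage exceeds the
    threshold, emit a burst (modeling the memristor's oscillation-driven
    pulse train) and reset the integrator."""
    v = 0.0
    output = []
    for pulse in input_pulses:          # 1 = a pulse arrived this step
        v += pulse_height * pulse
        if v > threshold:
            output.append([1] * train_length)  # pulse train to next layer
            v = 0.0                            # simplifying reset assumption
        else:
            output.append([])
    return output

spikes = [1, 1, 0, 1, 1, 1, 0, 1]
print(integrate_and_fire(spikes))  # fires once, after the fifth pulse
```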
-
Patent number: 11609864
Abstract: A low-latency, high-bandwidth, and highly scalable method delivers data from a source device to multiple communication devices on a communication network. Under this method, the communication devices (also called player nodes) provide download and upload bandwidths for each other. In this manner, the bandwidth requirement on the data source is significantly reduced. Such a data delivery network is scalable without limits with the number of player nodes. In one embodiment, a computer network includes (a) a source server that provides a data stream for delivery in the computer network, (b) player nodes that exchange data with each other to obtain a complete copy of the data stream, the network nodes being capable of dynamically joining or exiting the computer network, and (c) a control server which maintains a topology graph representing connections between the source server and the player nodes, and the connections among the player nodes themselves.
Type: Grant
Filed: May 17, 2021
Date of Patent: March 21, 2023
Inventor: Wensheng Hua
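The control server's role reduces to bookkeeping on a graph: record which nodes are connected to the source and to each other, and update the graph as player nodes join and leave. A minimal sketch of that bookkeeping (class and method names are invented for illustration):

```python
class ControlServer:
    """Toy model of the control server's topology graph: an adjacency-set
    graph over the source server and the player nodes."""
    def __init__(self):
        self.graph = {"source": set()}

    def join(self, node, peers):
        # Connect the new player node to the source and/or existing peers.
        self.graph[node] = set()
        for p in peers:
            self.graph[node].add(p)
            self.graph[p].add(node)

    def leave(self, node):
        # Remove a departing node and every edge that touches it.
        for p in self.graph.pop(node):
            self.graph[p].discard(node)

server = ControlServer()
server.join("A", ["source"])        # first player pulls from the source
server.join("B", ["source", "A"])
server.join("C", ["A", "B"])        # later players pull from peers instead
server.leave("A")
print(server.graph)  # A's edges are gone; C still reaches the stream via B
```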
-
Patent number: 11604974
Abstract: A neural network computation circuit that outputs output data according to a result of a multiply-accumulate operation between input data and connection weight coefficients, the neural network computation circuit includes computation units in each of which a first non-volatile semiconductor memory element and a first cell transistor are connected in series between data lines, a second non-volatile semiconductor memory element and a second cell transistor are connected in series between data lines, and gates of the cell transistors are connected to word lines. The connection weight coefficients are stored into the non-volatile semiconductor memory elements. A word line selection circuit places the word lines in a selection state or a non-selection state according to the input data. A determination circuit determines current values flowing in the data lines to output the output data.
Type: Grant
Filed: March 3, 2020
Date of Patent: March 14, 2023
Assignee: PANASONIC HOLDINGS CORPORATION
Inventors: Kazuyuki Kouno, Takashi Ono, Masayoshi Nakayama, Reiji Mochida, Yuriko Hayata
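One common reading of the paired memory elements is a differential weight encoding: each signed weight is split across two conductances, binary inputs gate the word lines, and the determination circuit compares the two data-line currents. A behavioral sketch under that reading (the differential encoding and sign-only output are assumptions for illustration):

```python
def nvm_mac(inputs, g_pos, g_neg, v_read=0.2):
    """inputs: binary word-line selections; g_pos/g_neg: per-row
    conductances (siemens) encoding positive and negative weight parts.
    Returns +1/-1 according to which data line carries more current."""
    i_pos = sum(x * g * v_read for x, g in zip(inputs, g_pos))
    i_neg = sum(x * g * v_read for x, g in zip(inputs, g_neg))
    # The determination circuit compares the two data-line currents.
    return 1 if i_pos >= i_neg else -1

x = [1, 0, 1, 1]                  # word lines selected by the input data
g_pos = [4e-6, 5e-6, 0e-6, 3e-6]  # illustrative conductances
g_neg = [0e-6, 0e-6, 4e-6, 1e-6]
print(nvm_mac(x, g_pos, g_neg))   # +1: positive-line current dominates
```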
-
Patent number: 11604971
Abstract: A neuromorphic apparatus includes a three-dimensionally-stacked synaptic structure, and includes a plurality of unit synaptic modules, each of the plurality of unit synaptic modules including a plurality of synaptic layers, each of the plurality of synaptic layers including a plurality of stacked layers, and each of the plurality of unit synaptic modules further including a first decoder interposed between two among the plurality of synaptic layers. The neuromorphic apparatus further includes a second decoder that provides a level selection signal to the first decoder included in one among the plurality of unit synaptic modules to be accessed, and a third decoder that generates an address of one among a plurality of memristors to be accessed in a memristor array of one among the plurality of synaptic layers included in the one among the plurality of unit synaptic modules to be accessed.
Type: Grant
Filed: May 16, 2019
Date of Patent: March 14, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jaechul Park, Sangwook Kim
-
Patent number: 11600311
Abstract: A memory controller may control a memory device. The memory device may be coupled to the memory controller through a channel. The memory controller may include an idle time monitor and a clock signal generator. The idle time monitor may output an idle time interval of the memory device. The idle time interval may be between an end time of a previous operation of the memory device and a start time of a current operation. The clock signal generator may generate a clock signal based on the idle time interval and output the clock signal to the memory device through the channel to perform the current operation.
Type: Grant
Filed: July 6, 2021
Date of Patent: March 7, 2023
Assignee: SK hynix Inc.
Inventors: Hyun Sub Kim, Ie Ryung Park, Dong Sop Lee
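One plausible policy built on the idle interval is to start the next operation at a conservative clock rate after a long idle gap, when interface calibration may have drifted. A sketch of that policy (the thresholds and frequencies are invented; the abstract only says the clock is generated based on the idle interval):

```python
def select_clock_mhz(idle_ns, f_full=1600, f_safe=800,
                     long_idle_threshold_ns=100_000):
    """Pick a clock rate from the measured idle interval: after a long idle
    gap, run the next operation at a conservative rate. Thresholds and
    frequencies are illustrative, not values from the patent."""
    return f_safe if idle_ns > long_idle_threshold_ns else f_full

class IdleTimeMonitor:
    """Tracks the gap between the end of one operation and the start of
    the next, mirroring the abstract's idle time monitor."""
    def __init__(self):
        self.last_op_end_ns = 0
    def idle_interval(self, op_start_ns):
        return op_start_ns - self.last_op_end_ns
    def record_end(self, op_end_ns):
        self.last_op_end_ns = op_end_ns

mon = IdleTimeMonitor()
mon.record_end(5_000)
print(select_clock_mhz(mon.idle_interval(10_000)))    # short gap -> 1600
print(select_clock_mhz(mon.idle_interval(500_000)))   # long gap  -> 800
```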
-
Patent number: 11593070
Abstract: According to one embodiment, an arithmetic device includes an arithmetic circuit. The arithmetic circuit includes a memory part including a plurality of memory regions, and an arithmetic part. One of the memory regions includes a capacitance including a first terminal, and a first electrical circuit electrically connected to the first terminal and configured to output a voltage signal corresponding to a potential of the first terminal.
Type: Grant
Filed: March 10, 2020
Date of Patent: February 28, 2023
Assignee: Kabushiki Kaisha Toshiba
Inventors: Rie Sato, Koichi Mizushima
-
Patent number: 11593068
Abstract: A method for computation with recurrent neural networks includes receiving an input drive and a recurrent drive, producing at least one modulatory response; computing at least one output response, each output response including a sum of: (1) the input drive multiplied by a function of at least one of the at least one modulatory response, each input drive including a function of at least one input, and (2) the recurrent drive multiplied by a function of at least one of the at least one modulatory response, each recurrent drive including a function of the at least one output response, each modulatory response including a function of at least one of (i) the at least one input, (ii) the at least one output response, or (iii) at least one first offset, and computing a readout of the at least one output response.
Type: Grant
Filed: February 26, 2019
Date of Patent: February 28, 2023
Assignee: New York University
Inventors: David J. Heeger, Wayne E. Mackey
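Written out, the update is a gated recurrence: the output response is a modulator-weighted sum of an input drive and a recurrent drive, so the modulatory response controls how much the network integrates new input versus holds its prior state. A discrete-time sketch under that reading (the particular gain functions and the constant modulator are illustrative assumptions):

```python
def recurrent_step(y_prev, x, m, w_in=1.0, w_rec=1.0):
    """One step of the modulated recurrence: the output response sums the
    input drive and the recurrent drive, each multiplied by a function of
    the modulatory response m. The gain functions here are illustrative."""
    input_drive = w_in * x             # a function of the input
    recurrent_drive = w_rec * y_prev   # a function of the output response
    a = 1.0 / (1.0 + m)   # input gain: a large modulator suppresses input
    b = m / (1.0 + m)     # recurrent gain: a large modulator favors memory
    return a * input_drive + b * recurrent_drive

y = 0.0
for x in [1.0, 1.0, 0.0, 0.0]:
    # Constant m models a modulatory response set by a first offset.
    y = recurrent_step(y, x, m=3.0)
    print(round(y, 3))  # 0.25, 0.438, 0.328, 0.246 -- leaky integration
```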
-
Patent number: 11586906
Abstract: A computing device receives first data on which to train an artificial neural network (ANN). Using magnetic random access memory (MRAM), the computing device trains the ANN by performing a first set of training iterations on the first data. Each of the first set of iterations includes writing values for a set of weights of the ANN to the MRAM using first write parameters corresponding to a first write error rate. After performing the first set of iterations, the computing device performs a second set of training iterations on the first data. Each of the second set of iterations includes writing values for the set of weights of the ANN to the MRAM using second write parameters corresponding to a second write error rate. The second write error rate is lower than the first write error rate. The computing device stores values for the trained ANN.
Type: Grant
Filed: December 17, 2018
Date of Patent: February 21, 2023
Assignee: Integrated Silicon Solution, (Cayman) Inc.
Inventors: Michail Tzoufras, Marcin Gajek
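The intuition is that early training iterations tolerate noisy, cheap weight writes while late iterations need accurate ones, so write energy and latency are spent where precision matters. A toy training loop with bit-flip write noise (the model, error rates, and 8-bit quantization are all illustrative assumptions):

```python
import random

def mram_write(weights, bit_error_rate, bits=8):
    """Quantize each weight to `bits` bits and flip each stored bit with
    probability bit_error_rate, modeling an imperfect MRAM write."""
    written = []
    for w in weights:
        q = max(0, min(2**bits - 1, int(round(w * (2**bits - 1)))))
        for b in range(bits):
            if random.random() < bit_error_rate:
                q ^= 1 << b
        written.append(q / (2**bits - 1))
    return written

def train(weights, steps, lr, write_error_rate):
    for _ in range(steps):
        grads = [w - 0.5 for w in weights]   # toy loss: pull toward 0.5
        weights = [w - lr * g for w, g in zip(weights, grads)]
        weights = mram_write(weights, write_error_rate)  # persist to MRAM
    return weights

w = [random.random() for _ in range(4)]
w = train(w, steps=50, lr=0.1, write_error_rate=1e-2)  # fast, sloppy writes
w = train(w, steps=50, lr=0.1, write_error_rate=1e-5)  # slow, accurate writes
print([round(x, 3) for x in w])  # near 0.5 despite the early write noise
```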
-
Patent number: 11586563
Abstract: A processor distributes memory timing parameters and data among different memory modules based upon memory access patterns. The memory access patterns indicate different types, or classes, of data for an executing workload, with each class associated with different memory access characteristics, such as different row buffer hit rate levels, different frequencies of access, different criticalities, and the like. The processor assigns each memory module to a data class and sets the memory timing parameters for each memory module according to the module's assigned data class, thereby tailoring the memory timing parameters for efficient access of the corresponding data.
Type: Grant
Filed: December 22, 2020
Date of Patent: February 21, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Max Ruttenberg, Vendula Venkata Srikant Bharadwaj, Yasuko Eckert, Anthony Gutierrez, Mark H. Oskin
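In practice this amounts to profiling access patterns, binning data into classes, and programming each module's timing registers for its assigned class. A configuration-level sketch (the class names, timing fields, and cycle counts are invented for illustration; real values would come from the memory datasheet):

```python
from dataclasses import dataclass

@dataclass
class TimingParams:
    tRCD: int  # row-to-column delay, cycles (illustrative fields)
    tRP: int   # row precharge time
    tRAS: int  # row active time

# Tighter timings for streaming data with high row-buffer hit rates,
# conservative timings for irregular, latency-critical data.
CLASS_TIMINGS = {
    "high_hit_rate_streaming": TimingParams(tRCD=10, tRP=10, tRAS=28),
    "irregular_critical":      TimingParams(tRCD=14, tRP=14, tRAS=34),
    "cold_bulk":               TimingParams(tRCD=16, tRP=16, tRAS=38),
}

def assign_modules(profile):
    """profile: {module_id: data_class} derived from observed access
    patterns; returns the timing parameters to program per module."""
    return {mod: CLASS_TIMINGS[cls] for mod, cls in profile.items()}

print(assign_modules({0: "high_hit_rate_streaming",
                      1: "irregular_critical"}))
```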
-
Patent number: 11581042
Abstract: Provided are a processing apparatus and an electronic device including the same. The processing apparatus includes a bit cell line comprising bit cells connected in series, a mirror circuit unit configured to generate a mirror current by replicating a current flowing through the bit cell line at a ratio, a charge charging unit configured to charge a voltage corresponding to the mirror current as the mirror current replicated by the mirror circuit unit is applied, and a voltage measuring unit configured to output a value corresponding to a multiply-accumulate (MAC) operation of weights and inputs applied to the bit cell line, based on the voltage charged by the charge charging unit.
Type: Grant
Filed: February 22, 2021
Date of Patent: February 14, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyungwoo Lee, Sangjoon Kim, Seungchul Jung, Yongmin Ju
-
Patent number: 11574659
Abstract: A memory system having a processing device (e.g., CPU) and memory regions (e.g., in a DRAM device) on the same chip or die. The memory regions store data used by the processing device during machine learning processing (e.g., using a neural network). One or more controllers are coupled to the memory regions and configured to: read data from a first memory region (e.g., a first bank), including reading first data from the first memory region, where the first data is for use by the processing device in processing associated with machine learning; and write data to a second memory region (e.g., a second bank), including writing second data to the second memory region. The reading of the first data and writing of the second data are performed in parallel.
Type: Grant
Filed: September 11, 2018
Date of Patent: February 7, 2023
Assignee: Micron Technology, Inc.
Inventor: Gil Golov
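For machine-learning workloads this supports a double-buffering pattern: the processing device consumes one layer's data from one bank while the next layer's data is written into the other. A scheduling sketch of that pattern (the two-bank pipeline and the naming are illustrative assumptions):

```python
def pipeline_layers(layers):
    """Double-buffer across two memory banks: reading the current layer's
    data from one bank proceeds in parallel with writing the next layer's
    data into the other bank."""
    banks = {"A": None, "B": None}
    banks["A"] = layers[0]                 # preload first layer into bank A
    read_from, write_to = "A", "B"
    for i, _ in enumerate(layers):
        # These two operations are concurrent in hardware; shown
        # sequentially here for clarity.
        activations = f"compute({banks[read_from]})"   # read path
        if i + 1 < len(layers):
            banks[write_to] = layers[i + 1]            # write path
        print(f"step {i}: read bank {read_from}, wrote bank {write_to}")
        read_from, write_to = write_to, read_from      # swap bank roles
    return activations

pipeline_layers(["conv1_weights", "conv2_weights", "fc_weights"])
```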
-
Patent number: 11557327
Abstract: The invention relates to a method for operating a memory assembly. A physical address is received. The physical address is associated with a first memory segment of a memory assembly. The physical address is modified to a modified physical address. The modified physical address is associated with a second memory segment of the memory assembly.
Type: Grant
Filed: October 1, 2019
Date of Patent: January 17, 2023
Assignee: TECHNISCHE UNIVERSITÄT MÜNCHEN
Inventors: Alexandra Listl, Daniel Mueller-Gritschneder
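Such a remapping is naturally realized as a small translation step in the address path that redirects accesses from one segment to another. A minimal sketch (the segment size, the remap table, and steering away from a degraded segment as the motivation are assumptions; the abstract does not state why the address is modified):

```python
SEGMENT_SIZE = 4096  # illustrative segment granularity

# Remap table: first-segment index -> second-segment index (assumed).
REMAP = {3: 17}

def translate(phys_addr):
    """Modify a physical address so it lands in a different memory segment,
    preserving the offset within the segment."""
    segment, offset = divmod(phys_addr, SEGMENT_SIZE)
    segment = REMAP.get(segment, segment)  # redirect remapped segments
    return segment * SEGMENT_SIZE + offset

addr = 3 * SEGMENT_SIZE + 0x120    # falls in segment 3 (first segment)
print(hex(translate(addr)))        # 0x11120: same offset, now in segment 17
```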
-
Patent number: 11551072
Abstract: A spiking neural networks circuit and an operation method thereof are provided. The spiking neural networks circuit includes a bit-line input synapse array and a neuron circuit. The bit-line input synapse array includes a plurality of page buffers, a plurality of bit line transistors, a plurality of bit lines, a plurality of memory cells, one word line, a plurality of source lines and a plurality of source line transistors. The page buffers provide a plurality of data signals. Each of the bit line transistors is electrically connected to one of the page buffers. Each of the bit lines receives one of the data signals. The source line transistors are connected together. The neuron circuit is for outputting a feedback pulse.
Type: Grant
Filed: May 12, 2020
Date of Patent: January 10, 2023
Assignee: MACRONIX INTERNATIONAL CO., LTD.
Inventors: Cheng-Lin Sung, Teng-Hao Yeh
-
Patent number: 11551065
Abstract: Hardware for implementing a Deep Neural Network (DNN) having a convolution layer, the hardware comprising a plurality of convolution engines each operable to perform a convolution operation by applying a filter to a data window, each filter comprising a set of weights for combination with respective data values of a data window, and each of the plurality of convolution engines comprising: multiplication logic operable to combine a weight of a filter with a respective data value of a data window; control logic configured to: receive configuration information identifying a set of filters for operation on a set of data windows at the plurality of convolution engines; determine, using the configuration information, a sequence of convolution operations for evaluation at the multiplication logic; in accordance with the determined sequence of convolution operations, request weights and data values for at least partially applying a filter to a data window; and cause the multiplication logic to combine the weights with the respective data values.
Type: Grant
Filed: November 6, 2018
Date of Patent: January 10, 2023
Assignee: Imagination Technologies Limited
Inventor: Christopher Martin
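The loop each engine executes is: for every (filter, window) pair in its configured sequence, fetch weights and data values and multiply-accumulate. A reference sketch of that operation (how pairs are scheduled across engines is an assumption; in the hardware that sequence comes from the configuration information):

```python
def convolution_engine(filters, windows, schedule):
    """schedule: sequence of (filter_idx, window_idx) pairs assigned to
    this engine by the configuration information. Each step applies one
    filter to one data window via multiply-accumulate."""
    results = []
    for f_idx, w_idx in schedule:
        weights, window = filters[f_idx], windows[w_idx]
        # The multiplication logic: combine each weight with its
        # respective data value, then accumulate.
        acc = sum(w * x for w, x in zip(weights, window))
        results.append(((f_idx, w_idx), acc))
    return results

filters = [[1, 0, -1], [0.5, 0.5, 0.5]]       # flattened filter weights
windows = [[3, 4, 5], [2, 2, 2]]              # flattened data windows
schedule = [(0, 0), (0, 1), (1, 0), (1, 1)]   # all pairs on one engine
print(convolution_engine(filters, windows, schedule))
```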
-
Patent number: 11544547
Abstract: A non-volatile memory device includes an array of non-volatile memory cells that are configured to store weights of a neural network. Associated with the array is a data latch structure that includes a page buffer, which can store weights for a layer of the neural network that is read out of the array, and a transfer buffer that can store inputs for the neural network. The memory device can perform multiply and accumulate operations between inputs and weights of the neural network within the latch structure, avoiding the need to transfer data out of the array and associated latch structure for portions of an inference operation. By using binary weights and inputs, multiplication can be performed by bit-wise XNOR operations. The results can then be summed and activation applied, all within the latch structure.
Type: Grant
Filed: June 22, 2020
Date of Patent: January 3, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Anand Kulkarni, Won Ho Choi, Martin Lueker-Boden
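With weights and inputs restricted to +1/-1 and packed as bits, each multiply becomes an XNOR and the dot product a popcount, which is what lets the whole step run inside the latch structure. A bit-level sketch of that arithmetic (the sign-threshold activation is an illustrative choice):

```python
def binary_mac(weight_bits, input_bits, n_bits):
    """XNOR multiply-accumulate over n_bits binary values packed in ints.
    Bit = 1 encodes +1, bit = 0 encodes -1."""
    matches = bin(~(weight_bits ^ input_bits) & ((1 << n_bits) - 1)).count("1")
    # popcount of XNOR counts agreeing positions; map it to a +/-1 dot
    # product: matches contribute +1 each, mismatches -1 each.
    return 2 * matches - n_bits

def activation(dot):
    return 1 if dot >= 0 else 0  # sign activation (illustrative)

w = 0b10110101  # 8 binary weights held in the page buffer
x = 0b10011101  # 8 binary inputs held in the transfer buffer
dot = binary_mac(w, x, 8)
print(dot, activation(dot))  # 4 1 -- dot product in [-8, 8], then activation
```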
-
Patent number: 11545211
Abstract: A semiconductor memory device includes a memory cell array, a sense amplifier circuit and a random code generator. The memory cell array is divided into a plurality of sub array blocks arranged in a first direction and a second direction crossing the first direction. The sense amplifier circuit is arranged in the second direction with respect to the memory cell array, and includes a plurality of input/output (I/O) sense amplifiers. The random code generator generates a random code which is randomly determined based on a power stabilizing signal and an anti-fuse flag signal. A second group of I/O sense amplifiers selected from among a first group of I/O sense amplifiers performs a data I/O operation by scrambling data bits of main data. The first group of I/O sense amplifiers correspond to a first group of sub array blocks accessed by an access address.
Type: Grant
Filed: August 12, 2021
Date of Patent: January 3, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Kiheung Kim, Junhyung Kim, Sungchul Park, Hangyun Jung, Hyojin Jung, Kyungsoo Ha
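Scrambling against a per-device random code is typically an XOR applied on the write path and reapplied on the read path, making the stored pattern device-unique while staying self-inverse. A sketch of that data path (XOR as the scrambling function is an assumption; the abstract does not specify it):

```python
def scramble(data_bits, random_code):
    """Self-inverse XOR scrambling: applying the same code again on the
    read path recovers the original data."""
    return [d ^ c for d, c in zip(data_bits, random_code)]

random_code = [1, 0, 1, 1, 0, 0, 1, 0]  # fixed once power stabilizes
main_data   = [0, 1, 1, 0, 1, 0, 0, 1]

stored = scramble(main_data, random_code)    # write path
recovered = scramble(stored, random_code)    # read path
assert recovered == main_data
print(stored)  # [1, 1, 0, 1, 1, 0, 1, 1] -- device-unique cell pattern
```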
-
Patent number: 11537863
Abstract: A resistive processing unit cell includes a weight storage device to store a weight value of the resistive processing unit cell, and multiple circuit blocks. Each circuit block includes a weight update circuit coupled to dedicated update control lines, and a weight read circuit coupled to dedicated read control lines. The circuit blocks are configured to operate in parallel to (i) perform separate weight read operations in which each read circuit generates a read current based on a stored weight value, and outputs the read current on the dedicated read control lines of the read circuit, and (ii) perform separate weight update operations in which each update circuit receives respective update control signals on the dedicated update control lines, generates update currents based on the respective update control signals, and applies the update current to the weight storage device to adjust the weight value based on the update current.
Type: Grant
Filed: September 12, 2019
Date of Patent: December 27, 2022
Assignee: International Business Machines Corporation
Inventors: Effendi Leobandung, Zhibin Ren, Malte Rasch
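Because the circuit blocks share one weight storage device but have dedicated control lines, their update currents can be applied concurrently and the stored weight integrates their sum. A behavioral sketch of one cell with two parallel blocks (the linear current-to-weight model and the read-current scale are illustrative assumptions):

```python
class RPUCell:
    """One resistive processing unit cell: a shared weight storage device
    driven by multiple parallel circuit blocks."""
    def __init__(self, weight=0.0, gain=0.01):
        self.weight = weight
        self.gain = gain  # weight change per unit update current (assumed)

    def update(self, block_currents):
        # Parallel weight update: each block contributes its own update
        # current to the shared storage device.
        self.weight += self.gain * sum(block_currents)

    def read(self, n_blocks=2):
        # Parallel weight read: each block outputs a current proportional
        # to the stored weight on its dedicated read lines (scale assumed).
        return [self.weight * 1e-6 for _ in range(n_blocks)]

cell = RPUCell()
cell.update([+2.0, -0.5])  # two blocks apply update currents in parallel
print(cell.weight)         # 0.015
print(cell.read())         # identical read currents from both blocks
```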