Patents by Inventor Dmitri Yudanov
Dmitri Yudanov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12380323
Abstract: The disclosed embodiments relate to storing critical data in a memory device such as a Flash or DRAM memory device. In one embodiment, a device comprising a plurality of parallel processors is disclosed, the plurality of parallel processors configured to: perform a search and match operation, the search and match operation loading a plurality of synaptic identifier bit strings and a plurality of spike identifier bit strings, the search and match operation further generating a plurality of bitmasks; perform a synaptic integration phase, the synaptic integration phase generating a plurality of synaptic current vectors based on the plurality of bitmasks, the synaptic current vectors associated with respective synthetic neurons; solve a neural membrane equation for each of the synthetic neurons; and update membrane potentials associated with the synthetic neurons, the membrane potentials stored in a memory device.
Type: Grant
Filed: May 28, 2021
Date of Patent: August 5, 2025
Assignee: Micron Technology, Inc.
Inventor: Dmitri Yudanov
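The four phases named in this abstract (search and match, synaptic integration, membrane-equation solve, potential update) can be illustrated with a minimal software sketch. All names, the leaky membrane model, and the parameter values below are illustrative assumptions, not taken from the patent.

```python
def snn_step(synapses, spike_ids, potentials, leak=0.9, threshold=1.0):
    """One step over a list of (target_neuron, source_id, weight) synapses."""
    spikes = set(spike_ids)

    # Phase 1: search and match -- bitmask marking synapses whose
    # source identifier matches a spike identifier this step.
    bitmask = [1 if src in spikes else 0 for (_, src, _) in synapses]

    # Phase 2: synaptic integration -- per-neuron current vector.
    currents = {n: 0.0 for n in potentials}
    for hit, (tgt, _, w) in zip(bitmask, synapses):
        if hit:
            currents[tgt] += w

    # Phases 3-4: solve a simple leaky membrane equation, then update
    # the stored membrane potentials; return ids of neurons that fired.
    fired = []
    for n in potentials:
        potentials[n] = leak * potentials[n] + currents[n]
        if potentials[n] >= threshold:
            fired.append(n)
            potentials[n] = 0.0
    return fired
```

In the patented device each phase runs across many parallel processors over bit strings; the sequential loop here only shows the data flow between phases.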
-
Patent number: 12333304
Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-parallel way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of a memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.
Type: Grant
Filed: February 20, 2024
Date of Patent: June 17, 2025
Assignee: Micron Technology, Inc.
Inventors: Dmitri Yudanov, Sean S. Eilert, Sivagnanam Parthasarathy, Shivasankar Gunasekaran, Ameen D. Akel
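The XOR/AND logic with a carry flag that this abstract mentions is the classic building block of single-pass addition. As a hypothetical software model (the function name and bit ordering are assumptions, not from the patent):

```python
def bitline_add(a_bits, b_bits):
    """Add two equal-length little-endian bit vectors using only XOR and
    AND per step plus a single carry flag, one result bit per cycle."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                     # sum bit from XOR logic
        carry = (a & b) | (carry & (a ^ b))   # carry flag register
        out.append(s)
    out.append(carry)                         # final carry-out bit
    return out
```

Each loop iteration consumes one freshly fetched bit from each string and emits one result bit, so no intermediate copy of either number is ever stored, matching the single-pass property the abstract describes.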
-
Publication number: 20250173093
Abstract: Methods, systems, and devices related to write broadcast operations associated with a memory device are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory device may enable write broadcast operations. A write broadcast may occur from one or more signal development components or from one or more multiplexers to multiple locations of the memory array.
Type: Application
Filed: December 3, 2024
Publication date: May 29, 2025
Inventors: Dmitri Yudanov, Shanky Kumar Jain
-
Publication number: 20250173144
Abstract: Methods, devices, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-parallel way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of a memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.
Type: Application
Filed: January 16, 2025
Publication date: May 29, 2025
Inventors: Dmitri Yudanov, Sean S. Eilert, Sivagnanam Parthasarathy, Shivasankar Gunasekaran, Ameen D. Akel
-
Publication number: 20250165762
Abstract: Systems and methods are disclosed. A system may include a number of memory arrays and circuitry coupled to the number of memory arrays. The circuitry may store synaptic connections of a destination neuron in a first memory array of the number of memory arrays. The circuitry may also store pre-synaptic spike events from respective source neurons in a second memory array of the number of memory arrays. In response to a match of a neuron identification of the synaptic connections of the destination neuron with a neuron identification of the source neurons, the circuitry may generate a signal. The circuitry may further drive, based on the signal, at least one word line of a third memory array of the number of memory arrays.
Type: Application
Filed: January 14, 2025
Publication date: May 22, 2025
Inventors: Dmitri Yudanov, Sean S. Eilert, Hernan A. Castro, Ameen D. Akel
-
Patent number: 12301659
Abstract: Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via a communication link between the borrower device and the lender device.
Type: Grant
Filed: August 30, 2022
Date of Patent: May 13, 2025
Assignee: Micron Technology, Inc.
Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
-
Publication number: 20250147897
Abstract: Systems, methods and apparatuses for fine-grain data migration using memory as a service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before accessing a virtual memory address in a sub-region, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to make the memory access. Otherwise, the sub-region is cached from the borrowed memory region to the local memory before the physical memory address is used.
Type: Application
Filed: January 10, 2025
Publication date: May 8, 2025
Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
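The status-map check described above can be modeled in a few lines. This is a hypothetical sketch: the class, the 64-byte sub-region size, and the byte-array stand-ins for remote and local memory are illustrative assumptions, not details from the publication.

```python
class BorrowedRegion:
    """Model of a borrowed remote page whose sub-regions (cache lines)
    are cached into local memory on first access, per a status map."""
    LINE = 64  # sub-region (cache line) size in bytes, illustrative

    def __init__(self, remote_page: bytes):
        self.remote = remote_page                 # lender-side memory
        self.local = bytearray(len(remote_page))  # borrower-side cache
        self.status = [False] * (len(remote_page) // self.LINE)

    def read(self, offset: int) -> int:
        line = offset // self.LINE
        if not self.status[line]:
            # Status map says "not cached": fetch the sub-region from
            # the borrowed memory region into local memory first.
            start = line * self.LINE
            self.local[start:start + self.LINE] = \
                self.remote[start:start + self.LINE]
            self.status[line] = True
        # Cached: the MMU would use the translated local physical address.
        return self.local[offset]
```

The point of the fine-grain scheme is visible here: a read touches only one cache line of the remote page, not the whole page.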
-
Patent number: 12283318
Abstract: Systems and methods for performing a pattern matching operation in a memory device are disclosed. The memory device may include a controller and memory arrays where the memory arrays store different patterns along bit lines. An input pattern is applied to the memory array(s) to determine whether the pattern is stored in the memory device. Word lines may be activated in series or in parallel to search for patterns within the memory array. The memory array may include memory cells that store binary digits, discrete values or analog values.
Type: Grant
Filed: November 21, 2023
Date of Patent: April 22, 2025
Assignee: Lodestar Licensing Group LLC
Inventor: Dmitri Yudanov
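A small sketch can show the column-wise matching idea: each pattern is stored along a bit line (a column of the array), and applying the input pattern row by row eliminates columns that disagree. The function name and the list-of-rows representation are illustrative assumptions, not from the patent.

```python
def match_patterns(array, query):
    """Each column of `array` stores one pattern along a bit line.
    Applying `query` one word line (row) at a time leaves only the
    fully matching bit lines asserted; return their column indices."""
    n_cols = len(array[0])
    matched = [True] * n_cols            # per-bit-line match lines
    for row, q in zip(array, query):     # activate one word line per bit
        for c in range(n_cols):
            matched[c] &= (row[c] == q)
    return [c for c, m in enumerate(matched) if m]
```

In hardware all columns are compared simultaneously for each activated word line, so the search cost grows with pattern length, not with the number of stored patterns.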
-
Publication number: 20250124102
Abstract: Memories might include a plurality of strings of series-connected memory cells, each corresponding to a respective digit of a plurality of digits of a multiplicand, and might further include a controller configured to cause the memory to generate respective current flows through the plurality of strings of series-connected memory cells for each digit of a plurality of digits of a multiplier having respective current levels indicative of values of each digit of the plurality of digits of the multiplier times the multiplicand, to convert the respective current levels to respective digital values indicative of the values and magnitudes of each digit of the plurality of digits of the multiplier times the multiplicand, and to sum the respective digital value of each digit of the plurality of digits of the multiplier.
Type: Application
Filed: June 28, 2024
Publication date: April 17, 2025
Applicant: MICRON TECHNOLOGY, INC.
Inventors: Dmitri Yudanov, Lawrence Celso Miranda, Sheyang Ning, Aliasger Zaidy
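The dense claim language above reduces to a familiar arithmetic identity: for each multiplier digit, produce a level proportional to (digit × multiplicand), digitize it, and sum the digitized values with positional weights. A hypothetical numeric model (names and little-endian digit order are assumptions):

```python
def string_multiply(multiplicand_digits, multiplier_digits, base=2):
    """Model of the scheme: per multiplier digit, a current level encodes
    (digit x multiplicand); converting those levels to digital values and
    summing them with positional weights yields the full product."""
    multiplicand = sum(d * base**i
                       for i, d in enumerate(multiplicand_digits))
    total = 0
    for pos, digit in enumerate(multiplier_digits):
        partial = digit * multiplicand   # "current level" for this digit
        total += partial * base**pos     # positional sum after conversion
    return total
```

The `base` parameter reflects that the claim is not limited to binary digits; multi-level cells could encode larger digit values per string.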
-
Publication number: 20250069629
Abstract: Processing can occur in registers of a memory sub-system. A first plurality of registers coupled to the plurality of sense amplifiers can store the first plurality of bits received from the plurality of sense amplifiers. Processing circuitry coupled to the first plurality of registers can receive the first plurality of bits from the first plurality of registers and can perform an operation on the first plurality of bits to generate result bits. A second plurality of registers coupled to the processing circuitry and the plurality of registers can store the result bits received from the processing circuitry and can provide the result bits to a plurality of data input/output (I/O) lines prior to storing a second plurality of bits.
Type: Application
Filed: July 27, 2024
Publication date: February 27, 2025
Inventors: Dmitri Yudanov, James B. Johnson, Peter L. Brown, Glen E. Hush
-
Patent number: 12229060
Abstract: A memory module having a plurality of memory chips, at least one controller (e.g., a central processing unit or special-purpose controller), and at least one interface device configured to communicate input and output data for the memory module. The input and output data bypasses at least one processor (e.g., a central processing unit) of a computing device in which the memory module is installed. And, the at least one interface device can be configured to communicate the input and output data to at least one other memory module in the computing device. Also, the memory module can be one module in a plurality of memory modules of a memory module system.
Type: Grant
Filed: December 17, 2021
Date of Patent: February 18, 2025
Assignee: Micron Technology, Inc.
Inventor: Dmitri Yudanov
-
Publication number: 20250045096
Abstract: Customized root processes for groups of applications in a computing device. A computing device (e.g., a mobile device) can monitor usage of applications. The device can then store data related to the usage of the applications, and group the applications into groups according to the stored data. The device can customize and execute a root process for a group of applications according to usage common to each application in the group. The device can generate patterns of prior executions shared amongst the applications in the group based on the stored data common to each application in the group, and execute the root process of the group according to the patterns. The device can receive a request to start an application from the group from a user of the device, and start the application upon receiving the request and by using the root process of the group of applications.
Type: Application
Filed: October 17, 2024
Publication date: February 6, 2025
Inventors: Dmitri Yudanov, Samuel E. Bradshaw
-
Publication number: 20250028676
Abstract: The present disclosure is directed to a distributed graphics processor unit (GPU) architecture that includes an array of processing nodes. Each processing node may include a GPU node that is coupled to its own fast memory unit and its own storage unit. The fast memory unit and storage unit may be integrated into a single unit or may be separately coupled to the GPU node. The processing node may have its fast memory unit coupled to both the GPU node and the storage node. The various architectures provide a GPU-based system that may be treated as a storage unit, such as a solid state drive (SSD), that performs onboard processing to perform memory-oriented operations. In this respect, the system may be viewed as a "smart drive" for big-data near-storage processing.
Type: Application
Filed: October 4, 2024
Publication date: January 23, 2025
Inventor: Dmitri Yudanov
-
Patent number: 12135985
Abstract: Customized root processes for groups of applications in a computing device. A computing device (e.g., a mobile device) can monitor usage of applications. The device can then store data related to the usage of the applications, and group the applications into groups according to the stored data. The device can customize and execute a root process for a group of applications according to usage common to each application in the group. The device can generate patterns of prior executions shared amongst the applications in the group based on the stored data common to each application in the group, and execute the root process of the group according to the patterns. The device can receive a request to start an application from the group from a user of the device, and start the application upon receiving the request and by using the root process of the group of applications.
Type: Grant
Filed: August 30, 2022
Date of Patent: November 5, 2024
Assignee: Micron Technology, Inc.
Inventors: Dmitri Yudanov, Samuel E. Bradshaw
-
Publication number: 20240345957
Abstract: Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after the prediction of a time period during which a network connection between computing devices having borrowed memory degrades, the computing devices can make a migration decision for content of a virtual memory address region, based at least in part on a predicted usage of content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made based on a memory usage history, a battery usage history, a location history, etc. using an artificial neural network; and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices.
Type: Application
Filed: June 21, 2024
Publication date: October 17, 2024
Inventors: Kenneth Marion Curewitz, Ameen D. Akel, Samuel E. Bradshaw, Sean Stephen Eilert, Dmitri Yudanov
-
Patent number: 12111789
Abstract: The present disclosure is directed to a distributed graphics processor unit (GPU) architecture that includes an array of processing nodes. Each processing node may include a GPU node that is coupled to its own fast memory unit and its own storage unit. The fast memory unit and storage unit may be integrated into a single unit or may be separately coupled to the GPU node. The processing node may have its fast memory unit coupled to both the GPU node and the storage node. The various architectures provide a GPU-based system that may be treated as a storage unit, such as a solid state drive (SSD), that performs onboard processing to perform memory-oriented operations. In this respect, the system may be viewed as a "smart drive" for big-data near-storage processing.
Type: Grant
Filed: April 22, 2020
Date of Patent: October 8, 2024
Assignee: Micron Technology, Inc.
Inventor: Dmitri Yudanov
-
Publication number: 20240330667
Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Bits of a first number may be stored on a number of memory elements, wherein each memory element of the number of memory elements intersects a digit line and an access line of a number of access lines. A number of signals corresponding to bits of a second number may be driven on the number of access lines to generate a number of output signals. A value equal to a product of the first number and the second number may be generated based on the number of output signals.
Type: Application
Filed: June 10, 2024
Publication date: October 3, 2024
Inventors: Dmitri Yudanov, Sean S. Eilert, Hernan A. Castro, William A. Melton
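The crosspoint multiplication described here can be modeled numerically: a memory element conducts only where both its stored bit and the driven access-line signal are 1 (effectively an AND at each crosspoint), and the output signals are combined with positional weights. The function name and little-endian bit order below are illustrative assumptions, not from the publication.

```python
def crossbar_multiply(a_bits, b_bits):
    """Bits of `a` sit on memory elements along a digit line; driving a
    signal for each bit of `b` on its access line produces output
    signals whose positionally weighted sum equals a * b."""
    total = 0
    for i, b in enumerate(b_bits):        # one access-line pulse per b bit
        # Output signal per element: stored bit AND driven signal.
        outputs = [a & b for a in a_bits]
        for j, o in enumerate(outputs):
            total += o << (i + j)         # positional weighting of outputs
    return total
```

This is shift-and-add multiplication with the partial products formed inside the array rather than in a separate processing unit.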
-
Publication number: 20240290391
Abstract: A system for providing complex page access in memory devices, such as hybrid-bonded memory, is disclosed. The system receives a plurality of requests for data, such as from a host device. The system identifies a memory page of a memory device storing data bits corresponding to the requested data. The memory page may be spread across a plurality of sections of a memory bank of the memory device. Each section of the memory bank being utilized for a portion of the memory page may be addressable by a separate row address. The system activates the memory page as a whole and enables the data to be accessed from different memory rows in different sections of the memory page of the memory device using the separate row addresses. The system accomplishes the foregoing instead of requiring access from only a single location of the memory bank at a time.
Type: Application
Filed: January 19, 2024
Publication date: August 29, 2024
Inventors: Dmitri Yudanov, Jeongsu Jeong
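The key idea, one independent row address per section of the bank, can be sketched in a few lines. The data layout (a dict of rows per section) and the function name are illustrative assumptions, not from the publication.

```python
def activate_complex_page(bank, row_addrs):
    """A page spread across the bank's sections, each section holding
    its portion at its own row address; activating the page as a whole
    gathers those rows from all sections in a single operation."""
    # bank[section][row] -> list of data bits stored in that row
    return [bit
            for section, row in enumerate(row_addrs)
            for bit in bank[section][row]]
```

A conventional activate would constrain every section to the same row address; here section 0 can contribute row 2 while section 1 contributes row 5, which is what makes the page "complex".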
-
Publication number: 20240273349
Abstract: Spiking events in a spiking neural network (SNN) may be processed via a memory system. A memory system may store data corresponding to a group of destination neurons. The memory system may, at each time interval of an SNN, pass through data corresponding to a group of pre-synaptic spike events from respective source neurons. The data corresponding to the group of pre-synaptic spike events may be subsequently stored in the memory system.
Type: Application
Filed: April 22, 2024
Publication date: August 15, 2024
Inventors: Dmitri Yudanov, Sean S. Eilert, Hernan A. Castro, Ameen D. Akel
-
Patent number: 12056599
Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Bits of a first number may be stored on a number of memory elements, wherein each memory element of the number of memory elements intersects a bit line and a word line of a number of word lines. A number of signals corresponding to bits of a second number may be driven on the number of word lines to generate a number of output signals. A value equal to a product of the first number and the second number may be generated based on the number of output signals.
Type: Grant
Filed: December 2, 2022
Date of Patent: August 6, 2024
Assignee: Micron Technology, Inc.
Inventors: Dmitri Yudanov, Sean S. Eilert, Hernan A. Castro, William A. Melton