Patents by Inventor Dmitri Yudanov

Dmitri Yudanov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11256624
    Abstract: Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after predicting a time period during which a network connection between computing devices having borrowed memory will degrade, the computing devices can make a migration decision for the content of a virtual memory address region based at least in part on a predicted usage of the content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made based on a memory usage history, a battery usage history, a location history, etc. using an artificial neural network; and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: February 22, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Kenneth Marion Curewitz, Ameen D. Akel, Samuel E. Bradshaw, Sean Stephen Eilert, Dmitri Yudanov
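    A rough Python sketch of the migration decision described above follows; the inputs, names, and thresholds are invented for illustration, and the patent describes driving this decision with an artificial neural network trained on memory, battery, and location histories rather than a fixed rule.
      from dataclasses import dataclass

      @dataclass
      class RegionStats:
          predicted_accesses_during_outage: int  # predicted usage of the content
          battery_level: float                   # 0.0 .. 1.0
          has_scheduled_operation: bool          # e.g., a sync planned during the outage

      def should_migrate(stats: RegionStats) -> bool:
          """Toy rule: copy the borrowed-memory region locally before the
          predicted period of degraded connectivity if it will be needed."""
          if stats.has_scheduled_operation:
              return True
          return stats.predicted_accesses_during_outage > 0 and stats.battery_level > 0.2

      print(should_migrate(RegionStats(12, 0.8, False)))  # True: the content will be used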
  • Publication number: 20220050776
    Abstract: Methods, systems, and devices related to content-addressable memory for signal development caching are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory device may also include storage, such as a content-addressable memory, configured to store a mapping between addresses of the signal development cache and addresses of the memory array. In various examples, accessing the memory device may include determining and storing a mapping between addresses of the signal development cache and addresses of the memory array, or determining whether to access the signal development cache or the memory array based on such a mapping.
    Type: Application
    Filed: December 20, 2019
    Publication date: February 17, 2022
    Inventors: Dmitri A. Yudanov, Shanky Kumar Jain
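    The sketch below is a minimal software model (not hardware) of the mapping described above: a content-addressable structure keyed by memory-array row address returns the signal development cache entry holding that row's signals; all names are illustrative.
      from itertools import count

      cam = {}             # memory-array row address -> cache slot (the CAM's job)
      cache_slots = {}     # cache slot -> stored signal state
      free_slots = count()

      def access(array_row):
          if array_row in cam:                        # mapping hit: serve from the cache
              return "cache", cache_slots[cam[array_row]]
          slot = next(free_slots)                     # miss: develop signals from the array
          cam[array_row] = slot                       # and record the new mapping
          cache_slots[slot] = f"signals(row {array_row})"
          return "array", cache_slots[slot]

      print(access(42))   # ('array', ...) first access goes to the memory array
      print(access(42))   # ('cache', ...) repeat access is served from the cache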
  • Publication number: 20220044713
    Abstract: Methods, systems, and devices related to write broadcast operations associated with a memory device are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory device may enable read broadcast operations. A read broadcast may occur from the memory array to multiple locations of the signal development cache, for example via one or more multiplexers.
    Type: Application
    Filed: December 20, 2019
    Publication date: February 10, 2022
    Inventors: Dmitri A. Yudanov, Shanky Kumar Jain
  • Publication number: 20220044723
    Abstract: Methods, systems, and devices related to signal development caching in a memory device are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). In various examples, accessing the memory device may include accessing information from the signal development cache, or the memory array, or both, based on various mappings or operations of the memory device.
    Type: Application
    Filed: December 20, 2019
    Publication date: February 10, 2022
    Inventors: Dmitri A. Yudanov, Shanky Kumar Jain
  • Publication number: 20220037920
    Abstract: Methods, systems, and devices for inductive energy harvesting and signal development for a memory device are described. One or more inductors may be included in or coupled with a memory device and used to provide current for various operations of the memory device based on energy harvested by the inductors. An inductor may harvest energy based on current being routed through the inductor or based on being inductively coupled with a second inductor through which current is routed. After harvesting energy, an inductor may provide current, and the current provided by the inductor may be used to drive access lines or otherwise as part of executing one or more operations at the memory device. Such techniques may improve energy efficiency or improve the drive strength of signals for the memory device, among other benefits.
    Type: Application
    Filed: July 28, 2020
    Publication date: February 3, 2022
    Inventor: Dmitri A. Yudanov
  • Publication number: 20220027285
    Abstract: Systems, methods and apparatuses for fine-grain data migration when using Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before accessing a virtual memory address in a sub-region, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to perform the memory access. Otherwise, the sub-region is cached from the borrowed memory region to the local memory before the physical memory address is used.
    Type: Application
    Filed: October 7, 2021
    Publication date: January 27, 2022
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
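    A toy model of the memory status map described above, assuming a 4 KiB page split into 64-byte sub-regions; the fetch function stands in for pulling a line over the network and is purely illustrative.
      LINE = 64
      PAGE_LINES = 64                       # 4 KiB page / 64 B cache lines (assumed sizes)

      status_map = [False] * PAGE_LINES     # cache availability per sub-region
      local_page = bytearray(LINE * PAGE_LINES)

      def fetch_from_remote(line_idx):      # stand-in for caching the sub-region locally
          local_page[line_idx * LINE:(line_idx + 1) * LINE] = bytes([0xAB]) * LINE

      def load(offset_in_page):
          line_idx = offset_in_page // LINE
          if not status_map[line_idx]:      # no cache availability: cache the line first
              fetch_from_remote(line_idx)
              status_map[line_idx] = True
          return local_page[offset_in_page] # then use the local physical address

      print(load(130))   # first touch of line 2 triggers the fetch
      print(load(131))   # later accesses to the same line hit local memory directly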
  • Patent number: 11232049
    Abstract: A memory module having a plurality of memory chips, at least one controller (e.g., a central processing unit or special-purpose controller), and at least one interface device configured to communicate input and output data for the memory module. The input and output data bypasses at least one processor (e.g., a central processing unit) of a computing device in which the memory module is installed. The at least one interface device can also be configured to communicate the input and output data to at least one other memory module in the computing device, and the memory module can be one module in a plurality of memory modules of a memory module system.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: January 25, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Dmitri Yudanov
  • Publication number: 20220020414
    Abstract: Methods, systems, and devices related to page policies for signal development caching in a memory device are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory device may be configured to receive a read command for data stored in the memory array and transfer the data from the memory array to the signal development cache. The memory device may be configured to sense the data using an array of sense amplifiers. The memory device may be configured to write the data from the signal development cache back to the memory array based on one or more policies.
    Type: Application
    Filed: December 20, 2019
    Publication date: January 20, 2022
    Inventors: Dmitri A. Yudanov, Shanky Kumar Jain
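    One loose reading of the page policy described above, sketched in Python: data read into the signal development cache is written back to the array either immediately or only when the cached row is evicted; the policy names and structure here are assumptions, not terms from the filing.
      array = {0: "A", 1: "B"}      # rows of the memory array
      cache = {}                    # signal development cache

      def read(row, policy="open"):
          cache[row] = array[row]           # transfer the row into the cache
          value = cache[row]                # sense amplifiers sense the cached signals
          if policy == "closed":
              array[row] = cache.pop(row)   # write back to the array immediately
          return value

      def evict(row):
          if row in cache:
              array[row] = cache.pop(row)   # deferred write-back on eviction

      print(read(0, policy="open"))         # 'A' stays cached until evict(0) runs
      evict(0)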
  • Publication number: 20220020424
    Abstract: Methods, systems, and devices related to write broadcast operations associated with a memory device are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory device may enable write broadcast operations. A write broadcast may occur from one or more signal development components or from one or more multiplexers to multiple locations of the memory array.
    Type: Application
    Filed: December 20, 2019
    Publication date: January 20, 2022
    Inventors: Dmitri A. Yudanov, Shanky Kumar Jain
  • Publication number: 20220019442
    Abstract: An example system implementing a processing-in-memory pipeline includes: a memory array to store a plurality of look-up tables (LUTs) and data; a control block coupled to the memory array, the control block to control a computational pipeline by activating one or more LUTs of the plurality of LUTs; and a logic array coupled to the memory array and the control block, the logic array to perform, based on control inputs received from the control block, logic operations on the activated LUTs and the data.
    Type: Application
    Filed: July 17, 2020
    Publication date: January 20, 2022
    Inventor: Dmitri Yudanov
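    A toy processing-in-memory pipeline in the spirit of the abstract: each stage is a look-up table standing in for one stored in the memory array, and a control sequence activates the LUTs in order; the particular tables (square, then increment) are invented for illustration.
      luts = {
          "square":    [x * x % 256 for x in range(256)],
          "increment": [(x + 1) % 256 for x in range(256)],
      }

      def run_pipeline(data, stages):
          for stage in stages:                      # control block activates one LUT per step
              table = luts[stage]
              data = [table[x] for x in data]       # logic array applies the active LUT
          return data

      print(run_pipeline([3, 10, 255], ["square", "increment"]))   # [10, 101, 2]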
  • Patent number: 11221797
    Abstract: Methods, systems, and devices related to domain-based access in a memory device are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory array may be organized according to domains, which may refer to various configurations or collections of access lines, and selections thereof, of different portions of the memory array. In various examples, a memory device may determine a plurality of domains for a received access command, or an order for accessing a plurality of domains for a received access command, or combinations thereof, based on an availability of the signal development cache.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: January 11, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Dmitri A. Yudanov, Shanky Kumar Jain
  • Publication number: 20210406176
    Abstract: An apparatus having a memory array. The memory array has a first section and a second section. The first section includes a first sub-array of memory cells made up of a first type of memory. The second section includes a second sub-array of memory cells made up of the first type of memory, with each memory cell of the second sub-array configured differently from each cell of the first sub-array. Alternatively, the second section can include memory cells made up of a second type of memory that is different from the first type of memory. Either way, the memory cells in the second sub-array, whether of the second type of memory or of the differently configured first type, have less memory latency than each memory cell of the first type of memory in the first sub-array.
    Type: Application
    Filed: September 8, 2021
    Publication date: December 30, 2021
    Inventor: Dmitri Yudanov
  • Publication number: 20210398578
    Abstract: Methods, systems, and devices for a magnetic cache for a memory device are described. Magnetic storage elements (e.g., magnetic memory cells, such as spin-transfer torque (STT) memory cells or magnetic tunnel junction (MTJ) memory cells) may be configured to act as a cache for a memory array, where the memory array includes a different type of memory cells. The magnetic storage elements may be inductively coupled to access lines for the memory array. Based on this inductive coupling, when a memory value is written to or read from a memory cell of the array, the memory value may concurrently be written to a magnetic storage element based on associated current through an access line used to write or read the memory cell. Subsequent read requests may be executed by reading the memory value from the magnetic storage element rather than from the memory cell of the array.
    Type: Application
    Filed: June 22, 2020
    Publication date: December 23, 2021
    Inventor: Dmitri A. Yudanov
  • Publication number: 20210397932
    Abstract: Methods, apparatuses, and systems for in-or near-memory processing are described. Bits of a first number may be stored on a number of memory elements, wherein each memory element of the number of memory elements intersects a bit line and a word line of a number of word lines. A number of signals corresponding to bits of a second number may be driven on the number of word lines to generate a number of output signals. A value equal to a product of the first number and the second number may be generated based on the number of output signals.
    Type: Application
    Filed: June 23, 2020
    Publication date: December 23, 2021
    Inventors: Dmitri Yudanov, Sean S. Eilert, Hernan A. Castro, William A. Melton
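    The Python sketch below follows the flow in the abstract: bits of a first number are held by memory elements along a bit line, signals corresponding to bits of a second number are driven on the word lines, and the output signals are combined into the product; the shift-and-add combination step is one plausible reading, not the exact circuit.
      def store_bits(value, width=8):
          return [(value >> i) & 1 for i in range(width)]   # one bit per memory element

      def multiply_in_memory(a, b, width=8):
          cells = store_bits(a, width)                      # first number stored in the array
          product = 0
          for j in range(width):                            # bits of the second number
              drive = (b >> j) & 1                          # signal driven on the word lines
              outputs = [cell & drive for cell in cells]    # per-element output signals
              partial = sum(bit << i for i, bit in enumerate(outputs))
              product += partial << j                       # combine the output signals
          return product

      print(multiply_in_memory(13, 11))   # 143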
  • Publication number: 20210391004
    Abstract: Systems and methods for performing a pattern matching operation in a memory device are disclosed. The memory device may include a controller and memory arrays where the memory arrays store different patterns along bit lines. An input pattern is applied to the memory array(s) to determine whether the pattern is stored in the memory device. Word lines may be activated in series or in parallel to search for patterns within the memory array. The memory array may include memory cells that store binary digits, discrete values or analog values.
    Type: Application
    Filed: June 16, 2020
    Publication date: December 16, 2021
    Inventor: Dmitri Yudanov
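    A toy software model of the pattern matching described above: candidate patterns are stored along bit lines (one per column), the input pattern is applied via the word lines, and columns whose stored values all match report a hit; only binary values are modeled here, though the abstract also covers discrete and analog values.
      patterns = {            # column (bit line) index -> stored pattern, one bit per word line
          0: [1, 0, 1, 1],
          1: [0, 0, 1, 0],
          2: [1, 1, 1, 1],
      }

      def search(query):
          matches = []
          for column, stored in patterns.items():
              # activating the word lines compares the query against each column
              if all(q == s for q, s in zip(query, stored)):
                  matches.append(column)
          return matches

      print(search([1, 0, 1, 1]))   # [0] -> the input pattern is stored on bit line 0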
  • Patent number: 11169930
    Abstract: Systems, methods and apparatuses for fine-grain data migration when using Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before accessing a virtual memory address in a sub-region, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to perform the memory access. Otherwise, the sub-region is cached from the borrowed memory region to the local memory before the physical memory address is used.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: November 9, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
  • Publication number: 20210342274
    Abstract: Systems, methods and apparatuses to accelerate accessing of borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
    Type: Application
    Filed: July 14, 2021
    Publication date: November 4, 2021
    Inventors: Samuel E. Bradshaw, Ameen D. Akel, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
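    An illustrative model of the address path described above: the MMU translates a virtual address and, when the mapping points at memory borrowed from a remote device, directs the access to the communication device instead of the local memory bus; page sizes, table layout, and names are assumptions.
      PAGE = 4096

      page_table = {                  # virtual page -> (where it lives, physical page)
          0: ("local", 7),
          1: ("borrowed", 3),         # page borrowed from a remote device
      }

      def mmu_load(vaddr):
          vpage, offset = divmod(vaddr, PAGE)
          where, ppage = page_table[vpage]
          paddr = ppage * PAGE + offset
          if where == "local":
              return f"memory bus read @ {paddr:#x}"
          return f"read of borrowed memory @ {paddr:#x} via the communication device"

      print(mmu_load(0x0123))   # served over the local memory bus
      print(mmu_load(0x1123))   # served over the network by the communication device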
  • Publication number: 20210334234
    Abstract: The present disclosure is directed to a distributed graphics processing unit (GPU) architecture that includes an array of processing nodes. Each processing node may include a GPU node that is coupled to its own fast memory unit and its own storage unit. The fast memory unit and storage unit may be integrated into a single unit or may be separately coupled to the GPU node. The processing node may have its fast memory unit coupled to both the GPU node and the storage unit. The various architectures provide a GPU-based system that may be treated as a storage unit, such as a solid state drive (SSD), that performs onboard processing to perform memory-oriented operations. In this respect, the system may be viewed as a “smart drive” for big-data near-storage processing.
    Type: Application
    Filed: April 22, 2020
    Publication date: October 28, 2021
    Inventor: Dmitri Yudanov
  • Publication number: 20210303265
    Abstract: The present disclosure is directed to systems and methods for a memory device such as, for example, a Processing-In-Memory Device that is configured to perform multiplication operations in memory using a popcount operation. A multiplication operation may include a summation of multipliers being multiplied with corresponding multiplicands. The inputs may be arranged in particular configurations within a memory array. Sense amplifiers may be used to perform the popcount by counting active bits along bit lines. One or more registers may accumulate results for performing the multiplication operations.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Inventor: Dmitri Yudanov
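    A rough sketch of multiplication by popcount in the spirit of the abstract: for each pair of bit positions, the count of operand pairs with both bits active (what the sense amplifiers would report along a bit line) is shifted by the combined bit weight and accumulated in a register; the data layout in the array is an assumption.
      def popcount_dot_product(multipliers, multiplicands, width=8):
          total = 0
          for i in range(width):             # bit position within the multiplicands
              for j in range(width):         # bit position within the multipliers
                  # popcount: how many operand pairs have both of these bits set
                  count = sum(((a >> i) & 1) & ((b >> j) & 1)
                              for a, b in zip(multiplicands, multipliers))
                  total += count << (i + j)  # accumulate the weighted count
          return total

      print(popcount_dot_product([3, 5], [7, 2]))   # 3*7 + 5*2 = 31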
  • Publication number: 20210294746
    Abstract: A memory module system with a global shared context. A memory module system can include a plurality of memory modules and at least one processor, which can implement the global shared context. The memory modules of the system can provide the global shared context at least in part by providing an address space shared between the modules and applications running on the modules. The address space sharing can be achieved by having logical addresses global to the modules, and each logical address can be associated with a certain physical address of a specific module.
    Type: Application
    Filed: March 19, 2020
    Publication date: September 23, 2021
    Inventor: Dmitri Yudanov
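    A minimal sketch of the global shared context described above: one logical address space is shared by all modules, and each logical address resolves to a physical address on a specific module; the interleaving scheme and module size below are invented for illustration.
      MODULE_SIZE = 1 << 20            # assumed physical memory per module

      def resolve(logical_addr, num_modules=4):
          # interleave the shared logical space across the modules
          module_id = (logical_addr // MODULE_SIZE) % num_modules
          physical_addr = logical_addr % MODULE_SIZE
          return module_id, physical_addr

      # the same logical address means the same location to every module and application
      print(resolve(0x0000AB))         # (0, 171): module 0, offset 0xAB
      print(resolve(0x1000AB))         # (1, 171): module 1, offset 0xAB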