Patents by Inventor Dmitri Yudanov

Dmitri Yudanov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210294746
    Abstract: A memory module system with a global shared context. The system can include a plurality of memory modules and at least one processor, which can implement the global shared context. The memory modules of the system can provide the global shared context at least in part by providing an address space shared between the modules and the applications running on the modules. The address space sharing can be achieved by having logical addresses that are global to the modules, with each logical address associated with a physical address of a specific module. (A minimal address-translation sketch follows this entry.)
    Type: Application
    Filed: March 19, 2020
    Publication date: September 23, 2021
    Inventor: Dmitri Yudanov
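    A minimal Python sketch of the address-sharing idea above, assuming a fixed per-module size and a simple division-based mapping; the constants and function name are illustrative assumptions, not part of the application.
    ```python
    # Hypothetical sketch: translate a global logical address into a
    # (module id, physical address) pair, as one way to realize an address
    # space shared across memory modules. All sizes/names are illustrative.

    MODULE_SIZE = 1 << 30  # assume each module exposes 1 GiB of physical memory


    def translate(global_logical_addr: int, num_modules: int = 4) -> tuple[int, int]:
        """Map a logical address that is global to all modules onto a
        specific module and a physical offset within that module."""
        if not 0 <= global_logical_addr < num_modules * MODULE_SIZE:
            raise ValueError("address outside the global shared context")
        module_id = global_logical_addr // MODULE_SIZE      # which module owns it
        physical_addr = global_logical_addr % MODULE_SIZE   # offset inside that module
        return module_id, physical_addr


    # Example: an application running on any module resolves the same
    # global logical address to the same (module, physical) location.
    print(translate(3 * MODULE_SIZE + 4096))  # -> (3, 4096)
    ```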
  • Publication number: 20210294741
    Abstract: An apparatus having a memory array. The memory array having a first section and a second section. The first section of the memory array including a first sub-array of memory cells made up of a first type of memory. The second section of the memory array including a second sub-array of memory cells made up of the first type of memory, with each memory cell of the second sub-array configured differently from each cell of the first sub-array. Alternatively, the second section can include memory cells made up of a second type of memory that is different from the first type of memory. Either way, the memory cells of the second sub-array, whether of the second type of memory or of the differently configured first type, have lower latency than the memory cells of the first type of memory in the first sub-array. (A brief placement sketch follows this entry.)
    Type: Application
    Filed: March 19, 2020
    Publication date: September 23, 2021
    Inventor: Dmitri Yudanov
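    A minimal sketch of how data might be steered between the two sub-arrays, assuming a hypothetical access-rate threshold; the class, threshold, and keys are illustrative only.
    ```python
    # Hypothetical sketch: place data into the low-latency second sub-array
    # when it is accessed frequently, otherwise into the first sub-array.
    # Thresholds and structure names are illustrative assumptions.

    from dataclasses import dataclass, field


    @dataclass
    class TieredArray:
        fast_section: dict = field(default_factory=dict)   # second sub-array (lower latency)
        slow_section: dict = field(default_factory=dict)   # first sub-array

        def place(self, key: str, value: bytes, access_rate: float) -> str:
            """Route hot data to the low-latency sub-array."""
            if access_rate > 100.0:          # assumed accesses/second threshold
                self.fast_section[key] = value
                return "second sub-array (low latency)"
            self.slow_section[key] = value
            return "first sub-array"


    array = TieredArray()
    print(array.place("frame_buffer", b"\x00" * 16, access_rate=500.0))
    print(array.place("archive_blob", b"\x00" * 16, access_rate=0.2))
    ```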
  • Publication number: 20210294608
    Abstract: The present disclosure is directed to systems and methods for a Processing-In-Memory Device that is configured to perform dot product calculations. A sequence control may be used to store data in a memory array according to an allocation pattern. The cells of the memory array may correspond to array elements of the data. The sequence control may apply another array of data to groups of elements within the memory array using the allocation pattern to perform dot product calculations. The dot product calculations may be used, for example, to implement a layer in a convolutional neural network. (See the dot-product sketch after this entry.)
    Type: Application
    Filed: March 19, 2020
    Publication date: September 23, 2021
    Inventor: Dmitri Yudanov
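    A small sketch of the grouped dot-product idea, assuming a simple fixed-size allocation pattern; the names and sizes are illustrative and do not reproduce the disclosed sequence control.
    ```python
    # Hypothetical sketch: lay out a weight vector across "memory cells"
    # according to a simple allocation pattern, then apply an input vector
    # group by group to accumulate a dot product, as a PIM device might.
    # The pattern, sizes, and names are illustrative assumptions.

    def allocate(values, group_size):
        """Split values into fixed-size groups (the allocation pattern)."""
        return [values[i:i + group_size] for i in range(0, len(values), group_size)]


    def pim_dot(weights, inputs, group_size=4):
        """Accumulate the dot product one group of cells at a time."""
        assert len(weights) == len(inputs)
        total = 0
        for w_group, x_group in zip(allocate(weights, group_size),
                                    allocate(inputs, group_size)):
            total += sum(w * x for w, x in zip(w_group, x_group))
        return total


    # A convolution layer reduces to many such dot products between a filter
    # and patches of the input feature map.
    print(pim_dot([1, 2, 3, 4, 5, 6, 7, 8], [1, 0, 1, 0, 1, 0, 1, 0]))  # -> 16
    ```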
  • Patent number: 11126548
    Abstract: An apparatus having a memory array. The memory array having a first section and a second section. The first section of the memory array including a first sub-array of memory cells made up of a first type of memory. The second section of the memory array including a second sub-array of memory cells made up of the first type of memory, with each memory cell of the second sub-array configured differently from each cell of the first sub-array. Alternatively, the second section can include memory cells made up of a second type of memory that is different from the first type of memory. Either way, the memory cells of the second sub-array, whether of the second type of memory or of the differently configured first type, have lower latency than the memory cells of the first type of memory in the first sub-array.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: September 21, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Dmitri Yudanov
  • Publication number: 20210263856
    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to its local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the set for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time. (A small code sketch follows this entry.)
    Type: Application
    Filed: May 12, 2021
    Publication date: August 26, 2021
    Inventors: Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
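    A toy sketch of the two-period mapping and remote processing described above; the VirtualRegion and RemoteDevice classes are hypothetical stand-ins for the actual Memory-as-a-Service machinery.
    ```python
    # Hypothetical sketch: a virtual address region backed by local memory
    # while the app runs locally (period 1), then remapped to a remote
    # device's memory (period 2), which can also be asked to process the
    # data in place. Class and method names are illustrative assumptions.

    class VirtualRegion:
        def __init__(self, size):
            self.size = size
            self.backing = ("local", bytearray(size))  # period 1: local memory

        def migrate_to_remote(self, remote):
            """Period 2: remap the region onto a remote device's memory."""
            remote.memory[id(self)] = self.backing[1]
            self.backing = ("remote", remote)

        def process_remotely(self, fn):
            """Ask the remote device to process the region's data in place."""
            kind, target = self.backing
            if kind != "remote":
                raise RuntimeError("region is not mapped to a remote device")
            return target.run(id(self), fn)


    class RemoteDevice:
        def __init__(self):
            self.memory = {}

        def run(self, region_id, fn):
            return fn(self.memory[region_id])


    region = VirtualRegion(8)
    remote = RemoteDevice()
    region.migrate_to_remote(remote)
    print(region.process_remotely(len))  # remote computes on the region -> 8
    ```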
  • Patent number: 11100007
    Abstract: Systems, methods and apparatuses to accelerate accessing of borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory. (See the translation sketch after this entry.)
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: August 24, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Ameen D. Akel, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
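    A rough sketch of the virtual-to-physical routing idea, assuming a toy page table in which each page is either local or borrowed from a remote lender; the addresses, host, and names are illustrative.
    ```python
    # Hypothetical sketch: an MMU-style translation that resolves a virtual
    # page either to local RAM or to memory borrowed from a remote device
    # reachable over the network. All names and sizes are illustrative.

    PAGE_SIZE = 4096

    # page number -> ("local", physical page) or ("borrowed", remote host, remote page)
    page_table = {
        0: ("local", 7),
        1: ("borrowed", "10.0.0.5", 42),
    }


    def access(virtual_addr: int) -> str:
        page, offset = divmod(virtual_addr, PAGE_SIZE)
        entry = page_table[page]
        if entry[0] == "local":
            return f"read local RAM: physical addr {entry[1] * PAGE_SIZE + offset}"
        # Borrowed page: the MMU would instruct the communication device to
        # fetch the data from the lender over the network connection.
        _, host, remote_page = entry
        return f"fetch from {host}: remote page {remote_page}, offset {offset}"


    print(access(100))               # local page
    print(access(PAGE_SIZE + 256))   # borrowed page
    ```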
  • Patent number: 11061819
    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to its local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the set for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: July 13, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
  • Publication number: 20210182119
    Abstract: Systems and methods for implementing shadow computations in base stations. The systems and methods can include a method that includes initiating, at a base station (such as a cellular base station), a shadow computation of a main computation executing for a mobile device. The main computation can include a computational task, and the shadow computation can be at least a part of or a derivative of the main computation. The method can also include executing, by the base station, the shadow computation. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 13, 2019
    Publication date: June 17, 2021
    Inventor: Dmitri Yudanov
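    A minimal sketch of running a shadow computation alongside a main computation, with a thread pool standing in for the device and the base station; the task itself and all names are illustrative assumptions.
    ```python
    # Hypothetical sketch: a base station runs a "shadow" of a mobile
    # device's main computation so a checked or precomputed result is
    # available. The task and names here are illustrative assumptions.

    from concurrent.futures import ThreadPoolExecutor


    def main_computation(frames):
        """Main computation running for the mobile device."""
        return sum(len(f) for f in frames)


    def shadow_computation(frames):
        """Derivative of the main computation executed at the base station."""
        return sum(len(f) for f in frames)


    frames = [b"abc", b"de", b"fghij"]
    with ThreadPoolExecutor() as pool:
        device_future = pool.submit(main_computation, frames)    # on the device
        shadow_future = pool.submit(shadow_computation, frames)  # at the base station

    # The base station's shadow result can validate or stand in for the
    # device's result if the device is delayed or disconnected.
    assert shadow_future.result() == device_future.result()
    print(shadow_future.result())  # -> 10
    ```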
  • Publication number: 20210182220
    Abstract: A memory module having a plurality of memory chips, at least one controller (e.g., a central processing unit or special-purpose controller), and at least one interface device configured to communicate input and output data for the memory module. The input and output data bypasses at least one processor (e.g., a central processing unit) of a computing device in which the memory module is installed. And, the at least one interface device can be configured to communicate the input and output data to at least one other memory module in the computing device. Also, the memory module can be one module in a plurality of memory modules of a memory module system.
    Type: Application
    Filed: December 13, 2019
    Publication date: June 17, 2021
    Inventor: Dmitri Yudanov
  • Publication number: 20210157718
    Abstract: Reduction of page migration while maintaining the benefits of migration can include operations that include scoring objects and executables of application processes of a computing device based on the placement and movement of the objects and executables in memory of the device, as well as grouping the objects and executables based on that placement and movement. The operations can also include controlling loading and storing, in a first type of memory, at a first plurality of pages of the memory, a first group of the objects and executables at least according to the scoring. And, the operations can include controlling loading and storing, in at least one additional type of memory, at one or more additional pluralities of pages of the memory, at least one additional group of the objects and executables at least according to the scoring. (A scoring sketch follows this entry.)
    Type: Application
    Filed: November 25, 2019
    Publication date: May 27, 2021
    Inventors: Dmitri Yudanov, Samuel E. Bradshaw
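    A small sketch of scoring and grouping objects for placement, assuming hypothetical access/migration counts and an arbitrary scoring weight; nothing here is the claimed scoring function.
    ```python
    # Hypothetical sketch: score objects by how often they are accessed and
    # how often they migrate, group them, and assign groups to memory types.
    # Weights, thresholds, and names are illustrative assumptions.

    objects = {
        "ui_textures":   {"accesses": 900, "migrations": 2},
        "network_cache": {"accesses": 120, "migrations": 9},
        "cold_logs":     {"accesses": 3,   "migrations": 0},
    }


    def score(stats):
        # Frequent access raises the score; frequent migration lowers it,
        # since the goal is to reduce page movement.
        return stats["accesses"] - 50 * stats["migrations"]


    scored = sorted(objects, key=lambda name: score(objects[name]), reverse=True)
    midpoint = len(scored) // 2 or 1
    placement = {name: ("first memory type (fast pages)" if i < midpoint
                        else "additional memory type (slower pages)")
                 for i, name in enumerate(scored)}
    print(placement)
    ```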
  • Publication number: 20210157646
    Abstract: Enhancement or reduction of page migration can include operations that include scoring, in a computing device, each executable of at least a first group and a second group of executables in the computing device. The executables can be related to user interface elements of applications and associated with pages of memory in the computing device. For each executable, the scoring can be based at least partly on the number of user interface elements using the executable. The first group can be located at first pages of the memory, and the second group can be located at second pages. When the scoring of the executables in the first group is higher than the scoring of the executables in the second group, the operations can include allocating or migrating the first pages to a first type of memory, and allocating or migrating the second pages to a second type of memory.
    Type: Application
    Filed: November 25, 2019
    Publication date: May 27, 2021
    Inventors: Dmitri Yudanov, Samuel E. Bradshaw
  • Publication number: 20210132690
    Abstract: An apparatus having a computing device and a user interface, such as a user interface having a display that can provide a graphical user interface (GUI). The apparatus also includes a camera and a processor in the computing device. The camera can be connected to the computing device and/or the user interface, and the camera can be configured to capture the pupil location and/or eye movement of a user. The processor can be configured to: identify a visual focal point of the user relative to the user interface based on the captured pupil location, and/or identify a type of eye movement of the user (such as a saccade) based on the captured eye movement. The processor can also be configured to control parameters of the user interface based at least partially on the identified visual focal point and/or the identified type of eye movement. (See the saccade-detection sketch after this entry.)
    Type: Application
    Filed: November 5, 2019
    Publication date: May 6, 2021
    Inventors: Dmitri Yudanov, Samuel E. Bradshaw
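    A minimal sketch of classifying captured eye movement and adjusting a UI parameter, assuming a simple angular-velocity threshold for saccades; the threshold and the responses are illustrative assumptions.
    ```python
    # Hypothetical sketch: classify captured eye movement as a saccade or a
    # fixation from gaze-point velocity, then adjust a UI parameter. The
    # threshold and parameter names are illustrative assumptions.

    import math

    SACCADE_DEG_PER_SEC = 30.0  # assumed angular-velocity threshold


    def classify(p0, p1, dt):
        """p0, p1: gaze angles in degrees (x, y); dt: seconds between samples."""
        velocity = math.dist(p0, p1) / dt
        return "saccade" if velocity > SACCADE_DEG_PER_SEC else "fixation"


    def adjust_ui(movement_type):
        # e.g., pause expensive redraws during a saccade (saccadic suppression),
        # or raise detail at the fixated focal point.
        return {"saccade": "pause expensive redraws",
                "fixation": "increase detail at focal point"}[movement_type]


    kind = classify((10.0, 5.0), (14.0, 5.0), dt=0.01)  # 400 deg/s -> saccade
    print(kind, "->", adjust_ui(kind))
    ```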
  • Publication number: 20210132689
    Abstract: An apparatus having a wearable structure, a computing device, a display, and a camera. The wearable structure is configured to be worn by a user and can be connected to the computing device, the display, and/or the camera. The computing device can be connected to the wearable structure, the display, and/or the camera. The display can be connected to the wearable structure, the computing device, and/or the camera. The display is configured to provide a graphical user interface (GUI). The camera can be connected to the computing device, the wearable structure, and/or the display. The camera is configured to capture eye movement of the user. A processor in the computing device is configured to identify one or more eye gestures from the captured eye movement. And, the processor is configured to control one or more parameters of the display and/or the GUI based on the identified eye gesture(s).
    Type: Application
    Filed: November 5, 2019
    Publication date: May 6, 2021
    Inventors: Dmitri Yudanov, Samuel E. Bradshaw
  • Publication number: 20210103463
    Abstract: Customized root processes for groups of applications in a computing device. A computing device (e.g., a mobile device) can monitor usage of applications. The device can then store data related to the usage of the applications, and group the applications into groups according to the stored data. The device can customize and execute a root process for a group of applications according to usage common to each application in the group. The device can generate patterns of prior executions shared amongst the applications in the group based on the stored data common to each application in the group, and execute the root process of the group according to the patterns. The device can receive a request to start an application from the group from a user of the device, and start the application upon receiving the request by using the root process of the group of applications. (A grouping sketch follows this entry.)
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Dmitri Yudanov, Samuel E. Bradshaw
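    A toy sketch of grouping applications from stored usage data so that each group can share one customized root process; the usage features and the grouping rule are assumptions for illustration.
    ```python
    # Hypothetical sketch: group applications by similarity of their usage
    # data, so one customized root process can be prepared per group.
    # The usage features, grouping rule, and names are illustrative.

    usage = {
        "mail":       {"libs": {"net", "crypto"}, "avg_launches_per_day": 12},
        "chat":       {"libs": {"net", "crypto"}, "avg_launches_per_day": 30},
        "photo_edit": {"libs": {"gpu", "codecs"}, "avg_launches_per_day": 2},
    }


    def group_key(stats):
        # Apps that share the same library set get one shared root process.
        return frozenset(stats["libs"])


    groups = {}
    for app, stats in usage.items():
        groups.setdefault(group_key(stats), []).append(app)

    for libs, apps in groups.items():
        # A real system would pre-load these libraries into the group's root
        # process so any app in the group can start from it quickly.
        print(f"root process preloading {sorted(libs)} serves {sorted(apps)}")
    ```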
  • Publication number: 20210103446
    Abstract: In a mobile device, processes of an application can be monitored and scored for initial data distribution. Specifically, a method can include monitoring processes of an application, and scoring objects or components used by the processes to determine placement of the objects or components in memory during initiation of the application. The method can also include, during initiation of the application, loading, into a first portion of the memory, at least partially, the objects or components scored at a first level. The method can also include, during initiation of the application, loading, into a second portion of the memory, at least partially, the objects or components scored at a second level. The objects or components scored at the second level can be less critical to the application than the objects or components scored at the first level.
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Dmitri Yudanov, Samuel E. Bradshaw
  • Publication number: 20210103462
    Abstract: A computing device (e.g., a mobile device) can execute a root process of an application to an initial point according to patterns of prior executions of the application. The root process can be one of many respective customized root processes of individual applications in the computing device. The device can receive a request to start the application from a user of the device. And, the device can start the application upon receiving the request to start the application and by using the root process of the application. At least one of the executing, receiving, or starting can be performed by an operating system in the device. The device can also fork the root process of the application into multiple processes, and can start the application upon receiving the request by using at least one of the multiple processes. (See the fork sketch after this entry.)
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Dmitri Yudanov, Samuel E. Bradshaw
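    A minimal POSIX-only sketch of starting an application from a pre-initialized root process via fork; the "root state" prepared here is an illustrative stand-in for patterns of prior executions.
    ```python
    # Hypothetical sketch (POSIX-only): keep a pre-initialized "root" process
    # warm, then fork it when the user requests the application, so startup
    # skips repeated initialization. Names and the work done are illustrative.

    import os
    import sys


    def prepare_root_state():
        # Expensive, application-specific setup done ahead of the request,
        # based on patterns of prior executions.
        return {"preloaded_config": {"theme": "dark"}, "warm_caches": True}


    def serve_request(state):
        pid = os.fork()
        if pid == 0:                      # child: becomes the application process
            print(f"app started from root process, config={state['preloaded_config']}")
            sys.exit(0)
        os.waitpid(pid, 0)                # parent: root process keeps running


    state = prepare_root_state()          # root process advanced to an initial point
    serve_request(state)                  # user request arrives; fork and start
    ```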
  • Publication number: 20210073623
    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Spiking events in a spiking neural network (SNN) may be processed via a memory system. A memory system may store data corresponding to a group of destination neurons. The memory system may, at each time interval of the SNN, pass through data corresponding to a group of pre-synaptic spike events from respective source neurons. The data corresponding to the group of pre-synaptic spike events may be subsequently stored in the memory system. (A time-step sketch follows this entry.)
    Type: Application
    Filed: August 31, 2020
    Publication date: March 11, 2021
    Inventors: Dmitri Yudanov, Sean S. Eilert, Hernan A. Castro, Ameen D. Akel
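    A small NumPy sketch of one way the per-time-interval flow could look: spike events from source neurons pass through to memory-resident destination neurons; the weights, threshold, and sizes are illustrative assumptions.
    ```python
    # Hypothetical sketch: at each SNN time step, pre-synaptic spike events
    # stream through a memory-resident array of destination neurons, which
    # integrate the weighted spikes and fire past a threshold.

    import numpy as np

    weights = np.array([[0.5, 0.0, 0.2],   # source-to-destination weights
                        [0.1, 0.7, 0.0],
                        [0.0, 0.3, 0.4],
                        [0.6, 0.0, 0.1]])
    potential = np.zeros(3)                # destination neuron state in "memory"
    THRESHOLD = 0.8

    # Pre-synaptic spike events per time interval (1 = source neuron spiked).
    spike_stream = [np.array([1, 0, 0, 1]),
                    np.array([0, 1, 1, 0])]

    for t, spikes in enumerate(spike_stream):
        potential += spikes @ weights      # pass spikes through to destinations
        fired = potential >= THRESHOLD
        potential[fired] = 0.0             # reset neurons that fired
        print(f"t={t}: fired destinations {np.flatnonzero(fired).tolist()}")
        # The incoming spike events could then be stored in the memory system
        # for later use (e.g., learning rules or replay).
    ```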
  • Publication number: 20210073622
    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Spiking events in a spiking neural network may be processed via a memory system. A memory system may store a group of destination neurons, and at each time interval in a series of time intervals of a spiking neural network (SNN), pass through a group of pre-synaptic spike events from respective source neurons, wherein the group of pre-synaptic spike events are subsequently stored in memory.
    Type: Application
    Filed: May 29, 2020
    Publication date: March 11, 2021
    Inventors: Dmitri Yudanov, Sean S. Eilert, Hernan A. Castro, Ameen D. Akel
  • Publication number: 20210072986
    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-serial way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass, as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of the memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data. (See the bit-serial addition sketch after this entry.)
    Type: Application
    Filed: December 17, 2019
    Publication date: March 11, 2021
    Inventors: Dmitri Yudanov, Sean S. Eilert, Sivagnanam Parthasarathy, Shivasankar Gunasekaran, Ameen D. Akel
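    A short sketch of bit-serial addition using only XOR/AND per fetched bit and a single carry flag, in the spirit of the logic described; the bit widths and helper names are illustrative.
    ```python
    # Hypothetical sketch: bit-serial addition of two numbers held as bit
    # strings, using only XOR/AND per bit and one carry flag register,
    # mirroring the kind of logic placed near a memory array.

    def bit_serial_add(a_bits, b_bits):
        """a_bits, b_bits: least-significant-bit-first lists of 0/1."""
        carry = 0                        # carry flag kept in a register
        out = []
        for a, b in zip(a_bits, b_bits): # fetch one bit of each operand per cycle
            s = a ^ b ^ carry            # sum bit via XOR logic
            carry = (a & b) | (carry & (a ^ b))  # carry via AND/XOR logic
            out.append(s)
        out.append(carry)
        return out


    def to_bits(n, width):
        return [(n >> i) & 1 for i in range(width)]


    def from_bits(bits):
        return sum(b << i for i, b in enumerate(bits))


    a, b = 13, 22
    print(from_bits(bit_serial_add(to_bits(a, 8), to_bits(b, 8))))  # -> 35
    ```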
  • Publication number: 20210072987
    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-parallel way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass, as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of a memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.
    Type: Application
    Filed: April 6, 2020
    Publication date: March 11, 2021
    Inventors: Dmitri Yudanov, Sean S. Eilert, Sivagnanam Parthasarathy, Shivasankar Gunasekaran, Ameen D. Akel