Patents by Inventor Dmitri Yudanov

Dmitri Yudanov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190252385
    Abstract: A modified 1C1T cell detects when the charge in the memory cell drops below a predetermined voltage due to leakage and asserts a refresh signal indicating that refresh needs to be performed on those memory cells associated with the modified 1C1T memory cell. The associated memory cells may be a row, a bank, or other groupings of memory cells. Because temperature affects leakage current, the modified memory cell automatically adjusts for temperature.
    Type: Application
    Filed: February 13, 2018
    Publication date: August 15, 2019
    Inventors: Dmitri Yudanov, David A. Roberts
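A minimal behavioral sketch, in Python rather than circuit form, of the refresh-detection idea in publication 20190252385 above; the leakage model, threshold voltage, and temperature coefficient are illustrative assumptions, not values from the patent.
```python
# Behavioral model: a modified reference cell leaks charge at a temperature-
# dependent rate and asserts a refresh signal for its associated memory cells
# once its voltage falls below a predetermined threshold. Constants are assumed.

V_FULL = 1.0         # volts immediately after refresh (assumed)
V_THRESHOLD = 0.6    # predetermined trip voltage (assumed)

def leakage_per_ms(temp_c: float) -> float:
    """Leakage per millisecond; grows with temperature (illustrative model)."""
    return 0.001 * (1.0 + 0.05 * (temp_c - 25.0))

def ms_until_refresh(temp_c: float, max_ms: int = 100_000) -> int:
    """Time at which the modified cell asserts refresh for its associated cells."""
    voltage = V_FULL
    for t in range(1, max_ms + 1):
        voltage -= leakage_per_ms(temp_c)
        if voltage < V_THRESHOLD:
            return t   # refresh signal asserted
    return max_ms

# Hotter dies leak faster, so refresh is requested sooner -- the same
# temperature self-adjustment the modified cell provides in hardware.
print(ms_until_refresh(25.0), ms_until_refresh(85.0))
```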
  • Publication number: 20190196742
    Abstract: A technique for accessing memory in an accelerated processing device coupled to stacked memory dies is provided herein. The technique includes receiving a memory access request from an execution unit and identifying whether the memory access request corresponds to memory cells of the stacked dies that are considered local to the execution unit or non-local. For local accesses, the access is made "directly," that is, without using a bus. A control die coordinates operations for such local accesses, activating the particular through-silicon vias associated with the memory cells that hold the data for the access. Non-local accesses are made via a distributed cache fabric and an interconnect bus in the control die.
    Type: Application
    Filed: December 21, 2017
    Publication date: June 27, 2019
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Dmitri Yudanov, Jiasheng Chen
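A rough Python sketch of the local/non-local routing decision described in publication 20190196742; the address map, the TSV-group selection, and the fabric interface are hypothetical stand-ins for the hardware structures named in the abstract.
```python
# Hypothetical model: each execution unit sits under one stacked-memory slice.
# Addresses inside that slice are accessed directly through TSVs coordinated by
# the control die; everything else goes over the distributed cache fabric and
# the interconnect bus. Slice and TSV-group sizes are assumed.

SLICE_SIZE = 1 << 28      # bytes of stacked memory local to one execution unit
TSV_GROUP_SIZE = 1 << 20  # bytes served by one group of through-silicon vias

def local_slice(execution_unit: int) -> range:
    base = execution_unit * SLICE_SIZE
    return range(base, base + SLICE_SIZE)

def route_access(execution_unit: int, address: int, is_write: bool):
    op = "write" if is_write else "read"
    if address in local_slice(execution_unit):
        tsv_group = (address % SLICE_SIZE) // TSV_GROUP_SIZE
        return ("direct", tsv_group, op)          # no bus: control die activates TSVs
    return ("fabric", address // SLICE_SIZE, op)  # cache fabric + interconnect bus

print(route_access(0, 0x0000_1000, is_write=False))  # ('direct', 0, 'read')
print(route_access(0, 0x3000_0000, is_write=True))   # ('fabric', 3, 'write')
```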
  • Publication number: 20190180176
    Abstract: An artificial neural network that includes first subnetworks to implement known functions and second subnetworks to implement unknown functions is trained. The first subnetworks are trained separately and in parallel on corresponding known training datasets to determine first parameter values that define the first subnetworks. The first subnetworks execute on a plurality of processing elements in a processing system. Input values from a network training data set are provided to the artificial neural network including the trained first subnetworks. Error values are generated by comparing output values produced by the artificial neural network to labeled output values of the network training data set. The second subnetworks are trained by back-propagating the error values to modify second parameter values that define the second subnetworks without modifying the first parameter values. The first and second parameter values are stored in a storage component.
    Type: Application
    Filed: December 13, 2017
    Publication date: June 13, 2019
    Inventors: Dmitri Yudanov, Nicholas Penha Malaya
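A small NumPy sketch of the training scheme in publication 20190180176: a pre-trained "known" subnetwork is frozen and only the "unknown" subnetwork's parameters are updated when error from the labeled outputs is propagated back. The linear layers, loss, and data are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

# "Known" subnetwork: pre-trained separately on its own dataset, then frozen.
W_known = np.array([[2.0, 0.0], [0.0, 3.0]])   # first parameter values (fixed)

# "Unknown" subnetwork: parameters learned from the network-level training data.
W_unknown = rng.normal(size=(1, 2))             # second parameter values

X = rng.normal(size=(64, 2))                    # network training inputs
W_true = np.array([[1.0, -1.0]])
Y = (W_true @ (W_known @ X.T)).T                # labeled output values

lr = 0.01
for _ in range(200):
    hidden = (W_known @ X.T).T                  # frozen first subnetwork
    pred = (W_unknown @ hidden.T).T             # trainable second subnetwork
    err = pred - Y                              # error vs. labeled outputs
    grad_unknown = err.T @ hidden / len(X)      # gradient only for W_unknown
    W_unknown -= lr * grad_unknown              # W_known is never modified

print(np.round(W_unknown, 3))                   # approaches W_true
```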
  • Publication number: 20190065100
    Abstract: A method and apparatus of performing a memory operation includes receiving a memory operation request at a first memory controller that is in communication with a second memory controller. The first memory controller forwards the memory operation request to the second memory controller. Upon receipt of the memory operation request, the second memory controller provides first information or second information depending on a condition of a pseudo-bank of the second memory controller and a type of the memory operation request.
    Type: Application
    Filed: August 24, 2017
    Publication date: February 28, 2019
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Dmitri Yudanov
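An illustrative Python sketch of the controller-to-controller forwarding in publication 20190065100 (and the corresponding grant, 10216454, below); the "pseudo-bank" condition, the request types, and what counts as "first" versus "second" information are assumptions made for the example.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    kind: str                     # "read" or "write" (assumed request types)
    address: int
    data: Optional[int] = None

class SecondController:
    """Owns a pseudo-bank whose condition decides which information to return."""
    def __init__(self):
        self.pseudo_bank = {}     # address -> value
        self.bank_ready = True    # the "condition" (assumed to be readiness)

    def handle(self, req: Request):
        if req.kind == "read" and self.bank_ready and req.address in self.pseudo_bank:
            return ("first_info", self.pseudo_bank[req.address])   # e.g., the data itself
        if req.kind == "write" and self.bank_ready:
            self.pseudo_bank[req.address] = req.data
            return ("first_info", "ack")
        return ("second_info", "retry-later")                      # e.g., a status/deferral

class FirstController:
    """Receives the request and forwards it to the second controller."""
    def __init__(self, second: SecondController):
        self.second = second

    def submit(self, req: Request):
        return self.second.handle(req)   # forward rather than service locally

mc2 = SecondController()
mc1 = FirstController(mc2)
print(mc1.submit(Request("write", 0x10, data=42)))   # ('first_info', 'ack')
print(mc1.submit(Request("read", 0x10)))             # ('first_info', 42)
```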
  • Patent number: 10216454
    Abstract: A method and apparatus of performing a memory operation includes receiving a memory operation request at a first memory controller that is in communication with a second memory controller. The first memory controller forwards the memory operation request to the second memory controller. Upon receipt of the memory operation request, the second memory controller provides first information or second information depending on a condition of a pseudo-bank of the second memory controller and a type of the memory operation request.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: February 26, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Dmitri Yudanov
  • Publication number: 20180341613
    Abstract: A method and apparatus of integrating memory stacks includes providing a first memory die of a first memory technology and a second memory die of a second memory technology. A first logic die is in communication with the first memory die of the first memory technology, and includes a first memory controller including a first memory control function for interpreting requests in accordance with a first protocol for the first memory technology. A second logic die is in communication with the second memory die of the second memory technology and includes a second memory controller including a second memory control function for interpreting requests in accordance with a second protocol for the second memory technology. A memory operation request is received at the first or second memory controller, and the memory operation request is performed in accordance with the associated first memory protocol or the second memory protocol.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Dmitri Yudanov, Michael Ignatowski
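A brief Python sketch of the per-technology dispatch in publication 20180341613; the protocol names, the address-based technology lookup, and the controller interface are illustrative assumptions.
```python
# Hypothetical model: two logic dies, each with a controller that interprets
# requests according to its own memory technology's protocol. Protocol names
# and the address split between the stacks are assumed for illustration.

class MemoryController:
    def __init__(self, protocol: str):
        self.protocol = protocol

    def perform(self, op: str, address: int):
        # Interpret the request in accordance with this technology's protocol.
        return f"{op}@{hex(address)} handled via {self.protocol}"

CONTROLLERS = {
    "tech_a": MemoryController("protocol_a"),   # first memory technology (assumed)
    "tech_b": MemoryController("protocol_b"),   # second memory technology (assumed)
}

def technology_of(address: int) -> str:
    # Assumed split: the low half of the address space maps to the first stack.
    return "tech_a" if address < 0x8000_0000 else "tech_b"

def memory_request(op: str, address: int):
    return CONTROLLERS[technology_of(address)].perform(op, address)

print(memory_request("read", 0x0000_1000))
print(memory_request("write", 0x9000_0000))
```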
  • Publication number: 20180301026
    Abstract: Various traffic management systems and methods are disclosed. In one aspect, a traffic management system is provided that includes an intelligent traffic signal for managing traffic flow at an intersection. The intelligent traffic signal is operable to periodically acquire the positions and speeds of one or more vehicles traveling toward the intersection, to compute a recommended speed at which the one or more vehicles can transit the intersection without stopping, based on those positions and speeds and a time to red or a time to green of the intelligent traffic signal, and to transmit the recommended speed to the one or more vehicles. The system further includes one or more vehicles, each with a receiver to receive the recommended speed and a device to convey it.
    Type: Application
    Filed: April 12, 2017
    Publication date: October 18, 2018
    Inventor: Dmitri Yudanov
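A worked Python sketch of the speed recommendation in publication 20180301026: from a vehicle's distance to the intersection, its current speed, and the signal's time to red or time to green, it picks a speed at which the vehicle transits without stopping. The speed bounds and the green/red decision rule are assumptions.
```python
def recommended_speed(distance_m, speed_mps, time_to_change_s, currently_green,
                      v_min=5.0, v_max=16.7):
    """Return a speed in m/s that lets the vehicle transit without stopping,
    or None if no speed within the bounds works. Bounds are illustrative."""
    if currently_green:
        # Arrive before the light turns red: distance / v <= time_to_red.
        needed = distance_m / time_to_change_s
        return min(max(needed, speed_mps, v_min), v_max) if needed <= v_max else None
    # Light is red: arrive after it turns green: distance / v >= time_to_green.
    allowed = distance_m / time_to_change_s
    return max(min(allowed, speed_mps, v_max), v_min) if allowed >= v_min else None

# 200 m from the intersection at 12 m/s, light turns red in 14 s:
print(recommended_speed(200, 12, 14, currently_green=True))   # ~14.3 m/s, speed up a little
# 200 m away, light turns green in 25 s:
print(recommended_speed(200, 12, 25, currently_green=False))  # 8.0 m/s, ease off
```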
  • Patent number: 10019283
    Abstract: A processing device includes a first memory that includes a context buffer. The processing device also includes a processor core to execute threads based on context information stored in registers of the processor core and a memory controller to selectively move a subset of the context information between the context buffer and the registers based on one or more latencies of the threads.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: July 10, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Dmitri Yudanov, Sergey Blagodurov, Arkaprava Basu, Sooraj Puthoor, Joseph L. Greathouse
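A simplified Python sketch of the mechanism in patent 10019283 (also published as 20160371082, below): a memory controller spills the register context of threads stalled on long-latency operations to a context buffer and restores it when the thread is ready again. The latency threshold and context layout are assumptions.
```python
LATENCY_THRESHOLD = 300   # cycles; spill contexts for stalls longer than this (assumed)

class ContextManager:
    def __init__(self):
        self.registers = {}       # thread_id -> context currently in core registers
        self.context_buffer = {}  # thread_id -> context spilled to the first memory

    def on_stall(self, thread_id: int, expected_latency: int):
        """Selectively move a long-latency thread's context out of the registers."""
        if expected_latency > LATENCY_THRESHOLD and thread_id in self.registers:
            self.context_buffer[thread_id] = self.registers.pop(thread_id)

    def on_ready(self, thread_id: int):
        """Bring the context back so the core can resume the thread."""
        if thread_id in self.context_buffer:
            self.registers[thread_id] = self.context_buffer.pop(thread_id)

mgr = ContextManager()
mgr.registers[7] = {"pc": 0x400, "r0": 1, "r1": 2}   # illustrative context
mgr.on_stall(7, expected_latency=900)                # long stall -> spill to buffer
print(7 in mgr.registers, 7 in mgr.context_buffer)   # False True
mgr.on_ready(7)                                      # latency resolved -> restore
print(7 in mgr.registers, 7 in mgr.context_buffer)   # True False
```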
  • Patent number: 9983655
    Abstract: A method and apparatus for performing inter-lane power management includes de-energizing one or more execution lanes upon a determination that the one or more execution lanes are to be predicated. Energy from the predicated execution lanes is redistributed to one or more active execution lanes.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: May 29, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mitesh R. Meswani, David A. Roberts, Dmitri Yudanov, Arkaprava Basu, Sergey Blagodurov
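A toy Python sketch of the inter-lane power redistribution in patent 9983655 (also published as 20170168546, below); treating "energy" as a per-lane power budget in milliwatts is an assumption made to keep the example concrete.
```python
def redistribute_power(lane_budgets_mw, predication_mask):
    """De-energize predicated lanes and share their budget among active lanes.

    lane_budgets_mw: per-lane power budget in milliwatts (assumed unit).
    predication_mask: True where the lane is predicated off for the instruction.
    """
    freed = sum(b for b, off in zip(lane_budgets_mw, predication_mask) if off)
    active = [i for i, off in enumerate(predication_mask) if not off]
    boost = freed / len(active) if active else 0.0
    return [0.0 if off else b + boost
            for b, off in zip(lane_budgets_mw, predication_mask)]

# 4 lanes at 10 mW each; lanes 1 and 3 are predicated off:
print(redistribute_power([10, 10, 10, 10], [False, True, False, True]))
# -> [20.0, 0.0, 20.0, 0.0]: active lanes absorb the freed budget
```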
  • Patent number: 9898287
    Abstract: A method, a non-transitory computer-readable medium, and a processor are presented for repacking dynamic wavefronts, each including multiple threads, during program code execution on a processing unit. If a branch instruction is detected, a determination is made whether all wavefronts following a same control path in the program code have reached a compaction point, which is the branch instruction. If no branch instruction is detected in executing the program code, a determination is made whether all wavefronts following the same control path have reached a reconvergence point, which is the beginning of a program code segment to be executed by both the taken branch and the not-taken branch from a previous branch instruction. The dynamic wavefronts are repacked with all threads that follow the same control path if all wavefronts following the same control path have reached the branch instruction or the reconvergence point.
    Type: Grant
    Filed: April 9, 2015
    Date of Patent: February 20, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sooraj Puthoor, Bradford M. Beckmann, Dmitri Yudanov
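A condensed Python sketch of the repacking condition in patent 9898287 (also published as 20160239302, below): once every wavefront following the same control path has reached the compaction point (the branch) or the reconvergence point, the threads on that path are regrouped into full wavefronts. The wavefront width and data layout are assumptions.
```python
WAVEFRONT_WIDTH = 4   # threads per wavefront (assumed; real hardware uses 32/64)

def repack(wavefronts, path_id, point):
    """Repack threads that follow control path `path_id` once all of its
    wavefronts have reached `point` (a compaction or reconvergence point)."""
    on_path = [wf for wf in wavefronts if wf["path"] == path_id]
    if not all(wf["at"] == point for wf in on_path):
        return wavefronts                       # not every wavefront is there yet
    threads = [t for wf in on_path for t in wf["threads"]]
    others = [wf for wf in wavefronts if wf["path"] != path_id]
    repacked = [{"path": path_id, "at": point,
                 "threads": threads[i:i + WAVEFRONT_WIDTH]}
                for i in range(0, len(threads), WAVEFRONT_WIDTH)]
    return others + repacked

# Two half-empty wavefronts on the taken path, both at the branch:
wfs = [{"path": "taken", "at": "branch", "threads": [0, 1]},
       {"path": "taken", "at": "branch", "threads": [4, 5]},
       {"path": "not_taken", "at": "body", "threads": [2, 3, 6, 7]}]
print(repack(wfs, "taken", "branch"))
# -> the two taken-path fragments become one full wavefront of [0, 1, 4, 5]
```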
  • Publication number: 20170371720
    Abstract: A method and a processing apparatus for accelerating program processing are provided. The apparatus includes a plurality of processors configured to process a plurality of tasks of a program, and a controller. The controller is configured to determine, from the plurality of tasks being processed by the plurality of processors, a task being processed on a first processor to be a lagging task that causes a delay in execution of one or more other tasks of the plurality of tasks. The controller is further configured to provide the determined lagging task to a second processor, which executes it to accelerate execution of the lagging task.
    Type: Application
    Filed: June 23, 2016
    Publication date: December 28, 2017
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Arkaprava Basu, Dmitri Yudanov, David A. Roberts, Mitesh R. Meswani, Sergey Blagodurov
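A rough Python sketch of the lagging-task handoff in publication 20170371720; how progress is measured and how the second processor is chosen are assumptions made for illustration.
```python
def find_lagging_task(tasks):
    """Pick the task whose progress most trails the others (illustrative rule)."""
    return min(tasks, key=lambda t: t["progress"])

def accelerate(tasks, processors):
    """Move the lagging task from its current (first) processor to a faster
    (second) processor so it stops delaying the other tasks."""
    lagging = find_lagging_task(tasks)
    first = lagging["processor"]
    second = max(processors, key=lambda p: p["speed"])   # assumed selection rule
    if second["name"] != first:
        lagging["processor"] = second["name"]
    return lagging

tasks = [{"id": 0, "progress": 0.9, "processor": "cpu0"},
         {"id": 1, "progress": 0.3, "processor": "cpu1"},   # the laggard
         {"id": 2, "progress": 0.8, "processor": "cpu2"}]
processors = [{"name": "cpu1", "speed": 1.0}, {"name": "gpu0", "speed": 4.0}]
print(accelerate(tasks, processors))   # task 1 is now assigned to gpu0
```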
  • Publication number: 20170168546
    Abstract: A method and apparatus for performing inter-lane power management includes de-energizing one or more execution lanes upon a determination that the one or more execution lanes are to be predicated. Energy from the predicated execution lanes is redistributed to one or more active execution lanes.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 15, 2017
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Mitesh R. Meswani, David A. Roberts, Dmitri Yudanov, Arkaprava Basu, Sergey Blagodurov
  • Publication number: 20170147228
    Abstract: A plurality of memory blocks are connected to a computation-enabled switch that provides data paths between the plurality of memory blocks. The computation-enabled switch performs one or more computations on data stored in one or more of the plurality of memory blocks during transfer of the data along one or more of the data paths between the plurality of memory blocks.
    Type: Application
    Filed: November 25, 2015
    Publication date: May 25, 2017
    Inventors: Dmitri Yudanov, Sergey Blagodurov, David A. Roberts, Mitesh R. Meswani, Nuwan Jayasena, Michael Ignatowski
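A small Python sketch of the compute-in-transfer idea in publication 20170147228: the switch applies an operation to data while moving it between memory blocks, rather than staging it through a processor core. The operations and block layout are assumptions.
```python
class ComputeSwitch:
    """Switch that provides data paths between memory blocks and can apply a
    computation to data in flight along a path (operations are illustrative)."""

    OPS = {"scale2": lambda x: 2 * x,
           "negate": lambda x: -x,
           "copy":   lambda x: x}

    def __init__(self, blocks):
        self.blocks = blocks   # name -> list standing in for a memory block

    def transfer(self, src, dst, op="copy"):
        fn = self.OPS[op]
        # Compute on each element while it travels src -> dst through the switch.
        self.blocks[dst] = [fn(x) for x in self.blocks[src]]

switch = ComputeSwitch({"A": [1, 2, 3], "B": []})
switch.transfer("A", "B", op="scale2")
print(switch.blocks["B"])   # [2, 4, 6] -- computed during the transfer itself
```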
  • Publication number: 20160378667
    Abstract: A processor employs multiple prefetchers at a processor to identify patterns in memory accesses to different memory modules. The memory accesses can include transfers between the memory modules, and the prefetchers can prefetch data directly from one memory module to another based on patterns in the transfers. This allows the processor to efficiently organize data at the memory modules without direct intervention by software or by a processor core, thereby improving processing efficiency.
    Type: Application
    Filed: June 23, 2015
    Publication date: December 29, 2016
    Inventors: David Andrew Roberts, Mitesh R. Meswani, Sergey Blagodurov, Dmitri Yudanov, Indrani Paul
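A simplified Python sketch of the transfer-pattern prefetching in publication 20160378667: a stride detector watches block transfers from one memory module to another and, once a stride is established, prefetches the following blocks directly module-to-module. The stride-detection rule and prefetch depth are assumptions.
```python
class TransferPrefetcher:
    """Detects a constant stride in module-to-module transfers and prefetches
    the following blocks directly between the modules (no core involvement)."""

    def __init__(self, prefetch_depth=2):
        self.last_addr = None
        self.stride = None
        self.prefetch_depth = prefetch_depth

    def observe(self, src_module, dst_module, addr):
        prefetches = []
        if self.last_addr is not None and addr - self.last_addr == self.stride:
            # Pattern confirmed: issue direct prefetches for the upcoming blocks.
            prefetches = [(src_module, dst_module, addr + self.stride * i)
                          for i in range(1, self.prefetch_depth + 1)]
        elif self.last_addr is not None:
            self.stride = addr - self.last_addr   # train on the observed stride
        self.last_addr = addr
        return prefetches

pf = TransferPrefetcher()
for a in (0x1000, 0x1040, 0x1080):               # demand transfers, stride 0x40
    print(pf.observe("HBM0", "HBM1", a))
# the third call prefetches 0x10c0 and 0x1100 from HBM0 straight into HBM1
```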
  • Publication number: 20160371082
    Abstract: A processing device includes a first memory that includes a context buffer. The processing device also includes a processor core to execute threads based on context information stored in registers of the processor core and a memory controller to selectively move a subset of the context information between the context buffer and the registers based on one or more latencies of the threads.
    Type: Application
    Filed: June 22, 2015
    Publication date: December 22, 2016
    Inventors: Dmitri Yudanov, Sergey Blagodurov, Arkaprava Basu, Sooraj Puthoor, Joseph L. Greathouse
  • Publication number: 20160239302
    Abstract: A method, a non-transitory computer-readable medium, and a processor are presented for repacking dynamic wavefronts, each including multiple threads, during program code execution on a processing unit. If a branch instruction is detected, a determination is made whether all wavefronts following a same control path in the program code have reached a compaction point, which is the branch instruction. If no branch instruction is detected in executing the program code, a determination is made whether all wavefronts following the same control path have reached a reconvergence point, which is the beginning of a program code segment to be executed by both the taken branch and the not-taken branch from a previous branch instruction. The dynamic wavefronts are repacked with all threads that follow the same control path if all wavefronts following the same control path have reached the branch instruction or the reconvergence point.
    Type: Application
    Filed: April 9, 2015
    Publication date: August 18, 2016
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Sooraj Puthoor, Bradford M. Beckmann, Dmitri Yudanov
  • Publication number: 20160085551
    Abstract: A compute unit configured to execute multiple threads in parallel is presented. The compute unit includes one or more single instruction multiple data (SIMD) units and a fetch and decode logic. The SIMD units have differing numbers of arithmetic logic units (ALUs), such that each SIMD unit can execute a different number of threads. The fetch and decode logic is in communication with each of the SIMD units, and is configured to assign the threads to the SIMD units for execution based on such differing numbers of ALUs.
    Type: Application
    Filed: September 18, 2014
    Publication date: March 24, 2016
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Joseph L. Greathouse, Mitesh R. Meswani, Sooraj Puthoor, Dmitri Yudanov, James M. O'Connor
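A short Python sketch of the width-aware assignment in publication 20160085551: the fetch-and-decode logic hands each group of threads to a SIMD unit whose ALU count suits the group size. The matching rule (narrowest unit that fits in one pass, otherwise the widest) is an assumption.
```python
# SIMD units with differing ALU counts, as in the abstract (widths assumed).
SIMD_UNITS = [{"name": "simd16", "alus": 16},
              {"name": "simd8",  "alus": 8},
              {"name": "simd4",  "alus": 4}]

def assign(thread_count):
    """Pick the narrowest SIMD unit that can run `thread_count` threads in one
    pass; fall back to the widest unit (multiple passes) for larger groups."""
    fitting = [u for u in SIMD_UNITS if u["alus"] >= thread_count]
    if fitting:
        return min(fitting, key=lambda u: u["alus"])["name"]
    return max(SIMD_UNITS, key=lambda u: u["alus"])["name"]

for n in (3, 8, 13, 40):
    print(n, "->", assign(n))
# 3 -> simd4, 8 -> simd8, 13 -> simd16, 40 -> simd16 (run over several passes)
```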