Patents by Inventor Tung Duc Le

Tung Duc Le has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240269063
    Abstract: A method (100) for synthesising multi-core magnetic metal oxide nanoparticles is disclosed. The method comprises providing a first precursor mixture (102) comprising a first metal-containing precursor, a first solvent, and a first nanoparticle clustering agent, and heating the first precursor mixture to thermally decompose the first metal-containing precursor, producing a nanoparticle mixture (104) comprising multi-core magnetic metal oxide nanoparticles. The method further comprises performing at least one seeded growth step (106), each comprising a feeding step in which a further precursor mixture (108), comprising a further metal-containing precursor and a further solvent, is added to the nanoparticle mixture, and a heating step in which the nanoparticle mixture is heated to thermally decompose the further metal-containing precursor and thereby grow the multi-core magnetic metal oxide nanoparticles.
    Type: Application
    Filed: June 8, 2022
    Publication date: August 15, 2024
    Inventors: Liudmyla Storozhuk, Maximilian Bresenhard, Asterios Gavriilidis, Thanh Thi Kim Nguyen, Tung Duc Le
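The workflow claimed in the abstract above is procedural: one nucleation by thermal decomposition, followed by one or more seeded-growth cycles, each a feeding step then a heating step. The following is a minimal, hypothetical Python sketch of that control flow only; it is not a chemistry simulation, and every name and reagent in it (Feed, decompose_by_heating, synthesize_multicore, the iron precursor and solvent) is illustrative rather than taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Feed:
    precursor: str   # further metal-containing precursor
    solvent: str     # further solvent

def decompose_by_heating(mixture):
    """Stand-in for a heating step: thermally decompose whatever
    metal-containing precursor is currently in the mixture."""
    mixture["decomposed"].append(mixture.pop("precursor"))
    mixture["heating_cycles"] += 1
    return mixture

def synthesize_multicore(first_precursor, first_solvent, clustering_agent, feeds):
    # First precursor mixture -> nucleation of multi-core clusters
    mixture = {"precursor": first_precursor,
               "solvent": [first_solvent],
               "clustering_agent": clustering_agent,
               "decomposed": [],
               "heating_cycles": 0}
    mixture = decompose_by_heating(mixture)
    # Each seeded-growth step = feeding step followed by a heating step
    for feed in feeds:
        mixture["precursor"] = feed.precursor      # feeding step
        mixture["solvent"].append(feed.solvent)
        mixture = decompose_by_heating(mixture)    # heating step grows the cores
    return mixture

# Illustrative reagents only; the patent does not name these.
result = synthesize_multicore("iron(III) acetylacetonate", "benzyl ether",
                              "clustering agent",
                              [Feed("iron(III) acetylacetonate", "benzyl ether")])
print(result["heating_cycles"])  # 2: nucleation plus one seeded-growth heating
```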
  • Patent number: 10949746
    Abstract: A system and method provide efficient parallel training of a neural network model on multiple graphics processing units. A training module reduces the time and communication overhead of gradient accumulation and parameter updating by overlapping these processes. In a described embodiment, the training module overlaps backpropagation, gradient transfer, and gradient accumulation in a Synchronous Stochastic Gradient Descent algorithm on a convolutional neural network. The training module collects gradients of multiple layers during backpropagation from a plurality of graphics processing units (GPUs), accumulates the gradients on at least one processor, and then delivers the accumulated gradients of those layers back to the GPUs while backpropagation is still in progress. The full set of model parameters can then be updated on the GPUs after the gradient of the last layer is received.
    Type: Grant
    Filed: February 3, 2017
    Date of Patent: March 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Imai Haruki, Tung Duc Le, Yasushi Negishi
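The abstract above describes starting each layer's gradient transfer and accumulation as soon as backpropagation produces that layer's gradient, rather than after the whole backward pass finishes. Below is a minimal single-process PyTorch sketch of that overlap idea; it is an assumption, not the patented implementation: attach_overlap_hooks and the CPU accumulator dict are invented here, and a real multi-GPU version would sum the per-layer gradients across devices and send them back while backpropagation continues.

```python
import torch
import torch.nn as nn

def attach_overlap_hooks(model: nn.Module, accumulator: dict):
    """Register a gradient hook on every parameter so each layer's
    gradient is copied off the device as soon as backprop computes it,
    overlapping transfer/accumulation with the rest of the backward pass."""
    for name, param in model.named_parameters():
        def make_hook(pname):
            def hook(grad):
                # non_blocking=True lets a device-to-host copy proceed
                # asynchronously while backprop continues on the GPU
                # (it degrades to a synchronous copy on CPU-only runs).
                cpu_grad = grad.detach().to("cpu", non_blocking=True)
                if pname in accumulator:
                    accumulator[pname] += cpu_grad
                else:
                    accumulator[pname] = cpu_grad.clone()
            return hook
        param.register_hook(make_hook(name))

# Toy usage: the accumulator fills layer by layer *during* backward().
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
grads = {}
attach_overlap_hooks(model, grads)
loss = model(torch.randn(4, 8)).sum()
loss.backward()
print({k: tuple(v.shape) for k, v in grads.items()})
```

Registering the hook per parameter is what allows each transfer to begin the moment that layer's gradient exists, instead of only after loss.backward() returns, which is the source of the overlap the abstract describes.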
  • Publication number: 20180121806
    Abstract: A system and method provide efficient parallel training of a neural network model on multiple graphics processing units. A training module reduces the time and communication overhead of gradient accumulation and parameter updating by overlapping these processes. In a described embodiment, the training module overlaps backpropagation, gradient transfer, and gradient accumulation in a Synchronous Stochastic Gradient Descent algorithm on a convolutional neural network. The training module collects gradients of multiple layers during backpropagation from a plurality of graphics processing units (GPUs), accumulates the gradients on at least one processor, and then delivers the accumulated gradients of those layers back to the GPUs while backpropagation is still in progress. The full set of model parameters can then be updated on the GPUs after the gradient of the last layer is received.
    Type: Application
    Filed: February 3, 2017
    Publication date: May 3, 2018
    Inventors: Imai Haruki, Tung Duc Le, Yasushi Negishi