Patents by Inventor Andrey Zhmoginov

Andrey Zhmoginov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
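
Brief illustrative code sketches of several of the recurring inventions appear after the listing.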

  • Publication number: 20240153116
    Abstract: Provided are computing systems, methods, and platforms for using machine-learned models to generate a depth map of a scene. The operations can include projecting, using a dot illuminator, near-infrared (NIR) dots onto the scene. The NIR dots can have a uniform pattern. Additionally, the operations can include capturing, using a single NIR camera, the projected NIR dots on the scene. Moreover, the operations can include generating a dot image based on the captured NIR dots on the scene. Furthermore, the operations can include processing the dot image with a machine-learned model to generate a depth map of the scene. Subsequently, the operations can further include evaluating the generated depth map of the scene against a ground truth depth map, and performing an action based on the evaluation.
    Type: Application
    Filed: November 7, 2022
    Publication date: May 9, 2024
    Inventors: Kuntal Sengupta, Adarsh Kowdle, Andrey Zhmoginov, Hart Levy
  • Publication number: 20240119256
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Application
    Filed: October 13, 2023
    Publication date: April 11, 2024
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Patent number: 11823024
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: November 21, 2023
    Assignee: GOOGLE LLC
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Publication number: 20230316081
    Abstract: The present disclosure provides a new type of generalized artificial neural network where neurons and synapses maintain multiple states. While classical gradient-based backpropagation in artificial neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients, with update rules derived from the chain rule, example implementations of the generalized framework proposed herein may additionally: have no explicit notion of gradients and never receive them; contain more than two states; and/or implement or apply learned (e.g., meta-learned) update rules that control updates to the state(s) of the neuron during forward and/or backward propagation of information.
    Type: Application
    Filed: May 6, 2022
    Publication date: October 5, 2023
    Inventors: Mark Sandler, Andrey Zhmoginov, Thomas Edward Madams, Maksym Vladymyrov, Nolan Andrew Miller, Blaise Aguera-Arcas, Andrew Michael Jackson
  • Publication number: 20230297852
    Abstract: Example implementations of the present disclosure combine efficient model design and dynamic inference. With a standalone lightweight model, unnecessary computation on easy examples is avoided, and the information extracted by the lightweight model also guides the synthesis of a specialist network from the basis models. Extensive experiments on ImageNet show that a proposed example BasisNet is particularly effective for image classification, and a BasisNet-MV3 achieves 80.3% top-1 accuracy with 290M MAdds without early termination.
    Type: Application
    Filed: July 29, 2021
    Publication date: September 21, 2023
    Inventors: Li Zhang, Andrew Gerald Howard, Brendan Wesley Jou, Yukun Zhu, Mingda Zhang, Andrey Zhmoginov
  • Publication number: 20230267330
    Abstract: The present disclosure provides systems and methods that enable parameter-efficient transfer learning, multi-task learning, and/or other forms of model re-purposing such as model personalization or domain adaptation. In particular, as one example, a computing system can obtain a machine-learned model that has been previously trained on a first training dataset to perform a first task. The machine-learned model can include a first set of learnable parameters. The computing system can modify the machine-learned model to include a model patch, where the model patch includes a second set of learnable parameters. The computing system can train the machine-learned model on a second training dataset to perform a second task that is different from the first task, which may include learning new values for the second set of learnable parameters included in the model patch while keeping at least some (e.g., all) of the first set of parameters fixed.
    Type: Application
    Filed: May 2, 2023
    Publication date: August 24, 2023
    Inventors: Mark Sandler, Andrew Gerald Howard, Andrey Zhmoginov, Pramod Kaushik Mudrakarta
  • Patent number: 11734545
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Patent number: 11676008
    Abstract: The present disclosure provides systems and methods that enable parameter-efficient transfer learning, multi-task learning, and/or other forms of model re-purposing such as model personalization or domain adaptation. In particular, as one example, a computing system can obtain a machine-learned model that has been previously trained on a first training dataset to perform a first task. The machine-learned model can include a first set of learnable parameters. The computing system can modify the machine-learned model to include a model patch, where the model patch includes a second set of learnable parameters. The computing system can train the machine-learned model on a second training dataset to perform a second task that is different from the first task, which may include learning new values for the second set of learnable parameters included in the model patch while keeping at least some (e.g., all) of the first set of parameters fixed.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: June 13, 2023
    Assignee: GOOGLE LLC
    Inventors: Mark Sandler, Andrey Zhmoginov, Andrew Gerald Howard, Pramod Kaushik Mudrakarta
  • Publication number: 20210350206
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Application
    Filed: July 22, 2021
    Publication date: November 11, 2021
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Publication number: 20200104706
    Abstract: The present disclosure provides systems and methods that enable parameter-efficient transfer learning, multi-task learning, and/or other forms of model re-purposing such as model personalization or domain adaptation. In particular, as one example, a computing system can obtain a machine-learned model that has been previously trained on a first training dataset to perform a first task. The machine-learned model can include a first set of learnable parameters. The computing system can modify the machine-learned model to include a model patch, where the model patch includes a second set of learnable parameters. The computing system can train the machine-learned model on a second training dataset to perform a second task that is different from the first task, which may include learning new values for the second set of learnable parameters included in the model patch while keeping at least some (e.g., all) of the first set of parameters fixed.
    Type: Application
    Filed: September 20, 2019
    Publication date: April 2, 2020
    Inventors: Mark Sandler, Andrey Zhmoginov, Andrew Gerald Howard, Pramod Kaushik Mudrakarta
  • Publication number: 20190279092
    Abstract: Systems and methods of convolutional neural network compression are provided. For instance, a convolutional neural network can include an input convolutional layer having a plurality of associated input filters and an output convolutional layer having a plurality of associated output filters. The convolutional neural network implements a connection pattern defining connections between the plurality of input filters and the plurality of output filters. The connection pattern specifies that at least one output filter of the plurality of output filters is connected to only a subset of the plurality of input filters.
    Type: Application
    Filed: September 29, 2017
    Publication date: September 12, 2019
    Inventors: Mark Sandler, Andrey Zhmoginov, Soravit Changpinyo
  • Publication number: 20190147318
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Application
    Filed: February 17, 2018
    Publication date: May 16, 2019
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
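
Illustrative code sketches

The first sketch illustrates the kind of training step implied by publication 20240153116: a dot image produced from projected NIR dots is processed by a machine-learned model to generate a depth map, the result is evaluated against a ground-truth depth map, and an action (here, a parameter update) is taken based on that evaluation. The tiny convolutional model, the L1 loss, and the random stand-in tensors are illustrative assumptions; the publication does not prescribe them. Python with PyTorch is used for all sketches on this page.

    # Minimal sketch for publication 20240153116 (assumed: architecture, loss, stand-in data).
    import torch
    from torch import nn

    # A small stand-in model that maps a single-channel NIR dot image to a depth map.
    depth_model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    optimizer = torch.optim.Adam(depth_model.parameters(), lr=1e-3)

    dot_image = torch.rand(1, 1, 120, 160)           # stand-in for a captured NIR dot image
    ground_truth_depth = torch.rand(1, 1, 120, 160)  # stand-in for the ground-truth depth map

    optimizer.zero_grad()
    predicted_depth = depth_model(dot_image)                           # generate the depth map
    loss = nn.functional.l1_loss(predicted_depth, ground_truth_depth)  # evaluate against ground truth
    loss.backward()
    optimizer.step()  # action based on the evaluation: update the model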
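
The inverted-residual architecture with linear bottlenecks recurs in publication 20240119256, patents 11823024 and 11734545, and publications 20210350206 and 20190147318. The sketch below is one common way to realize the block those abstracts describe: a 1x1 expansion to a wider intermediate representation, a depthwise convolution on that expanded representation, and a 1x1 linear bottleneck projection, with a residual shortcut between the thin bottleneck layers that form the block's input and output. The expansion factor, ReLU6 activations, and batch normalization are illustrative assumptions rather than details taken from the claims.

    # Sketch of an inverted residual block with a linear bottleneck (assumed: expansion=6, ReLU6, BN).
    import torch
    from torch import nn

    class InvertedResidual(nn.Module):
        def __init__(self, in_channels: int, out_channels: int, stride: int = 1, expansion: int = 6):
            super().__init__()
            hidden = in_channels * expansion
            # The residual shortcut connects the thin input and output bottlenecks when shapes match.
            self.use_shortcut = stride == 1 and in_channels == out_channels
            self.block = nn.Sequential(
                nn.Conv2d(in_channels, hidden, 1, bias=False),            # 1x1 expansion
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,    # depthwise 3x3 on the
                          groups=hidden, bias=False),                     # expanded representation
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, out_channels, 1, bias=False),           # 1x1 linear bottleneck:
                nn.BatchNorm2d(out_channels),                             # no non-linearity after it
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = self.block(x)
            return x + out if self.use_shortcut else out

    block = InvertedResidual(32, 32)
    y = block(torch.rand(1, 32, 56, 56))   # same channel count and stride 1, so the shortcut is active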
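
Publication 20230316081 describes neurons and synapses that maintain multiple states updated by learned rules rather than by explicit gradients. The fragment below is a deliberately rough sketch of that idea: each neuron carries several state channels, and a small trainable update rule decides how incoming state messages become new states. The number of states, the MLP used as the update rule, and the propagation scheme are all illustrative assumptions.

    # Rough sketch for publication 20230316081 (assumed: 4 states, MLP update rule).
    import torch
    from torch import nn

    NUM_STATES = 4

    class MultiStateLayer(nn.Module):
        def __init__(self, in_features: int, out_features: int):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
            # Learned (meta-learnable) update rule shared by all neurons in the layer.
            self.update_rule = nn.Sequential(
                nn.Linear(NUM_STATES, 16), nn.ReLU(), nn.Linear(16, NUM_STATES)
            )

        def forward(self, states: torch.Tensor) -> torch.Tensor:
            # states: (batch, in_features, NUM_STATES); every state channel is propagated forward.
            messages = torch.einsum("oi,bis->bos", self.weight, states)
            return self.update_rule(messages)   # new per-neuron states; no explicit gradients used

    layer = MultiStateLayer(16, 8)
    states = torch.rand(1, 16, NUM_STATES)      # multi-component state of the input neurons
    new_states = layer(states)                  # shape (1, 8, NUM_STATES)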
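
Publication 20230297852 pairs a standalone lightweight model with dynamic inference: the lightweight model can terminate early on easy examples and otherwise guides the synthesis of a specialist network from basis models. The sketch below shows one possible reading for a single example: the lightweight model emits mixing coefficients plus early-exit logits, and the specialist layer's weights are a per-example combination of basis weights. The shapes, the confidence threshold, and the single synthesized layer are illustrative assumptions.

    # Rough sketch for publication 20230297852 (assumed: shapes, threshold, one synthesized layer).
    import torch
    from torch import nn

    NUM_BASES, NUM_CLASSES, FEATURES = 8, 10, 64

    class BasisLinear(nn.Module):
        """A linear layer whose weights are synthesized per example from basis weights."""

        def __init__(self, in_features: int, out_features: int, num_bases: int):
            super().__init__()
            self.bases = nn.Parameter(torch.randn(num_bases, out_features, in_features) * 0.01)

        def forward(self, x: torch.Tensor, coeffs: torch.Tensor) -> torch.Tensor:
            # coeffs: (batch, num_bases) mixing weights produced by the lightweight model.
            weight = torch.einsum("bk,koi->boi", coeffs, self.bases)   # per-example specialist weights
            return torch.einsum("boi,bi->bo", weight, x)

    lightweight = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(),
                                nn.Linear(32, NUM_BASES + NUM_CLASSES))
    specialist = BasisLinear(FEATURES, NUM_CLASSES, NUM_BASES)

    def predict(x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
        # x: (1, FEATURES); this sketch assumes a single example for clarity.
        out = lightweight(x)
        coeffs = out[:, :NUM_BASES].softmax(dim=-1)
        early_logits = out[:, NUM_BASES:]
        if early_logits.softmax(dim=-1).max() > threshold:   # easy example: terminate early
            return early_logits
        return specialist(x, coeffs)                         # otherwise run the synthesized specialist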
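
Patent 11676008 and applications 20230267330 and 20200104706 describe training only a small "model patch" (a second set of learnable parameters) while keeping most of a previously trained model's parameters fixed, which enables parameter-efficient transfer learning and model re-purposing. In the sketch below the patch consists of the batch-normalization parameters plus a new classification head; that particular choice of patch, and the use of torchvision's MobileNetV2 as the pre-trained model, are illustrative assumptions.

    # Rough sketch for patent 11676008 (assumed: patch = batch-norm parameters + new head).
    import torch
    from torch import nn
    from torchvision import models

    base = models.mobilenet_v2(weights="IMAGENET1K_V1")   # model previously trained on a first task
    for p in base.parameters():
        p.requires_grad = False                           # keep the first set of parameters fixed

    patch_params = []
    for m in base.modules():                              # patch, part 1: batch-norm parameters
        if isinstance(m, nn.BatchNorm2d):
            for p in m.parameters():
                p.requires_grad = True
                patch_params.append(p)

    base.classifier[1] = nn.Linear(base.last_channel, 10) # patch, part 2: head for the second task
    patch_params += list(base.classifier[1].parameters())

    optimizer = torch.optim.SGD(patch_params, lr=1e-2, momentum=0.9)  # only the patch is updated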
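
Publication 20190279092 compresses a convolutional network by imposing a connection pattern in which at least one output filter is connected to only a subset of the input filters. The sketch below applies such a pattern as a fixed binary mask over a convolution's weights; the block-diagonal pattern chosen here is an illustrative assumption, and other fixed or learned patterns fit the same mechanism.

    # Rough sketch for publication 20190279092 (assumed: block-diagonal connection pattern).
    import torch
    from torch import nn
    from torch.nn import functional as F

    class MaskedConv2d(nn.Conv2d):
        """3x3 convolution whose output filters see only a subset of the input filters."""

        def __init__(self, in_channels: int, out_channels: int, num_groups: int = 4, **kwargs):
            super().__init__(in_channels, out_channels, kernel_size=3, padding=1, **kwargs)
            mask = torch.zeros(out_channels, in_channels)
            in_per, out_per = in_channels // num_groups, out_channels // num_groups
            for g in range(num_groups):   # block-diagonal connection pattern
                mask[g * out_per:(g + 1) * out_per, g * in_per:(g + 1) * in_per] = 1.0
            self.register_buffer("connection_mask", mask[:, :, None, None])

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            masked_weight = self.weight * self.connection_mask   # zero out disallowed connections
            return F.conv2d(x, masked_weight, self.bias, self.stride,
                            self.padding, self.dilation, self.groups)

    layer = MaskedConv2d(32, 64, num_groups=4)
    out = layer(torch.rand(1, 32, 28, 28))   # each output filter sees only 8 of the 32 input filters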