Patents by Inventor Menglong Zhu

Menglong Zhu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961298
    Abstract: Systems and methods for detecting objects in a video are provided. A method can include inputting a video comprising a plurality of frames into an interleaved object detection model comprising a plurality of feature extractor networks and a shared memory layer. For each of one or more frames, the method can include selecting one of the plurality of feature extractor networks to analyze the one or more frames, analyzing the one or more frames by the selected feature extractor network to determine one or more features of the one or more frames, determining an updated set of features based at least in part on the one or more features and one or more features previously extracted from a prior frame and stored in the shared memory layer, and detecting an object in the one or more frames based at least in part on the updated set of features. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: April 16, 2024
    Assignee: GOOGLE LLC
    Inventors: Menglong Zhu, Mason Liu, Marie Charisse White, Dmitry Kalenichenko, Yinxiao Li
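
A minimal sketch of the interleaved detector described in the entry above, assuming a simple fixed interleaving policy, single-convolution stand-ins for the two feature extractor networks, and a gated convolutional memory; all class names and hyper-parameters are illustrative, not details taken from the patent:

```python
# Illustrative sketch only: alternates a "large" and a "small" feature
# extractor across frames and fuses each frame's features with a shared memory.
import torch
import torch.nn as nn

class SharedMemory(nn.Module):
    """Gated fusion of the current frame's features with the stored state."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feats, state):
        if state is None:
            return feats
        g = torch.sigmoid(self.gate(torch.cat([feats, state], dim=1)))
        return g * feats + (1 - g) * state  # the "updated set of features"

class InterleavedDetector(nn.Module):
    def __init__(self, channels=64, num_outputs=24, interleave=3):
        super().__init__()
        # Stand-ins for the feature extractor networks; a real model would use
        # full backbones of very different computational cost.
        self.large = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.small = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1),
            nn.Upsample(scale_factor=2), nn.ReLU())
        self.memory = SharedMemory(channels)
        self.head = nn.Conv2d(channels, num_outputs, kernel_size=1)
        self.interleave = interleave

    def forward(self, frames):
        state, outputs = None, []
        for t, frame in enumerate(frames):  # frames: list of (1, 3, H, W) tensors
            # Fixed policy: run the expensive extractor every `interleave` frames.
            extractor = self.large if t % self.interleave == 0 else self.small
            feats = extractor(frame)
            state = self.memory(feats, state)   # fuse with the shared memory layer
            outputs.append(self.head(state))    # per-frame detection maps
        return outputs

# Example: an 8-frame clip of 64x64 frames.
video = [torch.randn(1, 3, 64, 64) for _ in range(8)]
detections = InterleavedDetector()(video)
```

The point the sketch tries to capture is that only the cheaper extractor runs on most frames, while the shared memory carries forward features produced on frames where the more expensive extractor ran.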
  • Publication number: 20240119256
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that serve as the input and output of the inverted residual block. (A minimal code sketch follows this entry.)
    Type: Application
    Filed: October 13, 2023
    Publication date: April 11, 2024
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
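
To illustrate the inverted residual block with a linear bottleneck described in the entry above, here is a minimal PyTorch sketch; the expansion factor, the use of ReLU6, and the layer names are assumptions for the example rather than details taken from the filing:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Thin input -> expanded intermediate representation -> thin output."""
    def __init__(self, in_ch, out_ch, stride=1, expansion=6):
        super().__init__()
        hidden = in_ch * expansion
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 expansion to the wide intermediate representation
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (groups == channels)
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # 1x1 linear bottleneck projection: no non-linearity here
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        # Residual shortcut between the thin bottleneck layers
        return x + out if self.use_shortcut else out

x = torch.randn(1, 32, 56, 56)
y = InvertedResidual(32, 32)(x)   # same shape in and out, so the shortcut is used
```

Keeping the final 1x1 projection linear is commonly cited as the reason this design preserves information in the low-dimensional bottleneck, while the expensive depthwise convolution operates only on the expanded representation.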
  • Patent number: 11823024
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that serve as the input and output of the inverted residual block.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: November 21, 2023
    Assignee: GOOGLE LLC
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Patent number: 11734545
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that serve as the input and output of the inverted residual block.
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Publication number: 20220189170
    Abstract: Systems and methods for detecting objects in a video are provided. A method can include inputting a video comprising a plurality of frames into an interleaved object detection model comprising a plurality of feature extractor networks and a shared memory layer. For each of one or more frames, the method can include selecting one of the plurality of feature extractor networks to analyze the one or more frames, analyzing the one or more frames by the selected feature extractor network to determine one or more features of the one or more frames, determining an updated set of features based at least in part on the one or more features and one or more features previously extracted from a prior frame and stored in the shared memory layer, and detecting an object in the one or more frames based at least in part on the updated set of features.
    Type: Application
    Filed: February 22, 2019
    Publication date: June 16, 2022
    Inventors: Menglong Zhu, Mason Liu, Marie Charisse White, Dmitry Kalenichenko, Yinxiao Li
  • Publication number: 20210350206
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that serve as the input and output of the inverted residual block.
    Type: Application
    Filed: July 22, 2021
    Publication date: November 11, 2021
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Patent number: 11157814
    Abstract: The present disclosure provides systems and methods to reduce computational costs associated with convolutional neural networks. In addition, the present disclosure provides a class of efficient models termed “MobileNets” for mobile and embedded vision applications. MobileNets are based on a straightforward architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The present disclosure further provides two global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the entity building the model to select the appropriately sized model for the particular application based on the constraints of the problem. MobileNets and associated computational cost reduction techniques are effective across a wide range of applications and use cases. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: October 26, 2021
    Assignee: Google LLC
    Inventors: Andrew Gerald Howard, Bo Chen, Dmitry Kalenichenko, Tobias Christoph Weyand, Menglong Zhu, Marco Andreetto, Weijun Wang
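
The following minimal sketch illustrates the two ingredients the entry above describes: a depthwise separable convolution block, and the two global hyper-parameters (a width multiplier that thins every layer, and a resolution multiplier that shrinks the input). Function names and default values are illustrative, not drawn from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 (one filter per channel) followed by a pointwise 1x1."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = F.relu(self.bn1(self.depthwise(x)))
        return F.relu(self.bn2(self.pointwise(x)))

def scaled_channels(base_channels, width_multiplier=0.75):
    """Width multiplier: uniformly thins every layer of the network."""
    return max(8, int(base_channels * width_multiplier))

# Resolution multiplier: simply run the network on a smaller input,
# e.g. 160x160 instead of 224x224, trading accuracy for latency.
block = DepthwiseSeparableConv(scaled_channels(32), scaled_channels(64))
out = block(torch.randn(1, scaled_channels(32), 160, 160))
```

For typical channel counts, replacing a standard 3x3 convolution with this depthwise/pointwise pair reduces the multiply-accumulate count roughly eight- to nine-fold, which is where the computational savings come from.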
  • Patent number: 11157815
    Abstract: The present disclosure provides systems and methods to reduce computational costs associated with convolutional neural networks. In addition, the present disclosure provides a class of efficient models termed “MobileNets” for mobile and embedded vision applications. MobileNets are based on a straightforward architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The present disclosure further provides two global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the entity building the model to select the appropriately sized model for the particular application based on the constraints of the problem. MobileNets and associated computational cost reduction techniques are effective across a wide range of applications and use cases.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: October 26, 2021
    Assignee: Google LLC
    Inventors: Andrew Gerald Howard, Bo Chen, Dmitry Kalenichenko, Tobias Christoph Weyand, Menglong Zhu, Marco Andreetto, Weijun Wang
  • Patent number: 10713491
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing object detection. In one aspect, a method includes receiving multiple video frames. The video frames are sequentially processed using an object detection neural network to generate an object detection output for each video frame. The object detection neural network includes a convolutional neural network layer and a recurrent neural network layer. For each video frame after an initial video frame, processing the video frame using the object detection neural network includes generating a spatial feature map for the video frame using the convolutional neural network layer and generating a spatio-temporal feature map for the video frame using the recurrent neural network layer.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: July 14, 2020
    Assignee: Google LLC
    Inventors: Menglong Zhu, Mason Liu
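
A minimal sketch of the structure described in the entry above: a convolutional layer produces a spatial feature map for each frame, and a convolutional recurrent cell turns it into a spatio-temporal feature map whose state is carried across frames. The simple ConvGRU-style cell and all layer sizes here are assumptions for illustration only:

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """A simple convolutional recurrent cell standing in for the
    recurrent neural network layer described in the abstract."""
    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        if h is None:
            h = torch.zeros_like(x)
        z, r = torch.chunk(torch.sigmoid(self.gates(torch.cat([x, h], dim=1))), 2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde        # spatio-temporal feature map

class RecurrentDetector(nn.Module):
    def __init__(self, channels=32, num_outputs=24):
        super().__init__()
        self.spatial = nn.Conv2d(3, channels, 3, padding=1)   # spatial feature map
        self.recurrent = ConvGRUCell(channels)                 # recurrent layer
        self.head = nn.Conv2d(channels, num_outputs, 1)        # detection output

    def forward(self, frames):
        h, outputs = None, []
        for frame in frames:                    # sequential, frame by frame
            h = self.recurrent(torch.relu(self.spatial(frame)), h)
            outputs.append(self.head(h))
        return outputs

frames = [torch.randn(1, 3, 64, 64) for _ in range(5)]
outputs = RecurrentDetector()(frames)           # one detection map per frame
```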
  • Publication number: 20200034627
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing object detection. In one aspect, a method includes receiving multiple video frames. The video frames are sequentially processed using an object detection neural network to generate an object detection output for each video frame. The object detection neural network includes a convolutional neural network layer and a recurrent neural network layer. For each video frame after an initial video frame, processing the video frame using the object detection neural network includes generating a spatial feature map for the video frame using the convolutional neural network layer and generating a spatio-temporal feature map for the video frame using the recurrent neural network layer.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Inventors: Menglong Zhu, Mason Liu
  • Publication number: 20190347537
    Abstract: The present disclosure provides systems and methods to reduce computational costs associated with convolutional neural networks. In addition, the present disclosure provides a class of efficient models termed “MobileNets” for mobile and embedded vision applications. MobileNets are based on a straightforward architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The present disclosure further provides two global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the entity building the model to select the appropriately sized model for the particular application based on the constraints of the problem. MobileNets and associated computational cost reduction techniques are effective across a wide range of applications and use cases.
    Type: Application
    Filed: July 29, 2019
    Publication date: November 14, 2019
    Inventors: Andrew Gerald Howard, Bo Chen, Dmitry Kalenichenko, Tobias Christoph Weyand, Menglong Zhu, Marco Andreetto, Weijun Wang
  • Publication number: 20190147318
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that serve as the input and output of the inverted residual block.
    Type: Application
    Filed: February 17, 2018
    Publication date: May 16, 2019
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Publication number: 20180137406
    Abstract: The present disclosure provides systems and methods to reduce computational costs associated with convolutional neural networks. In addition, the present disclosure provides a class of efficient models termed “MobileNets” for mobile and embedded vision applications. MobileNets are based on a straightforward architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The present disclosure further provides two global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the entity building the model to select the appropriately sized model for the particular application based on the constraints of the problem. MobileNets and associated computational cost reduction techniques are effective across a wide range of applications and use cases.
    Type: Application
    Filed: September 18, 2017
    Publication date: May 17, 2018
    Inventors: Andrew Gerald Howard, Bo Chen, Dmitry Kalenichenko, Tobias Christoph Weyand, Menglong Zhu, Marco Andreetto, Weijun Wang
  • Patent number: 8831290
    Abstract: Poses of a movable camera relative to an environment are obtained by determining point correspondences from a set of initial images and then applying 2-point motion estimation to the point correspondences to determine a set of initial poses of the camera. A point cloud is generated from the set of initial poses and the point correspondences. Then, for each next image, the point correspondences and corresponding poses are determined, while updating the point cloud.
    Type: Grant
    Filed: August 1, 2012
    Date of Patent: September 9, 2014
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Srikumar Ramalingam, Yuichi Taguchi, Menglong Zhu
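
The 2-point motion estimation solver itself is not reproduced here, but the point-cloud generation step described in the entry above can be illustrated with a small NumPy sketch: given two camera poses (assumed known for this example) and point correspondences, each correspondence is triangulated by linear (DLT) triangulation. All values are synthetic:

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t], mapping homogeneous world points to image points."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one correspondence (x1 in image 1, x2 in image 2)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic example: two known initial poses observing random 3D points.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R1, t1 = np.eye(3), np.zeros(3)                     # first camera at the origin
R2, t2 = np.eye(3), np.array([-1.0, 0.0, 0.0])      # second camera, translated
P1, P2 = projection_matrix(K, R1, t1), projection_matrix(K, R2, t2)

points = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # points in front of both cameras
cloud = []
for X in points:
    x1 = P1 @ np.append(X, 1.0)                     # projected correspondences
    x2 = P2 @ np.append(X, 1.0)
    cloud.append(triangulate(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2]))

print(np.allclose(np.array(cloud), points, atol=1e-6))  # True: recovered cloud matches
```

In the full pipeline, each subsequent image's pose would be estimated against this growing point cloud and newly triangulated points would be merged into it.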
  • Publication number: 20140037136
    Abstract: Poses of a movable camera relative to an environment are obtained by determining point correspondences from a set of initial images and then applying 2-point motion estimation to the point correspondences to determine a set of initial poses of the camera. A point cloud is generated from the set of initial poses and the point correspondences. Then, for each next image, the point correspondences and corresponding poses are determined, while updating the point cloud.
    Type: Application
    Filed: August 1, 2012
    Publication date: February 6, 2014
    Inventors: Srikumar Ramalingam, Yuichi Taguchi, Menglong Zhu