Patents by Inventor Athmanarayanan LAKSHMI NARAYANAN

Athmanarayanan LAKSHMI NARAYANAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11610125
    Abstract: According to one aspect, a method for sensor fusion associated with a long short-term memory (LSTM) cell may include generating a first adjusted sensor encoding based on a first sensor encoding from a first sensor, generating a second adjusted sensor encoding based on a second sensor encoding from a second sensor, generating a fusion result based on the first adjusted sensor encoding and the second adjusted sensor encoding, generating a first product based on the fusion result and the first adjusted sensor encoding, generating a second product based on the second adjusted sensor encoding, and generating a fused state based on the first product and the second product.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: March 21, 2023
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Athmanarayanan Lakshmi Narayanan, Avinash Siravuru
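The six claimed steps form a simple two-sensor fusion pipeline. A minimal sketch in Python, assuming scalar per-sensor weights `W1`/`W2`, element-wise operations, and specific choices for the fusion and product operators (the abstract does not fix any of these):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical scalar adjustment weights; a real cell would learn matrix parameters.
W1, W2 = 0.8, 0.6

def fuse(enc1, enc2):
    """Sketch of the claimed fusion steps for two sensor encodings."""
    # Steps 1-2: generate an adjusted encoding from each raw sensor encoding.
    adj1 = [sigmoid(W1 * x) for x in enc1]
    adj2 = [sigmoid(W2 * x) for x in enc2]
    # Step 3: fusion result from both adjusted encodings (element-wise mean here).
    fusion = [(a + b) / 2.0 for a, b in zip(adj1, adj2)]
    # Step 4: first product based on the fusion result and first adjusted encoding.
    prod1 = [f * a for f, a in zip(fusion, adj1)]
    # Step 5: second product based on the second adjusted encoding.
    prod2 = [(1.0 - f) * b for f, b in zip(fusion, adj2)]
    # Step 6: fused state from the two products.
    return [p + q for p, q in zip(prod1, prod2)]

state = fuse([0.5, -1.0], [1.5, 0.2])
```

Because the two products act as a gated convex combination of the adjusted encodings, each fused element stays in (0, 1) here; a learned cell would not necessarily preserve that property.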
  • Patent number: 11580365
    Abstract: According to one aspect, a long short-term memory (LSTM) cell for sensor fusion may include M number of forget gates, M number of input gates, and M number of output gates. The M number of forget gates may receive M sets of sensor encoding data from M number of sensors and a shared hidden state. The M number of input gates may receive the corresponding M sets of sensor encoding data and the shared hidden state. The M number of output gates may generate M partial shared cell state outputs and M partial shared hidden state outputs based on the M sets of sensor encoding data, the shared hidden state, and a shared cell state.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: February 14, 2023
    Assignee: HONDA MOTOR CO., LTD.
    Inventor: Athmanarayanan Lakshmi Narayanan
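A minimal sketch of one step of such an M-sensor cell, assuming a single shared scalar weight `w` and summation as the rule for combining the M partial cell/hidden outputs (the abstract fixes neither the parameterization nor the combination rule):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def m_sensor_lstm_step(encodings, h_shared, c_shared, w=0.5):
    """One step of an M-sensor LSTM cell with a shared hidden/cell state.

    Each of the M sensors gets its own forget and input gate; the M partial
    shared cell/hidden state outputs are summed into the next shared state.
    """
    c_next, h_next = 0.0, 0.0
    M = len(encodings)
    for x in encodings:
        f = sigmoid(w * (x + h_shared))       # this sensor's forget gate
        i = sigmoid(w * (x + h_shared))       # this sensor's input gate
        g = math.tanh(w * (x + h_shared))     # candidate update
        c_partial = f * c_shared / M + i * g  # partial shared cell state output
        h_partial = math.tanh(c_partial)      # partial shared hidden state output
        c_next += c_partial
        h_next += h_partial
    return h_next, c_next

h, c = m_sensor_lstm_step([0.3, -0.7, 1.2], h_shared=0.0, c_shared=0.0)
```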
  • Publication number: 20220391675
    Abstract: According to one aspect, a long short-term memory (LSTM) cell for sensor fusion may include a first architecture receiving a first sensor encoding, a first shared cell state, and a first shared hidden state and generating a first set of outputs based on the first sensor encoding, the first shared cell state, and the first shared hidden state, a second architecture receiving a second sensor encoding, the first shared cell state, and the first shared hidden state and generating a second set of outputs based on the second sensor encoding, the first shared cell state, and the first shared hidden state, a hidden state gate generating a second shared hidden state based on the first set of outputs and the second set of outputs, and a cell state gate generating a second shared cell state based on the first set of outputs and the second set of outputs.
    Type: Application
    Filed: August 15, 2022
    Publication date: December 8, 2022
    Inventor: Athmanarayanan LAKSHMI NARAYANAN
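Here the two architectures each read the same shared cell and hidden state, and dedicated hidden-state and cell-state gates combine their outputs. A sketch under assumed internals (scalar weights, averaging as the combining gates; the abstract specifies only the dataflow, not these choices):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def branch(enc, c_shared, h_shared, w):
    """One architecture: produces a (cell, hidden) output pair from its
    sensor encoding plus the shared cell and hidden state."""
    i = sigmoid(w * (enc + h_shared))
    g = math.tanh(w * (enc + h_shared))
    c_out = c_shared * sigmoid(w * enc) + i * g
    return c_out, math.tanh(c_out)

def shared_step(enc1, enc2, c1, h1, w1=0.7, w2=0.4):
    out1 = branch(enc1, c1, h1, w1)   # first set of outputs
    out2 = branch(enc2, c1, h1, w2)   # second set of outputs
    c2 = 0.5 * (out1[0] + out2[0])    # cell state gate (assumed: averaging)
    h2 = 0.5 * (out1[1] + out2[1])    # hidden state gate (assumed: averaging)
    return c2, h2

c2, h2 = shared_step(0.9, -0.3, c1=0.0, h1=0.0)
```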
  • Patent number: 11514319
    Abstract: According to one aspect, action prediction may be implemented via a spatio-temporal feature pyramid graph convolutional network (ST-FP-GCN) including a first pyramid layer, a second pyramid layer, a third pyramid layer, etc. The first pyramid layer may include a first graph convolution network (GCN), a fusion gate, and a first long short-term memory (LSTM) gate. The second pyramid layer may include a first convolution operator, a first summation operator, a first mask pool operator, a second GCN, a first upsampling operator, and a second LSTM gate. An output summation operator may sum a first LSTM output and a second LSTM output to generate an output indicative of an action prediction for an inputted image sequence and an inputted pose sequence.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: November 29, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Athmanarayanan Lakshmi Narayanan, Behzad Dariush, Behnoosh Parsa
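Only the graph-convolution building block and the output summation operator are sketched below; the fusion gate, LSTM gates, mask pooling, convolution, and upsampling operators are omitted. The chain adjacency, scalar joint features, and shared weight are illustrative assumptions:

```python
def gcn_layer(adj, feats, w):
    """One graph-convolution step: average each node's neighborhood
    (including itself), scale by a shared weight `w`, and apply ReLU."""
    n = len(feats)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] + [i]
        agg = sum(feats[j] for j in nbrs) / len(nbrs)
        out.append(max(0.0, w * agg))
    return out

# 3-joint chain graph (e.g. shoulder-elbow-wrist) with scalar joint features
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
layer1 = gcn_layer(adj, [0.2, 0.5, -0.1], w=1.5)
layer2 = gcn_layer(adj, layer1, w=1.5)  # stand-in for a coarser pyramid level

# output summation operator: combine per-level outputs into one score
score = sum(layer1) + sum(layer2)
```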
  • Patent number: 11195030
    Abstract: According to one aspect, scene classification may be provided. An image capture device may capture a series of image frames of an environment from a moving vehicle. A temporal classifier may classify image frames with temporal predictions and generate a series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification of image frames based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a fully connected layer. The scene classifier may classify image frames based on a CNN, global average pooling, and a fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. A controller of a vehicle may activate or deactivate vehicle sensors or vehicle systems of the vehicle based on the scene prediction.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: December 7, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
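The final claim element, a controller activating or deactivating vehicle sensors or systems based on the scene prediction, can be sketched as a lookup policy. The `POLICY` table and the scene and system names below are invented for illustration only:

```python
# Assumed mapping from predicted scene class to system activations.
POLICY = {
    "tunnel":            {"headlights": True,  "lane_keep": True},
    "construction_zone": {"headlights": False, "lane_keep": False},
    "highway":           {"headlights": False, "lane_keep": True},
}

def apply_scene_prediction(scene, systems):
    """Activate or deactivate vehicle systems per the predicted scene class."""
    for name, active in POLICY.get(scene, {}).items():
        systems[name] = active
    return systems

systems = apply_scene_prediction(
    "tunnel", {"headlights": False, "lane_keep": False})
```

An unrecognized scene class leaves the current system states untouched, which is one reasonable fail-safe choice.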
  • Patent number: 11034357
    Abstract: Systems and techniques for scene classification and prediction are provided herein. A first series of image frames of an environment may be captured from a moving vehicle. Traffic participants within the environment may be identified and masked based on a first convolutional neural network (CNN). Temporal classification may be performed to generate a series of image frames associated with temporal predictions, using a scene classification model based on CNNs and a long short-term memory (LSTM) network. Additionally, scene classification may occur based on global average pooling. Feature vectors may be generated based on different series of image frames, and a fusion feature vector may be obtained by performing data fusion based on a first feature vector, a second feature vector, a third feature vector, etc. In this way, a behavior predictor may generate a predicted driver behavior based on the fusion feature vector.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: June 15, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
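The abstract leaves the data-fusion operator open; concatenation of the per-stream feature vectors is one common choice. A minimal sketch (`fuse_features` is a hypothetical helper, and the toy vectors stand in for learned features):

```python
def fuse_features(*vecs):
    """Fuse any number of feature vectors by concatenation — one common
    data-fusion choice; the abstract does not fix the operator."""
    fused = []
    for v in vecs:
        fused.extend(v)
    return fused

# e.g. first, second, and third feature vectors from different frame series
fusion = fuse_features([0.1, 0.2], [0.9], [0.4, 0.5, 0.6])
```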
  • Publication number: 20210081782
    Abstract: According to one aspect, action prediction may be implemented via a spatio-temporal feature pyramid graph convolutional network (ST-FP-GCN) including a first pyramid layer, a second pyramid layer, a third pyramid layer, etc. The first pyramid layer may include a first graph convolution network (GCN), a fusion gate, and a first long short-term memory (LSTM) gate. The second pyramid layer may include a first convolution operator, a first summation operator, a first mask pool operator, a second GCN, a first upsampling operator, and a second LSTM gate. An output summation operator may sum a first LSTM output and a second LSTM output to generate an output indicative of an action prediction for an inputted image sequence and an inputted pose sequence.
    Type: Application
    Filed: April 16, 2020
    Publication date: March 18, 2021
    Inventors: Athmanarayanan Lakshmi Narayanan, Behzad Dariush, Behnoosh Parsa
  • Publication number: 20210004664
    Abstract: According to one aspect, a long short-term memory (LSTM) cell for sensor fusion may include M number of forget gates, M number of input gates, and M number of output gates. The M number of forget gates may receive M sets of sensor encoding data from M number of sensors and a shared hidden state. The M number of input gates may receive the corresponding M sets of sensor encoding data and the shared hidden state. The M number of output gates may generate M partial shared cell state outputs and M partial shared hidden state outputs based on the M sets of sensor encoding data, the shared hidden state, and a shared cell state.
    Type: Application
    Filed: October 25, 2019
    Publication date: January 7, 2021
    Inventor: Athmanarayanan Lakshmi Narayanan
  • Publication number: 20210004687
    Abstract: According to one aspect, a method for sensor fusion associated with a long short-term memory (LSTM) cell may include generating a first adjusted sensor encoding based on a first sensor encoding from a first sensor, generating a second adjusted sensor encoding based on a second sensor encoding from a second sensor, generating a fusion result based on the first adjusted sensor encoding and the second adjusted sensor encoding, generating a first product based on the fusion result and the first adjusted sensor encoding, generating a second product based on the second adjusted sensor encoding, and generating a fused state based on the first product and the second product.
    Type: Application
    Filed: November 25, 2019
    Publication date: January 7, 2021
    Inventors: Athmanarayanan Lakshmi Narayanan, Avinash Siravuru
  • Patent number: 10885398
    Abstract: The present disclosure generally relates to methods and systems for identifying objects from a 3D point cloud and a 2D image. The method may include determining a first set of 3D proposals using Euclidean clustering on the 3D point cloud and determining a second set of 3D proposals from the 3D point cloud based on a 3D convolutional neural network. The method may include pooling the first and second sets of 3D proposals to determine a set of 3D candidates. The method may include projecting the first set of 3D proposals onto the 2D image and determining a first set of 2D proposals using a 2D convolutional neural network. The method may include pooling the projected first set of 3D proposals and the first set of 2D proposals to determine a set of 2D candidates, then pooling the set of 3D candidates and the set of 2D candidates.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: January 5, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Chien-Yi Wang, Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
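The first proposal stage, Euclidean clustering of the 3D point cloud, can be sketched as a naive greedy grouping by pairwise distance. Real pipelines typically accelerate this with a k-d tree, and the `eps` threshold is an assumed parameter:

```python
def euclidean_cluster(points, eps):
    """Greedily group 3-D points: a point joins the first existing cluster
    containing a point within `eps` of it, else starts a new cluster.
    A minimal stand-in for the first 3-D proposal stage."""
    clusters = []
    for p in points:
        placed = False
        for c in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) <= eps ** 2
                   for q in c):
                c.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters

pts = [(0, 0, 0), (0.2, 0, 0), (5, 5, 0), (5.1, 5, 0)]
proposals = euclidean_cluster(pts, eps=0.5)
```

Each resulting cluster would then be wrapped in a 3D bounding box to form a proposal.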
  • Patent number: 10860873
    Abstract: Driver behavior recognition and driver behavior prediction are described herein. A first image sequence including image frames associated with a forward-facing image capture device of a vehicle, and a corresponding vehicle data signal sequence, may be received. A second image sequence including image frames associated with a rear- or driver-facing image capture device of the vehicle may be received. Feature vectors may be generated for the respective sequences using neural networks, such as a convolutional neural network (CNN), a depth CNN, a recurrent neural network (RNN), a fully connected layer, a long short-term memory (LSTM) layer, etc. A fusion feature may be generated by performing data fusion on any combination of the feature vectors. A predicted driver behavior may be generated based on the LSTM layer and n image frames of an image sequence, and may include x prediction frames.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: December 8, 2020
    Assignee: Honda Motor Co., Ltd.
    Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
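The n-frames-in, x-frames-out framing at the end of the abstract can be sketched as a sliding window over an image sequence. Frame contents here are stand-in integers; the networks themselves are omitted:

```python
def rolling_windows(frames, n, x):
    """Yield (observed, horizon) pairs: n observed frames would feed the
    recurrent model, and the next x frames are the prediction horizon."""
    for t in range(len(frames) - n - x + 1):
        yield frames[t:t + n], frames[t + n:t + n + x]

# 10 stand-in frames, 4 observed per window, 2-frame prediction horizon
pairs = list(rolling_windows(list(range(10)), n=4, x=2))
```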
  • Patent number: 10650531
    Abstract: A system, computer-readable medium, and method for improving semantic mapping and traffic participant detection for an autonomous vehicle are provided. The methods and systems may include obtaining a two-dimensional image, obtaining a three-dimensional point cloud comprising a plurality of points, performing semantic segmentation on the image to map objects to a discrete pixel color, overlaying the semantic segmentation on the image to generate an updated image, generating superpixel clusters from the semantic segmentation to group like pixels together, projecting the point cloud onto the updated image comprising the superpixel clusters, and removing points determined to be noise or errors from the point cloud based on determining noisy points within each superpixel cluster.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: May 12, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
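The final step, removing noisy points within each superpixel cluster, can be sketched as per-cluster depth-outlier filtering. The median-distance criterion and the 1.0 unit threshold are assumptions; the abstract leaves the noise test open:

```python
from collections import defaultdict

def filter_noisy_points(points):
    """Group projected points by superpixel id, then drop points whose depth
    is far from their cluster's median depth (assumed noise criterion)."""
    groups = defaultdict(list)
    for p in points:
        groups[p["superpixel"]].append(p)
    kept = []
    for pts in groups.values():
        depths = sorted(p["depth"] for p in pts)
        median = depths[len(depths) // 2]
        kept.extend(p for p in pts if abs(p["depth"] - median) <= 1.0)
    return kept

# one superpixel with three consistent points and one spurious deep point
pts = [{"superpixel": 0, "depth": d} for d in (9.8, 10.0, 10.1, 30.0)]
clean = filter_noisy_points(pts)
```

Because a superpixel groups like pixels of one surface, its points should share similar depth, which is what makes this per-cluster test plausible.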
  • Patent number: 10635927
    Abstract: Performing semantic segmentation of an image can include processing the image using a plurality of convolutional layers to generate one or more feature maps, providing at least one of the one or more feature maps to multiple segmentation branches, and generating segmentations of the image based on the multiple segmentation branches, including providing feedback to, or generating feedback from, at least one of the multiple segmentation branches in performing segmentation in another of the segmentation branches.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: April 28, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yi-Ting Chen, Athmanarayanan Lakshmi Narayanan
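The branch-with-feedback structure can be sketched with toy stand-ins for the convolutional backbone and two segmentation branches, where the second branch is conditioned on the first branch's output. All functions, thresholds, and the per-pixel scalar features below are illustrative assumptions:

```python
def backbone(image):
    """Stand-in for the convolutional layers producing a feature map."""
    return [x * 0.5 for x in image]

def branch_a(feats):
    """First segmentation branch, e.g. a coarse foreground mask."""
    return [1 if f > 0.2 else 0 for f in feats]

def branch_b(feats, feedback):
    """Second branch conditioned on branch A's output — the claimed
    feedback between segmentation branches."""
    return [1 if f > 0.2 and fb else 0 for f, fb in zip(feats, feedback)]

feats = backbone([0.9, 0.1, 0.6])  # three stand-in "pixels"
seg_a = branch_a(feats)
seg_b = branch_b(feats, seg_a)
```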
  • Publication number: 20200086879
    Abstract: Systems and techniques for scene classification and prediction are provided herein. A first series of image frames of an environment may be captured from a moving vehicle. Traffic participants within the environment may be identified and masked based on a first convolutional neural network (CNN). Temporal classification may be performed to generate a series of image frames associated with temporal predictions, using a scene classification model based on CNNs and a long short-term memory (LSTM) network. Additionally, scene classification may occur based on global average pooling. Feature vectors may be generated based on different series of image frames, and a fusion feature vector may be obtained by performing data fusion based on a first feature vector, a second feature vector, a third feature vector, etc. In this way, a behavior predictor may generate a predicted driver behavior based on the fusion feature vector.
    Type: Application
    Filed: June 11, 2019
    Publication date: March 19, 2020
    Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
  • Publication number: 20200089977
    Abstract: Driver behavior recognition and driver behavior prediction are described herein. A first image sequence including image frames associated with a forward-facing image capture device of a vehicle, and a corresponding vehicle data signal sequence, may be received. A second image sequence including image frames associated with a rear- or driver-facing image capture device of the vehicle may be received. Feature vectors may be generated for the respective sequences using neural networks, such as a convolutional neural network (CNN), a depth CNN, a recurrent neural network (RNN), a fully connected layer, a long short-term memory (LSTM) layer, etc. A fusion feature may be generated by performing data fusion on any combination of the feature vectors. A predicted driver behavior may be generated based on the LSTM layer and n image frames of an image sequence, and may include x prediction frames.
    Type: Application
    Filed: June 11, 2019
    Publication date: March 19, 2020
    Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
  • Publication number: 20200089969
    Abstract: According to one aspect, scene classification may be provided. An image capture device may capture a series of image frames of an environment from a moving vehicle. A temporal classifier may classify image frames with temporal predictions and generate a series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification of image frames based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a fully connected layer. The scene classifier may classify image frames based on a CNN, global average pooling, and a fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. A controller of a vehicle may activate or deactivate vehicle sensors or vehicle systems of the vehicle based on the scene prediction.
    Type: Application
    Filed: April 3, 2019
    Publication date: March 19, 2020
    Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
  • Patent number: 10482334
    Abstract: Driver behavior recognition may be provided using a processor and a memory. The memory may receive an image sequence and a corresponding vehicle data signal sequence. The processor may generate or process features for each frame of the respective sequences. The processor may generate a first feature vector based on the image sequence and a first neural network. The processor may generate a second feature vector based on a fully connected layer and the vehicle data signal sequence. The processor may generate a fusion feature by performing data fusion based on the first feature vector and the second feature vector. The processor may process the fusion feature using a long short-term memory (LSTM) layer and store the processed fusion feature as a recognized driver behavior associated with each corresponding frame. The processor may, according to other aspects, generate the fusion feature based on a third feature vector.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: November 19, 2019
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yi-Ting Chen, Athmanarayanan Lakshmi Narayanan
  • Publication number: 20190287254
    Abstract: A system, computer-readable medium, and method for improving semantic mapping and traffic participant detection for an autonomous vehicle are provided. The methods and systems may include obtaining a two-dimensional image, obtaining a three-dimensional point cloud comprising a plurality of points, performing semantic segmentation on the image to map objects to a discrete pixel color, overlaying the semantic segmentation on the image to generate an updated image, generating superpixel clusters from the semantic segmentation to group like pixels together, projecting the point cloud onto the updated image comprising the superpixel clusters, and removing points determined to be noise or errors from the point cloud based on determining noisy points within each superpixel cluster.
    Type: Application
    Filed: March 16, 2018
    Publication date: September 19, 2019
    Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
  • Publication number: 20190188541
    Abstract: The present disclosure generally relates to methods and systems for identifying objects from a 3D point cloud and a 2D image. The method may include determining a first set of 3D proposals using Euclidean clustering on the 3D point cloud and determining a second set of 3D proposals from the 3D point cloud based on a 3D convolutional neural network. The method may include pooling the first and second sets of 3D proposals to determine a set of 3D candidates. The method may include projecting the first set of 3D proposals onto the 2D image and determining a first set of 2D proposals using a 2D convolutional neural network. The method may include pooling the projected first set of 3D proposals and the first set of 2D proposals to determine a set of 2D candidates, then pooling the set of 3D candidates and the set of 2D candidates.
    Type: Application
    Filed: March 16, 2018
    Publication date: June 20, 2019
    Inventors: Chien-Yi WANG, Athmanarayanan LAKSHMI NARAYANAN, Yi-Ting CHEN
  • Publication number: 20180253622
    Abstract: Performing semantic segmentation of an image can include processing the image using a plurality of convolutional layers to generate one or more feature maps, providing at least one of the one or more feature maps to multiple segmentation branches, and generating segmentations of the image based on the multiple segmentation branches, including providing feedback to, or generating feedback from, at least one of the multiple segmentation branches in performing segmentation in another of the segmentation branches.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 6, 2018
    Inventors: Yi-Ting CHEN, Athmanarayanan LAKSHMI NARAYANAN