Patents by Inventor Athmanarayanan LAKSHMI NARAYANAN
Athmanarayanan LAKSHMI NARAYANAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240412366
Abstract: Systems, apparatus, articles of manufacture, and methods to detect anomalies in three-dimensional (3D) images are disclosed. Example apparatus disclosed herein generate a first two-dimensional (2D) anomaly map corresponding to a first 2D image slice of a 3D image, the first 2D image slice corresponding to a first axis of the 3D image. Disclosed example apparatus also generate a second 2D anomaly map corresponding to a second 2D image slice of the 3D image, the second 2D image slice corresponding to a second axis of the 3D image. Disclosed example apparatus further generate a 3D anomaly volume based on the first 2D anomaly map and the second 2D anomaly map, the 3D anomaly volume corresponding to the 3D image.
Type: Application
Filed: August 22, 2024
Publication date: December 12, 2024
Inventors: Jiaxiang Jiang, Athmanarayanan Lakshmi Narayanan, Nilesh Ahuja, Ibrahima Jacques Ndiour, Ergin Utku Genc, Mahesh Subedar, Omesh Tickoo
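The two-axis slicing described in the abstract above can be sketched in a few lines. The abstract does not specify the per-slice anomaly model or the rule for combining the two map stacks into a volume, so both are illustrative stand-ins here (absolute deviation from the slice mean, and an element-wise average):

```python
import numpy as np

def slice_anomaly_map(slice_2d):
    # Stand-in 2D anomaly scorer: absolute deviation from the slice mean.
    return np.abs(slice_2d - slice_2d.mean())

def anomaly_volume(volume):
    # First set of 2D maps: slices along the first axis of the 3D image.
    maps_axis0 = np.stack([slice_anomaly_map(volume[i])
                           for i in range(volume.shape[0])], axis=0)
    # Second set of 2D maps: slices along the second axis.
    maps_axis1 = np.stack([slice_anomaly_map(volume[:, j])
                           for j in range(volume.shape[1])], axis=1)
    # Assumed combination rule: element-wise average of the two stacks.
    return 0.5 * (maps_axis0 + maps_axis1)

vol = np.random.rand(4, 5, 6)   # toy 3D image
out = anomaly_volume(vol)       # 3D anomaly volume, same shape as the input
```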
-
Patent number: 11610125
Abstract: According to one aspect, a method for sensor fusion associated with a long short-term memory (LSTM) cell may include generating a first adjusted sensor encoding based on a first sensor encoding from a first sensor, generating a second adjusted sensor encoding based on a second sensor encoding from a second sensor, generating a fusion result based on the first adjusted sensor encoding and the second adjusted sensor encoding, generating a first product based on the fusion result and the first adjusted sensor encoding, generating a second product based on the second adjusted sensor encoding, and generating a fused state based on the first product and the second product.
Type: Grant
Filed: November 25, 2019
Date of Patent: March 21, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Avinash Siravuru
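The fusion steps listed in the abstract above can be traced in a minimal sketch. The abstract does not give the functional form of the adjustment, the fusion result, or the products, so the choices below (tanh transforms, a sigmoid gate, and complement-weighted products) are assumptions for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8  # illustrative encoding size
W1 = rng.normal(size=(d, d)) * 0.1
W2 = rng.normal(size=(d, d)) * 0.1
Wf = rng.normal(size=(d, 2 * d)) * 0.1

def fuse(enc1, enc2):
    adj1 = np.tanh(W1 @ enc1)   # first adjusted sensor encoding
    adj2 = np.tanh(W2 @ enc2)   # second adjusted sensor encoding
    # Fusion result from both adjusted encodings (assumed gate form).
    fusion = sigmoid(Wf @ np.concatenate([adj1, adj2]))
    p1 = fusion * adj1          # first product: fusion result x first encoding
    p2 = (1.0 - fusion) * adj2  # second product (assumed complement weighting)
    return p1 + p2              # fused state from the two products

state = fuse(rng.normal(size=d), rng.normal(size=d))
```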
-
Patent number: 11580365
Abstract: According to one aspect, a long short-term memory (LSTM) cell for sensor fusion may include M number of forget gates, M number of input gates, and M number of output gates. The M number of forget gates may receive M sets of sensor encoding data from M number of sensors and a shared hidden state. The M number of input gates may receive the corresponding M sets of sensor encoding data and the shared hidden state. The M number of output gates may generate M partial shared cell state outputs and M partial shared hidden state outputs based on the M sets of sensor encoding data, the shared hidden state, and a shared cell state.
Type: Grant
Filed: October 25, 2019
Date of Patent: February 14, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventor: Athmanarayanan Lakshmi Narayanan
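A minimal sketch of the shared-state, per-sensor gate structure described above: each of M sensors gets its own forget, input, and output gates, all reading the same shared hidden and cell states. The gate equations follow the standard LSTM form, and the rule for merging the M partial states back into the shared states (an average here) is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
M, d = 3, 4  # M sensors, hidden size d (illustrative values)
Wf = rng.normal(size=(M, d, 2 * d)) * 0.1  # per-sensor forget-gate weights
Wi = rng.normal(size=(M, d, 2 * d)) * 0.1  # per-sensor input-gate weights
Wc = rng.normal(size=(M, d, 2 * d)) * 0.1  # per-sensor candidate weights
Wo = rng.normal(size=(M, d, 2 * d)) * 0.1  # per-sensor output-gate weights

def multi_sensor_lstm_step(encodings, h_shared, c_shared):
    c_parts, h_parts = [], []
    for m in range(M):
        z = np.concatenate([encodings[m], h_shared])
        f = sigmoid(Wf[m] @ z)                   # forget gate m
        i = sigmoid(Wi[m] @ z)                   # input gate m
        g = np.tanh(Wc[m] @ z)                   # candidate from sensor m
        c_m = f * c_shared + i * g               # partial shared cell state
        h_m = sigmoid(Wo[m] @ z) * np.tanh(c_m)  # partial shared hidden state
        c_parts.append(c_m)
        h_parts.append(h_m)
    # Assumed merge: average the M partial states into the shared states.
    return np.mean(h_parts, axis=0), np.mean(c_parts, axis=0)

h, c = multi_sensor_lstm_step(rng.normal(size=(M, d)), np.zeros(d), np.zeros(d))
```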
-
Publication number: 20220391675
Abstract: According to one aspect, a long short-term memory (LSTM) cell for sensor fusion may include a first architecture receiving a first sensor encoding, a first shared cell state, and a first shared hidden state and generating a first set of outputs based on the first sensor encoding, the first shared cell state, and the first shared hidden state, a second architecture receiving a second sensor encoding, the first shared cell state, and the first shared hidden state and generating a second set of outputs based on the second sensor encoding, the first shared cell state, and the first shared hidden state, a hidden state gate generating a second shared hidden state based on the first set of outputs and the second set of outputs, and a cell state gate generating a second shared cell state based on the first set of outputs and the second set of outputs.
Type: Application
Filed: August 15, 2022
Publication date: December 8, 2022
Inventor: Athmanarayanan LAKSHMI NARAYANAN
-
Patent number: 11514319
Abstract: According to one aspect, action prediction may be implemented via a spatio-temporal feature pyramid graph convolutional network (ST-FP-GCN) including a first pyramid layer, a second pyramid layer, a third pyramid layer, etc. The first pyramid layer may include a first graph convolution network (GCN), a fusion gate, and a first long-short-term-memory (LSTM) gate. The second pyramid layer may include a first convolution operator, a first summation operator, a first mask pool operator, a second GCN, a first upsampling operator, and a second LSTM gate. An output summation operator may sum a first LSTM output and a second LSTM output to generate an output indicative of an action prediction for an inputted image sequence and an inputted pose sequence.
Type: Grant
Filed: April 16, 2020
Date of Patent: November 29, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Behzad Dariush, Behnoosh Parsa
-
Patent number: 11195030
Abstract: According to one aspect, scene classification may be provided. An image capture device may capture a series of image frames of an environment from a moving vehicle. A temporal classifier may classify image frames with temporal predictions and generate a series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification of image frames based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a fully connected layer. The scene classifier may classify image frames based on a CNN, global average pooling, and a fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. A controller of a vehicle may activate or deactivate vehicle sensors or vehicle systems of the vehicle based on the scene prediction.
Type: Grant
Filed: April 3, 2019
Date of Patent: December 7, 2021
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
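The classify-then-control pipeline described above can be sketched end to end. The CNN + LSTM temporal classifier is replaced here by a fixed random projection with a per-frame softmax, the aggregation into a scene prediction is an assumed average-then-argmax, and the controller policy (enabling a hypothetical "highway assist" system) is purely illustrative:

```python
import numpy as np

def temporal_predictions(frame_features, n_classes=3):
    # Stand-in for the CNN + LSTM + fully connected temporal classifier:
    # a fixed random projection followed by a per-frame softmax.
    rng = np.random.default_rng(2)
    W = rng.normal(size=(n_classes, frame_features.shape[1]))
    logits = frame_features @ W.T
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def scene_prediction(per_frame_probs):
    # Assumed aggregation: global average pooling over time, then argmax.
    return int(per_frame_probs.mean(axis=0).argmax())

def controller(scene_class, highway_class=0):
    # Illustrative policy: activate a hypothetical highway-assist system
    # only when the predicted scene matches the highway class.
    return {"highway_assist": scene_class == highway_class}

frames = np.random.rand(10, 16)  # 10 frames, 16-dim features per frame
probs = temporal_predictions(frames)
action = controller(scene_prediction(probs))
```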
-
Patent number: 11034357
Abstract: Systems and techniques for scene classification and prediction are provided herein. A first series of image frames of an environment from a moving vehicle may be captured. Traffic participants within the environment may be identified and masked based on a first convolutional neural network (CNN). Temporal classification may be performed to generate a series of image frames associated with temporal predictions based on a scene classification model based on CNNs and a long short-term memory (LSTM) network. Additionally, scene classification may occur based on global average pooling. Feature vectors may be generated based on different series of image frames and a fusion feature vector may be obtained by performing data fusion based on a first feature vector, a second feature vector, a third feature vector, etc. In this way, a behavior predictor may generate a predicted driver behavior based on the fusion feature.
Type: Grant
Filed: June 11, 2019
Date of Patent: June 15, 2021
Assignee: Honda Motor Co., Ltd.
Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
-
Publication number: 20210081782
Abstract: According to one aspect, action prediction may be implemented via a spatio-temporal feature pyramid graph convolutional network (ST-FP-GCN) including a first pyramid layer, a second pyramid layer, a third pyramid layer, etc. The first pyramid layer may include a first graph convolution network (GCN), a fusion gate, and a first long-short-term-memory (LSTM) gate. The second pyramid layer may include a first convolution operator, a first summation operator, a first mask pool operator, a second GCN, a first upsampling operator, and a second LSTM gate. An output summation operator may sum a first LSTM output and a second LSTM output to generate an output indicative of an action prediction for an inputted image sequence and an inputted pose sequence.
Type: Application
Filed: April 16, 2020
Publication date: March 18, 2021
Inventors: Athmanarayanan Lakshmi Narayanan, Behzad Dariush, Behnoosh Parsa
-
Publication number: 20210004664
Abstract: According to one aspect, a long short-term memory (LSTM) cell for sensor fusion may include M number of forget gates, M number of input gates, and M number of output gates. The M number of forget gates may receive M sets of sensor encoding data from M number of sensors and a shared hidden state. The M number of input gates may receive the corresponding M sets of sensor encoding data and the shared hidden state. The M number of output gates may generate M partial shared cell state outputs and M partial shared hidden state outputs based on the M sets of sensor encoding data, the shared hidden state, and a shared cell state.
Type: Application
Filed: October 25, 2019
Publication date: January 7, 2021
Inventor: Athmanarayanan Lakshmi Narayanan
-
Publication number: 20210004687
Abstract: According to one aspect, a method for sensor fusion associated with a long short-term memory (LSTM) cell may include generating a first adjusted sensor encoding based on a first sensor encoding from a first sensor, generating a second adjusted sensor encoding based on a second sensor encoding from a second sensor, generating a fusion result based on the first adjusted sensor encoding and the second adjusted sensor encoding, generating a first product based on the fusion result and the first adjusted sensor encoding, generating a second product based on the second adjusted sensor encoding, and generating a fused state based on the first product and the second product.
Type: Application
Filed: November 25, 2019
Publication date: January 7, 2021
Inventors: Athmanarayanan Lakshmi Narayanan, Avinash Siravuru
-
Patent number: 10885398
Abstract: The present disclosure generally relates to methods and systems for identifying objects from a 3D point cloud and a 2D image. The method may include determining a first set of 3D proposals using Euclidean clustering on the 3D point cloud and determining a second set of 3D proposals from the 3D point cloud based on a 3D convolutional neural network. The method may include pooling the first and second sets of 3D proposals to determine a set of 3D candidates. The method may include projecting the first set of 3D proposals onto the 2D image and determining a first set of 2D proposals using a 2D convolutional neural network. The method may include pooling the projected first set of 3D proposals and the first set of 2D proposals to determine a set of 2D candidates, then pooling the set of 3D candidates and the set of 2D candidates.
Type: Grant
Filed: March 16, 2018
Date of Patent: January 5, 2021
Assignee: HONDA MOTOR CO., LTD.
Inventors: Chien-Yi Wang, Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
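The project-then-pool flow in the abstract above can be sketched with axis-aligned boxes. The abstract does not define the projection or the pooling rule, so both are assumptions here: an orthographic projection that drops the depth extent, and a pooling rule that keeps one set and adds non-overlapping boxes from the other (an NMS-style dedup by IoU):

```python
def iou_2d(a, b):
    # Boxes as [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def project_to_2d(boxes_3d):
    # Assumed orthographic projection: drop the z extent of
    # [x1, y1, z1, x2, y2, z2] boxes.
    return [[b[0], b[1], b[3], b[4]] for b in boxes_3d]

def pool_proposals(set_a, set_b, thresh=0.5):
    # Assumed pooling rule: keep set_a, then add boxes from set_b that do
    # not overlap any kept box by more than `thresh` IoU.
    pooled = list(set_a)
    for b in set_b:
        if all(iou_2d(b, p) <= thresh for p in pooled):
            pooled.append(b)
    return pooled

clusters_3d = [[0, 0, 0, 2, 2, 2]]           # e.g. from Euclidean clustering
projected = project_to_2d(clusters_3d)       # 3D proposals projected to 2D
candidates_2d = pool_proposals(projected, [[0.1, 0.1, 2.1, 2.1], [5, 5, 7, 7]])
```

Here the near-duplicate box `[0.1, 0.1, 2.1, 2.1]` is absorbed by the projected proposal, while the distant box survives the pooling.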
-
Patent number: 10860873
Abstract: Driver behavior recognition and driver behavior prediction are described herein. A first image sequence including image frames associated with a forward-facing image capture device of a vehicle and a corresponding vehicle data signal sequence may be received. A second image sequence including image frames associated with a rear or driver facing image capture device of the vehicle may be received. Feature vectors may be generated for respective sequences using neural networks, such as a convolutional neural network (CNN), a depth CNN, a recurrent neural network (RNN), a fully connected layer, a long short term memory (LSTM) layer, etc. A fusion feature may be generated by performing data fusion on any combination of the feature vectors. A predicted driver behavior may be generated based on the LSTM layer and n image frames of an image sequence and may include x prediction frames.
Type: Grant
Filed: June 11, 2019
Date of Patent: December 8, 2020
Assignee: Honda Motor Co., Ltd.
Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
-
Patent number: 10650531
Abstract: A system, computer-readable medium, and method for improving semantic mapping and traffic participant detection for an autonomous vehicle are provided. The methods and systems may include obtaining a two-dimensional image, obtaining a three-dimensional point cloud comprising a plurality of points, performing semantic segmentation on the image to map objects with a discrete pixel color, overlaying the semantic segmentation on the image to generate an updated image, generating superpixel clusters from the semantic segmentation to group like pixels together, projecting the point cloud onto the updated image comprising the superpixel clusters, and removing points determined to be noise or errors from the point cloud based on determining noisy points within each superpixel cluster.
Type: Grant
Filed: March 16, 2018
Date of Patent: May 12, 2020
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
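The final step above, removing noisy points per superpixel cluster, can be sketched as follows. The abstract does not state the noise criterion, so the depth-deviation-from-median rule used here is an assumption:

```python
import numpy as np

def filter_points(points_uvz, labels, z_thresh=0.5):
    # points_uvz: (N, 3) array of projected points (u, v, depth z).
    # labels: superpixel id for the pixel each point projects onto.
    # Assumed noise rule: drop points whose depth deviates from their
    # superpixel's median depth by more than z_thresh.
    keep = np.ones(len(points_uvz), dtype=bool)
    for sp in np.unique(labels):
        idx = np.where(labels == sp)[0]
        med = np.median(points_uvz[idx, 2])
        keep[idx] = np.abs(points_uvz[idx, 2] - med) <= z_thresh
    return points_uvz[keep]

pts = np.array([[1, 1, 5.0], [1, 2, 5.1], [2, 1, 9.0], [4, 4, 3.0]])
lbl = np.array([0, 0, 0, 1])
clean = filter_points(pts, lbl)  # the depth-9.0 outlier is removed
```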
-
Patent number: 10635927
Abstract: Performing semantic segmentation of an image can include processing the image using a plurality of convolutional layers to generate one or more feature maps, providing at least one of the one or more feature maps to multiple segmentation branches, and generating segmentations of the image based on the multiple segmentation branches, including providing feedback to, or generating feedback from, at least one of the multiple segmentation branches in performing segmentation in another of the segmentation branches.
Type: Grant
Filed: March 5, 2018
Date of Patent: April 28, 2020
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yi-Ting Chen, Athmanarayanan Lakshmi Narayanan
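The branch-with-feedback structure above can be sketched with two 1x1-convolution heads on a shared feature map, where one branch's output is concatenated back into the other branch's input. The heads, channel counts, and the concatenation form of the feedback are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1x1(x, w):
    # A 1x1 convolution is a channel-mixing matmul over an (H, W, C) map.
    return np.tensordot(x, w, axes=([2], [0]))

feat = rng.normal(size=(8, 8, 4))   # shared feature map from the conv layers
w_a = rng.normal(size=(4, 2))       # branch A head (2 classes, assumed)
w_b = rng.normal(size=(4 + 2, 3))   # branch B head also reads A's output

seg_a = conv1x1(feat, w_a)                        # branch A segmentation logits
feedback = np.concatenate([feat, seg_a], axis=2)  # feed A's output back in
seg_b = conv1x1(feedback, w_b)                    # branch B, conditioned on A
```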
-
Publication number: 20200089977
Abstract: Driver behavior recognition and driver behavior prediction are described herein. A first image sequence including image frames associated with a forward-facing image capture device of a vehicle and a corresponding vehicle data signal sequence may be received. A second image sequence including image frames associated with a rear or driver facing image capture device of the vehicle may be received. Feature vectors may be generated for respective sequences using neural networks, such as a convolutional neural network (CNN), a depth CNN, a recurrent neural network (RNN), a fully connected layer, a long short term memory (LSTM) layer, etc. A fusion feature may be generated by performing data fusion on any combination of the feature vectors. A predicted driver behavior may be generated based on the LSTM layer and n image frames of an image sequence and may include x prediction frames.
Type: Application
Filed: June 11, 2019
Publication date: March 19, 2020
Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
-
Publication number: 20200089969
Abstract: According to one aspect, scene classification may be provided. An image capture device may capture a series of image frames of an environment from a moving vehicle. A temporal classifier may classify image frames with temporal predictions and generate a series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification of image frames based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a fully connected layer. The scene classifier may classify image frames based on a CNN, global average pooling, and a fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. A controller of a vehicle may activate or deactivate vehicle sensors or vehicle systems of the vehicle based on the scene prediction.
Type: Application
Filed: April 3, 2019
Publication date: March 19, 2020
Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
-
Publication number: 20200086879
Abstract: Systems and techniques for scene classification and prediction are provided herein. A first series of image frames of an environment from a moving vehicle may be captured. Traffic participants within the environment may be identified and masked based on a first convolutional neural network (CNN). Temporal classification may be performed to generate a series of image frames associated with temporal predictions based on a scene classification model based on CNNs and a long short-term memory (LSTM) network. Additionally, scene classification may occur based on global average pooling. Feature vectors may be generated based on different series of image frames and a fusion feature vector may be obtained by performing data fusion based on a first feature vector, a second feature vector, a third feature vector, etc. In this way, a behavior predictor may generate a predicted driver behavior based on the fusion feature.
Type: Application
Filed: June 11, 2019
Publication date: March 19, 2020
Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
-
Patent number: 10482334
Abstract: Driver behavior recognition may be provided using a processor and a memory. The memory may receive an image sequence and a corresponding vehicle data signal sequence. The processor may generate or process features for each frame of the respective sequences. The processor may generate a first feature vector based on the image sequence and a first neural network. The processor may generate a second feature vector based on a fully connected layer and the vehicle data signal sequence. The processor may generate a fusion feature by performing data fusion based on the first feature vector and the second feature vector. The processor may process the fusion feature using a long short term memory layer and store the processed fusion feature as a recognized driver behavior associated with each corresponding frame. The processor may, according to other aspects, generate the fusion feature based on a third feature vector.
Type: Grant
Filed: September 17, 2018
Date of Patent: November 19, 2019
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yi-Ting Chen, Athmanarayanan Lakshmi Narayanan
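The two-branch data fusion described above can be sketched per frame. The CNN image branch and the fully connected vehicle-signal branch are replaced by trivial stand-ins (a mean-pool and a single ReLU layer), and the fusion is an assumed per-frame concatenation of the two feature vectors:

```python
import numpy as np

rng = np.random.default_rng(4)

def image_features(frames):
    # Stand-in for the image-branch neural network: mean-pool each frame
    # down to a single scalar feature.
    return frames.reshape(len(frames), -1).mean(axis=1, keepdims=True)

def signal_features(signals, w):
    # Stand-in for the fully connected layer over vehicle data signals.
    return np.maximum(signals @ w, 0.0)

frames = rng.random((5, 32, 32))   # 5-frame image sequence (toy size)
signals = rng.random((5, 3))       # e.g. speed, steering, brake per frame
w = rng.normal(size=(3, 4))

f1 = image_features(frames)                # first feature vector per frame
f2 = signal_features(signals, w)           # second feature vector per frame
fusion = np.concatenate([f1, f2], axis=1)  # assumed fusion: concatenation
```

A third feature vector (as in the other aspects mentioned) would simply join the same concatenation.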
-
Publication number: 20190287254
Abstract: A system, computer-readable medium, and method for improving semantic mapping and traffic participant detection for an autonomous vehicle are provided. The methods and systems may include obtaining a two-dimensional image, obtaining a three-dimensional point cloud comprising a plurality of points, performing semantic segmentation on the image to map objects with a discrete pixel color, overlaying the semantic segmentation on the image to generate an updated image, generating superpixel clusters from the semantic segmentation to group like pixels together, projecting the point cloud onto the updated image comprising the superpixel clusters, and removing points determined to be noise or errors from the point cloud based on determining noisy points within each superpixel cluster.
Type: Application
Filed: March 16, 2018
Publication date: September 19, 2019
Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
-
Publication number: 20190188541
Abstract: The present disclosure generally relates to methods and systems for identifying objects from a 3D point cloud and a 2D image. The method may include determining a first set of 3D proposals using Euclidean clustering on the 3D point cloud and determining a second set of 3D proposals from the 3D point cloud based on a 3D convolutional neural network. The method may include pooling the first and second sets of 3D proposals to determine a set of 3D candidates. The method may include projecting the first set of 3D proposals onto the 2D image and determining a first set of 2D proposals using a 2D convolutional neural network. The method may include pooling the projected first set of 3D proposals and the first set of 2D proposals to determine a set of 2D candidates, then pooling the set of 3D candidates and the set of 2D candidates.
Type: Application
Filed: March 16, 2018
Publication date: June 20, 2019
Inventors: Chien-Yi WANG, Athmanarayanan LAKSHMI NARAYANAN, Yi-Ting CHEN