Patents by Inventor Behzad Dariush
Behzad Dariush has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11873012
Abstract: A system and method for providing social-stage spatio-temporal multi-modal future forecasting that include receiving environment data associated with a surrounding environment of an ego vehicle and implementing graph convolutions to obtain attention weights that are respectively associated with agents located within the surrounding environment. The system and method also include decoding multi-modal trajectories and probabilities for each of the agents. The system and method further include controlling at least one vehicle system of the ego vehicle based on predicted trajectories associated with each of the agents and rankings of the probabilities associated with each of the predicted trajectories.
Type: Grant
Filed: January 28, 2021
Date of Patent: January 16, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Srikanth Malla, Chiho Choi, Behzad Dariush
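The abstract above describes obtaining per-agent attention weights via graph convolutions and ranking multi-modal trajectory hypotheses by probability. A minimal, illustrative sketch of those two ideas follows; the distance-softmax attention and both function names are assumptions for illustration, not the patented method.

```python
import math

def attention_weights(positions, tau=1.0):
    """Toy stand-in for graph-convolution attention: each agent attends to
    every other agent with softmax weights over negative pairwise distances."""
    weights = []
    for i, pi in enumerate(positions):
        scores = [-math.dist(pi, pj) / tau
                  for j, pj in enumerate(positions) if j != i]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # stabilized softmax
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights

def rank_modes(mode_probs):
    """Rank multi-modal trajectory hypotheses by predicted probability."""
    return sorted(range(len(mode_probs)), key=lambda k: mode_probs[k],
                  reverse=True)
```

A downstream controller could then act on the highest-ranked mode, e.g. `rank_modes([0.2, 0.5, 0.3])` returns `[1, 2, 0]`.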
-
Patent number: 11741723
Abstract: A system and method for performing intersection scenario retrieval that includes receiving a video stream of a surrounding environment of an ego vehicle. The system and method also include analyzing the video stream to trim the video stream into video clips of an intersection scene associated with the travel of the ego vehicle. The system and method additionally include annotating the ego vehicle, dynamic objects, and their motion paths that are included within the intersection scene with action units that describe an intersection scenario. The system and method further include retrieving at least one intersection scenario based on a query of an electronic dataset that stores a combination of action units to operably control a presentation of at least one intersection scenario video clip that includes the at least one intersection scenario.
Type: Grant
Filed: June 29, 2020
Date of Patent: August 29, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yi-Ting Chen, Nakul Agarwal, Behzad Dariush, Ahmed Taha
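The retrieval step above amounts to querying a dataset keyed by combinations of action units. A minimal sketch, assuming a simple in-memory index and hypothetical action-unit strings (the real system's storage and query semantics are not specified here):

```python
def build_index(clips):
    """Index clips by the set of action units annotated on them.
    `clips` maps clip id -> iterable of action-unit strings."""
    index = {}
    for clip_id, units in clips.items():
        index.setdefault(frozenset(units), []).append(clip_id)
    return index

def retrieve(index, query_units):
    """Return ids of clips whose annotated action units include
    every queried unit."""
    q = set(query_units)
    return sorted(cid for key, cids in index.items()
                  if q <= key for cid in cids)
```

For example, querying `["left-turn", "yield"]` returns only clips annotated with both units.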
-
Publication number: 20230141037
Abstract: A system and method for providing weakly-supervised online action segmentation that include receiving image data associated with multi-view videos of a procedure, wherein the procedure involves a plurality of atomic actions. The system and method also include analyzing the image data using weakly-supervised action segmentation to identify each of the plurality of atomic actions by using an ordered sequence of action labels. The system and method additionally include training a neural network with data pertaining to the plurality of atomic actions based on the weakly-supervised action segmentation. The system and method further include executing online action segmentation to label atomic actions that are occurring in real-time based on the plurality of atomic actions trained to the neural network.
Type: Application
Filed: February 1, 2022
Publication date: May 11, 2023
Inventors: Reza GHODDOOSIAN, Isht DWIVEDI, Nakul AGARWAL, Chiho CHOI, Behzad DARIUSH
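In weak supervision of this kind, only the ordered transcript of action labels is known, not per-frame labels. A common initialization (not necessarily the one used in this application) splits the frames evenly across the transcript; a sketch:

```python
def uniform_alignment(action_sequence, num_frames):
    """Assign each frame a label by splitting frames evenly across the
    ordered action transcript -- a standard initialization for weakly
    supervised action segmentation when only the transcript is known."""
    n = len(action_sequence)
    labels = []
    for f in range(num_frames):
        # map frame index proportionally onto the transcript
        labels.append(action_sequence[min(f * n // num_frames, n - 1)])
    return labels
```

For a transcript `["pick", "place"]` over 4 frames this yields two frames of each label; a training loop would then refine these pseudo-labels.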
-
Publication number: 20230081247
Abstract: A system and method for future forecasting using action priors that include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data to classify dynamic objects as agents and to detect and annotate actions that are completed by the agents that are located within the surrounding environment of the ego vehicle and analyzing the dynamic data to process an ego motion history that is associated with the ego vehicle that includes vehicle dynamic parameters during a predetermined period of time. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within the surrounding environment of the ego vehicle based on the annotated actions.
Type: Application
Filed: November 16, 2022
Publication date: March 16, 2023
Inventors: Srikanth MALLA, Chiho CHOI, Behzad DARIUSH
-
Patent number: 11577757
Abstract: A system and method for future forecasting using action priors that include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data and detecting actions associated with agents located within the surrounding environment of the ego vehicle and analyzing the dynamic data and processing an ego motion history of the ego vehicle. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within the surrounding environment of the ego vehicle.
Type: Grant
Filed: June 26, 2020
Date of Patent: February 14, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Srikanth Malla, Chiho Choi, Behzad Dariush
-
Patent number: 11580743
Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
Type: Grant
Filed: March 25, 2022
Date of Patent: February 14, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yi-Ting Chen, Behzad Dariush, Nakul Agarwal, Ming-Hsuan Yang
-
Publication number: 20220414887
Abstract: Systems and methods for bird's eye view (BEV) segmentation are provided. In one embodiment, a method includes receiving an input image from an image sensor on an agent. The input image is a perspective space image defined relative to the position and viewing direction of the agent. The method includes extracting features from the input image. The method includes estimating a depth map that includes depth values for pixels of the plurality of pixels of the input image. The method includes generating a 3D point map including points corresponding to the pixels of the input image. The method includes generating a voxel grid by voxelizing the 3D point map into a plurality of voxels. The method includes generating a feature map by extracting feature vectors for pixels based on the points included in the voxels of the plurality of voxels and generating a BEV segmentation based on the feature map.
Type: Application
Filed: March 31, 2022
Publication date: December 29, 2022
Inventors: Isht DWIVEDI, Yi-Ting CHEN, Behzad DARIUSH
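The core geometric steps here are back-projecting per-pixel depths into a 3D point map and binning the points into a voxel grid. A self-contained sketch under a standard pinhole-camera assumption (the intrinsics and voxel representation are illustrative, not taken from the application):

```python
def unproject(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (list of rows) into
    camera-frame 3D points using a pinhole model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

def voxelize(points, size):
    """Group 3D points into voxels of edge length `size`; returns a dict
    mapping integer voxel indices to the points that fall inside."""
    grid = {}
    for x, y, z in points:
        key = (int(x // size), int(y // size), int(z // size))
        grid.setdefault(key, []).append((x, y, z))
    return grid
```

A BEV feature map can then be formed by aggregating per-voxel features along the height axis of this grid.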
-
Patent number: 11514319
Abstract: According to one aspect, action prediction may be implemented via a spatio-temporal feature pyramid graph convolutional network (ST-FP-GCN) including a first pyramid layer, a second pyramid layer, a third pyramid layer, etc. The first pyramid layer may include a first graph convolution network (GCN), a fusion gate, and a first long-short-term-memory (LSTM) gate. The second pyramid layer may include a first convolution operator, a first summation operator, a first mask pool operator, a second GCN, a first upsampling operator, and a second LSTM gate. An output summation operator may sum a first LSTM output and a second LSTM output to generate an output indicative of an action prediction for an inputted image sequence and an inputted pose sequence.
Type: Grant
Filed: April 16, 2020
Date of Patent: November 29, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Behzad Dariush, Behnoosh Parsa
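Two building blocks named in this abstract are a graph convolution step and an output summation operator that fuses two LSTM branches. A toy pure-Python sketch of each, with simplified shapes and a mean-aggregation GCN chosen for illustration (the patented layer structure is more elaborate):

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution step: aggregate neighbor features through the
    row-normalized adjacency, then apply a linear map and ReLU."""
    n = len(adj)
    agg = [[sum(adj[i][k] * feats[k][j] for k in range(n)) / max(sum(adj[i]), 1)
            for j in range(len(feats[0]))] for i in range(n)]
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
             for j in range(len(weight[0]))] for i in range(n)]

def fuse(lstm_out_a, lstm_out_b):
    """Output summation operator: element-wise sum of two pyramid branches."""
    return [a + b for a, b in zip(lstm_out_a, lstm_out_b)]
```

With an identity adjacency and identity weight the GCN step passes features through unchanged, which is a handy sanity check.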
-
Patent number: 11449065
Abstract: A system and method for controlling an autonomous carriage based on user intentions that include receiving data associated with a user that uses the autonomous carriage to transport at least one occupant. The system and method also include analyzing the data associated with the user to determine at least one intention of the user and analyzing a scene associated with a current point of interest location of the autonomous carriage. The system and method further include determining at least one travel path to be followed by the autonomous carriage during autonomous operation that is based on the at least one intention of the user and the scene associated with the current point of interest of the autonomous carriage.
Type: Grant
Filed: September 3, 2019
Date of Patent: September 20, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventor: Behzad Dariush
-
Patent number: 11403850
Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a surrounding environment of a vehicle. The system and method also include completing an action localization model to model a temporal context of actions occurring within the surrounding environment of the vehicle based on the video data and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses from the action localization model and the action adaption model to complete spatio-temporal action localization of individuals and actions that occur within the surrounding environment of the vehicle.
Type: Grant
Filed: February 28, 2020
Date of Patent: August 2, 2022
Assignee: Honda Motor Co., Ltd.
Inventors: Yi-Ting Chen, Behzad Dariush, Nakul Agarwal, Ming-Hsuan Yang
-
Publication number: 20220215661
Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
Type: Application
Filed: March 25, 2022
Publication date: July 7, 2022
Inventors: Yi-Ting CHEN, Behzad DARIUSH, Nakul AGARWAL, Ming-Hsuan YANG
-
Patent number: 11328433
Abstract: According to one aspect, composite field based single shot trajectory prediction may include receiving an image of an environment including a number of agents, extracting a set of features from the image, receiving the image of the environment, encoding a set of trajectories from the image, concatenating the set of features and the set of trajectories from the image to generate an interaction module input, receiving the interaction module input, encoding a set of interactions between the number of agents and between the number of agents and the environment, concatenating the set of interactions and a localization composite field map to generate a decoder input, receiving the decoder input, generating the localization composite field map and an association composite field map, and generating a set of trajectory predictions for the number of agents based on the localization composite field map and the association composite field map.
Type: Grant
Filed: June 30, 2020
Date of Patent: May 10, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Isht Dwivedi, Chiho Choi, Srikanth Malla, Behzad Dariush
-
Patent number: 11302110
Abstract: In some examples, a first set of image data is received, the first set of image data corresponding to images of a first type and being of a person in an environment of a vehicle and including a first plurality of images of the person over a time interval. In some examples, a second set of image data is received, the second set of image data corresponding to images of a second type and being of the person in the environment of the vehicle and including a second plurality of images of the person over the time interval. In some examples, the first set of image data and the second set of image data are processed to determine a recognized action of the person, which includes using a first neural network to determine the recognized action of the person.
Type: Grant
Filed: February 28, 2020
Date of Patent: April 12, 2022
Assignee: Honda Motor Co., Ltd.
Inventors: Jun Hayakawa, Behzad Dariush
-
Publication number: 20220017122
Abstract: A system and method for providing social-stage spatio-temporal multi-modal future forecasting that include receiving environment data associated with a surrounding environment of an ego vehicle and implementing graph convolutions to obtain attention weights that are respectively associated with agents located within the surrounding environment. The system and method also include decoding multi-modal trajectories and probabilities for each of the agents. The system and method further include controlling at least one vehicle system of the ego vehicle based on predicted trajectories associated with each of the agents and rankings of the probabilities associated with each of the predicted trajectories.
Type: Application
Filed: January 28, 2021
Publication date: January 20, 2022
Inventors: Srikanth Malla, Chiho Choi, Behzad Dariush
-
Patent number: 11195030
Abstract: According to one aspect, scene classification may be provided. An image capture device may capture a series of image frames of an environment from a moving vehicle. A temporal classifier may classify image frames with temporal predictions and generate a series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification of image frames based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a fully connected layer. The scene classifier may classify image frames based on a CNN, global average pooling, and a fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. A controller of a vehicle may activate or deactivate vehicle sensors or vehicle systems of the vehicle based on the scene prediction.
Type: Grant
Filed: April 3, 2019
Date of Patent: December 7, 2021
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
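The global-average-pooling step mentioned here reduces a sequence of per-frame class scores to a single scene prediction. A minimal sketch of that pooling-then-argmax pattern, with hypothetical class names and scores standing in for CNN outputs:

```python
def global_average_pool(frame_logits):
    """Average per-frame class scores over time -- the pooling step a
    scene classifier applies before its fully connected layer."""
    t = len(frame_logits)
    return [sum(frame[c] for frame in frame_logits) / t
            for c in range(len(frame_logits[0]))]

def predict_scene(frame_logits, classes):
    """Pick the class with the highest time-averaged score."""
    pooled = global_average_pool(frame_logits)
    return classes[pooled.index(max(pooled))]
```

A controller could then gate sensors or systems on the returned label, e.g. enabling highway-specific systems when the scene prediction is "highway".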
-
Patent number: 11155259
Abstract: A system and method for egocentric-vision based future vehicle localization that include receiving at least one egocentric first person view image of a surrounding environment of a vehicle. The system and method also include encoding at least one past bounding box trajectory associated with at least one traffic participant that is captured within the at least one egocentric first person view image and encoding a dense optical flow of the egocentric first person view image associated with the at least one traffic participant. The system and method further include decoding at least one future bounding box associated with the at least one traffic participant based on a final hidden state of the at least one past bounding box trajectory encoding and the final hidden state of the dense optical flow encoding.
Type: Grant
Filed: April 17, 2019
Date of Patent: October 26, 2021
Assignee: Honda Motor Co., Ltd.
Inventors: Yu Yao, Mingze Xu, Chiho Choi, Behzad Dariush
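The encode-then-decode structure above can be illustrated with a deliberately simple stand-in: collapse the past bounding-box trajectory into a "final hidden state" and roll it forward. Here the state is just the last box plus a constant velocity, which is an assumption for illustration; the abstract's actual encoders are recurrent networks over boxes and dense optical flow.

```python
def encode_trajectory(boxes):
    """Collapse a past bounding-box trajectory (x1, y1, x2, y2 tuples)
    into a final state: the last box and its frame-to-frame velocity."""
    last, prev = boxes[-1], boxes[-2]
    vel = tuple(l - p for l, p in zip(last, prev))
    return last, vel

def decode_future(state, horizon):
    """Roll the state forward to predict `horizon` future boxes."""
    box, vel = state
    return [tuple(b + v * t for b, v in zip(box, vel))
            for t in range(1, horizon + 1)]
```

For a participant drifting right by 2 px per frame, the decoder extrapolates that drift over the prediction horizon.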
-
Publication number: 20210271866
Abstract: In some examples, a first set of image data is received, the first set of image data corresponding to images of a first type and being of a person in an environment of a vehicle and including a first plurality of images of the person over a time interval. In some examples, a second set of image data is received, the second set of image data corresponding to images of a second type and being of the person in the environment of the vehicle and including a second plurality of images of the person over the time interval. In some examples, the first set of image data and the second set of image data are processed to determine a recognized action of the person, which includes using a first neural network to determine the recognized action of the person.
Type: Application
Filed: February 28, 2020
Publication date: September 2, 2021
Inventors: Jun HAYAKAWA, Behzad DARIUSH
-
Publication number: 20210271898
Abstract: A system and method for performing intersection scenario retrieval that includes receiving a video stream of a surrounding environment of an ego vehicle. The system and method also include analyzing the video stream to trim the video stream into video clips of an intersection scene associated with the travel of the ego vehicle. The system and method additionally include annotating the ego vehicle, dynamic objects, and their motion paths that are included within the intersection scene with action units that describe an intersection scenario. The system and method further include retrieving at least one intersection scenario based on a query of an electronic dataset that stores a combination of action units to operably control a presentation of at least one intersection scenario video clip that includes the at least one intersection scenario.
Type: Application
Filed: June 29, 2020
Publication date: September 2, 2021
Inventors: Yi-Ting Chen, Nakul Agarwal, Behzad Dariush, Ahmed Taha
-
Publication number: 20210264617
Abstract: According to one aspect, composite field based single shot trajectory prediction may include receiving an image of an environment including a number of agents, extracting a set of features from the image, receiving the image of the environment, encoding a set of trajectories from the image, concatenating the set of features and the set of trajectories from the image to generate an interaction module input, receiving the interaction module input, encoding a set of interactions between the number of agents and between the number of agents and the environment, concatenating the set of interactions and a localization composite field map to generate a decoder input, receiving the decoder input, generating the localization composite field map and an association composite field map, and generating a set of trajectory predictions for the number of agents based on the localization composite field map and the association composite field map.
Type: Application
Filed: June 30, 2020
Publication date: August 26, 2021
Inventors: Isht Dwivedi, Chiho Choi, Srikanth Malla, Behzad Dariush
-
Patent number: 11059493
Abstract: Systems and methods for estimating velocity of an autonomous vehicle and state information of a surrounding vehicle are provided. In some aspects, the system includes a memory that stores instructions for executing processes for estimating velocity of an autonomous vehicle and state information of the surrounding vehicle and a processor configured to execute the instructions. In various aspects, the processes include: receiving image data from an image capturing device; performing a ground plane estimation by predicting a depth of points on a road surface based on an estimated pixel-level depth; determining a three-dimensional (3D) bounding box of the surrounding vehicle; determining the state information of the surrounding vehicle based on the ground plane estimation and the 3D bounding box; and determining the velocity of the autonomous vehicle based on an immovable object relative to the autonomous vehicle.
Type: Grant
Filed: April 10, 2019
Date of Patent: July 13, 2021
Assignee: HONDA MOTOR CO., LTD.
Inventors: Jun Hayakawa, Behzad Dariush
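The last step of this abstract rests on a simple observation: an object known to be immovable appears to move in the ego frame by exactly the negative of the ego vehicle's own motion. A minimal sketch of that inference (the two-sample, 2D form is an illustrative simplification):

```python
def ego_velocity(static_obj_positions, dt):
    """Estimate ego velocity from two ego-frame observations of an
    immovable object taken `dt` seconds apart: the object's apparent
    displacement is the negative of the ego vehicle's displacement."""
    (x0, y0), (x1, y1) = static_obj_positions
    return (-(x1 - x0) / dt, -(y1 - y0) / dt)
```

For example, a static pole that appears to slide 2 m backward over 0.5 s implies the ego vehicle moved forward at 4 m/s.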