Patents by Inventor Behzad Dariush
Behzad Dariush has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12169964
Abstract: A system and method for providing weakly-supervised online action segmentation that include receiving image data associated with multi-view videos of a procedure, wherein the procedure involves a plurality of atomic actions. The system and method also include analyzing the image data using weakly-supervised action segmentation to identify each of the plurality of atomic actions by using an ordered sequence of action labels. The system and method additionally include training a neural network with data pertaining to the plurality of atomic actions based on the weakly-supervised action segmentation. The system and method further include executing online action segmentation to label atomic actions that are occurring in real-time based on the plurality of atomic actions trained to the neural network.
Type: Grant
Filed: February 1, 2022
Date of Patent: December 17, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Reza Ghoddoosian, Isht Dwivedi, Nakul Agarwal, Chiho Choi, Behzad Dariush
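To make the weak-supervision idea concrete, the sketch below aligns frame-wise action scores to an ordered transcript of action labels with dynamic programming, which is one common way such an ordering constraint can be applied; the function name, shapes, and scoring are illustrative assumptions, not the patented implementation.

```python
# A minimal sketch, assuming frame-wise log-probabilities from some recognition model:
# dynamic programming aligns frames to the ordered transcript of action labels.
import numpy as np

def align_transcript(frame_scores, transcript):
    """frame_scores: (T, num_classes) log-probabilities; transcript: ordered action labels."""
    T, K = frame_scores.shape[0], len(transcript)
    dp = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)               # 0 = stay in current segment, 1 = advance
    dp[0, 0] = frame_scores[0, transcript[0]]
    for t in range(1, T):
        for k in range(K):
            stay = dp[t - 1, k]
            move = dp[t - 1, k - 1] if k > 0 else -np.inf
            back[t, k] = int(move > stay)
            dp[t, k] = max(stay, move) + frame_scores[t, transcript[k]]
    labels, k = [], K - 1                            # backtrack from the last frame
    for t in range(T - 1, 0, -1):
        labels.append(transcript[k])
        k -= back[t, k]
    labels.append(transcript[0])
    return labels[::-1]

# Toy usage: 10 frames, 3 action classes, transcript says action 0 happens before action 2.
scores = np.log(np.random.dirichlet(np.ones(3), size=10))
print(align_transcript(scores, [0, 2]))
```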
-
Publication number: 20240404297
Abstract: Systems and methods for training a neural network for generating a reasoning statement are provided. In one embodiment, a method includes receiving sensor data from a perspective of an ego agent. The method includes identifying a plurality of captured objects in at least one roadway environment. The method includes receiving a set of ranking classifications for a captured object of the plurality of captured objects. The annotator reasoning statement is a natural language explanation for the applied attribute. The method includes generating a training dataset for the object type including the annotator reasoning statements of the set of ranking classifications that include the applied attribute from the plurality of importance attributes in the importance category. The method includes training the neural network to generate a generated reasoning statement based on the training dataset in response to a training agent detecting a detected object of the object type.
Type: Application
Filed: June 5, 2023
Publication date: December 5, 2024
Inventors: Enna SACHDEVA, Nakul AGARWAL, Sean F. ROELOFS, Jiachen LI, Behzad DARIUSH, Chiho CHOI
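As a rough illustration of assembling such a training set, the sketch below pairs each annotator reasoning statement with its object type and applied importance attribute; all field names and the filtering rule are assumptions for illustration, not the published method.

```python
# A minimal sketch, assuming annotator records with an object type, an applied
# importance attribute, and a free-text reasoning statement.
from dataclasses import dataclass

@dataclass
class RankingClassification:
    object_type: str            # e.g. "pedestrian"
    importance_attribute: str   # attribute applied by the annotator
    reasoning_statement: str    # natural-language explanation for the attribute

def build_training_dataset(classifications, object_type, importance_category_attrs):
    """Keep only annotations whose applied attribute belongs to the importance category."""
    dataset = []
    for c in classifications:
        if c.object_type == object_type and c.importance_attribute in importance_category_attrs:
            # The source text conditions a generator; the target is the annotator's explanation.
            dataset.append({
                "source": f"{c.object_type} | {c.importance_attribute}",
                "target": c.reasoning_statement,
            })
    return dataset

examples = [
    RankingClassification("pedestrian", "high_importance",
                          "The pedestrian is stepping into the ego lane."),
    RankingClassification("parked_car", "low_importance",
                          "The car is parked far from the planned path."),
]
print(build_training_dataset(examples, "pedestrian", {"high_importance"}))
```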
-
Publication number: 20240394309
Abstract: According to one aspect, causal graph chain reasoning predictions may be implemented by generating a causal graph of one or more participants within an operating environment including an ego-vehicle, one or more agents, and one or more potential obstacles, generating a prediction for each participant within the operating environment based on the causal graph, and generating an action for the ego-vehicle based on the prediction for each participant within the operating environment. Nodes of the causal graph may represent the ego-vehicle or one or more of the agents. Edges of the causal graph may represent a causal relationship between two nodes of the causal graph. The causal relationship may be a leader-follower relationship, a trajectory-dependency relationship, or a collision relationship.
Type: Application
Filed: May 24, 2023
Publication date: November 28, 2024
Inventors: Aolin XU, Enna SACHDEVA, Yichen SONG, Teruhisa MISU, Behzad DARIUSH, Kikuo FUJIMURA, Kentaro YAMADA
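A minimal sketch of such a typed causal graph is shown below using networkx, with the relation names taken from the abstract; the participants, edge directions, and the placeholder prediction rule are assumptions, not the published mechanism.

```python
# A typed causal graph over scene participants and a simple chain-reasoning pass
# in topological order. The learned predictor is replaced by a placeholder.
import networkx as nx

G = nx.DiGraph()
for node, kind in [("ego", "ego-vehicle"), ("car_1", "agent"),
                   ("ped_1", "agent"), ("cone_1", "obstacle")]:
    G.add_node(node, kind=kind)

# Edges encode a causal relationship between two participants.
G.add_edge("car_1", "ego", relation="leader-follower")          # ego follows car_1
G.add_edge("cone_1", "car_1", relation="trajectory-dependency")
G.add_edge("ped_1", "ego", relation="collision")

predictions = {}
for node in nx.topological_sort(G):
    causes = [(u, G.edges[u, node]["relation"]) for u in G.predecessors(node)]
    # Placeholder: a real system would run a learned predictor conditioned on the causes.
    predictions[node] = {"influenced_by": causes}

# An ego action could then be chosen from the per-participant predictions.
print(predictions["ego"])
```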
-
Publication number: 20240371166
Abstract: According to one aspect, weakly-supervised action segmentation may include performing feature extraction to extract one or more features associated with a current frame of a video including a series of one or more actions, feeding one or more of the features to a recognition network to generate a predicted action score for the current frame of the video, feeding one or more of the features and the predicted action score to an action transition model to generate a potential subsequent action, feeding the potential subsequent action and the predicted action score to a hybrid segmentation model to generate a predicted sequence of actions from a first frame of the video to the current frame of the video, and segmenting or labeling one or more frames of the video based on the predicted sequence of actions from the first frame of the video to the current frame of the video.
Type: Application
Filed: April 27, 2023
Publication date: November 7, 2024
Inventors: Reza GHODDOOSIAN, Isht DWIVEDI, Nakul AGARWAL, Behzad DARIUSH
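The per-frame pipeline described above (recognition network, action transition model, hybrid segmentation step) can be sketched as three small PyTorch modules wired in sequence; the layer sizes and the greedy way the two score streams are blended here are assumptions, not the published architecture.

```python
# A minimal PyTorch sketch of the three-stage per-frame pipeline, assuming
# precomputed frame features and linear stand-ins for the learned modules.
import torch
import torch.nn as nn

FEAT_DIM, NUM_ACTIONS = 64, 5

recognition = nn.Linear(FEAT_DIM, NUM_ACTIONS)                  # frame-wise action scores
transition = nn.Linear(FEAT_DIM + NUM_ACTIONS, NUM_ACTIONS)     # scores for a potential next action

def segment_online(frame_features):
    """frame_features: (T, FEAT_DIM). Returns a predicted label for every frame seen so far."""
    predicted_sequence = []
    for feat in frame_features:
        action_scores = recognition(feat)
        next_scores = transition(torch.cat([feat, action_scores]))
        # "Hybrid" step (assumption): blend current recognition with the transition prior.
        combined = action_scores.softmax(-1) + 0.5 * next_scores.softmax(-1)
        predicted_sequence.append(int(combined.argmax()))
    return predicted_sequence

print(segment_online(torch.randn(8, FEAT_DIM)))
```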
-
Patent number: 12073563
Abstract: Systems and methods for bird's eye view (BEV) segmentation are provided. In one embodiment, a method includes receiving an input image from an image sensor on an agent. The input image is a perspective space image defined relative to the position and viewing direction of the agent. The method includes extracting features from the input image. The method includes estimating a depth map that includes depth values for the plurality of pixels of the input image. The method includes generating a 3D point map including points corresponding to the pixels of the input image. The method includes generating a voxel grid by voxelizing the 3D point map into a plurality of voxels. The method includes generating a feature map by extracting feature vectors for pixels based on the points included in the voxels of the plurality of voxels and generating a BEV segmentation based on the feature map.
Type: Grant
Filed: March 31, 2022
Date of Patent: August 27, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Isht Dwivedi, Yi-Ting Chen, Behzad Dariush
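The geometric core of this pipeline (back-project per-pixel depth to 3D, bin the points into a grid, pool per-pixel features per cell) can be illustrated in a few lines of NumPy; the camera intrinsics, grid extents, and mean pooling are assumptions, not the patented network.

```python
# A minimal sketch, assuming a pinhole camera and a metric BEV grid in front of the agent.
import numpy as np

def image_to_bev(depth, features, fx=500.0, cx=320.0, bev_shape=(50, 50), cell=1.0):
    """depth: (H, W) in metres; features: (H, W, C). Returns a (bev_H, bev_W, C) grid."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]                        # v (height direction) is unused in the BEV
    x = (u - cx) * depth / fx                        # lateral offset from back-projection
    z = depth                                        # forward distance
    col = np.clip((x / cell + bev_shape[1] // 2).astype(int), 0, bev_shape[1] - 1)
    row = np.clip((z / cell).astype(int), 0, bev_shape[0] - 1)
    bev = np.zeros(bev_shape + (features.shape[-1],))
    count = np.full(bev_shape, 1e-6)
    np.add.at(bev, (row, col), features)             # accumulate features per BEV cell
    np.add.at(count, (row, col), 1.0)
    return bev / count[..., None]                    # mean-pool the accumulated features

depth = np.random.uniform(2.0, 40.0, (480, 640))
feats = np.random.randn(480, 640, 8)
print(image_to_bev(depth, feats).shape)              # (50, 50, 8)
```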
-
Patent number: 11873012
Abstract: A system and method for providing social-stage spatio-temporal multi-modal future forecasting that include receiving environment data associated with a surrounding environment of an ego vehicle and implementing graph convolutions to obtain attention weights that are respectively associated with agents that are located within the surrounding environment. The system and method also include decoding multi-modal trajectories and probabilities for each of the agents. The system and method further include controlling at least one vehicle system of the ego vehicle based on predicted trajectories associated with each of the agents and the rankings of the probabilities associated with each of the predicted trajectories.
Type: Grant
Filed: January 28, 2021
Date of Patent: January 16, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventors: Srikanth Malla, Chiho Choi, Behzad Dariush
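The pattern of attention weights over agents followed by a multi-modal trajectory decoder can be sketched compactly in PyTorch; the attention form, the number of modes, and all dimensions below are assumptions, not the patented network.

```python
# A minimal PyTorch sketch: agent-to-agent attention, then a decoder producing
# K trajectory modes with per-mode probabilities for every agent.
import torch
import torch.nn as nn

N_AGENTS, D, K_MODES, HORIZON = 4, 16, 3, 12

class SocialForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(D, D), nn.Linear(D, D), nn.Linear(D, D)
        self.traj_head = nn.Linear(D, K_MODES * HORIZON * 2)    # x, y per future step
        self.prob_head = nn.Linear(D, K_MODES)

    def forward(self, agent_states):                            # (N, D) encoded agent histories
        attn = torch.softmax(self.q(agent_states) @ self.k(agent_states).T / D ** 0.5, dim=-1)
        context = attn @ self.v(agent_states)                   # attention-weighted agent features
        trajs = self.traj_head(context).view(-1, K_MODES, HORIZON, 2)
        probs = torch.softmax(self.prob_head(context), dim=-1)  # per-agent mode probabilities
        return trajs, probs, attn

trajs, probs, attn = SocialForecaster()(torch.randn(N_AGENTS, D))
print(trajs.shape, probs.shape)                                 # (4, 3, 12, 2) (4, 3)
```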
-
Patent number: 11741723
Abstract: A system and method for performing intersection scenario retrieval that includes receiving a video stream of a surrounding environment of an ego vehicle. The system and method also include analyzing the video stream to trim the video stream into video clips of an intersection scene associated with the travel of the ego vehicle. The system and method additionally include annotating the ego vehicle, dynamic objects, and their motion paths that are included within the intersection scene with action units that describe an intersection scenario. The system and method further include retrieving at least one intersection scenario based on a query of an electronic dataset that stores a combination of action units to operably control a presentation of at least one intersection scenario video clip that includes the at least one intersection scenario.
Type: Grant
Filed: June 29, 2020
Date of Patent: August 29, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yi-Ting Chen, Nakul Agarwal, Behzad Dariush, Ahmed Taha
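The retrieval step can be pictured as matching a queried combination of action units against the units annotated on each clip; the clip names and action-unit vocabulary below are invented for illustration only.

```python
# A minimal sketch of querying an action-unit-annotated clip store: a clip matches
# when its annotation contains every action unit in the query.
clips = [
    {"clip": "intersection_001.mp4", "action_units": {"ego_left_turn", "pedestrian_crossing"}},
    {"clip": "intersection_002.mp4", "action_units": {"ego_straight", "oncoming_right_turn"}},
    {"clip": "intersection_003.mp4", "action_units": {"ego_left_turn", "oncoming_straight"}},
]

def retrieve(query_units):
    """Return the clips annotated with every action unit in the query."""
    return [c["clip"] for c in clips if query_units <= c["action_units"]]

print(retrieve({"ego_left_turn"}))                         # two matches
print(retrieve({"ego_left_turn", "pedestrian_crossing"}))  # one match
```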
-
Publication number: 20230141037
Abstract: A system and method for providing weakly-supervised online action segmentation that include receiving image data associated with multi-view videos of a procedure, wherein the procedure involves a plurality of atomic actions. The system and method also include analyzing the image data using weakly-supervised action segmentation to identify each of the plurality of atomic actions by using an ordered sequence of action labels. The system and method additionally include training a neural network with data pertaining to the plurality of atomic actions based on the weakly-supervised action segmentation. The system and method further include executing online action segmentation to label atomic actions that are occurring in real-time based on the plurality of atomic actions trained to the neural network.
Type: Application
Filed: February 1, 2022
Publication date: May 11, 2023
Inventors: Reza GHODDOOSIAN, Isht DWIVEDI, Nakul AGARWAL, Chiho CHOI, Behzad DARIUSH
-
Publication number: 20230081247
Abstract: A system and method for future forecasting using action priors that include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data to classify dynamic objects as agents and to detect and annotate actions that are completed by the agents that are located within the surrounding environment of the ego vehicle and analyzing the dynamic data to process an ego motion history that is associated with the ego vehicle that includes vehicle dynamic parameters during a predetermined period of time. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within the surrounding environment of the ego vehicle based on the annotated actions.
Type: Application
Filed: November 16, 2022
Publication date: March 16, 2023
Inventors: Srikanth MALLA, Chiho CHOI, Behzad DARIUSH
-
Patent number: 11577757
Abstract: A system and method for future forecasting using action priors that include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data and detecting actions associated with agents located within the surrounding environment of the ego vehicle and analyzing the dynamic data and processing an ego motion history of the ego vehicle. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within the surrounding environment of the ego vehicle.
Type: Grant
Filed: June 26, 2020
Date of Patent: February 14, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Srikanth Malla, Chiho Choi, Behzad Dariush
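One way to picture "action priors" for forecasting is to condition a trajectory decoder on a detected action label alongside the encoded motion history; the action vocabulary, network sizes, and horizons below are assumptions for illustration, not the patented model.

```python
# A minimal PyTorch sketch: past motion is encoded with a GRU, the detected action
# is embedded, and the two are fused to predict future positions.
import torch
import torch.nn as nn

ACTIONS = ["walking", "waiting", "crossing"]        # assumed action vocabulary
PAST, FUTURE = 8, 12

class ActionPriorForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.action_embed = nn.Embedding(len(ACTIONS), hidden)
        self.decoder = nn.Linear(2 * hidden, FUTURE * 2)

    def forward(self, past_xy, action_id):          # past_xy: (B, PAST, 2)
        _, h = self.encoder(past_xy)                # motion-history summary, (1, B, hidden)
        joint = torch.cat([h[-1], self.action_embed(action_id)], dim=-1)
        return self.decoder(joint).view(-1, FUTURE, 2)

model = ActionPriorForecaster()
future = model(torch.randn(5, PAST, 2), torch.tensor([2, 0, 1, 2, 0]))
print(future.shape)                                 # (5, 12, 2)
```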
-
Patent number: 11580743
Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
Type: Grant
Filed: March 25, 2022
Date of Patent: February 14, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yi-Ting Chen, Behzad Dariush, Nakul Agarwal, Ming-Hsuan Yang
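The "combining losses" step can be illustrated with a standard unsupervised domain adaptation recipe: a supervised action loss on source-domain key frames plus an adversarial domain-confusion loss over source and target features via gradient reversal. The gradient-reversal trick and all sizes below are common stand-ins chosen for the sketch, not necessarily the patented mechanism.

```python
# A minimal PyTorch sketch of combining a source-domain action loss with an
# adversarial domain loss computed on both source and target features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad                       # reverse gradients into the feature extractor

feat_dim, num_actions = 128, 10
features = nn.Linear(512, feat_dim)        # stand-in for a key-frame feature extractor
action_head = nn.Linear(feat_dim, num_actions)
domain_head = nn.Linear(feat_dim, 2)       # source vs. target classifier

src_x, src_y = torch.randn(16, 512), torch.randint(0, num_actions, (16,))
tgt_x = torch.randn(16, 512)               # target domain has no action labels

src_f, tgt_f = features(src_x), features(tgt_x)
action_loss = nn.functional.cross_entropy(action_head(src_f), src_y)
domain_logits = domain_head(GradReverse.apply(torch.cat([src_f, tgt_f])))
domain_labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
domain_loss = nn.functional.cross_entropy(domain_logits, domain_labels)
total_loss = action_loss + domain_loss     # the combined objective
total_loss.backward()
print(float(total_loss))
```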
-
Publication number: 20220414887
Abstract: Systems and methods for bird's eye view (BEV) segmentation are provided. In one embodiment, a method includes receiving an input image from an image sensor on an agent. The input image is a perspective space image defined relative to the position and viewing direction of the agent. The method includes extracting features from the input image. The method includes estimating a depth map that includes depth values for the plurality of pixels of the input image. The method includes generating a 3D point map including points corresponding to the pixels of the input image. The method includes generating a voxel grid by voxelizing the 3D point map into a plurality of voxels. The method includes generating a feature map by extracting feature vectors for pixels based on the points included in the voxels of the plurality of voxels and generating a BEV segmentation based on the feature map.
Type: Application
Filed: March 31, 2022
Publication date: December 29, 2022
Inventors: Isht DWIVEDI, Yi-Ting CHEN, Behzad DARIUSH
-
Patent number: 11514319
Abstract: According to one aspect, action prediction may be implemented via a spatio-temporal feature pyramid graph convolutional network (ST-FP-GCN) including a first pyramid layer, a second pyramid layer, a third pyramid layer, etc. The first pyramid layer may include a first graph convolution network (GCN), a fusion gate, and a first long-short-term-memory (LSTM) gate. The second pyramid layer may include a first convolution operator, a first summation operator, a first mask pool operator, a second GCN, a first upsampling operator, and a second LSTM gate. An output summation operator may sum a first LSTM output and a second LSTM output to generate an output indicative of an action prediction for an inputted image sequence and an inputted pose sequence.
Type: Grant
Filed: April 16, 2020
Date of Patent: November 29, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Behzad Dariush, Behnoosh Parsa
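In the spirit of the pyramid-of-GCN-and-LSTM structure described above, the sketch below runs a graph convolution over pose joints per frame, an LSTM over time, and sums the outputs of two branches; the adjacency, pooling, and sizes are assumptions and the sketch is far simpler than the patented ST-FP-GCN.

```python
# A minimal PyTorch sketch: per-frame graph convolution over joints, an LSTM over
# time, and an output summation across two pyramid-style branches.
import torch
import torch.nn as nn

JOINTS, FEAT, CLASSES, T = 15, 8, 4, 20
adj = torch.eye(JOINTS)                              # placeholder adjacency (self-loops only)

class PyramidBranch(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.gcn = nn.Linear(FEAT, hidden)           # per-joint transform; graph mixing is below
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, CLASSES)

    def forward(self, x):                            # x: (B, T, JOINTS, FEAT)
        h = adj @ self.gcn(x)                        # graph convolution: A @ X @ W
        h = h.mean(dim=2)                            # pool joints -> (B, T, hidden)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])                 # prediction from the last time step

branch_fine, branch_coarse = PyramidBranch(), PyramidBranch()
x = torch.randn(2, T, JOINTS, FEAT)
logits = branch_fine(x) + branch_coarse(x)           # output summation across branches
print(logits.shape)                                  # (2, 4)
```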
-
Patent number: 11449065
Abstract: A system and method for controlling an autonomous carriage based on user intentions that include receiving data associated with a user that uses the autonomous carriage to transport at least one occupant. The system and method also include analyzing the data associated with the user to determine at least one intention of the user and analyzing a scene associated with a current point of interest location of the autonomous carriage. The system and method further include determining at least one travel path to be followed by the autonomous carriage during autonomous operation that is based on the at least one intention of the user and the scene associated with the current point of interest of the autonomous carriage.
Type: Grant
Filed: September 3, 2019
Date of Patent: September 20, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventor: Behzad Dariush
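The decision pattern of combining an inferred intention with the analyzed scene to pick a travel path can be illustrated with a toy scoring rule; the intentions, scene attributes, and weights below are invented for the sketch and do not come from the patent.

```python
# A minimal sketch: score candidate travel paths against the user's intention and
# a few scene attributes, then pick the best one.
def choose_travel_path(intention, scene, candidate_paths):
    """Select the path that best matches the intention while respecting the scene."""
    def score(path):
        s = 0.0
        if intention == "soothe_occupant" and path["motion"] == "gentle_loop":
            s += 1.0
        if intention == "reach_exit" and path["destination"] == scene["nearest_exit"]:
            s += 1.0
        s -= 2.0 * path["crowd_density"]          # avoid congested regions of the scene
        return s
    return max(candidate_paths, key=score)

scene = {"nearest_exit": "north_gate", "location": "park"}
paths = [
    {"name": "loop_a", "motion": "gentle_loop", "destination": None, "crowd_density": 0.1},
    {"name": "exit_route", "motion": "direct", "destination": "north_gate", "crowd_density": 0.4},
]
print(choose_travel_path("soothe_occupant", scene, paths)["name"])   # loop_a
```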
-
Patent number: 11403850
Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a surrounding environment of a vehicle. The system and method also include completing an action localization model to model a temporal context of actions occurring within the surrounding environment of the vehicle based on the video data and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses from the action localization model and the action adaption model to complete spatio-temporal action localization of individuals and actions that occur within the surrounding environment of the vehicle.
Type: Grant
Filed: February 28, 2020
Date of Patent: August 2, 2022
Assignee: Honda Motor Co., Ltd.
Inventors: Yi-Ting Chen, Behzad Dariush, Nakul Agarwal, Ming-Hsuan Yang
-
Publication number: 20220215661
Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
Type: Application
Filed: March 25, 2022
Publication date: July 7, 2022
Inventors: Yi-Ting CHEN, Behzad DARIUSH, Nakul AGARWAL, Ming-Hsuan YANG
-
Patent number: 11328433
Abstract: According to one aspect, composite field based single shot trajectory prediction may include receiving an image of an environment including a number of agents, extracting a set of features from the image, receiving the image of the environment, encoding a set of trajectories from the image, concatenating the set of features and the set of trajectories from the image to generate an interaction module input, receiving the interaction module input, encoding a set of interactions between the number of agents and between the number of agents and the environment, concatenating the set of interactions and a localization composite field map to generate a decoder input, receiving the decoder input, generating the localization composite field map and an association composite field map, and generating a set of trajectory predictions for the number of agents based on the localization composite field map and the association composite field map.
Type: Grant
Filed: June 30, 2020
Date of Patent: May 10, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Isht Dwivedi, Chiho Choi, Srikanth Malla, Behzad Dariush
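A rough way to picture the two output fields named above is as a localization map that says where agents are and an association map whose offsets chain predicted positions into trajectories; the decoding rule, thresholds, and field contents below are assumptions for illustration, not the patented decoder.

```python
# A minimal NumPy sketch: pick peaks from a localization field, then follow the
# per-cell offsets of an association field for a fixed number of steps.
import numpy as np

H, W, STEPS = 32, 32, 5
localization = np.zeros((H, W)); localization[10, 12] = 1.0             # one confident agent
association = np.random.randn(H, W, 2) * 0.2 + np.array([1.0, 0.5])     # per-cell step offsets

def decode_trajectories(localization, association, threshold=0.5, steps=STEPS):
    trajectories = []
    for r, c in zip(*np.where(localization > threshold)):               # peak picking
        point, traj = np.array([r, c], dtype=float), []
        for _ in range(steps):
            ri = int(np.clip(point[0], 0, H - 1))
            ci = int(np.clip(point[1], 0, W - 1))
            point = point + association[ri, ci]                         # follow the offsets
            traj.append(point.copy())
        trajectories.append(np.stack(traj))
    return trajectories

print(decode_trajectories(localization, association)[0].shape)          # (5, 2)
```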
-
Patent number: 11302110
Abstract: In some examples, a first set of image data is received, the first set of image data corresponding to images of a first type and being of a person in an environment of a vehicle and including a first plurality of images of the person over a time interval. In some examples, a second set of image data is received, the second set of image data corresponding to images of a second type and being of the person in the environment of the vehicle and including a second plurality of images of the person over the time interval. In some examples, the first set of image data and the second set of image data are processed to determine a recognized action of the person, which includes using a first neural network to determine the recognized action of the person.
Type: Grant
Filed: February 28, 2020
Date of Patent: April 12, 2022
Assignee: Honda Motor Co., Ltd.
Inventors: Jun Hayakawa, Behzad Dariush
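Processing two image types over a time interval to recognize an action can be sketched as a two-stream network whose per-frame features are fused and aggregated over time; treating the two types as an RGB-like stream and a single-channel stream, and all layer sizes, are assumptions, not the patented network.

```python
# A minimal PyTorch sketch of two-stream, temporally aggregated action recognition.
import torch
import torch.nn as nn

T, CLASSES = 16, 6

class TwoStreamActionNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.stream_a = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.stream_b = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.temporal = nn.GRU(16, hidden, batch_first=True)    # fuse per-frame features over time
        self.head = nn.Linear(hidden, CLASSES)

    def forward(self, frames_a, frames_b):          # (B, T, 3, H, W), (B, T, 1, H, W)
        B = frames_a.shape[0]
        fa = self.stream_a(frames_a.flatten(0, 1)).flatten(1).view(B, T, -1)
        fb = self.stream_b(frames_b.flatten(0, 1)).flatten(1).view(B, T, -1)
        _, h = self.temporal(torch.cat([fa, fb], dim=-1))
        return self.head(h[-1])                     # recognized-action logits

net = TwoStreamActionNet()
print(net(torch.randn(2, T, 3, 64, 64), torch.randn(2, T, 1, 64, 64)).shape)   # (2, 6)
```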
-
Publication number: 20220017122
Abstract: A system and method for providing social-stage spatio-temporal multi-modal future forecasting that include receiving environment data associated with a surrounding environment of an ego vehicle and implementing graph convolutions to obtain attention weights that are respectively associated with agents that are located within the surrounding environment. The system and method also include decoding multi-modal trajectories and probabilities for each of the agents. The system and method further include controlling at least one vehicle system of the ego vehicle based on predicted trajectories associated with each of the agents and the rankings of the probabilities associated with each of the predicted trajectories.
Type: Application
Filed: January 28, 2021
Publication date: January 20, 2022
Inventors: Srikanth Malla, Chiho Choi, Behzad Dariush
-
Patent number: 11195030
Abstract: According to one aspect, scene classification may be provided. An image capture device may capture a series of image frames of an environment from a moving vehicle. A temporal classifier may classify image frames with temporal predictions and generate a series of image frames associated with respective temporal predictions based on a scene classification model. The temporal classifier may perform classification of image frames based on a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a fully connected layer. The scene classifier may classify image frames based on a CNN, global average pooling, and a fully connected layer and generate an associated scene prediction based on the scene classification model and respective temporal predictions. A controller of a vehicle may activate or deactivate vehicle sensors or vehicle systems of the vehicle based on the scene prediction.
Type: Grant
Filed: April 3, 2019
Date of Patent: December 7, 2021
Assignee: HONDA MOTOR CO., LTD.
Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
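The two classifier structures named in the abstract (CNN, LSTM, fully connected layer for the temporal classifier; CNN, global average pooling, fully connected layer for the scene classifier) can be sketched directly in PyTorch; the class counts, layer sizes, and the toy controller rule at the end are assumptions, not the patented system.

```python
# A minimal PyTorch sketch of the temporal and scene classifiers, plus a toy
# controller decision based on the scene prediction.
import torch
import torch.nn as nn

TEMPORAL_CLASSES, SCENE_CLASSES, T = 4, 3, 10
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU())      # shared CNN stand-in

class TemporalClassifier(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(8, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, TEMPORAL_CLASSES)
    def forward(self, frames):                              # (B, T, 3, H, W)
        B = frames.shape[0]
        f = backbone(frames.flatten(0, 1)).mean(dim=(2, 3)).view(B, T, -1)
        out, _ = self.lstm(f)                               # CNN -> LSTM -> fully connected
        return self.fc(out[:, -1])

class SceneClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, SCENE_CLASSES)
    def forward(self, frame):                               # (B, 3, H, W)
        return self.fc(backbone(frame).mean(dim=(2, 3)))    # CNN -> global average pooling -> FC

frames = torch.randn(1, T, 3, 64, 64)
scene_logits = SceneClassifier()(frames[:, -1])
lane_keep_assist_on = int(scene_logits.argmax()) != 2       # e.g. deactivate for an assumed "tunnel" class
print(TemporalClassifier()(frames).shape, scene_logits.shape, lane_keep_assist_on)
```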