Patents by Inventor PRAVEEN PALANISAMY

PRAVEEN PALANISAMY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11532188
    Abstract: A vehicle and a system and method for operating a vehicle. The system includes a state estimator and a processor. A detected value of a parameter of the vehicle is determined using sensor data obtained by in-vehicle detectors. The processor determines a check value of the parameter based on crowdsourced data, validates the detected value of the parameter based on the check value of the parameter, and operates the vehicle based on the validation.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: December 20, 2022
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Arun Adiththan, Praveen Palanisamy, SeyedAlireza Kasaiezadeh Mahabadi, Ramesh S
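
A minimal Python sketch of the crowdsourced-validation idea described in the abstract above. The function names, the median aggregation, and the tolerance value are illustrative assumptions, not details from the patent.

from statistics import median

TOLERANCE = 0.15  # assumed relative tolerance for agreement between values

def check_value(crowdsourced_samples):
    """Aggregate crowdsourced reports of the same parameter into a check value."""
    return median(crowdsourced_samples)

def validate_parameter(detected_value, crowdsourced_samples):
    """Return True if the in-vehicle detection agrees with the crowdsourced check value."""
    reference = check_value(crowdsourced_samples)
    return abs(detected_value - reference) <= TOLERANCE * abs(reference)

# Example: validating a detected parameter against nearby vehicles' reports.
detected = 0.72
reports = [0.70, 0.75, 0.68, 0.74]
if validate_parameter(detected, reports):
    print("detected value validated; operate on detected value")
else:
    print("validation failed; fall back to check value or degrade operation")
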
  • Patent number: 11157784
    Abstract: System and method for explaining driving behavior actions of autonomous vehicles. Combined sensor information collected at a scene understanding module is used to produce a state representation. The state representation includes predetermined types of image representations that, along with a state prediction, are used by a decision making module for determining one or more weighted behavior policies. A driving behavior action is selected and performed based on the determined one or more behavior policies. Information is then provided indicating why the selected driving behavior action was chosen in a particular driving context of the autonomous vehicle. In one or more embodiments, a user interface is configured to depict the predetermined types of image representations corresponding with the driving behavior action performed via the autonomous vehicle.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: October 26, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Upali P. Mudalige
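
As a rough illustration of the abstract above, the toy pipeline below weights candidate behavior policies for a state representation, selects an action, and emits a human-readable explanation. The behavior set, scoring rules, and explanation format are assumptions, not the patented design.

BEHAVIORS = ["keep_lane", "slow_down", "change_lane_left", "change_lane_right"]

def score_policies(state):
    """Weight each candidate behavior for the current state representation."""
    weights = {b: 0.1 for b in BEHAVIORS}
    if state.get("lead_vehicle_braking"):
        weights["slow_down"] += 0.6
    if state.get("left_lane_clear"):
        weights["change_lane_left"] += 0.3
    return weights

def explain(action, weights, state):
    """Explain why the selected action was chosen in this driving context."""
    return (f"Selected '{action}' (weight {weights[action]:.2f}) because "
            f"context flags were {sorted(k for k, v in state.items() if v)}.")

state = {"lead_vehicle_braking": True, "left_lane_clear": False}
weights = score_policies(state)
action = max(weights, key=weights.get)
print(explain(action, weights, state))
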
  • Patent number: 11016495
    Abstract: Systems and methods are provided for end-to-end learning of commands for controlling an autonomous vehicle. A pre-processor pre-processes image data acquired by sensors at a current time step (CTS) to generate pre-processed image data that is concatenated with additional input(s) (e.g., a segmentation map and/or optical flow map) to generate a dynamic scene output. A convolutional neural network (CNN) processes the dynamic scene output to generate a feature map that includes extracted spatial features that are concatenated with vehicle kinematics to generate a spatial context feature vector. An LSTM network processes, during the CTS, the spatial context feature vector at the CTS and one or more previous LSTM outputs at corresponding previous time steps to generate an encoded temporal context vector at the CTS. A fully connected layer processes the encoded temporal context vector to learn control command(s) (e.g., steering angle, acceleration rate, and/or brake rate control commands).
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: May 25, 2021
    Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
    Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
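
A minimal PyTorch sketch of the CNN -> LSTM -> fully-connected pipeline outlined in the abstract above. Layer sizes and the kinematics/command dimensions are assumptions for illustration only.

import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self, kin_dim=3, cmd_dim=3, hidden=64):
        super().__init__()
        # CNN extracts spatial features from the (pre-processed) dynamic scene input.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM fuses the spatial context vector with temporal context from past steps.
        self.lstm = nn.LSTM(input_size=32 + kin_dim, hidden_size=hidden, batch_first=True)
        # Fully connected head maps the encoded temporal context to control commands.
        self.head = nn.Linear(hidden, cmd_dim)

    def forward(self, frames, kinematics):
        # frames: (batch, time, 3, H, W); kinematics: (batch, time, kin_dim)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        context = torch.cat([feats, kinematics], dim=-1)  # spatial context feature vectors
        encoded, _ = self.lstm(context)                    # encoded temporal context
        return self.head(encoded[:, -1])                   # e.g. steering, acceleration, brake

model = EndToEndDriver()
cmds = model(torch.randn(2, 4, 3, 66, 200), torch.randn(2, 4, 3))
print(cmds.shape)  # torch.Size([2, 3])
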
  • Publication number: 20210074162
    Abstract: Systems and methods are provided for controlling a vehicle. In one embodiment, a method includes: determining, by a processor, that a lane change is desired; determining, by the processor, a lane change action based on a reinforcement learning method and a rule-based method, wherein each of the methods evaluates lane data, vehicle data, map data, and actor data; and controlling, by the processor, the vehicle to perform the lane change based on the lane change action.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 11, 2021
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Sayyed Rouhollah Jafari Tafti, Pinaki Gupta, Syed B. Mehdi, Praveen Palanisamy
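
A hedged sketch of combining a learned (RL) proposal with a rule-based check for a lane change, in the spirit of the abstract above. The arbitration logic, thresholds, and data fields are assumptions for illustration only.

def rule_based_ok(lane_data, actor_data):
    """Hard safety rules: target lane must exist and have a sufficient gap."""
    return lane_data["target_lane_exists"] and actor_data["gap_to_rear_vehicle_m"] > 20.0

def rl_policy(vehicle_data, map_data):
    """Stand-in for a learned policy that scores the lane change (0..1)."""
    return 0.8 if vehicle_data["speed_mps"] > map_data["min_merge_speed_mps"] else 0.2

def decide_lane_change(lane_data, vehicle_data, map_data, actor_data):
    proposal = rl_policy(vehicle_data, map_data)
    if proposal > 0.5 and rule_based_ok(lane_data, actor_data):
        return "execute_lane_change"
    return "stay_in_lane"

print(decide_lane_change(
    {"target_lane_exists": True},
    {"speed_mps": 25.0},
    {"min_merge_speed_mps": 15.0},
    {"gap_to_rear_vehicle_m": 35.0},
))
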
  • Patent number: 10940863
    Abstract: Systems and methods are provided that employ spatial and temporal attention-based deep reinforcement learning of hierarchical lane-change policies for controlling an autonomous vehicle. An actor-critic network architecture includes an actor network that processes image data received from an environment to learn the lane-change policies as a set of hierarchical actions, and a critic network that evaluates the lane-change policies to calculate loss and gradients to predict an action-value function (Q) that is used to drive learning and update parameters of the lane-change policies. The actor-critic network architecture implements a spatial attention module to select relevant regions in the image data that are of importance, and a temporal attention module to learn temporal attention weights to be applied to past frames of image data to indicate relative importance in deciding which lane-change policy to select.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: March 9, 2021
    Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
    Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
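
An illustrative PyTorch sketch of an actor-critic pair with simple spatial and temporal attention over image features, echoing the abstract above. Shapes, layer sizes, and the action set are assumptions, not the patented architecture.

import torch
import torch.nn as nn

class AttentionActorCritic(nn.Module):
    def __init__(self, n_actions=3, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat, 5, stride=2), nn.ReLU())
        self.spatial_attn = nn.Conv2d(feat, 1, 1)          # weights over image regions
        self.temporal_attn = nn.Linear(feat, 1)            # weights over past frames
        self.actor = nn.Linear(feat, n_actions)            # hierarchical lane-change actions
        self.critic = nn.Linear(feat, n_actions)           # action-value estimates Q(s, a)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        f = self.encoder(frames.flatten(0, 1))                       # (b*t, feat, h, w)
        a = torch.softmax(self.spatial_attn(f).flatten(2), dim=-1)   # spatial attention
        pooled = (f.flatten(2) * a).sum(-1).view(b, t, -1)           # (b, t, feat)
        w = torch.softmax(self.temporal_attn(pooled), dim=1)         # temporal attention
        state = (pooled * w).sum(1)                                  # (b, feat)
        return torch.softmax(self.actor(state), dim=-1), self.critic(state)

policy, q_values = AttentionActorCritic()(torch.randn(2, 4, 3, 64, 64))
print(policy.shape, q_values.shape)  # torch.Size([2, 3]) torch.Size([2, 3])
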
  • Publication number: 20210056779
    Abstract: A vehicle and a system and method for operating a vehicle. The system includes a state estimator and a processor. A detected value of a parameter of the vehicle is determined using sensor data obtained by in-vehicle detectors. The processor determines a check value of the parameter based on crowdsourced data, validates the detected value of the parameter based on the check value of the parameter, and operates the vehicle based on the validation.
    Type: Application
    Filed: August 22, 2019
    Publication date: February 25, 2021
    Inventors: Arun Adiththan, Praveen Palanisamy, SeyedAlireza Kasaiezadeh Mahabadi, Ramesh S
  • Patent number: 10845815
    Abstract: Systems and methods are provided for autonomous driving policy generation. The system can include a set of autonomous driver agents, and a driving policy generation module that includes a set of driving policy learner modules for generating and improving policies based on the collective experiences collected by the driver agents. The driver agents can collect driving experiences to create a knowledge base. The driving policy learner modules can process the collective driving experiences to extract driving policies. The driver agents can be trained via the driving policy learner modules in a parallel and distributed manner to find novel and efficient driving policies and behaviors faster and more efficiently. Parallel and distributed learning can enable accelerated training of multiple autonomous intelligent driver agents.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: November 24, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Upali P. Mudalige
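
A toy sketch of the collect-then-learn loop described above: multiple driver agents add experiences to a shared knowledge base, and a policy learner consumes the pooled experiences. The sequential loop, environment, and update rule are simplified assumptions; a real system would run agents in parallel.

import random

def driver_agent(agent_id, episodes=3):
    """Each agent returns (agent_id, action, reward) experience tuples."""
    return [(agent_id, random.choice(["keep", "brake"]), random.random())
            for _ in range(episodes)]

def policy_learner(experiences):
    """Extract a trivial 'policy': the action with the best average reward."""
    totals = {}
    for _, action, reward in experiences:
        n, s = totals.get(action, (0, 0.0))
        totals[action] = (n + 1, s + reward)
    return max(totals, key=lambda a: totals[a][1] / totals[a][0])

knowledge_base = []
for agent_id in range(4):                      # in practice these run in parallel
    knowledge_base.extend(driver_agent(agent_id))

print("learned default action:", policy_learner(knowledge_base))
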
  • Publication number: 20200356828
    Abstract: System and method for explaining driving behavior actions of autonomous vehicles. Combined sensor information collected at a scene understanding module is used to produce a state representation. The state representation includes predetermined types of image representations that, along with a state prediction, are used by a decision making module for determining one or more weighted behavior policies. A driving behavior action is selected and performed based on the determined one or more behavior policies. Information is then provided indicating why the selected driving behavior action was chosen in a particular driving context of the autonomous vehicle. In one or more embodiments, a user interface is configured to depict the predetermined types of image representations corresponding with the driving behavior action performed via the autonomous vehicle.
    Type: Application
    Filed: May 8, 2019
    Publication date: November 12, 2020
    Inventors: Praveen Palanisamy, Upali P. Mudalige
  • Publication number: 20200293041
    Abstract: A system and method for determining a vehicle action to be carried out by an autonomous vehicle based on a composite behavior policy. The method includes the steps of: obtaining a behavior query that indicates which of a plurality of constituent behavior policies are to be used to execute the composite behavior policy, wherein each of the constituent behavior policies maps a vehicle state to one or more vehicle actions; determining an observed vehicle state based on onboard vehicle sensor data, wherein the onboard vehicle sensor data is obtained from one or more onboard vehicle sensors of the vehicle; selecting a vehicle action based on the composite behavior policy; and carrying out the selected vehicle action at the vehicle.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 17, 2020
    Inventor: Praveen Palanisamy
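
A hedged sketch of executing a composite behavior policy built from constituent policies named by a behavior query, per the abstract above. The constituent policies and the way their actions are combined are invented for the example.

def lane_keep_policy(state):
    return "steer_to_lane_center"

def distance_keep_policy(state):
    return "brake" if state["gap_m"] < 15.0 else "hold_speed"

CONSTITUENT_POLICIES = {
    "lane_keep": lane_keep_policy,
    "distance_keep": distance_keep_policy,
}

def composite_policy(behavior_query, state):
    """Map the observed vehicle state to actions using the queried constituent policies."""
    return [CONSTITUENT_POLICIES[name](state) for name in behavior_query]

observed_state = {"gap_m": 12.0}
print(composite_policy(["lane_keep", "distance_keep"], observed_state))
# ['steer_to_lane_center', 'brake']
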
  • Patent number: 10732639
    Abstract: The present application generally relates to a method and apparatus for generating an action policy for controlling an autonomous vehicle. In particular, the system performs a deep learning algorithm in order to determine the action policy and an automatically generated curriculum system to determine a number of increasingly difficult tasks in order to refine the action policy.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: August 4, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Zhiqian Qiao, Upali P. Mudalige, Katharina Muelling, John M. Dolan
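
A toy sketch of an automatically generated curriculum in the sense of the abstract above: train on a task, and move to a harder task only once performance clears a threshold. The task parameters, threshold, and the stand-in training step are assumptions; a real system would wrap a deep reinforcement learning learner.

import random

def make_curriculum(levels=4):
    """Generate increasingly difficult tasks (here: more traffic, tighter gaps)."""
    return [{"traffic_density": 0.2 * i, "min_gap_m": 30 - 5 * i} for i in range(1, levels + 1)]

def train_on_task(policy_quality, task):
    """Stand-in for one round of training; returns an updated success rate."""
    difficulty = task["traffic_density"]
    return min(1.0, policy_quality + random.uniform(0.1, 0.3) * (1.0 - difficulty))

policy_quality, threshold = 0.0, 0.7
for task in make_curriculum():
    while policy_quality < threshold:
        policy_quality = train_on_task(policy_quality, task)
    print(f"task {task} passed with success rate {policy_quality:.2f}")
    policy_quality *= 0.8   # harder task: expect some drop before re-training
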
  • Patent number: 10678252
    Abstract: Systems, Apparatuses and Methods for implementing a neural network system for controlling an autonomous vehicle (AV) are provided, which includes: a neural network having a plurality of nodes with context to vector (context2vec) contextual embeddings to enable operations of the AV; a plurality of encoded context2vec AV words in a sequence of timing to embed data of context and behavior; a set of inputs which comprise: at least one of a current, a prior, and a subsequent encoded context2vec AV word; a neural network solution applied by the at least one computer to determine a target context2vec AV word of each set of the inputs based on the current context2vec AV word; an output vector computed by the neural network that represents the embedded distributional one-hot scheme of the input encoded context2vec AV word; and a set of behavior control operations for controlling a behavior of the AV.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: June 9, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Marcus J. Huber, Praveen Palanisamy
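
A hedged, word2vec-style sketch of the "context2vec AV word" idea above: predict a target encoded AV word from prior/current/subsequent words via learned embeddings. The vocabulary, dimensions, and training details are assumptions for illustration only.

import torch
import torch.nn as nn

AV_WORDS = ["cruise", "yield", "merge_left", "merge_right", "stop"]  # assumed vocabulary
VOCAB = {w: i for i, w in enumerate(AV_WORDS)}

class Context2Vec(nn.Module):
    def __init__(self, vocab_size=len(AV_WORDS), dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)    # contextual embeddings per AV word
        self.out = nn.Linear(dim, vocab_size)         # scores over the one-hot vocabulary

    def forward(self, context_ids):
        # context_ids: (batch, context_len) of prior/current/subsequent AV words
        ctx = self.embed(context_ids).mean(dim=1)     # average context embedding
        return self.out(ctx)                          # logits for the target AV word

model = Context2Vec()
context = torch.tensor([[VOCAB["cruise"], VOCAB["merge_left"]]])
logits = model(context)
print("predicted target word:", AV_WORDS[logits.argmax(dim=-1).item()])
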
  • Patent number: 10678241
    Abstract: Systems and methods are provided for controlling a vehicle.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: June 9, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Upali P. Mudalige
  • Publication number: 20200139973
    Abstract: Systems and methods are provided that employ spatial and temporal attention-based deep reinforcement learning of hierarchical lane-change policies for controlling an autonomous vehicle. An actor-critic network architecture includes an actor network that processes image data received from an environment to learn the lane-change policies as a set of hierarchical actions, and a critic network that evaluates the lane-change policies to calculate loss and gradients to predict an action-value function (Q) that is used to drive learning and update parameters of the lane-change policies. The actor-critic network architecture implements a spatial attention module to select relevant regions in the image data that are of importance, and a temporal attention module to learn temporal attention weights to be applied to past frames of image data to indicate relative importance in deciding which lane-change policy to select.
    Type: Application
    Filed: November 1, 2018
    Publication date: May 7, 2020
    Applicants: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
    Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
  • Publication number: 20200142421
    Abstract: Systems and methods are provided for end-to-end learning of commands for controlling an autonomous vehicle. A pre-processor pre-processes image data acquired by sensors at a current time step (CTS) to generate pre-processed image data that is concatenated with additional input(s) (e.g., a segmentation map and/or optical flow map) to generate a dynamic scene output. A convolutional neural network (CNN) processes the dynamic scene output to generate a feature map that includes extracted spatial features that are concatenated with vehicle kinematics to generate a spatial context feature vector. An LSTM network processes, during the CTS, the spatial context feature vector at the CTS and one or more previous LSTM outputs at corresponding previous time steps to generate an encoded temporal context vector at the CTS. A fully connected layer processes the encoded temporal context vector to learn control command(s) (e.g., steering angle, acceleration rate, and/or brake rate control commands).
    Type: Application
    Filed: November 5, 2018
    Publication date: May 7, 2020
    Applicants: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
    Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
  • Patent number: 10591914
    Abstract: Systems and methods are provided for controlling a vehicle. Control signals are generated at a high-level controller based on one or more sources of input data, comprising at least one of: sensors that provide sensor output information, map data and goals. The high-level controller comprises first controller modules comprising: an input processing module, a projection module, a memories module, a world model module, and a decision processing module that comprises a control model executor module. The control signals are processed at a low-level controller to generate commands that control a plurality of vehicle actuators of the vehicle in accordance with the control signals to execute one or more scheduled actions to be performed to automate driving tasks.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: March 17, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Marcus J. Huber, Upali P. Mudalige
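
A structural sketch only, mirroring the module names in the abstract above: a high-level controller composed of named modules whose control signals a low-level controller turns into actuator commands. The module behaviors and data fields are placeholders.

class HighLevelController:
    def __init__(self):
        self.world_model = {}

    def input_processing(self, sensors, map_data, goals):
        return {"sensors": sensors, "map": map_data, "goals": goals}

    def decision_processing(self, projected):
        # control model executor: pick a scheduled action from the projected world state
        return {"action": "follow_route", "target_speed_mps": projected["goals"]["speed_mps"]}

    def step(self, sensors, map_data, goals):
        processed = self.input_processing(sensors, map_data, goals)
        self.world_model.update(processed)          # world model / memories modules
        return self.decision_processing(self.world_model)

class LowLevelController:
    def to_actuators(self, control_signal):
        return {"throttle": 0.3 if control_signal["target_speed_mps"] > 0 else 0.0,
                "brake": 0.0, "steer_rad": 0.0}

high, low = HighLevelController(), LowLevelController()
signal = high.step(sensors={"speed_mps": 10.0}, map_data={}, goals={"speed_mps": 15.0})
print(low.to_actuators(signal))
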
  • Publication number: 20200050207
    Abstract: Systems, Apparatuses and Methods for implementing a neural network system for controlling an autonomous vehicle (AV) are provided, which includes: a neural network having a plurality of nodes with context to vector (context2vec) contextual embeddings to enable operations of the AV; a plurality of encoded context2vec AV words in a sequence of timing to embed data of context and behavior; a set of inputs which comprise: at least one of a current, a prior, and a subsequent encoded context2vec AV word; a neural network solution applied by the at least one computer to determine a target context2vec AV word of each set of the inputs based on the current context2vec AV word; an output vector computed by the neural network that represents the embedded distributional one-hot scheme of the input encoded context2vec AV word; and a set of behavior control operations for controlling a behavior of the AV.
    Type: Application
    Filed: August 9, 2018
    Publication date: February 13, 2020
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Marcus J. Huber, Praveen Palanisamy
  • Publication number: 20200033868
    Abstract: Systems and methods are provided for autonomous driving policy generation. The system can include a set of autonomous driver agents, and a driving policy generation module that includes a set of driving policy learner modules for generating and improving policies based on the collective experiences collected by the driver agents. The driver agents can collect driving experiences to create a knowledge base. The driving policy learner modules can process the collective driving experiences to extract driving policies. The driver agents can be trained via the driving policy learner modules in a parallel and distributed manner to find novel and efficient driving policies and behaviors faster and more efficiently. Parallel and distributed learning can enable accelerated training of multiple autonomous intelligent driver agents.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Upali P. Mudalige
  • Publication number: 20200033869
    Abstract: Systems, methods and controllers are provided for controlling autonomous vehicles. The systems, methods and controllers implement autonomous driver agents and a policy server for serving policies to autonomous driver agents for controlling an autonomous vehicle. The system can include a set of autonomous driver agents, an experience memory that stores experiences captured by the driver agents, a set of driving policy learner modules for generating and improving policies based on the collective experiences stored in the experience memory, and a policy server that serves parameters for policies to the driver agents. The driver agents can collect driving experiences to create a knowledge base that is stored in an experience memory. The driving policy learner modules can process the collective driving experiences to extract driving policies.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Upali P. Mudalige
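
A minimal sketch, under assumed interfaces, of the agent / experience-memory / learner / policy-server loop the abstract above describes. The parameter format and the "learning" step are placeholders, not the patented method.

class ExperienceMemory:
    def __init__(self):
        self.buffer = []
    def add(self, experience):
        self.buffer.append(experience)

class PolicyServer:
    def __init__(self):
        self.parameters = {"version": 0, "speed_limit_margin": 0.0}
    def publish(self, parameters):
        self.parameters = parameters
    def serve(self):
        return dict(self.parameters)

def policy_learner(memory, server):
    """Update policy parameters from pooled experiences and publish them."""
    avg_reward = sum(r for _, r in memory.buffer) / max(len(memory.buffer), 1)
    server.publish({"version": server.parameters["version"] + 1,
                    "speed_limit_margin": 0.05 * avg_reward})

memory, server = ExperienceMemory(), PolicyServer()
for agent_id in range(3):                                   # driver agents capture experiences
    params = server.serve()                                 # agents pull current policy parameters
    memory.add((f"agent{agent_id} v{params['version']}", 1.0))  # (experience, reward) placeholder
policy_learner(memory, server)
print(server.serve())                                       # {'version': 1, 'speed_limit_margin': 0.05}
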
  • Publication number: 20200026277
    Abstract: A method in an autonomous vehicle (AV) is provided. The method includes determining, from vehicle sensor data and road geometry data, a plurality of range measurements and obstacle velocity data; determining vehicle state data wherein the vehicle state data includes a velocity of the AV, a distance to a stop line, a distance to a midpoint of an intersection, and a distance to a goal; determining, based on the plurality of range measurements, the obstacle velocity data and the vehicle state data, a set of discrete behavior actions and a unique trajectory control action associated with each discrete behavior action; choosing a discrete behavior action and a unique trajectory control action to perform; and communicating a message to vehicle controls conveying the unique trajectory control action associated with the discrete behavior action.
    Type: Application
    Filed: July 19, 2018
    Publication date: January 23, 2020
    Applicants: GM GLOBAL TECHNOLOGY OPERATIONS LLC, Carnegie Mellon University
    Inventors: Praveen Palanisamy, Zhiqian Qiao, Katharina Muelling, John M. Dolan, Upali P. Mudalige
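
A hedged illustration of pairing a discrete behavior action with a trajectory control action based on the vehicle state quantities the abstract above lists. The thresholds and the action set are invented for the example.

def choose_intersection_action(state):
    """state: velocity, distances to stop line / intersection midpoint / goal, min obstacle range."""
    if state["min_obstacle_range_m"] < 10.0:
        return "stop", {"target_speed_mps": 0.0}
    if state["dist_to_stop_line_m"] < 20.0 and state["velocity_mps"] > 5.0:
        return "creep", {"target_speed_mps": 2.0}
    return "proceed", {"target_speed_mps": min(state["velocity_mps"] + 2.0, 12.0)}

behavior, trajectory = choose_intersection_action({
    "velocity_mps": 8.0,
    "dist_to_stop_line_m": 15.0,
    "dist_to_midpoint_m": 25.0,
    "dist_to_goal_m": 120.0,
    "min_obstacle_range_m": 40.0,
})
print(behavior, trajectory)   # creep {'target_speed_mps': 2.0}
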
  • Patent number: 10520940
    Abstract: A system and method to perform autonomous operation of a vehicle include obtaining one or more image frames for a time instance t from corresponding one or more sensors. Processing the one or more image frames includes performing convolutional processing to obtain a multi-dimensional matrix xt. The method includes operating on the multi-dimensional matrix xt to obtain output ht. The operating includes using an output ht-1 of the operating for a previous time instance t-1. The method also includes post-processing the output ht to obtain one or more control signals to affect operation of the vehicle.
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: December 31, 2019
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Praveen Palanisamy, Upali P. Mudalige
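
A small PyTorch sketch of the recurrence in the abstract above: convolve each frame into a feature xt, combine it with the previous output ht-1 to get ht, then post-process ht into control signals. The layer sizes and the use of a GRU cell are assumptions.

import torch
import torch.nn as nn

class RecurrentVehicleController(nn.Module):
    def __init__(self, feat=32, hidden=64, n_controls=2):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, feat, 5, stride=2), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cell = nn.GRUCell(feat, hidden)       # ht = f(xt, ht-1)
        self.post = nn.Linear(hidden, n_controls)  # post-process ht into control signals

    def forward(self, frames):
        # frames: (time, 3, H, W) for a single camera stream
        h = torch.zeros(1, self.cell.hidden_size)
        for frame in frames:                       # step through time instances t
            xt = self.conv(frame.unsqueeze(0))     # multi-dimensional feature for time t
            h = self.cell(xt, h)                   # uses the previous output ht-1
        return self.post(h)                        # e.g. steering and speed commands

controls = RecurrentVehicleController()(torch.randn(6, 3, 64, 64))
print(controls.shape)   # torch.Size([1, 2])
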