Patents by Inventor Upali P. Mudalige
Upali P. Mudalige has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11204417
Abstract: A vehicle-mounted perception sensor gathers environment perception data from a scene using first and second heterogeneous (different-modality) sensors, at least one of which is directable to a predetermined region of interest. A perception processor receives the environment perception data and performs object recognition to identify objects, each with a computed confidence score. The processor assesses the confidence score against a predetermined threshold and, based on that assessment, generates an attention signal to redirect one of the heterogeneous sensors to a region of interest identified by the other heterogeneous sensor. In this way, information from one sensor primes the other, increasing accuracy, providing deeper knowledge about the scene, and thus improving object tracking in vehicular applications.
Type: Grant
Filed: May 8, 2019
Date of Patent: December 21, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige, Lawrence A. Bush
-
Patent number: 11157784
Abstract: System and method for explaining driving behavior actions of autonomous vehicles. Combined sensor information collected at a scene understanding module is used to produce a state representation. The state representation includes predetermined types of image representations that, along with a state prediction, are used by a decision making module for determining one or more weighted behavior policies. A driving behavior action is selected and performed based on the determined one or more behavior policies. Information is then provided indicating why the selected driving behavior action was chosen in a particular driving context of the autonomous vehicle. In one or more embodiments, a user interface is configured to depict the predetermined types of image representations corresponding with the driving behavior action performed via the autonomous vehicle.
Type: Grant
Filed: May 8, 2019
Date of Patent: October 26, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Praveen Palanisamy, Upali P. Mudalige
-
Patent number: 11052914
Abstract: Automated driving systems, control logic, and methods execute maneuver criticality analysis to provide intelligent vehicle operation in transient driving conditions. A method for controlling an automated driving operation includes a vehicle controller receiving path plan data with location, destination, and predicted path data for a vehicle. From the received path plan data, the controller predicts an upcoming maneuver for driving the vehicle between start and goal lane segments. The vehicle controller determines a predicted route with lane segments connecting the start and goal lane segments, and segment maneuvers for moving the vehicle between the start, goal, and route lane segments. A cost value is calculated for each segment maneuver; the controller determines whether a cost value exceeds its corresponding criticality value. If so, the controller commands a resident vehicle subsystem to execute a control operation associated with taking the predicted route.
Type: Grant
Filed: March 14, 2019
Date of Patent: July 6, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Syed B. Mehdi, Pinaki Gupta, Upali P. Mudalige
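The cost-versus-criticality gate described in this abstract can be sketched roughly as follows; the maneuver names, cost values, and thresholds are illustrative assumptions, not taken from the patent.

```python
# Sketch of the maneuver-criticality check: flag segment maneuvers whose
# cost exceeds the corresponding criticality value, i.e. maneuvers that
# should trigger a control operation. All values are illustrative.

def flag_critical_maneuvers(segment_costs, criticality):
    """Return maneuvers whose cost exceeds their criticality threshold."""
    flagged = []
    for name, cost in segment_costs.items():
        if cost > criticality.get(name, float("inf")):
            flagged.append(name)
    return flagged

costs = {"merge_left": 0.9, "keep_lane": 0.1, "exit_ramp": 0.7}
thresholds = {"merge_left": 0.6, "keep_lane": 0.5, "exit_ramp": 0.8}
print(flag_critical_maneuvers(costs, thresholds))  # → ['merge_left']
```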
-
Patent number: 11016495
Abstract: Systems and methods are provided for end-to-end learning of commands for controlling an autonomous vehicle. A pre-processor pre-processes image data acquired by sensors at a current time step (CTS) to generate pre-processed image data that is concatenated with additional input(s) (e.g., a segmentation map and/or optical flow map) to generate a dynamic scene output. A convolutional neural network (CNN) processes the dynamic scene output to generate a feature map whose extracted spatial features are concatenated with vehicle kinematics to generate a spatial context feature vector. An LSTM network processes the spatial context feature vector at the CTS together with one or more previous LSTM outputs from corresponding previous time steps to generate an encoded temporal context vector at the CTS. A fully connected layer processes the encoded temporal context vector to learn control command(s) (e.g., steering angle, acceleration rate, and/or brake rate commands).
Type: Grant
Filed: November 5, 2018
Date of Patent: May 25, 2021
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
-
Publication number: 20210149415
Abstract: A system and method for monitoring a road segment includes determining a geographic position of a vehicle in context of a digitized roadway map. A perceived point cloud and a mapped point cloud associated with the road segment are determined. An error vector is determined based upon a transformation between the mapped point cloud and the perceived point cloud. A first confidence interval is derived from a Gaussian process that is composed from past observations. A second confidence interval associated with a longitudinal dimension and a third confidence interval associated with a lateral dimension are determined based upon the mapped point cloud and the perceived point cloud. A Kalman filter analysis is executed to dynamically determine a position of the vehicle relative to the roadway map based upon the error vector, the first confidence interval, the second confidence interval, and the third confidence interval.
Type: Application
Filed: November 20, 2019
Publication date: May 20, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Brent N. Bacchus, James P. Neville, Upali P. Mudalige
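The Kalman-filter fusion step in this abstract can be illustrated with a minimal one-dimensional update, where the error vector plays the role of the measurement and a confidence interval supplies the measurement variance; the scalar simplification and all numbers are assumptions for illustration.

```python
# Minimal 1-D Kalman update: fuse a prior position estimate with a
# measurement whose variance is derived from a confidence interval.
# All values are illustrative, not from the publication.

def kalman_update(x, p, z, r):
    """Fuse prior estimate x (variance p) with measurement z (variance r)."""
    k = p / (p + r)            # Kalman gain: trust split between prior and measurement
    x_new = x + k * (z - x)    # corrected state estimate
    p_new = (1 - k) * p        # uncertainty shrinks after fusion
    return x_new, p_new

# prior lateral position 10.0 m (variance 4.0) fused with a
# point-cloud-derived measurement of 12.0 m (variance 1.0)
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
print(x, p)  # estimate moves toward the more confident measurement
```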
-
Publication number: 20210146827
Abstract: An exemplary automotive vehicle includes a first actuator configured to control acceleration and braking of the automotive vehicle, a second actuator configured to control steering of the automotive vehicle, a vehicle sensor configured to generate data regarding the presence, location, classification, and path of detected features in a vicinity of the automotive vehicle, and a controller in communication with the vehicle sensor and the first and second actuators. The controller is configured to selectively control the first and second actuators in an autonomous mode along a first trajectory according to an automated driving system. The controller is also configured to receive the data regarding the detected features from the vehicle sensor, determine a predicted vehicle maneuver from the data regarding the detected features, map the predicted vehicle maneuver with an indication symbol, and generate a control signal to display the indication symbol.
Type: Application
Filed: November 20, 2019
Publication date: May 20, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Zachariah E. Tyree, Prabhjot Kaur, Upali P. Mudalige
-
Patent number: 10984534
Abstract: Systems and methods to identify an attention region in sensor-based detection involve obtaining a detection result that indicates one or more detection areas where one or more objects of interest are detected. The detection result is based on using a first detection algorithm. The method includes obtaining a reference detection result that indicates one or more reference detection areas where one or more objects of interest are detected. The reference detection result is based on using a second detection algorithm. The method also includes identifying the attention region as a reference detection area that has no corresponding detection area. The first detection algorithm is used to perform detection in the attention region.
Type: Grant
Filed: March 28, 2019
Date of Patent: April 20, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige
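The attention-region selection can be sketched as a set difference between the two algorithms' detections: reference detection areas with no overlapping detection from the first algorithm become attention regions. The box format and the IoU-based overlap test are assumptions for illustration.

```python
# Sketch of attention-region identification. Boxes are (x1, y1, x2, y2);
# the IoU overlap test and threshold are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def attention_regions(detections, reference_detections, thresh=0.1):
    """Reference areas with no matching detection area become attention regions."""
    return [r for r in reference_detections
            if all(iou(r, d) <= thresh for d in detections)]
```

The first algorithm is then re-run only inside the returned regions, focusing its effort where the two algorithms disagree.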
-
Publication number: 20210086695
Abstract: The present application relates to a method and apparatus for generating a graphical user interface indicative of a vehicle underbody view, including a LIDAR operative to generate a depth map of an off-road surface, a camera for capturing an image of the off-road surface, a chassis sensor operative to detect an orientation of a host vehicle, a processor operative to generate an augmented image in response to the depth map, the image, and the orientation, wherein the augmented image depicts an underbody view of the host vehicle and a graphic representative of a host vehicle suspension system, and a display operative to display the augmented image to a host vehicle operator. Static and dynamic models of the vehicle underbody are compared against the 3-D terrain model to identify contact points between the underbody and the terrain, which are highlighted.
Type: Application
Filed: September 24, 2019
Publication date: March 25, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Brian Mahnken, Bradford G. Schreiber, Jeffrey Louis Brown, Shawn W. Ryan, Brent T. Deep, Kurt A. Heier, Upali P. Mudalige, Shuqing Zeng
-
Patent number: 10955842
Abstract: Systems and methods are provided for controlling an autonomous vehicle (AV). A scene understanding module of a high-level controller selects, from a plurality of sensorimotor primitive modules, a particular combination of sensorimotor primitive modules to be enabled and executed for a particular driving scenario. Each one of the particular combination of sensorimotor primitive modules addresses a sub-task in a sequence of sub-tasks that address the driving scenario. A primitive processor module executes the particular combination of sensorimotor primitive modules such that each generates a vehicle trajectory and speed profile. An arbitration module selects the vehicle trajectory and speed profile having the highest priority ranking for execution, and a vehicle control module processes the selected vehicle trajectory and speed profile to generate control signals used to execute one or more control actions to automatically control the AV.
Type: Grant
Filed: May 24, 2018
Date of Patent: March 23, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige
-
Patent number: 10940863
Abstract: Systems and methods are provided that employ spatial and temporal attention-based deep reinforcement learning of hierarchical lane-change policies for controlling an autonomous vehicle. An actor-critic network architecture includes an actor network that processes image data received from an environment to learn the lane-change policies as a set of hierarchical actions, and a critic network that evaluates the lane-change policies to calculate loss and gradients to predict an action-value function (Q) that is used to drive learning and update parameters of the lane-change policies. The actor-critic network architecture implements a spatial attention module to select relevant regions in the image data that are of importance, and a temporal attention module to learn temporal attention weights to be applied to past frames of image data to indicate relative importance in deciding which lane-change policy to select.
Type: Grant
Filed: November 1, 2018
Date of Patent: March 9, 2021
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
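The temporal-attention weighting can be illustrated with a plain softmax over per-frame relevance scores: past frames with higher scores contribute more to the fused context. The scores and feature vectors here are illustrative assumptions; in the patented system the weights are learned by a neural network.

```python
# Sketch of temporal attention: softmax-normalize per-frame relevance
# scores and form a weighted sum of past frame features. Values are
# illustrative, not learned as in the actual actor-critic architecture.
import math

def temporal_attention(frame_features, scores):
    """Weight past frame feature vectors by softmax-normalized scores."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # weights sum to 1
    dim = len(frame_features[0])
    context = [sum(w * f[i] for w, f in zip(weights, frame_features))
               for i in range(dim)]              # weighted feature fusion
    return weights, context

# two past frames; the second (score 1.0) dominates the fused context
weights, context = temporal_attention([[1.0], [2.0]], [0.0, 1.0])
```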
-
Patent number: 10929719
Abstract: Systems and methods to generate an adversarial attack on a black box object detection algorithm of a sensor involve obtaining an initial training data set from the black box object detection algorithm. The black box object detection algorithm performs object detection on initial input data to provide black box object detection algorithm output that provides the initial training data set. A substitute model is trained with the initial training data set such that output from the substitute model replicates the black box object detection algorithm output that makes up the initial training data set. Details of operation of the black box object detection algorithm are unknown and details of operation of the substitute model are known. The substitute model is used to perform the adversarial attack. The adversarial attack refers to identifying adversarial input data for which the black box object detection algorithm will fail to perform accurate detection.
Type: Grant
Filed: March 28, 2019
Date of Patent: February 23, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige
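The substitute-model idea can be sketched in miniature: a stand-in "black box" classifier is queried for labels, and a simple one-parameter substitute is fit to replicate its decisions, after which the substitute's known internals can be probed for failure cases. Both models here are toy assumptions; the patent targets object detection algorithms, not scalar classifiers.

```python
# Toy sketch of substitute-model training against a black box.
# The "black box" threshold and the 1-parameter substitute are
# illustrative assumptions, not the patented implementation.

def black_box(x):
    """Stand-in black box: internals assumed unknown to the attacker."""
    return 1 if x > 0.37 else 0

def train_substitute(samples, epochs=50, lr=0.01):
    """Fit a threshold t so that sign(x - t) replicates black-box labels."""
    labeled = [(x, black_box(x)) for x in samples]  # query the black box
    t = 0.0
    for _ in range(epochs):
        for x, y in labeled:
            pred = 1 if x > t else 0
            t += lr * (pred - y)  # nudge threshold toward the decision boundary
    return t

t = train_substitute([i / 100 for i in range(100)])
# t converges near the black box's hidden boundary (~0.37); inputs near t
# are candidate adversarial inputs where the black box is least reliable.
```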
-
Publication number: 20210018928
Abstract: Methods and systems are provided for detecting objects within an environment of a vehicle. In one embodiment, a method includes: receiving, by a processor, image data sensed from the environment of the vehicle; determining, by the processor, an area within the image data in which object identification is uncertain; controlling, by the processor, a position of a lighting device to illuminate a location in the environment of the vehicle, wherein the location is associated with the area; controlling, by the processor, a position of one or more sensors to obtain sensor data from the location of the environment of the vehicle while the lighting device is illuminating the location; identifying, by the processor, one or more objects from the sensor data; and controlling, by the processor, the vehicle based on the one or more objects.
Type: Application
Filed: July 18, 2019
Publication date: January 21, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Upali P. Mudalige, Zachariah E. Tyree, Wei Tong, Shuqing Zeng
-
Patent number: 10893183
Abstract: An attention-based imaging system is described, including a camera that can adjust its field of view (FOV) and resolution, and a control routine that can determine one or more regions of interest (ROI) within the FOV to prioritize camera resources. The camera includes an image sensor, an internal lens, a steerable mirror, an external lens, and a controller. The external lens is disposed to monitor a viewable region, and the steerable mirror is interposed between the internal lens and the external lens. The steerable mirror is arranged to project the viewable region from the external lens onto the image sensor via the internal lens. The steerable mirror modifies the viewable region that is projected onto the image sensor, and the controller controls the image sensor to capture an image. The associated control routine can be deployed either inside the camera or in a separate external processor.
Type: Grant
Filed: November 18, 2019
Date of Patent: January 12, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Guangyu J. Zou, Shuqing Zeng, Upali P. Mudalige
-
Publication number: 20200393252
Abstract: A system for generating a map includes a processing device configured to determine a provenance of each received map of a plurality of maps, parse each received map into objects of interest, and compare the objects of interest to identify one or more sets of objects representing a common feature. For each set of objects, the processing device is configured to select a subset of the set of objects based on the provenance associated with each object in the set of objects, and calculate a similarity metric for each object in the subset. The similarity metric is selected from an alignment between an object and a reference object in the subset, and/or a positional relationship between the object and the reference object. The processing device is configured to generate a common object representing the common feature based on the similarity metric, and generate a merged map including the common object.
Type: Application
Filed: June 12, 2019
Publication date: December 17, 2020
Inventors: Lawrence A. Bush, Michael A. Losh, Aravindhan Mani, Upali P. Mudalige
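The positional-relationship similarity metric can be sketched as a centroid-distance score, with sufficiently close objects merged into a single common object by averaging. The distance metric, threshold, and averaging rule are all assumptions for illustration; the publication does not specify them.

```python
# Sketch of merging map objects that represent a common feature.
# Objects are 2-D centroids (x, y); the distance threshold and
# averaging rule are illustrative assumptions.

def centroid_distance(obj, ref):
    """Euclidean distance between an object and the reference object."""
    return ((obj[0] - ref[0]) ** 2 + (obj[1] - ref[1]) ** 2) ** 0.5

def merge_common_object(objects, ref, max_dist=1.0):
    """Average the objects whose similarity (distance) to ref is within bounds."""
    close = [o for o in objects if centroid_distance(o, ref) <= max_dist]
    if not close:
        return None  # no object is similar enough to merge
    n = len(close)
    return (sum(o[0] for o in close) / n, sum(o[1] for o in close) / n)
```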
-
Patent number: 10845815
Abstract: Systems and methods are provided for autonomous driving policy generation. The system can include a set of autonomous driver agents, and a driving policy generation module that includes a set of driving policy learner modules for generating and improving policies based on the collective experiences collected by the driver agents. The driver agents can collect driving experiences to create a knowledge base. The driving policy learner modules can process the collective driving experiences to extract driving policies. The driver agents can be trained via the driving policy learner modules in a parallel and distributed manner to find novel and efficient driving policies and behaviors faster and more efficiently. Parallel and distributed learning can enable accelerated training of multiple autonomous intelligent driver agents.
Type: Grant
Filed: July 27, 2018
Date of Patent: November 24, 2020
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Praveen Palanisamy, Upali P. Mudalige
-
Publication number: 20200355820
Abstract: A vehicle-mounted perception sensor gathers environment perception data from a scene using first and second heterogeneous (different-modality) sensors, at least one of which is directable to a predetermined region of interest. A perception processor receives the environment perception data and performs object recognition to identify objects, each with a computed confidence score. The processor assesses the confidence score against a predetermined threshold and, based on that assessment, generates an attention signal to redirect one of the heterogeneous sensors to a region of interest identified by the other heterogeneous sensor. In this way, information from one sensor primes the other, increasing accuracy, providing deeper knowledge about the scene, and thus improving object tracking in vehicular applications.
Type: Application
Filed: May 8, 2019
Publication date: November 12, 2020
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige, Lawrence A. Bush
-
Publication number: 20200356828
Abstract: System and method for explaining driving behavior actions of autonomous vehicles. Combined sensor information collected at a scene understanding module is used to produce a state representation. The state representation includes predetermined types of image representations that, along with a state prediction, are used by a decision making module for determining one or more weighted behavior policies. A driving behavior action is selected and performed based on the determined one or more behavior policies. Information is then provided indicating why the selected driving behavior action was chosen in a particular driving context of the autonomous vehicle. In one or more embodiments, a user interface is configured to depict the predetermined types of image representations corresponding with the driving behavior action performed via the autonomous vehicle.
Type: Application
Filed: May 8, 2019
Publication date: November 12, 2020
Inventors: Praveen Palanisamy, Upali P. Mudalige
-
Patent number: 10824943
Abstract: Described herein are systems, methods, and computer-readable media for generating and training a high precision low bit convolutional neural network (CNN). A filter of each convolutional layer of the CNN is approximated using one or more binary filters and a real-valued activation function is approximated using a linear combination of binary activations. More specifically, a non-1×1 filter (e.g., a k×k filter, where k>1) is approximated using a scaled binary filter and a 1×1 filter is approximated using a linear combination of binary filters. Thus, a different strategy is employed for approximating different weights (e.g., a 1×1 filter vs. a non-1×1 filter). In this manner, convolutions performed in convolutional layer(s) of the high precision low bit CNN become binary convolutions that yield a lower computational cost while still maintaining high performance (e.g., high accuracy).
Type: Grant
Filed: August 21, 2018
Date of Patent: November 3, 2020
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige, Shige Wang
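The scaled-binary-filter approximation has a simple closed form in the binary-network literature: approximating W ≈ alpha·B with B = sign(W), the least-squares scale is alpha = mean(|W|). A minimal sketch, with illustrative filter values (how the patent computes the scale is not stated in the abstract, so the closed form is an assumption):

```python
# Sketch of approximating a real-valued k×k filter W with a scaled
# binary filter alpha * B, where B = sign(W) and alpha = mean(|W|)
# (the least-squares optimal scale for a fixed sign pattern).
# Filter values are illustrative.

def binarize_filter(weights):
    """Return (alpha, B) approximating the real-valued filter `weights`."""
    flat = [w for row in weights for w in row]
    alpha = sum(abs(w) for w in flat) / len(flat)           # optimal scale
    binary = [[1 if w >= 0 else -1 for w in row] for row in weights]
    return alpha, binary

alpha, B = binarize_filter([[0.5, -0.3], [0.1, -0.1]])
# alpha is the mean absolute weight; the convolution with alpha * B
# needs only sign flips and one multiply, hence the lower compute cost.
```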
-
Patent number: 10817728
Abstract: A method of updating an identification algorithm of a vehicle includes sensing an image and drawing boundary boxes in the image. The algorithm attempts to identify an object-of-interest within each respective boundary box. The algorithm also attempts to identify a component of the object-of-interest within each respective boundary box, and, if a component is identified, calculates an excluded amount of a component boundary that is outside an object boundary. When the excluded amount is greater than a coverage threshold, the algorithm communicates the image to a processing center, which may identify a previously unidentified object-of-interest in the image. The processing center may add the image to a training set of images to define a revised training set of images, and retrain the identification algorithm using the revised training set. The updated identification algorithm may then be uploaded onto the vehicle.
Type: Grant
Filed: January 23, 2019
Date of Patent: October 27, 2020
Assignee: GM Global Technology Operations LLC
Inventors: Syed B. Mehdi, Yasen Hu, Upali P. Mudalige
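The coverage check can be sketched as the fraction of the component's bounding box that lies outside the object's bounding box; the box format and the threshold value are assumptions for illustration.

```python
# Sketch of the excluded-amount calculation: what fraction of a
# component's bounding box falls outside the object's bounding box.
# Boxes are (x1, y1, x2, y2); the threshold is illustrative.

def excluded_fraction(component, obj):
    """Fraction of the component box's area lying outside the object box."""
    ix1, iy1 = max(component[0], obj[0]), max(component[1], obj[1])
    ix2, iy2 = min(component[2], obj[2]), min(component[3], obj[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = (component[2] - component[0]) * (component[3] - component[1])
    return 1 - inter / area

def should_upload(component, obj, coverage_threshold=0.5):
    """Flag the image for the processing center when the exclusion is large."""
    return excluded_fraction(component, obj) > coverage_threshold
```

A large excluded fraction suggests the component (e.g. a wheel) sticks out of the detected object box, hinting that a second, undetected object may be present in the image.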
-
Publication number: 20200318976
Abstract: Systems and methods are provided for controlling a vehicle. In one embodiment, a method includes: receiving, by a processor, landmark data obtained from an image sensor of the vehicle; fusing, by the processor, the landmark data with vehicle pose data to produce fused lane data, wherein the fusing is based on a Kalman filter; retrieving, by the processor, map data from a lane map based on the vehicle pose data; selectively updating, by the processor, the lane map based on a change in the fused lane data from the map data; and controlling, by the processor, the vehicle based on the updated lane map.
Type: Application
Filed: April 3, 2019
Publication date: October 8, 2020
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Michael A. Losh, Brent N. Bacchus, Upali P. Mudalige