Patents by Inventor Upali P. Mudalige
Upali P. Mudalige has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230139521
Abstract: A system comprises a computer including a processor and a memory. The memory includes instructions such that the processor is programmed to: receive, at a first neural network, unlabeled sensor data, wherein the first neural network generates output based on the unlabeled sensor data, receive, at a second neural network, the unlabeled sensor data, wherein the second neural network generates output based on the unlabeled sensor data during a validation mode, the second neural network different from the first neural network, compare the output generated by the first neural network with the output generated by the second neural network, and generate an alert when a difference between the output generated by the first neural network and the output generated by the second neural network is greater than a predetermined comparison threshold.
Type: Application
Filed: November 2, 2021
Publication date: May 4, 2023
Inventors: Wei Tong, Shige Wang, Ramesh Sethu, Jeffrey D. Scheu, Prashanth Radhakrishan, Upali P. Mudalige, Ryan Ahmed
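The core validation step described above, comparing the outputs of two different neural networks on the same unlabeled sensor data and raising an alert when they diverge, can be sketched as follows. This is a minimal illustration; the divergence measure, threshold value, and output format are assumptions, not details from the filing.

```python
import numpy as np

def validate_outputs(primary_out: np.ndarray,
                     secondary_out: np.ndarray,
                     threshold: float = 0.1) -> bool:
    """Return True (raise an alert) when the two networks disagree.

    primary_out / secondary_out: output vectors (e.g. class scores)
    produced by two different networks from the same unlabeled input.
    """
    # Mean absolute difference is a stand-in divergence measure;
    # the filing does not specify the comparison function.
    difference = np.mean(np.abs(primary_out - secondary_out))
    return difference > threshold

# Example: simulated outputs from two redundant perception networks.
out_a = np.array([0.82, 0.10, 0.08])
out_b = np.array([0.35, 0.40, 0.25])
if validate_outputs(out_a, out_b):
    print("ALERT: networks disagree beyond the comparison threshold")
```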
-
Patent number: 11607999
Abstract: The present application relates to a method and apparatus for generating a graphical user interface indicative of a vehicle underbody view, including a LIDAR operative to generate a depth map of an off-road surface, a camera for capturing an image of the off-road surface, a chassis sensor operative to detect an orientation of a host vehicle, a processor operative to generate an augmented image in response to the depth map, the image, and the orientation, wherein the augmented image depicts an underbody view of the host vehicle and a graphic representative of a host vehicle suspension system, and a display operative to display the augmented image to a host vehicle operator. A static and dynamic model of the vehicle underbody is compared against the 3-D terrain model to identify contact points between the underbody and the terrain, which are highlighted.
Type: Grant
Filed: September 24, 2019
Date of Patent: March 21, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Brian Mahnken, Bradford G. Schreiber, Jeffrey Louis Brown, Shawn W. Ryan, Brent T. Deep, Kurt A. Heier, Upali P. Mudalige, Shuqing Zeng
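The contact-point identification mentioned at the end of the abstract can be illustrated with a small sketch that compares a LIDAR-derived terrain height grid against an underbody clearance model. The grid layout, clearance values, and margin below are assumptions for illustration only.

```python
import numpy as np

def find_contact_points(terrain_height: np.ndarray,
                        underbody_clearance: np.ndarray,
                        margin: float = 0.02) -> np.ndarray:
    """Flag grid cells where terrain comes within `margin` meters of the underbody.

    terrain_height:      LIDAR-derived ground height under the vehicle (m).
    underbody_clearance: height of the underbody surface above the same datum (m).
    """
    return (underbody_clearance - terrain_height) <= margin

# Toy 3x4 grids standing in for the 3-D terrain and underbody models.
terrain = np.array([[0.05, 0.10, 0.30, 0.12],
                    [0.06, 0.28, 0.31, 0.10],
                    [0.04, 0.09, 0.11, 0.08]])
underbody = np.full_like(terrain, 0.30)   # flat 30 cm clearance model
contacts = find_contact_points(terrain, underbody)
print("cells to highlight in the augmented underbody view:")
print(np.argwhere(contacts))
```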
-
Publication number: 20230068046
Abstract: Systems and methods of detecting a traffic object outside of a vehicle and controlling the vehicle. The systems and methods receive perception data from a sensor system included in the vehicle, determine a focused Region Of Interest (ROI) in the perception data, scale the perception data of the focused ROI, process the scaled perception data of the focused ROI using a neural network (NN)-based traffic object detection algorithm to provide traffic object detection data, and control at least one vehicle feature based, in part, on the traffic object detection data.
Type: Application
Filed: August 24, 2021
Publication date: March 2, 2023
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Guangyu J. Zou, Aravindhan Mani, Upali P. Mudalige
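A minimal sketch of the crop-and-scale step: a focused ROI is cut out of the perception data, upscaled, and passed to a detector. The detector here is a placeholder callable, and the nearest-neighbour upscaling stands in for whatever resizing and detection network the patented system actually uses.

```python
import numpy as np

def detect_in_focused_roi(image: np.ndarray,
                          roi: tuple,
                          scale: int,
                          detector) -> list:
    """Crop a focused ROI, upscale it, and run the detector on the crop.

    roi: (top, left, height, width) in pixel coordinates.
    detector: any callable returning detections for an image array;
              a real system would use a trained neural network here.
    """
    top, left, h, w = roi
    crop = image[top:top + h, left:left + w]
    # Nearest-neighbour upscaling keeps the sketch dependency-free.
    scaled = crop.repeat(scale, axis=0).repeat(scale, axis=1)
    return detector(scaled)

def dummy_traffic_light_detector(img: np.ndarray) -> list:
    # Placeholder: returns one fake detection covering the whole crop.
    return [{"label": "traffic_light", "bbox": (0, 0, img.shape[0], img.shape[1])}]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
detections = detect_in_focused_roi(frame, roi=(40, 300, 60, 80), scale=4,
                                   detector=dummy_traffic_light_detector)
print(detections)
```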
-
Publication number: 20220351622
Abstract: A method for reducing parking violations includes: searching for an empty parking spot in an area surrounding a vehicle; receiving, by a controller of the vehicle, parking restriction information in the area surrounding the vehicle, wherein the controller receives the parking restriction information from sensors of the vehicle; determining, by the controller of the vehicle, that the empty parking spot is invalid; and activating, by the controller of the vehicle, an alarm to alert a vehicle operator of the vehicle that the empty parking spot is invalid.
Type: Application
Filed: April 28, 2021
Publication date: November 3, 2022
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Prabhjot Kaur, Alexander Telosa, Upali P. Mudalige
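The validity decision can be illustrated as a simple rule check over sensed parking restrictions. The specific restriction fields below are hypothetical; the filing does not enumerate them.

```python
from dataclasses import dataclass

@dataclass
class ParkingRestriction:
    no_parking: bool = False          # e.g. fire hydrant, driveway
    permit_required: bool = False
    time_limited: bool = False
    within_time_window: bool = True   # True if the current time is allowed

def spot_is_valid(r: ParkingRestriction, has_permit: bool) -> bool:
    """Decide whether an empty spot may legally be used."""
    if r.no_parking:
        return False
    if r.permit_required and not has_permit:
        return False
    if r.time_limited and not r.within_time_window:
        return False
    return True

# Example: sensors report a permit-only zone and the driver has no permit.
restriction = ParkingRestriction(permit_required=True)
if not spot_is_valid(restriction, has_permit=False):
    print("ALERT: empty spot detected, but parking here would be a violation")
```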
-
Patent number: 11481738
Abstract: A vehicle communication and control system includes a servicing host capable of exchanging data with a vehicle. The servicing host provides a vehicle service and includes a service identifier (ID) that indicates the vehicle service. The vehicle is configured to actively detect the service ID and to determine the vehicle service in response to detecting the service ID. The vehicle and the servicing host establish a wireless connection to exchange data and automatically initiate the vehicle service in response to detecting the service ID.
Type: Grant
Filed: February 1, 2021
Date of Patent: October 25, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Wei Tong, Shuqing Zeng, Shige Wang, Jiang-Ling Du, Upali P. Mudalige
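As a rough sketch, detecting a service ID and initiating the matching service reduces to a lookup plus a connection step. The ID strings and service names below are invented for illustration, and the wireless-connection step is only indicated by a comment.

```python
# Hypothetical service-ID table; the real encoding of service IDs
# (beacon, tag, broadcast, ...) is not specified here.
SERVICE_TABLE = {
    "SVC-CHARGE-01": "ev_charging",
    "SVC-WASH-02": "car_wash",
    "SVC-DIAG-03": "remote_diagnostics",
}

def handle_detected_service_id(service_id):
    """Resolve a detected service ID and initiate the matching service."""
    service = SERVICE_TABLE.get(service_id)
    if service is None:
        return None
    # In a real vehicle this is where a wireless session with the
    # servicing host would be established before starting the service.
    print(f"connecting to host and starting service: {service}")
    return service

handle_detected_service_id("SVC-CHARGE-01")
```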
-
Publication number: 20220245598
Abstract: A vehicle communication and control system includes a servicing host capable of exchanging data with a vehicle. The servicing host provides a vehicle service and includes a service identifier (ID) that indicates the vehicle service. The vehicle is configured to actively detect the service ID and to determine the vehicle service in response to detecting the service ID. The vehicle and the servicing host establish a wireless connection to exchange data and automatically initiate the vehicle service in response to detecting the service ID.
Type: Application
Filed: February 1, 2021
Publication date: August 4, 2022
Inventors: Wei Tong, Shuqing Zeng, Shige Wang, Jiang-Ling Du, Upali P. Mudalige
-
Publication number: 20220219644
Abstract: A vehicle includes a body supporting at least one camera. The at least one camera is positioned to collect images of objects outside of the vehicle. The vehicle also includes a selectively operable vehicle system and a controller operatively connected to the at least one camera and the selectively operable vehicle system. The controller includes a gesture recognition system operable to process a gesture, captured by the at least one camera, made by a person associated with the vehicle, and to activate the selectively operable vehicle system associated with the gesture.
Type: Application
Filed: January 11, 2021
Publication date: July 14, 2022
Inventors: Wei Tong, Shuqing Zeng, Xiaofeng F. Song, Mohannad Murad, Upali P. Mudalige
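A minimal sketch of the gesture-to-system mapping: a recognized gesture label from the camera pipeline activates the associated vehicle system, gated on whether the person is associated with the vehicle. The gesture labels and actions are hypothetical examples.

```python
# Hypothetical mapping from recognized gestures to vehicle systems;
# the filing does not enumerate specific gestures.
GESTURE_ACTIONS = {
    "wave": "unlock_doors",
    "palm_up": "open_liftgate",
    "point_down": "turn_on_puddle_lamps",
}

def on_gesture_recognized(gesture_label: str, person_is_authorized: bool) -> None:
    """Activate the vehicle system associated with a recognized gesture."""
    if not person_is_authorized:
        return  # only react to a person associated with the vehicle
    action = GESTURE_ACTIONS.get(gesture_label)
    if action is not None:
        print(f"activating vehicle system: {action}")

on_gesture_recognized("palm_up", person_is_authorized=True)
```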
-
Patent number: 11327506
Abstract: A system and method for monitoring a road segment includes determining a geographic position of a vehicle in context of a digitized roadway map. A perceived point cloud and a mapped point cloud associated with the road segment are determined. An error vector is determined based upon a transformation between the mapped point cloud and the perceived point cloud. A first confidence interval is derived from a Gaussian process that is composed from past observations. A second confidence interval associated with a longitudinal dimension and a third confidence interval associated with a lateral dimension are determined based upon the mapped point cloud and the perceived point cloud. A Kalman filter analysis is executed to dynamically determine a position of the vehicle relative to the roadway map based upon the error vector, the first confidence interval, the second confidence interval, and the third confidence interval.
Type: Grant
Filed: November 20, 2019
Date of Patent: May 10, 2022
Assignee: GM Global Technology Operations LLC
Inventors: Lawrence A. Bush, Brent N. Bacchus, James P. Neville, Upali P. Mudalige
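The final fusion step can be illustrated as a single Kalman measurement update in which the point-cloud error vector corrects the prior position and the longitudinal/lateral confidence intervals set the measurement noise. The matrices and numbers below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def kalman_position_update(x_prior: np.ndarray,
                           P_prior: np.ndarray,
                           error_vector: np.ndarray,
                           sigma_long: float,
                           sigma_lat: float):
    """One measurement update of a 2-D (longitudinal, lateral) position state.

    error_vector: offset from the map-matched position, derived from the
                  perceived-vs-mapped point-cloud transformation.
    sigma_long / sigma_lat: standard deviations derived from the
                  longitudinal and lateral confidence intervals.
    """
    H = np.eye(2)                                   # position is measured directly
    R = np.diag([sigma_long ** 2, sigma_lat ** 2])  # measurement noise
    z = x_prior + error_vector                      # corrected position observation
    y = z - H @ x_prior                             # innovation
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_post = x_prior + K @ y
    P_post = (np.eye(2) - K @ H) @ P_prior
    return x_post, P_post

x0 = np.array([120.0, 3.5])          # prior position on the map (m)
P0 = np.diag([1.0, 0.5])             # prior covariance
x1, P1 = kalman_position_update(x0, P0, error_vector=np.array([0.8, -0.2]),
                                sigma_long=0.6, sigma_lat=0.3)
print("updated position:", x1)
```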
-
Patent number: 11307039
Abstract: A system for generating a map includes a processing device configured to determine a provenance of each received map of a plurality of maps, parse each received map into objects of interest, and compare the objects of interest to identify one or more sets of objects representing a common feature. For each set of objects, the processing device is configured to select a subset of the set of objects based on the provenance associated with each object in the set of objects, and calculate a similarity metric for each object in the subset. The similarity metric is selected from an alignment between an object and a reference object in the subset, and/or a positional relationship between the object and the reference object. The processing device is configured to generate a common object representing the common feature based on the similarity metric, and generate a merged map including the common object.
Type: Grant
Filed: June 12, 2019
Date of Patent: April 19, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Michael A. Losh, Aravindhan Mani, Upali P. Mudalige
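A compact sketch of the merge step: objects from different maps are ranked by provenance, gated on alignment and positional offset relative to a reference object, and averaged into a common object. The similarity gates, thresholds, and field names are assumptions chosen for illustration.

```python
import numpy as np

def merge_common_feature(objects, provenance_rank, max_offset=1.0):
    """Merge objects that represent the same map feature.

    objects: list of dicts with 'source', 'xy' (position) and 'heading' (rad).
    provenance_rank: lower number = more trusted source; only the two most
                     trusted objects are kept in the subset.
    """
    # Select a subset based on provenance.
    ranked = sorted(objects, key=lambda o: provenance_rank[o["source"]])
    subset = ranked[:2]
    reference = subset[0]

    kept = []
    for obj in subset:
        offset = np.linalg.norm(np.asarray(obj["xy"]) - np.asarray(reference["xy"]))
        alignment = abs(obj["heading"] - reference["heading"])
        # Simple similarity gate on positional relationship and alignment.
        if offset <= max_offset and alignment <= np.deg2rad(10):
            kept.append(obj)

    common_xy = np.mean([o["xy"] for o in kept], axis=0)
    return {"xy": common_xy.tolist(), "heading": reference["heading"]}

objs = [
    {"source": "survey", "xy": (10.0, 5.0), "heading": 0.02},
    {"source": "crowd",  "xy": (10.4, 5.1), "heading": 0.05},
]
print(merge_common_feature(objs, provenance_rank={"survey": 0, "crowd": 1}))
```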
-
Patent number: 11300974
Abstract: Methods and systems are provided for detecting objects within an environment of a vehicle. In one embodiment, a method includes: receiving, by a processor, image data sensed from the environment of the vehicle; determining, by the processor, an area within the image data in which object identification is uncertain; controlling, by the processor, a position of a lighting device to illuminate a location in the environment of the vehicle, wherein the location is associated with the area; controlling, by the processor, a position of one or more sensors to obtain sensor data from the location of the environment of the vehicle while the lighting device is illuminating the location; identifying, by the processor, one or more objects from the sensor data; and controlling, by the processor, the vehicle based on the one or more objects.
Type: Grant
Filed: July 18, 2019
Date of Patent: April 12, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Upali P. Mudalige, Zachariah E. Tyree, Wei Tong, Shuqing Zeng
-
Patent number: 11204417
Abstract: The vehicle-mounted perception sensor gathers environment perception data from a scene using first and second heterogeneous (different modality) sensors; at least one of the heterogeneous sensors is directable to a predetermined region of interest. A perception processor receives the environment perception data and performs object recognition to identify objects, each with a computed confidence score. The processor assesses the confidence score vis-à-vis a predetermined threshold and, based on that assessment, generates an attention signal to redirect one of the heterogeneous sensors to a region of interest identified by the other heterogeneous sensor. In this way, information from one sensor primes the other sensor to increase accuracy, provide deeper knowledge about the scene, and thus do a better job of object tracking in vehicular applications.
Type: Grant
Filed: May 8, 2019
Date of Patent: December 21, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige, Lawrence A. Bush
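The priming loop can be sketched as follows: detections from one modality whose confidence falls below a threshold generate an attention signal that redirects the other sensor to that region. The `redirect_lidar` callable and the threshold are assumed interfaces, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    region: tuple  # (x, y, width, height) region of interest

CONFIDENCE_THRESHOLD = 0.6

def prime_other_sensor(camera_detections, redirect_lidar):
    """Redirect the second sensor toward low-confidence camera detections."""
    for det in camera_detections:
        if det.confidence < CONFIDENCE_THRESHOLD:
            # Attention signal: ask the other-modality sensor to look here.
            redirect_lidar(det.region)

detections = [Detection("pedestrian", 0.45, (120, 80, 40, 90)),
              Detection("vehicle", 0.92, (300, 60, 120, 80))]
prime_other_sensor(detections,
                   redirect_lidar=lambda roi: print("re-aim lidar at", roi))
```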
-
Patent number: 11157784
Abstract: System and method for explaining driving behavior actions of autonomous vehicles. Combined sensor information collected at a scene understanding module is used to produce a state representation. The state representation includes predetermined types of image representations that, along with a state prediction, are used by a decision making module for determining one or more weighted behavior policies. A driving behavior action is selected and performed based on the determined one or more behavior policies. Information is then provided indicating why the selected driving behavior action was chosen in a particular driving context of the autonomous vehicle. In one or more embodiments, a user interface is configured to depict the predetermined types of image representations corresponding with the driving behavior action performed via the autonomous vehicle.
Type: Grant
Filed: May 8, 2019
Date of Patent: October 26, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Praveen Palanisamy, Upali P. Mudalige
-
Patent number: 11052914
Abstract: Automated driving systems, control logic, and methods execute maneuver criticality analysis to provide intelligent vehicle operation in transient driving conditions. A method for controlling an automated driving operation includes a vehicle controller receiving path plan data with location, destination, and predicted path data for a vehicle. From the received path plan data, the controller predicts an upcoming maneuver for driving the vehicle between start and goal lane segments. The vehicle controller determines a predicted route with lane segments connecting the start and goal lane segments, and segment maneuvers for moving the vehicle between the start, goal, and route lane segments. A cost value is calculated for each segment maneuver; the controller determines whether a cost value exceeds a corresponding criticality value. If so, the controller commands a resident vehicle subsystem to execute a control operation associated with taking the predicted route.
Type: Grant
Filed: March 14, 2019
Date of Patent: July 6, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Syed B. Mehdi, Pinaki Gupta, Upali P. Mudalige
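A minimal sketch of the criticality check: each segment maneuver's cost is compared against its criticality value, and any exceedance would trigger a control operation for the predicted route. The maneuver names and numeric values are illustrative.

```python
def find_critical_maneuvers(segment_maneuvers, criticality):
    """Check every segment maneuver's cost against its criticality value.

    segment_maneuvers: dict mapping maneuver name -> computed cost value.
    criticality: dict mapping maneuver name -> corresponding criticality value.
    Returns the maneuvers whose cost exceeds criticality (empty if none).
    """
    return [m for m, cost in segment_maneuvers.items()
            if cost > criticality.get(m, float("inf"))]

maneuvers = {"merge_left": 0.72, "lane_keep": 0.10, "exit_ramp": 0.35}
limits    = {"merge_left": 0.60, "lane_keep": 0.50, "exit_ramp": 0.50}

critical = find_critical_maneuvers(maneuvers, limits)
if critical:
    # In the patented system, this is where the controller would command a
    # resident vehicle subsystem to execute the associated control operation.
    print("critical maneuvers on the predicted route:", critical)
```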
-
Patent number: 11016495
Abstract: Systems and methods are provided for end-to-end learning of commands for controlling an autonomous vehicle. A pre-processor pre-processes image data acquired by sensors at a current time step (CTS) to generate pre-processed image data that is concatenated with additional input(s) (e.g., a segmentation map and/or optical flow map) to generate a dynamic scene output. A convolutional neural network (CNN) processes the dynamic scene output to generate a feature map that includes extracted spatial features, which are concatenated with vehicle kinematics to generate a spatial context feature vector. An LSTM network processes, during the CTS, the spatial context feature vector at the CTS and one or more previous LSTM outputs at corresponding previous time steps to generate an encoded temporal context vector at the CTS. A fully connected layer processes the encoded temporal context vector to learn control commands (e.g., steering angle, acceleration rate, and/or brake rate control commands).
Type: Grant
Filed: November 5, 2018
Date of Patent: May 25, 2021
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
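A stripped-down PyTorch sketch of the described pipeline (CNN spatial features concatenated with kinematics, an LSTM over time steps, and a fully connected command head) is shown below. It omits the pre-processor, segmentation/optical-flow inputs, and training loop, and all layer sizes and input shapes are assumptions.

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Minimal CNN + LSTM sketch: image -> spatial features (+ kinematics)
    -> temporal context -> control commands (steering, acceleration, brake)."""

    def __init__(self, kinematics_dim: int = 3, hidden_dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(32 + kinematics_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 3)            # steering, accel, brake

    def forward(self, images, kinematics):
        # images: (B, T, 3, H, W), kinematics: (B, T, kinematics_dim)
        b, t = images.shape[:2]
        feats = self.cnn(images.flatten(0, 1)).flatten(1)   # (B*T, 32)
        feats = feats.view(b, t, -1)
        context = torch.cat([feats, kinematics], dim=-1)     # spatial context
        temporal, _ = self.lstm(context)                      # temporal context
        return self.head(temporal[:, -1])                     # commands at the CTS

model = EndToEndDriver()
imgs = torch.randn(2, 4, 3, 66, 200)   # batch of 2 sequences, 4 time steps
kin = torch.randn(2, 4, 3)             # e.g. speed, yaw rate, accel (assumed)
print(model(imgs, kin).shape)          # torch.Size([2, 3])
```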
-
Publication number: 20210146827
Abstract: An exemplary automotive vehicle includes a first actuator configured to control acceleration and braking of the automotive vehicle, a second actuator configured to control steering of the automotive vehicle, a vehicle sensor configured to generate data regarding the presence, location, classification, and path of detected features in a vicinity of the automotive vehicle, and a controller in communication with the vehicle sensor and the first and second actuators. The controller is configured to selectively control the first and second actuators in an autonomous mode along a first trajectory according to an automated driving system. The controller is also configured to receive the data regarding the detected features from the vehicle sensor, determine a predicted vehicle maneuver from the data regarding the detected features, map the predicted vehicle maneuver with an indication symbol, and generate a control signal to display the indication symbol.
Type: Application
Filed: November 20, 2019
Publication date: May 20, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Zachariah E. Tyree, Prabhjot Kaur, Upali P. Mudalige
-
Publication number: 20210149415
Abstract: A system and method for monitoring a road segment includes determining a geographic position of a vehicle in context of a digitized roadway map. A perceived point cloud and a mapped point cloud associated with the road segment are determined. An error vector is determined based upon a transformation between the mapped point cloud and the perceived point cloud. A first confidence interval is derived from a Gaussian process that is composed from past observations. A second confidence interval associated with a longitudinal dimension and a third confidence interval associated with a lateral dimension are determined based upon the mapped point cloud and the perceived point cloud. A Kalman filter analysis is executed to dynamically determine a position of the vehicle relative to the roadway map based upon the error vector, the first confidence interval, the second confidence interval, and the third confidence interval.
Type: Application
Filed: November 20, 2019
Publication date: May 20, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lawrence A. Bush, Brent N. Bacchus, James P. Neville, Upali P. Mudalige
-
Patent number: 10984534
Abstract: Systems and methods to identify an attention region in sensor-based detection involve obtaining a detection result that indicates one or more detection areas where one or more objects of interest are detected. The detection result is based on using a first detection algorithm. The method includes obtaining a reference detection result that indicates one or more reference detection areas where one or more objects of interest are detected. The reference detection result is based on using a second detection algorithm. The method also includes identifying the attention region as one of the one or more reference detection areas without a corresponding detection area. The first detection algorithm is then used to perform detection in the attention region.
Type: Grant
Filed: March 28, 2019
Date of Patent: April 20, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige
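The attention-region rule, a reference detection with no matching primary detection, can be sketched with a simple intersection-over-union match. The IoU threshold and box format are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def find_attention_regions(primary_boxes, reference_boxes, min_iou=0.3):
    """Reference detections with no matching primary detection become
    attention regions, where the first detection algorithm is re-run."""
    return [ref for ref in reference_boxes
            if all(iou(ref, det) < min_iou for det in primary_boxes)]

primary = [(100, 100, 160, 180)]                            # first algorithm's output
reference = [(102, 98, 158, 182), (300, 50, 340, 120)]      # second algorithm's output
print("attention regions:", find_attention_regions(primary, reference))
```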
-
Publication number: 20210086695
Abstract: The present application relates to a method and apparatus for generating a graphical user interface indicative of a vehicle underbody view, including a LIDAR operative to generate a depth map of an off-road surface, a camera for capturing an image of the off-road surface, a chassis sensor operative to detect an orientation of a host vehicle, a processor operative to generate an augmented image in response to the depth map, the image, and the orientation, wherein the augmented image depicts an underbody view of the host vehicle and a graphic representative of a host vehicle suspension system, and a display operative to display the augmented image to a host vehicle operator. A static and dynamic model of the vehicle underbody is compared against the 3-D terrain model to identify contact points between the underbody and the terrain, which are highlighted.
Type: Application
Filed: September 24, 2019
Publication date: March 25, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Brian Mahnken, Bradford G. Schreiber, Jeffrey Louis Brown, Shawn W. Ryan, Brent T. Deep, Kurt A. Heier, Upali P. Mudalige, Shuqing Zeng
-
Patent number: 10955842
Abstract: Systems and methods are provided for controlling an autonomous vehicle (AV). A scene understanding module of a high-level controller selects, from a plurality of sensorimotor primitive modules, a particular combination of sensorimotor primitive modules to be enabled and executed for a particular driving scenario. Each one of the particular combination of the sensorimotor primitive modules addresses a sub-task in a sequence of sub-tasks that address the particular driving scenario. A primitive processor module executes the particular combination of the sensorimotor primitive modules such that each generates a vehicle trajectory and speed profile. An arbitration module selects the vehicle trajectory and speed profile having the highest priority ranking for execution, and a vehicle control module processes the selected vehicle trajectory and speed profile to generate control signals used to execute one or more control actions to automatically control the AV.
Type: Grant
Filed: May 24, 2018
Date of Patent: March 23, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige
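The arbitration step can be sketched as picking the highest-priority trajectory and speed proposal among the primitives enabled for the scenario. The primitive names and priority values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PrimitiveProposal:
    name: str
    priority: int            # higher = more important
    trajectory: list         # sequence of (x, y) waypoints
    speed_profile: list      # target speed per waypoint (m/s)

def arbitrate(proposals):
    """Pick the enabled primitive's proposal with the highest priority ranking."""
    return max(proposals, key=lambda p: p.priority)

# Scene understanding has enabled these primitives for the current scenario.
proposals = [
    PrimitiveProposal("lane_keep", 1, [(0, 0), (10, 0)], [15.0, 15.0]),
    PrimitiveProposal("yield_to_pedestrian", 9, [(0, 0), (5, 0)], [15.0, 0.0]),
]
selected = arbitrate(proposals)
# A vehicle control module would turn this into steering/brake commands.
print("executing primitive:", selected.name)
```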
-
Patent number: 10940863
Abstract: Systems and methods are provided that employ spatial and temporal attention-based deep reinforcement learning of hierarchical lane-change policies for controlling an autonomous vehicle. An actor-critic network architecture includes an actor network that processes image data received from an environment to learn the lane-change policies as a set of hierarchical actions, and a critic network that evaluates the lane-change policies to calculate loss and gradients to predict an action-value function (Q) that is used to drive learning and update parameters of the lane-change policies. The actor-critic network architecture implements a spatial attention module to select relevant regions in the image data that are of importance, and a temporal attention module to learn temporal attention weights to be applied to past frames of image data to indicate relative importance in deciding which lane-change policy to select.
Type: Grant
Filed: November 1, 2018
Date of Patent: March 9, 2021
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
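The combination of spatial attention over image-feature locations and temporal attention over past frames can be illustrated with a small PyTorch module feeding an actor head. This is not the patented actor-critic architecture; the feature shapes, layer sizes, and three-action output are assumptions, and the critic network and training loop are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTemporalAttention(nn.Module):
    """Weights image-feature regions (spatial) and past frames (temporal)
    before an actor head scores hierarchical lane-change actions."""

    def __init__(self, channels: int = 32, num_actions: int = 3):
        super().__init__()
        self.spatial_score = nn.Conv2d(channels, 1, kernel_size=1)
        self.temporal_score = nn.Linear(channels, 1)
        self.actor_head = nn.Linear(channels, num_actions)   # left / keep / right

    def forward(self, frame_features):
        # frame_features: (T, C, H, W) features for the past T frames.
        t, c, h, w = frame_features.shape
        # Spatial attention: softmax over the H*W locations of each frame.
        s = self.spatial_score(frame_features).view(t, -1)
        s = F.softmax(s, dim=-1).view(t, 1, h, w)
        pooled = (frame_features * s).sum(dim=(2, 3))         # (T, C)
        # Temporal attention: softmax over the T past frames.
        w_t = F.softmax(self.temporal_score(pooled), dim=0)   # (T, 1)
        context = (pooled * w_t).sum(dim=0)                   # (C,)
        return self.actor_head(context)                       # action scores

att = SpatialTemporalAttention()
features = torch.randn(4, 32, 10, 20)   # 4 past frames of CNN features (assumed)
print(att(features))                    # scores for 3 lane-change actions
```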