Patents by Inventor Katharina Muelling
Katharina Muelling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11458635
Abstract: Various example embodiments described herein relate to an item manipulation system including a control system and a robotic arm coupled to the control system. The item manipulation system includes an end effector that is communicatively coupled to the control system and defines a first end and a second end. The first end of the end effector is rotatably engaged to the robotic arm. The item manipulation system also includes a gripper unit attached to the second end of the end effector. The gripper unit is configured to grip the item. The gripper unit includes at least one flexible suction cup and at least one rigid gripper. Each of the at least one flexible suction cup and the at least one rigid gripper engages a surface of the item based on a vacuum suction force generated through the at least one flexible suction cup or the at least one rigid gripper.
Type: Grant
Filed: May 7, 2019
Date of Patent: October 4, 2022
Assignee: INTELLIGRATED HEADQUARTERS, LLC
Inventors: Matthew R. Wicks, Gabriel Goldman, D. W. Wilson Hamilton, Katharina Muelling
-
Patent number: 11318620
Abstract: The present disclosure relates to a material handling system for manipulating items. The material handling system includes a repositioning system comprising a robotic tool which includes a robotic arm portion and an end effector. The robotic tool is configured to manipulate an item in a first orientation and reorient the item to a second orientation. The material handling system further includes a vision system having one or more sensors positioned within the material handling system. The vision system is configured to generate inputs corresponding to the characteristics of the items. The material handling system may further include a controller executing instructions to cause the material handling system to identify the item in the first orientation based on the one or more characteristics of the item, initiate, by the repositioning system, picking of the item in the first orientation, and re-orient the item to the second orientation.
Type: Grant
Filed: May 7, 2019
Date of Patent: May 3, 2022
Assignees: Intelligrated Headquarters, LLC, Carnegie Mellon University
Inventors: Matthew R. Wicks, Michael L. Girtman, Thomas M. Ferner, John Simons, Herman Herman, Gabriel Goldman, Jose Gonzalez-Mora, Katharina Muelling
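The identify-pick-re-orient sequence that the controller executes can be sketched as below. This is a minimal illustration only: the orientation encoding, sensor input format, and function names are assumptions, not details from the patent.

```python
# Hypothetical sketch of the controller's identify -> pick -> re-orient
# sequence; all names and the orientation encoding are illustrative.
def identify(vision_inputs):
    """Vision system: infer the item's current (first) orientation."""
    return vision_inputs["observed_orientation"]

def pick(orientation):
    """Repositioning system: pick the item in its first orientation."""
    return {"held": True, "orientation": orientation}

def reorient(grip, target_orientation):
    """Re-orient the held item to the second orientation."""
    grip["orientation"] = target_orientation
    return grip

vision_inputs = {"observed_orientation": "label_down"}   # from the sensors
first = identify(vision_inputs)
grip = pick(first)
grip = reorient(grip, "label_up")                        # second orientation
print(grip)
```

Each step consumes the output of the previous one, mirroring the ordering the controller's instructions impose on the repositioning system.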
-
Patent number: 11016495
Abstract: Systems and methods are provided for end-to-end learning of commands for controlling an autonomous vehicle. A pre-processor pre-processes image data acquired by sensors at a current time step (CTS) to generate pre-processed image data that is concatenated with additional input(s) (e.g., a segmentation map and/or optical flow map) to generate a dynamic scene output. A convolutional neural network (CNN) processes the dynamic scene output to generate a feature map that includes extracted spatial features that are concatenated with vehicle kinematics to generate a spatial context feature vector. An LSTM network processes, during the CTS, the spatial context feature vector at the CTS and one or more previous LSTM outputs at corresponding previous time steps to generate an encoded temporal context vector at the CTS. A fully connected layer processes the encoded temporal context vector to learn control command(s) (e.g., steering angle, acceleration rate, and/or brake rate control commands).
Type: Grant
Filed: November 5, 2018
Date of Patent: May 25, 2021
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
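The data flow the abstract describes (CNN features concatenated with vehicle kinematics, fed through an LSTM, then a fully connected output layer) can be sketched in NumPy. This is a toy illustration, not the patented implementation: the CNN is replaced by a single random projection, and all layer sizes, weights, and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT, KIN, HID = 16, 3, 8                     # feature / kinematics / LSTM sizes (illustrative)

W_cnn = rng.standard_normal((64, FEAT)) * 0.1         # stand-in for the CNN
W_lstm = rng.standard_normal((FEAT + KIN + HID, 4 * HID)) * 0.1
b_lstm = np.zeros(4 * HID)
W_out = rng.standard_normal((HID, 2)) * 0.1           # e.g. steering, accel/brake

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM cell update over the spatial context feature vector."""
    z = np.concatenate([x, h]) @ W_lstm + b_lstm
    i, f, g, o = np.split(z, 4)                        # input/forget/cell/output gates
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

h, c = np.zeros(HID), np.zeros(HID)
for t in range(5):                                     # a few time steps
    frame = rng.standard_normal(64)                    # pre-processed image data
    kinematics = rng.standard_normal(KIN)              # e.g. speed, yaw rate
    features = np.tanh(frame @ W_cnn)                  # CNN stand-in -> spatial features
    x = np.concatenate([features, kinematics])         # spatial context feature vector
    h, c = lstm_step(x, h, c)                          # encoded temporal context vector

commands = h @ W_out                                   # fully connected layer
print(commands.shape)                                  # (2,)
```

The LSTM's hidden state `h` carries the previous time steps' outputs forward, which is how the encoded temporal context at the current time step depends on earlier frames.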
-
Patent number: 10940863
Abstract: Systems and methods are provided that employ spatial and temporal attention-based deep reinforcement learning of hierarchical lane-change policies for controlling an autonomous vehicle. An actor-critic network architecture includes an actor network that processes image data received from an environment to learn the lane-change policies as a set of hierarchical actions, and a critic network that evaluates the lane-change policies to calculate loss and gradients to predict an action-value function (Q) that is used to drive learning and update parameters of the lane-change policies. The actor-critic network architecture implements a spatial attention module to select relevant regions in the image data that are of importance, and a temporal attention module to learn temporal attention weights applied to past frames of image data to indicate their relative importance in deciding which lane-change policy to select.
Type: Grant
Filed: November 1, 2018
Date of Patent: March 9, 2021
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
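The temporal attention module's core idea (weighting past frames by learned relevance before deciding on a policy) can be sketched as a softmax-weighted sum. Dot-product scoring and all shapes here are illustrative choices; the patent does not specify a scoring function, and the names are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())        # shift for numerical stability
    return e / e.sum()

def temporal_attention(past_features, query):
    """Weight past frames' features by relevance to the current frame.

    past_features: (T, D) array, one feature vector per past frame.
    query: (D,) feature vector for the current frame.
    """
    scores = past_features @ query           # relevance of each past frame
    weights = softmax(scores)                # temporal attention weights
    context = weights @ past_features        # weighted summary of the history
    return weights, context

rng = np.random.default_rng(1)
past = rng.standard_normal((4, 6))           # 4 past frames, 6-dim features
now = rng.standard_normal(6)
w, ctx = temporal_attention(past, now)
print(w)                                     # weights sum to 1
```

In the patented architecture the weights would be produced by a learned module rather than a fixed dot product, but the downstream use is the same: frames with higher weight contribute more to the lane-change decision.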
-
Patent number: 10732639
Abstract: The present application generally relates to a method and apparatus for generating an action policy for controlling an autonomous vehicle. In particular, the system performs a deep learning algorithm to determine the action policy and uses an automatically generated curriculum system to determine a number of increasingly difficult tasks that refine the action policy.
Type: Grant
Filed: March 8, 2018
Date of Patent: August 4, 2020
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Praveen Palanisamy, Zhiqian Qiao, Upali P. Mudalige, Katharina Muelling, John M. Dolan
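The curriculum idea (train on a task, promote to a harder one only once the policy solves the current one) can be sketched with a simple loop. The difficulty parameterization, promotion threshold, and the stand-in training update are all hypothetical, chosen only so the loop terminates deterministically.

```python
# Hypothetical curriculum loop: higher 'difficulty' might mean denser
# traffic or tighter maneuvers; all names and thresholds are illustrative.
def make_task(difficulty):
    return {"difficulty": difficulty}

def train_policy(policy, task):
    """Stand-in for a deep RL update; returns an estimated success rate."""
    policy["skill"] += 0.3                   # training improves the policy
    return min(1.0, policy["skill"] / (1.0 + task["difficulty"]))

policy = {"skill": 0.0}
difficulty = 0
while difficulty < 5:                        # increasingly difficult tasks
    task = make_task(difficulty)
    success = train_policy(policy, task)
    if success > 0.8:                        # promote only once the current
        difficulty += 1                      # task is reliably solved
print(difficulty, round(policy["skill"], 1))
```

The key property is that the policy never sees a task it is far from solving, which is what makes automatically generated curricula useful for refining an action policy on hard driving scenarios.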
-
Publication number: 20200139973
Abstract: Systems and methods are provided that employ spatial and temporal attention-based deep reinforcement learning of hierarchical lane-change policies for controlling an autonomous vehicle. An actor-critic network architecture includes an actor network that processes image data received from an environment to learn the lane-change policies as a set of hierarchical actions, and a critic network that evaluates the lane-change policies to calculate loss and gradients to predict an action-value function (Q) that is used to drive learning and update parameters of the lane-change policies. The actor-critic network architecture implements a spatial attention module to select relevant regions in the image data that are of importance, and a temporal attention module to learn temporal attention weights applied to past frames of image data to indicate their relative importance in deciding which lane-change policy to select.
Type: Application
Filed: November 1, 2018
Publication date: May 7, 2020
Applicants: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
-
Publication number: 20200142421
Abstract: Systems and methods are provided for end-to-end learning of commands for controlling an autonomous vehicle. A pre-processor pre-processes image data acquired by sensors at a current time step (CTS) to generate pre-processed image data that is concatenated with additional input(s) (e.g., a segmentation map and/or optical flow map) to generate a dynamic scene output. A convolutional neural network (CNN) processes the dynamic scene output to generate a feature map that includes extracted spatial features that are concatenated with vehicle kinematics to generate a spatial context feature vector. An LSTM network processes, during the CTS, the spatial context feature vector at the CTS and one or more previous LSTM outputs at corresponding previous time steps to generate an encoded temporal context vector at the CTS. A fully connected layer processes the encoded temporal context vector to learn control command(s) (e.g., steering angle, acceleration rate, and/or brake rate control commands).
Type: Application
Filed: November 5, 2018
Publication date: May 7, 2020
Applicants: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Praveen Palanisamy, Upali P. Mudalige, Yilun Chen, John M. Dolan, Katharina Muelling
-
Publication number: 20200026277
Abstract: A method in an autonomous vehicle (AV) is provided. The method includes determining, from vehicle sensor data and road geometry data, a plurality of range measurements and obstacle velocity data; determining vehicle state data, wherein the vehicle state data includes a velocity of the AV, a distance to a stop line, a distance to a midpoint of an intersection, and a distance to a goal; determining, based on the plurality of range measurements, the obstacle velocity data, and the vehicle state data, a set of discrete behavior actions and a unique trajectory control action associated with each discrete behavior action; choosing a discrete behavior action and its associated trajectory control action to perform; and communicating a message to vehicle controls conveying the unique trajectory control action associated with the chosen discrete behavior action.
Type: Application
Filed: July 19, 2018
Publication date: January 23, 2020
Applicants: GM GLOBAL TECHNOLOGY OPERATIONS LLC, Carnegie Mellon University
Inventors: Praveen Palanisamy, Zhiqian Qiao, Katharina Muelling, John M. Dolan, Upali P. Mudalige
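The pairing of each discrete behavior action with a unique trajectory control action can be sketched as below. The behavior set, state fields, and cost functions are hypothetical, not taken from the application; they merely illustrate choosing one behavior and emitting its associated trajectory command.

```python
# Hypothetical cost-based selection among discrete behavior actions at an
# intersection; the action set, state fields, and costs are illustrative.
BEHAVIORS = {
    "stop":    lambda s: 0.0 if s["obstacle_gap_s"] < 2.0 else 5.0,
    "creep":   lambda s: 1.0,
    "proceed": lambda s: 0.5 if s["obstacle_gap_s"] >= 4.0 else 10.0,
}

def choose_behavior(state):
    """Pick the lowest-cost discrete behavior for the current vehicle state."""
    return min(BEHAVIORS, key=lambda name: BEHAVIORS[name](state))

def trajectory_for(behavior, state):
    """Unique trajectory control action paired with each behavior (sketch)."""
    target_speed = {"stop": 0.0, "creep": 1.5, "proceed": state["speed_limit"]}
    return {"behavior": behavior, "target_speed": target_speed[behavior]}

state = {"obstacle_gap_s": 5.0,        # time gap to crossing traffic, seconds
         "speed_limit": 10.0}          # m/s
action = choose_behavior(state)
print(trajectory_for(action, state))
```

The message conveyed to the vehicle controls would then carry only the chosen trajectory control action, with the behavior selection logic kept upstream.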
-
Publication number: 20190344448
Abstract: Various example embodiments described herein relate to an item manipulation system including a control system and a robotic arm coupled to the control system. The item manipulation system includes an end effector that is communicatively coupled to the control system and defines a first end and a second end. The first end of the end effector is rotatably engaged to the robotic arm. The item manipulation system also includes a gripper unit attached to the second end of the end effector. The gripper unit is configured to grip the item. The gripper unit includes at least one flexible suction cup and at least one rigid gripper. Each of the at least one flexible suction cup and the at least one rigid gripper engages a surface of the item based on a vacuum suction force generated through the at least one flexible suction cup or the at least one rigid gripper.
Type: Application
Filed: May 7, 2019
Publication date: November 14, 2019
Inventors: Matthew R. WICKS, Gabriel GOLDMAN, D. W. Wilson HAMILTON, Katharina MUELLING
-
Publication number: 20190344447
Abstract: The present disclosure relates to a material handling system for manipulating items. The material handling system includes a repositioning system comprising a robotic tool which includes a robotic arm portion and an end effector. The robotic tool is configured to manipulate an item in a first orientation and reorient the item to a second orientation. The material handling system further includes a vision system having one or more sensors positioned within the material handling system. The vision system is configured to generate inputs corresponding to the characteristics of the items. The material handling system may further include a controller executing instructions to cause the material handling system to identify the item in the first orientation based on the one or more characteristics of the item, initiate, by the repositioning system, picking of the item in the first orientation, and re-orient the item to the second orientation.
Type: Application
Filed: May 7, 2019
Publication date: November 14, 2019
Inventors: Matthew R. WICKS, Michael L. GIRTMAN, Thomas M. FERNER, John SIMONS, Herman HERMAN, Gabriel GOLDMAN, Jose GONZALEZ-MORA, Katharina MUELLING
-
Publication number: 20190278282
Abstract: The present application generally relates to a method and apparatus for generating an action policy for controlling an autonomous vehicle. In particular, the system performs a deep learning algorithm to determine the action policy and uses an automatically generated curriculum system to determine a number of increasingly difficult tasks that refine the action policy.
Type: Application
Filed: March 8, 2018
Publication date: September 12, 2019
Inventors: Praveen Palanisamy, Zhiqian Qiao, Upali P. Mudalige, Katharina Muelling, John M. Dolan