Patents by Inventor David W. Payton

David W. Payton has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9875427
    Abstract: A method for localizing and estimating a pose of a known object in a field of view of a vision system is described, and includes developing a processor-based model of the known object, capturing a bitmap image file including an image of the field of view including the known object, extracting features from the bitmap image file, matching the extracted features with features associated with the model of the known object, localizing an object in the bitmap image file based upon the extracted features, clustering the extracted features of the localized object, merging the clustered extracted features, detecting the known object in the field of view based upon a comparison of the merged clustered extracted features and the processor-based model of the known object, and estimating a pose of the detected known object in the field of view based upon the detecting of the known object.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: January 23, 2018
    Assignee: GM Global Technology Operations LLC
    Inventors: Swarup Medasani, Jason Meltzer, Jiejun Xu, Zhichao Chen, Rashmi N. Sundareswara, David W. Payton, Ryan M. Uhlenbrock, Leandro G. Barajas, Kyungnam Kim
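    Sketch: A minimal, hypothetical illustration of the pipeline above using off-the-shelf OpenCV primitives as stand-ins for the patented steps (ORB features, brute-force matching, and a RANSAC homography standing in for the clustering, merging, and pose-estimation stages); the function name and thresholds are illustrative, not from the patent.
      import cv2
      import numpy as np

      def estimate_pose(model_img, scene_img):
          orb = cv2.ORB_create()
          # Extract features from the model and from the captured bitmap image.
          kp_m, des_m = orb.detectAndCompute(model_img, None)
          kp_s, des_s = orb.detectAndCompute(scene_img, None)
          # Match the extracted features against the model's features.
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des_m, des_s)
          if len(matches) < 4:
              return None  # known object not detected in the field of view
          src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          # RANSAC both "clusters" the mutually consistent matches (inliers)
          # and yields a planar pose estimate for the detected object.
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          return H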
  • Patent number: 9844881
    Abstract: A machine vision system for a controllable robotic device proximal to a workspace includes an image acquisition sensor arranged to periodically capture vision signal inputs each including an image of a field of view including the workspace. A controller operatively couples to the robotic device and includes a non-transitory memory component including an executable vision perception routine. The vision perception routine includes a focus loop control routine operative to dynamically track a focus object in the workspace and a background loop control routine operative to monitor a background of the workspace. The focus loop control routine executes asynchronously and in parallel with the background loop control routine to determine a combined resultant including the focus object and the background based upon the periodically captured vision signal inputs. The controller is operative to control the robotic device to manipulate the focus object based upon the focus loop control routine.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: December 19, 2017
    Assignee: GM Global Technology Operations LLC
    Inventors: David W. Payton, Kyungnam Kim, Zhichao Chen, Ryan M. Uhlenbrock, Li Yang Ku
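    Sketch: A minimal, hypothetical rendering of the dual-loop idea above: a fast focus loop tracks the object of interest while a slower background loop monitors the rest of the workspace, the two running asynchronously in parallel. The Camera stub and the track_focus/scan_background placeholders are illustrative, not the patent's API.
      import threading, time, random

      class Camera:                        # stub image acquisition sensor
          def capture(self):
              return random.random()

      def track_focus(frame):              # stand-in for focus-object tracking
          return frame

      def scan_background(frame):          # stand-in for background monitoring
          return frame

      state, lock = {"focus": None, "background": None}, threading.Lock()

      def focus_loop(camera):              # fast loop: dynamically track focus
          while True:
              result = track_focus(camera.capture())
              with lock:
                  state["focus"] = result
              time.sleep(0.01)

      def background_loop(camera):         # slow loop: monitor the background
          while True:
              result = scan_background(camera.capture())
              with lock:
                  state["background"] = result
              time.sleep(0.5)              # background updates less often

      def combined_resultant():            # merge the two loops' outputs
          with lock:
              return dict(state)

      cam = Camera()
      threading.Thread(target=focus_loop, args=(cam,), daemon=True).start()
      threading.Thread(target=background_loop, args=(cam,), daemon=True).start()
      time.sleep(1.0)
      print(combined_resultant())          # combined focus + background resultant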
  • Patent number: 9824607
    Abstract: A brain-machine interface for extracting user action intentions within a continuous asynchronous interactive environment is presented. A subliminal stimulus module generates contextually appropriate decision-related stimuli that are unobtrusive to a user. An altered perceptual experience module modifies a user's sensation of the interactive environment based on decision-related stimuli generated from the subliminal stimulus module. A brain monitoring module assesses the user's brain activity in response to the decision-related stimuli to determine whether an action within the asynchronous interactive environment is intended by the user. Finally, an action is taken based on explicit user input, the user's brain activity in response to the decision-related stimuli, or a combination thereof.
    Type: Grant
    Filed: January 23, 2013
    Date of Patent: November 21, 2017
    Assignee: HRL Laboratories, LLC
    Inventors: Rajan Bhattacharyya, Ryan M. Uhlenbrock, David W. Payton
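    Sketch: A minimal, hypothetical decision rule in the spirit of the abstract above: an action is taken on explicit user input, or when the classified brain response to a subliminal decision-related stimulus is sufficiently confident. The score scale and the 0.8 threshold are illustrative assumptions, not from the patent.
      def intend_action(explicit_input, brain_response_score, threshold=0.8):
          # explicit_input: bool; brain_response_score: classifier output in [0, 1]
          # estimated from brain activity following the decision-related stimulus.
          return bool(explicit_input) or brain_response_score >= threshold

      print(intend_action(False, 0.91))  # implicit intent alone triggers the action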
  • Publication number: 20170303849
    Abstract: Described is a system for gait intervention and fall prevention. The system is incorporated into a body suit having a plurality of distributed sensors and a vestibulo-muscular biostim array. The sensors are operable for providing biosensor data to an analytics module, while the vestibulo-muscular biostim array includes a plurality of distributed effectors. The analytics module is connected with the body suit and sensors and is operable for receiving biosensor data, analyzing a particular user's gait, and predicting falls. Finally, a closed-loop biostim control module is included for activating the vestibulo-muscular biostim array to compensate for a risk of a predicted fall.
    Type: Application
    Filed: February 11, 2016
    Publication date: October 26, 2017
    Inventors: Vincent De Sapio, Michael D. Howard, Suhas E. Chelian, Matthias Ziegler, Matthew E. Phillips, Kevin R. Martin, Heiko Hoffmann, David W. Payton
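    Sketch: A minimal, hypothetical closed control loop for the system above: biosensor readings feed a fall-risk predictor, and a predicted fall activates the biostim array. All class names, the averaging "model," and the 0.7 threshold are illustrative stand-ins.
      import random

      class Sensor:                                   # stub distributed biosensor
          def read(self): return random.random()

      class Predictor:                                # stub analytics module
          def fall_risk(self, readings): return sum(readings) / len(readings)

      class BiostimArray:                             # stub vestibulo-muscular effectors
          def activate(self, level): print("stimulating, level", round(level, 2))

      def control_step(sensors, predictor, stim, risk_threshold=0.7):
          readings = [s.read() for s in sensors]      # gather biosensor data
          risk = predictor.fall_risk(readings)        # predict an impending fall
          if risk > risk_threshold:                   # compensate before it happens
              stim.activate(risk - risk_threshold)

      control_step([Sensor() for _ in range(4)], Predictor(), BiostimArray())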
  • Patent number: 9776325
    Abstract: Described is a system for tele-robotic operations over time-delayed communication links. Sensor data is acquired from at least one sensor for sensing the surroundings of a robot having at least one robotic arm for manipulating an object. A three-dimensional model of the sensed surroundings is generated, and the sensor data is fit to the three-dimensional model. Using the three-dimensional model, a user demonstrates a movement path for the at least one robotic arm. A flow field representing the movement path is generated and combined with obstacle-repellent forces to provide force feedback to the user through a haptic device. The flow field comprises a set of parameters, and the set of parameters is transmitted to the robot to execute a movement of the at least one robotic arm for manipulating the object.
    Type: Grant
    Filed: November 6, 2013
    Date of Patent: October 3, 2017
    Assignee: HRL Laboratories, LLC
    Inventors: Heiko Hoffmann, David W. Payton, Vincent De Sapio
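    Sketch: A minimal, hypothetical combination of a demonstrated-path flow field with obstacle-repellent forces, as the abstract above describes. The nearest-waypoint flow field and the gains are illustrative assumptions.
      import numpy as np

      def commanded_velocity(pos, demo_path, obstacles, k_flow=1.0, k_rep=0.5):
          pos = np.asarray(pos, float)
          # Flow-field term: head toward the nearest point on the demonstrated path.
          demo_path = np.asarray(demo_path, float)
          nearest = demo_path[np.argmin(np.linalg.norm(demo_path - pos, axis=1))]
          v = k_flow * (nearest - pos)
          # Obstacle-repellent term: inverse-square push away from each obstacle,
          # which can also be rendered as force feedback through the haptic device.
          for obs in np.asarray(obstacles, float):
              d = pos - obs
              dist = np.linalg.norm(d) + 1e-6
              v += k_rep * d / dist**3
          return v

      # One step toward a straight demonstrated path, deflected by one obstacle.
      print(commanded_velocity([0, 0], [[1, 0], [2, 0]], [[0.5, 0.1]]))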
  • Publication number: 20170032220
    Abstract: A method for localizing and estimating a pose of a known object in a field of view of a vision system is described, and includes developing a processor-based model of the known object, capturing a bitmap image file including an image of the field of view including the known object, extracting features from the bitmap image file, matching the extracted features with features associated with the model of the known object, localizing an object in the bitmap image file based upon the extracted features, clustering the extracted features of the localized object, merging the clustered extracted features, detecting the known object in the field of view based upon a comparison of the merged clustered extracted features and the processor-based model of the known object, and estimating a pose of the detected known object in the field of view based upon the detecting of the known object.
    Type: Application
    Filed: July 28, 2015
    Publication date: February 2, 2017
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Swarup Medasani, Jason Meltzer, Jiejun Xu, Zhichao Chen, Rashmi N. Sundareswara, David W. Payton, Ryan M. Uhlenbrock, Leandro G. Barajas, Kyungnam Kim
  • Patent number: 9557722
    Abstract: Described is a control system for stabilizing a complex system through self-adjustment. The complex system consists of agents or machines that interact with an environment and are controlled by a controller. The control system includes a sensor configured to measure a state of the complex system and output the measured state. A filter receives the measured state of the complex system, computes a variance in the measured state over time, and outputs the computed variance. A regulator, which is connected with at least one controller, adjusts a control parameter in response to the computed variance received from the filter. The regulator is configured to regulate each controller's action on each agent or machine based on the control parameter in order to maintain stability of the complex system. In a desired aspect, the control parameter comprises a set of additional input delays.
    Type: Grant
    Filed: August 6, 2012
    Date of Patent: January 31, 2017
    Assignee: HRL Laboratories, LLC
    Inventors: Heiko Hoffmann, David W. Payton
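    Sketch: A minimal, hypothetical version of the variance-driven regulator above: a filter tracks the variance of the measured state over a sliding window, and the regulator nudges a control parameter (here, an added input delay) in proportion to excess variance. Window size, gain, and setpoint are illustrative assumptions.
      from collections import deque
      import statistics

      class Regulator:
          def __init__(self, window=50, target_var=0.1, gain=0.01):
              self.history = deque(maxlen=window)  # filter's sliding window
              self.target_var = target_var
              self.gain = gain
              self.input_delay = 0.0               # the regulated control parameter

          def update(self, measured_state):
              self.history.append(measured_state)
              if len(self.history) < 2:
                  return self.input_delay
              var = statistics.variance(self.history)  # filter: variance over time
              # Regulator: adjust the added input delay toward the variance setpoint.
              self.input_delay = max(0.0, self.input_delay
                                     + self.gain * (var - self.target_var))
              return self.input_delay

      reg = Regulator()
      for y in [0.0, 0.5, -0.5, 0.6, -0.6]:        # noisy measured states
          delay = reg.update(y)
      print(delay)                                 # delay grows with excess variance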
  • Publication number: 20160368148
    Abstract: A machine vision system for a controllable robotic device proximal to a workspace includes an image acquisition sensor arranged to periodically capture vision signal inputs each including an image of a field of view including the workspace. A controller operatively couples to the robotic device and includes a non-transitory memory component including an executable vision perception routine. The vision perception routine includes a focus loop control routine operative to dynamically track a focus object in the workspace and a background loop control routine operative to monitor a background of the workspace. The focus loop control routine executes asynchronously and in parallel with the background loop control routine to determine a combined resultant including the focus object and the background based upon the periodically captured vision signal inputs. The controller is operative to control the robotic device to manipulate the focus object based upon the focus loop control routine.
    Type: Application
    Filed: June 22, 2015
    Publication date: December 22, 2016
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: David W. Payton, Kyungnam Kim, Zhichao Chen, Ryan M. Uhlenbrock, Li Yang Ku
  • Patent number: 9445739
    Abstract: Systems, methods, and apparatus for neuro-robotic goal selection are disclosed. An example method to control a robot is described, including presenting a target object to a user, the target object corresponding to a goal to be effectuated by a robot, emphasizing a portion of the target object, identifying a first brain signal corresponding to a first mental response of the user to the emphasized portion, determining whether the first mental response corresponds to a selection of the emphasized portion by the user, and effectuating the robot with respect to the goal based on the emphasized portion.
    Type: Grant
    Filed: February 3, 2010
    Date of Patent: September 20, 2016
    Assignee: HRL Laboratories, LLC
    Inventors: David W. Payton, Michael J. Daily
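    Sketch: A minimal, hypothetical selection loop matching the abstract above: portions of the target object are emphasized one at a time, a classifier scores the user's brain response to each emphasis, and the strongest sufficiently confident response becomes the goal the robot effectuates. emphasize() and classify_response() are illustrative stand-ins for the display and the brain-signal classifier.
      def select_goal(portions, emphasize, classify_response, threshold=0.8):
          best, best_score = None, 0.0
          for portion in portions:
              emphasize(portion)                  # visually emphasize this portion
              score = classify_response(portion)  # mental response to the emphasis
              if score > best_score:
                  best, best_score = portion, score
          # Treat the strongest response as a selection only if it is confident.
          return best if best_score >= threshold else None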
  • Patent number: 9403273
    Abstract: A method of training a robot to autonomously execute a robotic task includes moving an end effector through multiple states of a predetermined robotic task to demonstrate the task to the robot in a set of n training demonstrations. The method includes measuring training data, including at least the linear force and the torque via a force-torque sensor while moving the end effector through the multiple states. Key features are extracted from the training data, which is segmented into a time sequence of control primitives. Transitions between adjacent segments of the time sequence are identified. During autonomous execution of the same task, a controller detects the transitions and automatically switches between control modes. A robotic system includes a robot, force-torque sensor, and a controller programmed to execute the method.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: August 2, 2016
    Assignee: GM Global Technology Operations LLC
    Inventors: David W. Payton, Ryan M. Uhlenbrock, Li Yang Ku
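    Sketch: A minimal, hypothetical segmentation of a demonstrated force trace into control primitives, in the spirit of the abstract above: a transition is declared wherever the force magnitude jumps by more than a threshold between samples. The patent extracts richer key features; the 1.0 N jump threshold is an illustrative assumption.
      import numpy as np

      def segment_demo(force_trace, jump=1.0):
          mags = np.linalg.norm(np.asarray(force_trace, float), axis=1)
          cuts = np.where(np.abs(np.diff(mags)) > jump)[0] + 1  # transition indices
          return cuts, np.split(mags, cuts)   # one segment per control primitive

      # Example: a press (force rises sharply) followed by a release.
      cuts, segs = segment_demo([[0, 0, 0.1], [0, 0, 0.2], [0, 0, 3.0],
                                 [0, 0, 3.1], [0, 0, 0.2]])
      print(cuts)   # -> [2 4]: switch control modes at these samples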
  • Patent number: 9387589
    Abstract: A robotic system includes a robot, sensors which measure status information including a position and orientation of the robot and an object within the workspace, and a controller. The controller, which visually debugs an operation of the robot, includes a simulator module, an action planning module, a marker generator module, and a graphical user interface (GUI). The simulator module receives the status information and generates visual markers, in response to marker commands, as graphical depictions of the object and robot. The action planning module selects a next action of the robot. The marker generator module generates and outputs the marker commands to the simulator module in response to the selected next action. The GUI receives and displays the visual markers, selected future action, and input commands. Via the action planning module, the position and/or orientation of the visual markers are modified in real time to change the operation of the robot.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: July 12, 2016
    Assignee: GM Global Technology Operations LLC
    Inventors: Leandro G. Barajas, David W. Payton, Li Yang Ku, Ryan M. Uhlenbrock, Darren Earl
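    Sketch: A minimal, hypothetical marker-command flow for the system above: the planner's selected next action is turned into visual markers, and a user's edit to a marker's pose in the GUI feeds back into the action, changing the robot's operation in real time. All class and field names are illustrative, not the patent's API.
      from dataclasses import dataclass

      @dataclass
      class Marker:                 # graphical depiction of an object/robot pose
          name: str
          position: tuple
          yaw: float

      def markers_for(action):      # marker generator module
          return [Marker(action["target"], action["position"], action["yaw"])]

      def on_user_edit(action, marker):        # GUI callback: live re-planning
          action["position"] = marker.position  # the modified marker pose
          action["yaw"] = marker.yaw            # changes the upcoming operation
          return action

      action = {"target": "gear", "position": (0.40, 0.20, 0.0), "yaw": 1.57}
      m = markers_for(action)[0]
      m.position = (0.45, 0.20, 0.0)           # user drags the marker in the GUI
      print(on_user_edit(action, m))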
  • Patent number: 9381643
    Abstract: A robotic system includes an end-effector and a control system. The control system includes a processor, a dynamical system module (DSM), and a velocity control module (VCM). Via execution of a method, the DSM processes inputs via a flow vector field and outputs a control velocity command. The inputs may include an actual position, desired goal position, and demonstrated reference path of the end-effector. The VCM receives an actual velocity of the end-effector and the control velocity command as inputs, and transmits a motor torque command to the end-effector as an output command. The control system employs a predetermined set of differential equations to generate a motion trajectory of the end-effector in real time that approximates the demonstrated reference path. The control system is also programmed to modify movement of the end-effector in real time via the VCM in response to perturbations of movement of the end-effector.
    Type: Grant
    Filed: July 3, 2014
    Date of Patent: July 5, 2016
    Assignee: GM Global Technology Operations LLC
    Inventors: Heiko Hoffmann, David W. Payton, Derek Mitchell
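    Sketch: A minimal, hypothetical dynamical system in the spirit of the abstract above: integrating a flow-field velocity command that attracts the end-effector toward the demonstrated reference path and its goal reproduces the demonstration and recovers from perturbed starts. The nearest-waypoint field, gains, and step size are illustrative assumptions.
      import numpy as np

      def flow_velocity(x, ref_path, goal, k_path=2.0, k_goal=1.0):
          ref_path = np.asarray(ref_path, float)
          nearest = ref_path[np.argmin(np.linalg.norm(ref_path - x, axis=1))]
          return k_path * (nearest - x) + k_goal * (np.asarray(goal, float) - x)

      def roll_out(x0, ref_path, goal, dt=0.05, steps=200):
          x, traj = np.asarray(x0, float), []
          for _ in range(steps):
              x = x + dt * flow_velocity(x, ref_path, goal)  # integrate the DSM
              traj.append(x.copy())
          return np.array(traj)

      # A perturbed start still flows onto the demonstrated path toward (1, 0).
      path = [[0.2, 0.0], [0.5, 0.0], [0.8, 0.0], [1.0, 0.0]]
      print(roll_out([0.3, 0.5], path, [1.0, 0.0])[-1])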
  • Publication number: 20160000511
    Abstract: A robotic system includes an end-effector and a control system. The control system includes a processor, a dynamical system module (DSM), and a velocity control module (VCM). Via execution of a method, the DSM processes inputs via a flow vector field and outputs a control velocity command. The inputs may include an actual position, desired goal position, and demonstrated reference path of the end-effector. The VCM receives an actual velocity of the end-effector and the control velocity command as inputs, and transmits a motor torque command to the end-effector as an output command. The control system employs a predetermined set of differential equations to generate a motion trajectory of the end-effector in real time that approximates the demonstrated reference path. The control system is also programmed to modify movement of the end-effector in real time via the VCM in response to perturbations of movement of the end-effector.
    Type: Application
    Filed: July 3, 2014
    Publication date: January 7, 2016
    Inventors: Heiko Hoffmann, David W. Payton, Derek Mitchell
  • Publication number: 20150336268
    Abstract: A method of training a robot to autonomously execute a robotic task includes moving an end effector through multiple states of a predetermined robotic task to demonstrate the task to the robot in a set of n training demonstrations. The method includes measuring training data, including at least the linear force and the torque via a force-torque sensor while moving the end effector through the multiple states. Key features are extracted from the training data, which is segmented into a time sequence of control primitives. Transitions between adjacent segments of the time sequence are identified. During autonomous execution of the same task, a controller detects the transitions and automatically switches between control modes. A robotic system includes a robot, force-torque sensor, and a controller programmed to execute the method.
    Type: Application
    Filed: May 23, 2014
    Publication date: November 26, 2015
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: David W. Payton, Ryan M. Uhlenbrock, Li Yang Ku
  • Publication number: 20150239127
    Abstract: A robotic system includes a robot, sensors which measure status information including a position and orientation of the robot and an object within the workspace, and a controller. The controller, which visually debugs an operation of the robot, includes a simulator module, an action planning module, a marker generator module, and a graphical user interface (GUI). The simulator module receives the status information and generates visual markers, in response to marker commands, as graphical depictions of the object and robot. The action planning module selects a next action of the robot. The marker generator module generates and outputs the marker commands to the simulator module in response to the selected next action. The GUI receives and displays the visual markers, selected future action, and input commands. Via the action planning module, the position and/or orientation of the visual markers are modified in real time to change the operation of the robot.
    Type: Application
    Filed: February 25, 2014
    Publication date: August 27, 2015
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Leandro G. Barajas, David W. Payton, Li Yang Ku, Ryan M. Uhlenbrock, Darren Earl
  • Patent number: 9020870
    Abstract: Described is a recall system that uses spiking neuron networks to identify an unknown external stimulus. The system operates by receiving a first input signal (having spatial-temporal data) that originates from a known external stimulus. The spatial-temporal data is converted into a first spike train. A first set of polychronous groups (PCGs) is generated as a result of the first spike train. Thereafter, a second input signal originating from an unknown external stimulus is received. The spatial-temporal data of the second input signal is converted into a second spike train. A second set of PCGs is then generated as a result of the second spike train. Finally, the second set of PCGs is recognized as being sufficiently similar to the first set of PCGs to identify the unknown external stimulus as the known external stimulus.
    Type: Grant
    Filed: June 14, 2011
    Date of Patent: April 28, 2015
    Assignee: HRL Laboratories, LLC
    Inventors: Michael J. Daily, Michael D. Howard, Yang Chen, David W. Payton, Rashmi N. Sundareswara
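    Sketch: A minimal, hypothetical recall step for the system above: each known stimulus is reduced to the set of polychronous groups (PCGs) its spike train activates, and an unknown stimulus is identified with the known one whose PCG set it most resembles, if that resemblance is strong enough. Jaccard similarity and the 0.6 threshold are illustrative stand-ins for the patent's similarity test.
      def jaccard(a, b):
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 0.0

      def recall(known_pcg_sets, unknown_pcgs, threshold=0.6):
          # known_pcg_sets: {stimulus label: set of activated PCG ids}
          best = max(known_pcg_sets,
                     key=lambda k: jaccard(known_pcg_sets[k], unknown_pcgs))
          return best if jaccard(known_pcg_sets[best], unknown_pcgs) >= threshold else None

      print(recall({"bell": {1, 2, 3, 7}, "whistle": {4, 5, 6}}, {1, 2, 3, 8}))  # -> bell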
  • Patent number: 9002642
    Abstract: Provided is a system and method for tracking and identifying a target in an area of interest based on a comparison of predicted target behavior or movement and sensed target behavior or movement. Incorporating aspects of both particle diffusion and mobility constraint models with target intent derivations, the system may continuously track a target while simultaneously refining target identification information. Alternatively, the system and method are applied to reacquire a target track based on prioritized intents and predicted target location.
    Type: Grant
    Filed: June 4, 2008
    Date of Patent: April 7, 2015
    Assignee: Raytheon Company
    Inventors: William B. Noble, Serdar N. Gokcen, Michael D. Howard, David W. Payton
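    Sketch: A minimal, hypothetical predict/update cycle in the spirit of the abstract above: particles diffuse to predict target motion, a mobility-constraint mask culls impossible locations, and agreement with the sensed position re-weights what remains. The Gaussian noise scales and resampling scheme are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      def step(particles, passable, sensed, sigma_move=1.0, sigma_obs=2.0):
          particles = particles + rng.normal(0, sigma_move, particles.shape)  # diffusion
          particles = particles[passable(particles)]    # mobility-constraint model
          d2 = np.sum((particles - sensed) ** 2, axis=1)
          w = np.exp(-d2 / (2 * sigma_obs ** 2))        # agreement with sensed track
          w /= w.sum()
          idx = rng.choice(len(particles), size=len(particles), p=w)
          return particles[idx]                         # resampled belief

      parts = rng.uniform(0, 10, (500, 2))
      parts = step(parts, lambda p: p[:, 1] >= 0, sensed=np.array([5.0, 5.0]))
      print(parts.mean(axis=0))                         # belief concentrates near (5, 5)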
  • Patent number: 8843236
    Abstract: A method for training a robot to execute a robotic task in a work environment includes moving the robot across its configuration space through multiple states of the task and recording motor schema describing a sequence of behavior of the robot. Sensory data describing performance and state values of the robot is recorded while moving the robot. The method includes detecting perceptual features of objects located in the environment, assigning virtual deictic markers to the detected perceptual features, and using the assigned markers and the recorded motor schema to subsequently control the robot in an automated execution of another robotic task. Markers may be combined to produce a generalized marker. A system includes the robot, a sensor array for detecting the performance and state values, a perceptual sensor for imaging objects in the environment, and an electronic control unit that executes the present method.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: September 23, 2014
    Assignee: GM Global Technology Operations LLC
    Inventors: Leandro G. Barajas, Eric Martinson, David W. Payton, Ryan M. Uhlenbrock
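    Sketch: A minimal, hypothetical marker-based replay for the method above: during training each recorded behavior is tied to a virtual deictic marker bound to a perceived feature, and at execution time the markers re-bind to matching features in the new scene. Matching features by "kind" alone is an illustrative simplification.
      def train(demonstration):
          # demonstration: (behavior, feature) pairs recorded while the robot
          # is moved through the states of the task.
          return [(behavior, {"marker": i, "kind": feat["kind"]})
                  for i, (behavior, feat) in enumerate(demonstration)]

      def execute(schema, scene_features):
          plan = []
          for behavior, marker in schema:
              # Re-bind each deictic marker to a matching feature in the new scene.
              target = next(f for f in scene_features if f["kind"] == marker["kind"])
              plan.append((behavior, target["pose"]))
          return plan

      schema = train([("grasp", {"kind": "bolt"}), ("place", {"kind": "bin"})])
      print(execute(schema, [{"kind": "bin", "pose": (3, 1)},
                             {"kind": "bolt", "pose": (0, 2)}]))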
  • Patent number: 8788030
    Abstract: Systems, methods, and apparatus for neuro-robotic tracking point selection are disclosed. A described example robot control system includes a feature and image presenter, a classifier, a visual-servo controller, and a robot interface. The feature and image presenter is to display an image of an object, emphasize one or more potential trackable features of the object, receive a selection of the emphasized feature, and determine an offset from the selected feature as a goal. The classifier is to classify a mental response to the emphasized features, and to determine that the mental response corresponds to the selection of one of the emphasized features. The visual-servo controller is to track the emphasized feature corresponding to an identified brain signal. The robot interface is to generate control information to effect a robot action based on the emphasized feature, with the visual-servo controller tracking the emphasized feature while the robot action is being effected.
    Type: Grant
    Filed: June 5, 2013
    Date of Patent: July 22, 2014
    Assignee: HRL Laboratories, LLC
    Inventors: David W. Payton, Michael J. Daily
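    Sketch: A minimal, hypothetical visual-servo law for the tracking leg of the system above: once a feature has been selected via the user's brain response, a proportional controller steers the camera so the feature, offset by the configured goal, stays centered while the robot action is effected. The gain and pixel-space control are illustrative assumptions.
      import numpy as np

      def servo_command(feature_px, image_center, goal_offset, k=0.01):
          # Pixel error between the goal (selected feature + offset) and the
          # image center; the command drives this error toward zero.
          error = (np.asarray(feature_px, float) + np.asarray(goal_offset, float)
                   - np.asarray(image_center, float))
          return -k * error   # camera-frame velocity command

      print(servo_command(feature_px=(400, 260), image_center=(320, 240),
                          goal_offset=(0, 0)))  # move to re-center the feature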
  • Patent number: 8756183
    Abstract: Described is a system for representing, storing, and reconstructing an input signal. The system constructs an index of unique polychronous groups (PCGs) from a spiking neuron network. Thereafter, a basis set of spike codes is generated from the unique PCGs. An input signal can then be received, with the input signal being spike encoded using the basis set of spike codes from the unique PCGs. The input signal can then be reconstructed by looking up in a reconstruction table, for each unique PCG in the basis set in temporal order according to firing times, anchor neurons. Using a neuron assignment table, an output location can be looked up for each anchor neuron to place a value based on the firing times of each unique PCG. Finally, the output locations of the anchor neurons can be compiled to reconstruct the input signal.
    Type: Grant
    Filed: June 14, 2011
    Date of Patent: June 17, 2014
    Assignee: HRL Laboratories, LLC
    Inventors: Michael J. Daily, Michael D. Howard, Yang Chen, Rashmi N. Sundareswara, David W. Payton
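    Sketch: A minimal, hypothetical reconstruction lookup for the system above: each unique PCG in the basis set maps, via a reconstruction table, to its anchor neurons; a neuron-assignment table maps each anchor neuron to an output location, where a value derived from the PCG's firing time is placed. All tables, and the use of the raw firing time as the placed value, are toy illustrations.
      def reconstruct(fired_pcgs, reconstruction_table, neuron_assignment, length):
          output = [0.0] * length
          # Visit each unique PCG in temporal order of its firing time.
          for pcg_id, t in sorted(fired_pcgs, key=lambda p: p[1]):
              for anchor in reconstruction_table[pcg_id]:   # look up anchor neurons
                  output[neuron_assignment[anchor]] = t     # place value at location
          return output

      recon = {"pcg_a": ["n1"], "pcg_b": ["n2", "n3"]}
      assign = {"n1": 0, "n2": 1, "n3": 2}
      print(reconstruct([("pcg_b", 0.7), ("pcg_a", 0.2)], recon, assign, length=3))
      # -> [0.2, 0.7, 0.7]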