Patents by Inventor Eric Martinson

Eric Martinson has filed for patents to protect the following inventions. The listing covers both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160250751
    Abstract: The disclosure includes methods for determining a current location for a user in an environment; detecting obstacles within the environment; estimating one or more physical capabilities of the user based on an electronic health record (EHR) associated with the user; generating, with a processor-based device that is programmed to perform the generating, instructions for a robot to perform a task based on the obstacles within the environment and one or more physical capabilities of the user; and instructing the robot to perform the task.
    Type: Application
    Filed: February 26, 2015
    Publication date: September 1, 2016
    Inventors: Eric Martinson, Emrah Akin Sisbot, Veeraganesh Yalla, Kentaro Oguchi, Yusuke Nakano
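
A minimal sketch of the decision step this abstract describes (localize the user, check obstacles, weigh EHR-derived capabilities, then instruct the robot). The capability fields, thresholds, and task names below are illustrative assumptions, not taken from the filing:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    location: tuple   # (x, y) position of the user, from localization
    reach_m: float    # estimated reach in meters, derived from the EHR
    can_bend: bool    # EHR-derived flag: can the user pick items off the floor?

def plan_robot_task(user, obstacles, item_pos):
    """Decide whether the robot should fetch an item (all thresholds illustrative)."""
    dx = item_pos[0] - user.location[0]
    dy = item_pos[1] - user.location[1]
    dist = (dx * dx + dy * dy) ** 0.5
    path_blocked = item_pos in obstacles       # toy stand-in for obstacle checking
    item_on_floor = item_pos[2] < 0.3          # z below 0.3 m counts as "floor"
    if path_blocked or dist > user.reach_m or (item_on_floor and not user.can_bend):
        return "fetch_item"                    # instruct the robot to act
    return "no_action"                         # user can reach the item unaided

user = UserProfile(location=(0.0, 0.0), reach_m=1.0, can_bend=False)
print(plan_robot_task(user, obstacles=[], item_pos=(0.5, 0.2, 0.1)))  # fetch_item
```
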
  • Publication number: 20160180195
    Abstract: By way of example, the technology disclosed by this document receives image data; extracts a depth image and a color image from the image data; creates a mask image by segmenting the depth image; determines a first likelihood score from the depth image and the mask image using a layered classifier; determines a second likelihood score from the color image and the mask image using a deep convolutional neural network; and determines a class of at least a portion of the image data based on the first likelihood score and the second likelihood score. Further, the technology can pre-filter the mask image using the layered classifier and then use the pre-filtered mask image and the color image to calculate a second likelihood score using the deep convolutional neural network to speed up processing.
    Type: Application
    Filed: February 19, 2016
    Publication date: June 23, 2016
    Inventors: Eric Martinson, Veeraganesh Yalla
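
A sketch of the two-stage scoring pipeline, assuming hypothetical `layered_score` and `cnn_score` callables; the pre-filter threshold, the averaging fusion rule, and the class names are illustrative choices the filing leaves open:

```python
def classify_region(depth_img, color_img, mask,
                    layered_score, cnn_score, prefilter=0.2):
    """Fuse a fast depth-based score with a CNN color-based score."""
    s1 = layered_score(depth_img, mask)   # first likelihood: layered classifier
    if s1 < prefilter:                    # pre-filter: skip the costly CNN
        return "background", s1
    s2 = cnn_score(color_img, mask)       # second likelihood: deep CNN on color
    fused = 0.5 * (s1 + s2)               # illustrative fusion rule
    return ("person" if fused > 0.5 else "background"), fused

# Toy stand-ins so the sketch runs without a trained model.
label, score = classify_region(
    depth_img=None, color_img=None, mask=None,
    layered_score=lambda d, m: 0.7,
    cnn_score=lambda c, m: 0.9)
print(label, score)   # person 0.8
```

The pre-filter mirrors the speed-up claimed in the abstract: the cheap depth stage rejects clearly negative regions before the expensive network ever runs.
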
  • Patent number: 9355334
    Abstract: In an example embodiment, a computer-implemented method is disclosed that determines a depth image; detects an object blob in the depth image; segments the object blob into a set of layers; and compares the set of layers associated with the object blob with a set of object models to determine a match. Comparing the set of layers with the set of object models can include determining a likelihood of the object blob as belonging to each of the object models and determining the object blob to match a particular object model based on the likelihood. Further, determining the likelihood of the object blob as belonging to the one of the object models can include computing a recognition score for each layer of the set of layers; and aggregating the recognition score of each layer of the set of layers.
    Type: Grant
    Filed: February 3, 2014
    Date of Patent: May 31, 2016
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Eric Martinson
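
One way to read the matching step: score each layer of the blob against the corresponding layer of each model, then aggregate. In this minimal sketch the layer representation (a single width value) and the scoring function are placeholder assumptions:

```python
def match_blob(layers, models, layer_score):
    """Pick the model with the highest aggregated per-layer recognition score."""
    best = None
    for name, model_layers in models.items():
        # Aggregate the recognition score of each layer (here: a sum).
        total = sum(layer_score(l, m) for l, m in zip(layers, model_layers))
        if best is None or total > best[1]:
            best = (name, total)
    return best

# Toy example: layers summarized by width; score is negative width mismatch.
blob_layers = [0.5, 0.4, 0.2]
object_models = {"person": [0.5, 0.45, 0.2], "chair": [0.6, 0.6, 0.6]}
print(match_blob(blob_layers, object_models,
                 layer_score=lambda l, m: -abs(l - m)))  # -> ('person', ~-0.05)
```
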
  • Publication number: 20160123743
    Abstract: The disclosure includes a method that includes assigning a classification to a travel route followed by a first client device based on data associated with when the first client device followed the travel route. The method may further include recommending the travel route to a second client device based on a request from the second client device for a desired travel route with the classification.
    Type: Application
    Filed: October 31, 2014
    Publication date: May 5, 2016
    Inventors: Emrah Akin Sisbot, Veera Ganesh Yalla, Eric Martinson, Hirokazu Nomoto, Takuya Hasegawa
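
A compact sketch of the classify-then-recommend flow; the labels and thresholds are illustrative, since the filing does not fix the classification criteria:

```python
route_labels = {}   # route_id -> set of classifications

def classify_route(route_id, avg_speed_kmh, hour_of_day):
    """Label a route from data gathered while a client device followed it."""
    labels = set()
    if avg_speed_kmh < 25:
        labels.add("congested")
    if 6 <= hour_of_day < 10:
        labels.add("morning-commute")
    route_labels.setdefault(route_id, set()).update(labels)

def recommend(desired_label):
    """Return routes matching the classification a second device requested."""
    return [r for r, ls in route_labels.items() if desired_label in ls]

classify_route("riverside", avg_speed_kmh=55, hour_of_day=7)
classify_route("downtown", avg_speed_kmh=18, hour_of_day=7)
print(recommend("congested"))   # ['downtown']
```
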
  • Patent number: 9301722
    Abstract: In an example, a computer-implemented method receives one or more user inputs and captures a sound associated with a sound source via one or more capturing devices using sound source localization. The method then estimates one or more first posterior likelihoods of one or more positions of the sound source based on the one or more user inputs and a second posterior likelihood of a position of the sound source based on the sound. The method then estimates an overall posterior likelihood of an actual position of the sound source based on 1) the one or more first posterior likelihoods of the one or more positions of the sound source estimated based on the one or more user inputs and 2) the second posterior likelihood of the position of the sound source estimated based on the sound.
    Type: Grant
    Filed: February 3, 2014
    Date of Patent: April 5, 2016
    Inventor: Eric Martinson
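
A one-dimensional sketch of the fusion: each user input and the acoustic estimate contribute a posterior over candidate positions, and the overall posterior is their normalized product. The Gaussian likelihood models and their widths are assumptions, not specified in the patent:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fuse_posteriors(candidates, user_cues, sound_pos):
    """Combine per-cue posteriors with the sound-based posterior."""
    scores = []
    for pos in candidates:
        p_user = 1.0
        for cue in user_cues:                # first posteriors: one per user input
            p_user *= gaussian(pos, cue, sigma=0.5)
        p_sound = gaussian(pos, sound_pos, sigma=0.3)   # second posterior
        scores.append(p_user * p_sound)      # overall (unnormalized) posterior
    z = sum(scores)
    return [s / z for s in scores]

candidates = [0.0, 0.5, 1.0, 1.5, 2.0]       # positions along a wall, in meters
post = fuse_posteriors(candidates, user_cues=[1.0, 1.2], sound_pos=0.8)
print(max(zip(post, candidates)))            # highest posterior and its position
```

Multiplying the posteriors treats the user inputs and the acoustic estimate as independent evidence, which is the usual simplification in this kind of fusion.
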
  • Patent number: 8843236
    Abstract: A method for training a robot to execute a robotic task in a work environment includes moving the robot across its configuration space through multiple states of the task and recording motor schema describing a sequence of behavior of the robot. Sensory data describing performance and state values of the robot is recorded while moving the robot. The method includes detecting perceptual features of objects located in the environment, assigning virtual deictic markers to the detected perceptual features, and using the assigned markers and the recorded motor schema to subsequently control the robot in an automated execution of another robotic task. Markers may be combined to produce a generalized marker. A system includes the robot, a sensor array for detecting the performance and state values, a perceptual sensor for imaging objects in the environment, and an electronic control unit that executes the present method.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: September 23, 2014
    Assignee: GM Global Technology Operations LLC
    Inventors: Leandro G. Barajas, Eric Martinson, David W. Payton, Ryan M. Uhlenbrock
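
A toy sketch of the marker idea: a demonstration is stored as (marker, action) steps, and at execution time each deictic marker is re-resolved against the current scene, so the recorded sequence transfers to new object positions. The marker names, scene format, and print stand-in for a motor schema are all illustrative. (Publication 20130245824 below is the pre-grant publication of this same March 15, 2012 filing, so the sketch applies to it as well.)

```python
# A recorded demonstration: deictic markers paired with motor-schema actions.
demonstration = [("part_A", "grasp"), ("bin", "place")]

def execute(demonstration, scene):
    """Replay the task, resolving each marker in the current scene."""
    for marker, action in demonstration:
        x, y = scene[marker]                      # resolve the deictic marker now
        print(f"{action} at ({x:.2f}, {y:.2f})")  # stand-in for a motor schema

# The same demonstration generalizes: only the scene changes between runs.
execute(demonstration, scene={"part_A": (0.20, 0.10), "bin": (0.60, 0.40)})
execute(demonstration, scene={"part_A": (0.35, 0.25), "bin": (0.55, 0.45)})
```
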
  • Publication number: 20130245824
    Abstract: A method for training a robot to execute a robotic task in a work environment includes moving the robot across its configuration space through multiple states of the task and recording motor schema describing a sequence of behavior of the robot. Sensory data describing performance and state values of the robot is recorded while moving the robot. The method includes detecting perceptual features of objects located in the environment, assigning virtual deictic markers to the detected perceptual features, and using the assigned markers and the recorded motor schema to subsequently control the robot in an automated execution of another robotic task. Markers may be combined to produce a generalized marker. A system includes the robot, a sensor array for detecting the performance and state values, a perceptual sensor for imaging objects in the environment, and an electronic control unit that executes the present method.
    Type: Application
    Filed: March 15, 2012
    Publication date: September 19, 2013
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Leandro G. Barajas, Eric Martinson, David W. Payton, Ryan M. Uhlenbrock
  • Publication number: 20060015222
    Abstract: Moving a target mobile node to arrange a number of mobile nodes includes determining a first direction. A first relative position for each of the neighboring mobile nodes of the target mobile node is established. The target mobile node moves according to a first alignment procedure to align the mobile nodes in the first direction. A second direction substantially orthogonal to the first direction is determined. A second relative position for each of the neighboring mobile nodes is established. The target mobile node moves according to a second alignment procedure to align the mobile nodes in the second direction.
    Type: Application
    Filed: July 12, 2004
    Publication date: January 19, 2006
    Inventors: David Payton, Eric Martinson
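
A minimal sketch of the two-pass arrangement: a first pass lines the nodes up along one direction, then a second pass spaces them along the substantially orthogonal direction. The centralized mean used here is a stand-in for what the filing describes as each node acting on its neighbors' relative positions:

```python
def align_to_line(nodes, axis):
    """First alignment pass: move every node to the mean along `axis`."""
    mean = sum(p[axis] for p in nodes) / len(nodes)
    for p in nodes:
        p[axis] = mean

def space_evenly(nodes, axis, spacing=1.0):
    """Second, orthogonal pass: place nodes at fixed spacing along `axis`."""
    for rank, p in enumerate(sorted(nodes, key=lambda q: q[axis])):
        p[axis] = rank * spacing

nodes = [[0.3, 3.0], [2.2, 1.0], [4.1, 2.0]]   # scattered (x, y) positions
align_to_line(nodes, axis=1)    # first direction: remove the y-scatter
space_evenly(nodes, axis=0)     # orthogonal direction: even x-spacing
print(nodes)                    # [[0.0, 2.0], [1.0, 2.0], [2.0, 2.0]]
```

Jumping each node straight to the target coordinate keeps the sketch short; a real mobile node would converge over repeated small movements, as the alignment procedures in the abstract suggest.
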