Patents by Inventor Eric Martinson

Eric Martinson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11925232
    Abstract: In some examples, a system (1900) includes a hearing protector (1920); at least one position sensor; at least one sound monitoring sensor; at least one computing device configured to: receive, from the at least one sound monitoring sensor and over a time duration, indications of sound levels to which a worker is exposed; determine, from the at least one position sensor and during the time duration, that the hearing protector is not positioned at one or more ears of the worker to attenuate the sound levels; and generate, in response to the determination that at least one of the sound levels satisfies an exposure threshold during the time duration and the hearing protector is not positioned at one or more ears of the worker to attenuate the sound levels, an indication for output.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: March 12, 2024
    Assignee: 3M Innovative Properties Company
    Inventors: Steven T. Awiszus, Kiran S. Kanukurthy, Eric C. Lobner, Robert J. Quintero, Micayla A. Johnson, Madeleine E. Filloux, Caroline M. Ylitalo, Paul A. Martinson
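
The claim logic of 11925232 reduces to a joint condition over the two sensor streams: alert when sound meets the exposure threshold while the protector is off the ears. A minimal sketch, assuming a hypothetical per-sample record with a sound level and a worn/not-worn flag; the 85 dBA threshold and all names are illustrative, not 3M's implementation:

```python
from dataclasses import dataclass

# Hypothetical exposure threshold in dBA; the patent leaves the value to configuration.
EXPOSURE_THRESHOLD_DBA = 85.0

@dataclass
class Sample:
    """One reading taken over the monitored time duration."""
    sound_level_dba: float   # from the sound monitoring sensor
    protector_worn: bool     # derived from the position sensor(s)

def should_alert(samples: list[Sample]) -> bool:
    """True when any sound level satisfies the exposure threshold while the
    hearing protector is not positioned at the worker's ears."""
    return any(
        s.sound_level_dba >= EXPOSURE_THRESHOLD_DBA and not s.protector_worn
        for s in samples
    )

if __name__ == "__main__":
    readings = [Sample(72.0, False), Sample(91.5, False), Sample(95.0, True)]
    if should_alert(readings):
        print("ALERT: unprotected exposure above threshold")  # indication for output
```
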
  • Patent number: 11120353
    Abstract: By way of example, the technology disclosed by this document may be implemented in a method that includes receiving stored sensor data describing characteristics of a vehicle in motion at a past time and extracting features for prediction and features for recognition from the stored sensor data. The features for prediction may be input into a prediction network, which may generate a predicted label for a past driver action based on the features for prediction. The features for recognition may be input into a recognition network, which may generate a recognized label for the past driver action based on the features for recognition. In some instances, the method may include training prediction network weights of the prediction network using the recognized label and the predicted label.
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: September 14, 2021
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Oluwatobi Olabiyi, Eric Martinson
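
The training loop in 11120353 can be sketched in a few lines: the recognition network, which sees the full stored window of sensor data, supplies the label used to train the prediction network. A minimal PyTorch sketch; the linear networks, feature sizes, and optimizer settings are stand-ins, not Toyota's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_PRED_FEATS, N_RECOG_FEATS, N_ACTIONS = 16, 32, 5

# Hypothetical stand-ins for the two networks named in the abstract.
prediction_net = nn.Linear(N_PRED_FEATS, N_ACTIONS)
recognition_net = nn.Linear(N_RECOG_FEATS, N_ACTIONS)
opt = torch.optim.Adam(prediction_net.parameters(), lr=1e-3)

def train_step(pred_feats, recog_feats):
    """One step: use the recognition network's label for a past driver action
    as the training target for the prediction network's weights."""
    with torch.no_grad():
        recognized = recognition_net(recog_feats).argmax(dim=-1)  # recognized label
    logits = prediction_net(pred_feats)                           # predicted label
    loss = F.cross_entropy(logits, recognized)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of features extracted from stored sensor data.
print(train_step(torch.randn(8, N_PRED_FEATS), torch.randn(8, N_RECOG_FEATS)))
```
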
  • Patent number: 10816974
    Abstract: The novel technology described in this disclosure includes an example method comprising selecting a target of interest having an obsolete appearance model, the obsolete appearance model describing a prior appearance of the target of interest; navigating a first mobile robot to a location, the first mobile robot including a mechanical component providing motive force to the first mobile robot and an image sensor; and searching for the target of interest at the location. The method may include collecting, at the location by the image sensor of the first mobile robot, appearance data of the target of interest, and updating the obsolete appearance model using the appearance data of the target of interest. In some implementations, the method may include, in a subsequent meeting between the target of interest and a second mobile robot at a later point in time, recognizing the target of interest using the updated appearance model.
    Type: Grant
    Filed: June 14, 2017
    Date of Patent: October 27, 2020
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Eric Martinson, Rui Guo, Yusuke Nakano
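
One way to read 10816974: the appearance model is a feature embedding that one robot refreshes with newly collected data so a later robot can still match the target. A minimal sketch under that assumption; the moving-average blend, cosine matching, and all names are illustrative:

```python
import numpy as np

class AppearanceModel:
    """Hypothetical appearance model: a feature embedding plus a timestamp."""
    def __init__(self, embedding: np.ndarray, updated_at: float):
        self.embedding = embedding
        self.updated_at = updated_at

def refresh(model, new_observations, now, alpha=0.3):
    """Blend freshly collected appearance data into the obsolete model."""
    fresh = np.mean(new_observations, axis=0)
    model.embedding = (1 - alpha) * model.embedding + alpha * fresh
    model.updated_at = now

def matches(model, observed, threshold=0.8):
    """A second robot recognizes the target by cosine similarity."""
    cos = observed @ model.embedding / (
        np.linalg.norm(observed) * np.linalg.norm(model.embedding))
    return cos >= threshold

model = AppearanceModel(np.ones(4), updated_at=0.0)
refresh(model, [np.array([1.0, 0.0, 1.0, 0.0])], now=1.0)
print(matches(model, np.array([1.0, 0.5, 1.0, 0.5])))
```
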
  • Patent number: 10754351
    Abstract: The novel technology described in this disclosure includes an example method comprising initializing an observability grid with an observation likelihood distribution for an environment being navigated by a mobile detection system, such as but not limited to a robot; searching the environment using the observability grid for an observation point; navigating, using a propulsion system, the robot to the observation point; and observing a target object from the observation point. The observability grid may include two or more spatial dimensions and an angular dimension. In some cases, the method may include sampling the environment with the robot based on the observation likelihood distribution of the observability grid.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: August 25, 2020
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Eric Martinson, Jose Capriles Grane, Yusuke Nakano
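
The observability grid of 10754351 is a likelihood table over two spatial dimensions and one angular dimension that the robot samples to pick observation points. A minimal sketch; the uniform initialization and sampling policy are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observability grid: (x, y, theta) cells holding an unnormalized
# observation likelihood for each candidate pose.
NX, NY, NTHETA = 20, 20, 8
grid = np.ones((NX, NY, NTHETA))

def sample_observation_point(grid):
    """Draw a cell with probability proportional to its observation likelihood."""
    p = grid.ravel() / grid.sum()
    ix, iy, it = np.unravel_index(rng.choice(grid.size, p=p), grid.shape)
    return ix, iy, it * (2 * np.pi / NTHETA)  # angular index -> pose angle

x, y, theta = sample_observation_point(grid)
print(f"navigate to cell ({x}, {y}) facing {theta:.2f} rad and observe target")
```
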
  • Patent number: 10611379
    Abstract: By way of example, the technology disclosed by this document is capable of receiving signal data from one or more sensors; inputting the signal data into an input layer of a deep neural network (DNN), the DNN including one or more layers; generating, using the one or more layers of the DNN, one or more spatial representations of the signal data; generating, using one or more hierarchical temporal memories (HTMs) respectively associated with the one or more layers of the DNN, one or more temporal predictions based on the one or more spatial representations; and generating an anticipation of a future outcome by recognizing a temporal pattern based on the one or more temporal predictions.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: April 7, 2020
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Oluwatobi Olabiyi, Veeraganesh Yalla, Eric Martinson
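
10611379 pairs each DNN layer with an HTM that predicts how that layer's spatial representation evolves over time. A real HTM is far more involved; this sketch swaps in a toy first-order transition memory over binarized activations purely to show the per-layer spatial-then-temporal flow, and every name here is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

class SimpleTemporalMemory:
    """Toy stand-in for an HTM: remembers first-order transitions between
    quantized spatial representations and predicts the next one."""
    def __init__(self):
        self.transitions = {}   # previous state -> last observed next state
        self.prev = None

    def step(self, state):
        key = tuple(state)
        prediction = self.transitions.get(key)   # temporal prediction (may be None)
        if self.prev is not None:
            self.transitions[self.prev] = key
        self.prev = key
        return prediction

# One random-projection "layer" of a DNN plus its associated temporal memory.
W = rng.standard_normal((8, 4))
memory = SimpleTemporalMemory()

def process(signal):
    spatial = (signal @ W > 0).astype(int)   # binarized spatial representation
    return memory.step(spatial)              # this layer's temporal prediction

for t in range(5):
    print(f"t={t}: temporal prediction = {process(rng.standard_normal(8))}")
```
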
  • Patent number: 10383552
    Abstract: This disclosure describes, according to some implementations, a system and method for capturing sensor data for gait analysis of a subject using a robot. In an example method, a robot unit receives an instruction to monitor a gait of a subject; initializes a monitoring approach in response to receiving the instruction to begin monitoring the gait of the subject; collects sensor data capturing movement of the subject along a pathway portion; and generates gait data for gait analysis based on the sensor data. In various embodiments, the monitoring approaches may include an active approach, a passive approach, or a hybrid approach.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: August 20, 2019
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Eric Martinson, Peter Cottrell
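
The three monitoring approaches in 10383552 can be framed as a selection policy the robot runs before collecting gait data. A minimal sketch; the cues and thresholds used to choose between approaches are invented for illustration:

```python
from enum import Enum, auto

class Approach(Enum):
    ACTIVE = auto()    # robot leads or follows the subject along the pathway
    PASSIVE = auto()   # robot observes from a fixed vantage point
    HYBRID = auto()    # robot repositions only when its view degrades

def choose_approach(pathway_fully_visible: bool, battery_level: float) -> Approach:
    """Pick a monitoring approach from simple cues (this selection policy
    is hypothetical; the patent does not prescribe one)."""
    if pathway_fully_visible:
        return Approach.PASSIVE          # whole pathway in view: just watch
    return Approach.ACTIVE if battery_level > 0.5 else Approach.HYBRID

print(choose_approach(pathway_fully_visible=False, battery_level=0.8))
```
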
  • Publication number: 20180246520
    Abstract: The novel technology described in this disclosure includes an example method comprising initializing an observability grid with an observation likelihood distribution for an environment being navigated by a mobile detection system, such as but not limited to a robot; searching the environment using the observability grid for an observation point; navigating, using a propulsion system, the robot to the observation point; and observing a target object from the observation point. The observability grid may include two or more spatial dimensions and an angular dimension. In some cases, the method may include sampling the environment with the robot based on the observation likelihood distribution of the observability grid.
    Type: Application
    Filed: February 28, 2017
    Publication date: August 30, 2018
    Inventors: Eric Martinson, Jose Capriles Grane, Yusuke Nakano
  • Publication number: 20180246512
    Abstract: The novel technology described in this disclosure includes an example method comprising selecting a target of interest having an obsolete appearance model, the obsolete appearance model describing a prior appearance of the target of interest; navigating a first mobile robot to a location, the first mobile robot including a mechanical component providing motive force to the first mobile robot and an image sensor; and searching for the target of interest at the location. The method may include collecting, at the location by the image sensor of the first mobile robot, appearance data of the target of interest, and updating the obsolete appearance model using the appearance data of the target of interest. In some implementations, the method may include, in a subsequent meeting between the target of interest and a second mobile robot at a later point in time, recognizing the target of interest using the updated appearance model.
    Type: Application
    Filed: June 14, 2017
    Publication date: August 30, 2018
    Inventors: Eric Martinson, Rui Guo, Yusuke Nakano
  • Patent number: 10049267
    Abstract: The novel technology described in this disclosure includes an example method comprising capturing sensor data using one or more sensors describing a particular environment; processing the sensor data using one or more computing devices coupled to the one or more sensors to detect a participant within the environment; determining a location of the participant within the environment; querying a feature database, populated with a multiplicity of features extracted from the environment, using the location of the participant for one or more features located proximate to the location of the participant; and selecting, using the one or more computing devices, a scene type from among a plurality of predetermined scene types based on association likelihood values describing probabilities of each feature of the one or more features being located within the scene types.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: August 14, 2018
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Eric Martinson, David Kim, Yusuke Nakano
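
The selection step in 10049267 reads like a naive-Bayes vote: features found near the participant score each predetermined scene type through association likelihoods. A minimal sketch with invented features, scene types, and probabilities:

```python
import math

# Hypothetical association likelihoods: P(feature observed | scene type).
ASSOCIATION = {
    "kitchen":     {"stove": 0.9, "sofa": 0.05, "sink": 0.8},
    "living_room": {"stove": 0.05, "sofa": 0.9, "sink": 0.1},
}

def select_scene(nearby_features):
    """Score each predetermined scene type by the log-likelihood of the
    features found near the participant's location; return the best one."""
    def score(scene):
        probs = ASSOCIATION[scene]
        return sum(math.log(probs.get(f, 1e-3)) for f in nearby_features)
    return max(ASSOCIATION, key=score)

# Features a (hypothetical) feature database returned near the participant.
print(select_scene(["stove", "sink"]))   # -> kitchen
```
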
  • Publication number: 20180053102
    Abstract: By way of example, the technology disclosed by this document may be implemented in a method that includes aggregating local sensor data from vehicle system sensors, detecting a driver action using the local sensor data, and extracting features related to predicting driver action from the local sensor data during the operation of the vehicle. The method may include adapting a stock machine learning-based driver action prediction model to a customized machine learning-based driver action prediction model using one or more of the extracted features and the detected driver action, the stock machine learning-based driver action prediction model initially generated using a generic model configured to be applicable to a generalized driving populace. In some instances, the method may also include predicting a driver action using the customized machine learning-based driver action prediction model and the extracted features.
    Type: Application
    Filed: November 28, 2016
    Publication date: February 22, 2018
    Inventors: Eric Martinson, Oluwatobi Olabiyi, Kentaro Oguchi
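
Publication 20180053102's adaptation step is, in effect, per-driver fine-tuning of a generic model. A minimal PyTorch sketch; the architecture, data, and hyperparameters are stand-ins, not the patented implementation:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FEATS, N_ACTIONS = 24, 6

# Stock driver-action prediction model, notionally trained on a
# generalized driving populace (weights here are random stand-ins).
stock_model = nn.Sequential(nn.Linear(N_FEATS, 32), nn.ReLU(),
                            nn.Linear(32, N_ACTIONS))

def customize(stock, features, detected_actions, lr=1e-4, steps=50):
    """Fine-tune a copy of the stock model on one driver's extracted
    features and detected actions, yielding the customized model."""
    custom = copy.deepcopy(stock)
    opt = torch.optim.Adam(custom.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(custom(features), detected_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return custom

feats = torch.randn(32, N_FEATS)               # extracted local sensor features
actions = torch.randint(0, N_ACTIONS, (32,))   # detected driver actions
custom_model = customize(stock_model, feats, actions)
predicted = custom_model(feats).argmax(dim=-1) # predict with the customized model
```
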
  • Publication number: 20180053108
    Abstract: By way of example, the technology disclosed by this document may be implemented in a method that includes receiving stored sensor data describing characteristics of a vehicle in motion at a past time and extracting features for prediction and features for recognition from the stored sensor data. The features for prediction may be input into a prediction network, which may generate a predicted label for a past driver action based on the features for prediction. The features for recognition may be input into a recognition network, which may generate a recognized label for the past driver action based on the features for recognition. In some instances, the method may include training prediction network weights of the prediction network using the recognized label and the predicted label.
    Type: Application
    Filed: November 28, 2016
    Publication date: February 22, 2018
    Inventors: Oluwatobi Olabiyi, Eric Martinson
  • Publication number: 20180053093
    Abstract: By way of example, the technology disclosed by this document is capable of receiving signal data from one or more sensors; inputting the signal data into an input layer of a deep neural network (DNN), the DNN including one or more layers; generating, using the one or more layers of the DNN, one or more spatial representations of the signal data; generating, using one or more hierarchical temporal memories (HTMs) respectively associated with the one or more layers of the DNN, one or more temporal predictions based on the one or more spatial representations; and generating an anticipation of a future outcome by recognizing a temporal pattern based on the one or more temporal predictions.
    Type: Application
    Filed: August 16, 2016
    Publication date: February 22, 2018
    Inventors: Oluwatobi Olabiyi, Veeraganesh Yalla, Eric Martinson
  • Publication number: 20170303825
    Abstract: This disclosure describes, according to some implementations, a system and method for capturing sensor data for gait analysis of a subject using a robot. In an example method, a robot unit receives an instruction to monitor a gait of a subject; initializes a monitoring approach in response to receiving the instruction to begin monitoring the gait of the subject; collects sensor data capturing movement of the subject along a pathway portion; and generates gait data for gait analysis based on the sensor data. In various embodiments, the monitoring approaches may include an active approach, a passive approach, or a hybrid approach.
    Type: Application
    Filed: April 26, 2016
    Publication date: October 26, 2017
    Inventors: Eric Martinson, Peter Cottrell
  • Patent number: 9751212
    Abstract: This disclosure describes, according to some implementations, a system and method for adapting object handover from robot to human using perceptual affordances. In an example method, upon receiving sensor data describing surroundings and/or an operational state of a robot unit from a sensor, the method calculates a probability of a perceptual classification based on the sensor data. The perceptual classification may be one or more of an environment classification, an object classification, a human action classification, or an electro-mechanical state classification. The method further calculates an affordance from the probability of the perceptual classification using a preference model, determines a handover action based on the affordance, executes the handover action, and updates the preference model based on feedback.
    Type: Grant
    Filed: May 5, 2016
    Date of Patent: September 5, 2017
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Eric Martinson, David Kim, Emrah Akin Sisbot, Yusuke Nakano
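
In 9751212, the preference model maps classification probabilities to per-action affordances, and feedback updates that mapping. A minimal sketch with invented classes, actions, and weights:

```python
import numpy as np

ACTIONS = ["extend_and_release", "extend_and_wait", "place_on_surface"]
CLASSES = ["human_reaching", "human_distracted", "object_fragile"]

# Preference model: per-action affordance weight for each perceptual
# classification (values invented for illustration).
preferences = np.array([[0.9, 0.1, 0.2],
                        [0.6, 0.4, 0.7],
                        [0.2, 0.8, 0.9]])

def choose_handover(class_probs: np.ndarray) -> str:
    """Weight each action's preferences by the classification probabilities
    and pick the action with the highest affordance."""
    affordances = preferences @ class_probs
    return ACTIONS[int(np.argmax(affordances))]

def update_preferences(action: str, class_probs: np.ndarray,
                       feedback: float, lr: float = 0.05) -> None:
    """Nudge the executed action's weights in the direction of the feedback."""
    preferences[ACTIONS.index(action)] += lr * feedback * class_probs

probs = np.array([0.7, 0.2, 0.1])   # e.g. sensors suggest the human is reaching
act = choose_handover(probs)
update_preferences(act, probs, feedback=+1.0)
print(act)
```
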
  • Publication number: 20170249504
    Abstract: The novel technology described in this disclosure includes an example method comprising capturing sensor data using one or more sensors describing a particular environment; processing the sensor data using one or more computing devices coupled to the one or more sensors to detect a participant within the environment; determining a location of the participant within the environment; querying a feature database, populated with a multiplicity of features extracted from the environment, using the location of the participant for one or more features located proximate to the location of the participant; and selecting, using the one or more computing devices, a scene type from among a plurality of predetermined scene types based on association likelihood values describing probabilities of each feature of the one or more features being located within the scene types.
    Type: Application
    Filed: February 29, 2016
    Publication date: August 31, 2017
    Inventors: Eric Martinson, David Kim, Yusuke Nakano
  • Patent number: 9733097
    Abstract: The disclosure includes a method that includes assigning a classification to a travel route followed by a first client device based on data associated with when the first client device followed the travel route. The method may further include recommending the travel route to a second client device based on a request from the second client device for a desired travel route with the classification.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: August 15, 2017
    Inventors: Emrah Akin Sisbot, Veera Ganesh Yalla, Eric Martinson, Hirokazu Nomoto, Takuya Hasegawa
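
9733097's flow is classify-then-recommend. A minimal sketch; the "scenic"/"fast" labels and the thresholds that assign them are invented for illustration:

```python
from collections import defaultdict

# Route -> classification, assigned from data recorded while a client
# device followed the route (simplified here to two drive statistics).
route_classes: dict[str, str] = {}
routes_by_class: defaultdict[str, list[str]] = defaultdict(list)

def classify_route(route_id: str, avg_speed_kmh: float, stops: int) -> None:
    """Assign a classification from the drive data (thresholds hypothetical)."""
    label = "scenic" if avg_speed_kmh < 50 and stops > 3 else "fast"
    route_classes[route_id] = label
    routes_by_class[label].append(route_id)

def recommend(desired_class: str) -> list[str]:
    """Recommend routes matching the classification a second device requests."""
    return routes_by_class.get(desired_class, [])

classify_route("riverside_loop", avg_speed_kmh=35, stops=5)
print(recommend("scenic"))   # -> ['riverside_loop']
```
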
  • Patent number: 9694496
    Abstract: The disclosure includes methods for determining a current location for a user in an environment; detecting obstacles within the environment; estimating one or more physical capabilities of the user based on an electronic health record (EHR) associated with the user; generating, with a processor-based device that is programmed to perform the generating, instructions for a robot to perform a task based on the obstacles within the environment and the one or more physical capabilities of the user; and instructing the robot to perform the task.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: July 4, 2017
    Inventors: Eric Martinson, Emrah Akin Sisbot, Veeraganesh Yalla, Kentaro Oguchi, Yusuke Nakano
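
9694496 conditions the robot's task on both the detected obstacles and the capabilities estimated from the EHR. A minimal sketch; the capability fields, geometry check, and instruction strings are all hypothetical:

```python
import math
from dataclasses import dataclass

@dataclass
class Capabilities:
    """Physical capabilities estimated from the user's EHR
    (fields invented for illustration)."""
    max_reach_m: float
    can_navigate_stairs: bool

def obstacle_between(a, b, obstacle_xy, radius=0.5):
    """Crude check: obstacle within `radius` of the segment midpoint."""
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return math.dist(mid, obstacle_xy) < radius

def plan_task(user_xy, object_xy, obstacles, caps: Capabilities):
    """Generate robot instructions: retrieve the object when the user
    cannot reach it or a detected obstacle blocks the way."""
    out_of_reach = math.dist(user_xy, object_xy) > caps.max_reach_m
    blocked = any(obstacle_between(user_xy, object_xy, o) for o in obstacles)
    if out_of_reach or blocked:
        return [f"navigate_to{object_xy}", "grasp_object", f"deliver_to{user_xy}"]
    return []   # the user can get it safely; no robot task needed

print(plan_task((0, 0), (3, 4), obstacles=[(1.5, 2.0)],
                caps=Capabilities(max_reach_m=1.0, can_navigate_stairs=False)))
```
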
  • Patent number: 9613505
    Abstract: Technology for localized guidance of a body part of a user to specific objects within a physical environment using a vibration interface is described. An example system may include a vibration interface wearable on an extremity by a user. The vibration interface includes a plurality of motors. The system includes sensor(s) coupled to the vibration interface and a sensing system coupled to the sensor(s) and the vibration interface. The sensing system is configured to analyze a physical environment in which the user is located for a tangible object using the sensor(s), to generate a trajectory for navigating the extremity of the user to the tangible object based on a relative position of the extremity of the user bearing the vibration interface to a position of the tangible object within the physical environment, and to guide the extremity of the user along the trajectory by vibrating the vibration interface.
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: April 4, 2017
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Eric Martinson, Emrah Akin Sisbot, Joseph Djugash, Kentaro Oguchi, Yutaka Takaoka, Yusuke Nakano
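
The guidance loop in 9613505 reduces each step to picking which motor to vibrate given the bearing from the extremity to the next trajectory point. A minimal sketch with a hypothetical four-motor layout on the wearer's wrist:

```python
import math

# Hypothetical wrist-worn interface: four motors at fixed bearings
# relative to the hand's forward direction.
MOTORS = {"forward": 0.0, "left": math.pi / 2,
          "back": math.pi, "right": -math.pi / 2}

def motor_for_step(hand_xy, target_xy, hand_heading):
    """Pick the motor whose bearing best matches the direction from the
    user's hand to the next point on the trajectory."""
    desired = math.atan2(target_xy[1] - hand_xy[1], target_xy[0] - hand_xy[0])
    relative = (desired - hand_heading + math.pi) % (2 * math.pi) - math.pi
    return min(MOTORS, key=lambda m: abs(
        (MOTORS[m] - relative + math.pi) % (2 * math.pi) - math.pi))

# Guide the hand toward an object slightly ahead and to its left.
print(motor_for_step(hand_xy=(0, 0), target_xy=(0.1, 0.4), hand_heading=0.0))
```
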
  • Patent number: 9542626
    Abstract: By way of example, the technology disclosed by this document receives image data; extracts a depth image and a color image from the image data; creates a mask image by segmenting the depth image; determines a first likelihood score from the depth image and the mask image using a layered classifier; determines a second likelihood score from the color image and the mask image using a deep convolutional neural network; and determines a class of at least a portion of the image data based on the first likelihood score and the second likelihood score. Further, the technology can pre-filter the mask image using the layered classifier and then use the pre-filtered mask image and the color image to calculate a second likelihood score using the deep convolutional neural network to speed up processing.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: January 10, 2017
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Eric Martinson, Veeraganesh Yalla
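
9542626 fuses the layered classifier's depth-based score with the deep CNN's color-based score, and can use the cheap depth score as a pre-filter before running the CNN. A minimal sketch; the weighted-sum fusion, weights, and thresholds are illustrative assumptions (the abstract only says the class is based on both scores):

```python
def classify(depth_score: float, color_score: float,
             w_depth: float = 0.4, w_color: float = 0.6,
             threshold: float = 0.5) -> str:
    """Fuse the layered classifier's likelihood score (from the depth image)
    with the deep CNN's score (from the color image) into one class decision."""
    combined = w_depth * depth_score + w_color * color_score
    return "person" if combined >= threshold else "background"

def classify_fast(depth_score: float, run_cnn) -> str:
    """Pre-filter: skip the expensive CNN when the cheap layered classifier
    is already confident the region is background."""
    if depth_score < 0.1:
        return "background"
    return classify(depth_score, run_cnn())

print(classify(0.7, 0.8))                       # fused decision -> person
print(classify_fast(0.05, run_cnn=lambda: 0.9)) # CNN never runs -> background
```
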
  • Publication number: 20160267755
    Abstract: Technology for localized guidance of a body part of a user to specific objects within a physical environment using a vibration interface is described. An example system may include a vibration interface wearable on an extremity by a user. The vibration interface includes a plurality of motors. The system includes sensor(s) coupled to the vibration interface and a sensing system coupled to the sensor(s) and the vibration interface. The sensing system is configured to analyze a physical environment in which the user is located for a tangible object using the sensor(s), to generate a trajectory for navigating the extremity of the user to the tangible object based on a relative position of the extremity of the user bearing the vibration interface to a position of the tangible object within the physical environment, and to guide the extremity of the user along the trajectory by vibrating the vibration interface.
    Type: Application
    Filed: March 13, 2015
    Publication date: September 15, 2016
    Inventors: Eric Martinson, Emrah Akin Sisbot, Joseph Djugash, Kentaro Oguchi, Yutaka Takaoka, Yusuke Nakano