Patents by Inventor Oleg Sinyavskiy

Oleg Sinyavskiy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210354302
    Abstract: Systems and methods for laser and imaging odometry for autonomous robots are disclosed herein. According to at least one non-limiting exemplary embodiment, a robot may utilize images captured by a sensor and encoded with a depth parameter to determine its motion and localize itself. The determined motion and localization may then be utilized to verify calibration of the sensor based on a comparison between motion and localization data based on the images and motion and localization data based on data from other sensors and odometry units of the robot.
    Type: Application
    Filed: July 27, 2021
    Publication date: November 18, 2021
    Inventors: Girish Bathala, Sahil Dhayalkar, Kirill Pirozhenko, Oleg Sinyavskiy
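
A minimal sketch of the calibration check described in the entry above: motion estimated from the depth-encoded images is compared against motion reported by the robot's other odometry units, and a large discrepancy suggests the sensor is out of calibration. The function name, the 2-D pose format, and the tolerance value are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch only; the function name, pose format, and tolerance
# are assumptions for illustration, not details from the patent.
import numpy as np

def check_sensor_calibration(camera_poses, odometry_poses, tolerance_m=0.05):
    """Compare per-step displacements estimated from depth-encoded images
    against displacements reported by the robot's other odometry units; close
    agreement is taken as evidence that the imaging sensor remains calibrated."""
    camera_poses = np.asarray(camera_poses, dtype=float)     # shape (N, 2): x, y
    odometry_poses = np.asarray(odometry_poses, dtype=float)
    cam_steps = np.linalg.norm(np.diff(camera_poses, axis=0), axis=1)
    odo_steps = np.linalg.norm(np.diff(odometry_poses, axis=0), axis=1)
    mean_discrepancy = float(np.mean(np.abs(cam_steps - odo_steps)))
    return mean_discrepancy <= tolerance_m

# The two motion estimates differ only slightly, so calibration is judged OK.
cam = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.1)]
odo = [(0.0, 0.0), (0.52, 0.0), (1.03, 0.08)]
print(check_sensor_calibration(cam, odo))   # True
```
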
  • Patent number: 11161241
    Abstract: Robotic devices may be trained by a user guiding the robot along a target trajectory using a correction signal. A robotic device may comprise an adaptive controller configured to generate control commands based on one or more of the trainer input, sensory input, and/or performance measure. Training may comprise a plurality of trials. During an initial portion of a trial, the trainer may observe the robot's operation and refrain from providing the training input to the robot. Upon observing a discrepancy between the target behavior and the actual behavior during the initial trial portion, the trainer may provide a teaching input (e.g., a correction signal) configured to affect the robot's trajectory during subsequent trials. Upon completing a sufficient number of trials, the robot may be capable of navigating the trajectory in the absence of the training input.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: November 2, 2021
    Assignee: Brain Corporation
    Inventors: Oleg Sinyavskiy, Jean-Baptiste Passot, Eugene Izhikevich
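
As a rough illustration of the training loop in the entry above (trainer corrections supplied over a series of trials, with autonomous operation once corrections are no longer needed), the sketch below uses a single learnable gain. The class name, learning rule, and learning rate are assumptions made for illustration, not the adaptive controller described in the patent.

```python
# Illustrative sketch only; the class, learning rule, and rate are assumptions.
class CorrectionTrainedController:
    """Toy adaptive controller that learns a mapping from a scalar sensory
    state to a command from sparse trainer corrections supplied over trials."""

    def __init__(self, learning_rate=0.3):
        self.weight = 0.0              # learnable gain: command = weight * state
        self.learning_rate = learning_rate

    def run_trial(self, state, correction=None):
        command = self.weight * state  # the robot's own (learned) contribution
        if correction is not None:
            # The trainer observed a discrepancy and supplied a correction;
            # apply it now and fold it into the learned mapping for later trials.
            command += correction
            self.weight += self.learning_rate * correction * state
        return command


controller = CorrectionTrainedController()
# Early trials carry corrections; later trials run without training input.
for correction in (0.4, 0.2, 0.1, None, None):
    controller.run_trial(state=1.0, correction=correction)
print(controller.weight)   # the learned gain has absorbed the corrections
```
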
  • Patent number: 11099575
    Abstract: The safe operation and navigation of robots is an active research topic for many real-world applications, such as the automation of large industrial equipment. This technological field often requires heavy machines with arbitrary shapes to navigate very close to obstacles, a challenging and largely unsolved problem. To address this issue, a new planning architecture is developed that allows wheeled vehicles to navigate safely and without human supervision in cluttered environments. The inventive methods and systems disclosed herein belong to the Model Predictive Control (MPC) family of local planning algorithms. The technological features disclosed herein work in the space of two-dimensional (2D) occupancy grids and plan in motor command space using a black-box forward model for state inference. Compared to conventional methods and systems, the inventive methods and systems disclosed herein include several properties that make them scalable and applicable to a production environment.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: August 24, 2021
    Assignee: Brain Corporation
    Inventors: Oleg Sinyavskiy, Borja Ibarz Gabardos, Jean-Baptiste Passot
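
A minimal sketch of the kind of sampling-based MPC loop the entry above describes: candidate motor commands are rolled out through a black-box forward model and scored against a 2-D occupancy grid plus the distance to a goal. All names, the scoring terms, and the toy forward model are illustrative assumptions rather than the patented planner.

```python
# Illustrative sketch only; names, scoring terms, and the toy forward model
# are assumptions, not the patented planner.
import numpy as np

def mpc_select_command(pose, occupancy_grid, forward_model, candidate_commands,
                       goal, horizon=10, collision_cost=1e6):
    """Roll each candidate motor command out through a black-box forward model
    and score the resulting trajectory against a 2-D occupancy grid plus the
    terminal distance to the goal; return the lowest-cost command."""
    best_cmd, best_cost = None, float("inf")
    for cmd in candidate_commands:
        p, cost = np.array(pose, dtype=float), 0.0
        for _ in range(horizon):
            p = forward_model(p, cmd)                       # black-box state inference
            cell = np.clip(np.round(p).astype(int), 0,
                           np.array(occupancy_grid.shape) - 1)
            if occupancy_grid[tuple(cell)]:                 # occupied cell -> penalty
                cost += collision_cost
        cost += np.linalg.norm(p - goal)                    # terminal distance to goal
        if cost < best_cost:
            best_cmd, best_cost = cmd, cost
    return best_cmd

# Toy usage: a wall blocks the diagonal trajectory, so the planner picks (1, 0).
grid = np.zeros((20, 20), dtype=bool)
grid[5:15, 10] = True
print(mpc_select_command(pose=(2.0, 2.0), occupancy_grid=grid,
                         forward_model=lambda p, c: p + np.array(c),
                         candidate_commands=[(1, 0), (0, 1), (1, 1)],
                         goal=np.array([18.0, 2.0])))       # (1, 0)
```
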
  • Publication number: 20210232149
    Abstract: Systems and methods for persistent mapping of environmental parameters using a centralized cloud server and a robotic network are disclosed herein. According to at least one non-limiting exemplary embodiment, a cloud server may utilize a robotic network, comprising a plurality of robots, communicatively coupled to the cloud server to collect data and generate or update a persistent map of a parameter of an environment based on the collected data from the plurality of robots on the robotic network.
    Type: Application
    Filed: April 15, 2021
    Publication date: July 29, 2021
    Inventors: Cody Griffin, Oleg Sinyavskiy
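
A minimal sketch of the cloud-side fusion idea in the entry above, assuming the environmental parameter can be represented as a per-cell scalar and fused with a running mean as reports arrive from robots on the network. The class and method names are illustrative, not from the patent.

```python
# Illustrative sketch only; the per-cell running-mean fusion is an assumption.
import numpy as np

class PersistentParameterMap:
    """Cloud-side grid that fuses a scalar environmental parameter reported by
    many robots, keeping a running mean per cell so the map persists across
    robot sessions and is continually updated by the robotic network."""

    def __init__(self, shape):
        self.values = np.zeros(shape)
        self.counts = np.zeros(shape, dtype=int)

    def update(self, cell, measurement):
        self.counts[cell] += 1
        # Incremental running mean keeps the map persistent and bounded in memory.
        self.values[cell] += (measurement - self.values[cell]) / self.counts[cell]

# Two robots on the network report measurements for the same cell.
cloud_map = PersistentParameterMap((100, 100))
cloud_map.update((10, 20), measurement=0.8)   # robot A
cloud_map.update((10, 20), measurement=0.6)   # robot B
print(round(float(cloud_map.values[10, 20]), 3))   # 0.7
```
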
  • Publication number: 20210220995
    Abstract: Systems and methods for robotic path planning are disclosed. In some implementations of the present disclosure, a robot can generate a cost map associated with an environment of the robot. The cost map can comprise a plurality of pixels each corresponding to a location in the environment, where each pixel can have an associated cost. The robot can further generate a plurality of masks having projected path portions for the travel of the robot within the environment, where each mask comprises a plurality of mask pixels that correspond to locations in the environment. The robot can then determine a mask cost associated with each mask based at least in part on the cost map and select a mask based at least in part on the mask cost. Based on the projected path portions within the selected mask, the robot can navigate a space.
    Type: Application
    Filed: January 25, 2021
    Publication date: July 22, 2021
    Inventors: Oleg Sinyavskiy, Jean-Baptiste Passot, Borja Ibarz Gabardos, Diana Vu Le
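
The mask-cost computation described in the entry above can be sketched directly: each mask's cost is the sum of the cost-map pixels it covers, and the lowest-cost mask is selected for navigation. The function name and the tiny 3x3 example are illustrative assumptions.

```python
# Illustrative sketch only; function name and example values are assumptions.
import numpy as np

def select_path_mask(cost_map, masks):
    """Score each candidate mask (a binary image whose set pixels trace a
    projected path portion) by summing the underlying cost-map pixels,
    then return the index of the lowest-cost mask and all mask costs."""
    mask_costs = [float(np.sum(cost_map[mask.astype(bool)])) for mask in masks]
    return int(np.argmin(mask_costs)), mask_costs

cost_map = np.array([[1, 1, 9],
                     [1, 5, 9],
                     [1, 1, 1]], dtype=float)
straight = np.array([[1, 0, 0], [1, 0, 0], [1, 0, 0]])   # hug the left wall
diagonal = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # cut across the middle
best, costs = select_path_mask(cost_map, [straight, diagonal])
print(best, costs)   # 0 [3.0, 7.0] -> the left-wall path is cheaper
```
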
  • Publication number: 20210191401
    Abstract: Systems, apparatuses, and methods for bias determination and value calculation of parameters of a robot are disclosed herein. According to at least one exemplary embodiment, a bias in a navigation parameter may be determined based on a bias in one or more measurement units, wherein a navigation parameter may be a parameter useful to a robot in recreating a route (for example, velocity), and the bias may be accounted for to more accurately recreate the route and generate accurate maps of an environment.
    Type: Application
    Filed: February 16, 2021
    Publication date: June 24, 2021
    Inventors: Oleg Sinyavskiy, Girish Bathala
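
As a simple illustration of the bias-accounting idea in the entry above, the sketch below estimates a constant velocity bias as the mean difference between a measurement unit's readings and an independent reference, then removes it from new readings. The function name, the constant-bias assumption, and the sample numbers are illustrative only.

```python
# Illustrative sketch only; the constant-bias model and numbers are assumptions.
import numpy as np

def estimate_velocity_bias(measured_velocities, reference_velocities):
    """Estimate a constant bias in a velocity measurement unit as the mean
    difference between its readings and an independent reference estimate,
    and return both the bias and a correction function."""
    bias = float(np.mean(np.asarray(measured_velocities, dtype=float) -
                         np.asarray(reference_velocities, dtype=float)))
    return bias, lambda v: v - bias

measured = [1.02, 1.03, 1.02, 1.02]    # biased wheel-encoder velocities (m/s)
reference = [1.00, 1.00, 0.99, 1.00]   # velocities from a trusted second source
bias, correct = estimate_velocity_bias(measured, reference)
print(round(bias, 3))           # the estimated bias, roughly 0.025 m/s here
print(round(correct(1.02), 3))  # a corrected reading with the bias removed
```
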
  • Publication number: 20210031367
    Abstract: Systems, apparatuses, and methods for rapid machine learning for floor segmentation for robotic devices are disclosed herein. According to at least one non-limiting exemplary embodiment, a robotic system is disclosed. The robotic system may comprise a neural network embodied therein capable of learning associations between color values of pixels and corresponding classifications of those pixels, wherein the neural network is initially trained to identify floor and non-floor pixels within images. A user input may be provided to the neural network to further configure it to identify navigable and unnavigable floors unique to an environment without a need for additional annotated training images specific to the environment.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 4, 2021
    Inventors: Ali Mirzaei, Oleg Sinyavskiy
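
A loose illustration of the workflow in the entry above, with a nearest-centroid pixel-color classifier standing in for the neural network and a re-fit standing in for the user-driven refinement. The class name, colors, and labels are assumptions made for the sketch, not the patented system.

```python
# Illustrative sketch only; a nearest-centroid classifier stands in for the
# neural network, and a re-fit stands in for the user-driven refinement.
import numpy as np

class FloorPixelClassifier:
    """Tiny stand-in for the floor-segmentation network: first fit on generic
    floor / non-floor pixel colors, then re-fit with a handful of user-labeled
    pixels that mark certain floor colors as unnavigable for one environment."""

    def fit(self, pixels, labels):
        pixels, labels = np.asarray(pixels, float), np.asarray(labels)
        self.classes_ = np.unique(labels)
        self.centroids_ = np.array([pixels[labels == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, pixels):
        d = np.linalg.norm(np.asarray(pixels, float)[:, None] - self.centroids_,
                           axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Initial training: generic floor vs. non-floor RGB samples.
clf = FloorPixelClassifier().fit(
    [[200, 200, 200], [190, 195, 200], [30, 30, 30], [40, 20, 10]],
    ["floor", "floor", "non-floor", "non-floor"])
# User refinement: the dark red carpet in this building is off-limits.
clf.fit([[200, 200, 200], [120, 30, 30], [30, 30, 30]],
        ["navigable", "unnavigable", "non-floor"])
print(clf.predict([[195, 198, 201], [125, 35, 28]]))  # ['navigable' 'unnavigable']
```
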
  • Patent number: 10899008
    Abstract: Systems and methods for robotic path planning are disclosed. In some implementations of the present disclosure, a robot can generate a cost map associated with an environment of the robot. The cost map can comprise a plurality of pixels each corresponding to a location in the environment, where each pixel can have an associated cost. The robot can further generate a plurality of masks having projected path portions for the travel of the robot within the environment, where each mask comprises a plurality of mask pixels that correspond to locations in the environment. The robot can then determine a mask cost associated with each mask based at least in part on the cost map and select a mask based at least in part on the mask cost. Based on the projected path portions within the selected mask, the robot can navigate a space.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: January 26, 2021
    Assignee: Brain Corporation
    Inventors: Oleg Sinyavskiy, Jean-Baptiste Passot, Borja Ibarz Gabardos, Diana Vu Le
  • Patent number: 10843338
    Abstract: Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: November 24, 2020
    Assignee: Brain Corporation
    Inventors: Philip Meier, Jean-Baptiste Passot, Borja Ibarz Gabardos, Patryk Laurent, Oleg Sinyavskiy, Peter O'Connor, Eugene Izhikevich
  • Publication number: 20200316773
    Abstract: Apparatus and methods for training and operating robotic devices. A robotic controller may comprise a predictor apparatus configured to generate motor control output. The predictor may be operable in accordance with a learning process based on a teaching signal comprising the control output. An adaptive controller block may provide control output that may be combined with the predicted control output. The predictor learning process may be configured to learn the combined control signal. Predictor training may comprise a plurality of trials. During an initial trial, the control output may be capable of causing a robot to perform a task. During intermediate trials, individual contributions from the controller block and the predictor may be inadequate for the task. Upon learning, the control knowledge may be transferred to the predictor so as to enable task execution in the absence of subsequent inputs from the controller. Control output and/or predictor output may comprise multi-channel signals.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Inventors: Eugene Izhikevich, Oleg Sinyavskiy, Jean-Baptiste Passot
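
The transfer of control described in the entry above (predictor trained on the combined control signal until the controller's contribution is no longer needed) can be illustrated with a toy scalar example. The class name, learning rate, and update rule are assumptions for illustration only.

```python
# Illustrative sketch only; names, learning rate, and update rule are assumptions.
class PredictorWithCombiner:
    """Toy combiner: the controller's output is added to the predictor's output,
    and the predictor is trained toward the combined signal so that control
    gradually transfers from the controller to the predictor."""

    def __init__(self, learning_rate=0.5):
        self.predicted = 0.0
        self.learning_rate = learning_rate

    def trial(self, controller_output):
        combined = controller_output + self.predicted   # signal sent to the motors
        # The teaching signal for the predictor is the combined control output.
        self.predicted += self.learning_rate * (combined - self.predicted)
        return combined


agent = PredictorWithCombiner()
target = 1.0   # motor command needed to perform the task
for _ in range(6):
    # The controller supplies only what the predictor still misses.
    agent.trial(controller_output=target - agent.predicted)
print(round(agent.predicted, 2), round(target - agent.predicted, 2))
# predictor output approaches the target while the controller's share fades toward zero
```
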
  • Patent number: 10717191
    Abstract: Robotic devices may be trained by a trainer guiding the robot along a target trajectory using physical contact with the robot. The robot may comprise an adaptive controller configured to generate control commands based on one or more of the trainer input, sensory input, and/or performance measure. The trainer may observe task execution by the robot. Responsive to observing a discrepancy between the target behavior and the actual behavior, the trainer may provide a teaching input via a haptic action. The robot may execute the action based on a combination of the internal control signal produced by a learning process of the robot and the training input. The robot may infer the teaching input based on a comparison of a predicted state and the actual state of the robot. The robot's learning process may be adjusted in accordance with the teaching input so as to reduce the discrepancy during a subsequent trial.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: July 21, 2020
    Assignee: Brain Corporation
    Inventors: Filip Ponulak, Moslem Kazemi, Patryk Laurent, Oleg Sinyavskiy, Eugene Izhikevich
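
A minimal sketch of the inference step in the entry above: the teaching input is inferred as the discrepancy between the predicted and actual state after the trainer's haptic action, and the learned command is adjusted to reduce that discrepancy on later trials. The trivial internal model and all names are illustrative assumptions.

```python
# Illustrative sketch only; the internal model and names are assumptions.
class HapticTrainedController:
    """Minimal sketch of learning from physical guidance: the teaching input is
    inferred as the discrepancy between the robot's predicted state and the
    state actually measured after the trainer's haptic action, and the learned
    command is adjusted to reduce that discrepancy on subsequent trials."""

    def __init__(self, learning_rate=0.5):
        self.command = 0.0
        self.learning_rate = learning_rate

    def trial(self, execute):
        predicted_state = self.command         # trivial internal model: state == command
        actual_state = execute(self.command)   # trainer may push the robot during execution
        inferred_teaching_input = actual_state - predicted_state
        self.command += self.learning_rate * inferred_teaching_input
        return inferred_teaching_input


robot = HapticTrainedController()
# The trainer physically guides the robot toward x = 1.0 during early trials.
for _ in range(5):
    robot.trial(execute=lambda cmd: cmd + 0.5 * (1.0 - cmd))
print(round(robot.command, 2))   # the learned command has moved toward 1.0
```
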
  • Patent number: 10688657
    Abstract: Apparatus and methods for training and operating robotic devices. A robotic controller may comprise a predictor apparatus configured to generate motor control output. The predictor may be operable in accordance with a learning process based on a teaching signal comprising the control output. An adaptive controller block may provide control output that may be combined with the predicted control output. The predictor learning process may be configured to learn the combined control signal. Predictor training may comprise a plurality of trials. During an initial trial, the control output may be capable of causing a robot to perform a task. During intermediate trials, individual contributions from the controller block and the predictor may be inadequate for the task. Upon learning, the control knowledge may be transferred to the predictor so as to enable task execution in the absence of subsequent inputs from the controller. Control output and/or predictor output may comprise multi-channel signals.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: June 23, 2020
    Assignee: Brain Corporation
    Inventors: Eugene Izhikevich, Oleg Sinyavskiy, Jean-Baptiste Passot
  • Publication number: 20200139540
    Abstract: Apparatus and methods for training and controlling, for instance, robotic devices. In one implementation, a robot may be trained by a user using supervised learning. The user may be unable to control all degrees of freedom of the robot simultaneously. The user may interface to the robot via a control apparatus configured to select and operate a subset of the robot's complement of actuators. The robot may comprise an adaptive controller comprising a neuron network. The adaptive controller may be configured to generate actuator control commands based on the user input and the output of the learning process. Training of the adaptive controller may comprise partial set training. The user may train the adaptive controller to operate a first actuator subset. Subsequent to learning to operate the first subset, the adaptive controller may be trained to operate another subset of degrees of freedom based on user input via the control apparatus.
    Type: Application
    Filed: November 13, 2019
    Publication date: May 7, 2020
    Inventors: Jean-Baptiste Passot, Oleg Sinyavskiy, Eugene Izhikevich
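
A toy sketch of the partial-set training described in the entry above: one subset of the robot's actuators is trained from user input while the others are held fixed, then another subset is trained afterward. The class, the simple error-driven update, and the numbers are assumptions made for illustration.

```python
# Illustrative sketch only; the error-driven update and numbers are assumptions.
import numpy as np

class PartialSetTrainer:
    """Sketch of partial-set training: the user first trains commands for one
    subset of the robot's actuators (degrees of freedom) while the rest are
    held fixed, then trains another subset afterward."""

    def __init__(self, num_actuators, learning_rate=0.5):
        self.commands = np.zeros(num_actuators)
        self.learning_rate = learning_rate

    def train_subset(self, subset, user_targets, trials=10):
        for _ in range(trials):
            # Only the selected actuators are adjusted from the user's input;
            # the remaining degrees of freedom keep their current commands.
            errors = np.asarray(user_targets) - self.commands[subset]
            self.commands[subset] += self.learning_rate * errors


arm = PartialSetTrainer(num_actuators=4)
arm.train_subset(subset=[0, 1], user_targets=[0.5, -0.3])   # first actuator subset
arm.train_subset(subset=[2, 3], user_targets=[0.2, 0.8])    # then the remaining subset
print(np.round(arm.commands, 2))   # roughly [ 0.5 -0.3  0.2  0.8]
```
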
  • Publication number: 20190381663
    Abstract: Systems and methods for assisting a robotic apparatus are disclosed. In some exemplary implementations, a robot can encounter situations where the robot cannot proceed and/or does not know with a high degree of certainty whether it can proceed. Accordingly, the robot can determine that it has encountered an error and/or assist event. In some exemplary implementations, the robot can receive assistance from an operator and/or attempt to resolve the issue itself. In some cases, the robot can be configured to delay actions in order to allow resolution of the error and/or assist event.
    Type: Application
    Filed: June 27, 2019
    Publication date: December 19, 2019
    Inventors: Oleg Sinyavskiy, Jean-Baptiste Passot, Borja Ibarz Gabardos, Diana Vu Le
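
The assist-event behavior in the entry above can be sketched as a small decision step: if the robot cannot proceed, or is not sufficiently certain that it can, it raises an assist event, delays its next action, and requests help. The confidence threshold, delay, and function names are illustrative assumptions.

```python
# Illustrative sketch only; threshold, delay, and names are assumptions.
import time

def run_step(robot_can_proceed, confidence, request_assistance,
             confidence_threshold=0.8, delay_s=0.5):
    """If the robot cannot proceed, or is not sufficiently certain that it can,
    flag an assist event, delay the next action, and request help from an
    operator (or trigger the robot's own recovery logic)."""
    if robot_can_proceed and confidence >= confidence_threshold:
        return "proceed"
    request_assistance()          # notify an operator and/or attempt self-recovery
    time.sleep(delay_s)           # delay actions to allow the event to be resolved
    return "waiting_for_assist"

status = run_step(robot_can_proceed=True, confidence=0.55,
                  request_assistance=lambda: print("assist event raised"))
print(status)   # waiting_for_assist
```
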
  • Patent number: 10507580
    Abstract: Apparatus and methods for training and controlling, for instance, robotic devices. In one implementation, a robot may be trained by a user using supervised learning. The user may be unable to control all degrees of freedom of the robot simultaneously. The user may interface to the robot via a control apparatus configured to select and operate a subset of the robot's complement of actuators. The robot may comprise an adaptive controller comprising a neuron network. The adaptive controller may be configured to generate actuator control commands based on the user input and the output of the learning process. Training of the adaptive controller may comprise partial set training. The user may train the adaptive controller to operate a first actuator subset. Subsequent to learning to operate the first subset, the adaptive controller may be trained to operate another subset of degrees of freedom based on user input via the control apparatus.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: December 17, 2019
    Assignee: Brain Corporation
    Inventors: Jean-Baptiste Passot, Oleg Sinyavskiy, Eugene Izhikevich
  • Publication number: 20190366538
    Abstract: Robotic devices may be trained by a user guiding the robot along a target action trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate a control signal based on one or more of the user guidance, sensory input, performance measure, and/or other information. Training may comprise a plurality of trials, wherein for a given context the user and the robot's controller may collaborate to develop an association between the context and the target action. Upon developing the association, the adaptive controller may be capable of generating the control signal and/or an action indication prior to and/or in lieu of user input. The predictive control functionality attained by the controller may enable autonomous operation of robotic devices, obviating the need for continued user guidance.
    Type: Application
    Filed: June 20, 2019
    Publication date: December 5, 2019
    Inventors: Patryk Laurent, Jean-Baptiste Passot, Oleg Sinyavskiy, Filip Ponulak, Borja Ibarz Gabardos, Eugene Izhikevich
  • Publication number: 20190321973
    Abstract: Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
    Type: Application
    Filed: May 3, 2019
    Publication date: October 24, 2019
    Inventors: Philip Meier, Jean-Baptiste Passot, Borja Ibarz Gabardos, Patryk Laurent, Oleg Sinyavskiy, Peter O'Connor, Eugene Izhikevich
  • Publication number: 20190299410
    Abstract: Systems and methods for robotic path planning are disclosed. In some implementations of the present disclosure, a robot can generate a cost map associated with an environment of the robot. The cost map can comprise a plurality of pixels each corresponding to a location in the environment, where each pixel can have an associated cost. The robot can further generate a plurality of masks having projected path portions for the travel of the robot within the environment, where each mask comprises a plurality of mask pixels that correspond to locations in the environment. The robot can then determine a mask cost associated with each mask based at least in part on the cost map and select a mask based at least in part on the mask cost. Based on the projected path portions within the selected mask, the robot can navigate a space.
    Type: Application
    Filed: April 5, 2019
    Publication date: October 3, 2019
    Inventors: Oleg Sinyavskiy, Jean-Baptiste Passot, Borja Ibarz Gabardos, Diana Vu Le
  • Publication number: 20190302791
    Abstract: Systems and methods for robotic mapping are disclosed. In some example implementations, an automated device can travel in an environment. From travelling in the environment, the automated device can create a graph comprising a plurality of nodes, wherein each node corresponds to a scan taken by one or more sensors of the automated device at a location in the environment. In some example embodiments, the automated device can reevaluate its travel along a desired path if it encounters objects or obstructions along its path, whether those objects or obstructions are present at the front, rear, or side of the automated device. In some example embodiments, the automated device uses a timestamp methodology to maneuver around its environment, which provides faster processing and lower memory usage.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: Jayram Moorkanikara Nageswaran, Oleg Sinyavskiy, Borja Ibarz Gabardos
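
A minimal sketch of the graph construction described in the entry above: each node stores the pose, the sensor scan taken there, and a timestamp, with edges linking consecutive nodes. The class and field names are assumptions for illustration, not the patented data structure.

```python
# Illustrative sketch only; class and field names are assumptions.
import time

class ScanGraph:
    """Minimal graph of scan nodes built while a robot travels: each node
    records a pose, the raw sensor scan taken there, and a timestamp that a
    planner could later use to reason about recently seen obstructions."""

    def __init__(self):
        self.nodes = []   # list of node dicts, in the order they were created
        self.edges = []   # (index_a, index_b) pairs between consecutive nodes

    def add_scan(self, pose, scan):
        node = {"pose": pose, "scan": scan, "timestamp": time.time()}
        self.nodes.append(node)
        if len(self.nodes) > 1:
            self.edges.append((len(self.nodes) - 2, len(self.nodes) - 1))
        return node


graph = ScanGraph()
graph.add_scan(pose=(0.0, 0.0, 0.0), scan=[1.2, 1.1, 3.4])
graph.add_scan(pose=(0.5, 0.0, 0.0), scan=[1.0, 1.0, 3.0])
print(len(graph.nodes), graph.edges)   # 2 [(0, 1)]
```
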
  • Publication number: 20190299407
    Abstract: An apparatus and methods for training and/or operating a robotic device to follow a trajectory. A robotic vehicle may utilize a camera and store, in an ordered buffer, the sequence of images of the visual scene observed while following a trajectory during training. Motor commands associated with a given image may be stored. During autonomous operation, an acquired image may be compared with one or more images from the training buffer in order to determine the most likely match. An evaluation may be performed in order to determine whether the image corresponds to a shifted (e.g., left/right) version of a previously observed stored image. If the new image is shifted left, a right-turn command may be issued; if the new image is shifted right, a left-turn command may be issued.
    Type: Application
    Filed: April 5, 2019
    Publication date: October 3, 2019
    Inventors: Oyvind Grotmol, Oleg Sinyavskiy
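
The shift-based steering rule in the entry above can be illustrated with a brute-force matcher: the current image is compared against the stored training images at several horizontal shifts, and the sign of the best-matching shift decides the turn command. The matching metric, shift range, and names are illustrative assumptions, not the patented method.

```python
# Illustrative sketch only; the matching metric, shift range, and names are assumptions.
import numpy as np

def steer_from_buffer(current_image, training_buffer, max_shift=8):
    """Match the current image against each stored training image at several
    horizontal shifts; the sign of the best-matching shift decides the steering
    command (image shifted left -> turn right, image shifted right -> turn left)."""
    best = (float("inf"), 0)   # (sum of squared differences, shift)
    for stored in training_buffer:
        for shift in range(-max_shift, max_shift + 1):
            shifted = np.roll(current_image, shift, axis=1)
            error = float(np.sum((shifted - stored) ** 2))
            if error < best[0]:
                best = (error, shift)
    _, shift = best
    if shift > 0:      # current view is a left-shifted version of the stored view
        return "turn_right"
    if shift < 0:      # current view is a right-shifted version of the stored view
        return "turn_left"
    return "go_straight"

# Toy usage: the training image has a landmark at column 8; the current view
# shows the same scene shifted three columns to the left.
stored = np.zeros((4, 16))
stored[:, 8] = 1.0
current = np.roll(stored, -3, axis=1)
print(steer_from_buffer(current, [stored]))   # turn_right
```
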