Patents Assigned to Brain Corporation
  • Patent number: 11279026
    Abstract: Apparatus and methods for training and controlling of, for instance, robotic devices. In one implementation, a robot may be trained by a user using supervised learning. The user may be unable to control all degrees of freedom of the robot simultaneously. The user may interface with the robot via a control apparatus configured to select and operate a subset of the robot's complement of actuators. The robot may comprise an adaptive controller comprising a neuron network. The adaptive controller may be configured to generate actuator control commands based on the user input and the output of the learning process. Training of the adaptive controller may comprise partial-set training. The user may train the adaptive controller to operate a first actuator subset. Subsequent to learning to operate the first subset, the adaptive controller may be trained to operate another subset of degrees of freedom based on user input via the control apparatus. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: March 22, 2022
    Assignee: Brain Corporation
    Inventors: Jean-Baptiste Passot, Oleg Sinyavskiy, Eugene Izhikevich
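The partial-set training idea above can be illustrated with a minimal sketch (not the patented implementation; the controller, teacher function, and learning rate are hypothetical stand-ins): a linear adaptive controller is first trained, under user supervision, on one actuator subset and subsequently on the remaining degrees of freedom.
```python
import numpy as np

# Minimal sketch of partial-set training: a linear controller maps sensory
# state to commands for all actuators, but only the actuator subset the user
# currently operates receives supervised updates.  All names are illustrative.

rng = np.random.default_rng(0)
n_state, n_actuators = 4, 6
W = np.zeros((n_actuators, n_state))          # adaptive controller weights

def user_teacher(state, subset):
    """Stand-in for user input: target commands for the controlled subset."""
    return np.tanh(state.sum()) * np.ones(len(subset))

def train_subset(subset, trials=200, lr=0.05):
    for _ in range(trials):
        state = rng.normal(size=n_state)       # sensory context
        predicted = W @ state                  # controller output, all DOF
        target = user_teacher(state, subset)   # teaching input, subset only
        error = target - predicted[subset]
        W[subset] += lr * np.outer(error, state)   # update only trained rows

train_subset(subset=[0, 1, 2])                 # first actuator subset
train_subset(subset=[3, 4, 5])                 # subsequently, remaining DOF
print(np.round(W, 2))
```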
  • Patent number: 11224971
    Abstract: Robotic devices may be trained by a user guiding the robot along a target action trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate a control signal based on one or more of the user guidance, sensory input, performance measure, and/or other information. Training may comprise a plurality of trials, wherein for a given context the user and the robot's controller may collaborate to develop an association between the context and the target action. Upon developing the association, the adaptive controller may be capable of generating the control signal and/or an action indication prior to and/or in lieu of user input. The predictive control functionality attained by the controller may enable autonomous operation of robotic devices, obviating the need for continuing user guidance. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: January 18, 2022
    Assignee: Brain Corporation
    Inventors: Patryk Laurent, Jean-Baptiste Passot, Oleg Sinyavskiy, Filip Ponulak, Borja Ibarz Gabardos, Eugene Izhikevich
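A minimal sketch of the trial-based collaboration described above, assuming a simple linear context-action association and a hand-picked blending schedule (both illustrative, not the patented method): as the learned association improves, the controller's prediction gradually replaces the user's guidance.
```python
import numpy as np

# Minimal sketch: over repeated trials the controller learns a
# context -> action association; as its prediction improves, it can act
# before, or in place of, user guidance.

rng = np.random.default_rng(1)
contexts = np.eye(3)                          # three discrete sensory contexts
target_actions = np.array([0.8, -0.5, 0.2])   # actions the user demonstrates
w = np.zeros(3)                               # learned association weights

for trial in range(50):
    i = rng.integers(3)                       # context observed on this trial
    predicted = w @ contexts[i]
    user_input = target_actions[i]            # user guidance along trajectory
    # Collaboration: rely on the user early, on the controller later.
    alpha = min(1.0, trial / 30)
    action = alpha * predicted + (1 - alpha) * user_input   # executed command
    w += 0.2 * (user_input - predicted) * contexts[i]       # update association

print(np.round(w, 2))   # approaches target_actions: autonomous operation
```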
  • Patent number: 11219401
    Abstract: The present disclosure provides a non-transitory computer program product embodied in a computer-readable medium that, when executed by one or more analysis modules, provides a visual output for presenting physiological signals of a cardiovascular system. The non-transitory computer program product comprises a first axis representing subsets of intrinsic mode functions (IMFs); a second axis representing a function of signal strength in a time interval; and a plurality of visual elements, each of the visual elements being defined by the first axis and the second axis, and each of the visual elements comprising a plurality of analyzed data units collected over the time interval, wherein each of the analyzed data units comprises a first coordinate, a second coordinate, and a probability density value generated from an intrinsic probability density function of one of the subsets of IMFs.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: January 11, 2022
    Assignee: Adaptive, Intelligent and Dynamic Brain Corporation (AidBrain)
    Inventor: Norden E. Huang
  • Patent number: 11161241
    Abstract: Robotic devices may be trained by a user guiding the robot along a target trajectory using a correction signal. A robotic device may comprise an adaptive controller configured to generate control commands based on one or more of the trainer input, sensory input, and/or performance measure. Training may comprise a plurality of trials. During an initial portion of a trial, the trainer may observe the robot's operation and refrain from providing the training input to the robot. Upon observing a discrepancy between the target behavior and the actual behavior during the initial trial portion, the trainer may provide a teaching input (e.g., a correction signal) configured to affect the robot's trajectory during subsequent trials. Upon completing a sufficient number of trials, the robot may be capable of navigating the trajectory in the absence of the training input.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: November 2, 2021
    Assignee: Brain Corporation
    Inventors: Oleg Sinyavskiy, Jean-Baptiste Passot, Eugene Izhikevich
  • Patent number: 11099575
    Abstract: The safe operation and navigation of robots is an active research topic for many real-world applications, such as the automation of large industrial equipment. This technological field often requires heavy machines with arbitrary shapes to navigate very close to obstacles, a challenging and largely unsolved problem. To address this issue, a new planning architecture is developed that allows wheeled vehicles to navigate safely and without human supervision in cluttered environments. The inventive methods and systems disclosed herein belong to the Model Predictive Control (MPC) family of local planning algorithms. The technological features disclosed herein work in the space of two-dimensional (2D) occupancy grids and plan in motor command space using a black-box forward model for state inference. Compared to conventional methods and systems, the inventive methods and systems disclosed herein have several properties that make them scalable and applicable to a production environment. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: August 24, 2021
    Assignee: Brain Corporation
    Inventors: Oleg Sinyavskiy, Borja Ibarz Gabardos, Jean-Baptiste Passot
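A minimal sketch of the MPC-style local planning described above, under illustrative assumptions (a unicycle stand-in for the black-box forward model, a tiny occupancy grid, and hand-picked costs): candidate motor commands are rolled out through the forward model and scored against the grid plus the final distance to a goal.
```python
import numpy as np

# Minimal MPC-style sketch (all parameters illustrative): candidate motor
# commands (v, w) are rolled out through a black-box forward model and scored
# against a 2D occupancy grid plus the final distance to a goal.

grid = np.zeros((50, 50))                    # occupancy grid, 0.1 m cells
grid[23:27, 15:17] = 1.0                     # occupied block ahead of the robot
goal = np.array([4.0, 2.5])                  # goal position in metres

def forward_model(state, v, w, dt=0.2, steps=15):
    """Black-box stand-in: unicycle rollout returning predicted (x, y) points."""
    x, y, th = state
    path = []
    for _ in range(steps):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        path.append((x, y))
    return np.array(path)

def rollout_cost(path):
    cells = np.clip((path / 0.1).astype(int), 0, 49)
    collisions = grid[cells[:, 1], cells[:, 0]].sum()   # occupied cells crossed
    return 100.0 * collisions + np.linalg.norm(path[-1] - goal)

state = (0.5, 2.5, 0.0)                      # x, y, heading of the robot
candidates = [(v, w) for v in (0.3, 0.6) for w in (-0.8, -0.4, 0.0, 0.4, 0.8)]
best = min(candidates, key=lambda c: rollout_cost(forward_model(state, *c)))
print("selected (v, w):", best)              # a command that skirts the block
```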
  • Patent number: 11042775
    Abstract: A data processing apparatus may utilize an artificial neuron network configured to reduce the dimensionality of input data using a sparse transformation configured using the receptive field structure of network units. Output of the network may be analyzed for temporal persistency, which is characterized by a similarity matrix. Elements of the matrix may be incremented when a unit active in the present frame was also active at a preceding frame. The similarity matrix may be partitioned based on a distance measure for a given element of the matrix and its closest neighbors. Stability of learning of temporally proximal patterns may be greatly improved as the similarity matrix is learned independently of the partitioning operation. Partitioning of the similarity matrix using the methodology of the disclosure may be performed online, e.g., contemporaneously with the encoding and/or similarity matrix construction, thereby enabling learning of new features in the input data. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: June 22, 2021
    Assignee: Brain Corporation
    Inventors: Micah Richert, Filip Piekniewski
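A minimal sketch of the similarity-matrix idea described above (the activity generator and the connected-components partitioning are illustrative stand-ins, not the patented distance-based method): temporal co-activity of units accumulates in the matrix, and temporally persistent feature groups emerge as blocks that a simple partitioning pass can separate.
```python
import numpy as np

# Minimal sketch: a similarity matrix accumulates temporal co-activity of
# network units -- S[i, j] is incremented when unit i is active in the current
# frame and unit j was active in the preceding frame.

n_units, n_frames = 8, 500
S = np.zeros((n_units, n_units))
prev = np.zeros(n_units, dtype=bool)

for t in range(n_frames):
    # Stand-in for sparse-transform output: units 0-3 carry one temporally
    # persistent feature, units 4-7 another; the active feature switches
    # only every 50 frames.
    group = (t // 50) % 2
    active = np.zeros(n_units, dtype=bool)
    active[group * 4 + t % 4] = True
    S += np.outer(active, prev)               # temporal co-activity update
    prev = active

# Partition via connected components of the thresholded, symmetrised matrix
# (an illustrative stand-in for the distance-based partitioning).
adj = (S + S.T) > (S + S.T).mean()
labels = -np.ones(n_units, dtype=int)
for seed in range(n_units):
    if labels[seed] >= 0:
        continue
    stack, labels[seed] = [seed], seed
    while stack:
        i = stack.pop()
        for j in np.flatnonzero(adj[i]):
            if labels[j] < 0:
                labels[j] = seed
                stack.append(j)
print(labels)   # units 0-3 fall into one cluster, units 4-7 into the other
```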
  • Patent number: 10989521
    Abstract: Data streams from multiple image sensors may be combined in order to form, for example, an interleaved video stream, which can be used to determine distance to an object. The video stream may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized in order to determine a depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: April 27, 2021
    Assignee: Brain Corporation
    Inventors: Micah Richert, Marius Buibas, Vadim Polonichko
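A minimal sketch of the depth-from-disparity idea above, with illustrative camera parameters; for simplicity the block disparity (the analogue of an encoder's motion vector between interleaved frames) is computed directly rather than parsed from a hardware encoder.
```python
import numpy as np

# Illustrative sketch only: block disparity between a left/right image pair is
# computed directly; depth then follows from the usual stereo relation
#   depth = focal_length * baseline / disparity.

rng = np.random.default_rng(3)
focal_px, baseline_m = 700.0, 0.12               # assumed camera parameters
left = rng.random((64, 64))
true_disparity = 8
right = np.roll(left, -true_disparity, axis=1)   # synthetic shifted view

def block_disparity(left, right, y, x, size=8, search=16):
    """Best horizontal offset of a block, akin to a motion vector."""
    block = left[y:y + size, x:x + size]
    errors = [np.abs(block - right[y:y + size, x - d:x - d + size]).sum()
              for d in range(search)]
    return int(np.argmin(errors))

d = block_disparity(left, right, y=24, x=32)
depth = focal_px * baseline_m / max(d, 1)
print(f"disparity {d} px -> depth {depth:.2f} m")
```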
  • Patent number: 10967519
    Abstract: Systems and methods for automatic detection of spills are disclosed. In some exemplary implementations, a robot can have a spill detector comprising at least one optical imaging device configured to capture at least one image of a scene containing a spill while the robot moves between locations. The robot can process the at least one image by segmentation. Once the spill has been identified, the robot can then generate an alert indicative at least in part of a recognition of the spill. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: April 6, 2021
    Assignee: Brain Corporation
    Inventors: Dimitry Fisher, Cody Griffin, Micah Richert, Filip Piekniewski, Eugene Izhikevich, Jayram Moorkanikara Nageswaran, John Black
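A minimal sketch of the detection-and-alert flow above, assuming a crude brightness-threshold segmentation and a hypothetical area threshold (the patented robot's actual segmentation is not specified here).
```python
import numpy as np

# Illustrative sketch: segment bright, "wet-looking" pixels in a floor image
# by thresholding, and raise an alert when a sufficiently large region is
# found.  Thresholds and image content are synthetic.

rng = np.random.default_rng(4)
frame = rng.random((60, 80)) * 0.3            # dull floor texture
frame[30:40, 20:35] = 0.9                     # synthetic specular spill patch

mask = frame > 0.7                            # crude segmentation by threshold
spill_pixels = int(mask.sum())

MIN_SPILL_AREA = 50                           # assumed area threshold, pixels
if spill_pixels >= MIN_SPILL_AREA:
    ys, xs = np.nonzero(mask)
    print(f"ALERT: possible spill, {spill_pixels} px "
          f"near row {int(ys.mean())}, col {int(xs.mean())}")
```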
  • Patent number: 10899008
    Abstract: Systems and methods for robotic path planning are disclosed. In some implementations of the present disclosure, a robot can generate a cost map associated with an environment of the robot. The cost map can comprise a plurality of pixels each corresponding to a location in the environment, where each pixel can have an associated cost. The robot can further generate a plurality of masks having projected path portions for the travel of the robot within the environment, where each mask comprises a plurality of mask pixels that correspond to locations in the environment. The robot can then determine a mask cost associated with each mask based at least in part on the cost map and select a mask based at least in part on the mask cost. Based on the projected path portions within the selected mask, the robot can navigate a space. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: January 26, 2021
    Assignee: Brain Corporation
    Inventors: Oleg Sinyavskiy, Jean-Baptiste Passot, Borja Ibarz Gabardos, Diana Vu Le
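A minimal sketch of the mask-cost selection described above (grid size, candidate headings, and costs are illustrative): each mask marks the cost-map pixels a candidate path portion would cross, the mask cost is the sum of those pixel costs, and the cheapest mask is selected.
```python
import numpy as np

# Minimal sketch: candidate path masks are scored against a cost map by
# summing the costs of the pixels each mask covers; the cheapest mask's path
# portion is the one the robot would follow.  Values are illustrative.

cost_map = np.ones((20, 20))                  # base traversal cost per pixel
cost_map[8:12, 10:] = 50.0                    # high-cost region (obstacle)

def straight_mask(angle_deg, length=15):
    """Mask of pixels along a straight candidate path from the robot at (10, 0)."""
    mask = np.zeros_like(cost_map, dtype=bool)
    th = np.deg2rad(angle_deg)
    for t in np.linspace(0, length, 60):
        r, c = int(10 + t * np.sin(th)), int(t * np.cos(th))
        if 0 <= r < 20 and 0 <= c < 20:
            mask[r, c] = True
    return mask

candidates = {a: straight_mask(a) for a in (-30, -15, 0, 15, 30)}
mask_costs = {a: cost_map[m].sum() for a, m in candidates.items()}
best = min(mask_costs, key=mask_costs.get)
print(mask_costs, "-> selected heading:", best)
```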
  • Patent number: 10895629
    Abstract: Broadband signal transmissions may be used for object detection and/or ranging. Broadband transmissions may comprise a pseudo-random bit sequence or a bit sequence produced using a random process. The sequence may be used to modulate transmissions of a given wave type. Various types of waves may be utilized, e.g., pressure, light, and radio waves. Waves reflected by objects within the sensing volume may be sampled. The received signal may be convolved with a time-reversed copy of the transmitted random sequence to produce a correlogram. The correlogram may be analyzed to determine range to objects. The analysis may comprise determination of one or more peaks/troughs in the correlogram. Range to an object may be determined based on the time lag of a respective peak. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: January 19, 2021
    Assignee: Brain Corporation
    Inventor: Micah Richert
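A minimal sketch of correlogram ranging as described above, with assumed acoustic parameters: convolving the received signal with a time-reversed copy of the transmitted pseudo-random sequence is equivalent to cross-correlation, and the peak lag gives the round-trip delay.
```python
import numpy as np

# Illustrative sketch: a pseudo-random sequence is "transmitted", a delayed
# noisy echo is "received", and convolution with the time-reversed transmitted
# copy (i.e. cross-correlation) peaks at the lag of the round-trip delay.

rng = np.random.default_rng(5)
fs = 48_000.0                                   # sample rate, Hz (assumed)
c = 343.0                                       # speed of sound, m/s
tx = rng.integers(0, 2, size=1024) * 2 - 1      # pseudo-random +/-1 sequence

delay = 240                                     # samples of round-trip delay
rx = np.zeros(2048)
rx[delay:delay + tx.size] += 0.5 * tx           # attenuated echo
rx += 0.3 * rng.normal(size=rx.size)            # measurement noise

correlogram = np.convolve(rx, tx[::-1], mode="valid")   # = cross-correlation
lag = int(np.argmax(correlogram))
range_m = (lag / fs) * c / 2                    # one-way range from round trip
print(f"lag {lag} samples -> range {range_m:.2f} m")
```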
  • Patent number: 10860882
    Abstract: Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between the reference color and the pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce the number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile determining platform. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: December 8, 2020
    Assignee: Brain Corporation
    Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher
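A minimal single-channel sketch of rarity-based saliency (bin count, image content, and threshold are illustrative; the color-distance and map-combination steps are omitted): pixel values that occur least frequently receive the highest saliency, so a small object stands out against large uniform regions.
```python
import numpy as np

# Illustrative sketch: histogram the image, map each pixel to the inverse
# frequency of its value, and treat the rarest values as the most salient.

img = np.full((48, 48), 0.2)
img[:, 24:] = 0.4                           # two common background values
img[20:26, 30:36] = 0.85                    # small, rarely occurring object

hist, edges = np.histogram(img, bins=32, range=(0.0, 1.0))
bin_idx = np.clip(np.digitize(img, edges) - 1, 0, 31)
freq = hist[bin_idx] / img.size             # how common each pixel's value is
saliency = 1.0 - freq                       # least frequent -> most salient

ys, xs = np.nonzero(saliency > 0.9)         # high-saliency area
print("salient region rows", ys.min(), "-", ys.max(),
      "cols", xs.min(), "-", xs.max())      # matches the object's bounds
```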
  • Patent number: 10852730
    Abstract: Systems and methods for robotic mobile platforms are disclosed. In one exemplary implementation, a system for enabling autonomous navigation of a mobile platform is disclosed. The system may include a memory having computer readable instructions stored thereon and at least one processor configured to execute the computer readable instructions. The execution of the computer readable instructions causes the system to: receive a first set of coordinates corresponding to a first location of a user; determine a different second location for the mobile platform; navigate the mobile platform between the second location and the first location; and receive a different second set of coordinates. Methods, apparatus and computer-readable mediums are also disclosed.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: December 1, 2020
    Assignee: Brain Corporation
    Inventor: Eugene Izhikevich
  • Patent number: 10843338
    Abstract: Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: November 24, 2020
    Assignee: Brain Corporation
    Inventors: Philip Meier, Jean-Baptiste Passot, Borja Ibarz Gabardos, Patryk Laurent, Oleg Sinyavskiy, Peter O'Connor, Eugene Izhikevich
  • Patent number: 10823576
    Abstract: Systems and methods for robotic mapping are disclosed. In some exemplary implementations, a robot can travel in an environment. From travelling in the environment, the robot can create a graph comprising a plurality of nodes, wherein each node corresponds to a scan taken by a sensor of the robot at a location in the environment. In some exemplary implementations, the robot can generate a map of the environment from the graph. In some cases, to facilitate map generation, the robot can constrain the graph to start and end at a substantially similar location. The robot can also perform scan matching on extended scan groups, determined from identifying overlap between scans, to further determine the location of features in a map. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: November 3, 2020
    Assignee: Brain Corporation
    Inventors: Jaldert Rombouts, Borja Ibarz Gabardos, Jean-Baptiste Passot, Andrew Smith
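A minimal sketch of the graph idea above, with positions only and no scan matching (the steps, noise level, and relaxation scheme are illustrative): odometry edges link consecutive scan nodes, a loop-closure edge constrains the graph to end where it started, and simple relaxation spreads the accumulated drift over the loop.
```python
import numpy as np

# Illustrative pose-graph sketch (positions only, no orientation): each node
# is where a scan was taken, consecutive nodes are linked by noisy odometry
# constraints, and a loop-closure constraint ties the final node to the start.

rng = np.random.default_rng(7)
true_steps = [(1, 0)] * 4 + [(0, 1)] * 4 + [(-1, 0)] * 4 + [(0, -1)] * 4
odom = [np.array(s, float) + rng.normal(0, 0.05, 2) for s in true_steps]

nodes = [np.zeros(2)]                        # dead-reckoned node positions
for step in odom:
    nodes.append(nodes[-1] + step)
nodes = np.array(nodes)
print("gap before closure:", round(float(np.linalg.norm(nodes[-1] - nodes[0])), 3))

# Constraints: (i, j, measured offset from node i to node j).
constraints = [(i, i + 1, odom[i]) for i in range(len(odom))]
constraints.append((len(nodes) - 1, 0, np.zeros(2)))     # loop closure

for _ in range(300):                         # damped Gauss-Seidel relaxation
    for i, j, z in constraints:
        err = (nodes[i] + z) - nodes[j]      # disagreement with the constraint
        nodes[i] -= 0.1 * err
        nodes[j] += 0.1 * err
    nodes[0] = np.zeros(2)                   # anchor the map's first node

print("gap after closure: ", round(float(np.linalg.norm(nodes[-1] - nodes[0])), 3))
```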
  • Patent number: 10818016
    Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units in higher levels of the hierarchical structure are more abstract than lower associative memory units. The associative memory units can communicate to one another supplying contextual data.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: October 27, 2020
    Assignee: Brain Corporation
    Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
  • Patent number: 10820009
    Abstract: Frame sequences from multiple image sensors may be combined in order to form, for example, an interleaved frame sequence. Individual frames of the combined sequence may be configured by a combination (e.g., concatenation) of frames from one or more source sequences. The interleaved/concatenated frame sequence may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized in order to determine a depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.
    Type: Grant
    Filed: August 17, 2018
    Date of Patent: October 27, 2020
    Assignee: Brain Corporation
    Inventor: Micah Richert
  • Patent number: 10810456
    Abstract: Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between the reference color and the pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce the number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile determining platform.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: October 20, 2020
    Assignee: Brain Corporation
    Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher
  • Patent number: 10807230
    Abstract: Apparatus and methods for navigation of a robotic device configured to operate in an environment comprising objects and/or persons. The location of objects and/or persons may change prior to and/or during operation of the robot. In one embodiment, a bistatic sensor comprises a transmitter and a receiver. The receiver may be spatially displaced from the transmitter. The transmitter may project a pattern on a surface in the direction of robot movement. In one variant, the pattern comprises an encoded portion and an information portion. The information portion may be used to communicate information related to robot movement to one or more persons. The encoded portion may be used to determine the presence of one or more objects in the path of the robot. The receiver may sample a reflected pattern and compare it with the transmitted pattern. Based on a similarity measure breaching a threshold, an indication of object presence may be produced. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: October 20, 2020
    Assignee: Brain Corporation
    Inventors: Botond Szatmary, Micah Richert
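A minimal sketch of the pattern-comparison step described above (the pattern, noise model, and threshold are illustrative): the sampled reflection is compared with the transmitted encoded portion via normalized correlation, and a drop in similarity past a threshold indicates an object distorting the pattern in the robot's path.
```python
import numpy as np

# Illustrative sketch: the transmitted (projected) encoded pattern is compared
# with the pattern sampled by the receiver using normalised correlation; an
# object in the path disrupts part of the reflection and lowers the similarity.

rng = np.random.default_rng(8)
pattern = rng.integers(0, 2, size=256).astype(float)       # encoded portion

def similarity(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Reflection from a flat floor: mild noise only.
floor_return = pattern + 0.1 * rng.normal(size=pattern.size)
# Reflection with an object in the path: part of the pattern is disrupted.
object_return = floor_return.copy()
object_return[80:200] = rng.random(120)

THRESHOLD = 0.8                                             # assumed
for name, ret in [("floor", floor_return), ("object", object_return)]:
    s = similarity(pattern, ret)
    print(f"{name}: similarity {s:.2f} ->",
          "object detected" if s < THRESHOLD else "path clear")
```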
  • Patent number: 10723018
    Abstract: Systems and methods for remote operating and/or monitoring of a robot are disclosed. In some exemplary implementations, a robot can be communicatively coupled to a remote network. The remote network can send and receive signals with the robot. In some exemplary implementations, the remote network can receive sensor data from the robot, allowing the remote network to determine the context of the robot. In this way, the remote network can respond to assistance requests and also provide operating commands to the robot.
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: July 28, 2020
    Assignee: Brain Corporation
    Inventors: Cody Griffin, Roger Unwin, John Black
  • Patent number: 10728570
    Abstract: A data processing apparatus may use a video encoder in order to extract motion information from streaming video in real time. Output of the video encoder may be parsed in order to extract motion information associated with one or more objects within the video stream. Motion information may be utilized by, e.g., an adaptive controller in order to detect one or more objects salient to a given task. The controller may be configured to determine a control signal associated with the given task. The control signal determination may be configured based on a characteristic of an object detected using motion information extracted from the encoded output. The control signal may be provided to a robotic device, causing the device to execute the task. The use of dedicated hardware video encoder output may reduce energy consumption associated with execution of the task and/or extend autonomy of the robotic device.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: July 28, 2020
    Assignee: Brain Corporation
    Inventor: Micah Richert