Patents by Inventor Micah Richert
Micah Richert has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190182499
Abstract: A data processing apparatus may use a video encoder in order to extract motion information from streaming video in real time. Output of the video encoder may be parsed in order to extract motion information associated with one or more objects within the video stream. Motion information may be utilized by, e.g., an adaptive controller in order to detect one or more objects salient to a given task. The controller may be configured to determine a control signal associated with the given task. The control signal determination may be configured based on a characteristic of an object detected using motion information extracted from the encoded output. The control signal may be provided to a robotic device, causing the device to execute the task. The use of dedicated hardware video encoder output may reduce energy consumption associated with execution of the task and/or extend autonomy of the robotic device.
Type: Application
Filed: December 5, 2018
Publication date: June 13, 2019
Inventor: Micah Richert
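The key idea above is that a hardware video encoder already computes per-block motion vectors as a by-product of compression, so motion can be recovered by parsing its output rather than re-computed. Parsing a real encoder bitstream is hardware-specific, so the sketch below substitutes a plain exhaustive block-matching search in NumPy to show what the extracted motion field looks like; the function name and parameters are illustrative, not from the patent.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Estimate a per-block motion field by exhaustive block matching.

    Stands in for the motion vectors that would be parsed from a
    dedicated hardware encoder's output. Returns an array of shape
    (rows, cols, 2) holding the best (dy, dx) offset per block."""
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block]
            best_err, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = prev[yy:yy + block, xx:xx + block]
                    err = np.abs(cand - ref).sum()  # sum of absolute differences
                    if err < best_err:
                        best_err, best_dv = err, (dy, dx)
            vectors[by, bx] = best_dv
    return vectors
```

A controller could then threshold the magnitude of these vectors to flag moving (task-salient) objects, which is the role the abstract assigns to the adaptive controller.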
-
Publication number: 20190178631
Abstract: Data streams from multiple image sensors may be combined in order to form, for example, an interleaved video stream, which can be used to determine distance to an object. The video stream may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized to determine the depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.
Type: Application
Filed: December 10, 2018
Publication date: June 13, 2019
Inventors: Micah Richert, Marius Buibas, Vadim Polonichko
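The trick described above is that if left and right frames are interleaved into one stream, a motion estimator run on consecutive frames reports left/right disparity instead of temporal motion, and disparity converts to depth through the standard pinhole stereo relation Z = f·B/d. A minimal sketch (function names and parameters are illustrative):

```python
import numpy as np

def interleave(left_frames, right_frames):
    """Build the combined stream L0, R0, L1, R1, ... so that an
    off-the-shelf motion estimator applied to consecutive frames
    measures left/right disparity rather than temporal motion."""
    out = []
    for l, r in zip(left_frames, right_frames):
        out.extend([l, r])
    return out

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: Z = f * B / d, with disparity d and
    focal length f in pixels and baseline B in meters."""
    return focal_px * baseline_m / disparity_px
```

With a 500 px focal length and a 10 cm baseline, a 10 px disparity corresponds to a depth of 5 m; an adaptive controller could then treat nearby (large-disparity) regions as task-salient.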
-
Patent number: 10282849
Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units in higher levels of the hierarchical structure are more abstract than data in lower associative memory units. The associative memory units can communicate with one another, supplying contextual data.
Type: Grant
Filed: June 19, 2017
Date of Patent: May 7, 2019
Assignee: Brain Corporation
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
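To make the "associative memory unit" notion concrete, here is a deliberately toy unit: it stores (key, value) pairs and recalls by nearest neighbor. This is only an illustration of the associative-recall building block, not the patented multi-layer design; higher-level units in a hierarchy could be fed pooled (and hence more abstract) versions of the lower-level inputs.

```python
import numpy as np

class AssociativeMemoryUnit:
    """Toy nearest-neighbor associative memory: stores (key, value)
    pairs and recalls the value whose key is closest to the query.
    A stand-in for one unit in the hierarchy; illustrative only."""

    def __init__(self):
        self._keys = []
        self._values = []

    def store(self, key, value):
        self._keys.append(np.asarray(key, dtype=float))
        self._values.append(value)

    def recall(self, query):
        # Return the value associated with the nearest stored key.
        query = np.asarray(query, dtype=float)
        dists = [np.linalg.norm(query - k) for k in self._keys]
        return self._values[int(np.argmin(dists))]
```

Stacking such units, with each level storing a coarser summary of the level below and feeding its recall back down as context, mirrors the hierarchical, context-sharing structure the abstract describes.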
-
Patent number: 10268919
Abstract: Methods and apparatus for tracking and discerning objects using their saliency. In one embodiment of the present disclosure, the tracking of objects is based on a combination of object saliency and additional sources of signal about object identity. Under certain simplifying assumptions, the present disclosure allows for robust tracking of simple objects with limited processing resources. In one or more variants, efficient implementation of the methods described allows sensors (e.g., cameras) to be used on board a robot (or autonomous vehicle) on a mobile determining platform, such as to capture images to determine the presence and/or identity of salient objects. Such determination of salient objects allows for, e.g., adjustments to the trajectory of the vehicle or other moving object.
Type: Grant
Filed: September 21, 2015
Date of Patent: April 23, 2019
Assignee: Brain Corporation
Inventors: Filip Piekniewski, Micah Richert
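The combination step above can be made concrete with a small sketch: each tracking candidate gets a saliency score and an independent identity score (e.g., color similarity to the tracked object), and the tracker follows the candidate with the best combined evidence. The multiplicative combination and the function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def pick_tracked_candidate(saliency, identity_match):
    """Combine per-candidate saliency with an independent identity
    signal and return the index of the most likely candidate.

    Multiplying the two scores means a candidate must be supported by
    both signals, which keeps a merely conspicuous distractor from
    stealing the track."""
    score = np.asarray(saliency, dtype=float) * np.asarray(identity_match, dtype=float)
    return int(np.argmax(score))
```

For example, a candidate with saliency 0.9 but identity match 0.1 loses to one with saliency 0.5 and identity match 0.8, which is exactly the robustness the abstract attributes to using both signals.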
-
Publication number: 20190061160
Abstract: Systems and methods for automatic detection of spills are disclosed. In some exemplary implementations, a robot can have a spill detector comprising at least one optical imaging device configured to capture at least one image of a scene containing a spill while the robot moves between locations. The robot can process the at least one image by segmentation. Once the spill has been identified, the robot can then generate an alert indicative at least in part of a recognition of the spill.
Type: Application
Filed: June 4, 2018
Publication date: February 28, 2019
Inventors: Dimitry Fisher, Cody Griffin, Micah Richert, Filip Piekniewski, Eugene Izhikevich, Jayram Moorkanikara Nageswaran, John Black
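The pipeline in the abstract is capture, segment, then alert. As a minimal stand-in for the segmentation step, the sketch below thresholds a grayscale floor image and raises an alert when the segmented region is large enough to plausibly be a spill; the thresholds and function name are illustrative assumptions, not the patented segmentation method.

```python
import numpy as np

def detect_spill(floor_image, reflectance_thresh=200, min_area_frac=0.01):
    """Segment a grayscale floor image by simple thresholding and
    alert when the segmented area exceeds a minimum fraction of the
    frame. Both thresholds are illustrative placeholders for a real
    segmentation model."""
    mask = floor_image > reflectance_thresh   # bright, reflective pixels
    return bool(mask.mean() > min_area_frac)  # True -> generate an alert
```

In the described system this boolean would trigger the alert while the robot moves between locations; a production detector would replace the threshold with learned segmentation.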
-
Publication number: 20190043208
Abstract: Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile determining platform.
Type: Application
Filed: July 23, 2018
Publication date: February 7, 2019
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher
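The core rule above, that the least frequently occurring pixel values are the most salient, reduces to a histogram lookup, and the multiplicative combination of per-channel maps is an elementwise product. A small sketch under those assumptions (bin count and names are illustrative):

```python
import numpy as np

def rarity_saliency(channel, bins=16):
    """Per-pixel saliency as 1 minus the relative frequency of the
    pixel's value bin: the least frequently occurring values are
    deemed the most salient."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    idx = np.clip(channel * bins // 256, 0, bins - 1).astype(int)
    freq = hist[idx] / channel.size      # relative frequency per pixel
    return 1.0 - freq

def combined_saliency(channels):
    """Multiplicatively combine per-channel maps; a location must be
    rare in every channel to stay salient, suppressing single-channel
    false positives."""
    combined = np.ones(np.asarray(channels[0]).shape, dtype=float)
    for c in channels:
        combined = combined * rarity_saliency(c)
    return combined
```

A lone bright pixel in an otherwise dark image lands in a near-empty histogram bin and scores near 1, while the background scores near 0, which is the cheap "pop-out" detection the abstract targets.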
-
Patent number: 10197664
Abstract: Broadband signal transmissions may be used for object detection and/or ranging. Broadband transmissions may comprise a pseudo-random bit sequence or a bit sequence produced using a random process. The sequence may be used to modulate transmissions of a given wave type. Various types of waves may be utilized, e.g., pressure, light, and radio waves. Waves reflected by objects within the sensing volume may be sampled. The received signal may be convolved with a time-reversed copy of the transmitted random sequence to produce a correlogram. The correlogram may be analyzed to determine range to objects. The analysis may comprise determination of one or more peaks/troughs in the correlogram. Range to an object may be determined based on a time lag of a respective peak.
Type: Grant
Filed: July 20, 2015
Date of Patent: February 5, 2019
Assignee: Brain Corporation
Inventor: Micah Richert
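Convolving the received signal with a time-reversed copy of the transmitted sequence is mathematically the same as cross-correlating the two, and the lag of the resulting peak gives the round-trip delay. The sketch below implements exactly that chain with NumPy; the sample rate, wave speed, and function name are illustrative assumptions.

```python
import numpy as np

def range_from_echo(tx, rx, sample_rate_hz, wave_speed_mps):
    """Correlate the received signal with the transmitted pseudo-random
    sequence (equivalent to convolving with its time-reversed copy),
    locate the correlogram peak, and convert the peak's lag to a
    one-way range."""
    corr = np.correlate(rx, tx, mode="full")      # the correlogram
    lag = int(np.argmax(corr)) - (len(tx) - 1)    # delay in samples
    delay_s = lag / sample_rate_hz                # round-trip delay
    return wave_speed_mps * delay_s / 2.0         # halve: out-and-back path
```

Pseudo-random sequences have sharp autocorrelation peaks and low sidelobes, which is why the peak stands out cleanly even when the echo is buried among other reflections.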
-
Patent number: 10194163
Abstract: A data processing apparatus may use a video encoder in order to extract motion information from streaming video in real time. Output of the video encoder may be parsed in order to extract motion information associated with one or more objects within the video stream. Motion information may be utilized by, e.g., an adaptive controller in order to detect one or more objects salient to a given task. The controller may be configured to determine a control signal associated with the given task. The control signal determination may be configured based on a characteristic of an object detected using motion information extracted from the encoded output. The control signal may be provided to a robotic device, causing the device to execute the task. The use of dedicated hardware video encoder output may reduce energy consumption associated with execution of the task and/or extend autonomy of the robotic device.
Type: Grant
Filed: May 22, 2014
Date of Patent: January 29, 2019
Assignee: Brain Corporation
Inventor: Micah Richert
-
Patent number: 10184787
Abstract: Data streams from multiple image sensors may be combined in order to form, for example, an interleaved video stream, which can be used to determine distance to an object. The video stream may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized to determine the depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.
Type: Grant
Filed: April 9, 2018
Date of Patent: January 22, 2019
Assignee: Brain Corporation
Inventors: Micah Richert, Marius Buibas, Vadim Polonichko
-
Publication number: 20190005659
Abstract: Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile determining platform.
Type: Application
Filed: August 17, 2018
Publication date: January 3, 2019
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher
-
Publication number: 20190007695
Abstract: Frame sequences from multiple image sensors may be combined in order to form, for example, an interleaved frame sequence. Individual frames of the combined sequence may be configured by a combination (e.g., concatenation) of frames from one or more source sequences. The interleaved/concatenated frame sequence may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized to determine the depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.
Type: Application
Filed: August 17, 2018
Publication date: January 3, 2019
Inventor: Micah Richert
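The concatenation variant above differs from plain interleaving in that each frame of the combined sequence is itself built from frames of several source sequences, e.g., placed side by side, so a single encoding pass can relate content across sources. A minimal sketch of that construction (the function name is illustrative):

```python
import numpy as np

def concatenate_frames(frames_a, frames_b):
    """Build the combined sequence: each output frame is the
    side-by-side concatenation of one frame from each source
    sequence, letting one motion-estimation pass span both sources."""
    return [np.hstack([a, b]) for a, b in zip(frames_a, frames_b)]
```

Motion vectors that cross the seam between the two halves then encode the correspondence between the sources, which downstream processing can convert to disparity and depth.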
-
Publication number: 20180299258
Abstract: Data streams from multiple image sensors may be combined in order to form, for example, an interleaved video stream, which can be used to determine distance to an object. The video stream may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized to determine the depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.
Type: Application
Filed: April 9, 2018
Publication date: October 18, 2018
Inventors: Micah Richert, Marius Buibas, Vadim Polonichko
-
Publication number: 20180293742
Abstract: Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile determining platform.
Type: Application
Filed: January 15, 2018
Publication date: October 11, 2018
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher
-
Patent number: 10057593
Abstract: Frame sequences from multiple image sensors may be combined in order to form, for example, an interleaved frame sequence. Individual frames of the combined sequence may be configured by a combination (e.g., concatenation) of frames from one or more source sequences. The interleaved/concatenated frame sequence may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized to determine the depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.
Type: Grant
Filed: July 8, 2014
Date of Patent: August 21, 2018
Assignee: Brain Corporation
Inventor: Micah Richert
-
Patent number: 10055850
Abstract: Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile determining platform.
Type: Grant
Filed: March 3, 2015
Date of Patent: August 21, 2018
Assignee: Brain Corporation
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher
-
Publication number: 20180207791
Abstract: Apparatus and methods for navigation of a robotic device configured to operate in an environment comprising objects and/or persons. Location of objects and/or persons may change prior to and/or during operation of the robot. In one embodiment, a bistatic sensor comprises a transmitter and a receiver. The receiver may be spatially displaced from the transmitter. The transmitter may project a pattern on a surface in the direction of robot movement. In one variant, the pattern comprises an encoded portion and an information portion. The information portion may be used to communicate information related to robot movement to one or more persons. The encoded portion may be used to determine the presence of one or more objects in the path of the robot. The receiver may sample a reflected pattern and compare it with the transmitted pattern. Based on a similarity measure breaching a threshold, an indication of object presence may be produced.
Type: Application
Filed: January 22, 2018
Publication date: July 26, 2018
Inventors: Botond Szatmary, Micah Richert
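The detection step in the abstract reduces to comparing the sampled reflection against the transmitted pattern with a similarity measure. The sketch below uses normalized correlation and interprets "breaching" as the similarity falling below the threshold when an object distorts the projected pattern; that interpretation, the threshold value, and the function name are assumptions for illustration.

```python
import numpy as np

def object_in_path(transmitted, received, threshold=0.8):
    """Compare the reflected pattern sampled by the receiver against
    the transmitted pattern via normalized correlation. A similarity
    below the threshold suggests the pattern fell on an object rather
    than the expected floor surface. Threshold is illustrative."""
    t = (transmitted - transmitted.mean()) / (transmitted.std() + 1e-12)
    r = (received - received.mean()) / (received.std() + 1e-12)
    similarity = float(np.mean(t * r))  # in [-1, 1]
    return similarity < threshold       # True -> indicate object present
```

Because the receiver is displaced from the transmitter, an object in the path shifts and warps the observed pattern relative to the flat-floor expectation, which is what drives the similarity down.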
-
Patent number: 10032280
Abstract: Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. Least frequently occurring pixel values may be deemed as salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between reference color and pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be additively or multiplicatively combined in order to improve detection performance (e.g., reduce number of false positives). Methodologies described herein may enable robust tracking of objects utilizing fewer determination resources. Efficient implementation of the methods described below may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile determining platform.
Type: Grant
Filed: March 3, 2015
Date of Patent: July 24, 2018
Assignee: Brain Corporation
Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher
-
Patent number: 9987752
Abstract: Systems and methods for automatic detection of spills are disclosed. In some exemplary implementations, a robot can have a spill detector comprising at least one optical imaging device configured to capture at least one image of a scene containing a spill while the robot moves between locations. The robot can process the at least one image by segmentation. Once the spill has been identified, the robot can then generate an alert indicative at least in part of a recognition of the spill.
Type: Grant
Filed: June 10, 2016
Date of Patent: June 5, 2018
Assignee: Brain Corporation
Inventors: Dimitry Fisher, Cody Griffin, Micah Richert, Filip Piekniewski, Eugene Izhikevich, Jayram Moorkanikara Nageswaran, John Black
-
Patent number: 9939253
Abstract: Data streams from multiple image sensors may be combined in order to form, for example, an interleaved video stream, which can be used to determine distance to an object. The video stream may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized to determine the depth of the visual scene, such as by using binocular disparity between two or more images, by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.
Type: Grant
Filed: May 22, 2014
Date of Patent: April 10, 2018
Assignee: Brain Corporation
Inventors: Micah Richert, Marius Buibas, Vadim Polonichko
-
Patent number: 9873196
Abstract: Apparatus and methods for navigation of a robotic device configured to operate in an environment comprising objects and/or persons. Location of objects and/or persons may change prior to and/or during operation of the robot. In one embodiment, a bistatic sensor comprises a transmitter and a receiver. The receiver may be spatially displaced from the transmitter. The transmitter may project a pattern on a surface in the direction of robot movement. In one variant, the pattern comprises an encoded portion and an information portion. The information portion may be used to communicate information related to robot movement to one or more persons. The encoded portion may be used to determine the presence of one or more objects in the path of the robot. The receiver may sample a reflected pattern and compare it with the transmitted pattern. Based on a similarity measure breaching a threshold, an indication of object presence may be produced.
Type: Grant
Filed: June 26, 2015
Date of Patent: January 23, 2018
Assignee: Brain Corporation
Inventors: Botond Szatmary, Micah Richert