Patents by Inventor Sebastian Schweigert

Sebastian Schweigert has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10935383
    Abstract: Provided is a method for navigating and mapping a workspace, including: obtaining a stream of spatial data indicative of a robot's position in a workspace, the stream of spatial data being based on at least output of a first sensor; obtaining a stream of movement data indicative of the robot's displacement in the workspace, the stream of movement data being based on at least output of a second sensor of different type than the first sensor; navigating along a path of the robot in the workspace based on the stream of spatial data; while navigating, mapping at least part of the workspace based on the stream of spatial data to form or update a spatial map in memory; and switching to a second mode of operation if the stream of spatial data is unavailable due to the first sensor becoming impaired or inoperative.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: March 2, 2021
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Lukas Fath, Chen Zhang, Sebastian Schweigert
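
The fallback behaviour this abstract describes (navigate and map from a primary spatial sensor, switch to a second mode when that stream becomes unavailable) can be sketched roughly as follows. This is not the patented implementation; the `DualModeNavigator` class, its mode names, and the tuple-shaped readings are illustrative assumptions:

```python
class DualModeNavigator:
    """Sketch of navigation that falls back to a second mode when the
    primary spatial sensor's stream is impaired or inoperative."""

    def __init__(self):
        self.mode = "spatial"   # primary mode: navigate and map from spatial data
        self.map_points = []    # crude map record built while navigating

    def step(self, spatial_reading, movement_reading):
        # A missing spatial reading models the first sensor dropping out:
        # switch to the second mode and dead-reckon from the movement stream.
        if spatial_reading is None:
            self.mode = "odometry"
            return movement_reading
        self.mode = "spatial"
        self.map_points.append(spatial_reading)  # update the map while navigating
        return spatial_reading
```

A usage pass would feed one reading pair per control tick; only the mode switch and map update are modelled here, not path planning.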
  • Patent number: 10915114
    Abstract: Provided is a process, including: obtaining, with a robot, raw pixel intensity values of a first image and raw pixel intensity values of a second image, wherein the first image and the second image are taken from different positions; determining, with one or more processors, an overlapping area of the fields of view of the first and second images by comparing the raw pixel intensity values of the two images; spatially aligning, with one or more processors, values based on sensor readings of the robot using the overlapping area; and inferring, with one or more processors, features of a working environment of the robot based on the spatially aligned sensor readings.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: February 9, 2021
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Chen Zhang
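
The overlap determination in this abstract (comparing raw pixel intensity values of two images to find their overlapping area) can be illustrated in one dimension. The `find_overlap_shift` helper and its mean-absolute-difference scoring are assumptions for the sketch, not the claimed method:

```python
def find_overlap_shift(row_a, row_b, min_overlap=3):
    """Find the shift of row_b relative to row_a that best matches raw
    intensity values (1-D stand-in for finding an overlapping area).
    Returns the shift with the lowest mean absolute intensity difference."""
    best_shift, best_err = 0, float("inf")
    for shift in range(len(row_a) - min_overlap + 1):
        overlap = min(len(row_a) - shift, len(row_b))
        err = sum(abs(row_a[shift + i] - row_b[i]) for i in range(overlap)) / overlap
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift
```

With two scanlines whose intensities agree over a trailing/leading window, the minimum-error shift marks where the fields of view overlap; real images would use 2-D windows and noise tolerance.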
  • Patent number: 10810427
    Abstract: Provided are operations including: receiving, with one or more processors of a robot, an image of an environment from an imaging device separate from the robot; obtaining, with the one or more processors, raw pixel intensity values of the image; extracting, with the one or more processors, objects and features in the image by grouping pixels with similar raw pixel intensity values, and by identifying areas in the image with greatest change in raw pixel intensity values; determining, with the one or more processors, an area within a map of the environment corresponding with the image by comparing the objects and features of the image with objects and features of the map; and, inferring, with the one or more processors, one or more locations captured in the image based on the location of the area of the map corresponding with the image.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: October 20, 2020
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Chen Zhang, Hao Yuan
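
The last two steps of this abstract (matching the image's extracted objects and features against those stored in a map to infer where the image was captured) can be sketched with a simple set-similarity comparison. The `localize` function, the named regions, and the Jaccard score are all illustrative assumptions:

```python
def localize(image_features, map_regions):
    """Pick the map area whose stored features best overlap the features
    extracted from the received image, using Jaccard set similarity."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    # the best-matching area of the map implies the location captured in the image
    return max(map_regions, key=lambda name: jaccard(image_features, map_regions[name]))
```

A production system would match quantized visual features rather than labels, but the selection logic is the same.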
  • Patent number: 10809071
    Abstract: Provided is a process executed by a robot, including: traversing, to a first position, a first distance in a backward direction; after traversing the first distance, rotating 180 degrees in a first rotation; after the first rotation, traversing, to a second position, a second distance in a second direction; and after traversing the second distance, rotating 180 degrees in a second rotation such that the field of view of the sensor points in the first direction.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: October 20, 2020
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Lukas Fath, Chen Zhang
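
The maneuver in this abstract (traverse backward, rotate 180 degrees, traverse along the reversed heading, rotate 180 degrees again so the sensor faces the original direction) can be traced as a pose update. The `run_pattern` function and its pose convention are assumptions for the sketch:

```python
import math

def run_pattern(pose, d1, d2):
    """Trace: traverse d1 backward, rotate 180 degrees, traverse d2 along
    the new heading, rotate 180 degrees again so the sensor's field of
    view points back in the first direction. pose = (x, y, heading_rad)."""
    x, y, h = pose
    x, y = x - d1 * math.cos(h), y - d1 * math.sin(h)   # first distance, backward
    h = (h + math.pi) % (2 * math.pi)                   # first 180-degree rotation
    x, y = x + d2 * math.cos(h), y + d2 * math.sin(h)   # second distance, new heading
    h = (h + math.pi) % (2 * math.pi)                   # second 180-degree rotation
    return (x, y, h)
```

Starting at the origin facing along +x, backing up 2 units and then driving 3 along the reversed heading leaves the robot at x = -5 with its original heading restored.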
  • Patent number: 10740920
    Abstract: Provided is a method including capturing a plurality of images by at least one sensor of a robot; aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images; and determining, with the processor of the robot, based on alignment of the data, a spatial model of the environment.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 11, 2020
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Chen Zhang, Sebastian Schweigert
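
The alignment step in this abstract (stitching images into one spatial model at their areas of overlap) can be shown with 1-D scans. The `build_model` helper and its suffix/prefix matching are simplifying assumptions, not the claimed procedure:

```python
def build_model(scans):
    """Merge consecutive overlapping scans into one spatial model by
    aligning each new scan at its area of overlap with the model so far."""
    model = list(scans[0])
    for scan in scans[1:]:
        # find the longest suffix of the model matching a prefix of the scan
        overlap = 0
        for k in range(min(len(model), len(scan)), 0, -1):
            if model[-k:] == list(scan[:k]):
                overlap = k
                break
        model.extend(scan[overlap:])  # append only the non-overlapping part
    return model
```

Each scan contributes only its unseen portion, so the model grows monotonically as the robot captures further overlapping images.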
  • Patent number: 10612929
    Abstract: Provided is a process that includes: obtaining a first version of a map of a workspace; selecting a first undiscovered area of the workspace; in response to selecting the first undiscovered area, causing the robot to move to a position and orientation to sense data in at least part of the first undiscovered area; and obtaining an updated version of the map mapping a larger area of the workspace than the first version.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: April 7, 2020
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Chen Zhang, Lukas Fath
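
The loop in this abstract (select an undiscovered area, move there to sense it, and obtain a map covering a larger area) is essentially frontier-driven exploration. The grid encoding, `explore_step`, and the nearest-first selection rule below are assumptions of the sketch:

```python
def explore_step(grid, robot):
    """One exploration step: pick the nearest undiscovered cell (None),
    'move' the robot there, and mark it plus in-grid neighbours as sensed.
    grid is a dict {(x, y): None | 'free'}; returns the new position."""
    unknown = [p for p, v in grid.items() if v is None]
    if not unknown:
        return robot  # map complete, nothing left to discover
    # select the closest undiscovered area (Manhattan distance)
    target = min(unknown, key=lambda p: abs(p[0] - robot[0]) + abs(p[1] - robot[1]))
    # sensing from the new position updates the map over a larger area
    for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        cell = (target[0] + dx, target[1] + dy)
        if cell in grid and grid[cell] is None:
            grid[cell] = "free"
    return target
```

Repeating the step until no `None` cells remain yields successively larger mapped versions of the workspace.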
  • Patent number: 10613541
    Abstract: A system and method for devising a surface coverage scheme within a workspace. Space within a two-dimensional map of the workspace is identified as free, occupied, or unknown, and the map is divided into a grid of cells. A loop-free spanning tree is constructed over all free cells within the grid, and the robotic device is programmed to drive along the outside edge of the spanning tree, covering all portions of each free cell at least once upon completing the path. The system monitors several performance parameters during each work session and assigns negative rewards based on these parameters; a large positive reward is assigned upon completion of the surface coverage. Spanning trees that differ at least slightly are compared to determine which spanning tree produces the highest reward. Because the system is programmed to attempt to maximize rewards at all times, it learns the best eventual method, or policy, for servicing the workspace.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: April 7, 2020
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia, Sebastian Schweigert
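
The reward scheme in this abstract (negative rewards for per-session costs, a large positive reward on completed coverage, and selection of the spanning tree that maximizes the total) can be reduced to a scoring function. `best_policy`, the candidate fields, and the specific reward values are illustrative assumptions, not the claimed learning procedure:

```python
def best_policy(candidates, step_cost=1.0, completion_reward=100.0):
    """Score candidate coverage paths: a small negative reward per unit
    travelled and one large positive reward for completing coverage;
    keep the candidate spanning tree with the highest total reward."""
    def score(path_length, completed):
        total = -step_cost * path_length       # accumulated negative rewards
        if completed:
            total += completion_reward         # large bonus for full coverage
        return total
    return max(candidates, key=lambda c: score(c["length"], c["completed"]))
```

A shorter path that fails to finish loses to a longer one that completes coverage, which is the behaviour the reward shaping is meant to induce.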
  • Patent number: 10482619
    Abstract: Provided is a method and apparatus for combining perceived depths to construct a floor plan using cameras, such as depth cameras. The camera(s) perceive depths to objects within a first field of view, are rotated to observe a second field of view partly overlapping the first, and then perceive depths to objects within the second field of view. The depths from the first and second fields of view are compared to find the area of overlap between the two fields of view, and the depths from the two fields of view are merged at the area of overlap to create a segment of a floor plan. The method is repeated, with depths perceived within consecutively overlapping fields of view combined to construct a floor plan of the environment as the camera is rotated.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: November 19, 2019
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Chen Zhang
  • Patent number: 10422648
    Abstract: Provided is a process, including: obtaining, with one or more processors, first depth data, wherein: the first depth data indicates a first distance from a robot at a first position to a surface of, or in, a workspace in which the robot is disposed, the first depth data indicates a first direction in which the first distance is measured, the first depth data indicates the first distance and the first direction in a frame of reference of the robot, and the frame of reference of the robot is different from a frame of reference of the workspace; translating, with one or more processors, the first depth data into translated first depth data that is in the frame of reference of the workspace; and storing, with one or more processors, the translated first depth data in memory.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: September 24, 2019
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Lukas Fath, Chen Zhang
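
The translation step in this abstract (converting a distance-and-direction reading from the robot's frame of reference into the workspace's frame) is a standard rigid-body transform. The `to_workspace_frame` function and its pose convention are assumptions of the sketch:

```python
import math

def to_workspace_frame(robot_pose, distance, direction):
    """Translate a depth reading (distance, direction) taken in the robot's
    frame into a point in the workspace frame.
    robot_pose = (x, y, heading_rad); direction is relative to the heading."""
    x, y, heading = robot_pose
    theta = heading + direction          # absolute bearing in the workspace frame
    return (x + distance * math.cos(theta),
            y + distance * math.sin(theta))
```

For a robot at (1, 2) facing +y, a reading of 3 units straight ahead lands at roughly (1, 5) in workspace coordinates, ready to be stored in the map.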
  • Publication number: 20190121361
    Abstract: Provided is a process executed by a robot, including: traversing, to a first position, a first distance in a backward direction; after traversing the first distance, rotating 180 degrees in a first rotation; after the first rotation, traversing, to a second position, a second distance in a second direction; and after traversing the second distance, rotating 180 degrees in a second rotation such that the field of view of the sensor points in the first direction.
    Type: Application
    Filed: October 17, 2018
    Publication date: April 25, 2019
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Lukas Fath, Chen Zhang
  • Publication number: 20190120633
    Abstract: Provided is a process that includes: obtaining a first version of a map of a workspace; selecting a first undiscovered area of the workspace; in response to selecting the first undiscovered area, causing the robot to move to a position and orientation to sense data in at least part of the first undiscovered area; and obtaining an updated version of the map mapping a larger area of the workspace than the first version.
    Type: Application
    Filed: October 17, 2018
    Publication date: April 25, 2019
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Chen Zhang, Lukas Fath
  • Publication number: 20190114798
    Abstract: Provided is a process, including: obtaining, with one or more processors, first depth data, wherein: the first depth data indicates a first distance from a robot at a first position to a surface of, or in, a workspace in which the robot is disposed, the first depth data indicates a first direction in which the first distance is measured, the first depth data indicates the first distance and the first direction in a frame of reference of the robot, and the frame of reference of the robot is different from a frame of reference of the workspace; translating, with one or more processors, the first depth data into translated first depth data that is in the frame of reference of the workspace; and storing, with one or more processors, the translated first depth data in memory.
    Type: Application
    Filed: October 17, 2018
    Publication date: April 18, 2019
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Lukas Fath, Chen Zhang
  • Publication number: 20190035099
    Abstract: Provided is a method and apparatus for combining perceived depths to construct a floor plan using cameras, such as depth cameras. The camera(s) perceive depths to objects within a first field of view, are rotated to observe a second field of view partly overlapping the first, and then perceive depths to objects within the second field of view. The depths from the first and second fields of view are compared to find the area of overlap between the two fields of view, and the depths from the two fields of view are merged at the area of overlap to create a segment of a floor plan. The method is repeated, with depths perceived within consecutively overlapping fields of view combined to construct a floor plan of the environment as the camera is rotated.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 31, 2019
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Chen Zhang
  • Publication number: 20190035100
    Abstract: Provided is a process, including: obtaining, with a robot, raw pixel intensity values of a first image and raw pixel intensity values of a second image, wherein the first image and the second image are taken from different positions; determining, with one or more processors, an overlapping area of the fields of view of the first and second images by comparing the raw pixel intensity values of the two images; spatially aligning, with one or more processors, values based on sensor readings of the robot using the overlapping area; and inferring, with one or more processors, features of a working environment of the robot based on the spatially aligned sensor readings.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 31, 2019
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Chen Zhang