Patents by Inventor Markus Schlattmann

Markus Schlattmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240161427
    Abstract: Systems and methods enable providing various virtual activities. One of the methods comprises obtaining images of a physical area acquired by an imaging device, generating a virtual activity world representative of the physical area, the generating comprising using at least part of the images to map physical elements of a plurality of physical elements to the virtual activity world, wherein the physical elements include a given physical element movable by a user, and adding one or more virtual objects to the virtual activity world, using the virtual activity world to detect an interaction between the given physical element mapped to the virtual activity world and a given virtual object, responsive to a detected interaction, determining an outcome of the interaction, applying a change in the virtual activity world corresponding to the outcome, and displaying a representation of the virtual activity world on a display device.
    Type: Application
    Filed: March 7, 2022
    Publication date: May 16, 2024
    Inventors: Robert BIEHL, Arno MITTELBACH, Markus SCHLATTMANN, Thomas BADER
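The application above (20240161427) describes mapping a user-movable physical element into a virtual world and reacting to its interactions with virtual objects. Below is a minimal sketch of that interaction-detection step; the proximity test, threshold, and data layout are illustrative assumptions, not the claimed method.

```python
import numpy as np

def detect_interactions(element_pos, virtual_objects, radius=0.2):
    """Return the virtual objects whose centers lie within `radius` metres
    of the physical element's position mapped into the virtual world."""
    hits = []
    for obj in virtual_objects:
        if np.linalg.norm(np.asarray(obj["pos"]) - np.asarray(element_pos)) < radius:
            hits.append(obj)
    return hits

# Toy usage: a hand-held racket mapped to (0.10, 1.00, 0.50) reaches a virtual ball.
world = [{"id": "ball", "pos": (0.15, 1.05, 0.50), "hit": False}]
for obj in detect_interactions((0.10, 1.00, 0.50), world):
    obj["hit"] = True          # outcome of the interaction applied to the virtual world
print(world)
```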
  • Patent number: 11734854
    Abstract: A system, method or computer program product for estimating an absolute 3D location of at least one object x imaged by a single camera, the system including processing circuitry configured for identifying an interaction, at time t, of object x with an object y imaged with said object x by said single camera, typically including logic for determining object y's absolute 3D location at time t, and providing an output indication of object x's absolute location at time t, derived from the 3D location, as known, at time t, of object y.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: August 22, 2023
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Philipp Huelsdunk, Robert Biehl, Markus Schlattmann
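Patent 11734854 above turns a single-camera observation into an absolute 3D location by exploiting an interaction with an object whose location is known. One hedged reading, sketched with assumed specifics (object y is a known ground plane, object x is a ball touching it): at the contact instant, the camera ray through x intersected with y's plane gives x's absolute position.

```python
import numpy as np

def locate_on_contact(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the camera ray through object x with the known plane of object y;
    at the moment of contact this intersection is x's absolute 3D position."""
    o, d = np.asarray(ray_origin, float), np.asarray(ray_dir, float)
    p, n = np.asarray(plane_point, float), np.asarray(plane_normal, float)
    t = np.dot(p - o, n) / np.dot(d, n)
    return o + t * d

# Toy usage: camera 2 m above the floor, ball seen along (0.1, -0.5, 1), floor is y = 0.
print(locate_on_contact((0, 2, 0), (0.1, -0.5, 1.0), (0, 0, 0), (0, 1, 0)))
```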
  • Publication number: 20230148135
    Abstract: The presently disclosed subject matter includes a computerized system and method of tracking and characterizing dynamics using at least one ML model configured to operate well under conditions where the data input rate is varying as well as conditions where the data input rate is constant.
    Type: Application
    Filed: December 14, 2020
    Publication date: May 11, 2023
    Inventors: Omair GHORI, Michael LEIGSNERING, Grigory ALEXANDROVICH, Saad JAVED, Markus SCHLATTMANN, Robert BIEHL
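The abstract of 20230148135 is high level; as one generic illustration of coping with a varying input rate (not the patented ML model), the predict step below is parameterized by the actual elapsed time between samples, so the same tracker update works for both constant and irregular rates.

```python
import numpy as np

def predict(state, cov, dt, q=1e-2):
    """Constant-velocity Kalman predict step for state [x, y, vx, vy],
    using the true elapsed time dt so irregular sample rates are handled."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    return F @ state, F @ cov @ F.T + q * dt * np.eye(4)

state, cov = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
for dt in (0.033, 0.120, 0.033):          # varying gaps between incoming detections
    state, cov = predict(state, cov, dt)
print(state[:2])                          # predicted position after ~0.19 s
```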
  • Publication number: 20220375126
    Abstract: A system, method or computer program product for estimating an absolute 3D location of at least one object x imaged by a single camera, the system including processing circuitry configured for identifying an interaction, at time t, of object x with an object y imaged with said object x by said single camera, typically including logic for determining object y's absolute 3D location at time t, and providing an output indication of object x's absolute location at time t, derived from the 3D location, as known, at time t, of object y.
    Type: Application
    Filed: May 26, 2022
    Publication date: November 24, 2022
    Inventors: Philipp HUELSDUNK, Robert BIEHL, Markus SCHLATTMANN
  • Publication number: 20220277463
    Abstract: The presently disclosed subject matter includes a computerized device that comprises a processing circuitry, and one or more sensors including at least one image sensor configured to continuously capture images of an environment; the processing circuitry is configured to process the captured images and: detect an object within the environment; track a trajectory of movement of the object; select a motion model best fitting the trajectory and determine a state of the object based on the best fitting motion model; and determine at least one intersection point between the object and a target object.
    Type: Application
    Filed: August 26, 2020
    Publication date: September 1, 2022
    Inventors: Markus SCHLATTMANN, Thomas BADER, Robert BIEHL, Philipp HUELSDUNK, Paul IDSTEIN, Michael LEIGSNERING, Sergey SUKHANOV
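For the motion-model selection and intersection step of 20220277463, here is a rough sketch with assumed specifics (two candidate models, a least-squares residual criterion, intersection with the plane z = 0): fit each model to the tracked trajectory, keep the better fit, and extrapolate it to the target.

```python
import numpy as np

def fit_poly(t, z, deg):
    """Least-squares fit of one coordinate against time; returns (coeffs, residual)."""
    coeffs = np.polyfit(t, z, deg)
    return coeffs, float(np.sum((np.polyval(coeffs, t) - z) ** 2))

t = np.array([0.0, 0.1, 0.2, 0.3])
z = np.array([2.0, 1.95, 1.80, 1.56])            # height samples of a thrown object

linear, ballistic = fit_poly(t, z, 1), fit_poly(t, z, 2)
best_coeffs, _ = min((linear, ballistic), key=lambda m: m[1])   # best fitting motion model

# Intersection with the target plane z = 0: first root of the model after the last sample.
roots = np.roots(best_coeffs)
t_hit = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > t[-1])
print(f"predicted intersection at t = {t_hit:.2f} s")
```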
  • Patent number: 11373331
    Abstract: A system, method or computer program product for estimating an absolute 3D location of at least one object x imaged by a single camera, the system including processing circuitry configured for identifying an interaction, at time t, of object x with an object y imaged with said object x by said single camera, typically including logic for determining object y's absolute 3D location at time t, and providing an output indication of object x's absolute location at time t, derived from the 3D location, as known, at time t, of object y.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: June 28, 2022
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Philipp Huelsdunk, Robert Biehl, Markus Schlattmann
  • Publication number: 20210192783
    Abstract: A system, method or computer program product for estimating an absolute 3D location of at least one object x imaged by a single camera, the system including processing circuitry configured for identifying an interaction, at time t, of object x with an object y imaged with said object x by said single camera, typically including logic for determining object y's absolute 3D location at time t, and providing an output indication of object x's absolute location at time t, derived from the 3D location, as known, at time t, of object y.
    Type: Application
    Filed: December 18, 2019
    Publication date: June 24, 2021
    Inventors: Philipp HUELSDUNK, Robert BIEHL, Markus SCHLATTMANN
  • Publication number: 20200311392
    Abstract: There is provided a computerized method for determining attention of an audience individual. The method comprises receiving a tracked sequence of an individual's attention directions, where the tracked sequence is indicative of a direction of attention of the audience individual in each frame of a first subset of a first series of frames. The first series of frames is associated with a first time interval. The method also comprises receiving a motion trajectory of a moving object, which is indicative of a location of the moving object in each frame of a second subset of a second series of frames. The second series of frames is associated with a second time interval that corresponds to the first time interval. The method further comprises processing the tracked sequence and the motion trajectory for determining an attention score of the audience individual towards the moving object.
    Type: Application
    Filed: March 27, 2019
    Publication date: October 1, 2020
    Inventors: Rohit MANDE, Markus SCHLATTMANN, Stefan DIENER
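A hedged sketch of how the two inputs of 20200311392 (per-frame attention directions, per-frame object locations) could be combined into a score; the cosine-alignment formulation and the clipping are assumptions, not the claimed method.

```python
import numpy as np

def attention_score(head_pos, attention_dirs, obj_positions):
    """Mean cosine alignment, over corresponding frames, between the individual's
    attention direction and the direction from the individual to the moving object."""
    scores = []
    for a, obj in zip(attention_dirs, obj_positions):
        to_obj = np.asarray(obj, float) - head_pos
        to_obj /= np.linalg.norm(to_obj)
        a = np.asarray(a, float) / np.linalg.norm(a)
        scores.append(max(0.0, float(np.dot(a, to_obj))))     # ignore looking away
    return float(np.mean(scores))

head = np.array([0.0, 0.0, 0.0])
gaze = [(1, 0, 0), (0.9, 0.1, 0), (0.5, 0.5, 0)]        # tracked attention directions
ball = [(5, 0, 0), (5, 0.5, 0), (5, 3, 0)]              # moving object trajectory
print(attention_score(head, gaze, ball))
```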
  • Patent number: 9953245
    Abstract: A method and system of identifying in an image captured by a second camera a target object captured by a first camera are disclosed. The method comprises using an image captured by the first camera for generating a first set of objects comprising the target object and other objects; classifying each object to generate a reference group of attribute values characterizing the objects; using an image captured by the second camera for generating second sets of objects; classifying each object in each second set to generate, for each second set, a corresponding group of attribute values characterizing the objects in the corresponding second set; selecting the second set of objects corresponding to the group of attribute values best matching the reference group of attribute values; and identifying the target object in the selected second set of objects in accordance with a position of the target object in the first set of objects.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: April 24, 2018
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Sebastian Palacio, Stephan Krauss, Jan Hirzel, Didier Stricker, Markus Schlattmann, Sebastian Hohmann
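The selection step of patent 9953245, sketched with an invented attribute representation: each set of objects is summarized by a group of per-object attribute values, the second-camera set whose group best matches the reference group is selected, and the target is then picked out by its position within that set.

```python
import numpy as np

def group_distance(ref, cand):
    """Sum of per-object attribute distances between two equally ordered groups."""
    return sum(np.linalg.norm(np.asarray(r) - np.asarray(c)) for r, c in zip(ref, cand))

# Per-object attribute vectors (e.g. colour summary, size), in travel order.
reference = [(0.9, 0.1, 1.2), (0.2, 0.8, 0.9), (0.5, 0.5, 2.0)]   # camera 1, target at index 1
candidates = {
    "set_a": [(0.8, 0.2, 1.1), (0.3, 0.7, 1.0), (0.5, 0.6, 2.1)],
    "set_b": [(0.1, 0.1, 0.5), (0.9, 0.9, 1.5), (0.4, 0.2, 0.7)],
}

best = min(candidates, key=lambda k: group_distance(reference, candidates[k]))
target_in_camera2 = candidates[best][1]      # same position as in the reference set
print(best, target_in_camera2)
```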
  • Patent number: 9942450
    Abstract: A method for automatically matching video streams from two cameras of a camera network includes obtaining a video stream of frames that are acquired by each of the cameras. Each video stream includes images of moving objects. A time signature for each of the video streams is calculated. Each time signature is indicative of a time at which an image of one of the objects is located at a predetermined part of the frame. A temporal offset of one of the signatures relative to the other signature is calculated such that, when applied to one of the signatures, a correspondence between the signatures is maximized. The temporal offset is applicable to video streams that are acquired by the two cameras to determine if a moving object that is imaged by one of the cameras is identical to a moving object that is imaged by the other camera.
    Type: Grant
    Filed: July 7, 2015
    Date of Patent: April 10, 2018
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Markus Schlattmann, Stefan Diener
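The temporal-offset step of patent 9942450 can be illustrated by cross-correlating the two time signatures; the binary "object crosses a reference line in this frame" encoding below is an assumption made for the sketch.

```python
import numpy as np

def best_offset(sig_a, sig_b):
    """Temporal offset (in frames) between two time signatures, chosen so that
    their cross-correlation, i.e. the correspondence between them, is maximal."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    return int(np.argmax(corr)) - (len(sig_b) - 1)

# 1 where an object's image reaches the predetermined part of the frame, else 0.
cam1 = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0], float)
cam2 = np.array([0, 0, 0, 1, 0, 0, 1, 0, 0, 0], float)   # same events, two frames later

print(best_offset(cam1, cam2))   # -2: events appear in cam1 two frames before cam2
```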
  • Patent number: 9928594
    Abstract: A method for automatic spatial calibration of a network of cameras along a road includes processing a frame that is obtained from each camera of the network to automatically identify an image of a pattern of road markings that have a known spatial relationship to one another. The identified images are used to calculate a position of each camera relative to the pattern of road markings that is imaged by that camera. Geographical information is applied to calculate an absolute position of a field of view of each camera. A global optimization is applied to adjust the absolute position of the field of view of each camera of the camera network relative to an absolute position of the fields of view of other cameras of the camera network.
    Type: Grant
    Filed: July 7, 2015
    Date of Patent: March 27, 2018
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Henning Hamer, Arsenii Sawchuk, Markus Schlattmann, Roel Heremans
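For the per-camera step of patent 9928594, a sketch under assumptions not stated above (an OpenCV-style PnP solver, a known rectangular marking, known intrinsics): once the image of the road-marking pattern is identified, its known geometry yields the camera's pose relative to the markings, which geographic information can then anchor absolutely.

```python
import numpy as np
import cv2

# Known geometry of a road-marking pattern (metres, in the road plane z = 0).
marking_world = np.array([[0, 0, 0], [3, 0, 0], [3, 0.15, 0], [0, 0.15, 0]], np.float32)
# Pixel positions of the same four marking corners identified in this camera's frame.
marking_image = np.array([[410, 620], [780, 605], [778, 590], [415, 602]], np.float32)

K = np.array([[1200, 0, 640], [0, 1200, 360], [0, 0, 1]], np.float32)   # camera intrinsics
ok, rvec, tvec = cv2.solvePnP(marking_world, marking_image, K, None)

R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec        # camera position relative to the marking pattern
print(ok, camera_position.ravel())
```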
  • Publication number: 20180070013
    Abstract: A method, system, and computer program product for stabilizing frames, the method comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
    Type: Application
    Filed: October 31, 2017
    Publication date: March 8, 2018
    Inventors: Markus SCHLATTMANN, Rohit MANDE
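A sketch of the stabilization pipeline listed in 20180070013, using OpenCV feature matching as an assumed stand-in for the salient-feature steps; RANSAC inside estimateAffinePartial2D is a simplification of the "drop advancing / shaking feature points" steps, and only the translation part is smoothed here.

```python
import cv2
import numpy as np

def pairwise_translation(prev_gray, cur_gray):
    """Translation between consecutive frames from matched ORB keypoints;
    RANSAC discards points on independently moving (advancing/shaking) objects."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M[:, 2]                                    # (dx, dy) of the estimated transform

def stabilize(frames):
    """Warp the current (last) frame toward the center position of the sequence."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    shifts = np.cumsum([pairwise_translation(a, b) for a, b in zip(grays, grays[1:])], axis=0)
    correction = shifts.mean(axis=0) - shifts[-1]     # move current frame to the center
    M = np.float32([[1, 0, correction[0]], [0, 1, correction[1]]])
    h, w = frames[-1].shape[:2]
    return cv2.warpAffine(frames[-1], M, (w, h))

# Usage: stabilized = stabilize([frame_t_minus_2, frame_t_minus_1, frame_t])
```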
  • Patent number: 9838604
    Abstract: A method, system, and computer program product for stabilizing frames, the method comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: December 5, 2017
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Markus Schlattmann, Rohit Mande
  • Publication number: 20170344855
    Abstract: Methods and systems for determining the likelihood of a traffic collision between vehicles of different vehicle classes at an intersection are disclosed. Data informative of an intersection model is obtained. A first and second vehicle appearing in image data are classified, their trajectories extracted, and a plurality of reference trajectories associated with their respective classes are selected from the intersection model. Conflicting pairs of reference trajectories are identified. For each pair, the first vehicle is mapped to a point on the first reference trajectory and the second vehicle is mapped to a point on the second reference trajectory. Data indicative of the likelihood of a collision is generated, and a warning is generated when the generated data satisfy a predetermined criterion.
    Type: Application
    Filed: May 24, 2016
    Publication date: November 30, 2017
    Inventors: Rohit MANDE, Markus SCHLATTMANN
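A minimal sketch of the final scoring step of 20170344855, under assumed specifics (vehicles mapped to the nearest point of their class's reference trajectory, constant speed along it, a fixed arrival-time-gap criterion):

```python
import numpy as np

def time_to_conflict(ref_traj, vehicle_pos, speed, conflict_idx):
    """Arc length from the vehicle's mapped point on its reference trajectory to the
    conflict point of the conflicting trajectory pair, divided by the vehicle's speed."""
    traj = np.asarray(ref_traj, float)
    i = int(np.argmin(np.linalg.norm(traj - vehicle_pos, axis=1)))   # map vehicle to trajectory
    return np.linalg.norm(np.diff(traj[i:conflict_idx + 1], axis=0), axis=1).sum() / speed

# Two conflicting reference trajectories (car class, bike class) crossing at (10, 0).
traj_car  = [(0, 0), (5, 0), (10, 0), (15, 0)]
traj_bike = [(10, -9), (10, -6), (10, -3), (10, 0)]

t_car  = time_to_conflict(traj_car,  np.array([3.0, 0.2]),  speed=7.0, conflict_idx=2)
t_bike = time_to_conflict(traj_bike, np.array([10.0, -7.0]), speed=4.0, conflict_idx=3)

if abs(t_car - t_bike) < 1.5:        # similar arrival times -> high collision likelihood
    print(f"warning: likely conflict (car {t_car:.2f} s, bike {t_bike:.2f} s)")
```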
  • Patent number: 9679200
    Abstract: In queues, persons, or objects (120) in general, move inside an area (110) to a target (112), such as a counter. The queue has movement characteristics in terms of speed, waiting times and queue form. A computer-implemented approach obtains the characteristics by receiving a sequence (140) of image frames (141, 142, 143, 49) that represent the surveillance area (110); calculating flow vectors that indicate an optical displacement for the sequence (140) of image frames (141/142, 142/143); extending one of the flow vectors as lead vector in extension directions; determining intermediate vectors by using flow vectors along the extension directions; and selecting one of the intermediate vectors as the new lead vector. The steps are repeated to concatenate the lead vectors to the movement characteristics (190).
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: June 13, 2017
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Markus Schlattmann, Henning Hamer, Ulf Blanke
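The flow-following idea of patent 9679200, reduced for illustration to "compute dense optical flow (OpenCV assumed), then repeatedly step along the local displacement starting from a lead vector"; the patent's extension directions and intermediate-vector selection are simplified away.

```python
import cv2
import numpy as np

def follow_flow(prev_gray, cur_gray, start, steps=20):
    """Trace a path through the dense optical-flow field by stepping repeatedly
    along the local displacement, concatenating the visited positions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    pos = np.asarray(start, float)
    path = [pos.copy()]
    for _ in range(steps):
        x, y = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= y < flow.shape[0] and 0 <= x < flow.shape[1]):
            break
        step = flow[y, x]                    # optical displacement (dx, dy) at this pixel
        if np.linalg.norm(step) < 0.1:       # flow died out: end of the queue movement
            break
        pos = pos + step
        path.append(pos.copy())
    return path                              # concatenated movement characteristic
```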
  • Patent number: 9633434
    Abstract: A computer-implemented method, computer program product, and computer system for determining camera calibration data. The computer system receives geo-positional data of a moving object, wherein the geo-positional data is associated with an indicator (112). The computer system further receives a sequence of frames (140) from the at least one camera (150), wherein at least one frame has a picture of the moving object (118) with a structure and with an encoded version of the indicator which are optically recognizable. The indicator associated with the at least one frame is extracted by decoding (170) the optically encoded version of the indicator of the at least one frame. The geo-positional data of the moving object which is in the picture of the at least one frame is obtained by matching (172) the indicator associated with the geo-positional data of the moving object and the decoded indicator associated with the at least one frame.
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: April 25, 2017
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Markus Schlattmann, Ulf Blanke
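Only the matching logic of patent 9633434 is sketched here; the indicator decoder and the log format are hypothetical placeholders. A frame's decoded indicator is paired with the geo-positional record carrying the same indicator, giving the image-to-world correspondences a calibration solver needs.

```python
# Geo-positional log of the moving object, keyed by the indicator it displayed.
gps_log = {17: (49.8728, 8.6512), 18: (49.8729, 8.6515), 19: (49.8731, 8.6519)}

def decode_indicator(frame):
    """Hypothetical decoder for the optically encoded indicator shown on the
    moving object (e.g. a number on a panel). Stubbed for this sketch."""
    return frame["visible_code"]

frames = [{"visible_code": 17, "object_pixel": (812, 430)},
          {"visible_code": 19, "object_pixel": (640, 455)}]

# Pair each frame's pixel observation with the geo position whose indicator matches.
correspondences = [(f["object_pixel"], gps_log[decode_indicator(f)]) for f in frames]
print(correspondences)     # input to the actual camera-calibration solver
```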
  • Publication number: 20170111585
    Abstract: A method, system, and computer program product for stabilizing frames, the method comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
    Type: Application
    Filed: October 15, 2015
    Publication date: April 20, 2017
    Inventors: Markus SCHLATTMANN, Rohit MANDE
  • Publication number: 20170004386
    Abstract: A method and system of identifying in an image captured by a second camera a target object captured by a first camera are disclosed. The method comprises using an image captured by the first camera for generating a first set of objects comprising the target object and other objects; classifying each object to generate a reference group of attribute values characterizing the objects; using an image captured by the second camera for generating second sets of objects; classifying each object in each second set to generate, for each second set, a corresponding group of attribute values characterizing the objects in the corresponding second set; selecting the second set of objects corresponding to the group of attribute values best matching the reference group of attribute values; and identifying the target object in the selected second set of objects in accordance with a position of the target object in the first set of objects.
    Type: Application
    Filed: August 19, 2016
    Publication date: January 5, 2017
    Inventors: Sebastian PALACIO, Stephan KRAUSS, Jan HIRZEL, Didier STRICKER, Markus SCHLATTMANN, Sebastian HOHMANN
  • Patent number: 9449258
    Abstract: A target object captured by a first camera is identified in images captured by a second camera. A reference platoon comprising the target object and other objects is generated using first camera images. A reference group characterizing the objects in the reference platoon is generated by running a first set of trained classifiers over the reference platoon, the first set of trained classifiers trained to characterize objects captured by the first camera. Candidate platoons are generated using second camera images. Candidate groups characterizing objects in the candidate platoons are obtained by running an independently trained second set of classifiers over the candidate platoons, the second set characterizing objects captured by the second camera. Candidate groups are compared to the reference group, and a best matching candidate platoon is selected. The target object is identified in the selected candidate platoon based on the object's position in the reference platoon.
    Type: Grant
    Filed: July 2, 2015
    Date of Patent: September 20, 2016
    Assignee: AGT INTERNATIONAL GMBH
    Inventors: Sebastian Palacio, Stephan Krauss, Jan Hirzel, Didier Stricker, Markus Schlattmann, Sebastian Hohmann
  • Publication number: 20160117833
    Abstract: Apparatus and method for extracting a background model from a video stream of frames. Each frame is divided into a rectangular array of blocks, and a block descriptor of each block is compared with the previous frame's corresponding block descriptor. When a block descriptor is substantially the same as the corresponding block descriptor of the preceding frame for at least a predetermined number of frames, then the background model is updated according to the block descriptor.
    Type: Application
    Filed: October 22, 2015
    Publication date: April 28, 2016
    Inventors: Stephan KRAUSS, Jan HIRZEL, Pablo ABAD, Didier STRICKER, Henning HAMER, Markus SCHLATTMANN
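A compact sketch of the update rule in 20160117833, with assumed specifics (mean-colour block descriptors, an L2 "substantially the same" test): a block descriptor must stay stable for N consecutive frames before it is written into the background model.

```python
import numpy as np

BLOCK, N_STABLE, TOL = 16, 30, 6.0      # block size, required stable frames, tolerance

def block_descriptors(frame):
    """Mean colour of each BLOCK x BLOCK block (frame dimensions assumed divisible)."""
    h, w, c = frame.shape
    return frame.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK, c).mean(axis=(1, 3))

def update_background(frame, prev_desc, stable_count, background):
    """Compare each block descriptor with the previous frame's; blocks that stayed
    substantially the same for N_STABLE frames are copied into the background model."""
    desc = block_descriptors(frame.astype(np.float32))
    same = np.linalg.norm(desc - prev_desc, axis=-1) < TOL
    stable_count = np.where(same, stable_count + 1, 0)
    update = stable_count >= N_STABLE
    background[update] = desc[update]
    return desc, stable_count, background

# Run once per frame, carrying (prev_desc, stable_count, background) forward.
```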