Patents by Inventor Eckehard Steinbach

Eckehard Steinbach has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240152550
    Abstract: There is provided a visual localization method comprising: (a) transmitting data representative of one or more detected visual features from a mobile device to a server; (b) estimating the location of the mobile device at the server based on the visual features received from the mobile device; (c) transmitting reference data associated with the estimated location from the server to the mobile device; and (d) the mobile device determining its location based on the reference data received from the server.
    Type: Application
    Filed: October 27, 2023
    Publication date: May 9, 2024
    Inventors: Mohammad Abu-Alqumsan, Anas Al-Nuaimi, Robert Huitl, Georg Schroth, Florian Schweiger, Eckehard Steinbach
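The four steps (a)–(d) in the abstract above describe a client/server split: the device ships compact feature data, the server returns a coarse location estimate plus reference data, and the device finishes the localization on its own. The sketch below illustrates only that message flow; the Server/MobileDevice classes, the descriptor matching and the "pose_hint" refinement are stand-ins invented for this example, not details from the application.

```python
# Sketch of the four-step client/server localization flow from the abstract.
# All class and function names here are illustrative, not from the patent.
import numpy as np


class Server:
    """Holds a database of reference descriptors keyed by coarse location."""

    def __init__(self, reference_db):
        # reference_db: {location_id: (descriptors, reference_data)}
        self.reference_db = reference_db

    def estimate_location(self, query_descriptors):
        # (b) coarse estimate: pick the location whose reference descriptors
        # are closest to the query descriptors on average.
        best_loc, best_cost = None, np.inf
        for loc, (ref_desc, _) in self.reference_db.items():
            dists = np.linalg.norm(
                query_descriptors[:, None, :] - ref_desc[None, :, :], axis=2
            )
            cost = dists.min(axis=1).mean()
            if cost < best_cost:
                best_loc, best_cost = loc, cost
        return best_loc

    def reference_data_for(self, location_id):
        # (c) return reference data (e.g. a local map or 3D points) for the estimate.
        return self.reference_db[location_id][1]


class MobileDevice:
    def localize(self, detected_descriptors, server):
        # (a) transmit detected visual features to the server.
        coarse_location = server.estimate_location(detected_descriptors)
        # (c) receive reference data associated with the estimated location.
        reference = server.reference_data_for(coarse_location)
        # (d) refine the pose locally against the reference data
        # (here just a placeholder refinement step).
        refined_pose = reference["pose_hint"]
        return coarse_location, refined_pose


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = {
        "corridor_a": (rng.normal(size=(50, 32)), {"pose_hint": (1.0, 2.0, 0.0)}),
        "corridor_b": (rng.normal(size=(50, 32)), {"pose_hint": (5.0, 1.5, 3.1)}),
    }
    server = Server(db)
    query = db["corridor_b"][0][:10] + 0.01 * rng.normal(size=(10, 32))
    print(MobileDevice().localize(query, server))
```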
  • Patent number: 11887247
    Abstract: In an embodiment of the invention there is provided a method of visual localization, comprising: generating a plurality of virtual views, wherein each of the virtual views is associated with a location; obtaining a query image; determining the location where the query image was obtained on the basis of a comparison of the query image with said virtual views.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: January 30, 2024
    Assignee: NavVis GmbH
    Inventors: Eckehard Steinbach, Robert Huitl, Georg Schroth, Sebastian Hilsenbeck
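The method in this entry pre-renders virtual views, each tagged with the location it was rendered from, and localizes a query image by comparing it against those views. Below is a minimal sketch of that comparison under simplifying assumptions (a toy intensity-histogram descriptor and a stand-in renderer); none of the helper names come from the patent.

```python
# Sketch of virtual-view based localization as described in the abstract above:
# pre-render views tagged with locations, then pick the location of the view
# most similar to the query image. Names and the similarity measure are
# illustrative assumptions, not taken from the patent.
import numpy as np


def global_descriptor(image):
    # Toy whole-image descriptor: a coarse intensity histogram.
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0), density=True)
    return hist


def build_virtual_views(render_fn, candidate_locations):
    # Each virtual view is associated with the location it was rendered from.
    return [(loc, global_descriptor(render_fn(loc))) for loc in candidate_locations]


def localize(query_image, virtual_views):
    q = global_descriptor(query_image)
    # The location of the most similar virtual view wins.
    return min(virtual_views, key=lambda lv: np.linalg.norm(lv[1] - q))[0]


if __name__ == "__main__":
    rng = np.random.default_rng(1)

    def fake_render(loc):
        # Stand-in renderer: each location produces a characteristic image.
        local_rng = np.random.default_rng(hash(loc) % (2**32))
        return np.clip(local_rng.normal(0.5, 0.1 + 0.05 * len(loc), (64, 64)), 0, 1)

    views = build_virtual_views(fake_render, ["atrium", "stairwell", "hall_2"])
    query = np.clip(fake_render("stairwell") + 0.01 * rng.normal(size=(64, 64)), 0, 1)
    print(localize(query, views))
```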
  • Publication number: 20240013636
    Abstract: Encoding device for encoding vibrotactile multichannel signals, includes an encoder input module configured to receive a multichannel signal; a transform module adapted to execute a discrete wavelet transform of each channel of the multichannel signal and to generate a respective frequency range representation of each channel; a psychohaptic model unit designed to allocate to each channel, based on the respective frequency range representation, a mathematical representation of human perception of the channel; a clustering module configured to group, based on the allocated mathematical representation and a similarity measure of the channels, the wavelet-transformed channels of each multichannel signal into clusters, wherein each cluster is allocated a reference channel; a reference encoding module designed to quantize and compress wavelet coefficients of the reference channels which result from the performed discrete wavelet transform of the reference channels; a differential encoding module configured to encode …
    Type: Application
    Filed: July 7, 2023
    Publication date: January 11, 2024
    Inventors: Andreas Noll, Lars Andreas Nockenberg, Eckehard Steinbach
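At its core the claimed encoder wavelet-transforms every channel, groups perceptually similar channels into clusters, codes one reference channel per cluster and codes only the differences for the other cluster members. The sketch below mimics that reference/differential split with a one-level Haar transform, a plain correlation threshold in place of the psychohaptic similarity measure, and a uniform quantizer; all of these simplifications are assumptions for illustration.

```python
# Rough sketch of the multichannel idea in the abstract above: wavelet-transform
# each channel, cluster similar channels, code one reference channel per cluster
# and only the differences for the rest. The one-level Haar transform, the
# similarity threshold and the uniform quantizer are simplifying assumptions;
# the patent's psychohaptic model is not reproduced here.
import numpy as np


def haar_dwt(x):
    # One-level Haar transform (stand-in for the codec's full DWT).
    x = x[: len(x) // 2 * 2].reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return np.concatenate([approx, detail])


def cluster_channels(coeffs, threshold=0.9):
    # Greedy clustering by normalized correlation; the first member of each
    # cluster serves as its reference channel.
    clusters = []
    for idx, c in enumerate(coeffs):
        for cluster in clusters:
            ref = coeffs[cluster[0]]
            corr = np.dot(c, ref) / (np.linalg.norm(c) * np.linalg.norm(ref) + 1e-12)
            if corr > threshold:
                cluster.append(idx)
                break
        else:
            clusters.append([idx])
    return clusters


def quantize(x, step=0.05):
    return np.round(x / step).astype(np.int16)


def encode(channels):
    coeffs = [haar_dwt(ch) for ch in channels]
    bitstream = []
    for cluster in cluster_channels(coeffs):
        ref_idx = cluster[0]
        bitstream.append(("ref", ref_idx, quantize(coeffs[ref_idx])))
        for other in cluster[1:]:
            # Differential coding against the cluster's reference channel.
            bitstream.append(("diff", other, quantize(coeffs[other] - coeffs[ref_idx])))
    return bitstream


if __name__ == "__main__":
    t = np.linspace(0, 1, 256, endpoint=False)
    base = np.sin(2 * np.pi * 80 * t)
    channels = [base,
                0.95 * base + 0.01 * np.cos(2 * np.pi * 5 * t),
                np.sin(2 * np.pi * 200 * t)]
    for kind, idx, payload in encode(channels):
        print(kind, idx, payload[:4])
```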
  • Patent number: 11833696
    Abstract: A method for determining the joint positions of a kinematic chain uses only an imaging sensor and a computing unit. Characteristic features on the links and joints of the kinematic chain are identified and the joint positions are calculated from these visual measurements. The robot can be controlled without the use of joint encoders. A sensor system for monitoring the status of a kinematic chain includes a computing unit and an imaging sensor. The imaging sensor may be mounted to the kinematic chain or in the surroundings of the kinematic chain and monitors the kinematic chain and/or the surroundings of the kinematic chain. The computing unit determines a pose and/or movement parameters of at least one element of the kinematic chain by analyzing an output signal of the imaging sensor, in particular by analyzing characteristic features and determines a rotational joint position by analyzing the characteristic features.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: December 5, 2023
    Assignee: Technische Universität München
    Inventors: Nicolas Alt, Clemens Schuwerk, Eckehard Steinbach, Stefan Lochbrunner, Dong Qiuwei
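The key idea of this patent is that joint positions are recovered from characteristic visual features on the links and joints instead of from joint encoders. The sketch below shows one hypothetical instance of that idea, estimating a single revolute joint angle from three tracked image features under an assumed planar setup; the marker layout and camera model are not taken from the patent.

```python
# Minimal sketch of estimating a revolute joint angle purely from image
# measurements of characteristic features, in the spirit of the entry above.
# The two-marker-per-link setup and the planar camera model are assumptions
# made for illustration, not details from the patent.
import numpy as np


def link_direction(p_joint, p_marker):
    # Direction of a link in the image plane, from the joint feature to a
    # characteristic feature further along the link.
    v = np.asarray(p_marker, float) - np.asarray(p_joint, float)
    return v / np.linalg.norm(v)


def joint_angle(p_joint, p_marker_parent, p_marker_child):
    # Signed angle between parent and child link directions (the value a
    # joint encoder would normally report).
    u = link_direction(p_joint, p_marker_parent)
    w = link_direction(p_joint, p_marker_child)
    return np.degrees(np.arctan2(u[0] * w[1] - u[1] * w[0], np.dot(u, w)))


if __name__ == "__main__":
    # Feature positions as they might be detected in one camera frame (pixels).
    joint = (320, 240)
    marker_on_parent_link = (220, 240)   # parent link points left
    marker_on_child_link = (320, 140)    # child link points up
    angle = joint_angle(joint, marker_on_parent_link, marker_on_child_link)
    print(f"estimated joint angle: {angle:.1f} deg")
```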
  • Patent number: 11803586
    Abstract: There is provided a visual localization method comprising: (a) transmitting data representative of one or more detected visual features from a mobile device to a server; (b) estimating the location of the mobile device at the server based on the visual features received from the mobile device; (c) transmitting reference data associated with the estimated location from the server to the mobile device; and (d) the mobile device determining its location based on the reference data received from the server.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: October 31, 2023
    Assignee: NavVis GmbH
    Inventors: Mohammad Abu-Alqumsan, Anas Al-Nuaimi, Robert Huitl, Georg Schroth, Florian Schweiger, Eckehard Steinbach
  • Publication number: 20230116285
    Abstract: The present disclosure relates to image modification such as an image enhancement. The image enhancement may be applied for any image modification and it may be applied during or after image encoding and/or decoding, e.g. as a loop filter or a post filter. In particular, the image modification includes a multi-channel processing in which a primary channel is processed separately and secondary channels are processed based on the processed primary channel. The processing is based on a neural network. In order to enhance the image modification performance, prior to applying the modification, the image channels are analyzed and a primary channel and the secondary channels are determined, which may vary for multiples of images, images or image areas.
    Type: Application
    Filed: December 12, 2022
    Publication date: April 13, 2023
    Inventors: Kai Cui, Atanas Boev, Elena Alexandrovna Alshina, Eckehard Steinbach
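The publication above enhances a primary channel on its own and then processes each secondary channel conditioned on the enhanced primary channel, with the primary channel chosen per image or image area. The PyTorch sketch below illustrates that control flow; the variance-based channel selection and the two tiny residual networks are placeholders, not the disclosed filter design.

```python
# Toy sketch of the primary/secondary channel idea in the abstract above.
# Channel selection by variance and the two small conv nets are illustrative
# assumptions; they do not reproduce the disclosed loop/post filter.
import torch
import torch.nn as nn


class PrimaryNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, x):
        return x + self.net(x)  # residual enhancement of the primary channel


class SecondaryNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Each secondary channel is processed together with the already
        # enhanced primary channel (2 input planes -> 1 output plane).
        self.net = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, secondary, primary_enhanced):
        return secondary + self.net(torch.cat([secondary, primary_enhanced], dim=1))


def enhance(image, primary_net, secondary_net):
    # image: (1, C, H, W). Pick the primary channel per image, e.g. by variance.
    variances = image.flatten(2).var(dim=2).squeeze(0)
    p = int(torch.argmax(variances))
    primary = primary_net(image[:, p:p + 1])
    out = image.clone()
    out[:, p:p + 1] = primary
    for c in range(image.shape[1]):
        if c != p:
            # Secondary channels are processed based on the processed primary.
            out[:, c:c + 1] = secondary_net(image[:, c:c + 1], primary)
    return out


if __name__ == "__main__":
    with torch.no_grad():
        img = torch.rand(1, 3, 32, 32)
        print(enhance(img, PrimaryNet(), SecondaryNet()).shape)
```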
  • Publication number: 20220348211
    Abstract: A method and an assistance device assist automated driving operation of a motor vehicle. Surroundings raw data recorded by way of a surroundings sensor system of the motor vehicle are processed by the assistance device in order to generate semantic surroundings data. This is accomplished by carrying out semantic object recognition. Further, a comparison of predefined semantically annotated map data against the semantic surroundings data is performed. This involves static objects indicated in the map data being identified in the semantic surroundings data as far as possible. Discrepancies detected during the process are used to recognize perception errors of the assistance device. A recognized perception error prompts a predefined safety measure to be carried out.
    Type: Application
    Filed: May 3, 2022
    Publication date: November 3, 2022
    Inventors: Markus Hofbauer, Christopher Kuhn, Goran Petrovic, Eckehard Steinbach
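The assistance device cross-checks the perceived semantic surroundings against static objects annotated in the map and treats unexplained discrepancies as perception errors that trigger a safety measure. A minimal sketch of that consistency check follows; the data layout, distance thresholds and the chosen safety measure are assumptions made for illustration.

```python
# Simple sketch of the map-versus-perception consistency check described above:
# static objects that the annotated map says should be visible but that are
# missing from the semantic surroundings data count as discrepancies, and too
# many discrepancies trigger a safety measure. Thresholds, the distance test
# and the data layout are assumptions for illustration.
import math


def expected_static_objects(map_objects, ego_position, sensor_range=50.0):
    # Static map objects close enough that the surroundings sensors should see them.
    return [obj for obj in map_objects
            if math.dist(obj["position"], ego_position) <= sensor_range]


def find_discrepancies(map_objects, detections, ego_position, match_radius=2.0):
    missing = []
    for obj in expected_static_objects(map_objects, ego_position):
        matched = any(
            det["label"] == obj["label"]
            and math.dist(det["position"], obj["position"]) <= match_radius
            for det in detections
        )
        if not matched:
            missing.append(obj)
    return missing


def check_perception(map_objects, detections, ego_position, max_missing=1):
    missing = find_discrepancies(map_objects, detections, ego_position)
    if len(missing) > max_missing:
        return "safety measure: degrade to minimal-risk maneuver", missing
    return "perception consistent with map", missing


if __name__ == "__main__":
    hd_map = [
        {"label": "traffic_sign", "position": (12.0, 3.0)},
        {"label": "guard_rail", "position": (20.0, -2.0)},
        {"label": "building", "position": (300.0, 40.0)},  # out of sensor range
    ]
    perceived = [{"label": "traffic_sign", "position": (12.3, 2.8)}]
    print(check_perception(hd_map, perceived, ego_position=(0.0, 0.0)))
```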
  • Publication number: 20210350629
    Abstract: In an embodiment of the invention there is provided a method of visual localization, comprising: generating a plurality of virtual views, wherein each of the virtual views is associated with a location; obtaining a query image; determining the location where the query image was obtained on the basis of a comparison of the query image with said virtual views.
    Type: Application
    Filed: July 23, 2021
    Publication date: November 11, 2021
    Inventors: Eckehard Steinbach, Robert Huitl, Georg Schroth, Sebastian Hilsenbeck
  • Patent number: 11113934
    Abstract: An encoding apparatus for encoding a vibrotactile signal includes a first transforming unit configured to perform a discrete wavelet transform of the signal, a second transforming unit configured to generate a frequency domain representation of the signal, a psychohaptic model unit configured to generate at least one quantization control signal based on the generated frequency domain representation of the sampled signal and on a predetermined perceptual model based on human haptic perception, a quantization unit configured to quantize wavelet coefficients resulting from the performed discrete wavelet transform and adapted by the quantization control signal, a compression unit configured to compress the quantized wavelet coefficients, and a bitstream generating unit configured to generate a bitstream corresponding to the encoded signal based on the compressed quantized wavelet coefficients.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: September 7, 2021
    Assignee: Technische Universität München
    Inventors: Andreas Noll, Basak Gülecyüz, Eckehard Steinbach
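This encoder combines a wavelet transform of the vibrotactile signal with a frequency-domain analysis whose psychohaptic model steers the quantizer before the coefficients are compressed into a bitstream. The sketch below keeps that structure but replaces the perceptual model with a toy spectral-energy weighting and uses a Haar transform plus zlib compression, all of which are simplifying assumptions.

```python
# Sketch of the single-channel pipeline in the entry above: DWT of the signal,
# a frequency-domain analysis that drives the quantizer, then compression into
# a bitstream. The "psychohaptic model" here is only a placeholder weighting
# (coarser quantization where the toy model predicts low sensitivity); the real
# perceptual model from the patent is not reproduced.
import zlib
import numpy as np


def haar_dwt(x, levels=3):
    bands = []
    for _ in range(levels):
        x = x[: len(x) // 2 * 2].reshape(-1, 2)
        bands.append((x[:, 0] - x[:, 1]) / np.sqrt(2))   # detail band
        x = (x[:, 0] + x[:, 1]) / np.sqrt(2)             # approximation
    bands.append(x)
    return bands[::-1]  # coarsest band first


def psychohaptic_steps(signal, levels=3, base_step=0.01):
    # Toy model: bands whose spectral energy is low are quantized more coarsely.
    spectrum = np.abs(np.fft.rfft(signal))
    edges = np.linspace(0, len(spectrum), levels + 2).astype(int)
    energies = np.array([spectrum[a:b].sum() + 1e-9
                         for a, b in zip(edges[:-1], edges[1:])])
    return base_step * (energies.max() / energies) ** 0.5


def encode(signal):
    bands = haar_dwt(signal)
    steps = psychohaptic_steps(signal, levels=len(bands) - 1)
    quantized = [np.round(b / s).astype(np.int16) for b, s in zip(bands, steps)]
    payload = b"".join(q.tobytes() for q in quantized)
    return zlib.compress(payload), steps


if __name__ == "__main__":
    t = np.linspace(0, 1, 2048, endpoint=False)
    vib = np.sin(2 * np.pi * 250 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
    bitstream, steps = encode(vib)
    print(len(bitstream), "bytes, step sizes:", np.round(steps, 4))
```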
  • Patent number: 11094123
    Abstract: In an embodiment of the invention there is provided a method of visual localization, comprising: generating a plurality of virtual views, wherein each of the virtual views is associated with a location; obtaining a query image; determining the location where the query image was obtained on the basis of a comparison of the query image with said virtual views.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: August 17, 2021
    Assignee: NavVis GmbH
    Inventors: Eckehard Steinbach, Robert Huitl, Georg Schroth, Sebastian Hilsenbeck
  • Publication number: 20210182334
    Abstract: There is provided a visual localization method comprising: (a) transmitting data representative of one or more detected visual features from a mobile device to a server; (b) estimating the location of the mobile device at the server based on the visual features received from the mobile device; (c) transmitting reference data associated with the estimated location from the server to the mobile device; and (d) the mobile device determining its location based on the reference data received from the server.
    Type: Application
    Filed: February 19, 2021
    Publication date: June 17, 2021
    Inventors: Mohammad Abu-Alqumsan, Anas Al-Nuaimi, Robert Huitl, Georg Schroth, Florian Schweiger, Eckehard Steinbach
  • Patent number: 10956489
    Abstract: There is provided a visual localization method comprising: (a) transmitting data representative of one or more detected visual features from a mobile device to a server; (b) estimating the location of the mobile device at the server based on the visual features received from the mobile device; (c) transmitting reference data associated with the estimated location from the server to the mobile device; and (d) the mobile device determining its location based on the reference data received from the server.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: March 23, 2021
    Assignee: NavVis GmbH
    Inventors: Mohammad Abu-Alqumsan, Anas Al-Nuaimi, Robert Huitl, Georg Schroth, Florian Schweiger, Eckehard Steinbach
  • Publication number: 20210023719
    Abstract: A method for determining the joint positions of a kinematic chain uses only an imaging sensor and a computing unit. Characteristic features on the links and joints of the kinematic chain are identified and the joint positions are calculated from these visual measurements. The robot can be controlled without the use of joint encoders. A sensor system for monitoring the status of a kinematic chain includes a computing unit and an imaging sensor. The imaging sensor may be mounted to the kinematic chain or in the surroundings of the kinematic chain and monitors the kinematic chain and/or the surroundings of the kinematic chain. The computing unit determines a pose and/or movement parameters of at least one element of the kinematic chain by analyzing an output signal of the imaging sensor, in particular by analyzing characteristic features and determines a rotational joint position by analyzing the characteristic features.
    Type: Application
    Filed: January 14, 2019
    Publication date: January 28, 2021
    Applicant: Technische Universität München
    Inventors: Nicolas Alt, Clemens Schuwerk, Eckehard Steinbach, Stefan Lochbrunner, Dong Qiuwei
  • Publication number: 20200312103
    Abstract: An encoding apparatus for encoding a vibrotactile signal includes a first transforming unit configured to perform a discrete wavelet transform of the signal, a second transforming unit configured to generate a frequency domain representation of the signal, a psychohaptic model unit configured to generate at least one quantization control signal based on the generated frequency domain representation of the sampled signal and on a predetermined perceptual model based on human haptic perception, a quantization unit configured to quantize wavelet coefficients resulting from the performed discrete wavelet transform and adapted by the quantization control signal, a compression unit configured to compress the quantized wavelet coefficients, and a bitstream generating unit configured to generate a bitstream corresponding to the encoded signal based on the compressed quantized wavelet coefficients.
    Type: Application
    Filed: March 27, 2020
    Publication date: October 1, 2020
    Inventors: Andreas Noll, Basak Gülecyüz, Eckehard Steinbach
  • Publication number: 20200167382
    Abstract: There is provided a visual localization method comprising: (a) transmitting data representative of one or more detected visual features from a mobile device to a server; (b) estimating the location of the mobile device at the server based on the visual features received from the mobile device; (c) transmitting reference data associated with the estimated location from the server to the mobile device; and (d) the mobile device determining its location based on the reference data received from the server.
    Type: Application
    Filed: January 29, 2020
    Publication date: May 28, 2020
    Inventors: Mohammad Abu-Alqumsan, Anas Al-Nuaimi, Robert Huitl, Georg Schroth, Florian Schweiger, Eckehard Steinbach
  • Patent number: 10585938
    Abstract: There is provided a visual localization method comprising: (a) transmitting data representative of one or more detected visual features from a mobile device to a server; (b) estimating the location of the mobile device at the server based on the visual features received from the mobile device; (c) transmitting reference data associated with the estimated location from the server to the mobile device; and (d) the mobile device determining its location based on the reference data received from the server.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: March 10, 2020
    Assignee: NavVis GmbH
    Inventors: Mohammad Abu-Alqumsan, Anas Al-Nuaimi, Robert Huitl, Georg Schroth, Florian Schweiger, Eckehard Steinbach
  • Publication number: 20190376860
    Abstract: The present disclosure presents a visuo-haptic sensor, which is based on a passive, deformable element whose deformation is observed by a camera. In particular, the improved simplified sensor may determine the force and/or torque applied to a point of the sensor in multiple spatial dimensions.
    Type: Application
    Filed: August 26, 2019
    Publication date: December 12, 2019
    Inventors: Nicolas Alt, Eckehard Steinbach, Martin Freundl
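The sensor is passive: a camera observes how a deformable element deforms, and the observed deformation is translated into the applied force and/or torque. The sketch below assumes tracked markers on the element and a pre-computed linear calibration matrix; both are illustrative choices, not details from the disclosure.

```python
# Sketch of the visuo-haptic sensing idea in the entry above: the camera tracks
# markers on a passive deformable element, and the observed displacement is
# mapped to an applied force/torque through a calibration. The linear
# (stiffness-matrix) calibration below is an assumption chosen for simplicity.
import numpy as np


def observed_displacement(rest_markers, current_markers):
    # Per-marker 2D displacement in the image, stacked into one vector.
    return (np.asarray(current_markers, float) - np.asarray(rest_markers, float)).ravel()


def estimate_wrench(displacement, calibration):
    # calibration: matrix mapping marker displacements (pixels) to, e.g.,
    # [Fx, Fy, Fz, Tz]; obtained beforehand by applying known loads.
    return calibration @ displacement


if __name__ == "__main__":
    rest = [(100, 100), (140, 100), (120, 140)]      # marker positions, unloaded
    loaded = [(103, 101), (143, 102), (122, 141)]    # marker positions, under load
    rng = np.random.default_rng(2)
    calib = rng.normal(scale=0.1, size=(4, 6))       # toy calibration matrix
    d = observed_displacement(rest, loaded)
    print("displacement:", d)
    print("estimated wrench [Fx, Fy, Fz, Tz]:", np.round(estimate_wrench(d, calib), 3))
```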
  • Patent number: 10481693
    Abstract: The disclosure relates to a manually operable tactile input/output device for a data processing system, in particular a tactile computer mouse, which is configured with a housing and a plurality of actuators in operative connection with the housing and in which the actuators are arranged and jointly controlled in the housing to cause at least temporarily a tactile perception by a user in manual contact with the housing, by which simultaneously at least two and in particular all five tactile perception dimensions of microscopic roughness, macroscopic roughness, friction, hardness and heat are simulated. Furthermore, the disclosure relates to an operating method for a manually operable tactile input/output device and a data processing system.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: November 19, 2019
    Assignee: Technische Universität München
    Inventors: Matti Strese, Kevin Kuonath, Andreas Noll, Eckehard Steinbach, Martin Piccolrovazzi
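The device drives several actuators in the housing jointly so that microscopic roughness, macroscopic roughness, friction, hardness and heat are perceived at the same time. The sketch below maps a simple five-parameter surface model to a set of hypothetical actuator channels; the channel names and mapping functions are assumptions, not the patented control scheme.

```python
# Sketch of jointly driving several actuators so that the five perception
# dimensions named in the abstract (microscopic/macroscopic roughness, friction,
# hardness, heat) are rendered at the same time. The actuator set and the
# mapping functions are illustrative assumptions, not the patented design.
from dataclasses import dataclass


@dataclass
class SurfaceModel:
    micro_roughness: float   # 0..1
    macro_roughness: float   # 0..1
    friction: float          # 0..1
    hardness: float          # 0..1
    warmth: float            # 0..1


def actuator_commands(surface: SurfaceModel, cursor_speed: float) -> dict:
    # Each perception dimension is mapped to one (or more) actuator channels;
    # all channels are emitted together so the cues are rendered simultaneously.
    return {
        "voice_coil_hz": 150 + 250 * surface.micro_roughness,           # fine texture
        "voice_coil_amp": min(1.0, surface.micro_roughness * cursor_speed),
        "macro_bump_rate_hz": 2 + 20 * surface.macro_roughness * cursor_speed,
        "brake_torque": surface.friction * cursor_speed,                # sliding friction
        "button_stiffness": 0.2 + 0.8 * surface.hardness,               # perceived hardness
        "peltier_setpoint_c": 25 + 10 * (surface.warmth - 0.5),         # thermal cue
    }


if __name__ == "__main__":
    wood = SurfaceModel(micro_roughness=0.6, macro_roughness=0.3,
                        friction=0.5, hardness=0.7, warmth=0.6)
    for channel, value in actuator_commands(wood, cursor_speed=0.8).items():
        print(f"{channel:20s} {value:.2f}")
```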
  • Publication number: 20190287293
    Abstract: In an embodiment of the invention there is provided a method of visual localization, comprising: generating a plurality of virtual views, wherein each of the virtual views is associated with a location; obtaining a query image; determining the location where the query image was obtained on the basis of a comparison of the query image with said virtual views.
    Type: Application
    Filed: April 19, 2019
    Publication date: September 19, 2019
    Inventors: Eckehard Steinbach, Robert Huitl, Georg Schroth, Sebastian Hilsenbeck
  • Patent number: 10393603
    Abstract: The present disclosure presents a visuo-haptic sensor, which is based on a passive, deformable element whose deformation is observed by a camera. In particular, the improved simplified sensor may determine the force and/or torque applied to a point of the sensor in multiple spatial dimensions.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: August 27, 2019
    Assignee: Technische Universität München
    Inventors: Nicolas Alt, Eckehard Steinbach, Martin Freundl