Patents by Inventor Julian Eggert

Julian Eggert has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8933797
    Abstract: The present invention proposes a warning system that can be implemented in any kind of vehicle in order to efficiently detect moving objects. The system utilizes at least one camera for continuous imaging of the surroundings of the vehicle, so that moving objects can be monitored. A computing unit is programmed to estimate the motion of any moving object based on pixel motion in the camera image. If a dangerously moving object is detected, a warning unit can issue a warning signal. To make this decision, the estimated motion of at least one of the moving objects can be correlated with, or compared to, predetermined motion patterns.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: January 13, 2015
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Jörg Deigmöller, Julian Eggert, Herbert Janssen, Oliver Fuchs
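    The decision step described in the abstract, comparing an estimated pixel motion against predetermined motion patterns, could be sketched as follows. This is a minimal illustration only; the function names, the Euclidean-distance criterion, and the threshold are hypothetical, not taken from the patent:

    ```python
    import math

    def pixel_motion(p_prev, p_curr, dt):
        """Apparent image motion (pixels per second) of a tracked point
        between two consecutive camera frames taken dt seconds apart."""
        return ((p_curr[0] - p_prev[0]) / dt, (p_curr[1] - p_prev[1]) / dt)

    def matches_pattern(motion, pattern, tol=0.5):
        """Compare an estimated motion vector against one predetermined
        motion pattern (here simply by Euclidean distance)."""
        return math.dist(motion, pattern) < tol

    def should_warn(motion, dangerous_patterns):
        """Issue a warning when the estimated motion matches any pattern
        classified as dangerous."""
        return any(matches_pattern(motion, p) for p in dangerous_patterns)
    ```

    A point drifting left across the image at 3 px/s would then trigger a warning if a leftward-crossing pattern is in the dangerous set.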
  • Publication number: 20140225990
    Abstract: In one aspect, an image processing method for processing images is provided, comprising the steps of: obtaining, from an optical sensor, at least two images, determining an image warping function at least partially compensating a distortion in the images, applying the determined image warping function to the image including the distortion, and calculating, by a processing unit, and outputting a depth and/or disparity image from the at least two images.
    Type: Application
    Filed: January 10, 2014
    Publication date: August 14, 2014
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Nils EINECKE, Julian EGGERT
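    The final step, computing a disparity image from two (warp-compensated) images, could be sketched for a single rectified scanline. This is a toy sum-of-absolute-differences block matcher under my own assumptions, not the method claimed in the application:

    ```python
    def disparity_row(left, right, max_d, win=1):
        """Per-pixel disparity along one rectified scanline: for each left
        pixel, test shifts d = 0..max_d into the right scanline and keep
        the shift with the lowest sum of absolute differences (SAD) over
        a small window.  Indices are clamped at the borders."""
        n = len(left)
        clamp = lambda i: min(max(i, 0), n - 1)
        out = []
        for x in range(n):
            best_d, best_cost = 0, float("inf")
            for d in range(min(max_d, x) + 1):
                cost = sum(abs(left[clamp(x + k)] - right[clamp(x - d + k)])
                           for k in range(-win, win + 1))
                if cost < best_cost:
                    best_d, best_cost = d, cost
            out.append(best_d)
        return out
    ```

    On a pair of scanlines where the right view is the left view shifted by one pixel, the matcher recovers a disparity of 1 at the feature.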
  • Publication number: 20140198955
    Abstract: The invention provides an image processing method for processing a sequence of images, comprising the steps of: obtaining, from a visual sensor, at least two images of the sequence of images, detecting whether the images include a distortion, determining an image warping function at least partially compensating the distortion, applying the determined warping function to the image(s) including the distortion, and calculating, by a processing unit, and outputting an optical flow as a displacement vector field from the images.
    Type: Application
    Filed: January 10, 2014
    Publication date: July 17, 2014
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Joerg DEIGMOELLER, Nils EINECKE, Julian EGGERT
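    The output of the method above is an optical flow, i.e. a displacement vector field between two images. A one-dimensional block-matching sketch (my own simplification, not the patented algorithm) shows what such a field looks like once any distortion has been compensated:

    ```python
    def block_flow(prev, curr, win=1, search=2):
        """Dense displacement field along one scanline: for every sample,
        the shift s in [-search, +search] that minimizes the sum of
        absolute differences of a small window is taken as the flow."""
        n = len(prev)
        clamp = lambda i: min(max(i, 0), n - 1)
        flow = []
        for x in range(n):
            best_s, best_cost = 0, float("inf")
            for s in range(-search, search + 1):
                cost = sum(abs(prev[clamp(x + k)] - curr[clamp(x + s + k)])
                           for k in range(-win, win + 1))
                if cost < best_cost:
                    best_s, best_cost = s, cost
            flow.append(best_s)
        return flow
    ```

    A bright feature that moves two samples to the right between frames yields a displacement of +2 at its location.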
  • Patent number: 8711221
    Abstract: Estimating the dynamic states of a real-world object over time using a camera, two dimensional (2D) image information and a combination of different measurements of the distance between the object and the camera. The 2D image information is used to track a 2D position of the object as well as its 2D size of the appearance and change in the 2D size of the appearance of the object. In addition, the distance between the object and the camera is obtained from one or more direct depth measurements. The 2D position, the 2D size, and the depth of the object are coupled to obtain an improved estimation of three dimensional (3D) position and 3D velocity of the object. The object tracking apparatus uses the improved estimation to track real-world objects. The object tracking apparatus may be used on a moving platform such as a robot or a car with mounted cameras for a dynamic visual scene analysis.
    Type: Grant
    Filed: December 9, 2008
    Date of Patent: April 29, 2014
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Sven Rebhan, Volker Willert, Chen Zhang
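    The coupling of 2D position, 2D size, and depth can be illustrated with standard pinhole-camera relations: depth plus image position back-projects to a 3D point, and apparent size scales inversely with depth, so a size change predicts a depth change that can be fused with a direct depth measurement. The functions and the simple weighted fusion below are illustrative assumptions, not the patent's estimator:

    ```python
    def backproject(x, y, z, f=500.0, cx=320.0, cy=240.0):
        """Pinhole back-projection: 2D image position plus depth z gives a
        3D point (f = focal length in pixels, (cx, cy) = principal point)."""
        return ((x - cx) * z / f, (y - cy) * z / f, z)

    def depth_from_size(size_prev, size_curr, z_prev):
        """Apparent size is inversely proportional to depth, so a change in
        2D size predicts the new depth."""
        return z_prev * size_prev / size_curr

    def fuse(z_pred, z_meas, w=0.5):
        """Trivial fusion of the size-predicted depth with a direct depth
        measurement; a Kalman-style filter would weight by uncertainty."""
        return w * z_pred + (1 - w) * z_meas
    ```

    An object whose image halves in size has doubled its depth; fusing that prediction with a direct measurement stabilizes the 3D position and velocity estimate.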
  • Patent number: 8670604
    Abstract: The invention proposes a method for object and object configuration tracking based on sensory input data, the method comprising the steps of: (1.1) Basic recruiting: Detecting interesting parts in sensory input data which are not yet covered by already tracked objects and incrementally initializing basic tracking models for these parts to continuously estimate their states, (1.2) Tracking model complexity adjustment: Testing, during runtime, more complex and simpler prediction and/or measurement models on the tracked objects, and (1.3) Basic release: Releasing trackers from parts of the sensory data where the tracker prediction and measurement processes do not get sufficient sensory support for some time.
    Type: Grant
    Filed: November 22, 2010
    Date of Patent: March 11, 2014
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Chen Zhang
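    Steps (1.1) and (1.3), recruiting trackers on uncovered detections and releasing trackers that lose sensory support, can be sketched as one update cycle. This is a bare-bones illustration under my own assumptions (the model-complexity adjustment of step (1.2) is omitted, and the "measurement update" is trivial):

    ```python
    def tracker_step(trackers, detections, match, patience=3):
        """One recruit/release cycle.  trackers: list of dicts
        {'state': tuple, 'misses': int}; detections: list of tuples;
        match: predicate telling whether a detection supports a tracker."""
        covered = set()
        for t in trackers:
            hits = [d for d in detections if match(t["state"], d)]
            if hits:
                t["state"], t["misses"] = hits[0], 0   # trivial state update
                covered.update(hits)
            else:
                t["misses"] += 1                       # no sensory support
        # basic release: drop trackers unsupported for `patience` cycles
        trackers[:] = [t for t in trackers if t["misses"] < patience]
        # basic recruiting: start a tracker on every uncovered detection
        for d in detections:
            if d not in covered:
                trackers.append({"state": d, "misses": 0})
        return trackers
    ```

    A new detection spawns a tracker; once detections stop arriving for `patience` cycles, the tracker is released again.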
  • Publication number: 20140049644
    Abstract: The present invention presents a sensing system 1 and a corresponding method for detecting moving objects 11 in the surroundings of a vehicle 10. The sensing system 1 comprises an imaging unit 2 for obtaining an image stream 2a, a computing unit 3 for analyzing the image stream 2a, and a control unit 4 for controlling the vehicle 10 based on the analysis result of the computing unit 3. The sensing system 1 particularly employs a background-model-free estimation. The sensing system 1 is configured to perform a local analysis of two neighboring motion vectors 6, which are computed from points 7a in images 5a, 5b of the image stream 2a, and to determine whether the points 7a corresponding to these motion vectors 6 belong to a moving object.
    Type: Application
    Filed: August 15, 2013
    Publication date: February 20, 2014
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Joerg DEIGMOELLER, Julian EGGERT, Nils EINECKE
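    A background-model-free local test on two neighboring motion vectors could look like the sketch below. The criterion, that the two vectors agree with each other but both deviate from the ego-motion flow expected for static background, is a hypothetical stand-in, not the exact test claimed in the application:

    ```python
    import math

    def moving_object_pair(v1, v2, ego, agree_tol=0.5, ego_tol=1.0):
        """Hypothetical local criterion: two neighboring motion vectors are
        attributed to a moving object when they agree with each other but
        both deviate from the ego-motion flow predicted for a static
        background at that image location."""
        agree = math.dist(v1, v2) < agree_tol
        deviates = math.dist(v1, ego) > ego_tol and math.dist(v2, ego) > ego_tol
        return agree and deviates
    ```

    Because only pairs of local vectors are compared, no global background model of the scene needs to be maintained.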
  • Patent number: 8433661
    Abstract: Generally the background of the present invention is the field of artificial vision systems, i.e. systems having a visual sensing means (e.g. a video camera) and a following processing stage implemented using a computing unit. The processing stage outputs a representation of the visually analysed scene, which output can then be fed to control different actors, such as e.g. parts of a vehicle (automobile, plane, ...) or a robot, preferably an autonomous robot such as e.g. a humanoid robot.
    Type: Grant
    Filed: February 24, 2010
    Date of Patent: April 30, 2013
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Sven Rebhan
  • Patent number: 8422741
    Abstract: The invention relates to a method for detecting the proper motion of a real-world object, comprising the steps of acquiring, by an image capturing device (ICD), a first image (I1) of the object at a first point in time (t1) and a second image (I2) at a second point in time (t2); obtaining a third (hypothetical) image (I3), based on an estimated effect of the motion of the image capturing device (ICD) itself (EMF) between the first and the second point in time (t1, t2), wherein the effect of the motion of the image capturing device (ICD) itself is estimated based on the forward kinematics of the image capturing device; determining an optical flow (OF) between the second image (I2) and the third image (I3); and evaluating the optical flow (OF) by incorporating uncertainties of the optical flow (OF) and the ego-motion-flow (EMF) in order to determine the proper motion of the object.
    Type: Grant
    Filed: August 21, 2008
    Date of Patent: April 16, 2013
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Sven Rebhan, Jens Schmudderich, Volker Willert
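    The final evaluation step, comparing the observed optical flow against the ego-motion flow while accounting for both uncertainties, can be sketched as a residual-flow test. The per-vector threshold below (k times the combined standard deviation) is an illustrative assumption, not the patent's formulation:

    ```python
    import math

    def residual_flow(observed_flow, ego_flow):
        """Subtract the predicted ego-motion flow from the observed optical
        flow; a significant residual indicates proper motion."""
        return [(ox - ex, oy - ey)
                for (ox, oy), (ex, ey) in zip(observed_flow, ego_flow)]

    def proper_motion(observed_flow, ego_flow, sigma_of=0.5, sigma_emf=0.5, k=2.0):
        """Uncertainty-aware test: the residual must exceed k times the
        combined standard deviation of the two flow estimates."""
        thr = k * math.hypot(sigma_of, sigma_emf)
        return [math.hypot(rx, ry) > thr
                for rx, ry in residual_flow(observed_flow, ego_flow)]
    ```

    A vector whose observed flow matches the ego-motion prediction is classified as static; a large residual marks proper motion of the object.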
  • Publication number: 20130088343
    Abstract: The present invention proposes a warning system that can be implemented in any kind of vehicle in order to efficiently detect moving objects. The system utilizes at least one camera for continuous imaging of the surroundings of the vehicle, so that moving objects can be monitored. A computing unit is programmed to estimate the motion of any moving object based on pixel motion in the camera image. If a dangerously moving object is detected, a warning unit can issue a warning signal. To make this decision, the estimated motion of at least one of the moving objects can be correlated with, or compared to, predetermined motion patterns.
    Type: Application
    Filed: September 12, 2012
    Publication date: April 11, 2013
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Jörg DEIGMÖLLER, Julian EGGERT, Herbert JANSSEN, Oliver FUCHS
  • Patent number: 8126269
    Abstract: A method for segregating a figure region from a background region in image sequences from dynamic visual scenes, comprising the steps of: (a) acquiring an image; (b) determining local motion estimations and confidences for each position of the image; (c) modifying a level-set function by moving and distorting the level-set function with the local motion estimations and smearing it based on the local motion confidences, to generate a predicted level-set function that is geometrically in correspondence with the image and diffused at positions where the confidence of the motion estimation is low; (d) obtaining input features of the image by using a series of cues; (e) calculating a mask segregating the figure region from the background region of the image using the modified level-set function and the obtained input features; (f) extracting the figure region from the image; and repeating steps (a)-(f) until a termination criterion is satisfied.
    Type: Grant
    Filed: November 14, 2008
    Date of Patent: February 28, 2012
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Daniel Weiler
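    Step (c), shifting the level-set function by the estimated motion and smearing it where motion confidence is low, can be shown in one dimension. The integer shift and the three-point diffusion stencil are simplifying assumptions of this sketch:

    ```python
    def predict_level_set(phi, motion, confidence, diffusion=1.0):
        """1-D sketch of step (c): shift the level-set function phi by the
        estimated (integer) motion, then diffuse each sample in proportion
        to (1 - confidence), so the prediction is smeared exactly where
        the motion estimate is unreliable."""
        n = len(phi)
        shifted = [phi[min(max(i - motion, 0), n - 1)] for i in range(n)]
        out = []
        for i in range(n):
            a = diffusion * (1.0 - confidence[i])  # more smoothing where confidence is low
            left = shifted[max(i - 1, 0)]
            right = shifted[min(i + 1, n - 1)]
            out.append((1 - a) * shifted[i] + a * 0.5 * (left + right))
        return out
    ```

    With full confidence the function is only shifted; with zero confidence each sample is replaced by its neighbors' average, diffusing the predicted contour.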
  • Publication number: 20110279652
    Abstract: The invention presents a method for comparing the similarity between image patches, comprising the steps of receiving from at least two sources at least two image patches, wherein each source supplies an image patch, comparing the received image patches by extracting a number of corresponding subpart pairs from each image patch, calculating a normalized local similarity score between all corresponding subpart pairs, calculating a total matching score by integrating the local similarity scores of all corresponding subpart pairs, using the total matching score as an indicator of image patch similarity, and determining corresponding similar image patches based on the total matching score.
    Type: Application
    Filed: May 6, 2011
    Publication date: November 17, 2011
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Julian EGGERT, Nils EINECKE
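    The scheme of local normalized similarity scores integrated into a total matching score could be sketched as follows, using zero-mean normalized cross-correlation (NCC) as the local score and a plain mean as the integration; both choices are assumptions of this sketch, not necessarily the claimed method:

    ```python
    import math

    def ncc(a, b):
        """Zero-mean normalized cross-correlation of two equally sized
        subparts; 1.0 means the subparts differ only by gain and offset."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        da = [x - ma for x in a]
        db = [x - mb for x in b]
        denom = math.sqrt(sum(x * x for x in da) * sum(x * x for x in db))
        return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0

    def patch_similarity(p, q, sub=2):
        """Total matching score: mean of the local NCC scores over
        corresponding subparts of the two patches."""
        pairs = [(p[i:i + sub], q[i:i + sub]) for i in range(0, len(p), sub)]
        scores = [ncc(a, b) for a, b in pairs]
        return sum(scores) / len(scores)
    ```

    Because each subpart is normalized locally, the score is robust to brightness and contrast differences that vary across the patch.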
  • Publication number: 20110243390
    Abstract: The invention relates to a method for detecting the proper motion of a real-world object, comprising the steps of acquiring, by an image capturing device (ICD), a first image (I1) of the object at a first point in time (t1) and a second image (I2) at a second point in time (t2); obtaining a third (hypothetical) image (I3), based on an estimated effect of the motion of the image capturing device (ICD) itself (EMF) between the first and the second point in time (t1, t2), wherein the effect of the motion of the image capturing device (ICD) itself is estimated based on the forward kinematics of the image capturing device; determining an optical flow (OF) between the second image (I2) and the third image (I3); and evaluating the optical flow (OF) by incorporating uncertainties of the optical flow (OF) and the ego-motion-flow (EMF) in order to determine the proper motion of the object.
    Type: Application
    Filed: August 21, 2008
    Publication date: October 6, 2011
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Julian Eggert, Sven Rebhan, Jens Schmudderich, Volker Willert
  • Publication number: 20110129119
    Abstract: The invention proposes a method for object and object configuration tracking based on sensory input data, the method comprising the steps of: (1.1) Basic recruiting: Detecting interesting parts in sensory input data which are not yet covered by already tracked objects and incrementally initializing basic tracking models for these parts to continuously estimate their states, (1.2) Tracking model complexity adjustment: Testing, during runtime, more complex and simpler prediction and/or measurement models on the tracked objects, and (1.3) Basic release: Releasing trackers from parts of the sensory data where the tracker prediction and measurement processes do not get sufficient sensory support for some time.
    Type: Application
    Filed: November 22, 2010
    Publication date: June 2, 2011
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Julian EGGERT, Chen ZHANG
  • Patent number: 7848565
    Abstract: A method for generating a saliency map for a robot device having sensor means, the saliency map indicating to the robot device patterns in the input space of the sensor means which are important for carrying out a task; wherein the saliency map comprises a weighted contribution from a disparity saliency selection carried out on the basis of depth information gathered from the sensor means, such that a weighted priority is allocated to patterns being within a defined peripersonal space in the environment of the sensor means.
    Type: Grant
    Filed: June 20, 2006
    Date of Patent: December 7, 2010
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Christian Goerick, Heiko Wersing, Mark Dunn, Julian Eggert
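    The weighted contribution of the disparity-based saliency selection can be sketched as a weighted sum of feature maps in which positions whose depth falls inside a defined peripersonal range receive an extra weight. The specific range and boost factor below are illustrative assumptions:

    ```python
    def saliency(feature_maps, weights, depth, near=0.3, far=1.2, boost=2.0):
        """Weighted sum of per-position feature maps, with an extra weight
        (boost) at positions whose measured depth lies within the defined
        peripersonal range [near, far] (distances in meters, assumed)."""
        n = len(depth)
        base = [sum(w * m[i] for w, m in zip(weights, feature_maps))
                for i in range(n)]
        return [v * (boost if near <= depth[i] <= far else 1.0)
                for i, v in enumerate(base)]
    ```

    Patterns within reach of the robot are thereby prioritized in the resulting saliency map over equally conspicuous but distant patterns.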
  • Publication number: 20100223216
    Abstract: Generally the background of the present invention is the field of artificial vision systems, i.e. systems having a visual sensing means (e.g. a video camera) and a following processing stage implemented using a computing unit. The processing stage outputs a representation of the visually analysed scene, which output can then be fed to control different actors, such as e.g. parts of a vehicle (automobile, plane, ...) or a robot, preferably an autonomous robot such as e.g. a humanoid robot.
    Type: Application
    Filed: February 24, 2010
    Publication date: September 2, 2010
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Julian Eggert, Sven Rebhan
  • Patent number: 7693287
    Abstract: Techniques are disclosed for sound source localization based on joint learning and evaluation of ITD and ILD representations that are measured in a complementary, correlation-based way using binaural time-frequency spectrums. According to one embodiment, from these measurements and learned representatives, which may, for example, be created by combinations of measurements from signals belonging to the same class, i.e., the same azimuthal location, probability distributions over frequency and class are computed. These probability distributions can be combined over cue and frequency using information-theoretic approaches to get a robust classification of the location and additionally a confidence measure for the quality of the classification result.
    Type: Grant
    Filed: May 25, 2005
    Date of Patent: April 6, 2010
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Volker Willert, Raphael Stahl, Juergen Adamy
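    The combination of per-cue, per-frequency probability distributions into a robust location estimate with a confidence measure could be sketched as a naive-Bayes style product of the distributions. The log-domain combination and the use of the winning posterior as the confidence are assumptions of this sketch:

    ```python
    import math

    def combine_cues(prob_maps):
        """prob_maps: one class-probability dict per (cue, frequency)
        measurement.  Multiply per-class probabilities in the log domain,
        renormalize, and return the winning class with its posterior as a
        confidence measure."""
        classes = list(prob_maps[0].keys())
        logp = {c: sum(math.log(p[c]) for p in prob_maps) for c in classes}
        m = max(logp.values())
        unnorm = {c: math.exp(v - m) for c, v in logp.items()}  # stable exp
        z = sum(unnorm.values())
        post = {c: v / z for c, v in unnorm.items()}
        best = max(post, key=post.get)
        return best, post[best]
    ```

    When both ITD- and ILD-derived distributions favor the same azimuthal class, the posterior is sharply peaked and the confidence is high; conflicting cues flatten the posterior and lower it.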
  • Publication number: 20090213219
    Abstract: Estimating the dynamic states of a real-world object over time using a camera, two dimensional (2D) image information and a combination of different measurements of the distance between the object and the camera. The 2D image information is used to track a 2D position of the object as well as its 2D size of the appearance and change in the 2D size of the appearance of the object. In addition, the distance between the object and the camera is obtained from one or more direct depth measurements. The 2D position, the 2D size, and the depth of the object are coupled to obtain an improved estimation of three dimensional (3D) position and 3D velocity of the object. The object tracking apparatus uses the improved estimation to track real-world objects. The object tracking apparatus may be used on a moving platform such as a robot or a car with mounted cameras for a dynamic visual scene analysis.
    Type: Application
    Filed: December 9, 2008
    Publication date: August 27, 2009
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Julian Eggert, Sven Rebhan, Volker Willert, Chen Zhang
  • Publication number: 20090129675
    Abstract: A method for segregating a figure region from a background region in image sequences from dynamic visual scenes, comprising the steps of: (a) acquiring an image; (b) determining local motion estimations and confidences for each position of the image; (c) modifying a level-set function by moving and distorting the level-set function with the local motion estimations and smearing it based on the local motion confidences, to generate a predicted level-set function that is geometrically in correspondence with the image and diffused at positions where the confidence of the motion estimation is low; (d) obtaining input features of the image by using a series of cues; (e) calculating a mask segregating the figure region from the background region of the image using the modified level-set function and the obtained input features; (f) extracting the figure region from the image; and repeating steps (a)-(f) until a termination criterion is satisfied.
    Type: Application
    Filed: November 14, 2008
    Publication date: May 21, 2009
    Applicant: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Daniel Weiler
  • Patent number: 7236961
    Abstract: Convolutional networks can be defined by a set of layers, each made up of a two-dimensional lattice of neurons. Each layer, with the exception of the last, represents a source layer for the respectively following target layer. A plurality of neurons of a source layer, called a source sub-area, respectively share the identical connectivity weight matrix type. Each connectivity weight matrix type is represented by a scalar product of an encoding filter and a decoding filter. For each source layer, a source reconstruction image is calculated on the basis of the corresponding encoding filters and the activities of the corresponding source sub-area. For each connectivity weight matrix type, each target sub-area and each target layer, the input of the target layer is calculated as a convolution of the source reconstruction image and the decoding filter.
    Type: Grant
    Filed: March 12, 2002
    Date of Patent: June 26, 2007
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Berthold Bäuml
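    The factorization described in the abstract, a reconstruction image computed with the encoding filter, then convolved with the decoding filter to obtain the target layer's input, can be shown in one dimension. This sketch uses 'same'-size correlation with symmetric kernels (so kernel flipping is omitted) and is an illustration of the idea, not the patented computation:

    ```python
    def conv1d(signal, kernel):
        """'Same'-size 1-D convolution with zero padding (kernel assumed
        symmetric, so the flip of true convolution is omitted)."""
        n, k = len(signal), len(kernel)
        half = k // 2
        out = []
        for i in range(n):
            acc = 0.0
            for j in range(k):
                idx = i + j - half
                if 0 <= idx < n:
                    acc += signal[idx] * kernel[j]
            out.append(acc)
        return out

    def layer_input(activities, encode, decode):
        """Source reconstruction image = source activities convolved with
        the encoding filter; the target layer's input is that
        reconstruction convolved with the decoding filter."""
        recon = conv1d(activities, encode)
        return conv1d(recon, decode)
    ```

    Splitting the shared weight matrix into an encoding and a decoding filter lets many connections reuse one reconstruction image instead of applying the full matrix per target sub-area.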
  • Publication number: 20070003130
    Abstract: A method for generating a saliency map for a robot device having sensor means, the saliency map indicating to the robot device patterns in the input space of the sensor means which are important for carrying out a task; wherein the saliency map comprises a weighted contribution from a disparity saliency selection carried out on the basis of depth information gathered from the sensor means, such that a weighted priority is allocated to patterns being within a defined peripersonal space in the environment of the sensor means.
    Type: Application
    Filed: June 20, 2006
    Publication date: January 4, 2007
    Inventors: Christian Goerick, Heiko Wersing, Mark Dunn, Julian Eggert