Patents by Inventor Julian Eggert
Julian Eggert has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8933797
Abstract: The present invention proposes a warning system that can be implemented in any kind of vehicle in order to efficiently detect moving objects. The system utilizes at least one camera for continuous imaging of the vehicle's surroundings, so that moving objects can be monitored. A computing unit is programmed to estimate the motion of any moving object based on pixel motion in the camera image. If a dangerously moving object is detected, a warning unit can issue a warning signal. To make this decision, the estimated motion of at least one of the moving objects can be correlated or compared with predetermined motion patterns.
Type: Grant
Filed: September 12, 2012
Date of Patent: January 13, 2015
Assignee: Honda Research Institute Europe GmbH
Inventors: Jörg Deigmöller, Julian Eggert, Herbert Janssen, Oliver Fuchs
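The abstract does not say how the estimated motion is matched against the predetermined patterns. As a minimal illustrative sketch (the pattern names, vector values, and threshold below are all invented), one plausible comparison is cosine similarity between a short sequence of estimated pixel-motion vectors and a library of pattern templates:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened motion-vector sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify_motion(estimated, patterns, threshold=0.9):
    """Return the name of the best-matching predetermined pattern,
    or None if no pattern is similar enough to trigger a warning."""
    best_name, best_score = None, threshold
    for name, pattern in patterns.items():
        score = cosine_similarity(estimated, pattern)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical pattern library: flattened (dx, dy) per-frame displacements.
PATTERNS = {
    "crossing_left_to_right": [1, 0, 1, 0, 1, 0],
    "approaching": [0, 1, 0, 1, 0, 1],
}

print(classify_motion([0.9, 0.1, 1.1, 0.0, 1.0, 0.1], PATTERNS))
```

A match above the threshold would then trigger the warning unit; a real system would use richer motion models than fixed templates.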
-
Publication number: 20140225990
Abstract: In one aspect, an image processing method for processing images is provided, comprising the steps of: obtaining, from an optical sensor, at least two images; determining an image warping function at least partially compensating a distortion included in at least one of the images; applying the determined image warping function to the image including the distortion; and calculating, by a processing unit, and outputting a depth and/or disparity image from the at least two images.
Type: Application
Filed: January 10, 2014
Publication date: August 14, 2014
Applicant: Honda Research Institute Europe GmbH
Inventors: Nils Einecke, Julian Eggert
-
Publication number: 20140198955
Abstract: The invention provides an image processing method for processing a sequence of images, comprising the steps of: obtaining, from a visual sensor, at least two images of the sequence; detecting whether the images include a distortion; determining an image warping function at least partially compensating the distortion; applying the determined warping function to the image(s) including the distortion; and calculating, by a processing unit, and outputting an optical flow as a displacement vector field from the images.
Type: Application
Filed: January 10, 2014
Publication date: July 17, 2014
Applicant: Honda Research Institute Europe GmbH
Inventors: Joerg Deigmoeller, Nils Einecke, Julian Eggert
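The abstract specifies the pipeline but not the flow computation itself. A toy block-matching estimator (the window size, search range, and SAD matching criterion are assumptions for illustration, not taken from the application) conveys what a displacement vector field is:

```python
def block_flow(img0, img1, block=2, search=2):
    """Estimate a sparse displacement field by matching `block`-sized
    windows of img0 against shifted windows in img1 (SAD criterion)."""
    h, w = len(img0), len(img0[0])
    flow = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            best, best_d = None, float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    if not (0 <= y + dy <= h - block and 0 <= x + dx <= w - block):
                        continue  # shifted window would leave the image
                    sad = sum(abs(img0[y + j][x + i] - img1[y + dy + j][x + dx + i])
                              for j in range(block) for i in range(block))
                    if sad < best_d:
                        best, best_d = (dx, dy), sad
            flow[(x, y)] = best
    return flow

# A bright 2x2 square shifted one pixel to the right between frames.
img0 = [[0, 9, 9, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
img1 = [[0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(block_flow(img0, img1))
```

The block at the origin is correctly assigned displacement (1, 0); homogeneous regions are ambiguous, which is why practical flow methods add regularization.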
-
Patent number: 8711221
Abstract: Estimating the dynamic states of a real-world object over time using a camera, two-dimensional (2D) image information, and a combination of different measurements of the distance between the object and the camera. The 2D image information is used to track the 2D position of the object as well as the 2D size of its appearance and changes in that size. In addition, the distance between the object and the camera is obtained from one or more direct depth measurements. The 2D position, the 2D size, and the depth of the object are coupled to obtain an improved estimate of the three-dimensional (3D) position and 3D velocity of the object. The object tracking apparatus uses the improved estimate to track real-world objects, and may be used on a moving platform such as a robot or a car with mounted cameras for dynamic visual scene analysis.
Type: Grant
Filed: December 9, 2008
Date of Patent: April 29, 2014
Assignee: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Sven Rebhan, Volker Willert, Chen Zhang
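The coupling of 2D image position and a direct depth measurement into a 3D estimate can be illustrated with a standard pinhole back-projection; the focal length and principal point below are hypothetical camera parameters, not values from the patent:

```python
def backproject(u, v, depth, f=800.0, cx=320.0, cy=240.0):
    """Back-project a 2D pixel (u, v) with a measured depth into a
    3D camera-frame point using the pinhole model X = (u - cx) * Z / f."""
    z = depth
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return (x, y, z)

def velocity_3d(p0, p1, dt):
    """Finite-difference 3D velocity from two consecutive 3D estimates."""
    return tuple((b - a) / dt for a, b in zip(p0, p1))

p0 = backproject(320.0, 240.0, 10.0)   # object at the image centre, 10 m away
p1 = backproject(400.0, 240.0, 9.0)    # one frame later: moved right, closer
print(p0, velocity_3d(p0, p1, dt=0.04))
```

A full tracker would fuse these instantaneous estimates (and the apparent-size cue) in a recursive filter rather than differencing single frames.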
-
Patent number: 8670604
Abstract: The invention proposes a method for object and object-configuration tracking based on sensory input data, comprising the steps of:
(1.1) Basic recruiting: detecting interesting parts in the sensory input data which are not yet covered by already-tracked objects, and incrementally initializing basic tracking models for these parts to continuously estimate their states;
(1.2) Tracking model complexity adjustment: testing, during runtime, more complex and simpler prediction and/or measurement models on the tracked objects; and
(1.3) Basic release: releasing trackers from parts of the sensory data where the tracker prediction and measurement processes do not receive sufficient sensory support for some time.
Type: Grant
Filed: November 22, 2010
Date of Patent: March 11, 2014
Assignee: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Chen Zhang
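The recruit-and-release lifecycle (steps 1.1 and 1.3) can be sketched with minimal bookkeeping; the support counter and the `max_missed` threshold are invented for illustration, and a real tracker would estimate full object state rather than a region label:

```python
class Tracker:
    def __init__(self, region):
        self.region = region       # part of the sensory data being tracked
        self.missed = 0            # frames without sufficient sensory support

def update_trackers(trackers, detections, max_missed=3):
    """Recruit trackers for uncovered detections and release trackers
    that have lacked sensory support for too long."""
    covered = {t.region for t in trackers}
    # Basic recruiting: new trackers for interesting, uncovered parts.
    for d in detections:
        if d not in covered:
            trackers.append(Tracker(d))
    # Basic release: drop trackers unsupported for more than max_missed frames.
    for t in trackers:
        t.missed = 0 if t.region in detections else t.missed + 1
    return [t for t in trackers if t.missed <= max_missed]

trackers = update_trackers([], {"car", "bike"})      # recruit two trackers
for _ in range(4):
    trackers = update_trackers(trackers, {"car"})    # bike loses support
print(sorted(t.region for t in trackers))
```

Step 1.2 (swapping in more complex or simpler models at runtime) would sit between recruiting and release in such a loop.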
-
Publication number: 20140049644
Abstract: The present invention presents a sensing system (1) and a corresponding method for detecting moving objects (11) in the surroundings of a vehicle (10). The sensing system comprises an imaging unit (2) for obtaining an image stream (2a), a computing unit (3) for analyzing the image stream, and a control unit (4) for controlling the vehicle based on the analysis result of the computing unit. The sensing system notably employs a background-model-free estimation: it performs a local analysis of two neighboring motion vectors (6), which are computed from points (7a) in images (5a, 5b) of the image stream, and determines whether the points corresponding to these motion vectors belong to a moving object.
Type: Application
Filed: August 15, 2013
Publication date: February 20, 2014
Applicant: Honda Research Institute Europe GmbH
Inventors: Joerg Deigmoeller, Julian Eggert, Nils Einecke
-
Patent number: 8433661
Abstract: Generally, the background of the present invention is the field of artificial vision systems, i.e. systems having a visual sensing means (e.g. a video camera) and a subsequent processing stage implemented using a computing unit. The processing stage outputs a representation of the visually analysed scene, which output can then be fed to control different actors, such as parts of a vehicle (automobile, plane, ...) or a robot, preferably an autonomous robot such as a humanoid robot.
Type: Grant
Filed: February 24, 2010
Date of Patent: April 30, 2013
Assignee: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Sven Rebhan
-
Patent number: 8422741
Abstract: The invention relates to a method for detecting the proper motion of a real-world object, comprising the steps of: acquiring, by an image capturing device (ICD), a first image (I1) of the object at a first point in time (t1) and a second image (I2) at a second point in time (t2); obtaining a third, hypothetical image (I3) based on an estimated effect of the motion of the image capturing device itself (the ego-motion flow, EMF) between the first and second points in time, wherein this effect is estimated from the forward kinematics of the image capturing device; determining an optical flow (OF) between the second image (I2) and the third image (I3); and evaluating the optical flow by incorporating the uncertainties of the optical flow and the ego-motion flow in order to determine the proper motion of the object.
Type: Grant
Filed: August 21, 2008
Date of Patent: April 16, 2013
Assignee: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Sven Rebhan, Jens Schmudderich, Volker Willert
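The final evaluation step, comparing the optical flow with the ego-motion flow under uncertainty, can be sketched as a per-pixel gating test. The Gaussian-style combination of variances and the gate factor below are illustrative choices, not the patent's exact formulation:

```python
def proper_motion_mask(of, emf, var_of, var_emf, gate=3.0):
    """Flag pixels whose optical flow deviates from the predicted
    ego-motion flow by more than `gate` combined standard deviations."""
    mask = []
    for (ofx, ofy), (ex, ey), v1, v2 in zip(of, emf, var_of, var_emf):
        rx, ry = ofx - ex, ofy - ey          # residual flow
        sigma = (v1 + v2) ** 0.5             # combined standard deviation
        dist = (rx * rx + ry * ry) ** 0.5
        mask.append(dist > gate * sigma)
    return mask

# Two hypothetical pixels: one consistent with ego-motion, one not.
of      = [(1.0, 0.0), (4.0, 3.0)]
emf     = [(1.1, 0.1), (1.0, 0.0)]
var_of  = [0.01, 0.01]
var_emf = [0.01, 0.01]
print(proper_motion_mask(of, emf, var_of, var_emf))
```

Pixels flagged True move differently from what the camera's own motion predicts, i.e. they exhibit proper motion.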
-
Publication number: 20130088343
Abstract: The present invention proposes a warning system that can be implemented in any kind of vehicle in order to efficiently detect moving objects. The system utilizes at least one camera for continuous imaging of the vehicle's surroundings, so that moving objects can be monitored. A computing unit is programmed to estimate the motion of any moving object based on pixel motion in the camera image. If a dangerously moving object is detected, a warning unit can issue a warning signal. To make this decision, the estimated motion of at least one of the moving objects can be correlated or compared with predetermined motion patterns.
Type: Application
Filed: September 12, 2012
Publication date: April 11, 2013
Applicant: Honda Research Institute Europe GmbH
Inventors: Jörg Deigmöller, Julian Eggert, Herbert Janssen, Oliver Fuchs
-
Patent number: 8126269
Abstract: A method for segregating a figure region from a background region in image sequences from dynamic visual scenes, comprising the steps of: (a) acquiring an image; (b) determining local motion estimations and confidences for each position of the image; (c) modifying a level-set function by moving and distorting the level-set function with the local motion estimations and smearing it based on the local motion confidences, to generate a predicted level-set function that is geometrically in correspondence with the image and diffused at positions where the confidence of the motion estimation is low; (d) obtaining input features of the image by using a series of cues; (e) calculating a mask segregating the figure region from the background region of the image using the modified level-set function and the obtained input features; (f) extracting the figure region from the image; and repeating steps (a)-(f) until a termination criterion is satisfied.
Type: Grant
Filed: November 14, 2008
Date of Patent: February 28, 2012
Assignee: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Daniel Weiler
-
Publication number: 20110279652
Abstract: The invention presents a method for comparing the similarity between image patches, comprising the steps of: receiving, from at least two sources, at least two image patches, wherein each source supplies an image patch; comparing the received image patches by extracting a number of corresponding subpart pairs from each image patch; calculating a normalized local similarity score between all corresponding subpart pairs; calculating a total matching score by integrating the local similarity scores of all corresponding subpart pairs; using the total matching score as an indicator of image-patch similarity; and determining corresponding similar image patches based on the total matching score.
Type: Application
Filed: May 6, 2011
Publication date: November 17, 2011
Applicant: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Nils Einecke
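As a sketch of the idea: zero-mean normalized cross-correlation is one common choice of normalized local similarity score, and a plain average is one way to integrate the local scores. Both are assumptions here, not details from the publication:

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized subparts."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def patch_similarity(p, q, subpart_len=4):
    """Split two flat patches into corresponding subparts, score each pair
    with NCC, and integrate the local scores into one total matching score."""
    scores = [ncc(p[i:i + subpart_len], q[i:i + subpart_len])
              for i in range(0, len(p), subpart_len)]
    return sum(scores) / len(scores)

identical = [1, 2, 3, 4, 5, 6, 7, 8]
print(patch_similarity(identical, identical))   # perfect match scores ~1.0
```

Scoring subparts independently makes the total score robust to local lighting changes, which is the usual motivation for normalized local comparison.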
-
Publication number: 20110243390
Abstract: The invention relates to a method for detecting the proper motion of a real-world object, comprising the steps of: acquiring, by an image capturing device (ICD), a first image (I1) of the object at a first point in time (t1) and a second image (I2) at a second point in time (t2); obtaining a third, hypothetical image (I3) based on an estimated effect of the motion of the image capturing device itself (the ego-motion flow, EMF) between the first and second points in time, wherein this effect is estimated from the forward kinematics of the image capturing device; determining an optical flow (OF) between the second image (I2) and the third image (I3); and evaluating the optical flow by incorporating the uncertainties of the optical flow and the ego-motion flow in order to determine the proper motion of the object.
Type: Application
Filed: August 21, 2008
Publication date: October 6, 2011
Applicant: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Sven Rebhan, Jens Schmudderich, Volker Willert
-
Publication number: 20110129119
Abstract: The invention proposes a method for object and object-configuration tracking based on sensory input data, comprising the steps of:
(1.1) Basic recruiting: detecting interesting parts in the sensory input data which are not yet covered by already-tracked objects, and incrementally initializing basic tracking models for these parts to continuously estimate their states;
(1.2) Tracking model complexity adjustment: testing, during runtime, more complex and simpler prediction and/or measurement models on the tracked objects; and
(1.3) Basic release: releasing trackers from parts of the sensory data where the tracker prediction and measurement processes do not receive sufficient sensory support for some time.
Type: Application
Filed: November 22, 2010
Publication date: June 2, 2011
Applicant: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Chen Zhang
-
Patent number: 7848565
Abstract: A method for generating a saliency map for a robot device having sensor means, the saliency map indicating to the robot device patterns in the input space of the sensor means which are important for carrying out a task; wherein the saliency map comprises a weighted contribution from a disparity saliency selection carried out on the basis of depth information gathered from the sensor means, such that a weighted priority is allocated to patterns being within a defined peripersonal space in the environment of the sensor means.
Type: Grant
Filed: June 20, 2006
Date of Patent: December 7, 2010
Assignee: Honda Research Institute Europe GmbH
Inventors: Christian Goerick, Heiko Wersing, Mark Dunn, Julian Eggert
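The weighted combination of a disparity-based contribution with a base saliency map can be sketched as follows; the weights and the depth range defining the peripersonal space are invented for illustration:

```python
def combined_saliency(base_saliency, depth_map, w_base=1.0, w_disp=2.0,
                      near=0.2, far=1.2):
    """Combine a base saliency map with a disparity (depth) contribution:
    positions whose depth falls inside the peripersonal range [near, far]
    receive an extra weighted priority."""
    out = []
    for s, d in zip(base_saliency, depth_map):
        disparity_term = 1.0 if near <= d <= far else 0.0
        out.append(w_base * s + w_disp * disparity_term)
    return out

base  = [0.1, 0.5, 0.3]
depth = [0.5, 3.0, 0.9]   # metres; first and third lie in peripersonal space
print(combined_saliency(base, depth))
```

Patterns within arm's reach thus dominate the map even when their base saliency is low, biasing the robot's attention toward its immediate workspace.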
-
Publication number: 20100223216
Abstract: Generally, the background of the present invention is the field of artificial vision systems, i.e. systems having a visual sensing means (e.g. a video camera) and a subsequent processing stage implemented using a computing unit. The processing stage outputs a representation of the visually analysed scene, which output can then be fed to control different actors, such as parts of a vehicle (automobile, plane, ...) or a robot, preferably an autonomous robot such as a humanoid robot.
Type: Application
Filed: February 24, 2010
Publication date: September 2, 2010
Applicant: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Sven Rebhan
-
Patent number: 7693287
Abstract: Techniques are disclosed for sound source localization based on joint learning and evaluation of ITD and ILD representations that are measured in a complementary, correlation-based way using binaural time-frequency spectra. According to one embodiment, probability distributions over frequency and class are computed from these measurements and learned representatives, which may, for example, be created by combining measurements from signals belonging to the same class, i.e., the same azimuthal location. These probability distributions can be combined over cue and frequency using information-theoretic approaches to obtain a robust classification of the location, together with a confidence measure for the quality of the classification result.
Type: Grant
Filed: May 25, 2005
Date of Patent: April 6, 2010
Assignee: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Volker Willert, Raphael Stahl, Juergen Adamy
-
Publication number: 20090213219
Abstract: Estimating the dynamic states of a real-world object over time using a camera, two-dimensional (2D) image information, and a combination of different measurements of the distance between the object and the camera. The 2D image information is used to track the 2D position of the object as well as the 2D size of its appearance and changes in that size. In addition, the distance between the object and the camera is obtained from one or more direct depth measurements. The 2D position, the 2D size, and the depth of the object are coupled to obtain an improved estimate of the three-dimensional (3D) position and 3D velocity of the object. The object tracking apparatus uses the improved estimate to track real-world objects, and may be used on a moving platform such as a robot or a car with mounted cameras for dynamic visual scene analysis.
Type: Application
Filed: December 9, 2008
Publication date: August 27, 2009
Applicant: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Sven Rebhan, Volker Willert, Chen Zhang
-
Publication number: 20090129675
Abstract: A method for segregating a figure region from a background region in image sequences from dynamic visual scenes, comprising the steps of: (a) acquiring an image; (b) determining local motion estimations and confidences for each position of the image; (c) modifying a level-set function by moving and distorting the level-set function with the local motion estimations and smearing it based on the local motion confidences, to generate a predicted level-set function that is geometrically in correspondence with the image and diffused at positions where the confidence of the motion estimation is low; (d) obtaining input features of the image by using a series of cues; (e) calculating a mask segregating the figure region from the background region of the image using the modified level-set function and the obtained input features; (f) extracting the figure region from the image; and repeating steps (a)-(f) until a termination criterion is satisfied.
Type: Application
Filed: November 14, 2008
Publication date: May 21, 2009
Applicant: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Daniel Weiler
-
Patent number: 7236961
Abstract: Convolutional networks can be defined by a set of layers, each made up of a two-dimensional lattice of neurons. Each layer, with the exception of the last, represents a source layer for the respectively following target layer. A plurality of neurons of a source layer, called a source sub-area, share the identical connectivity weight matrix type. Each connectivity weight matrix type is represented by a scalar product of an encoding filter and a decoding filter. For each source layer, a source reconstruction image is calculated on the basis of the corresponding encoding filters and the activities of the corresponding source sub-area. For each connectivity weight matrix type, each target sub-area, and each target layer, the input of the target layer is calculated as a convolution of the source reconstruction image and the decoding filter.
Type: Grant
Filed: March 12, 2002
Date of Patent: June 26, 2007
Assignee: Honda Research Institute Europe GmbH
Inventors: Julian Eggert, Berthold Bäuml
-
Publication number: 20070003130
Abstract: A method for generating a saliency map for a robot device having sensor means, the saliency map indicating to the robot device patterns in the input space of the sensor means which are important for carrying out a task; wherein the saliency map comprises a weighted contribution from a disparity saliency selection carried out on the basis of depth information gathered from the sensor means, such that a weighted priority is allocated to patterns being within a defined peripersonal space in the environment of the sensor means.
Type: Application
Filed: June 20, 2006
Publication date: January 4, 2007
Inventors: Christian Goerick, Heiko Wersing, Mark Dunn, Julian Eggert