Patents by Inventor Andreas PANAKOS
Andreas PANAKOS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961215
Abstract: A method for processing images is described, wherein a scenery is recorded as at least one raw image by at least one optical capture means mounted on a vehicle, and wherein image data of the scenery are mapped incompletely and/or erroneously in at least one area of the subsequently rendered render image. In order to provide the user of one or more cameras on a motor vehicle that have visibility restrictions with a more agreeable visual experience, the method includes identifying the area(s) of incomplete and/or erroneous mapping in the render image on the basis of the existing visibility restrictions, generating masks that enclose the area(s) of incomplete and/or erroneous mapping as masked areas, reconstructing image data in unmasked areas of the render image by means of digital inpainting and synthesizing them together with the masked areas to produce a correction image, and displaying the completed and/or corrected correction image.
Type: Grant
Filed: January 30, 2020
Date of Patent: April 16, 2024
Assignee: Conti Temic microelectronic GmbH
Inventors: Ekaterina Grünwedel, Charlotte Gloger, Andreas Panakos
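The reconstruction idea behind this patent can be illustrated with a toy diffusion-style inpainter: masked pixels are filled by repeatedly averaging their known neighbours. This is a minimal sketch of generic inpainting, not the patented algorithm; the image, mask, and iteration count are invented for the example.

```python
# Toy diffusion-style inpainting: fill masked pixels by iteratively
# averaging their neighbours. Real systems use more sophisticated
# (e.g. PDE- or exemplar-based) inpainting.

def inpaint(image, mask, iterations=50):
    """image: 2D list of floats; mask: 2D list, True = pixel to reconstruct."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neigh = [out[ny][nx]
                             for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                             if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(neigh) / len(neigh)
        out = nxt
    return out

# 3x3 image whose centre pixel is masked out (e.g. a visibility
# restriction); its value is reconstructed from the known pixels.
img = [[1.0, 1.0, 1.0],
       [1.0, 0.0, 1.0],
       [1.0, 1.0, 1.0]]
msk = [[False, False, False],
       [False, True, False],
       [False, False, False]]
result = inpaint(img, msk)
print(round(result[1][1], 3))  # → 1.0
```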
-
Patent number: 11417012
Abstract: A method includes the steps: a) capturing an image of an environment surrounding a vehicle by a camera; b) determining an area of increased brightness in the image based on pixels having a brightness exceeding a predefined threshold; c) estimating position coordinates of a light source in the environment from the area of increased brightness; d) detecting a shadow of the vehicle in the image; and e) determining a type of the light source as a spot light source or a direction light source from the shadow in the environment.
Type: Grant
Filed: May 22, 2019
Date of Patent: August 16, 2022
Assignee: CONTI TEMIC MICROELECTRONIC GMBH
Inventors: Andreas Panakos, Martin Buerker, Charlotte Gloger, Frank Kittmann, Moritz Meyers, Markus Friebe
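The classification in steps d)-e) rests on a geometric observation: a distant (direction) light source such as the sun casts a shadow with near-parallel edges, while a nearby spot light casts a shadow whose edges diverge. A toy classifier for that idea, with the edge directions and tolerance invented for the example:

```python
import math

# Toy classifier: compare the directions of two shadow boundary edges.
# Near-parallel edges suggest a direction light source; clearly
# diverging edges suggest a spot light source. The threshold is an
# illustrative assumption.

def light_source_type(edge_dir_a, edge_dir_b, parallel_tol_deg=3.0):
    ax, ay = edge_dir_a
    bx, by = edge_dir_b
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return "directional" if angle < parallel_tol_deg else "spot"

print(light_source_type((1.0, 0.01), (1.0, -0.01)))  # near-parallel edges
print(light_source_type((1.0, 0.4), (1.0, -0.4)))    # diverging edges
```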
-
Patent number: 11410430
Abstract: A surround view system for a vehicle includes a detection unit and an evaluation unit. The detection unit is designed to detect data relating to the surroundings. The evaluation unit is designed to identify an object in the detected surroundings data and to determine the 3D shape of this object. The evaluation unit is additionally designed to add the determined 3D shape to a projection surface of the surround view system to produce a modified projection surface. The evaluation unit is designed to project the surroundings data onto the modified projection surface.
Type: Grant
Filed: February 26, 2019
Date of Patent: August 9, 2022
Assignee: CONTI TEMIC MICROELECTRONIC GMBH
Inventors: Martin Buerker, Charlotte Gloger, Andreas Panakos, Frank Kittmann
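The surface modification can be sketched in one dimension: a surround-view "bowl" is flat near the vehicle and rises further out, and where an object is detected the surface is raised to the object's height so image data projects onto it rather than onto the ground behind it. All shapes, distances, and heights below are illustrative assumptions, not the patented construction.

```python
# Toy 1D cross-section of a surround-view projection surface and the
# insertion of a detected object's 3D extent into it.

def base_surface_height(distance, bowl_start=8.0):
    # Flat ground plane near the vehicle, rising bowl wall further out.
    return 0.0 if distance < bowl_start else (distance - bowl_start) * 0.5

def modified_surface_height(distance, obj_distance, obj_height, obj_depth=1.0):
    # Raise the surface over the detected object's footprint.
    if obj_distance <= distance <= obj_distance + obj_depth:
        return max(base_surface_height(distance), obj_height)
    return base_surface_height(distance)

# Object (e.g. a parked car, 1.5 m tall) detected 5 m away:
print(modified_surface_height(5.5, obj_distance=5.0, obj_height=1.5))  # → 1.5
print(modified_surface_height(3.0, obj_distance=5.0, obj_height=1.5))  # → 0.0
```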
-
Publication number: 20220215668
Abstract: A method is disclosed for generating an image of vehicle surroundings, including providing multiple vehicle cameras which are arranged, in particular, on the bodywork of a vehicle (S1). Individual HDR images are calculated and/or generated from image data or images from the vehicle cameras (S2). The multiple individual HDR images are assembled to produce an overall HDR image. An image having a low dynamic range, in particular an LDR image, is calculated from the overall HDR image. An apparatus for generating an image of vehicle surroundings is also disclosed.
Type: Application
Filed: May 5, 2020
Publication date: July 7, 2022
Applicant: Conti Temic microelectronic GmbH
Inventors: Martin Bürker, Markus Friebe, Andreas Panakos, Danilo Djordjevic, Markus Eich
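The HDR-to-LDR pipeline can be sketched for a single pixel: several exposures are merged into a radiance estimate using a mid-range-weighted average (in the spirit of standard multi-exposure merging), then compressed with a simple Reinhard-style tone-mapping operator. The exposure times and weighting function are illustrative assumptions, not the patented algorithm.

```python
# Toy single-pixel HDR merge followed by LDR tone mapping.

def merge_hdr(exposures):
    """exposures: list of (pixel_value_in_[0,1], exposure_time) pairs
    for one scene point; returns a radiance estimate."""
    def weight(v):  # trust mid-range values most (hat function)
        return 1.0 - abs(2.0 * v - 1.0)
    num = sum(weight(v) * (v / t) for v, t in exposures)
    den = sum(weight(v) for v, _ in exposures)
    return num / den if den > 0 else 0.0

def tone_map(radiance):
    # Reinhard global operator: maps [0, inf) into [0, 1).
    return radiance / (1.0 + radiance)

# Same scene point observed at three exposure times:
radiance = merge_hdr([(0.9, 1.0), (0.5, 0.5), (0.1, 0.1)])
ldr = tone_map(radiance)
print(round(radiance, 2), round(ldr, 2))
```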
-
Publication number: 20220156894
Abstract: A method for processing images is described, wherein a scenery is recorded as at least one raw image by at least one optical capture means mounted on a vehicle, and wherein image data of the scenery are mapped incompletely and/or erroneously in at least one area of the subsequently rendered render image. In order to provide the user of one or more cameras on a motor vehicle that have visibility restrictions with a more agreeable visual experience, the method includes identifying the area(s) of incomplete and/or erroneous mapping in the render image on the basis of the existing visibility restrictions, generating masks that enclose the area(s) of incomplete and/or erroneous mapping as masked areas, reconstructing image data in unmasked areas of the render image by means of digital inpainting and synthesizing them together with the masked areas to produce a correction image, and displaying the completed and/or corrected correction image.
Type: Application
Filed: January 30, 2020
Publication date: May 19, 2022
Applicant: Conti Temic microelectronic GmbH
Inventors: Ekaterina Grünwedel, Charlotte Gloger, Andreas Panakos
-
Publication number: 20210256278
Abstract: The invention relates to a driver assistance system (100) for a vehicle (105) for detecting light conditions in the vehicle (105), having a sensor arrangement (102, 103, 104) designed to capture sensor data, and having a control device (101) designed to ascertain if the sight of a driver of the vehicle (105) is negatively influenced by a source of stray light external to the vehicle.
Type: Application
Filed: September 3, 2019
Publication date: August 19, 2021
Inventors: Charlotte GLOGER, Martin BUERKER, Andreas PANAKOS, Markus FRIEBE
-
Publication number: 20210225023
Abstract: The invention relates to a device and a method for estimating position coordinates and the type of light sources in images of the surrounding environment, in particular in images of the surrounding environment of a surround-view camera in vehicles. The type of light source (120, 125, 128) can be a spot light source or a direction light source.
Type: Application
Filed: May 22, 2019
Publication date: July 22, 2021
Inventors: Andreas PANAKOS, Martin BUERKER, Charlotte GLOGER, Frank KITTMANN, Moritz MEYERS, Markus FRIEBE
-
Publication number: 20210004614
Abstract: The invention relates to a surround view system (1) for a vehicle (2). The surround view system (1) comprises a detection unit (20) and an evaluation unit (10). The detection unit (20) is designed to detect data relating to the surroundings. The evaluation unit (10) is designed to identify an object (3) in the detected data relating to the surroundings and to determine the 3D shape of this object. The evaluation unit (10) is additionally designed to add the determined 3D shape to a projection surface (15) of the surround view system (1) for the detected data relating to the surroundings such that an adapted projection surface (16) results. The evaluation unit (10) is designed to project the data relating to the surroundings onto the adapted projection surface (16).
Type: Application
Filed: February 26, 2019
Publication date: January 7, 2021
Inventors: Martin BUERKER, Charlotte GLOGER, Andreas PANAKOS, Frank KITTMANN
-
Patent number: 10867401
Abstract: A method and device for determining an ego-motion of a vehicle are disclosed. Respective sequences of consecutive images are obtained from a front view camera, a left side view camera, a right side view camera and a rear view camera and merged. A virtual projection of the images to a ground plane is provided using an affine projection. An optical flow is determined from the sequence of projected images, an ego-motion of the vehicle is determined from the optical flow, and the ego-motion is used to predict a kinematic state of the car.
Type: Grant
Filed: August 16, 2017
Date of Patent: December 15, 2020
Assignee: Application Solutions (Electronics and Vision) Ltd.
Inventors: Rui Guerreiro, Andreas Panakos, Carlos Silva, Dev Yadav
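The flow-to-ego-motion step can be sketched with the standard small-motion model: the flow f at a ground-plane point p = (px, py) is approximately t + ω·(-py, px), where t is the planar translation and ω the yaw rate, both recoverable by least squares. The points and flow values below are synthetic; this is the generic model, not the patented pipeline.

```python
# Toy least-squares ego-motion estimate from ground-plane optical flow.

def estimate_ego_motion(points, flows):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    fx = sum(f[0] for f in flows) / n
    fy = sum(f[1] for f in flows) / n
    # Yaw rate from centred points: cross(p, f) / |p|^2 in aggregate.
    num = den = 0.0
    for (px, py), (vx, vy) in zip(points, flows):
        cx, cy = px - mx, py - my
        num += cx * (vy - fy) - cy * (vx - fx)
        den += cx * cx + cy * cy
    omega = num / den if den > 0 else 0.0
    t = (fx + omega * my, fy - omega * mx)
    return t, omega

# Synthetic flow generated from a known motion, then recovered:
pts = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
true_t, true_w = (0.2, 0.0), 0.1
flw = [(true_t[0] - true_w * p[1], true_t[1] + true_w * p[0]) for p in pts]
t, w = estimate_ego_motion(pts, flw)
print(round(t[0], 3), round(t[1], 3), round(w, 3))  # → 0.2 0.0 0.1
```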
-
Patent number: 10719955
Abstract: The application provides a method of calibrating a camera of a vehicle. The vehicle has a reference frame. The method comprises taking an image of a scene by the camera. The ground plane of the vehicle is then determined according to features of the image. An origin point of the vehicle reference frame is later defined as being located on the determined ground plane. A translation of a reference frame of the camera is afterward determined for aligning the camera reference frame with the vehicle reference frame.
Type: Grant
Filed: October 13, 2017
Date of Patent: July 21, 2020
Assignee: Application Solutions (Electronics and Vision) Ltd.
Inventors: Rui Guerreiro, Andreas Panakos, Carlos Silva, Dev Yadav
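One simple way to realise the final alignment step: once the ground plane is known in the camera reference frame (unit normal n and distance d, so that n · x = d for plane points), a vehicle origin can be placed at the camera's foot point on the plane, and the camera-to-vehicle translation is the vector to that point. The plane values below are synthetic, and this is only one plausible reading of the step, not the patented procedure.

```python
import math

# Toy camera-to-vehicle translation from a ground plane expressed in
# the camera frame as n . x = d (n unit normal, d signed distance).

def camera_to_vehicle_translation(normal, d):
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    # Closest point on the plane to the camera centre (at the origin).
    return (nx * d, ny * d, nz * d)

# Camera mounted 1.2 m above a level ground plane whose normal points
# along the camera's y axis:
t = camera_to_vehicle_translation((0.0, 1.0, 0.0), 1.2)
print(t)  # → (0.0, 1.2, 0.0)
```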
-
Patent number: 10652466
Abstract: A method and a device for image stabilization of an image sequence of one or more cameras of a vehicle are disclosed. Images of the surroundings of the vehicle are recorded and image points of the recorded images are projected to a unit sphere, where the projection includes applying a lens distortion correction. For each of the one or more cameras, a vanishing point is calculated using the projected image points on the unit sphere and a motion of the vanishing point on the unit sphere is tracked. The motion of the vanishing point is used to calculate a corrected projection to a ground plane.
Type: Grant
Filed: August 16, 2017
Date of Patent: May 12, 2020
Assignee: APPLICATIONS SOLUTIONS (ELECTRONIC AND VISION) LTD
Inventors: Rui Guerreiro, Carlos Silva, Andreas Panakos, Dev Yadav
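The correction step can be sketched on the unit sphere: the tracked vanishing point drifts when the vehicle pitches or rolls, and the rotation taking the drifted vanishing point back to its reference position (via the Rodrigues formula) can be applied to the projection. The vectors below are synthetic examples; this is a generic stabilisation sketch, not the patented computation.

```python
import math

# Toy vanishing-point stabilisation on the unit sphere.

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rotation_between(a, b):
    """Rodrigues rotation matrix taking unit vector a onto unit vector b."""
    v, c = cross(a, b), sum(x * y for x, y in zip(a, b))
    k = 1.0 / (1.0 + c)  # assumes a != -b
    vx, vy, vz = v
    return [[c + k * vx * vx, k * vx * vy - vz, k * vx * vz + vy],
            [k * vx * vy + vz, c + k * vy * vy, k * vy * vz - vx],
            [k * vx * vz - vy, k * vy * vz + vx, c + k * vz * vz]]

def apply(R, p):
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

vp_ref = normalize((0.0, 0.0, 1.0))     # reference forward direction
vp_now = normalize((0.05, -0.02, 1.0))  # drifted VP (vehicle pitch/roll)
R = rotation_between(vp_now, vp_ref)    # correcting rotation
corrected = apply(R, vp_now)            # back to the reference direction
print(tuple(round(c, 6) for c in corrected))
```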
-
Publication number: 20180040141
Abstract: The application provides a method of calibrating a camera of a vehicle. The vehicle has a reference frame. The method comprises taking an image of a scene by the camera. The ground plane of the vehicle is then determined according to features of the image. An origin point of the vehicle reference frame is later defined as being located on the determined ground plane. A translation of a reference frame of the camera is afterward determined for aligning the camera reference frame with the vehicle reference frame.
Type: Application
Filed: October 13, 2017
Publication date: February 8, 2018
Applicant: Applications Solutions (Electronic and Vision) Ltd
Inventors: Rui Guerreiro, Andreas Panakos, Carlos Silva, Dev Yadav
-
Publication number: 20170347030
Abstract: A method and a device for image stabilization of an image sequence of one or more cameras of a vehicle are disclosed. Images of the surroundings of the vehicle are recorded and image points of the recorded images are projected to a unit sphere, where the projection includes applying a lens distortion correction. For each of the one or more cameras, a vanishing point is calculated using the projected image points on the unit sphere and a motion of the vanishing point on the unit sphere is tracked. The motion of the vanishing point is used to calculate a corrected projection to a ground plane.
Type: Application
Filed: August 16, 2017
Publication date: November 30, 2017
Applicant: Applications Solutions (Electronic and Vision) Ltd
Inventors: Rui Guerreiro, Carlos Silva, Andreas Panakos, Dev Yadav
-
Publication number: 20170345164
Abstract: A method and device for determining an ego-motion of a vehicle are disclosed. Respective sequences of consecutive images are obtained from a front view camera, a left side view camera, a right side view camera and a rear view camera and merged. A virtual projection of the images to a ground plane is provided using an affine projection. An optical flow is determined from the sequence of projected images, an ego-motion of the vehicle is determined from the optical flow, and the ego-motion is used to predict a kinematic state of the car.
Type: Application
Filed: August 16, 2017
Publication date: November 30, 2017
Applicant: Applications Solutions (Electronic and Vision) Ltd
Inventors: Rui Guerreiro, Andreas Panakos, Carlos Silva, Dev Yadav
-
Publication number: 20160280135
Abstract: In an embodiment, an animal detection system (11) includes a surround view system (12), a location determination system (13) and a processing module (14). The processing module (14) is configured to receive data from the surround view system (12) and to receive data regarding the location of the vehicle (10) from the location determination system (13). The processing module (14) is configured to determine if the location of the vehicle (10) is within an animal zone and, if so, to process data received from the surround view system (12). If the data from the surround view system (12) indicates the presence of an animal, the processing module (14) activates an alert (19).
Type: Application
Filed: March 9, 2016
Publication date: September 29, 2016
Inventors: Louis-Marie AUBERT, Paul FLANAGAN, Rui GUERREIRO, Clifford LAWSON, Andreas PANAKOS, John POWELL, Carlos SILVA, Greg WINGFIELD
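The gating logic reads naturally as: only process surround-view detections when the vehicle's position falls inside a known animal zone, and raise an alert when an animal is detected there. A toy sketch of that flow, where the zone coordinates, the flat-earth distance approximation, and the detection flag are all illustrative assumptions:

```python
import math

# Toy animal-zone gating: hypothetical zones as (lat, lon, radius_km).
ANIMAL_ZONES = [
    (51.50, -0.12, 2.0),
]

def in_animal_zone(lat, lon):
    for zlat, zlon, radius_km in ANIMAL_ZONES:
        # Small-area approximation: 1 degree of latitude is about 111 km.
        dist_km = 111.0 * math.hypot(lat - zlat,
                                     (lon - zlon) * math.cos(math.radians(zlat)))
        if dist_km <= radius_km:
            return True
    return False

def process(lat, lon, surround_view_detects_animal):
    if not in_animal_zone(lat, lon):
        return "no processing"
    return "alert" if surround_view_detects_animal else "no animal"

print(process(51.50, -0.12, True))  # inside zone, animal detected
print(process(48.85, 2.35, True))   # outside any zone
```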