Masayuki Takemura has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Abstract: A lane departure warning device includes: a compartment line detection unit that detects a compartment line in an image of the running lane on which a vehicle is traveling, the image being captured by an image capturing device of the vehicle; a dirtiness detection unit that detects dirtiness on either a lens of the image capturing device or a partition between the image capturing device and the outside of the vehicle interior; a departure judgment unit that judges whether or not the vehicle departs from the compartment line; and a warning judgment unit that, when it is judged that the vehicle departs from the compartment line, outputs a warning signal based on a degree of the detected dirtiness. The position of the vehicle with respect to the compartment line when the warning signal is output varies depending on the degree of the dirtiness.
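The dirtiness-dependent warning position described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, thresholds, and the linear relation between dirtiness and warning distance are all assumptions.

```python
def warning_position_threshold(dirtiness_degree, base_threshold_m=0.20, max_advance_m=0.15):
    """Illustrative only: the dirtier the lens, the farther from the
    compartment line the warning fires. Numbers are assumed, not from the patent."""
    if not 0.0 <= dirtiness_degree <= 1.0:
        raise ValueError("dirtiness_degree must be in [0, 1]")
    # Advance the warning position proportionally to the detected dirtiness.
    return base_threshold_m + max_advance_m * dirtiness_degree

def should_warn(distance_to_line_m, departing, dirtiness_degree):
    """Emit a warning signal only when the vehicle is judged to be departing
    and is within the dirtiness-adjusted distance of the compartment line."""
    return departing and distance_to_line_m <= warning_position_threshold(dirtiness_degree)
```

With a heavily dirtied lens the same lateral position triggers the warning earlier, matching the abstract's statement that the warning position varies with the degree of dirtiness.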
Abstract: An in-vehicle apparatus includes: a camera, with a shielded area formed within its photographing area by a light shielding member, that outputs a photographic image of the area surrounding a vehicle photographed through a photographic lens; and an adhering matter detection unit that detects adhering matter settled on the photographic lens, based upon the images in the shielded area included in a plurality of photographic images output from the camera at different time points.
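One way to read this: the shielded area should remain dark and stable over time, so a persistent brightness rise there suggests matter on the lens scattering light. The sketch below is an assumption-laden illustration of that idea; the dark level and tolerance are invented values.

```python
import statistics

def detect_adhering_matter(shielded_samples, dark_level=8.0, tolerance=4.0):
    """Illustrative only. `shielded_samples` is a list of mean pixel
    intensities of the shielded area, one per photographic image taken
    at a different time point."""
    mean_level = statistics.fmean(shielded_samples)
    # A persistent rise above the expected dark level suggests adhering matter.
    return mean_level > dark_level + tolerance
```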
June 28, 2013
January 9, 2014
Masahiro KIYOHARA, Kota IRIE, Masayuki TAKEMURA, Katsuyuki NAKAMURA
Abstract: To provide an onboard environment recognition system capable of preventing, with a reduced processing load, erroneous recognition caused by light from the headlights of surrounding vehicles.
Abstract: A vehicle-mounted environment recognition apparatus includes: a simple pattern matching unit that extracts an object candidate from an image acquired from a vehicle-mounted image capturing apparatus by using a pattern shape stored in advance, and outputs a position of the object candidate; an area change amount prediction unit that calculates a change amount prediction for the extracted object candidate, on the basis of an object change amount prediction calculation method set differently for each of a plurality of areas obtained by dividing the acquired image, detected vehicle behavior information, and the inputted position of the object candidate, and outputs a predicted position of the object; and a tracking unit that tracks the object on the basis of the inputted predicted position of the object.
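The area-specific change amount prediction might look like the toy sketch below: the image is split into areas, each with its own motion model. The two-area split at row 240, the gain values, and the function names are all hypothetical, chosen only to show the per-area idea.

```python
def predict_position(candidate_xy, vehicle_dx_px, area_gains):
    """Illustrative only: predict an object candidate's next image position
    using a change-amount gain chosen by the image area the candidate lies in.
    The 'near'/'far' split at row 240 and the gains are assumed values."""
    x, y = candidate_xy
    area = "near" if y >= 240 else "far"
    # Nearby image areas move more per unit of vehicle motion than far ones.
    return (x + area_gains[area] * vehicle_dx_px, y), area
```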
Abstract: An in-car-use multi-application execution device is provided that ensures safety while maintaining convenience by securing operation of a plurality of applications and suppressing occurrence of a termination process within a limited processing capacity, without degrading real-time performance. The in-car-use multi-application execution device dynamically predicts a processing time for each application, and schedules each application on the basis of the predicted processing time. If it is determined, as a result of the scheduling, that an application will fail to complete its process within a prescribed cycle, a process is executed that terminates the application or degrades its function on the basis of a preset priority order.
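The scheduling policy described above can be sketched as a simple budgeted loop. This is a hedged illustration: the tuple layout, the greedy drop of whatever no longer fits, and the application names in the test are assumptions, not the patented scheduler.

```python
def schedule_applications(apps, cycle_ms):
    """Illustrative only: run applications in priority order (lower number =
    higher priority) and terminate any whose predicted processing time no
    longer fits in the remaining cycle budget.
    Each app is a (name, priority, predicted_ms) tuple."""
    scheduled, terminated = [], []
    budget = cycle_ms
    for name, priority, predicted_ms in sorted(apps, key=lambda a: a[1]):
        if predicted_ms <= budget:
            scheduled.append(name)
            budget -= predicted_ms
        else:
            # Cannot complete within the prescribed cycle: drop by priority.
            terminated.append(name)
    return scheduled, terminated
```

A real implementation would degrade an application's function (e.g. lower its frame rate) before terminating it outright, as the abstract allows.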
Abstract: A vehicle controller is provided capable of expanding the application range of departure prevention control while suppressing erroneous control. The vehicle controller includes: a vehicle-mounted camera 600 that captures an image in front of a vehicle; and an ECU 610 that decides one vehicle control method from a plurality of vehicle control methods and controls an actuator with the decided vehicle control method. The vehicle-mounted camera includes an area-specific confidence calculation section 400 that divides the captured image into a plurality of areas on the basis of the acquired image and a recognized lane, calculates a confidence for each divided area, and outputs area-specific confidence information, and the ECU decides a vehicle control method in accordance with the area-specific confidence information.
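A toy version of "decide a control method from area-specific confidence" could look like the following. The method names, thresholds, and the min-over-areas rule are all invented for illustration; the patent does not specify them.

```python
def decide_control_method(area_confidences, strong=0.8, weak=0.5):
    """Illustrative only: choose a departure prevention control method from
    per-area confidence information, keyed by area name."""
    worst = min(area_confidences.values())
    if worst >= strong:
        return "steering_intervention"   # full control when all areas are reliable
    if worst >= weak:
        return "warning_only"            # degrade to a warning at medium confidence
    return "no_control"                  # suppress control to avoid erroneous action
```

Falling back to weaker interventions as confidence drops is one way to expand the application range of the control while suppressing erroneous control, as the abstract claims.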
Abstract: A virtual vehicle which approaches a monitor-side vehicle and a virtual background are defined in a camera image; a region in which the virtual vehicle moves fast with respect to the virtual background is defined as a first region F1, and a region in which the virtual vehicle moves slowly with respect to the virtual background is defined as a second region F2. Then, the first region F1 and the second region F2 are combined with an actual camera image. In the first region F1, in which the virtual vehicle moves fast, a monitored vehicle is detected by the movement aspect of feature portions in the region, and in the second region F2, in which the virtual vehicle moves slowly, a monitored vehicle is detected by pattern recognition.
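The two-region split can be sketched as a dispatcher: motion-based detection where apparent motion is fast, pattern recognition where it is slow. The thresholds and input representation are assumptions for illustration only.

```python
def detect_monitored_vehicle(region, feature_motion_px, pattern_score):
    """Illustrative only: in region F1 (fast apparent motion against the
    virtual background) rely on the movement of feature portions; in region
    F2 (slow apparent motion) rely on a pattern-recognition score.
    Thresholds are assumed values."""
    if region == "F1":
        # Fast apparent motion: large feature displacement implies a vehicle.
        return feature_motion_px >= 5.0
    if region == "F2":
        # Slow apparent motion: fall back to pattern recognition.
        return pattern_score >= 0.7
    raise ValueError("region must be 'F1' or 'F2'")
```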
Abstract: Provided are an information processing device and a processing method, both of which are capable of executing processing in response to the traveling situation of a vehicle, thereby ensuring safety in an emergency while maintaining convenience.
Abstract: A traveling lane detector according to the present invention includes: an imaging unit mounted on a vehicle to take a road surface image; and an image processor performing image processing on the image to detect lane marks on the road surface. The image processor judges whether the vehicle is crossing the lane marks; when the vehicle is not crossing any of the lane marks, it defines first and second windows in the image for detecting the lane marks located respectively on the left and right parts of the road surface in front of or behind the vehicle, and performs image processing on the image in each of the windows to detect the lane marks; when the vehicle is crossing any of the lane marks, it defines a third window including the currently-crossed lane mark in the image, and performs image processing on the image in the third window to detect the lane mark.
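The window selection logic reads naturally as a small branch. This sketch only illustrates the selection; the image-column bounds and window widths are made-up numbers, and a real detector would derive them from the image geometry.

```python
def lane_detection_windows(crossing_side):
    """Illustrative only: when not crossing, search separate left/right
    windows; when crossing, search a single window centred on the crossed
    lane mark. `crossing_side` is None, 'left', or 'right'; bounds are
    assumed image-column ranges for a 640-pixel-wide image."""
    if crossing_side is None:
        return {"first": (0, 300), "second": (340, 640)}   # left / right windows
    # Third window straddles the lane mark currently being crossed.
    centre = 160 if crossing_side == "left" else 480
    return {"third": (centre - 120, centre + 120)}
```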
Abstract: A virtual vehicle which approaches a monitor-side vehicle and a virtual background are defined in a camera image; a region in which the virtual vehicle moves fast with respect to the virtual background is defined as a first region F1, and a region in which the virtual vehicle moves slowly with respect to the virtual background is defined as a second region F2. Then, the first region F1 and the second region F2 are applied to an actual camera image. In the first region F1, in which the virtual vehicle moves fast, a monitored vehicle is detected by the movement aspect of feature portions in the region, and in the second region F2, in which the virtual vehicle moves slowly, a monitored vehicle is detected by pattern recognition.
Abstract: Disclosed herein is a vehicle detection apparatus having first and second sensors to detect vehicles ahead of the host vehicle, and a judgment part to judge that, in the case where the second sensor detects an object which the first sensor has detected, the object is a vehicle. The judgment part continues to judge the object as a vehicle once it has judged it to be one, even if the second sensor does not detect the object in the next judgment. The first and second sensors are preferably a radar and a camera, respectively.
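The judgment hysteresis described in this abstract can be sketched as below: confirmation requires both sensors, but a confirmed judgment survives a momentary camera miss. The class shape, object-id scheme, and sensor flags are assumptions for illustration.

```python
class VehicleJudgment:
    """Illustrative only: judge an object as a vehicle when the second sensor
    (e.g. camera) confirms a first-sensor (e.g. radar) detection, and keep
    that judgment even if the camera misses the object on a later cycle."""
    def __init__(self):
        self.confirmed = set()  # object ids once judged to be vehicles

    def judge(self, obj_id, radar_detects, camera_detects):
        if radar_detects and camera_detects:
            self.confirmed.add(obj_id)
        # Retain the earlier judgment despite a momentary camera miss.
        return obj_id in self.confirmed
```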