Hideki Oyaizu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Abstract: An information processing apparatus according to an embodiment of the present technology includes a detection unit, an estimation unit, and a prediction unit. The detection unit detects a target object from an input image. The estimation unit estimates a posture of the detected target object. The prediction unit predicts an action of the target object on the basis of the estimated posture.
Abstract: In a calibration chart device 20, a marker whose distribution of an infrared-ray radiation amount along its moving direction is unimodal is moved sequentially in a first direction, and then in a second direction different from the first direction, while an infrared camera IRC shoots plural infrared images at the respective movement positions. From these images, a peak detector 32 detects, for each pixel, the position of the marker at which the pixel value is maximized. A calibration processor 36 calculates a camera parameter by using the marker position detected by the peak detector 32 for each pixel. With this configuration, calibration of the infrared camera can be performed with ease.
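The per-pixel peak-detection step can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: it assumes the IR frames for one movement direction are stacked into an array alongside the marker positions, and simply takes the position of the maximum response at each pixel.

```python
import numpy as np

def detect_peak_positions(frames, positions):
    """frames: (N, H, W) IR frames, one per marker stop; positions: (N,) marker positions."""
    frames = np.asarray(frames, dtype=float)
    positions = np.asarray(positions, dtype=float)
    peak_idx = np.argmax(frames, axis=0)      # (H, W): frame index of the max per pixel
    return positions[peak_idx]                # (H, W): marker position of the peak per pixel

# Toy data: a unimodal (Gaussian) radiation profile peaking at position 0.6.
positions = np.linspace(0.0, 1.0, 11)                 # marker stops along one direction
profile = np.exp(-((positions - 0.6) ** 2) / 0.01)    # unimodal intensity per stop
frames = profile[:, None, None] * np.ones((11, 2, 2))
peaks = detect_peak_positions(frames, positions)       # ~0.6 for every pixel
```

A real pipeline would run this once per movement direction, giving each pixel a 2-D marker coordinate to feed into the camera-parameter calculation.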
Abstract: The present disclosure relates to an imaging apparatus and an imaging method that permit capture of a bright image without using an expensive large-diameter lens. A mirror surface having an opening portion larger in area than an imaging element is formed at a stage preceding the imaging element. The mirror surface concentrates light from a subject surface. The imaging apparatus captures an image formed by light that directly enters the imaging element and light that is reflected by the mirror surface, and reconstructs a final image from the captured image. The present disclosure is applicable to an imaging apparatus.
Abstract: A signal processing apparatus includes a first position calculation unit that calculates a three-dimensional position of a target on a first coordinate system from a stereo image captured by a stereo camera, and a second position calculation unit that calculates a three-dimensional position of the target on a second coordinate system from a sensor signal of a sensor capable of obtaining position information of a depth direction together with position information of at least one of a lateral direction and a longitudinal direction. A correspondence detection unit detects a correspondence relationship between the target on the first coordinate system and the target on the second coordinate system, and a positional relationship information estimating unit estimates positional relationship information of the first coordinate system and the second coordinate system on the basis of the detected correspondence relationship.
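The abstract does not specify how the positional relationship between the two coordinate systems is estimated; one standard choice for matched 3-D points is the Kabsch (Procrustes) algorithm, sketched here as an assumption-laden illustration.

```python
import numpy as np

def estimate_rigid_transform(pts_a, pts_b):
    """Estimate R, t such that pts_b ~ R @ pts_a + t, from matched (N, 3) point sets."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)             # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Usage: recover a known rotation about z plus a translation from 10 matched points.
rng = np.random.default_rng(0)
pts_a = rng.normal(size=(10, 3))                  # targets in the stereo-camera frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
pts_b = pts_a @ R_true.T + t_true                 # same targets in the sensor frame
R_est, t_est = estimate_rigid_transform(pts_a, pts_b)
```

With noise-free correspondences, the recovered rotation and translation match the ground truth to machine precision.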
Abstract: There is provided an image processing device including a far-infrared acquisition unit that acquires a far-infrared image, a first extraction unit that extracts a plurality of first markers having a first temperature from the far-infrared image, and a far-infrared specification unit that specifies a position of each of a plurality of second markers having a second temperature in the far-infrared image based on a geometric relationship among the plurality of first markers.
Abstract: Provided are an image processing apparatus and an image processing method that process a far-infrared image. The image processing apparatus includes a region extraction section, a modal transformation section, and a superimposition section. The region extraction section extracts a region of interest within a visible-light image captured by a visible-light camera. The modal transformation section receives an image of the region of interest within an infrared image captured by an infrared camera observing the same subject as the visible-light camera, and transforms the received image to a modal image. The superimposition section generates a presentation image by superimposing the modal image on the region of interest within the visible-light image. The modal transformation section transforms a far-infrared image of the region of interest to a modal image including an information modal familiar to humans by using, for example, a database and a conditional probability distribution.
Abstract: A configuration is realized in which the field of view of an image captured by a lensless camera can be controlled and a restored image covering part of the imaging region is generated. The configuration includes a signal processing unit that receives observed image signals output by the image sensor of the lensless camera and generates a restored image of a restored image region that includes part of the captured image region of the lensless camera.
December 6, 2018; December 31, 2020
Inventors: Hideki Oyaizu, Ilya Reshetouski, Atsushi Ito
Abstract: Provided is an image processing device including an adjustment unit that adjusts either a background image, which is a far-infrared image showing a background that does not include an object, or a target image, which is a far-infrared image showing the object, on the basis of a time change model of an observation pixel value, and an extraction unit that extracts a target region including the object in the target image on the basis of a result of comparison between the background image and the target image after the adjustment is performed.
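A minimal sketch of this adjust-then-compare idea follows. The abstract's "time change model" is not specified, so as an assumption the drift between frames is modeled as a single additive offset, estimated with a median so that the object pixels themselves do not bias it; the target region is then the set of pixels whose adjusted difference exceeds a threshold.

```python
import numpy as np

def extract_target_region(background, target, threshold=10.0):
    """Return a boolean mask of pixels where the target departs from the adjusted background."""
    # Assumed time-change model (illustration only): a global additive offset.
    offset = np.median(target - background)       # robust to the object pixels
    adjusted = background + offset                # background adjusted to the target's epoch
    return np.abs(target - adjusted) > threshold

bg = np.full((4, 4), 100.0)                       # far-infrared background frame
tgt = bg + 5.0                                    # whole scene drifted warmer by 5
tgt[1, 1] = 160.0                                 # one pixel occupied by the object
mask = extract_target_region(bg, tgt, threshold=10.0)   # True only at (1, 1)
```

Only the object pixel survives the comparison; the uniform drift is absorbed by the adjustment.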
Abstract: An information acquisition unit acquires state information corresponding to a subject on a visible light image, such as temperature information and sound field information, indicating a subject state not shown on the visible light image. An effect processing unit performs an effect process on the visible light image on the basis of the state information acquired by the information acquisition unit. In the effect process, an effect component image is superimposed or a subject image is modified, based either on the type and temperature of the subject or on the type of a sound source and the volume of a sound.
Abstract: A polarized image acquisition unit 20 acquires polarized images in a plurality of polarization directions. A reflection information generation unit 30 generates reflection information indicating reflection components from the polarized images in the plurality of polarization directions acquired by the polarized image acquisition unit 20. A reflection information using unit 40 uses the reflection information generated by the reflection information generation unit 30 to acquire an image of a viewed object appearing in the polarized images. A depth estimation unit estimates a depth value of a reflective surface area and acquires a position of the viewed object on the basis of an image of the viewed object appearing in the reflective surface area and the estimated depth value. As a result, a viewed object located in a blind-spot area, for example, can be easily checked.
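One common heuristic for separating reflection components from a polarization stack, offered here only as an illustrative stand-in for the unspecified patented method, is that the per-pixel minimum over polarizer angles approximates the unpolarized (diffuse) component, while max minus min approximates the polarized reflection.

```python
import numpy as np

def split_reflection(polarized_stack):
    """polarized_stack: (angles, H, W) images at several polarizer angles."""
    stack = np.asarray(polarized_stack, dtype=float)
    diffuse = stack.min(axis=0)                   # unpolarized component estimate
    reflection = stack.max(axis=0) - diffuse      # polarized reflection estimate
    return diffuse, reflection

# Toy scene: uniform diffuse radiance 10, with a cos^2-modulated reflection of
# amplitude 5 at pixel (0, 0) across polarizer angles 0, 45, 90, 135 degrees.
base = np.full((2, 2), 10.0)
stack = np.stack([base.copy() for _ in range(4)])
stack[:, 0, 0] += [5.0, 2.5, 0.0, 2.5]
diffuse, reflection = split_reflection(stack)
```

The reflection map is nonzero only where a polarized (e.g. mirror-like) component exists, which is the information a downstream unit would use to examine objects seen in the reflective surface.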
Abstract: A control apparatus includes an input unit and a control unit. A pickup image from a camera provided on the own vehicle is input to the input unit. The control unit detects, from the input pickup image, a mirror provided on a different vehicle that exists in front of the own vehicle, detects a person from the mirror image of the detected mirror, and recognizes a state of the person from the image of the detected person. Further, the control unit performs an alerting process or a control process for the own vehicle, in accordance with the recognized state of the person, to prevent an accident of the own vehicle or the different vehicle.
Abstract: An image processing apparatus and an image processing method for processing far-infrared ray images are provided. Specific temperature ranges, constituted by pixels having values falling within a temperature range characteristic of a specific target, are extracted from a far-infrared ray image; a visible light image is captured of the same target. Of the specific temperature ranges, those having motion vectors close to each other are integrated so as to generate integrated ranges. The integrated ranges having motion vectors close to a global motion vector, indicative of the motion of the image as a whole, are excluded, yielding the excluded integrated ranges. Visible light ranges corresponding to the specific temperature ranges in the excluded integrated ranges are extracted from the visible light image to generate visible light motion ranges. The positions of the visible light motion ranges are corrected on the basis of the motion vector of the excluded integrated ranges as a whole.
Abstract: An image acquisition unit 341-1 acquires a polarization image and a non-polarization image indicating a peripheral area of a moving body, such as the peripheral area of a vehicle. A discrimination information generation unit 342-1 uses the polarization image acquired by the image acquisition unit 341-1 to generate analysis object discrimination information indicating a road surface or the like. An image analysis unit 344-1 uses an image of an image analysis area, set on the basis of the analysis object discrimination information generated by the discrimination information generation unit 342-1, with respect to the non-polarization image acquired by the image acquisition unit 341-1, and discriminates an object such as an obstacle on the road surface. This makes it possible to efficiently determine the presence of an object from the non-polarization image of the peripheral area of the moving body.
Abstract: The present disclosure relates to an information processing apparatus capable of detecting a plane constituting a movement-enabling region, as well as to an information processing method, a program, and a mobile object. A normal direction of a plane constituting a road surface is detected on the basis of polarized images in multiple polarizing directions acquired by a polarization camera. A laser ranging sensor measures a distance to a point on the road surface so as to measure a position of the point. The plane constituting the road surface is identified on the basis of information regarding the normal direction of the plane constituting the road surface and information regarding the position of the point on the road surface. This disclosure may be applied to vehicle-mounted systems.
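The final geometric step described in the abstract has a compact form: once the polarization images yield the road-surface normal n and the laser ranging sensor yields one point p on the surface, the plane is n · x = n · p. The sketch below, with illustrative values, shows how that plane then supports a simple height test for obstacles.

```python
import numpy as np

def road_plane(normal, point):
    """Plane through `point` with surface normal `normal`, as (unit n, d) with n . x = d."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = float(n @ np.asarray(point, dtype=float))
    return n, d

def height_above_plane(x, n, d):
    """Signed distance of point x from the plane n . x = d."""
    return float(n @ np.asarray(x, dtype=float) - d)

# Illustrative fusion: normal from the polarization camera, point from the ranger.
n, d = road_plane([0.0, 0.0, 1.0], [5.0, 2.0, 0.0])    # flat road at z = 0
h = height_above_plane([1.0, 1.0, 0.3], n, d)          # a point 0.3 m above the road
```

Points with height near zero lie on the movement-enabling region; significantly positive heights indicate candidate obstacles.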
Abstract: A laser range finder projects light at a predetermined angle with respect to the vertical direction while sweeping the projection direction horizontally, receives the reflected light, and detects the direction of and distance to an obstacle or the like according to the time difference between light projection and light reception. A normal direction of the flat plane forming a road surface is detected on the basis of a polarized image. The laser range finder projects light at a predetermined angle with respect to the vertical direction such that the light is orthogonal to the normal direction of the flat plane forming the road surface. The laser range finder can be applied to in-vehicle systems.
Abstract: An information processing apparatus according to an embodiment of the present technology includes a detection unit, an estimation unit, and a judgment unit. The detection unit detects a target object from an input image. The estimation unit estimates a posture of the detected target object. The judgment unit judges a possibility of the target object slipping on the basis of the estimated posture.
Abstract: Image processing methods and apparatus are described. The image processing method comprises receiving input of a visible-ray image and an infrared-ray image obtained by photographing a same subject, estimating, based on the visible-ray image, the infrared-ray image and motion information, a blur estimate associated with the visible-ray image, and generating, based on the estimated blur estimate, a corrected visible-ray image.
Abstract: Methods and apparatus for image processing are provided. The method comprises receiving input of a visible-ray image and a far-infrared-ray image obtained by photographing a same subject, and estimating a blur estimation result in the visible-ray image. Estimating the blur estimation result comprises calculating a correlation between the visible-ray image and each of a plurality of filter-applied far-infrared-ray images, in which a different filter is applied to the far-infrared-ray image, and selecting the filter for which the calculated correlation is highest. A correction process is then performed on the visible-ray image, based at least in part on the blur estimation result, to generate a corrected visible-ray image in which the blur is reduced. Generating the corrected visible-ray image comprises applying, to the visible-ray image, an inverse filter having an inverse characteristic to a characteristic of the selected filter.
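The filter-selection step can be sketched with toy 1-D signals; this is a hedged illustration of the idea, not the patented pipeline. The sharp far-infrared signal is blurred with each candidate kernel, and the kernel whose output correlates most strongly with the blurred visible signal is selected as the blur estimate.

```python
import numpy as np

def blur1d(signal, kernel):
    """Apply a 1-D blur kernel (same-size output)."""
    return np.convolve(signal, kernel, mode="same")

def select_blur_kernel(blurred_visible, sharp_far_ir, kernels):
    """Index of the kernel whose blur of the FIR signal best matches the visible signal."""
    def ncc(a, b):                                # normalized cross-correlation
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [ncc(blurred_visible, blur1d(sharp_far_ir, k)) for k in kernels]
    return int(np.argmax(scores))

# Toy data: the "visible" signal is the sharp FIR signal blurred with a width-7 box.
rng = np.random.default_rng(1)
sharp = rng.normal(size=256)                      # stands in for the sharp FIR image
kernels = [np.ones(n) / n for n in (3, 7, 15)]    # candidate box blurs
observed = blur1d(sharp, kernels[1])              # stands in for the blurred visible image
best = select_blur_kernel(observed, sharp, kernels)
```

The correction stage would then deconvolve the visible image with the inverse characteristic of the selected kernel, e.g. via Wiener filtering in practice, since a naive inverse amplifies noise.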
Abstract: An image captured by a far-infrared camera is analyzed to obtain a distribution of the road-surface temperature, and the course of the highest road-surface temperature is determined to be the traveling route. Further, automatic driving along the course of the highest road-surface temperature is performed. Furthermore, the state of the road-surface temperature distribution and the direction of the course of the highest road-surface temperature are displayed on a display section, so that a user (a driver) can recognize them. For example, a state analyzer detects a plurality of candidate courses travelable by the vehicle, calculates an average value of the road-surface temperature of each candidate course, and determines the candidate course having the largest average road-surface temperature to be the traveling route.
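The course-selection rule in the example reduces to an argmax over per-course average temperatures. The sketch below uses illustrative temperature samples; the course names and values are assumptions, not data from the patent.

```python
def select_course(candidate_temps):
    """candidate_temps: dict mapping course name -> list of road-surface temps (deg C).

    Returns the course with the highest average temperature, plus all averages."""
    averages = {name: sum(t) / len(t) for name, t in candidate_temps.items()}
    return max(averages, key=averages.get), averages

# Illustrative readings sampled from a far-infrared image along three courses.
course, avgs = select_course({
    "left":   [3.1, 2.8, 3.0],    # shaded lane, possibly icy
    "center": [6.5, 6.9, 7.2],    # sun-warmed asphalt
    "right":  [4.0, 4.4, 4.1],
})
```

The warmest course is preferred on the assumption that higher road-surface temperature correlates with better traction (less ice or snow).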