VEHICLE VISION SYSTEM WITH MICRO LENS ARRAY

MAGNA ELECTRONICS INC.

A vehicular vision system includes a camera disposed at a vehicle and having a field of view. The camera is operable to capture image data. The camera includes a pixelated array of photosensing elements. A lens array may be disposed at the pixelated array of photosensing elements. The lens array includes an array of lens elements for imaging light onto respective sub-arrays of photosensing elements of the pixelated array of photosensing elements. An image processor is operable to process captured image data and, responsive at least in part to image processing of captured image data, the vehicular vision system is operable to determine distance to an object present in the field of view of the camera.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is related to U.S. provisional applications, Ser. No. 61/770,048, filed Feb. 27, 2013, and Ser. No. 61/734,457, filed Dec. 7, 2012, which are hereby incorporated herein by reference in their entireties.

FIELD OF THE INVENTION

The present invention relates to vehicles with cameras mounted thereon and in particular to vehicles with one or more exterior-facing cameras, such as forward facing cameras and/or sideward facing cameras and/or rearward facing cameras.

BACKGROUND OF THE INVENTION

Use of imaging sensors in vehicle imaging systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 5,949,331; 5,670,935; and/or 5,550,677, which are hereby incorporated herein by reference in their entireties.

SUMMARY OF THE INVENTION

The present invention provides a camera for a vision system that utilizes one or more cameras or image sensors to capture image data of a scene exterior (such as forwardly) of a vehicle and provides a display of images indicative of or representative of the captured image data. The imager or camera of the vehicular vision system includes a pixelated imaging array of photosensing elements and a lens array disposed at the imaging array. The lens array comprises an array of lens elements for imaging light onto respective pixels or elements of the pixelated imaging array. The vision system may determine and provide disparity mapping to provide a stereo vision feature, especially at distances of less than about 3 m. The present invention thus may provide a camera or imager for a vehicular vision system that has reduced height and that may provide enhanced performance during parking maneuvers or vehicle maneuvering at or near objects.

These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a plan view of a vehicle with a vision system that incorporates cameras in accordance with the present invention;

FIG. 2 is a schematic of a 2×2 lens array in accordance with the present invention;

FIG. 3 is a schematic of an a×b lens array in accordance with the present invention;

FIG. 4 is a side elevation of an image sensor and lens array in accordance with the present invention;

FIG. 5 is a schematic of a side view of a lens array light field camera 30 capturing a schematized object 60, with two possible virtual viewpoints shown at 40 and 50;

FIG. 6 shows a vehicle 10 with generally evenly (ideal for a Stanford Light Field) distributed cameras 140 (schematized) over the vehicle;

FIG. 7 shows an enlarged view of a rear portion of the vehicle of FIG. 6; and

FIG. 8 shows a vehicle with cameras 151a-p distributed over the side area of the vehicle at hidden places.

DETAILED DESCRIPTION OF THE INVENTION

Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes an imaging system or vision system 12 that includes one or more imaging sensors or cameras (such as a rearward facing imaging sensor or camera 14a and/or a forwardly facing camera 14b at the front (or at the windshield) of the vehicle, and/or sidewardly/rearwardly facing cameras 14c, 14d at the sides of the vehicle), which capture images exterior of the vehicle, with the cameras having a lens for focusing images at or onto an imaging array or imaging plane of the camera (FIG. 1). The vision system 12 is operable to process image data captured by the cameras and may provide displayed images at a display device 16 for viewing by the driver of the vehicle. Optionally, the vision system may process image data to detect objects, such as objects to the rear of the subject or equipped vehicle during a reversing maneuver, or such as approaching or following vehicles or vehicles at a side lane adjacent to the subject or equipped vehicle or the like.

Integrating cameras into vehicle side mirrors and into the rear portion or rear hatch of a vehicle is known. A two-lens system with different focal lengths projecting onto one imager chip is described in U.S. patent application Ser. No. 13/534,657, filed Jun. 27, 2012 (Attorney Docket MAG04 P-1892), which is hereby incorporated herein by reference in its entirety. The use of monolithic wafer level cameras with optics, imager and bus driver on one chip, optionally with a tunable liquid lens and optionally using micromechanical elements (MEMs), also referred to as DLP, is described in International Publication No. WO 2013/081985, which is hereby incorporated herein by reference in its entirety. Micro lens cameras are sometimes used in cellular telephone applications.

Conventional vehicular cameras occupy considerable space (such as about 3 cm by about 3 cm by about 5 cm or thereabouts), which limits the design options for hatch handle integrations (such as described in U.S. provisional application Ser. No. 61/736,103, filed Dec. 12, 2012, which is hereby incorporated herein by reference in its entirety) and mirror integrations, especially where the height of the camera is a concern. There are also concerns with discriminating distances of less than about 50 cm from the camera or rearward of the vehicle when employing ultrasound sensor systems or wide angle or fish eye cameras.

The present invention provides an array camera for use as a vehicle rear camera, especially in hatch handle integration or the like, and/or as a surround view side camera, especially as integrated side mirror (housing) cameras, or the like.

Array cameras employ several lens systems, each projecting onto one of a limited number of imager pixel sub-arrays. Each sub-array has its own lens. The sub-arrays are directly adjacent to one another and together cover the whole imager, such as can be seen in the schematics of FIGS. 2, 3 and 4. For example, and as shown in FIG. 2, the camera may have four lenses disposed over respective portions or arrays or sub-arrays of the imager (with each sub-array having, for example, about 500,000 pixels or photosensing elements for a two megapixel imager) or, for example, and as shown in FIG. 3, the camera may have multiple lenses (such as up to about 400 or more lenses) disposed over respective portions or arrays or sub-arrays of the imager (with each sub-array having, for example, about 5,000 pixels or photosensing elements for a two megapixel imager). Typically, each lens and sub-array produces one (merged) pixel or output (such as a 24 bit value) in the resulting image of the light field. One advantage is that the lens systems can be made simpler or less advanced, since there are redundant sub-arrays covering at least in part the identical portion of the outside scene (as shown in FIGS. 4 and 5, the fields of view of the lenses overlap with those of other lenses and sub-arrays or covered array portions).
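
The following is a minimal sketch (not from the patent text) of the sub-array partitioning of FIGS. 2 and 3: a two megapixel frame is split into one sub-array per lens element, and each lens/sub-array pair is merged to a single output pixel of the light field image. The imager geometry (1600×1200) and the merge-by-averaging step are assumptions for illustration.

```python
import numpy as np

def merge_subarrays(raw, rows, cols):
    """Split a raw imager frame into rows x cols sub-arrays (one per lens
    element) and merge each sub-array to one output value by averaging."""
    h, w = raw.shape
    sh, sw = h // rows, w // cols  # sub-array height/width in pixels
    blocks = raw[:rows * sh, :cols * sw].reshape(rows, sh, cols, sw)
    return blocks.mean(axis=(1, 3))  # one merged pixel per lens element

# Assumed 2 MP imager geometry: 1600 x 1200 pixels (~1.92 MP)
frame = np.random.randint(0, 4096, (1200, 1600)).astype(float)

merged_2x2 = merge_subarrays(frame, 2, 2)      # 4 sub-arrays of ~500,000 pixels (FIG. 2)
merged_20x20 = merge_subarrays(frame, 20, 20)  # 400 sub-arrays of ~5,000 pixels (FIG. 3)
print(merged_2x2.shape, merged_20x20.shape)    # (2, 2) and (20, 20)
```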

The barrel of the imager or camera may be made much smaller or may even be eliminated, since the number of pixels covered by each lens is smaller (for example, a conventional single lens may have to cover two million pixels (for a two megapixel or 2 MP imager), while the multiple lenses may each cover a much smaller number of pixels). Thus, each sub-array's lens system may be substantially reduced in height.

Typically, all lens systems will have the same or substantially the same focal length (such as for a wide angle lens from about 120 degrees to about 210 degrees) and typically all of the camera's viewing directions will be chosen to be substantially the same or identical (all in parallel). Thus, the disparity due to the spacing between the sub-arrays will allow the system to do disparity mapping and thereby to achieve stereo vision, especially at distances of less than about 3 m (which is a region of interest for vehicle parking scenes).
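
The disparity-to-depth arithmetic behind this can be illustrated as follows; the focal length, baseline and pixel pitch below are assumed values for a low-profile wafer-level lens array, not figures from the patent.

```python
# Depth from disparity: Z = f * B / (d * p), with focal length f, baseline B
# between two sub-array (lens) centers, disparity d in pixels, pixel pitch p.
f = 1.5e-3   # focal length: 1.5 mm (assumed)
B = 2.0e-3   # baseline: 2 mm between lens elements (assumed)
p = 3.0e-6   # pixel pitch: 3 um (assumed)

def depth_from_disparity(d_pixels):
    return f * B / (d_pixels * p)

for d in (1, 2, 4):
    print(f"disparity {d} px -> depth {depth_from_disparity(d):.2f} m")
# disparity 1 px -> depth 1.00 m; 2 px -> 0.50 m; 4 px -> 0.25 m
```

With these assumed values an object at 3 m would produce only about a third of a pixel of disparity, which is consistent with the stereo feature being most useful at distances of less than about 3 m.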

The image processing chain may be enabled to execute a valuable three dimensional or 3D reconstruction (allowing for a determination of depth of the scene imaged by the camera or cameras). Another benefit of the present invention is that the camera can resolve short distances much more reliably than a typical wide angle or fisheye lens camera. Thus, the system may be superior to ultrasound or ultrasonic sensor distance determination at critical distances of less than about 50 cm. At such distances of less than about 50 cm, the usual automotive ultrasound or ultrasonic sensors do not work well or do not work at all due to wave reflection interference.

The imager may have a typical diagonal size, such as about ⅓″ to ⅕″ in diagonal size, and may, for example, comprise a megapixel imager, such as a 2 MP (megapixel) imager (or more) or the like.

Further or alternative configurations may have arrays on more than one imager arranged in a cluster or faceted eye configuration, or the arrays may be set up in a line (for higher disparity). The optics may comprise stacked optics assembled on PCBAs or the like.

Therefore, the present invention provides a vehicular vision system comprising one or more imagers or cameras, each having a pixelated imaging array of photosensing elements and a lens array disposed at the imaging array. The lens array comprises an array of lens elements for imaging light onto respective pixels or elements of the pixelated imaging array.

Further or alternative configurations may use or combine pixelated imaging array devices or cameras (such as wafer level cameras such as described in International Publication No. WO 2013/081985, which is hereby incorporated herein by reference in its entirety) into one or several light field cameras. Light field cameras are known in the scientific or experimental area of conventional photography and video taking. These applications allow the user to adjust the focal point after the scene has been imaged and to move the view point within limited borders, and thus such cameras are also referred to as 4D cameras. Typically, the viewpoint can be shifted offline by up to the camera's width/height, with such a camera typically comprising about a 2 cm×2 cm imaging plane or array.

By generating view points from slightly different angles, a (real) stereo impression may be generated (allowing for determination of depths in the scene imaged by the cameras). FIG. 5 is a schematic of the principle of a light field camera 30, having a plurality of lenses or an array of lenses 70 disposed at an imager or imaging array of the camera. Each single lens or lens element 70 has a different position. The schematized rays 80 represent the opening angle or field of view of each lens element 70. As shown in FIG. 5, the shift (in one dimension, such as in a vertical direction in FIG. 5) between virtual viewpoints 40 and 50 of a focused object 60 results in the object being viewed through different lenses and sub-arrays of the imaging array and lens array. Image processing of image data captured by the pixels of each lens or lens array can determine depth or distance to the detected object present in the field of view of the camera.
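
One way such cross-lens image processing could work is a plane-sweep comparison; this is a sketch under assumptions (the patent does not specify the algorithm): for each candidate depth, the per-lens sub-images are shifted by a disparity proportional to each lens element's offset, and the depth at which the shifted views agree best is taken as the estimate.

```python
import numpy as np

def estimate_depth(sub_images, offsets, candidate_depths, k):
    """sub_images: list of HxW views, one per lens element;
    offsets: (row, col) lens-centre offsets in lens pitches for each view;
    k: disparity (pixels) of a unit-offset lens for an object at 1 m (assumed
    calibration constant). Returns the photo-consistency-optimal depth."""
    best_depth, best_score = None, np.inf
    for z in candidate_depths:
        # Shift each view back by its expected disparity at depth z
        shifted = [np.roll(img,
                           (int(round(k * oy / z)), int(round(k * ox / z))),
                           axis=(0, 1))
                   for img, (oy, ox) in zip(sub_images, offsets)]
        # Low variance across views means they agree (photo-consistency)
        score = np.var(np.stack(shifted), axis=0).mean()
        if score < best_score:
            best_depth, best_score = z, score
    return best_depth
```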

The system of the present invention may be used in conjunction with or as part of a driver detection or monitoring system or the like, such as for monitoring a driver's behavior or attentiveness or viewing direction or the like. Typically, there are two types of driver monitoring systems: head tracking systems and eye tracking systems. The first type typically has a stereo pair of conventional (non-array lens) near infrared (NIR) or visible light cameras, typically using ambient light or additional illumination targeted towards the driver's head. The second type typically has one camera and a structured light emitter. The emitter may be a laser, an LED or a flashlight of suitable wavelength or other suitable illumination source. Optionally, and in accordance with the present invention, one light field camera (array camera) may be utilized in an in-cabin head and/or eye tracking system for tracking the driver's head and/or driver's eyes. The camera may be sensitive to NIR and/or to visible light. The system may acquire depth across the camera's view and by that may determine the distance to the driver's head (and/or eyes and/or additional features of the driver's body such as the driver's arms, hands, shoulders, mouth, chin, nose and/or forehead) and an angle relative to the camera's viewing direction. By that, the system may determine the driver's facing direction and eye gaze direction without a second camera or directed or structured light emitter. The system of the present invention thus may operate with any suitable illumination.
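
A hedged sketch of the distance-and-angle step described above follows; the head-detection stage (returning a pixel location) and the camera intrinsics are assumed to be available from elsewhere in the system.

```python
import math

def head_position(depth_map, u, v, fx, fy, cx, cy):
    """Return distance (m) and horizontal/vertical angles (degrees) to the
    detected head pixel (u, v), given a depth map from the array camera and
    assumed intrinsics: focal lengths fx, fy (pixels), principal point (cx, cy)."""
    z = depth_map[v][u]                        # depth at the head pixel
    ax = math.degrees(math.atan2(u - cx, fx))  # horizontal angle off optical axis
    ay = math.degrees(math.atan2(v - cy, fy))  # vertical angle off optical axis
    return z, ax, ay
```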

To overcome the limitation of shifting the virtual viewpoint by just the camera's width/height, the same principle is used, but with a wider area covered by multiple cameras rather than different array areas of a single camera. Such an array formed by multiple lens cameras or preferably single lens cameras is called a 'Stanford light field camera' or 'Stanford light field array'.

As another aspect of the present invention, a Stanford light field camera or array may be composed at a vehicle by using multiple cameras, or multiples of the above mentioned monolithic wafer level cameras, or two or more lens array cameras in and/or at a vehicle, such as for capturing images or image data representative of the inside of the vehicle, and/or such as for capturing images or image data representative of the area surrounding the vehicle or outside or exterior the vehicle. For example, and with reference to FIG. 6, a vehicle 10 may have generally evenly distributed cameras 140 (an ideal arrangement for a Stanford light field array) at or over the exterior of the vehicle. FIG. 7 shows a sectional close up view of the rear portion of the vehicle with the cameras 140 disposed thereat. As can be seen in FIGS. 6 and 7, the cameras may be disposed in a spaced apart arrangement at the sheet metal or body portion of the vehicle and/or at the exterior mirrors and/or headlamps and/or taillights and/or exterior indicators/lights of the vehicle. The achievement is that nearly every useful virtual viewpoint may be generated for the imaging system with little to no optical restrictions and without severe image morphing operations (optically or by graphical computing).

Due to the relatively wide distances between the cameras (such as, for example, from about 20 cm to about 4 m or thereabouts), the environment surrounding the vehicle may be captured in true stereo vision. The light field cameras may be capable of providing or delivering a depth map of the scene in the field of view of the camera or cameras (providing depths and/or distances to objects present in the imaged scene), and this may be paired with delivering the scene's texture. By that, a Stanford light field array combines, in fusion, the properties of a conventional mono camera with those of a depth sensor, such as a LIDAR sensor or RADAR sensor (at least for a certain range). The depth map may be provided as an input to an advanced driver assistance system, such as, for example, a fully or partially automated parking system for determining possible parking spaces or potential collision hazards, or a city mitigation system for perceiving the road scene in the path of the vehicle and possible hazards such as inattentive pedestrians entering the road, and/or the like. Because the system captures image data and delivers image data or an image for processing, the system can employ classifiers to distinguish relevant objects from comparably less relevant or irrelevant objects, and the system may track the moving directions of the objects since their distance is also determined.
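
As a minimal sketch of the depth-map-as-input idea (the interface and thresholds are assumptions, not the patent's specification), a parking or mitigation function could simply flag regions of the depth map that fall inside a clearance threshold:

```python
import numpy as np

def nearby_obstacles(depth_map, clearance_m=0.5, min_pixels=50):
    """Return True if a sufficiently large image region lies closer than the
    clearance threshold. Assumes invalid pixels have been masked out upstream;
    clearance_m and min_pixels are illustrative values only."""
    mask = depth_map < clearance_m          # pixels closer than the threshold
    return np.count_nonzero(mask) >= min_pixels
```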

Also, due to the comparably small number of pixels on a wafer level lens array, the image resolution of one array may not be very high, but because several wafer level cameras may contribute to a captured or composed image, the quality may be increased by image stacking and statistical weighting methods, such as 'multi-frame super resolution' or the like. Because of vehicle design considerations, an even distribution of cameras may be difficult to achieve, especially where cameras placed at smooth surfaces may be unacceptable to the customer or vehicle manufacturer. Optionally, the vehicle may have or utilize materials at the camera that are transparent to the camera's view, which allows the camera to be placed at any desired position. Optionally, the camera may operate in or be sensitive to light in the infrared and/or near infrared wavelengths, and the material disposed at or over the camera may be transmissive or at least partially or substantially transmissive to infrared and/or near infrared wavelengths of light.
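
A simplified sketch of the image stacking idea follows, assuming the low-resolution frames have already been registered to one another; real multi-frame super resolution methods also estimate sub-pixel shifts and deconvolve, which is omitted here.

```python
import numpy as np

def stack_frames(frames, scale=2):
    """Upsample each registered low-res frame onto a common high-res grid by
    nearest-neighbour repetition, then average the stack (the statistical
    weighting reduced here to a plain mean, which already suppresses noise)."""
    ups = [np.repeat(np.repeat(f, scale, axis=0), scale, axis=1) for f in frames]
    return np.mean(np.stack(ups), axis=0)
```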

A more practical approach is to place the cameras as well distributed as possible and at hidden locations at the vehicle. For example, and as shown in FIG. 8, a vehicle may have cameras 151a-p (such as, for example, wafer level cameras) distributed at or over the vehicle at hidden locations or places (such as locations at or near or around windows, door handles, mirrors, wheel wells, bumpers, headlamps, taillights, indicators and/or the like, where the cameras may be hidden or not readily viewable or discernible and thus may be acceptable from a vehicle design standpoint). In the illustrated embodiment, the cameras may be:

head lamp integrated cameras 151a;

front bumper integrated cameras 151b;

wind skirt integrated cameras 151c;

wheel housing integrated cameras 151d;

windshield wiper cover integrated cameras 151e;

side mirror integrated cameras 151f;

behind door gap cameras 151g;

door handle integrated cameras 151h;

window sealing integrated cameras 151i;

door pillar integrated cameras 151j;

door sill integrated cameras 151k;

compartment behind window cameras 151l;

tail light integrated cameras 151m;

rear trunk handle integrated cameras 151n;

rear license plate integrated cameras 151o; and/or

rear bumper integrated cameras 151p.

Clearly, other locations at or on the vehicle may be suitable as well. Some places may be less suitable than others due to problems with soiling of the cameras, but due to the high redundancy of the imaging system (due to multiple cameras having overlapping fields of view), the system may be able to cope with the image limitations. Areas covered or obscured on one camera or camera array may be captured well by one or more other cameras or camera arrays, and the captured image of the other camera may replace entirely or in part the disturbed image captured by the covered or partially covered camera. The same may be done when a camera is off or damaged.
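
The patching step could look like the following sketch (interfaces assumed): pixels flagged as covered or soiled on one camera are replaced by the corresponding pixels of an overlapping camera, re-projected onto the first camera's view by a warp established at calibration time (the warp itself is assumed to exist upstream).

```python
import numpy as np

def patch_covered(img_a, img_b_warped, covered_mask):
    """Replace covered pixels of camera A with camera B's view, where
    img_b_warped is camera B's image already re-projected into camera A's
    frame and covered_mask is a boolean HxW array of disturbed pixels."""
    out = img_a.copy()
    out[covered_mask] = img_b_warped[covered_mask]
    return out
```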

When using multiple wafer level cameras, it may be desirable to use less sophisticated communication interfaces. Optionally, for example, all cameras or groups of cameras may share one or more common busses for communicating image data, commands, initialization and position data to an image processing device (such as to a common image processor of the vehicle imaging system). It is known to transmit camera data and camera control data by LVDS on a twisted pair of wires or on coaxial cables. It may be possible to reduce the amount of data by utilizing image compression, but because an image compression algorithm may have to run in each single camera, this add-on may be too cost intensive in practice. Thus, a more powerful data communication bus, such as FlexRay or an optical bus or the like, may be the data transmission medium of choice. Alternatively, all or some of the cameras may communicate wirelessly (digitally via wireless LAN or Bluetooth, or in analog (FBAS) via radio transmission or the like) to an image processor or image processing system or the like. Optionally, the camera data and/or control data of one or multiple cameras may be transmitted via modulated carrier waves through coaxial cables, such as described in U.S. provisional application Ser. No. 61/864,837, filed Aug. 12, 2013, which is hereby incorporated herein by reference in its entirety.
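
The back-of-envelope bus-load arithmetic behind this choice can be made explicit; all numbers below (camera count, resolution, frame rate, bit depth) are assumed for illustration only.

```python
# Aggregate raw data rate for many low-resolution cameras sharing one bus.
cams = 16          # cameras sharing the bus (assumed)
w, h = 640, 480    # per-camera resolution (assumed)
fps, bits = 30, 8  # frame rate and bit depth (assumed)

raw_bps = cams * w * h * fps * bits
print(f"aggregate raw rate: {raw_bps / 1e9:.2f} Gbit/s")  # ~1.18 Gbit/s
```

Even with modest per-camera resolution, the aggregate quickly exceeds a simple twisted-pair budget, which is why per-camera compression (cost) is weighed against a higher-bandwidth transmission medium.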

The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in International Publication Nos. WO 2013/081984 and/or WO 2013/081985, which are hereby incorporated herein by reference in their entireties.

The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an EyeQ2 or EyeQ3 image processing chip available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580; and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.

The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ladar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, an array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (preferably a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.

For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, and/or International Publication Nos. WO 2011/028686; WO 2010/099416; WO 2012/061567; WO 2012/068331; WO 2012/075250; WO 2012/103193; WO 2012/0116043; WO 2012/0145313; WO 2012/0145501; WO 2012/145818; WO 2012/145822; WO 2012/158167; WO 2012/075250; WO 2012/103193; WO 2012/0116043; WO 2012/0145501; WO 2012/0145343; WO 2012/154919; WO 2013/019707; WO 2013/016409; WO 2012/145822; WO 2013/067083; WO 2013/070539; WO 2013/043661; WO 2013/048994; WO 2013/063014, WO 2013/081984; WO 2013/081985; WO 2013/074604; WO 2013/086249; WO 2013/103548; WO 2013/109869; WO 2013/123161; WO 2013/126715; WO 2013/043661 and/or WO 2013/158592 and/or U.S. patent application Ser. No. 14/082,573, filed Nov. 18, 2013 (Attorney Docket MAG04 P2183); Ser. No. 14/082,574, filed Nov. 18, 2013 (Attorney Docket MAG04 P2184); Ser. No. 14/082,575, filed Nov. 18, 2013 (Attorney Docket MAG04 P2185); Ser. No. 14/082,577, filed Nov. 18, 2013 (Attorney Docket MAG04 P2203); Ser. No. 14/071,086, filed Nov. 4, 2013 (Attorney Docket MAG04 P2208); Ser. No. 14/076,524, filed Nov. 11, 2013 (Attorney Docket MAG04 P2209); Ser. No. 14/052,945, filed Oct. 14, 2013 (Attorney Docket MAG04 P-2165); Ser. No. 14/046,174, filed Oct. 4, 2013 (Attorney Docket MAG04 P-2158); Ser. No. 14/016,790, filed Oct. 3, 2013 (Attorney Docket MAG04 P-2139); Ser. No. 14/036,723, filed Sep. 25, 2013 (Attorney Docket MAG04 P-2148); Ser. No. 14/016,790, filed Sep. 3, 2013 (Attorney Docket MAG04 P-2139); Ser. No. 14/001,272, filed Aug. 23, 2013 (Attorney Docket MAG04 P-1824); Ser. No. 13/970,868, filed Aug. 20, 2013 (Attorney Docket MAG04 P2131); Ser. No. 13/964,134, filed Aug. 12, 2013 (Attorney Docket MAG04 P-2123); Ser. No. 13/942,758, filed Jul. 16, 2013 (Attorney Docket MAG04 P-2127); Ser. No. 13/942,753, filed Jul. 16, 2013 (Attorney Docket MAG04 P-2112); Ser. No. 13/927,680, filed Jun. 26, 2013 (Attorney Docket MAG04 P-2091); Ser. No. 13/916,051, filed Jun. 12, 2013 (Attorney Docket MAG04 P-2081); Ser. No. 13/894,870, filed May 15, 2013 (Attorney Docket MAG04 P-2062); Ser. No. 13/887,724, filed May 6, 2013 (Attorney Docket MAG04 P-2072); Ser. No. 13/852,190, filed Mar. 28, 2013 (Attorney Docket MAG04 P2046); Ser. No. 13/851,378, filed Mar. 27, 2013 (Attorney Docket MAG04 P-2036); Ser. No. 13/848,796, filed Mar. 22, 2012 (Attorney Docket MAG04 P-2034); Ser. No. 13/847,815, filed Mar. 20, 2013 (Attorney Docket MAG04 P-2030); Ser. No. 13/800,697, filed Mar. 13, 2013 (Attorney Docket MAG04 P-2060); Ser. No. 13/785,099, filed Mar. 5, 2013 (Attorney Docket MAG04 P-2017); Ser. No. 13/779,881, filed Feb. 28, 2013 (Attorney Docket MAG04 P-2028); Ser. No. 13/774,317, filed Feb. 22, 2013 (Attorney Docket MAG04 P-2015); Ser. No. 13/774,315, filed Feb. 22, 2013 (Attorney Docket MAG04 P-2013); Ser. No. 13/681,963, filed Nov. 20, 2012 (Attorney Docket MAG04 P-1983); Ser. No. 13/660,306, filed Oct. 25, 2012 (Attorney Docket MAG04 P-1950); Ser. No. 13/653,577, filed Oct. 17, 2012 (Attorney Docket MAG04 P-1948); and/or Ser. No. 
13/534,657, filed Jun. 27, 2012 (Attorney Docket MAG04 P-1892), and/or U.S. provisional application Ser. No. 61/901,127, filed Nov. 7, 2013; Ser. No. 61/905,461, filed Nov. 18, 2013; Ser. No. 61/905,462, filed Nov. 18, 2013; Ser. No. 61/895,610, filed Oct. 25, 2013; Ser. No. 61/895,609, filed Oct. 25, 2013; Ser. No. 61/893,489, filed Oct. 21, 2013; Ser. No. 61/886,883, filed Oct. 4, 2013; Ser. No. 61/879,837, filed Sep. 19, 2013; Ser. No. 61/879,835, filed Sep. 19, 2013; Ser. No. 61/878,877, filed Sep. 17, 2013; Ser. No. 61/875,351, filed Sep. 9, 2013; Ser. No. 61/869,195, filed. Aug. 23, 2013; Ser. No. 61/864,835, filed Aug. 12, 2013; Ser. No. 61/864,836, filed Aug. 12, 2013; Ser. No. 61/864,837, filed Aug. 12, 2013; Ser. No. 61/864,838, filed Aug. 12, 2013; Ser. No. 61/856,843, filed Jul. 22, 2013, Ser. No. 61/845,061, filed Jul. 11, 2013; Ser. No. 61/844,630, filed Jul. 10, 2013; Ser. No. 61/844,173, filed Jul. 9, 2013; Ser. No. 61/844,171, filed Jul. 9, 2013; Ser. No. 61/842,644, filed Jul. 3, 2013; Ser. No. 61/840,542, filed Jun. 28, 2013; Ser. No. 61/838,619, filed Jun. 24, 2013; Ser. No. 61/838,621, filed Jun. 24, 2013; Ser. No. 61/837,955, filed Jun. 21, 2013; Ser. No. 61/836,900, filed Jun. 19, 2013; Ser. No. 61/836,380, filed Jun. 18, 2013; Ser. No. 61/834,129, filed Jun. 12, 2013; Ser. No. 61/833,080, filed Jun. 10, 2013; Ser. No. 61/830,375, filed Jun. 3, 2013; Ser. No. 61/830,377, filed Jun. 3, 2013; Ser. No. 61/825,752, filed May 21, 2013; Ser. No. 61/825,753, filed May 21, 2013; Ser. No. 61/823,648, filed May 15, 2013; Ser. No. 61/823,644, filed May 15, 2013; Ser. No. 61/821,922, filed May 10, 2013; Ser. No. 61/819,835, filed May 6, 2013; Ser. No. 61/819,033, filed May 3, 2013; Ser. No. 61/816,956, filed Apr. 29, 2013; Ser. No. 61/815,044, filed Apr. 23, 2013; Ser. No. 61/814,533, filed Apr. 22, 2013; Ser. No. 61/813,361, filed Apr. 18, 2013; Ser. No. 61/810,407, filed Apr. 10, 2013; Ser. No. 61/808,930, filed Apr. 5, 2013; Ser. No. 61/807,050, filed Apr. 1, 2013; Ser. No. 61/806,674, filed Mar. 29, 2013; Ser. No. 61/793,592, filed Mar. 15, 2013; Ser. No. 61/772,015, filed Mar. 4, 2013; Ser. No. 61/772,014, filed Mar. 4, 2013; Ser. No. 61/770,051, filed Feb. 27, 2013; Ser. No. 61/766,883, filed Feb. 20, 2013; Ser. No. 61/760,366, filed Feb. 4, 2013; Ser. No. 61/760,364, filed Feb. 4, 2013; Ser. No. 61/756,832, filed Jan. 25, 2013; Ser. No. 61/754,804, filed Jan. 21, 2013; Ser. No. 61/736,104, filed Dec. 12, 2012; Ser. No. 61/736,103, filed Dec. 12, 2012; Ser. No. 61/733,598, filed Dec. 5, 2012 and/or Ser. No. 61/733,093, filed Dec. 4, 2012, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in International Publication Nos. WO/2010/144900; WO 2013/043661 and/or WO 2013/081985, and/or U.S. patent application Ser. No. 13/202,005, filed Aug. 17, 2011 (Attorney Docket MAG04 P-1595), which are hereby incorporated herein by reference in their entireties.

The imaging device and control and image processor and any associated illumination source, if applicable, may comprise any suitable components, and may utilize aspects of the cameras and vision systems described in U.S. Pat. Nos. 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094; 6,396,397; 6,806,452; 6,690,268; 7,005,974; 7,123,168; 7,004,606; 6,946,978; 7,038,577; 6,353,392; 6,320,176; 6,313,454; and 6,824,281, and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or U.S. patent application Ser. No. 12/508,840, filed Jul. 24, 2009, and published Jan. 28, 2010 as U.S. Pat. Publication No. US 2010-0020170, and/or PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012 (Attorney Docket MAG04 FP-1907(PCT)), and/or U.S. patent application Ser. No. 13/534,657, filed Jun. 27, 2012 (Attorney Docket MAG04 P-1892), which are all hereby incorporated herein by reference in their entireties. The camera or cameras may comprise any suitable cameras or imaging sensors or camera modules, and may utilize aspects of the cameras or sensors described in U.S. patent application Ser. No. 12/091,359, filed Apr. 24, 2008 and published Oct. 1, 2009 as U.S. Publication No. US-2009-0244361, and/or Ser. No. 13/260,400, filed Sep. 26, 2011 (Attorney Docket MAG04 P-1757), and/or U.S. Pat. Nos. 7,965,336 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties. The imaging array sensor may comprise any suitable sensor, and may utilize various imaging sensors or imaging array sensors or cameras or the like, such as a CMOS imaging array sensor, a CCD sensor or other sensors or the like, such as the types described in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,715,093; 5,877,897; 6,922,292; 6,757,109; 6,717,610; 6,590,719; 6,201,642; 6,498,620; 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 6,806,452; 6,396,397; 6,822,563; 6,946,978; 7,339,149; 7,038,577; 7,004,606; and/or 7,720,580, and/or U.S. patent application Ser. No. 10/534,632, filed May 11, 2005, now U.S. Pat. No. 7,965,336; and/or PCT Application No. PCT/US2008/076022, filed Sep. 11, 2008 and published Mar. 19, 2009 as International Publication No. WO/2009/036176, and/or PCT Application No. PCT/US2008/078700, filed Oct. 3, 2008 and published Apr. 9, 2009 as International Publication No. WO/2009/046268, which are all hereby incorporated herein by reference in their entireties.

The camera module and circuit chip or board and imaging sensor may be implemented and operated in connection with various vehicular vision-based systems, and/or may be operable utilizing the principles of such other vehicular systems, such as a vehicle headlamp control system, such as the type disclosed in U.S. Pat. Nos. 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 7,004,606; 7,339,149; and/or 7,526,103, which are all hereby incorporated herein by reference in their entireties, a rain sensor, such as the types disclosed in commonly assigned U.S. Pat. Nos. 6,353,392; 6,313,454; 6,320,176; and/or 7,480,149, which are hereby incorporated herein by reference in their entireties, a vehicle vision system, such as a forwardly, sidewardly or rearwardly directed vehicle vision system utilizing principles disclosed in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,877,897; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; and/or 7,859,565, which are all hereby incorporated herein by reference in their entireties, a trailer hitching aid or tow check system, such as the type disclosed in U.S. Pat. No. 7,005,974, which is hereby incorporated herein by reference in its entirety, a reverse or sideward imaging system, such as for a lane change assistance system or lane departure warning system or for a blind spot or object detection system, such as imaging or detection systems of the types disclosed in U.S. Pat. Nos. 7,720,580; 7,038,577; 5,929,786 and/or 5,786,772, and/or U.S. patent application Ser. No. 11/239,980, filed Sep. 30, 2005, now U.S. Pat. No. 7,881,496, and/or U.S. provisional application Ser. No. 60/628,709, filed Nov. 17, 2004; Ser. No. 60/614,644, filed Sep. 30, 2004; Ser. No. 60/618,686, filed Oct. 14, 2004; Ser. No. 60/638,687, filed Dec. 23, 2004, which are hereby incorporated herein by reference in their entireties, a video device for internal cabin surveillance and/or video telephone function, such as disclosed in U.S. Pat. Nos. 5,760,962; 5,877,897; 6,690,268; and/or 7,370,983, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties, a traffic sign recognition system, a system for determining a distance to a leading or trailing vehicle or object, such as a system utilizing the principles disclosed in U.S. Pat. Nos. 6,396,397 and/or 7,123,168, which are hereby incorporated herein by reference in their entireties, and/or the like.

Optionally, the circuit board or chip may include circuitry for the imaging array sensor and/or other electronic accessories or features, such as by utilizing compass-on-a-chip or EC driver-on-a-chip technology and aspects such as described in U.S. Pat. No. 7,255,451 and/or U.S. Pat. No. 7,480,149; and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, and/or Ser. No. 12/578,732, filed Oct. 14, 2009 (Attorney Docket DON01 P-1564), which are hereby incorporated herein by reference in their entireties.

Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device disposed at or in the interior rearview mirror assembly of the vehicle, such as by utilizing aspects of the video mirror display systems described in U.S. Pat. No. 6,690,268 and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011 (Attorney Docket DON01 P-1797), which are hereby incorporated herein by reference in their entireties. The video mirror display may comprise any suitable devices and systems and optionally may utilize aspects of the compass display systems described in U.S. Pat. Nos. 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,677,851; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,508; 6,222,460; 6,513,252; and/or 6,642,851, and/or European patent application, published Oct. 11, 2000 under Publication No. EP 0 1043566, and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, which are all hereby incorporated herein by reference in their entireties. Optionally, the video mirror display screen or device may be operable to display images captured by a rearward viewing camera of the vehicle during a reversing maneuver of the vehicle (such as responsive to the vehicle gear actuator being placed in a reverse gear position or the like) to assist the driver in backing up the vehicle, and optionally may be operable to display the compass heading or directional heading character or icon when the vehicle is not undertaking a reversing maneuver, such as when the vehicle is being driven in a forward direction along a road (such as by utilizing aspects of the display system described in International Publication No. WO 2012/051500, which is hereby incorporated herein by reference in its entirety).

Optionally, the vision system (utilizing the forward facing camera and a rearward facing camera and other cameras disposed at the vehicle with exterior fields of view) may be part of or may provide a display of a top-down view or birds-eye view system of the vehicle or a surround view at the vehicle, such as by utilizing aspects of the vision systems described in International Publication Nos. WO 2010/099416; WO 2011/028686; WO 2012/075250; WO 2013/019795; WO 2012/075250; WO 2012/145822; WO 2013/081985; WO 2013/086249; and/or WO 2013/109869, and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011 (Attorney Docket DON01 P-1797), which are hereby incorporated herein by reference in their entireties.

Optionally, a video mirror display may be disposed rearward of and behind the reflective element assembly and may comprise a display such as the types disclosed in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,370,983; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187 and/or 6,690,268, and/or in U.S. patent application Ser. No. 12/091,525, filed Apr. 25, 2008, now U.S. Pat. No. 7,855,755; Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008; and/or Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are all hereby incorporated herein by reference in their entireties. The display is viewable through the reflective element when the display is activated to display information. The display element may be any type of display element, such as a vacuum fluorescent (VF) display element, a light emitting diode (LED) display element, such as an organic light emitting diode (OLED) or an inorganic light emitting diode, an electroluminescent (EL) display element, a liquid crystal display (LCD) element, a video screen display element or backlit thin film transistor (TFT) display element or the like, and may be operable to display various information (as discrete characters, icons or the like, or in a multi-pixel manner) to the driver of the vehicle, such as passenger side inflatable restraint (PSIR) information, tire pressure status, and/or the like. The mirror assembly and/or display may utilize aspects described in U.S. Pat. Nos. 7,184,190; 7,255,451; 7,446,924 and/or 7,338,177, which are all hereby incorporated herein by reference in their entireties. The thicknesses and materials of the coatings on the substrates of the reflective element may be selected to provide a desired color or tint to the mirror reflective element, such as a blue colored reflector, such as is known in the art and such as described in U.S. Pat. Nos. 5,910,854; 6,420,036; and/or 7,274,501, which are hereby incorporated herein by reference in their entireties.

Optionally, the display or displays and any associated user inputs may be associated with various accessories or systems, such as, for example, a tire pressure monitoring system or a passenger air bag status or a garage door opening system or a telematics system or any other accessory or system of the mirror assembly or of the vehicle or of an accessory module or console of the vehicle, such as an accessory module or console of the types described in U.S. Pat. Nos. 7,289,037; 6,877,888; 6,824,281; 6,690,268; 6,672,744; 6,386,742; and 6,124,886, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties.

While the above description constitutes a plurality of embodiments of the present invention, it will be appreciated that the present invention is susceptible to further modification and change without departing from the fair meaning of the accompanying claims.

Claims

1. A vehicular vision system, said vehicular vision system comprising:

a camera disposed at a vehicle and having a field of view, wherein said camera is operable to capture image data, and wherein said camera comprises an imager having a pixelated array of photosensing elements; and
wherein said camera comprises a lens array disposed at said pixelated array of photosensing elements, wherein said lens array comprises an array of lens elements for imaging light onto respective sub-arrays of photosensing elements of said pixelated array of photosensing elements;
an image processor operable to process captured image data; and
wherein, responsive at least in part to image processing of captured image data captured by photosensing elements associated with different lens elements of said lens array, said vehicular vision system is operable to determine distance to an object present in the field of view of said camera.

2. The vehicular vision system of claim 1, comprising a plurality of cameras having respective imagers and lens arrays.

3. The vehicular vision system of claim 2, wherein said plurality of cameras and imagers and lens arrays form a light field array.

4. The vehicular vision system of claim 3, wherein said light field array is a type of Stanford light field array.

5. The vehicular vision system of claim 3, wherein, responsive to image processing of captured image data, said image processor acquires depth information of a scene in view of said light field array.

6. The vehicular vision system of claim 5, wherein said image processor processes a depth map of a scene in view of said light field array out of the scene's depth information.

7. The vehicular vision system of claim 6, wherein said depth map comprises an input to a driver assistant system of the vehicle.

8. The vehicular vision system of claim 7, wherein said camera has an exterior field of view exterior of the vehicle.

9. The vehicular vision system of claim 5, wherein said camera has an interior field of view inside a cabin of the vehicle, and wherein said light field depth information is used for an input to at least one of (i) a head tracking system for tracking the head of a driver of the vehicle and (ii) an eye tracking system for tracking the eyes of a driver of the vehicle.

10. The vehicular vision system of claim 1, wherein said lens array comprises at least four lens elements, each for imaging light onto respective sub-arrays of at least about 500,000 photosensing elements.

11. The vehicular vision system of claim 10, wherein said lens array comprises at least four hundred lens elements, each for imaging light onto respective sub-arrays of at least about 5,000 photosensing elements.

12. The vehicular vision system of claim 10, wherein a virtual view point of said camera is adjustable, and wherein said image processor is operable to determine distance to the object present in the exterior field of view of said camera by comparing image data captured at different virtual view points.

13. The vehicular vision system of claim 12, wherein said camera has an interior field of view inside a cabin of the vehicle, and wherein said image processing of captured image data is used for an input to at least one of (i) a head tracking system for tracking the head of a driver of the vehicle and (ii) an eye tracking system for tracking the eyes of a driver of the vehicle.

14. A vehicular vision system, said vehicular vision system comprising:

a plurality of cameras disposed at a vehicle and having respective fields of view, wherein said cameras are operable to capture image data, and wherein said cameras each comprise an imager having a pixelated array of photosensing elements; and
an image processor operable to process captured image data;
wherein said plurality of cameras form a Stanford light field array;
wherein, responsive to image processing of captured image data, said image processor acquires depth information of a scene in view of said Stanford light field array;
wherein said image processor processes a depth map of a scene in view of said Stanford light field array out of the scene's depth information; and
wherein said depth map comprises an input to a driver assistant system of the vehicle.

15. The vehicular vision system of claim 14, wherein said camera has an exterior field of view exterior of the vehicle.

16. The vehicular vision system of claim 14, wherein, responsive at least in part to image processing of captured image data captured by said Stanford light field array, said vehicular vision system is operable to determine distance to an object present in the fields of view of said cameras.

17. The vehicular vision system of claim 14, wherein each of said cameras comprises a lens array disposed at said pixelated array of photosensing elements, wherein said lens array comprises an array of lens elements for imaging light onto respective sub-arrays of photosensing elements of said pixelated array of photosensing elements.

18. A vehicular vision system, said vehicular vision system comprising:

a camera disposed at a vehicle and having an exterior field of view, wherein said camera is operable to capture image data, and wherein said camera comprises an imager having a pixelated array of photosensing elements; and
wherein said camera comprises a lens array disposed at said pixelated array of photosensing elements, wherein said lens array comprises an array of lens elements for imaging light onto respective sub-arrays of photosensing elements of said pixelated array of photosensing elements;
an image processor operable to process captured image data;
wherein, responsive at least in part to image processing of captured image data captured by photosensing elements associated with different lens elements of said lens array, said vehicular vision system is operable to determine distance to an object present in the exterior field of view of said camera;
wherein a virtual view point of said camera is adjustable, and wherein said image processor, responsive at least in part to image processing of captured image data, is operable to determine distance to the object present in the exterior field of view of said camera by comparing image data captured at different virtual view points.

19. The vehicular vision system of claim 18, comprising a plurality of cameras having respective pixelated arrays of photosensing elements and lens arrays, wherein said plurality of cameras and imagers and lens arrays form a Stanford light field array.

20. The vehicular vision system of claim 18, wherein, responsive to image processing of captured image data, said image processor at least one of (i) acquires depth information of a scene in view of said camera and (ii) acquires depth information of a scene in view of said camera and wherein said image processor processes a depth map of a scene in view of said camera out of the scene's depth information.

Patent History
Publication number: 20140168415
Type: Application
Filed: Dec 6, 2013
Publication Date: Jun 19, 2014
Applicant: MAGNA ELECTRONICS INC. (Auburn Hills, MI)
Inventors: Joern Ihlenburg (Berlin), Goerg Pflug (Weil der Stadt)
Application Number: 14/098,817
Classifications
Current U.S. Class: Land Vehicle (348/118)
International Classification: G06K 9/00 (20060101); B60R 11/04 (20060101);