DIRTY LENS IMAGE CORRECTION
Systems and methods for correcting images that include artifacts due to dirty camera lenses of electronic devices are disclosed. Correction of images by the systems and methods includes obtaining a first raw pixel image of a scene captured with a first camera, obtaining a second raw pixel image of the scene captured with a second camera separated from the first camera in a camera baseline direction, rectifying the first and second raw pixel images to create respective first and second rectified pixel images, determining disparity correspondence between corresponding image pixel pairs of the first and second rectified images in the camera baseline direction, mapping the first and second rectified images into the same domain using the determined disparity, detecting image artifact regions within each domain mapped image by comparing corresponding regions of the domain mapped images, determining correction factors for each detected image artifact region, and correcting the rectified first and second images by applying the determined correction factors.
This application is a continuation of U.S. application Ser. No. 17/145,963 filed on Jan. 11, 2021, which is a continuation of U.S. application Ser. No. 16/567,005 filed on Sep. 11, 2019, now U.S. Pat. No. 10,896,494, and claims priority to U.S. Provisional Application Ser. No. 62/737,442 filed on Sep. 27, 2018, the contents of which are incorporated fully herein by reference.
TECHNICAL FIELD
The present subject matter relates to electronic devices (e.g., eyewear devices and mobile devices) and to techniques for correcting images obtained using cameras with dirty lenses.
BACKGROUND
Electronic devices, such as wearable devices, including portable eyewear devices (e.g., smartglasses, headwear, and headgear); mobile devices (e.g., tablets, smartphones, and laptops); and personal computers available today integrate image displays and cameras.
Wearable devices may include multiple cameras for gathering image information from objects in a scene. If the lenses of one or more cameras are dirty (e.g., due to fingerprints, dirt, grime, etc.), the image information from the scene may include undesirable artifacts. These artifacts interfere with the ability to accurately reproduce the scene on a display. Methods and systems for reducing the impact of artifacts due to dirty lenses are desirable.
The drawing figures depict one or more implementations, by way of example only, not by way of limitation. In the figures, like reference numerals refer to the same or similar elements.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, descriptions of well-known methods, procedures, components, and circuitry are set forth at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.
The term “coupled” or “connected” as used herein refers to any logical, optical, physical or electrical connection, link or the like by which electrical or magnetic signals produced or supplied by one system element are imparted to another coupled or connected element. Unless described otherwise, coupled or connected elements or devices are not necessarily directly connected to one another and may be separated by intermediate components, elements or communication media that may modify, manipulate or carry the electrical signals. The term “on” means directly supported by an element or indirectly supported by the element through another element integrated into or supported by the element.
The orientations of the eyewear device, associated components and any complete devices incorporating cameras such as shown in any of the drawings, are given by way of example only, for illustration and discussion purposes. In operation for image correction, the eyewear device may be oriented in any other direction suitable to the particular application of the eyewear device, for example up, down, sideways, or any other orientation. Also, to the extent used herein, any directional term, such as front, rear, inwards, outwards, towards, left, right, lateral, longitudinal, up, down, upper, lower, top, bottom, side, horizontal, vertical, and diagonal are used by way of example only, and are not limiting as to direction or orientation of any camera or component of the camera.
Additional objects, advantages and novel features of the examples will be set forth in part in the following description, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The objects and advantages of the present subject matter may be realized and attained by means of the methodologies, instrumentalities and combinations particularly pointed out in the appended claims.
Reference now is made in detail to the examples illustrated in the accompanying drawings and discussed below.
Eyewear device 100 includes a right optical assembly 180B with an image display to present images, such as depth images. As shown in
Left and right visible light cameras 114A-B are sensitive to the visible light range wavelength. Each of the visible light cameras 114A-B has a different frontward facing field of view, and these fields of view overlap to enable generation of three-dimensional scenes; for example, right visible light camera 114B depicts a right field of view 111B. Generally, a “field of view” is the part of the scene that is visible through the camera at a particular position and orientation in space. Objects or object features outside the field of view 111A-B when the visible light camera captures the image are not recorded in a raw image (e.g., photograph or picture). The field of view describes the angle range or extent over which the image sensor of the visible light camera 114A-B picks up electromagnetic radiation of a given scene in a captured image of the given scene. Field of view can be expressed as the angular size of the view cone, i.e., an angle of view. The angle of view can be measured horizontally, vertically, or diagonally.
In an example, visible light cameras 114A-B have a field of view with an angle of view between 15° and 30°, for example 24°, and have a resolution of 480×480 pixels. The “angle of coverage” describes the angle range that a lens of visible light cameras 114A-B or infrared camera 220 (see
Examples of such visible light cameras 114A-B include a high-resolution complementary metal-oxide-semiconductor (CMOS) image sensor and a video graphic array (VGA) camera, such as 640p (e.g., 640×480 pixels for a total of 0.3 megapixels), 720p, or 1080p. As used herein, the term “overlapping” when referring to field of view means the matrix of pixels in the generated raw image(s) overlap by 30% or more. As used herein, the term “substantially overlapping” when referring to field of view means the matrix of pixels in the generated raw image(s) or infrared image of a scene overlap by 50% or more.
The eyewear device 100 may capture image sensor data from the visible light cameras 114A-B along with geolocation data, digitized by an image processor, for storage in a memory. The left and right raw images captured by respective visible light cameras 114A-B are in the two-dimensional space domain and comprise a matrix of pixels on a two-dimensional coordinate system that includes an X axis for horizontal position and a Y axis for vertical position. Each pixel includes a color attribute value (e.g., a red pixel light value, a green pixel light value, and/or a blue pixel light value); and a position attribute (e.g., an X location coordinate and a Y location coordinate).
To provide stereoscopic vision, an image processor (element 912 of
For stereoscopic vision, a pair of raw red, green, and blue (RGB) images are captured of a scene at a given moment in time—one image for each of the left and right visible light cameras 114A-B. When the pair of captured raw images from the frontward facing left and right fields of view 111A-B of the left and right visible light cameras 114A-B are processed (e.g., by the image processor), depth images are generated, which a user can perceive on an optical assembly 180A-B or other image display(s) (e.g., of a mobile device). The generated depth images are in the three-dimensional space domain and can comprise a matrix of vertices on a three-dimensional location coordinate system that includes an X axis for horizontal position (e.g., length), a Y axis for vertical position (e.g., height), and a Z axis for depth (e.g., distance). Each vertex includes a color attribute value (e.g., a red pixel light value, a green pixel light value, and/or a blue pixel light value); a position attribute (e.g., an X location coordinate, a Y location coordinate, and a Z location coordinate); a texture attribute, and/or a reflectance attribute. The texture attribute quantifies the perceived texture of the depth image, such as the spatial arrangement of color or intensities in a region of vertices of the depth image.
Generally, perception of depth arises from the disparity of a given 3D point in the left and right raw images captured by visible light cameras 114A-B. Disparity is the difference in image location of the same 3D point when projected under perspective of the visible light cameras 114A-B (d = x_left − x_right). For visible light cameras 114A-B with parallel optical axes, focal length f, baseline b, and corresponding image points (x_left, y_left) and (x_right, y_right), the location of a 3D point (Z axis location coordinate) can be derived utilizing triangulation, which determines depth from disparity. Typically, depth of the 3D point is inversely proportional to disparity. A variety of other techniques can also be used.
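To make the triangulation relation concrete, the following is a minimal sketch assuming rectified cameras with parallel optical axes; the function name and numeric values are illustrative and not taken from this disclosure.

```python
# Illustrative sketch of depth-from-disparity triangulation for parallel stereo cameras.

def depth_from_disparity(x_left, x_right, focal_length_px, baseline_m):
    """Return the Z coordinate (depth) of a 3D point from its disparity.

    For rectified cameras with parallel optical axes:
        d = x_left - x_right   (disparity in pixels)
        Z = f * b / d          (depth is inversely proportional to disparity)
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_length_px * baseline_m / disparity

# Example with illustrative values: f = 800 px, b = 0.06 m, disparity = 12 px -> Z = 4.0 m
z = depth_from_disparity(x_left=512, x_right=500, focal_length_px=800, baseline_m=0.06)
```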
In an example, a dirty lens image correction system includes the eyewear device 100. The eyewear device 100 includes a frame 105 and a left temple 110A extending from a left lateral side 170A of the frame 105 and a right temple 110B extending from a right lateral side 170B of the frame 105. Eyewear device 100 further includes two cameras. The two cameras may include at least two visible light cameras with overlapping fields of view. In one example, the two cameras include a left visible light camera 114A with a left field of view 111A connected to the frame 105 or the left temple 110A to capture a left image of the scene. Eyewear device 100 further includes a right visible light camera 114B connected to the frame 105 or the right temple 110B with a right field of view 111B to capture (e.g., simultaneously with the left visible light camera 114A) a right image of the scene which partially overlaps the left image.
The dirty lens image correction system further includes a computing device, such as a host computer (e.g., mobile device 990 of
The dirty lens image correction system further includes a user input device to receive a two-dimensional input selection from a user. Examples of user input devices include a touch sensor (element 991 of
Execution of the image correction programming (element 945 of
The right chunk 110B includes chunk body 211 and a chunk cap, with the chunk cap omitted in the cross-section of
The right visible light camera 114B is coupled to or disposed on the flexible PCB 240 and covered by a visible light camera cover lens, which is aimed through opening(s) formed in the frame 105. For example, the right rim 107B of the frame 105 is connected to the right chunk 110B and includes the opening(s) for the visible light camera cover lens. The frame 105 includes a front-facing side configured to face outwards away from the eye of the user. The opening for the visible light camera cover lens is formed on and through the front-facing side. In the example, the right visible light camera 114B has an outward facing field of view 111B with a line of sight or perspective of the right eye of the user of the eyewear device 100. The visible light camera cover lens can also be adhered to an outward facing surface of the right chunk 110B in which an opening is formed with an outward facing angle of coverage, but in a different outwards direction. The coupling can also be indirect via intervening components.
Left (first) visible light camera 114A is connected to a left image display of left optical assembly 180A to capture a left eye viewed scene observed by a wearer of the eyewear device 100 in a left raw image. Right (second) visible light camera 114B is connected to a right image display of right optical assembly 180B to capture a right eye viewed scene observed by the wearer of the eyewear device 100 in a right raw image. The left raw image and the right raw image partially overlap to present a three-dimensional observable space of a generated depth image.
Flexible PCB 140B is disposed inside the right chunk 110B and is coupled to one or more other components housed in the right chunk 110B. Although shown as being formed on the circuit boards of the right chunk 110B, the right visible light camera 114B can be formed on the circuit boards of the left chunk 110A, the temples 125A-B, or frame 105.
In the eyeglasses example, eyewear device 100 includes a frame 105 including a left rim 107A connected to a right rim 107B via a bridge 106 adapted for a nose of the user. The left and right rims 107A-B include respective apertures 175A-B which hold a respective optical element 180A-B, such as a lens and a display device. As used herein, the term lens is meant to cover transparent or translucent pieces of glass or plastic having curved and/or flat surfaces that cause light to converge/diverge or that cause little or no convergence or divergence.
Although shown as having two optical elements 180A-B, the eyewear device 100 can include other arrangements, such as a single optical element or may not include any optical element 180A-B depending on the application or intended user of the eyewear device 100. As further shown, eyewear device 100 includes a left chunk 110A adjacent the left lateral side 170A of the frame 105 and a right chunk 110B adjacent the right lateral side 170B of the frame 105. The chunks 110A-B may be integrated into the frame 105 on the respective sides 170A-B (as illustrated) or implemented as separate components attached to the frame 105 on the respective sides 170A-B. Alternatively, the chunks 110A-B may be integrated into temples (not shown) attached to the frame 105.
In one example, the image display of optical assembly 180A-B includes an integrated image display. As shown in
In another example, the image display device of optical assembly 180A-B includes a projection image display as shown in
As the photons projected by the laser projector 150 travel across the lens of the optical assembly 180A-B, the photons encounter the optical strips 155A-N. When a particular photon encounters a particular optical strip, the photon is either redirected towards the user's eye, or it passes to the next optical strip. A combination of modulation of laser projector 150, and modulation of optical strips, may control specific photons or beams of light. In an example, a processor controls optical strips 155A-N by initiating mechanical, acoustic, or electromagnetic signals. Although shown as having two optical assemblies 180A-B, the eyewear device 100 can include other arrangements, such as a single or three optical assemblies, or the optical assembly 180A-B may have a different arrangement depending on the application or intended user of the eyewear device 100.
As further shown in
In one example, the image display includes a first (left) image display and a second (right) image display. Eyewear device 100 includes first and second apertures 175A-B which hold a respective first and second optical assembly 180A-B. The first optical assembly 180A includes the first image display (e.g., a display matrix 170A of
Mobile device 990 may be a smartphone, tablet, laptop computer, access point, or any other such device capable of connecting with eyewear device 100 using both a low-power wireless connection 925 and a high-speed wireless connection 937. Mobile device 990 is connected to server system 998 and network 995. The network 995 may include any combination of wired and wireless connections.
Eyewear device 100 further includes two image displays of the optical assembly 180A-B (one associated with the left lateral side 170A and one associated with the right lateral side 170B). Eyewear device 100 also includes image display driver 942, image processor 912, low-power circuitry 920, and high-speed circuitry 930. The image displays of optical assembly 180A-B are for presenting images and scenes. Image display driver 942 is coupled to the image display of optical assembly 180A-B to control the image display of optical assembly 180A-B to present the images. Eyewear device 100 further includes a user input device 991 (e.g., touch sensor) to receive a two-dimensional input selection from a user.
The components shown in
Eyewear device 100 includes a memory 934 which includes image correction programming 945 to perform a subset or all of the functions described herein for correcting and displaying images. Flowcharts outlining functions which can be implemented in the image correction programming 945 are shown in
As shown in
Low-power wireless circuitry 924 and the high-speed wireless circuitry 936 of the eyewear device 100 can include short range transceivers (Bluetooth™) and wireless local or wide area network transceivers (e.g., WiFi or cellular). Mobile device 990, including the transceivers communicating via the low-power wireless connection 925 and high-speed wireless connection 937, may be implemented using details of the architecture of the eyewear device 100, as can other elements of network 995.
Memory 934 includes essentially any storage device capable of storing various data and applications, including, among other things, camera data generated by the left and right visible light cameras 114A-B, infrared camera 220, and the image processor 912, as well as images generated for display by the image display driver 942 on the image displays of the optical assembly 180A-B. While memory 934 is shown as integrated with high-speed circuitry 930, in other embodiments, memory 934 may be an independent standalone element of the eyewear device 100. In certain such embodiments, electrical routing lines may provide a connection through a chip that includes the high-speed processor 932 from the image processor 912 or low-power processor 922 to the memory 934. In other embodiments, the high-speed processor 932 may manage addressing of memory 934 such that the low-power processor 922 will boot the high-speed processor 932 any time that a read or write operation involving memory 934 is needed.
As shown in
Server system 998 may be one or more computing devices as part of a service or network computing system, for example, that include a processor, a memory, and network communication interface to communicate over the network 995 with the mobile device 990 and eyewear device 100. Eyewear device 100 is connected with a host computer. For example, the eyewear device 100 is paired with the mobile device 990 via the high-speed wireless connection 937 or connected to the server system 998 via the network 995.
Output components of the eyewear device 100 include visual components, such as the left and right image displays of optical assembly 180A-B as described in
Eyewear device 100 may optionally include additional peripheral device elements. Such peripheral device elements may include biometric sensors, additional sensors, or display elements integrated with eyewear device 100. For example, peripheral device elements may include any I/O components including output components, motion components, position components, or any other such elements described herein.
For example, the biometric components include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The position components include location sensor components to generate location coordinates (e.g., a Global Positioning System (GPS) receiver component), WiFi or Bluetooth™ transceivers to generate positioning system coordinates, altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Such positioning system coordinates can also be received over wireless connections 925 and 937 from the mobile device 990 via the low-power wireless circuitry 924 or high-speed wireless circuitry 936.
As shown, the mobile device 990 includes an image display 1080, an image display driver 1090 to control the image display, and a user input device 1091 similar to the eyewear device 100. In the example of
Examples of touch screen type mobile devices that may be used include (but are not limited to) a smart phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or other portable device. However, the structure and operation of the touch screen type devices are provided by way of example; the subject technology as described herein is not intended to be limited thereto. For purposes of this discussion,
As shown in
To generate location coordinates for positioning of the mobile device 990, the mobile device 990 can include a global positioning system (GPS) receiver. Alternatively, or additionally the mobile device 990 can utilize either or both the short range XCVRs 1020 and WWAN XCVRs 1010 for generating location coordinates for positioning. For example, cellular network, WiFi, or Bluetooth™ based positioning systems can generate very accurate location coordinates, particularly when used in combination. Such location coordinates can be transmitted to the eyewear device over one or more network connections via XCVRs 1010, 1020.
The transceivers 1010, 1020 (network communication interface) conform to one or more of the various digital wireless communication standards utilized by modern mobile networks. Examples of WWAN transceivers 1010 include (but are not limited to) transceivers configured to operate in accordance with Code Division Multiple Access (CDMA) and 3rd Generation Partnership Project (3GPP) network technologies including, for example and without limitation, 3GPP type 2 (or 3GPP2) and LTE, at times referred to as “4G.” For example, the transceivers 1010, 1020 provide two-way wireless communication of information including digitized audio signals, still image and video signals, web page information for display as well as web related inputs, and various types of mobile message communications to/from the mobile device 990 for image correction.
Several of these types of communications through the transceivers 1010, 1020 and a network, as discussed previously, relate to protocols and procedures in support of communications with the eyewear device 100 or the server system 998 for image correction, such as transmitting left raw image 858A and right raw image 858B. Such communications, for example, may transport packet data via the short range XCVRs 1020 over the wireless connections 925 and 937 to and from the eyewear device 100 as shown in
The mobile device 990 further includes a microprocessor, shown as CPU 1030, sometimes referred to herein as the host controller. A processor is a circuit having elements structured and arranged to perform one or more processing functions, typically various data processing functions. Although discrete logic components could be used, the examples utilize components forming a programmable CPU. A microprocessor for example includes one or more integrated circuit (IC) chips incorporating the electronic elements to perform the functions of the CPU. The processor 1030, for example, may be based on any known or available microprocessor architecture, such as a Reduced Instruction Set Computing (RISC) using an ARM architecture, as commonly used today in mobile devices and other portable electronic devices. Of course, other processor circuitry may be used to form the CPU 1030 or processor hardware in smartphones, laptop computers, and tablets.
The microprocessor 1030 serves as a programmable host controller for the mobile device 990 by configuring the mobile device 990 to perform various operations, for example, in accordance with instructions or programming executable by processor 1030. For example, such operations may include various general operations of the mobile device, as well as operations related to the image correction programming 945 and communications with the eyewear device 100 and server system 998. Although a processor may be configured by use of hardwired logic, typical processors in mobile devices are general processing circuits configured by execution of programming.
The mobile device 990 includes a memory or storage device system, for storing data and programming. In the example, the memory system may include a flash memory 1040A and a random access memory (RAM) 1040B. The RAM 1040B serves as short term storage for instructions and data being handled by the processor 1030, e.g., as a working data processing memory. The flash memory 1040A typically provides longer term storage.
Hence, in the example of mobile device 990, the flash memory 1040A is used to store programming or instructions for execution by the processor 1030. Depending on the type of device, the mobile device 990 stores and runs a mobile operating system through which specific applications, including image correction programming 945, are executed. Applications, such as the image correction programming 945, may be a native application, a hybrid application, or a web application (e.g., a dynamic web page executed by a web browser) that runs on mobile device 990 to correct images. Examples of mobile operating systems include Google Android, Apple iOS (I-Phone or iPad devices), Windows Mobile, Amazon Fire OS, RIM BlackBerry operating system, or the like.
It will be understood that the mobile device 990 is just one type of host computer in the dirty lens image correction system 900 and that other arrangements may be utilized.
At step 602, obtain two images of a scene. A processor (e.g., element 932 of eyewear device 100 of
If one of the raw images includes an artifact due to a dirty lens of one camera (assuming the lens of the other camera is not dirty in the same regions), the steps below can correct for the artifact by reducing or eliminating its effect to produce a more pleasing viewable image or scene.
At step 604, rectify the obtained images. The processor may rectify the images by applying an algorithm such as shown in Equation (1):
(x_rectilinear, y_rectilinear) = (r_x * x_RAW, r_y * y_RAW)   (1)

where r_x = f(x_RAW^2) and r_y = f(y_RAW^2), and where x is a pixel location in a horizontal direction and y is a pixel location in a vertical direction.

Alternatively, the processor may apply a single radial scaling factor as shown in Equation (2):

(x_rectilinear, y_rectilinear) = (r * x_RAW, r * y_RAW)   (2)

where r = f(x_RAW^2 + y_RAW^2).
In an example, a monotonic function f is applied, where f is a 1D transformation function (e.g., f(x) = 1 + k_1*x^2 + k_2*x^4, where x is the horizontal pixel distance from the distortion center and k_1 and k_2 are parameters). The monotonic function prevents the mapped image from collapsing on itself (e.g., due to two pixels being mapped to the same target).
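The sketch below illustrates a radial mapping of the kind described by Equations (1) and (2). Because the composition of f with the squared coordinates is stated only loosely above, the sketch follows the common even-order radial model as an assumption; the coefficients are illustrative and the coordinates are assumed normalized around the distortion center.

```python
import numpy as np

def f_radial(rho, k1=0.05, k2=0.001):
    """Monotonic 1D transformation f(rho) = 1 + k1*rho^2 + k2*rho^4 for rho >= 0."""
    return 1.0 + k1 * rho**2 + k2 * rho**4

def rectify_pixel(x_raw, y_raw, k1=0.05, k2=0.001):
    """Map a raw pixel offset from the distortion center (normalized to roughly [-1, 1])
    to rectilinear coordinates using a single radial factor, in the spirit of Equation (2)."""
    rho = np.hypot(x_raw, y_raw)      # distance from the distortion center
    r = f_radial(rho, k1, k2)         # monotonic, so two pixels cannot fold onto one target
    return r * x_raw, r * y_raw

# Example: a point near the image corner is pushed slightly outward.
x_rect, y_rect = rectify_pixel(0.8, 0.6)
```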
At step 606, determine disparity between corresponding image pixels of rectified images. The processor may determine disparity between the rectified images by correlating the rectified images 1008A/B and determining a number of pixels (typically in the horizontal direction) between a location of a pixel 1010B in the right rectified image 1008B corresponding to where the object pixel 101A appears in the left image 1008A and where the corresponding object pixel 101C actually appears in the right image 1008B. Correlation of the left and right pixels can be achieved with Semi-Global Block Matching (SGBM), for example.
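As one possible realization of this correlation step, the sketch below uses OpenCV's Semi-Global Block Matching; the file names and matcher parameters are illustrative assumptions, not values from this disclosure.

```python
import cv2
import numpy as np

# Rectified left/right images are assumed to be available; the file names are placeholders.
left_rectified = cv2.imread("left_rectified.png")
right_rectified = cv2.imread("right_rectified.png")

left_gray = cv2.cvtColor(left_rectified, cv2.COLOR_BGR2GRAY)
right_gray = cv2.cvtColor(right_rectified, cv2.COLOR_BGR2GRAY)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # search range in pixels; must be a multiple of 16
    blockSize=5,
)
# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
```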
At step 608, map rectified images into the same domain using the determined disparity. The processor may map each of the left and right rectified images into the same domain to create a left domain image (LeftDOM) and a right domain image (RightDOM) where distinctive features found in both rectified images are located at the same coordinates in each image.
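A minimal sketch of this domain mapping, assuming a dense disparity map aligned with the left rectified image: the right image is resampled along the baseline (horizontal) direction so that common features land at the same coordinates as in the left image. Variable names continue the previous sketch and are illustrative.

```python
import numpy as np
import cv2

h, w = disparity.shape
map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))
# A feature at (x, y) in the left image appears at (x - d, y) in the right image,
# so sampling the right image at (x - d, y) brings it into the left image's domain.
right_dom = cv2.remap(right_rectified, map_x - disparity, map_y, cv2.INTER_LINEAR)
left_dom = left_rectified          # the left image already defines the common domain
```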
At step 610, detect image artifact regions within each domain mapped image. The processor may detect image artifact regions within the left domain image (LeftDOM) and the right domain image (RightDOM) by comparing image characteristic values of corresponding regions between domain mapped images (step 702;
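The following hedged sketch compares block-wise image characteristic values (mean intensity and mean gradient magnitude) between the two domain mapped images and flags regions whose values differ by more than a threshold; the block size and thresholds are illustrative assumptions.

```python
import numpy as np
import cv2

def block_stats(gray, block=32):
    """Mean intensity and mean gradient magnitude for each block of pixels."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = np.hypot(gx, gy)
    h, w = gray.shape
    means, grads = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            means.append(gray[y:y + block, x:x + block].mean())
            grads.append(grad[y:y + block, x:x + block].mean())
    return np.array(means), np.array(grads)

left_g = cv2.cvtColor(left_dom, cv2.COLOR_BGR2GRAY).astype(np.float32)
right_g = cv2.cvtColor(right_dom, cv2.COLOR_BGR2GRAY).astype(np.float32)
l_mean, l_grad = block_stats(left_g)
r_mean, r_grad = block_stats(right_g)

# Flag regions whose intensity or gradient differs markedly between the two views.
artifact_regions = (np.abs(l_mean - r_mean) > 10.0) | (np.abs(l_grad - r_grad) > 20.0)
```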
At step 612, determine correction factors for the regions determined to have artifacts. The processor may determine correction factors by transforming the domain mapped images into each other and determining a correction factor for each region based on the average gains for pixels in the respective regions. The processor may transform the right domain mapped image into the left domain mapped region and may transform the left domain mapped image into the right domain mapped region (step 802;
The processor may determine correction factors for corresponding regions within the left and the right rectified images from the determined gain (step 806). The processor may determine a color attribute factor for each region that can be used to adjust that region. For example, if a region is determined to have a color attribute value gain of 0.80, a color attribute correction factor of 1.25 (1.0/0.80) may be determined for the region. Additionally, the processor may determine correspondence between the regions of the domain mapped images (i.e., left domain mapped image and right domain mapped image) and regions of the respective rectified images (i.e., left rectified image and right rectified image) for application of the correction factors to the determined regions of the rectified images. The processor may determine correspondence by mapping the left artifact region of the left domain mapped image to a corresponding left region of the left rectified image, mapping the right artifact region of the right domain mapped image to a corresponding right region of the right rectified image, applying the left correction factor to the corresponding left region of the left rectified image, and applying the right correction factor to the corresponding right region of the right rectified image.
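As an illustration of the gain-based correction factor, the sketch below computes, for a flagged region, the ratio of mean color attribute values between the dirty and clean domain mapped images and returns its reciprocal; the function name and epsilon guard are assumptions.

```python
import numpy as np

def region_correction_factor(dirty_region, clean_region, eps=1e-6):
    """Per-channel correction factor for a region flagged as a dirty-lens artifact.

    The gain is the ratio of mean color attribute values (dirty / clean); the
    correction factor is its reciprocal, e.g., a measured gain of 0.80 yields 1.25.
    """
    dirty_mean = dirty_region.reshape(-1, dirty_region.shape[-1]).mean(axis=0)
    clean_mean = clean_region.reshape(-1, clean_region.shape[-1]).mean(axis=0)
    gain = dirty_mean / (clean_mean + eps)
    return 1.0 / np.maximum(gain, eps)
```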
At step 614, correct the rectified images by applying the determined correction factors. The processor may correct the rectified images by applying (e.g., multiplying) the determined correction factors to the color attribute values in the respective regions of the appropriate rectified image.
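Continuing the earlier sketches, the following minimal illustration multiplies a determined correction factor into the corresponding region of a rectified image; the region bounds and per-channel factor are placeholder values standing in for the region correspondence described above.

```python
import numpy as np

y0, y1, x0, x1 = 128, 160, 256, 288                             # placeholder artifact region
correction_factor = np.array([1.25, 1.22, 1.30], np.float32)    # placeholder per-channel factor

corrected_left = left_rectified.astype(np.float32)
corrected_left[y0:y1, x0:x1] *= correction_factor               # multiply the color attribute values
corrected_left = np.clip(corrected_left, 0, 255).astype(np.uint8)
```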
At step 616, create a 3D scene using the corrected rectified images and the determined disparity. The processor may extract the image disparity by retrieving the image disparity determined in step 606 from memory (step 902;
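A hedged sketch of one way to realize this step: back-projecting the corrected left rectified image into 3D vertices using the stored disparity map. The focal length and baseline are illustrative calibration values, and the variables continue the earlier sketches.

```python
import numpy as np

f_px, baseline_m = 800.0, 0.06                 # illustrative calibration values
h, w = disparity.shape
xs, ys = np.meshgrid(np.arange(w), np.arange(h))

valid = disparity > 0
Z = np.where(valid, f_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)
X = (xs - w / 2.0) * Z / f_px
Y = (ys - h / 2.0) * Z / f_px

# Each 3D vertex keeps the corrected color attribute value of its source pixel.
vertices = np.stack([X, Y, Z], axis=-1)[valid]
colors = corrected_left[valid]
```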
At step 618, present the 3D scene. The processor may present the 3D scene on a display of the eyewear or a mobile device coupled to the eyewear.
Any of the image correction functionality described herein for the eyewear device 100, mobile device 990, and server system 998 can be embodied in one or more applications as described previously. According to some embodiments, “function,” “functions,” “application,” “applications,” “instruction,” “instructions,” or “programming” are program(s) that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, a third party application (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third party application can invoke API calls provided by the operating system to facilitate functionality described herein.
Hence, a machine-readable medium may take many forms of tangible storage medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the client device, media gateway, transcoder, etc. shown in the drawings. Volatile storage media include dynamic memory, such as the main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
The scope of protection is limited solely by the claims that now follow. That scope is intended and should be interpreted to be as broad as is consistent with the ordinary meaning of the language that is used in the claims when interpreted in light of this specification and the prosecution history that follows and to encompass all structural and functional equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirement of Sections 101, 102, or 103 of the Patent Act, nor should they be interpreted in such a way. Any unintended embracement of such subject matter is hereby disclaimed.
Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims.
It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “includes,” “including,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises or includes a list of elements or steps does not include only those elements or steps but may include other elements or steps not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Unless otherwise stated, any and all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. Such amounts are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain. For example, unless expressly stated otherwise, a parameter value or the like may vary by as much as ±10% from the stated amount.
In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, the subject matter to be protected lies in less than all features of any single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
While the foregoing has described what are considered to be the best mode and other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that they may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all modifications and variations that fall within the true scope of the present concepts.
Claims
1. A dirty lens correction system comprising:
- a memory storing instructions; and
- a processor configured to execute the instructions to configure the dirty lens correction system to:
- obtain a first image of a scene captured from a first viewpoint;
- obtain a second image of the scene captured from a second viewpoint separated from the first viewpoint;
- rectify the first and second images to create respective first and second rectified images, the first and second rectified images including pixel data;
- determine disparity correspondence between corresponding image pixel pairs of the first and second rectified images;
- map the first and second rectified images into the same domain using the determined disparity to produce a first domain mapped image and a second domain mapped image;
- detect an image artifact region within the first domain mapped image if at least one of a color attribute value, a gradient value, or an intensity value in a region of the first domain mapped image is different than the color attribute value, the gradient value, or the intensity value in a corresponding region of the second domain mapped image;
- determine correction factors for the detected image artifact region;
- correct at least one of the rectified first or second images by applying the determined correction factors; and
- create a three-dimensional (3D) scene for presentation using the corrected rectified images and the determined disparity.
2. The system of claim 1, wherein the image artifact is detected within the first domain mapped image if the color attribute value of the image artifact region within the first domain mapped image is greater than the color attribute value of the corresponding region within the second domain mapped image.
3. The system of claim 2, wherein the color attribute value is a red, green, blue (RGB) color attribute value.
4. The system of claim 1, wherein the image artifact is detected within the first domain mapped image if the intensity value of the image artifact region within the first domain mapped image is greater than the intensity value of the corresponding region within the second domain mapped image.
5. The system of claim 4, wherein the intensity value is a grey level.
6. The system of claim 4, wherein the intensity value is a vector length of weighted color vectors.
7. The system of claim 1, wherein the image artifact is detected within the first domain mapped image if the gradient value of the image artifact region within the first domain mapped image is less than the gradient value of the corresponding region within the second domain mapped image.
8. The system of claim 7, wherein the gradient value is a gradient vector length.
9. The system of claim 1, wherein the region includes a group of pixels.
10. The system of claim 1, wherein the processor is configured to rectify the first and second images by applying:
- (x_rectified image, y_rectified image) = (r_x * x_image, r_y * y_image)
- where r_x = f(x_image^2); and
- r_y = f(y_image^2);
- wherein x is a pixel location in a horizontal direction and y is a pixel location in a vertical direction.
11. A dirty lens correction method comprising the steps of:
- determining disparity correspondence between corresponding image pixel pairs of first and second rectified images;
- mapping the first and second rectified images into the same domain using the determined disparity to produce a first domain mapped image and a second domain mapped image;
- detecting an image artifact region within the first domain mapped image if at least one of a color attribute value, a gradient value, or an intensity value in a region of the first domain mapped image is different than the color attribute value, the gradient value, or the intensity value in a corresponding region of the second domain mapped image;
- determining correction factors for the detected image artifact region;
- correcting at least one of the rectified first or second images by applying the determined correction factors; and
- creating a three-dimensional (3D) scene for presentation using the corrected rectified images and the determined disparity.
12. The method of claim 11, wherein the image artifact is detected within the first domain mapped image if the color attribute value of the image artifact region within the first domain mapped image is greater than the color attribute value of the corresponding region within the second domain mapped image.
13. The method of claim 12, wherein the color attribute value is a red, green, blue (RGB) color attribute value.
14. The method of claim 11, wherein the image artifact is detected within the first domain mapped image if the intensity value of the image artifact region within the first domain mapped image is greater than the intensity value of the corresponding region within the second domain mapped image.
15. The method of claim 14, wherein the intensity value is a grey level.
16. The method of claim 14, wherein the intensity value is a vector length of weighted color vectors.
17. The method of claim 11, wherein the image artifact is detected within the first domain mapped image if the gradient value of the image artifact region within the first domain mapped image is less than the gradient value of the corresponding region within the second domain mapped image.
18. The method of claim 17, wherein the gradient value is a gradient vector length.
19. The method of claim 11, wherein the first and second rectified images are obtained from respective first and second images by applying:
- (x_rectified image, y_rectified image) = (r_x * x_image, r_y * y_image)
- where r_x = f(x_image^2); and
- r_y = f(y_image^2);
- wherein x is a pixel location in a horizontal direction and y is a pixel location in a vertical direction.
20. A non-transitory computer readable medium including instructions for use with a dirty lens correction system, the instructions, when executed by a processor, configure the dirty lens correction system to:
- determine disparity correspondence between corresponding image pixel pairs of first and second rectified images;
- map the first and second rectified images into the same domain using the determined disparity to produce a first domain mapped image and a second domain mapped image;
- detect an image artifact region within the first domain mapped image if at least one of a color attribute value, a gradient value, or an intensity value in a region of the first domain mapped image is different than the color attribute value, the gradient value, or the intensity value in a corresponding region of the second domain mapped image;
- determine correction factors for the detected image artifact region;
- correct at least one of the rectified first or second images by applying the determined correction factors; and
- create a three-dimensional (3D) scene for presentation using the corrected rectified images and the determined disparity.
Type: Application
Filed: Aug 9, 2022
Publication Date: Dec 1, 2022
Patent Grant number: 12073536
Inventors: Sagi KATZ (Yokneam Ilit), David Ben Ezra (Los Angeles, CA)
Application Number: 17/883,602