SYSTEMS AND METHODS FOR SELECTIVE REPLACEMENT OF OBJECTS IN IMAGES
Exemplary embodiments are directed to a system for selective replacement of an object in an image. The system includes an interface configured to receive as input an original image and a background image, and a processing device in communication with the interface. The processing device is configured to process the original image using a neural network to detect one or more objects in the original image, generate a neural network mask of the original image for the one or more objects in the original image, generate a filtered original image including the original image without the one or more objects, generate a modulated background image including a replacement background based on the neural network mask, and generate a combined image including the filtered original image combined with the modulated background image.
The present application claims the benefit of priority to U.S. Provisional Application No. 62/936,845, filed Nov. 18, 2019, which is hereby incorporated by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates generally to computer-based systems and methods for altering or editing digital images. More specifically, the present disclosure relates to systems and methods for selective replacement of objects in images, in order to generate a realistic image in an efficient manner.
BACKGROUND
Replacement of objects, such as the sky, specific structures, people, or the like, in a digital image is often useful in various fields, such as business, photography, digital editing, and the like. The process of such replacement is generally not automatic, and often requires a wide range of different image editing tools to achieve the desired replacement. Depending on the image and the object to be replaced (e.g., the scene and the type of sky), different approaches for replacing the object may be implemented. As such, there is generally no universal approach for replacement of objects in images.
A need exists for systems and methods for selective replacement of objects in images that allow for an automatic and efficient process of replacement of the objects in images having varying complexities. The systems and methods of the present disclosure solve these and other needs.
SUMMARY
In accordance with embodiments of the present disclosure, an exemplary system for selective replacement of an object in an image is provided. The system includes an interface configured to receive as input an original image and a background image, and a processing device in communication with the interface. The processing device can be configured to process the original image using a neural network to detect one or more objects in the original image, generate a neural network mask of the original image for the one or more objects in the original image, generate a filtered original image including the original image without the one or more objects, generate a modulated background image including a replacement background based on the neural network mask, and generate a combined image including the filtered original image combined with the modulated background image.
The original image can include a foreground and a background. The one or more objects can include the background of the original image. In some embodiments, the one or more objects include a sky as the background in the original image. The processing device can be configured to extract the foreground from the original image. The processing device can be configured to generate a refined mask for each pixel of the original image associated with the background. The processing device can be configured to generate a dilated or indented mask including a dilation or indentation from a border extending between the foreground and the background. The processing device can be configured to generate an interpolation grid corresponding to the foreground. The processing device can be configured to generate an extracted image including the original image with the foreground extracted based on the interpolation grid. The processing device can be configured to generate the filtered original image by extracting the foreground of the original image based on the interpolation grid.
The processing device can be configured to generate a blended image, the blended image including a smooth transition between the filtered original image and the modulated background image. The processing device can be configured to generate a toned image, the toned image including the combined image with adjustment of tone within the combined image. The processing device can be configured to generate a tint unified image, the tint unified image including tint correction at edges between the filtered original image and the modulated background image. The processing device can be configured to adjust one or more characteristics of the filtered original image independently from one or more characteristics of the modulated background image.
In some embodiments, the interface can include an image selection section with the combined image and one or more additional original images. In such embodiments, the interface can include a first submenu for selecting the combined image and copying the adjustments or enhancements applied to the combined image, and the interface can include a second submenu for selecting one or more of the additional original images and applying the copied adjustments or enhancements of the combined image to the selected one or more of the additional original images.
In accordance with embodiments of the present disclosure, an exemplary method for selective replacement of an object in an image is provided. The method can include receiving as input at an interface an original image and a background image, detecting one or more objects in the original image with a neural network, and generating a neural network mask of the original image for the one or more objects in the original image. The method can include generating a filtered original image, the filtered original image including the original image without the one or more objects. The method can include generating a modulated background image, the modulated background image including a replacement background based on the neural network mask. The method can include generating a combined image, the combined image including the filtered original image combined with the modulated background image.
The method can include adjusting one or more characteristics of the filtered original image independently from one or more characteristics of the modulated background image. The method can include receiving at the interface one or more additional original images, wherein the interface includes an image selection section with the combined image and the one or more additional original images. The method can include selecting the combined image and copying the adjustments or enhancements applied to the combined image at a first submenu of the interface, and selecting one or more of the additional images and applying the copied adjustments or enhancements of the combined image to the selected one or more of the additional images at a second submenu of the interface.
In accordance with embodiments of the present disclosure, an exemplary non-transitory computer-readable medium storing instructions at least for selective replacement of an object in an image is provided. The instructions are executable by a processing device. Execution of the instructions by the processing device can cause the processing device to receive as input at an interface an original image and a background image, detect one or more objects in the original image with a neural network, and generate a neural network mask of the original image for the one or more objects in the original image. Execution of the instructions by the processing device can cause the processing device to generate a filtered original image, the filtered original image including the original image without the one or more objects. Execution of the instructions by the processing device can cause the processing device to generate a modulated background image, the modulated background image including a replacement background based on the neural network mask. Execution of the instructions by the processing device can cause the processing device to generate a combined image, the combined image including the filtered original image combined with the modulated background image.
Other features and advantages will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed as an illustration only and not as a definition of the limits of the invention.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
To assist those of skill in the art in making and using the disclosed systems and methods for selective replacement of objects in images, reference is made to the accompanying figures.
In accordance with embodiments of the present disclosure, exemplary systems for selective replacement of objects in images are provided to generate a realistic output image. As an example, the exemplary systems can be used to replace the sky in an input image with a replacement sky, with the output image providing no indication of replacement of the original sky. The exemplary systems generate an object mask using a neural network to identify the object (e.g., background sky) to be replaced in the image, generate a polygonal mesh to identify and extract additional objects (e.g., foreground) in the image to be maintained in the output image, generate a replacement or target sky based on the object mask, and combine the replacement or target sky with the maintained objects using the polygonal mesh to generate the output image. The exemplary systems can be used to replace the sky completely, to combine a new sky with an original sky, to combine an original sky with a new object (e.g., image augmentation), combinations thereof, or the like. Although discussed herein as being used to replace the sky of an image, it should be understood that the exemplary systems can be used to identify/detect and replace any object(s) in the image.
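By way of a non-limiting illustration, the overall flow described above can be sketched in C-style code as follows. The Image type and the stub functions below are hypothetical placeholders standing in for the modules described herein; each stub simply returns its input so that the sketch compiles end to end, and it is not a definitive implementation:

    #include <vector>

    // Hypothetical Image type; the actual system operates on the images
    // and masks described herein.
    struct Image { int width = 0; int height = 0; std::vector<float> rgb; };

    // Placeholder stubs for the modules; each returns its input unchanged.
    Image segmentSky(const Image& original) { return original; }            // neural network mask
    Image extractForeground(const Image& original, const Image& mask) { return original; }
    Image buildTargetBackground(const Image& background, const Image& mask) { return background; }
    Image composite(const Image& foreground, const Image& target) { return foreground; }

    Image replaceSky(const Image& original, const Image& background) {
        Image mask = segmentSky(original);                        // detect object to replace
        Image foreground = extractForeground(original, mask);     // filtered original image
        Image target = buildTargetBackground(background, mask);   // modulated background image
        return composite(foreground, target);                     // combined image
    }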
The system 100 can include a central computing system 112 for controlling the steps performed by the system 100. In some embodiments, the central computing system 112 can include the one or more processing devices 108. The system 100 can include a user interface 114 (e.g., a device with a user interface), such as a user interface having a graphical user interface (GUI) 116. The GUI 116 can be used to input data and/or instructions into the system 100, and to output data and/or images to the user.
The system 100 can include one or more neural networks 118 executed by the processing device 108. The neural network 118 can include an object segmentation network 120 and a multi-class segmentation network 122. The network 118 can be trained via, e.g., manual input, machine learning, historical data input and analysis, combinations thereof, or the like, with sample images to assist in one or more steps of the process performed by the system 100. For example, the network 118 can be trained with sample images to detect and segment specific objects in input images. In one embodiment, the network 118 can be trained to recognize pixels in the input image that correspond with the sky (or with a high probability of corresponding with the sky). The networks 118 used can be small and fast to ensure efficient processing of the images within the system. The object segmentation network 120 can be selected to precisely segment objects (e.g., the sky) from the original image and to use quantization weights to reduce the size of the network. Augmentation can be used to artificially replace the sky in the original image.
In some embodiments, the object segmentation network 120 can be used to identify and segment the object to be replaced in the original image (e.g., the sky). The multi-class segmentation network 122 can include a dataset with a large number of classes (e.g., trees, humans, buildings, or the like) to identify and segment specific objects in the original image to be extracted and combined with a replacement object (e.g., a replacement sky). In some embodiments, the multi-class segmentation network 122 can be used for additional augmentations of the identified objects, e.g., random flip, random crop, random brightness, random rotation, affine transformation, combinations thereof, or the like. The system 100 can include a communication interface 124 configured to provide communication and/or transmission of data between the components of the system 100.
At step 316, the target background module can be executed by the processing device to generate a modulated background image including a target background with the dilated mask. At step 318, the insertion module can be executed by the processing device to generate a combined image by combining the filtered original image having the foreground of the original image with the modulated background image. At step 320, the horizon blending module can be executed by the processing device to generate a blended image having a smooth transition between the filtered original image and the modulated background image. At step 322, the tone adjustment module can be executed by the processing device to generate a toned image. At step 324, the tinting unification module can be executed by the processing device to generate a tint unified image having corrected tinting in areas or edges between the filtered original image and the modulated background image. Details of the process 300 and additional optional steps will be discussed in greater detail below in combination with the sample images.
The neural network mask 174 can be refined by modeling the colors of the image with two three-dimensional histograms, one for pixels under the sky mask and one for pixels outside it, as represented by Equations 1-5 below:
float skyHist[N][N][N] (1)
float nonSkyHist[N][N][N] (2)
int x = pixel.r * (N - 1) (3)
int y = pixel.g * (N - 1) (4)
int z = pixel.b * (N - 1) (5)
where N is a dimension equal to 8. Two histograms can be used to count pixels under the sky mask 174 (skyHist) and pixels outside the sky mask 174 (nonSkyHist). After counting, each histogram can be normalized by dividing by its total number of pixels, yielding a model of the probability distribution of colors. The refined mask can be generated by comparing the two probability distributions using Equation 6 below:
refinedIsSkyPixel = skyHist[z][y][x] > nonSkyHist[z][y][x] (6)
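By way of a non-limiting illustration, the histogram model of Equations 1-6 can be sketched in C++ as follows, assuming RGB channel values normalized to [0, 1]; the ColorHistogram type and its member names are illustrative assumptions:

    constexpr int N = 8;                     // histogram dimension (Equations 1-2)

    struct Pixel { float r, g, b; };         // channel values normalized to [0, 1]

    struct ColorHistogram {
        float bins[N][N][N] = {};
        long count = 0;

        void add(const Pixel& p) {           // Equations 3-5: quantize and count
            int x = static_cast<int>(p.r * (N - 1));
            int y = static_cast<int>(p.g * (N - 1));
            int z = static_cast<int>(p.b * (N - 1));
            bins[z][y][x] += 1.0f;
            ++count;
        }

        void normalize() {                   // counts -> probability distribution
            if (count == 0) return;
            for (auto& plane : bins)
                for (auto& row : plane)
                    for (float& v : row) v /= static_cast<float>(count);
        }

        float at(const Pixel& p) const {
            return bins[static_cast<int>(p.b * (N - 1))]
                       [static_cast<int>(p.g * (N - 1))]
                       [static_cast<int>(p.r * (N - 1))];
        }
    };

    // Equation 6: a pixel is refined as sky when it is more probable under
    // the sky distribution than under the non-sky distribution.
    bool refinedIsSkyPixel(const Pixel& p, const ColorHistogram& skyHist,
                           const ColorHistogram& nonSkyHist) {
        return skyHist.at(p) > nonSkyHist.at(p);
    }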
A details image 184 can be generated by comparing the original image with the extracted image 182, as represented by Equation 7 below:
OriginalDetailsMap = abs(OriginalImage - SrcSkyBox) (7)
where OriginalImage is either the low resolution image 172 or the input original image 170, SrcSkyBox is the extracted image 182, and OriginalDetailsMap is the details image 184. Equations 8-9 can be used to represent the more detailed process of obtaining the OriginalDetailsMap:
dp = RGB2LAB(OriginalImage) - RGB2LAB(SrcSkyBox) (8)
OriginalDetailsMap = sqrt(dp.r*dp.r + dp.g*dp.g + dp.b*dp.b) (9)
where OriginalImage is the original image, and SrcSkyBox is the original sky with the erased foreground objects (e.g., the extracted image 182).
A more refined mask 176 can be generated as represented by Equations 10-11:
amount = 0.16 * (isPlant() ? 2.8 : 1.0) * amountThreshold (10)
DetailsBinaryMask = OriginalDetailsMap < amount ? 1 : 0 (11)
where isPlant() is a function that returns the mask of probabilities of vegetation received from the neural network 118 that segments the input original image 170 (e.g., the neural network mask 174), amountThreshold is a precision setting set by the system 100 (e.g., a SkyGlobal parameter), and DetailsBinaryMask is the refined mask 176.
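A non-limiting sketch of Equations 7-11 on a single pixel follows. For brevity, the difference is computed directly in RGB rather than via the RGB2LAB conversion of Equation 8, and the function names are illustrative assumptions:

    #include <cmath>

    struct Pixel { float r, g, b; };

    // Equations 7-9 (simplified): per-pixel detail strength as the magnitude
    // of the difference between the original image and the extracted sky;
    // computed here in RGB rather than LAB for brevity.
    float originalDetails(const Pixel& originalImage, const Pixel& srcSkyBox) {
        Pixel dp{originalImage.r - srcSkyBox.r,
                 originalImage.g - srcSkyBox.g,
                 originalImage.b - srcSkyBox.b};
        return std::sqrt(dp.r * dp.r + dp.g * dp.g + dp.b * dp.b);
    }

    // Equations 10-11: threshold the details map into a binary mask; the
    // threshold is relaxed (x2.8) where the segmentation network reports
    // vegetation, since foliage produces many fine gaps.
    int detailsBinaryMask(float details, bool isPlant, float amountThreshold) {
        float amount = 0.16f * (isPlant ? 2.8f : 1.0f) * amountThreshold;
        return details < amount ? 1 : 0;
    }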
The underlying background 408 to be covered by the details shown in the details image 184 can be blended or blurred by the target background module 142 to have a color scheme similar to that of the replacement background 406. The blended or blurred image can be superimposed on the target sky using the DetailsBinaryMask of Equation 11 to generate the target background image 214. The underlying background 408 can be heavily blurred using a summed area table based on the target background image 214, creating the effect of frosted glass. The summed area table or integral image can be used to blur with a different radius closer to the bottom of the image 214, such that a stronger blur is gradually generated in the direction of the bottom of the image 214. The new sky texture can be inserted such that the lower part of the sky touches or is in contact with the lower edge of the border 410 from the refined mask 176, with the border 410 treated as the horizon for the image 214. In some embodiments, the system 100 can be used to rotate the horizon (thereby rotating the replacement background 406) to match the inclination of the sky or background 400 in the input original image 170. In some embodiments, the system 100 can be used to shift the horizon along the y-axis (e.g., vertically). The replacement background 406 generated by the target background module 142 ensures that details within the image receive the replacement background 406. For example, rather than attempting to fill in gaps between leaves or branches of the tree in the image 170, the system 100 generates a completely new background 406 on which the details of the image 170 can be placed.
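By way of a non-limiting illustration, the frosted-glass blur described above can be sketched as follows, using a summed area table for constant-time box averaging with a radius that grows toward the bottom of the image; single-channel data and a linear radius ramp are illustrative assumptions:

    #include <algorithm>
    #include <vector>

    // Build an inclusive summed area table: sat[y][x] = sum of src over [0..y][0..x].
    std::vector<std::vector<double>> buildSAT(const std::vector<std::vector<float>>& src) {
        int h = static_cast<int>(src.size());
        int w = static_cast<int>(src[0].size());
        std::vector<std::vector<double>> sat(h, std::vector<double>(w, 0.0));
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                sat[y][x] = src[y][x]
                          + (y > 0 ? sat[y - 1][x] : 0.0)
                          + (x > 0 ? sat[y][x - 1] : 0.0)
                          - (y > 0 && x > 0 ? sat[y - 1][x - 1] : 0.0);
        return sat;
    }

    // Box average over the clamped window [x0..x1] x [y0..y1].
    double boxMean(const std::vector<std::vector<double>>& sat,
                   int x0, int y0, int x1, int y1) {
        double total = sat[y1][x1]
                     - (y0 > 0 ? sat[y0 - 1][x1] : 0.0)
                     - (x0 > 0 ? sat[y1][x0 - 1] : 0.0)
                     + (y0 > 0 && x0 > 0 ? sat[y0 - 1][x0 - 1] : 0.0);
        return total / ((x1 - x0 + 1) * (y1 - y0 + 1));
    }

    // Blur with a radius that increases linearly toward the bottom row,
    // producing a progressively stronger "frosted glass" effect.
    std::vector<std::vector<float>> frostedBlur(const std::vector<std::vector<float>>& src,
                                                int maxRadius) {
        int h = static_cast<int>(src.size());
        int w = static_cast<int>(src[0].size());
        auto sat = buildSAT(src);
        std::vector<std::vector<float>> dst(h, std::vector<float>(w));
        for (int y = 0; y < h; ++y) {
            int r = 1 + maxRadius * y / std::max(1, h - 1);  // stronger blur lower down
            for (int x = 0; x < w; ++x) {
                int x0 = std::max(0, x - r), x1 = std::min(w - 1, x + r);
                int y0 = std::max(0, y - r), y1 = std::min(h - 1, y + r);
                dst[y][x] = static_cast<float>(boxMean(sat, x0, y0, x1, y1));
            }
        }
        return dst;
    }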
The insertion module 144 can be executed by the processing device 108 to transfer details of the original image onto the target background image 214 through modulation, as represented by Equation 12 below:
Dst = Src * DstSkyBox / SrcSkyBox (12)
where DstSkyBox is the target background image 214 and SrcSkyBox is the extracted image 182. Details of the image are transferred because, within the sky region, the ratio between DstSkyBox and SrcSkyBox in Equation 12 changes insignificantly and affects only the overall brightness of the image, while the details themselves are carried by the multiplication by Src. The details image 184 is therefore used to generate the binary mask that identifies the details to be transferred.
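A per-pixel sketch of the modulation of Equation 12 is shown below; the small epsilon guarding the division is an illustrative assumption not specified in the text:

    struct Pixel { float r, g, b; };

    // Equation 12 per pixel: modulate the source by the ratio of the target
    // sky to the extracted source sky, assuming a common linear color space.
    Pixel modulate(const Pixel& src, const Pixel& dstSkyBox, const Pixel& srcSkyBox) {
        const float eps = 1e-4f;  // avoid division by zero in dark regions
        return {src.r * dstSkyBox.r / (srcSkyBox.r + eps),
                src.g * dstSkyBox.g / (srcSkyBox.g + eps),
                src.b * dstSkyBox.b / (srcSkyBox.b + eps)};
    }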
In order to reduce the distortion of the target background image 214 of Equation 12, the insertion module 144 can be executed by the processing device 108 to perform a PreserveColor operation represented by Equations 13-15 below:
float toGray(Pixel p) { return sqrt(p.r*p.r + p.g*p.g + p.b*p.b)/3; } (13)
Pixel dp = Src - SkyBoxSrc;
float v = toGray(dp*dp);
v = clampValue(v*30);
Interpolation function: lerp(a, b, amount) { return a + (b - a)*amount; }
float graySrc = toGray(SkyBoxSrc); (14)
SkyBoxSrc = lerp(SkyBoxSrc, lerp(SkyBoxSrc, Pixel(graySrc), v), amountPreserveColor)
float grayDst = toGray(SkyBoxDst); (15)
SkyDst = lerp(SkyBoxDst, lerp(SkyBoxDst, Pixel(grayDst), v), amountPreserveColor)
where v represents the details mask, Src represents the input original image 170, SkyBoxSrc represents the extracted image 182, lerp is the interpolation function, Equation 14 desaturates the under-matted region of SkyBoxSrc, SkyBoxDst represents the target background image 214, and Equation 15 represents the desaturation of SkyDst. In the refined mask 176, the pixels are desaturated such that their ratio does not distort the color of the target background image 214. In particular, the matte areas of the refined mask 176 are desaturated such that, when the details are transferred to the target background image 214, the colors are less distorted during modulation. The amountPreserveColor parameter in Equations 14-15 controls the strength of this correction.
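By way of a non-limiting illustration, the PreserveColor operation of Equations 13-15 can be sketched in C++ as follows; the behavior of clampValue (clamping to [0, 1]) is an assumption consistent with its use above:

    #include <algorithm>
    #include <cmath>

    struct Pixel {
        float r, g, b;
        Pixel operator-(const Pixel& o) const { return {r - o.r, g - o.g, b - o.b}; }
        Pixel operator*(const Pixel& o) const { return {r * o.r, g * o.g, b * o.b}; }
    };

    float toGray(const Pixel& p) {                         // Equation 13
        return std::sqrt(p.r * p.r + p.g * p.g + p.b * p.b) / 3.0f;
    }

    float clampValue(float v) { return std::clamp(v, 0.0f, 1.0f); }

    float lerp(float a, float b, float amount) { return a + (b - a) * amount; }

    Pixel lerp(const Pixel& a, const Pixel& b, float amount) {
        return {lerp(a.r, b.r, amount), lerp(a.g, b.g, amount), lerp(a.b, b.b, amount)};
    }

    Pixel gray(float g) { return {g, g, g}; }

    // Desaturate the matte regions of both sky boxes so that the ratio in
    // Equation 12 does not distort the color of the target background.
    void preserveColor(const Pixel& src, Pixel& skyBoxSrc, Pixel& skyBoxDst,
                       float amountPreserveColor) {
        Pixel dp = src - skyBoxSrc;
        float v = clampValue(toGray(dp * dp) * 30.0f);     // details mask
        skyBoxSrc = lerp(skyBoxSrc, lerp(skyBoxSrc, gray(toGray(skyBoxSrc)), v),
                         amountPreserveColor);             // Equation 14
        skyBoxDst = lerp(skyBoxDst, lerp(skyBoxDst, gray(toGray(skyBoxDst)), v),
                         amountPreserveColor);             // Equation 15
    }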
In some embodiments, the insertion module 144 can operate in combination with the horizon blending module 146 to generate a smooth transition between the old/original sky and the new/replacement sky. When generating a matte substrate, the gradient can be applied to the new/replacement sky at the horizon level, resulting in translucency of the old/original sky. The degree or power of transillumination and the smoothness of blurring can be adjusted by a setting/slider at the user interface 114 to smoothly blend the new/replacement sky into the old/original sky. Such a transition can be generated when there is a strong discrepancy between the original and the new sky, resulting in a smoother and more realistic result.
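A simplified, non-limiting sketch of the horizon blend follows, fading the new sky into the original sky across a band above the horizon; the band height and linear fade are illustrative assumptions corresponding to the adjustable setting described above:

    #include <algorithm>

    struct Pixel { float r, g, b; };

    // Blend the new sky over the old sky at image row y: alpha = 1 well
    // above the horizon (pure new sky), fading to 0 at the horizon so the
    // original sky shows through.
    Pixel blendAtRow(const Pixel& newSky, const Pixel& oldSky,
                     int y, int horizonY, int bandHeight) {
        float alpha = std::clamp(static_cast<float>(horizonY - y) / bandHeight,
                                 0.0f, 1.0f);
        return {oldSky.r + (newSky.r - oldSky.r) * alpha,
                oldSky.g + (newSky.g - oldSky.g) * alpha,
                oldSky.b + (newSky.b - oldSky.b) * alpha};
    }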
In some embodiments, additional adjustments or enhancements to the combined image 194 can be made by the system 100.
As a further example, if one or more portions of the foreground 402 are defocused, the replacement background 406 may appear unrealistic. In such instances, as discussed above, the defocus module can be executed by the processing device 108 to generate a defocused image 202.
As a further example, it may be desired to flip the replacement background 406 along an x-axis or a y-axis (e.g., a vertical or horizontal flip). In such instances, the flip module 154 can receive as input the combined image 194 and can be executed by the processing device 108 to generate a flipped image 208.
As a further example, it may be desirable not only to replace the original sky with a new sky, but also to add additional features, objects, or details to the new sky. The new sky can be blended with the original sky, and new objects can be added to the new sky, such as, e.g., new clouds, fireworks, lightning, a rainbow, or any other object.
A strong difference may also exist between the original image (e.g., the original foreground 402) and the replacement background 406, with such differences being noticeable and resulting in an unrealistic image 194. Generally, a real sky includes at least some haze. The system 100 can therefore include a setting for adjusting the haze of the daytime replacement sky.
In some embodiments, for daytime blue skies, the haze module 165 can create a first layer (e.g., Layer 0) filled with a haze color, such as pure white.
A light, bright haze can be added to the replacement background 406. Adding a light haze allows the system 100 to match the overall brightness of the sky to the original image 170 and adds realism to the scene, because the new sky (e.g., replacement background 406) may otherwise appear too perfect or crisp in its colors. Although pure white is used as the color for the haze layer in this example, it should be understood that any color could be used depending on the type of images being enhanced. White may be suitable for daytime skies; for a sunset or another type of sky, a different base haze color can be used.
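By way of a non-limiting illustration, the haze adjustment can be sketched as a simple blend toward a base haze color, with the blend amount corresponding to the haze setting described above:

    struct Pixel { float r, g, b; };

    // Blend the sky toward the base haze color; amount in [0, 1] plays the
    // role of the user-facing haze setting.
    Pixel addHaze(const Pixel& sky, const Pixel& hazeColor, float amount) {
        return {sky.r + (hazeColor.r - sky.r) * amount,
                sky.g + (hazeColor.g - sky.g) * amount,
                sky.b + (hazeColor.b - sky.b) * amount};
    }

    // Example: addHaze(skyPixel, Pixel{1.0f, 1.0f, 1.0f}, 0.15f) applies a
    // light white haze; a sunset sky could use a warmer base color.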
In some embodiments, the processing device 108 can execute a grain module 167 to receive as input the combined image 194 and generate the grain adjusted image 218. The grain module 167 can add a noise texture to the replacement background 406 to match the noise of the original image 170. Therefore, zooming in on the grain adjusted image 218 provides a substantially similar texture for the foreground 402 and the replacement background 406.
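A non-limiting sketch of grain matching follows, adding zero-mean Gaussian noise whose strength would be chosen to match the measured noise of the original image 170; the noise estimation itself is outside the scope of this sketch:

    #include <algorithm>
    #include <random>
    #include <vector>

    struct Pixel { float r, g, b; };

    // Add zero-mean Gaussian grain; sigma would be estimated from the
    // original image 170 so that foreground and replacement background
    // share a substantially similar texture.
    void addGrain(std::vector<Pixel>& image, float sigma, unsigned seed = 42) {
        std::mt19937 rng(seed);
        std::normal_distribution<float> noise(0.0f, sigma);
        for (Pixel& p : image) {
            float n = noise(rng);                       // monochromatic grain
            p.r = std::clamp(p.r + n, 0.0f, 1.0f);
            p.g = std::clamp(p.g + n, 0.0f, 1.0f);
            p.b = std::clamp(p.b + n, 0.0f, 1.0f);
        }
    }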
In some embodiments, after adjustments have been made to one image to create a final image with a replacement background 406, and (optionally) additional adjustments have been performed by the user and/or the system 100, it may be desirable to automatically apply the same adjustments to one or more other input original images 170 in the system 100. The system 100 provides an efficient process for applying or copying the same adjustments to one or more input original images 170 without having to repeat the editing steps again. The user interface 114 includes the image selection section 420 (e.g., an image filmstrip).
Virtualization may be employed in the computing device 500 so that infrastructure and resources in the computing device 500 may be shared dynamically. A virtual machine 514 may be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines may also be used with one processor. Memory 506 may include a computer system memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 506 may include other types of memory as well, or combinations thereof.
A user may interact with the computing device 500 through a visual display device 518 (e.g., a personal computer, a mobile smart device, or the like), such as a computer monitor, which may display at least one user interface 520 (e.g., a graphical user interface) that may be provided in accordance with exemplary embodiments. The computing device 500 may include other I/O devices for receiving input from a user, for example, a camera, a keyboard, microphone, or any suitable multi-point touch interface 508, a pointing device 510 (e.g., a mouse), or the like. The input interface 508 and/or the pointing device 510 may be coupled to the visual display device 518. The computing device 500 may include other suitable conventional I/O peripherals.
The computing device 500 may also include at least one storage device 524, such as a hard-drive, CD-ROM, eMMC (MultiMediaCard), SD (secure digital) card, flash drive, non-volatile storage media, or other computer readable media, for storing data and computer-readable instructions and/or software that implement exemplary embodiments of the system described herein. Exemplary storage device 524 may also store at least one database 526 for storing any suitable information required to implement exemplary embodiments. For example, exemplary storage device 524 can store at least one database 526 for storing information, such as data relating to the cameras, the modules, the databases, the central computing system, the communication interface, the processing device, the neural networks, the user interface, combinations thereof, or the like, and computer-readable instructions and/or software that implement exemplary embodiments described herein. The databases 526 may be updated manually or automatically at any suitable time to add, delete, and/or update one or more items in the databases.
The computing device 500 can include a network interface 512 configured to interface via at least one network device 522 with one or more networks, for example, a Local Area Network (LAN), a Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56 kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. The network interface 512 may include a built-in network adapter, a network interface card, a PCMCIA network card, a PCI/PCIe network adapter, an SD adapter, a Bluetooth adapter, a card bus network adapter, a wireless network adapter, a USB network adapter, a modem or any other device suitable for interfacing the computing device 500 to any type of network capable of communication and performing the operations described herein. Moreover, the computing device 500 may be any computer system, such as a workstation, desktop computer, server, laptop, handheld computer, tablet computer, mobile computing or communication device (e.g., a smart phone), an embedded computing platform, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein.
The computing device 500 may run any operating system 516, such as any of the versions of the Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the computing device and performing the operations described herein. In exemplary embodiments, the operating system 516 may be run in native mode or emulated mode. In an exemplary embodiment, the operating system 516 may be run on one or more cloud machine instances.
The environment 600 can include repositories or databases 616, 618, which can be in communication with the servers 602, 604, as well as the one or more cameras 606, one or more modules 608, at least one processing device 610, a user interface 612, and a central computing system 614, via the communications platform 620. In exemplary embodiments, the servers 602, 604, one or more cameras 606, one or more modules 608, at least one processing device 610, a user interface 612, and a central computing system 614 can be implemented as computing devices (e.g., computing device 500). Those skilled in the art will recognize that the databases 616, 618 can be incorporated into at least one of the servers 602, 604. In some embodiments, the databases 616, 618 can store data relating to the database 104, and such data can be distributed over multiple databases 616, 618.
While exemplary embodiments have been described herein, it is expressly noted that these embodiments should not be construed as limiting, but rather that additions and modifications to what is expressly described herein also are included within the scope of the invention. Moreover, it is to be understood that the features of the various embodiments described herein are not mutually exclusive and can exist in various combinations and permutations, even if such combinations or permutations are not made express herein, without departing from the spirit and scope of the invention.
Claims
1. A system for selective replacement of an object in an image, the system comprising:
- an interface configured to receive as input an original image and a background image; and
- a processing device in communication with the interface, the processing device configured to: (i) process the original image using a neural network to detect one or more objects in the original image; (ii) generate a neural network mask of the original image for the one or more objects in the original image; (iii) generate a filtered original image including the original image without the one or more objects; (iv) generate a modulated background image including a replacement background based on the neural network mask; and (v) generate a combined image including the filtered original image combined with the modulated background image.
2. The system of claim 1, wherein the original image includes a foreground and a background, wherein the one or more objects include the background of the original image.
3. The system of claim 2, wherein the one or more objects include a sky in the original image.
4. The system of claim 2, wherein the processing device extracts the foreground from the original image.
5. The system of claim 2, wherein the processing device generates a refined mask for each pixel of the original image associated with the background.
6. The system of claim 5, wherein the processing device generates a dilated mask including a dilation or indentation from a border extending between the foreground and the background.
7. The system of claim 6, wherein the processing device generates an interpolation grid corresponding to the foreground.
8. The system of claim 7, wherein the processing device generates an extracted image including the original image with the foreground extracted based on the interpolation grid.
9. The system of claim 7, wherein the processing device generates the filtered original image by extracting the foreground of the original image based on the interpolation grid.
10. The system of claim 1, wherein the processing device generates a blended image, the blended image including a smooth transition between the filtered original image and the modulated background image.
11. The system of claim 1, wherein the processing device generates a toned image, the toned image including the combined image with adjustment of tone within the combined image.
12. The system of claim 1, wherein the processing device generates a tint unified image, the tint unified image including tint correction at edges between the filtered original image and the modulated background image.
13. The system of claim 1, wherein the processing device adjusts one or more characteristics of the filtered original image independently from one or more characteristics of the modulated background image.
14. The system of claim 1, wherein the interface includes an image selection section with the combined image and one or more additional original images.
15. The system of claim 14, wherein the interface includes a first submenu for selecting the combined image and copying the adjustments or enhancements applied to the combined image, and the interface includes a second submenu for selecting one or more of the additional original images and applying the copied adjustments or enhancements of the combined image to the selected one or more of the additional original images.
16. A method for selective replacement of an object in an image, the method comprising:
- receiving as input at an interface an original image and a background image;
- detecting one or more objects in the original image with a neural network;
- generating a neural network mask of the original image for the one or more objects in the original image;
- generating a filtered original image, the filtered original image including the original image without the one or more objects;
- generating a modulated background image, the modulated background image including a replacement background based on the neural network mask; and
- generating a combined image, the combined image including the filtered original image combined with the modulated background image.
17. The method of claim 16, comprising adjusting one or more characteristics of the filtered original image independently from one or more characteristics of the modulated background image.
18. The method of claim 16, comprising receiving at the interface one or more additional original images, wherein the interface includes an image selection section with the combined image and the one or more additional original images.
19. The method of claim 18, comprising selecting the combined image and copying the adjustments or enhancements applied to the combined image at a first submenu of the interface, and selecting one or more of the additional images and applying the copied adjustments or enhancements of the combined image to the selected one or more of the additional images at a second submenu of the interface.
20. A non-transitory computer-readable medium storing instructions at least for selective replacement of an object in an image that are executable by a processing device, wherein execution of the instructions by the processing device causes the processing device to:
- receive as input at an interface an original image and a background image;
- detect one or more objects in the original image with a neural network;
- generate a neural network mask of the original image for the one or more objects in the original image;
- generate a filtered original image, the filtered original image including the original image without the one or more objects;
- generate a modulated background image, the modulated background image including a replacement background based on the neural network mask; and
- generate a combined image, the combined image including the filtered original image combined with the modulated background image.
Type: Application
Filed: Oct 20, 2022
Publication Date: Feb 9, 2023
Inventors: Dmitry Sytnik (Kyiv), Andrey Frolov (Kyiv Oblast)
Application Number: 17/969,926