DEVICE AND METHOD FOR REDACTING AN IMAGE
A process for redacting an image. In operation, an electronic computing device receives an image captured by a camera corresponding to a field of view of the camera. The image contains a plurality of pixels, and each pixel in the plurality of pixels captures a respective region in a plurality of regions of the field of view of the camera. The electronic computing device measures a distance from the camera to each respective region in the plurality of regions of the field of view of the camera. The electronic computing device redacts at least one pixel in the plurality of pixels contained in the image as a function of the distance measured from the camera to a respective region captured in the at least one pixel.
In order to protect privacy, most camera surveillance systems implement a privacy mask feature. This feature allows security operators to select a portion of a scene from the camera's field of view that should be redacted from an image captured by the camera prior to the image being stored or transmitted. Redaction may include obscuring, removing, or blocking out certain portions of the images captured by the camera. While a user can manually select portions of the scene that are to be redacted, such manual redaction may be time consuming and prone to redaction errors.
In the accompanying figures, similar or the same reference numerals may be repeated to indicate corresponding or analogous elements. These figures, together with the detailed description below, are incorporated in and form part of the specification and serve to further illustrate various embodiments of concepts that include the claimed invention, and to explain various principles and advantages of those embodiments.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present disclosure.
The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE INVENTION

As described above, certain portions of the scene captured by a camera can be redacted for privacy reasons. However, selecting a redaction zone within a camera's field of view may also redact portions of the scene that are of interest to the user. For example, assume a user has installed a camera in the front of her apartment to monitor the movement of persons or objects immediately outside the front of her apartment. In this case, the camera's field of view may extend to cover the front of the neighbor's premises even though the user may have no interest in monitoring the neighbor's premises. The neighbor may also have privacy concerns about the user's camera monitoring his premises. As another example, a manufacturer may use cameras to track the presence of employees working at a manufacturing plant but may not want the manufacturing equipment or processes to be recorded by the camera. As a further example, a hospital administration may use cameras to track people entering or exiting its facilities, but may not want to record in-patients who are being transported from one room to another within the same facility. Accordingly, a technological solution is needed to automatically redact objects in certain areas within the field of view of the camera that the user may have no interest in monitoring.
One embodiment provides a method for redacting an image. The method comprises: receiving, at an electronic computing device, an image captured by a camera corresponding to a field of view of the camera, the image containing a plurality of pixels and each pixel in the plurality of pixels capturing a respective region in a plurality of regions of the field of view of the camera; measuring, at the electronic computing device, a distance from the camera to each respective region in the plurality of regions of the field of view of the camera; and redacting, at the electronic computing device, at least one pixel in the plurality of pixels contained in the image as a function of the distance measured from the camera to a respective region captured in the at least one pixel.
Another embodiment provides an electronic computing device, comprising: a communications unit; and an electronic processor communicatively coupled to the communications unit. The electronic processor is configured to: receive, via the communications unit, an image captured by a camera corresponding to a field of view of the camera, the image containing a plurality of pixels and each pixel in the plurality of pixels capturing a respective region in a plurality of regions of the field of view of the camera; measure, at the electronic computing device, a distance from the camera to each respective region in the plurality of regions of the field of view of the camera; and redact, at the electronic computing device, at least one pixel in the plurality of pixels contained in the image as a function of the distance measured from the camera to a respective region captured in the at least one pixel.
Each of the above-mentioned embodiments will be discussed in more detail below, starting with example system and device architectures of the system in which the embodiments may be practiced, followed by an illustration of processing blocks for achieving an improved technical method of redacting an image. Example embodiments are herein described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to example embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The methods and processes set forth herein need not, in some embodiments, be performed in the exact sequence as shown and likewise various blocks may be performed in parallel rather than in sequence. Accordingly, the elements of methods and processes are referred to herein as “blocks” rather than “steps.”
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational blocks to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide blocks for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. It is contemplated that any part of any aspect or embodiment discussed in this specification can be implemented or combined with any part of any other aspect or embodiment discussed in this specification.
Further advantages and features consistent with this disclosure will be set forth in the following detailed description, with reference to the figures.
Referring now to the drawings, and in particular FIG. 1, a system 100 is shown including an electronic computing device 110 and one or more cameras 120 that capture images 130, each image corresponding to a field of view of the capturing camera 120.
In accordance with embodiments, a pixel 132 contained in the image 130 captured by the camera 120 is redacted according to a distance measured from the camera 120 to a respective real-world space or region captured in the pixel 132. In the example shown in FIG. 1, a particular pixel (e.g., pixel 132-2) of the image 130 captures a region whose measured distance from the camera 120 makes it subject to redaction.
The electronic computing device 110 communicates with the camera(s) 120 via one or more communication networks 140. For example, the electronic computing device 110 obtains image(s) captured by the camera(s) 120 via the communication network 140 for the purposes of redacting the image(s) in accordance with a redaction rule selected by the user. The communication network 140 includes wireless and wired connections. For example, the communication network 140 may be implemented using a wide area network, such as the Internet, a local area network, such as a Wi-Fi network, and personal area or near-field networks, for example a Bluetooth™ network. Portions of the communications network 140 may include a Long Term Evolution (LTE) network, a Global System for Mobile Communications (or Groupe Special Mobile (GSM)) network, a Code Division Multiple Access (CDMA) network, an Evolution-Data Optimized (EV-DO) network, an Enhanced Data Rates for GSM Evolution (EDGE) network, a 3G network, a 4G network, a 5G network, and combinations or derivatives thereof.
While only a single electronic computing device 110 is shown as being included in the system 100, the system 100 may include any number of electronic computing devices, where each electronic computing device may be controlled by one or more different service providers, each providing a redaction service to a different group of users for redacting images captured by cameras 120 that are operated or deployed by the users in the respective groups.
As shown in FIG. 2, the electronic computing device 110 includes a communications unit 202 coupled to a common data and address bus 217 of a processing unit 203.
The processing unit 203 may include an encoder/decoder with a code Read Only Memory (ROM) 212 coupled to the common data and address bus 217 for storing data for initializing system components. The processing unit 203 may further include an electronic processor 213 (for example, a microprocessor, a logic circuit, an application-specific integrated circuit, a field-programmable gate array, or another electronic device) coupled, by the common data and address bus 217, to a Random Access Memory (RAM) 204 and a static memory 216. The electronic processor 213 may generate electrical signals and may communicate signals through the communications unit 202.
Static memory 216 may store operating code 225 for the electronic processor 213 that, when executed, performs one or more of the blocks set forth in FIG. 3.
Turning now to FIG. 3, a flowchart diagram illustrates a process 300 for redacting an image captured by a camera 120.
The electronic computing device 110 may execute the process 300 at power-on, at some predetermined periodic time period thereafter, in response to a trigger raised locally at the electronic computing device 110 via an internal process or via an input interface or in response to a trigger from an external device (e.g., a computing device associated with a user operating the camera 120) to which the electronic computing device 110 is communicably coupled, among other possibilities. As an example, the electronic computing device 110 is programmed to automatically trigger execution of the process 300 when a user subscribes to a redaction service for redacting images captured by a camera 120.
The process 300 of FIG. 3 includes blocks 310, 320, and 330, each of which is described below as being performed by the electronic computing device 110.
At block 310, the electronic computing device 110 receives an image captured by the camera 120 corresponding to a field of view of the camera 120. The image captured by the camera contains a plurality of pixels 132, and each pixel 132 in the plurality of pixels captures a respective region in a plurality of regions of the field of view of the camera 120. In accordance with some embodiments, the electronic computing device 110 automatically has access (e.g., in response to a user subscribing the user's camera 120 to a redaction service) to each image captured by the camera 120. In these embodiments, the electronic computing device 110 automatically receives each image captured by the camera 120 (either in real time or after the image has been stored at the camera or in a remote storage device) for the purpose of redacting the images. In another embodiment, the user may select and specify particular images captured by the camera 120 that should be subject to the redaction process. In this embodiment, the electronic computing device 110 receives access to only those images that are selected by the user for redaction.
At block 320, the electronic computing device 110 measures a distance from the camera 120 to each respective region in the plurality of regions in the field of view of the camera 120. The electronic computing device 110 may use any distance measurement technique to measure the distance from the camera 120 to a respective region within the field of view of the camera 120. In one embodiment, the electronic computing device 110 measures a distance from the camera 120 to a particular region in the field of view of the camera 120 by controlling the camera 120 to change the focus until the pixels capturing the particular region in the camera's field of view create a picture at a predefined quality level. The distance from the camera 120 to the region represented in one or more pixels is then calculated as a function of the change in focus used to achieve the picture of the predefined quality level.
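By way of illustration only, this focus-based approach can be sketched in Python as a focus sweep. The sketch below is not taken from this disclosure: the camera driver calls (set_focus, capture), the gradient-based sharpness metric standing in for the predefined quality level, and the focus_to_distance calibration are all assumptions.

```python
import numpy as np

def sharpness(frame: np.ndarray, region) -> float:
    """Simple focus measure: variance of intensity gradients within the
    region of interest (higher values indicate a sharper picture)."""
    y0, y1, x0, x1 = region
    roi = frame[y0:y1, x0:x1].astype(float)
    gy, gx = np.gradient(roi)
    return float(np.var(gx) + np.var(gy))

def measure_distance_by_focus(camera, region, focus_steps,
                              quality_threshold, focus_to_distance):
    """Sweep the focus setting until the region reaches the predefined
    quality level, then map the winning focus setting to a distance."""
    for focus in focus_steps:
        camera.set_focus(focus)   # hypothetical driver call
        frame = camera.capture()  # hypothetical driver call
        if sharpness(frame, region) >= quality_threshold:
            return focus_to_distance(focus)  # calibrated focus-to-distance map
    return None  # the region never reached the predefined quality level
```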
In another embodiment, the electronic computing device 110 measures a distance from the camera 120 to a particular region in the field of view of the camera 120 as a function of an elevated distance from the ground to a deployed position of the camera 120 and an angle at which the camera 120 is focused onto the particular region in the field of view of the camera 120. As an example, the electronic computing device 110 may use the mathematical function D_I = E * tan(A_I) to measure the distance D_I from the camera 120 to a particular region (captured in a particular pixel of an image) in the field of view of the camera 120, where I identifies the particular real-world region or space (I = 1, 2, . . . , N) in the field of view of the camera 120 to which the distance is measured, E is the elevated distance from the ground to the deployed position of the camera 120, and A_I is the angle at which the camera 120 is focused onto the particular region in the field of view of the camera 120. The electronic computing device 110 may repeat the distance measurement process for each region I in the field of view of the camera 120 to measure a respective distance from the camera 120 to each respective region I. As an example, if the camera's field of view is divided into 4 gigapixels, the electronic computing device 110 may calculate and store a distance measured from the camera 120 to the respective region captured in each of those 4 gigapixels contained within an image captured by the camera 120. Other techniques for measuring a distance from the camera 120 to a respective region in the field of view of the camera 120 are also within the scope of the embodiments described herein.
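By way of illustration only, the following Python sketch computes D_I = E * tan(A_I). It assumes the angle A_I is measured from the vertical line through the camera's mounting point, an assumption consistent with the formula but not expressly stated above.

```python
import math

def distance_to_region(camera_height_m: float, angle_rad: float) -> float:
    """D_I = E * tan(A_I): horizontal distance from a camera mounted at
    height E (meters) to a ground region viewed at angle A_I (radians,
    measured from the vertical below the camera)."""
    return camera_height_m * math.tan(angle_rad)

# A camera mounted 3 m above the ground and focused 60 degrees off vertical
# is about 3 * tan(60 deg) = 5.2 m from the region horizontally.
distances = [distance_to_region(3.0, math.radians(a)) for a in (30.0, 45.0, 60.0)]
print(distances)  # approximately [1.73, 3.0, 5.20]
```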
At block 330, the electronic computing device 110 redacts at least one pixel (e.g., pixel 132-2 captured in an image 130 shown in FIG. 1) in the plurality of pixels contained in the image as a function of the distance measured from the camera 120 to a respective region captured in the at least one pixel.
In one embodiment, the redaction technique may involve application of a redaction filter (e.g., blur filter) to pixel values of an identified pixel contained in the image. The application of a redaction filter may modify optical characteristics (e.g., reduction of optical intensity) of one or more pixel values to which the filter is applied. The modification of the optical characteristics of the pixel values may make the pixel (e.g., a pixel showing a person's facial feature) within a particular image more coherent and less distinct, resulting in a redacted image. Alternatively, the redaction technique may also involve removing certain pixel values within an image captured by the camera 120. In accordance with embodiments, the electronic computing device 110 generates a redacted version of the image captured by the camera 120 after redacting all pixels that are identified to have met the distance criteria specified in the redaction rule selected by the user. The electronic computing device 110 then stores the redacted version of the image at a local memory of the electronic computing device 110 and/or the camera 120. Optionally, the electronic computing device 110 may also transmit the redacted version of the image to a remote storage device. In some embodiments, the electronic computing device 110 may further delete the original unredacted version of the image captured by the camera 120 immediately after redaction or after a predefined time period.
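By way of illustration only, a minimal Python sketch of this redaction step follows. The mean-value fill is a stand-in for the blur filter or pixel removal described above, and the distance_map input is assumed to hold the per-pixel distances measured at block 320; none of these names come from the disclosure itself.

```python
import numpy as np

def redact_image(image: np.ndarray, distance_map: np.ndarray, should_redact) -> np.ndarray:
    """Return a redacted copy of the image. should_redact is a predicate
    over the measured distance (meters) of the region a pixel captures."""
    mask = np.vectorize(should_redact, otypes=[bool])(distance_map)  # pixels meeting the rule
    redacted = image.copy()
    # Stand-in redaction filter: replace masked pixels with the image mean,
    # making them less distinct; a blur filter or pixel removal also works.
    redacted[mask] = image.mean(axis=(0, 1)).astype(image.dtype)
    return redacted

# Example: redact everything measured as farther than 10 m from the camera.
# redacted = redact_image(image, distance_map, lambda d: d > 10.0)
```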
In accordance with embodiments, a user requesting a redaction service for images captured by a particular camera may select a particular redaction rule from a plurality of different redaction rules (e.g., a first redaction rule, a second redaction rule, a third redaction rule, and a fourth redaction rule) provisioned at the electronic computing device 110 to determine which pixels contained in the images captured by the camera 120 should be redacted.
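By way of illustration only, the first, second, and third redaction rules may be encoded as predicates over the measured distance d, as in the following hypothetical Python sketch (the third rule follows the two-threshold variant recited in claim 4 below; claim 12 recites a different two-threshold variant):

```python
def first_rule(predefined: float):
    """Redact regions farther than the predefined distance."""
    return lambda d: d > predefined

def second_rule(predefined: float):
    """Redact regions at or nearer than the predefined distance."""
    return lambda d: d <= predefined

def third_rule(first_predefined: float, second_predefined: float):
    """Redact regions beyond the first distance or within the second,
    keeping the band in between (the claim 4 variant)."""
    return lambda d: d > first_predefined or d <= second_predefined

# Example: keep only the band between 5 m and 10 m from the camera.
rule = third_rule(10.0, 5.0)
```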
Now referring to FIG. 4, an example is shown in which the first redaction rule is applied to an image captured by the camera 120. As shown in FIG. 4, when the first redaction rule is selected, the electronic computing device 110 redacts pixels captured in regions that are greater than a predefined distance from the camera 120 and refrains from redacting pixels captured in regions that are not greater than the predefined distance.

Now referring to FIG. 5, an example is shown in which the second redaction rule is applied to an image captured by the camera 120. As shown in FIG. 5, when the second redaction rule is selected, the electronic computing device 110 redacts pixels captured in regions that are not greater than a predefined distance from the camera 120 and refrains from redacting pixels captured in regions that are greater than the predefined distance.

Now referring to FIG. 6, an example is shown in which the third redaction rule is applied to an image captured by the camera 120. As shown in FIG. 6, when the third redaction rule is selected, the electronic computing device 110 redacts pixels captured in regions that are greater than a first predefined distance from the camera 120 or not greater than a second predefined distance from the camera 120 and refrains from redacting pixels captured in the regions in between.
Now referring to FIG. 7, in one example the fourth redaction rule defines a plurality of sectors within the field of view of the camera 120, for example, a first sector 710, a second sector 720, and a third sector 730, and further requires applying a different redaction rule to each of the sectors.

With respect to the first sector 710 shown in FIG. 7, the fourth redaction rule requires a further application of the first redaction rule. Accordingly, the electronic computing device 110 redacts a pixel capturing a region bounded within the first sector 710 in response to determining that the distance measured from the camera 120 to the region is greater than a predefined distance from the camera 120.

With respect to the second sector 720 shown in FIG. 7, the fourth redaction rule requires a further application of the second redaction rule. Accordingly, the electronic computing device 110 redacts a pixel capturing a region bounded within the second sector 720 in response to determining that the distance measured from the camera 120 to the region is not greater than a predefined distance from the camera 120.

With respect to the third sector 730 shown in FIG. 7, the fourth redaction rule requires a further application of the third redaction rule. Accordingly, the electronic computing device 110 redacts a pixel capturing a region bounded within the third sector 730 in response to determining that the distance measured from the camera 120 to the region is greater than a first predefined distance from the camera 120 or is not greater than a second predefined distance from the camera 120.
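By way of illustration only, the per-sector application of the fourth redaction rule might be sketched as follows; the sector_map representation, the sector identifiers, and the example thresholds are assumptions, as the disclosure does not specify how sectors are represented:

```python
import numpy as np

def fourth_rule_mask(distance_map: np.ndarray, sector_map: np.ndarray,
                     rules_by_sector: dict) -> np.ndarray:
    """Boolean mask of pixels to redact: each sector id in sector_map is
    paired with its own distance predicate in rules_by_sector."""
    mask = np.zeros(distance_map.shape, dtype=bool)
    for sector_id, rule in rules_by_sector.items():
        in_sector = sector_map == sector_id
        mask |= in_sector & np.vectorize(rule, otypes=[bool])(distance_map)
    return mask

# Example mirroring the sectors above: the first sector applies the first
# rule, the second sector the second rule, and the third sector the third rule.
rules_by_sector = {
    710: lambda d: d > 10.0,              # first rule: redact beyond 10 m
    720: lambda d: d <= 5.0,              # second rule: redact within 5 m
    730: lambda d: d > 10.0 or d <= 5.0,  # third rule: keep only the 5-10 m band
}
```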
As should be apparent from this detailed description, the operations and functions of the computing devices described herein are sufficiently complex as to require their implementation on a computer system, and cannot be performed, as a practical matter, in the human mind. Electronic computing devices such as set forth herein are understood as requiring and providing speed and accuracy and complexity management that are not obtainable by human mental steps, in addition to the inherently digital nature of such operations (e.g., a human mind cannot interface directly with RAM or other digital storage, cannot transmit or receive electronic messages, electronically encoded video, electronically encoded audio, etc., among other features and functions set forth herein).
In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The disclosure is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has”, “having,” “includes”, “including,” “contains”, “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “one of”, without a more limiting modifier such as “only one of”, and when applied herein to two or more subsequently defined options such as “one of A and B” should be construed to mean an existence of any one of the options in the list alone (e.g., A alone or B alone) or any combination of two or more of the options in the list (e.g., A and B together).
A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
The terms “coupled”, “coupling” or “connected” as used herein can have several different meanings depending on the context in which these terms are used. For example, the terms coupled, coupling, or connected can have a mechanical or electrical connotation. For example, as used herein, the terms coupled, coupling, or connected can indicate that two elements or devices are directly connected to one another or connected to one another through one or more intermediate elements or devices via an electrical element, electrical signal, or a mechanical element, depending on the particular context.
It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Any suitable computer-usable or computer readable medium may be utilized. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. For example, computer program code for carrying out operations of various example embodiments may be written in an object oriented programming language such as Java, Smalltalk, C++, Python, or the like. However, the computer program code for carrying out operations of various example embodiments may also be written in conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on a computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or server or entirely on the remote computer or server. In the latter scenario, the remote computer or server may be connected to the computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Claims
1. A method of redacting an image, the method comprising:
- receiving, at an electronic computing device, an image captured by a camera corresponding to a field of view of the camera, the image containing a plurality of pixels and each pixel in the plurality of pixels capturing a respective region in a plurality of regions of the field of view of the camera;
- measuring, at the electronic computing device, a distance from the camera to each respective region in the plurality of regions of the field of view of the camera; and
- redacting, at the electronic computing device, at least one pixel in the plurality of pixels contained in the image as a function of the distance measured from the camera to a respective region captured in the at least one pixel.
2. The method of claim 1, wherein redacting comprises:
- retrieving a first redaction rule which requires redaction of pixels captured in regions that are greater than a predefined distance from the camera; and
- redacting the at least one pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one pixel is greater than the predefined distance.
3. The method of claim 1, wherein redacting comprises:
- retrieving a second redaction rule which requires redaction of pixels captured in regions that are not greater than a predefined distance from the camera; and
- redacting the at least one pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one pixel is not greater than the predefined distance.
4. The method of claim 1, wherein redacting comprises:
- retrieving a third redaction rule which requires redaction of pixels captured in regions that are greater than a first predefined distance from the camera or are not greater than a second predefined distance from the camera; and
- redacting the at least one pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one pixel is greater than the first predefined distance from the camera or is not greater than the second predefined distance from the camera.
5. The method of claim 1, wherein redacting comprises:
- retrieving a fourth redaction rule which requires defining a plurality of sectors in the field of view of the camera and further applying different redaction rules to different ones of the sectors.
6. The method of claim 5, further comprising:
- determining that the at least one pixel is captured in a region bounded within a first sector of the plurality of sectors;
- determining that the fourth redaction rule requires a further application of a first redaction rule to pixels captured in regions bounded within the first sector;
- retrieving the first redaction rule which requires redaction of pixels captured in regions that are greater than a predefined distance from the camera; and
- redacting the at least one pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one pixel is greater than the predefined distance.
7. The method of claim 5, further comprising:
- determining that the at least one pixel is captured in a region bounded within a second sector of the plurality of sectors;
- determining that the fourth redaction rule requires a further application of a second redaction rule to pixels captured in regions bounded within the second sector;
- retrieving the second redaction rule which requires redaction of pixels captured in regions that are not greater than a predefined distance; and
- redacting the at least one pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one pixel is not greater than the predefined distance.
8. The method of claim 5, further comprising:
- determining that the at least one pixel is captured in a region bounded within a third sector of the plurality of sectors;
- determining that the fourth redaction rule requires a further application of a third redaction rule to pixels captured in regions bounded within the third sector;
- retrieving the third redaction rule which requires redaction of pixels captured in regions that are greater than a first predefined distance from the camera or are not greater than a second predefined distance from the camera; and
- redacting the at least one pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one pixel is greater than the first predefined distance from the camera or is not greater than the second predefined distance from the camera.
9. The method of claim 1, further comprising:
- refraining from redacting, at the electronic computing device, at least one other pixel in the plurality of pixels contained in the image as a function of the distance measured from the camera to a respective region captured in the at least one other pixel.
10. The method of claim 9, wherein refraining from redacting comprises:
- retrieving a first redaction rule which requires redaction of pixels captured in regions that are greater than a predefined distance from the camera; and
- refraining from redacting the at least one other pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one other pixel is not greater than the predefined distance.
11. The method of claim 9, wherein refraining from redacting comprises:
- retrieving a second redaction rule which requires redaction of pixels captured in regions that are not greater than a predefined distance from the camera; and
- refraining from redacting the at least one other pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one other pixel is greater than the predefined distance.
12. The method of claim 9, wherein refraining from redacting comprises:
- retrieving a third redaction rule requiring redaction of pixels captured in regions that are greater than a first predefined distance from the camera but not greater than a second predefined distance from the camera; and
- refraining from redacting the at least one other pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one other pixel is either not greater than the first predefined distance or greater than the second predefined distance.
13. The method of claim 9, wherein refraining from redacting comprises:
- retrieving a fourth redaction rule which requires defining a plurality of sectors within the field of view of the camera and applying different redaction rules to different ones of the sectors.
14. The method of claim 13, further comprising:
- determining that the at least one other pixel is captured in a region bounded within a first sector of the plurality of sectors;
- determining that the fourth redaction rule requires application of a first redaction rule to pixels captured in regions bounded within the first sector;
- retrieving the first redaction rule which requires redaction of pixels captured in regions that are greater than a predefined distance; and
- refraining from redacting the at least one other pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one other pixel is not greater than the predefined distance.
15. The method of claim 13, further comprising:
- determining that the at least one other pixel is captured in a region bounded within a second sector of the plurality of sectors;
- determining that the fourth redaction rule requires application of a second redaction rule to pixels captured in regions bounded within the second sector;
- retrieving the second redaction rule which requires redaction of pixels captured in regions that are not greater than a predefined distance; and
- refraining from redacting the at least one other pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one other pixel is greater than the predefined distance.
16. The method of claim 13, further comprising:
- determining that the at least one other pixel is captured in a region bounded within a third sector of the plurality of sectors;
- determining that the fourth redaction rule requires application of a third redaction rule to pixels captured in regions bounded within the third sector;
- retrieving the third redaction rule which requires redaction of pixels captured in regions that are greater than a first predefined distance from the camera but not greater than a second predefined distance from the camera; and
- refraining from redacting the at least one other pixel in response to determining that the distance measured from the camera to the respective region captured in the at least one other pixel is either not greater than the first predefined distance or greater than the second predefined distance.
17. The method of claim 1, wherein measuring comprises:
- measuring the distance from the camera to each respective region as a function of an elevated distance from the ground to a deployed position of the camera and an angle at which the camera is focused onto the respective region in the field of view of the camera.
18. The method of claim 1, further comprising:
- generating a redacted version of the image based on redacting the at least one pixel in the plurality of pixels contained in the image; and
- storing and/or transmitting the redacted version of the image.
19. An electronic computing device, comprising:
- a communications interface; and
- an electronic processor communicatively coupled to the communications interface, the electronic processor configured to: receive, via the communications interface, an image captured by a camera corresponding to a field of view of the camera, the image containing a plurality of pixels and each pixel in the plurality of pixels capturing a respective region in a plurality of regions of the field of view of the camera; measure, at the electronic computing device, a distance from the camera to each respective region in the plurality of regions of the field of view of the camera; and redact, at the electronic computing device, at least one pixel in the plurality of pixels contained in the image as a function of the distance measured from the camera to a respective region captured in the at least one pixel.
20. The electronic computing device of claim 19, wherein the electronic processor is configured to:
- refrain from redacting at least one other pixel in the plurality of pixels contained in the image as a function of the distance measured from the camera to a respective region captured in the at least one other pixel.
Type: Application
Filed: Apr 17, 2023
Publication Date: Oct 17, 2024
Inventors: WOJCIECH KORZYBSKI (KRAKOW), BARTOSZ LUSZCZAK (KRAKOW), JAKUB BARANSKI (KRAKOW), PAWEL ABRAMCZUK (KRAKOW), MIROSLAW KAWA (KRYSPINOW), KATARZYNA B RUGIELLO (KROSNO)
Application Number: 18/301,584