APPARATUS FOR PREVENTING UNAUTHORIZED ACCESS TO COMPUTER FILES AND FOR SECURING MEDICAL RECORDS
Apparatus for identifying a person who wishes to receive information, where identifying information for each of a plurality of registered individuals is stored in a database, calls for capturing images of an individual requesting information, and determining whether this individual is the same as one of the registered individuals. The stored identifying information includes images of a unique, observable biologic identifier on a body portion of each registered individual. The specificity of the identification process is enhanced by storing registered examples of altered biological information in the database, by allowing the information provider to induce an alteration in a biologic identifier of a requesting person at the time of the request, and by comparing the altered requesting person information to stored information. Further identification enhancement is obtained by rapidly altering the visual environment of the requesting person, and by providing the requested information to the retina of the requesting person.
The present application claims the benefit of priority from the following U.S. patent applications: U.S. Provisional Application Ser. No. 60/934,043, filed Jun. 11, 2007, entitled “APPARATUS AND METHODS FOR REMOTE VOTING AND FOR GOVERNMENT AND CORPORATE SYSTEMS BASED ON REMOTE VOTING”; U.S. patent application Ser. No. 12/157,469, filed Jun. 11, 2008, entitled “APPARATUS AND METHOD FOR VERIFYING THE IDENTITY OF AN AUTHOR AND A PERSON RECEIVING INFORMATION”, (now U.S. Pat. No. 8,233,672); U.S. patent application Ser. No. 13/563,399, filed Jul. 31, 2012, entitled “APPARATUS AND METHOD FOR VERIFYING THE IDENTITY OF AN AUTHOR AND A PERSON RECEIVING INFORMATION” (now U.S. Pat. No. 9,152,387); U.S. patent application Ser. No. 13/834,634, filed Mar. 15, 2013, entitled “APPARATUS FOR PREVENTING UNAUTHORIZED ACCESS TO COMPUTER FILES AND FOR SECURING MEDICAL RECORDS” (pending); and U.S. patent application Ser. No. 15/426,466 filed Feb. 7, 2017, entitled “APPARATUS AND METHOD FOR VERIFYING THE IDENTITY OF AN AUTHOR AND A PERSON RECEIVING INFORMATION” (now allowed). The present application is a continuation of the aforesaid applications Ser. Nos. 13/834,634 and 15/426,466.
This application is related to applicant's prior application Ser. No. 12/714,649, filed Mar. 1, 2010, entitled “Voting Apparatus and System” (U.S. Patent Publication No. 2010/0153190). This Publication No. 2010/0153190 is incorporated herein by reference.
BACKGROUND OF THE INVENTION
There are a multitude of situations in which it is necessary to be able to document the identity of an individual who produces visually observable material or actions indicating the thoughts or decisions of that individual. Examples of such situations involve an individual who (i) produces written text material, (ii) indicates choices on a touch sensitive screen, (iii) produces alphanumeric entries using a keyboard, (iv) produces artwork, or (v) produces a musical work in written form.
SUMMARY OF THE INVENTION
It is a principal object of the present invention to provide a method and apparatus which links the image of an individual (containing identifying features), obtained during a registration process, to the image of an individual author, during his or her act of generating the observable material that reflects the author's thoughts or decisions, thereby to verify the identity of the author with a high degree of confidence.
This object, as well as further objects which will become apparent from the discussion that follows, is achieved, in accordance with the invention, by apparatus which comprises:
(a) a computer database in which are stored an image of a visible identifying feature and other identification data of each of a plurality of registered human individuals;
(b) a computer processor coupled to the database for storing information therein and for accessing selected information therefrom; and
(c) one or more input devices, coupled to the processor and disposed at a local site where an individual is to create writings or make computer entries. The input device(s) includes at least one camera arranged to view and capture a local image of both the identifying feature and at least a portion of a hand of the individual that is engaged in a writing or computer entry process.
The processor is operative to store the local image(s) in said database for later retrieval, and to compare the stored identifying feature of said registered human individuals with the local image(s) of the individual's identifying feature generated during the writing or computer entry process.
By such comparison, the apparatus can thus verify that the identity of the individual who made the writing or computer entry is the same as one of the registered individuals.
Similarly, the aforementioned objects of the present invention are achieved by a method for identifying the writer of a document which comprises the steps of:
(a) storing in a database identifying information for each of a plurality of registered human individuals, this identifying information including both an alphanumeric identifier and an image of a unique, visually observable biologic identifier on a body portion of the respective individual;
(b) capturing local images which include both:
- (i) the making of at least one of writings and keyboard entries by an individual whose identifying information may be stored in the database; and
- (ii) substantially simultaneously with the capture of (i), a body portion of said one individual on which said biologic identifier is visible; and
(c) determining whether said individual making the writings and/or keyboard entries is the same as one of the registered individuals whose identifying information is stored in said database, by verifying the substantial equivalence of the local image of the visually observable biologic identifier and one of said images of the body portion stored in the database.
For a full understanding of the present invention, reference should now be made to the following detailed description of the preferred embodiments of the invention as illustrated in the accompanying drawings.
The preferred embodiments of the invention will now be described with reference to the accompanying drawings. The identity verification process involves:
- 1) At a registration event: a link between the name of the author (and/or other author identification data) 100 and a video image 102 that identifies the author;
- 2) At an authorship event (the time an author produces an original document): a simultaneously recorded image of
- a. the document as it is being authored 104, and
- b. an author image 106, i.e. an image of an identifiable feature of the author; and
- 3) At a verification event (a time when verification of the author identity is confirmed): a determination that the registered author image 102 is substantially identical to the author image 106 which is recorded at the time that the document is authored.
The registration event links 100 and 102; the authorship event links 106 and 104; and the verification event links 102 and 106. The net effect, symbolically, is:
100↔102↔106↔104
. . . thereby establishing that the author is the same person as a registered person.
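By way of illustration only, and not as a limitation, the linkage of the three events may be sketched in software as follows (Python; all names are hypothetical, and the image comparison is a placeholder for the matching process described hereinbelow):

```python
from dataclasses import dataclass

@dataclass
class RegistrationRecord:          # registration event: links 100 and 102
    author_id: str                 # 100 - name and/or other author ID data
    registered_image: bytes        # 102 - identifying video image

@dataclass
class AuthorshipRecord:            # authorship event: links 104 and 106
    document_image: bytes          # 104 - document as it is being authored
    author_image: bytes            # 106 - identifiable feature of the author

def images_match(a: bytes, b: bytes) -> bool:
    """Placeholder for the image-comparison step (processor and/or human)."""
    return a == b                  # a real system would use biometric matching

def verify(reg: RegistrationRecord, auth: AuthorshipRecord) -> bool:
    # verification event: links 102 and 106, establishing 100-102-106-104
    return images_match(reg.registered_image, auth.author_image)
```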
The document may be one of many types in which there needs to be certainty about the identity of the person who signed it, who authored it, or who indicated his or her thought(s) by one or more writings or keyboard entries. Examples include, but are not limited to:
- a) a financial matter which requires a verified signature, such as a check, a loan application, a promissory note, a funds transfer, etc.;
- b) a test, in which the test taker answers questions to demonstrate mastery of certain matters;
- c) an original work—literary, scientific, artistic, musical, etc.;
- d) a vote—in a government election, a shareholder matter, etc.;
- e) a medical record—including an entry by a physician or nurse, a signature on a “do not resuscitate order”, or a signature (by a patient or physician) on a document indicating that informed consent was obtained;
- f) a legal document, such as a contract, a death certificate, a court document, or a will; and
- g) a political document, such as a presidential signature on a legislative bill, a treaty, etc.
The term “author” is intended to include each of the types of person listed in a)-g) hereinabove, and in general is anyone whose identity is to be linked to an observable event. This identity may be a name, a social security number, a medical license number, etc. The observable event generally refers to events which may be seen; but embodiments of the invention which involve only video data, or only audio data (e.g. verification of a speaker or singer), are possible. The events which may be seen include writing using a pen, pencil, etc. on a piece of paper, using a virtual pen to write on a touch sensitive screen, selecting a choice from a menu using a touch sensitive screen, using an actual keyboard, and using artistic tools to create a work of art.
Once the registrar accepts the association between the ID data and the registering person's image, the data-image pair is stored as a computer file in a database. The image of the data-image pair is then considered to be a registered image. A database may hold:
- a) one or multiple registered images of one person;
- b) registered images of multiple persons (which may include one or more images for each such person).
At block 202, at a time later than the registration process, an author (as defined hereinabove) who has previously registered (by the process indicated hereinabove) and who wishes to have his (male pronoun used hereinbelow without any intention of the choice indicating a preference, limitation, or advantage) identity confirmed, produces a document while simultaneous images are obtained showing:
- (i) the authored data, i.e. the actual writing as it is being produced or the keystrokes as they are being registered (on either an actual or virtual keyboard); and
- (ii) the author image, i.e. an identifiable biologic feature of the author.
In one preferred embodiment of the invention, the camera which captures the authored data also captures—within the same image—the author image. For example, the camera may be situated so as to capture both the face and the hands of the author, with the portion showing the hands also showing the written material/keystrokes in enough detail to identify its content. Ideally, the camera would also show enough of the body region between the face and the hands, so that it is clear that the face and the hands belong to the same person.
An example (discussed hereinbelow) which clearly demonstrates textual material and author identification in a single image, uses a device which shows author fingerprints, as the author makes keyboard entries.
In another preferred embodiment of the invention, two separate cameras may be used: one to capture the image of the biologic identifier, and one to capture the image of the textual material. The two images may be stored as separate files with a secure label for each file, indicating the time and location of each image (to thereby allow for the conclusion that the two were recorded in essentially the same space and time). Alternatively, the two images may be merged into a single file, by techniques known in the art.
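As an illustrative sketch only (Python; the field names and the one-second tolerance are assumptions), the secure time-and-location label might be used as follows to confirm that the two single-camera files were recorded in essentially the same space and time:

```python
import time
from dataclasses import dataclass

@dataclass
class LabeledImage:
    image: bytes
    camera_id: str
    timestamp: float       # seconds since the epoch
    location: str          # e.g. identifier of the local capture site

def co_recorded(a: LabeledImage, b: LabeledImage,
                max_skew_s: float = 1.0) -> bool:
    """True if the two labeled images were recorded at essentially the
    same place and time (the tolerance is an illustrative assumption)."""
    return a.location == b.location and abs(a.timestamp - b.timestamp) <= max_skew_s

# usage sketch: label both images at capture time, then check the pair
bio_img = LabeledImage(b"...", "camera_A", time.time(), "site-1")
text_img = LabeledImage(b"...", "camera_B", time.time(), "site-1")
assert co_recorded(bio_img, text_img)
```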
At block 204, the author image is compared with either (i) the registered image of the person believed to be the same person as the author; or (ii) some or all of the registered persons, if the identity of the author is either unknown, or substantially uncertain.
At block 206, a determination is made as to whether the author image and a registered image are a match. The definition of a match is further discussed hereinbelow in conjunction with
An author wishing to prove his identity enters video images of (i) his work as it is being produced by him, and (ii) himself, through input device 308. Video camera 308A is used to produce file 308B, which contains simultaneously recorded author image(s) 307 and authored data image(s) 309. In an alternate embodiment of the invention, as discussed hereinabove and hereinbelow, there may be more than one camera 308A. File 308B is sent to processor 304, which then compares the author image 307 with one or more registered images in database 306. If a match is found, the authored data 309—i.e. the signature, composition, document, etc. produced by the author—is then stored as verified writing or keyboard entries in storage apparatus 310. Storage apparatus 310 may be part of 306, or separate from it.
The comparison of the author image and the registered image may be:
- a) performed entirely by processor 304;
- b) performed entirely by optional human 312, who views the two images on display 314;
- c) performed by processor 304, unless the evaluation by 304 results in a state of uncertainty (e.g. if there is a less than good match between the two images), in which case the task of comparison may be handed off to human 312. Processor 304 may be pre-programmed to indicate the level of goodness of match required to bypass human 312. Processor 304 may use neural networks to facilitate the process of visual comparison.
If the final decision regarding the comparison is made by the processor, the result may be indicated on display 314. Clock 316 allows for time-stamping of images and of comparisons.
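As a sketch of the decision flow in options a) through c) above (Python; the similarity function and the threshold values are assumptions, not the patented method):

```python
from typing import Callable

def decide_match(author_img: bytes,
                 registered_img: bytes,
                 similarity: Callable[[bytes, bytes], float],
                 auto_accept: float = 0.95,
                 auto_reject: float = 0.50) -> str:
    """Return 'match', 'no-match', or 'refer-to-human' (option c)).

    `similarity` stands in for the processor's comparison (which may be
    implemented with a neural network); scores between the two
    pre-programmed thresholds are handed off to human reviewer 312.
    """
    score = similarity(author_img, registered_img)
    if score >= auto_accept:
        return "match"            # processor 304 decides on its own
    if score <= auto_reject:
        return "no-match"
    return "refer-to-human"       # state of uncertainty: shown on display 314
```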
The recording by any of the cameras, either during the registration step or the authoring step, may be of a single image or of a sequence of images (e.g. a video or a “movie”). Hereinabove and hereinbelow, “image” is intended to refer to either one of these cases.
The apparatus described hereinbelow may be used for:
- a) registration;
- b) entries by an author who wishes to be a verified author; or
- c) both a) and b).
In the registration process, person 400 may use the apparatus to input two or more unique identifiers simultaneously, in the same image. For example, 400 may sign his name on 402. 402 may represent:
a) a transparent or semi-transparent surface/paper which allows a signature to be observed and recorded by video camera 404, which is situated below 402;
b) a touch sensitive screen with enough resolution to provide a good quality copy of a signature.
404 may be used to capture both the signature and
a) an image of the face, iris or retina of 400; and/or
b) an image of one or more fingerprints, or a palm print of 400, visualized through transparent surface 414.
Alternatively, 404 may capture both a fingerprint/palm print and a signature, without capturing the facial/iris/retinal image.
In yet another embodiment of the apparatus used for registration, multiple identifiers may be simultaneously captured in the same image using camera 410, which is situated behind and, if necessary, somewhat to the side of (or above) person 400, and may be pointed at mirror 412. With proper placement of 410 and 412, and proper angulation of 412, camera 410 may visualize both:
- a) the signature of person 400 on 402 (which need not be transparent or semi-transparent in this case); and
- b) the face/iris/retina of person 400.
In yet another alternate embodiment of the registration apparatus, 410 and 404 may both be used to input registration information. Each may be used to input the type of information described hereinabove. The information may be stored:
a) as two separate files, one for each camera, with each having associated ID data for the registrant, and each confirmed by the registrar (with each file preferably indicating the presence of additional registration information for the same person in another file); or
b) as a single file.
The information from 404 and 410 may be obtained simultaneously or at separate times.
A simplified form of the registration process would be to enter only a single identifier for 400, e.g. one of the signature, facial image, etc. The apparatus in
Embodiments of the registration apparatus with more than two cameras are possible. The operating principles parallel those of the two-camera case.
The apparatus described hereinabove may also be used during the authoring process, to capture both:
- a) written entries or touch sensitive screen based entries; and
- b) at least one visual identifier of the author (e.g. face, signature, fingerprint(s), etc.).
The mode of operation would be similar to that described hereinabove for the registration process, except that it may be desirable to enter more text (perhaps a lot more text) than just the author's signature. Furthermore, screen 408 may be viewed by camera 410, and may be used to display either:
(i) textual material in a document that the author is signing; or
(ii) a display of what the author is writing on 402 (as observed by camera 404 or another camera (not shown in this figure) which may be placed above 402). In addition, by angulating mirror 412 so that it shows the author's face, and by properly angulating 408 and 412 and properly positioning 410, both the face (and/or iris, and/or retina) and the authored data as shown on screen 408, may be recorded in a single image by 410 (or in each of a series of images recorded by 410).
In an alternate embodiment of the invention, a largely transparent keyboard could be used for 510. This would facilitate 404 observing the face of 400.
Furthermore, a keyboard in which the key surfaces are largely transparent would allow a camera to capture, within a single image, both:
a) the author's fingerprint, and
b) the sequence of selected keystrokes.
In the figure, camera 604 is positioned underneath keyboard 610, to show both fingerprints and keystrokes in each image.
a) a camera 704A located behind the transparent or semitransparent touch sensitive screen which records an image which shows each of (i) the finger touching the “no” choice box, 703, (ii) the contiguous parts of the hand lying between the finger which selects the touch sensitive region and the finger which is the source of the print, and, optionally (iii) the fingerprint itself, viewable through 706; and
b) a camera 704B which is located behind the individual, and records the selection of the “no” choice at the same moment that the fingerprint is visualized by 706.
In the case of a “yes” choice, the functioning of the apparatus is analogous to its functioning for a “no” choice: The left hand of 700 may be used to simultaneously touch fingerprint identification apparatus 708 and touch box 701 on the touch sensitive screen.
Apparatus similar to that described hereinabove may also be used to record the image of a witness to the authoring event. When a single recorded image includes:
(i) the authored material;
(ii) the author image; and
(iii) the witness image;
a highly verifiable and very difficult to corrupt or hack system is the result. If, in addition (not shown in the figure), the witness is also a person who has been registered by the same process that the author has, an even greater degree of hardening of the system is the result.
Since the registrar has the role of matching the ID data and the registered images, the robustness of the system will depend on the reliability of the registrar. Various methods of enhancing registrar reliability are possible including having multiple registrars, each of whom reviews the correctness of a paired ID data-registration image set. Yet another method of security enhancement would be to have super-registrars, i.e. people with a high level of security clearance who are responsible for registering ordinary registrars.
Another method of enhancing security during the registration step is shown in
The concept of linking a particular person to a particular body of information has, hereinabove, been considered with respect to providing a strong linkage between provided information and the person providing the information. Hereinbelow, the concept and invention is presented with respect to providing a strong linkage between provided information and the person requesting the information. It will be clear that such a strong link will be useful for (a) providing secure communications, (b) preventing access to information stored in a computer memory or other digital device by an inappropriate person, and (c) preventing the modification of information stored in a computer memory or other digital device by an inappropriate person.
Identification of a person requesting information (a requesting person, or “RP”) entails:
(a) repeatedly examining a biologic feature of the person and comparing it to information in a database which contains files comprising:
- (i) information pertaining to the details of the biologic feature of a registered person, the information having been obtained under a plurality of different conditions, and
- (ii) alphanumeric identification [e.g. name, social security number, date of birth, etc., which are stored in correspondence with the biologic information] of the registered person; and
(b) providing a prompt which induces a change in the appearance of the biologic identifier.
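By way of illustration only, the examine-prompt-reexamine loop of items (a) and (b) might be sketched as follows (Python; the camera, prompter, and database objects and their methods are hypothetical placeholders, not elements of the specification):

```python
import random

def identify_rp(camera, prompter, db, rounds: int = 3,
                threshold: float = 0.9) -> bool:
    """Repeatedly image the requesting person's biologic feature, issue a
    prompt that should alter its appearance, and compare each observation
    with registered images captured under the corresponding condition.
    All helper objects here are hypothetical."""
    for _ in range(rounds):
        condition = random.choice(db.known_conditions())   # e.g. a light level
        prompter.apply(condition)                          # induce the change
        observed = camera.capture()
        expected_set = db.registered_images(condition)     # multi-condition files
        if max(db.similarity(observed, e) for e in expected_set) < threshold:
            return False
    return True
```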
By providing a prompt which alters the appearance of the identifier, and by repeatedly observing the identifier, the invention provides a far more secure system than the static approach, known in the art, of simply comparing an image of a biologic identifier (“BI”) of a person requesting access to a digital system to file images. An example of the static approach is described in U.S. Pat. No. 8,189,096 to Azar.
Defeating the static approach, i.e. gaining access to a computer or communication system protected by the requirement of providing a static image, entails (i) obtaining and storing a BI image, during a process that is perhaps unknown to the person associated with the BI, and (ii) providing the previously stored image of the BI at a time when information or computer system access is desired by someone who is not the person associated with the BI [i.e. an inappropriate person (“IP”)], but who is in possession of and can provide the information contained in the static BI.
The static system becomes harder to defeat if multiple (static) images must be provided to gain access to the system. But it still may be defeatable by an IP, by obtaining a multiplicity of static images of the BI of a person registered to use the system.
In one embodiment of the current invention, advantage is taken of the ability to change the appearance of a BI upon the request of the person or system providing secure information or desiring secure communication. A simple approach is a voluntary request to the RP to perform a motion which results in a change in the appearance of that person's BI. Examples of such changes include a request to turn the RP's face in one direction or another, to wink one eye, to look to the right, left, up, down, etc. (with or without moving the head), to move a finger containing a fingerprint in a particular way, or to move a palm containing a palm print or a pattern of blood vessels in a particular way.
Still other requests may involve moving one part of the body containing one BI so that its relationship with another part of the body containing another BI is geometrically altered. The value of such a voluntary prompt is that the nature and timing of the request is entirely under the control of the information source (“IS”), whether the source is a person or a computational device.
Still other requests may be for the RP to follow a moving point or object on a display screen, using apparatus in which the IS controls the trajectory of the point on the screen, while a camera observes the user's eye motion, iris image, retinal vein image, image of blood vessels on the surface of the eye, or facial motion. Although the tracking of such a point by the RP would not perfectly match the apparent motion of the point, software methods to compensate, and statistical techniques to assess a match, could be applied as are known in the art. Clearly, attempts by an IP to communicate inappropriately with such a system would be extremely difficult, requiring the IP to very quickly provide a sequence of BI images which match a not previously expected pattern of variance. By making the choice and timing of prompts random or pseudorandom (e.g. by using a variety of techniques to generate such random information, including the digitization of white noise, the use of minutiae related to sports information [e.g. the number of milliseconds between pitches in an ongoing baseball game], stock market minutiae [e.g. ongoing trades and their timing], astronomic information [e.g. solar activity], traffic information minutiae [e.g. patterns of people walking through Times Square], or by electronically generating pseudorandom number patterns), the task of the IP becomes more difficult.
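A minimal sketch of such unpredictable prompt selection and timing follows (Python; the prompt catalog, the timing range, and the mixing of outside minutiae are illustrative assumptions):

```python
import hashlib
import secrets

def next_prompt(prompt_catalog: list[str],
                outside_minutiae: bytes = b"") -> tuple[str, float]:
    """Select a prompt and a delay (in seconds) unpredictably.

    `outside_minutiae` may carry externally sourced entropy (e.g. digitized
    noise or trade timings); it is mixed with a secure pseudorandom value.
    """
    seed = hashlib.sha256(secrets.token_bytes(32) + outside_minutiae).digest()
    prompt = prompt_catalog[seed[0] % len(prompt_catalog)]
    delay_s = 0.5 + (int.from_bytes(seed[1:3], "big") % 3000) / 1000.0  # 0.5-3.5 s
    return prompt, delay_s

# usage sketch
prompt, delay = next_prompt(["look left", "wink right eye", "follow the dot"])
```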
A still greater burden on the inappropriate person attempting to gain access entails the use of prompts resulting in entirely involuntary physiologic actions. One such example is the application of light to the human eye. As shown in
In turn, the extent of incident light may be controlled by apparatus at the information source. Prompts can control the light intensity, the wavelength, the spatial placement of the light, the size of the light source, the number of light impulses, the time interval between impulses, the duration of each impulse, etc. Furthermore, the IS may store prompt details and generate an expected iris response for comparison with an observed one. Furthermore, the IS may generate linear and other combinations of iris images stored in a computer database, thereby potentially expanding the database limitlessly. The IS may also adjust the amount of applied light to attempt to match an iris image on file.
In addition, alteration in iris size may be induced by having the RP change focus from a distant object to a near one (which may be presented on a computer or digital device display screen), or vice versa. In addition, dilation of the pupil/constriction of the iris may be induced by a painful stimulus, which may be applied to the RP under remote control via a device attached to the patient (e.g. one which provides a mild electric shock).
A given induced change in iris image (i.e. the varying biologic identifier) may not always occur identically for a given amount of light. The system administrators and architects can overcome this by (a) storing a variety of responses to each prompt, obtained during a registration period for the person who is to be an authorized user of the system, (b) utilizing linear or other combinations of previously observed responses by a particular user, or (c) utilizing neural networks to learn the patterns of authorized system users.
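As an illustrative sketch of options (a) and (b) above (Python; the pupil-diameter trace representation and the tolerance value are assumptions):

```python
def response_matches(observed: list[float],
                     stored_responses: list[list[float]],
                     tolerance: float = 0.15) -> bool:
    """Compare an observed pupil-diameter trace (one value per frame,
    following a given light prompt) with the set of traces recorded for
    the same prompt during registration, or with pairwise averages of
    those traces (a simple linear combination, per option (b))."""
    def close(a, b):
        return all(abs(x - y) <= tolerance for x, y in zip(a, b))

    candidates = list(stored_responses)
    # add pairwise averages of stored traces as additional candidates
    for i in range(len(stored_responses)):
        for j in range(i + 1, len(stored_responses)):
            candidates.append([(x + y) / 2
                               for x, y in zip(stored_responses[i],
                                               stored_responses[j])])
    return any(close(observed, c) for c in candidates)
```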
The pupil/iris changes, in turn, will change the appearance of another BI, the observed pattern of retinal blood vessels. A constricted pupil narrows the area of the retinal surface (and vascular pattern) available for view, while a dilated pupil has the opposite effect. Thus another embodiment of the invention entails RP identification using retinal vessels as the BI, with prompts causing a change in iris/pupil geometry which changes the viewable retinal field.
The aforementioned involuntary changes in the appearance of the BIs caused by IS prompts in an essentially unpredictable manner would create a situation that would be extremely difficult for an IP to defeat.
As indicated hereinabove, the certification that AP information stored in 1600 is indeed correct may be accomplished by utilizing a registrar, i.e. a registration person who is authorized to input information to 1600. This input occurs via input device 1624, which may also input alphanumeric and/or biologic identification information pertaining to the registrar.
Whereas for the embodiment shown in
Communication between (i) the processor 1646 and (ii) each of camera 1642 and prompt device 1644 may be by a public or private telephone network, the internet, a private digital or analog communication network, radiofrequency communication (including the microwave portion of the spectrum, and Bluetooth communication), satellite-based communication, light communication (including infrared and ultraviolet), communication by modulated magnetic fields, and communication by sound, ultrasound, or subsonic longitudinal wave modulation means.
RP 1640, camera 1642 and prompt device 1644 may be situated in a location which is different from, and possibly remote from, processor 1646 and its associated input devices and memory devices. Such a separation between the corresponding elements of
Each of the remaining elements in
As is known in the art, each processor 1670 (which is analogous to processor 1646 in
Although
Visible ID database 1804 may also contain biologic ID images which show a body part from a variety of vantage points and angles.
Another approach to further increasing the security and accuracy of identification of a user, and the location of the user of the system is shown in each of
It is to be understood that each step in the passage of the information reflected by the code may involve a degree of distortion/degradation of the information. The conversion of the information from digital signal to visual display is one such step, as is the conversion of the screen information to a camera image, and the conversion of the camera image to a camera signal. Further losses of integrity may occur during each limb of signal transmission from and to the processor. Thus the analysis of the received code by the processor, and its comparison with the sent code will result in a less than perfect match even when system integrity is uncompromised. Algorithms for assessing the goodness of fit of the received version of the code information to the sent version will be apparent to those skilled in the art.
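For illustration, such a goodness-of-fit test might simply measure the fraction of agreeing bit positions between the code as sent and the code as recovered from the camera image, and accept values above a preset threshold (a sketch; the bit-string representation and the 0.9 threshold are assumptions, not part of the specification):

```python
def code_fit(sent_bits: str, received_bits: str) -> float:
    """Fraction of bit positions that agree between the code as sent and
    the code as recovered from the camera image of the display screen."""
    n = min(len(sent_bits), len(received_bits))
    if n == 0:
        return 0.0
    agree = sum(1 for a, b in zip(sent_bits[:n], received_bits[:n]) if a == b)
    return agree / n

def code_verified(sent_bits: str, received_bits: str,
                  min_fit: float = 0.9) -> bool:
    # the acceptance threshold accounts for display, imaging and
    # transmission losses; 0.9 is an assumed value
    return code_fit(sent_bits, received_bits) >= min_fit
```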
The difficulty of having an inappropriate person gain access to processor 1700 is enhanced by including the code component in the composite image: Gaining such access would require the IP to be able to reproduce the BI and reproduce the image of the code, and to do so within a single image. The difficulty of such reproduction is enhanced by rapidly changing the code. Each of the elements (the noise generator, the random outside events information and the processor itself) utilized to generate random and pseudorandom variations discussed in conjunction with
This variation may utilize either a single biologic ID/code image or repeated ones. In addition, a prompt producing device and activation means (not shown in the figure) could be added to further augment the degree of security.
Each of the remaining elements of
A flow diagram of the algorithm which embodies the schematics of
Yet another technique for merging the code screen and the BI is shown in
In the aforementioned detailed description of each of
However, an embodiment of the invention in which the screen contents are a message to the RP is also possible. Yet another variation is an embodiment in which the screen contents comprise a mixture of a message and a code. In some cases, these arrangements may be considered to be less desirable because the verification process would require two-way transmission of at least a part of the message: i.e. (a) from the processor to the screen, and (b) back to the processor as part of the composite image.
Although a two-way version of the invention is possible, in which two people communicate, each having a respective RP camera and display screen (or RP camera and prompt producing device), and each having a respective processor for analysis of the aforementioned matches, the two-way transmission would increase the chance of interception and/or diversion of the message.
An additional means of security entails projecting a code image onto a reflective portion of the eye of the RP, and then imaging the reflected image. This is shown in
(a) a camera positioning device 2312, which receives positioning signals from processor 2308. Camera 2315, which views eye and face position, provides positioning information for 2308, as does the image viewed by 2306;
(b) the addition of alternate cameras, such as camera 2314;
(c) (not shown in the figure) a positioning device similar to 2312, electrically linked to processor 2308 and mechanically linked to projection device 2302.
In addition, prompts directed to the RP indicating instructions for orienting the face and eyes will enhance the success of this approach. One such apparatus is shown in
Referring again to
- 2306→2308→2318→2320 or
- 2306→2308→2320 or
- 2301→“A”→2318→2320 or
- 2301→“A”→2320. Variations in the route of information entry to 2320, and in the arrangement and number of cameras and processors will be apparent to those skilled in the art. Although three processors are shown in the figure, designs with a smaller or larger number of processors are possible. In each variation, the critical “loop” which allows the sender of information to properly identify the receiver of the information includes
- 2300→2302→2304→2306→2310.
Camera 2322, referred to as an anti-tamper camera, is configured to view one or more of the elements in the figure to prevent tampering with them. It can also view a screen 2324 on the housing of camera 2306, which displays a code (either the same as, or a different code from, the one reflected off of the eye), for verifying the identity of camera 2306. Alternatively, or in addition, camera 2306 can view a code on screen 2326 attached to the housing of the anti-tamper camera. The code displayed by 2326 may be the same as either of the aforementioned codes, or different.
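By way of a sketch only (Python; the objects, method names, and pairing of codes to screens are assumptions layered on the element numerals above), the mutual verification of the two cameras via the codes on screens 2324 and 2326 might proceed as follows:

```python
def cameras_cross_verified(processor, cam_2306, cam_2322,
                           screen_2324, screen_2326) -> bool:
    """Each camera images the code displayed on the housing of the other,
    and the processor compares what it sent with what each camera saw.
    All objects and methods here are illustrative assumptions."""
    code_a = processor.issue_code()        # code for screen 2324
    code_b = processor.issue_code()        # code for screen 2326
    screen_2324.show(code_a)               # on the housing of camera 2306
    screen_2326.show(code_b)               # on the housing of anti-tamper camera 2322
    seen_by_2322 = cam_2322.read_code()    # anti-tamper camera views screen 2324
    seen_by_2306 = cam_2306.read_code()    # camera 2306 views screen 2326
    return (processor.matches(code_a, seen_by_2322)
            and processor.matches(code_b, seen_by_2306))
```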
Referring again to
(a) to utilize at least one biologic identifier of the receiver of the information;
(b) to minimize the distance between the biologic identifier of the recipient and the body part of the recipient which receives the information; and
(c) in concert with (b), to minimize the distance between the apparatus for detecting the biologic identifier and the apparatus for providing the information.
Minimizing these distances allows for the implementation and optimization of various security measures including:
(a) the inclusion of each of the camera observing the biologic identifier and the projection device within a single housing (see
(b) allowing one or more cameras to view two or more of each of the essential items in the arrangement (as shown in each of
(c) utilizing one device to perform both the projection function and the identification function, as shown in
Referring to
Biodynamic identification, i.e. the remotely controlled manipulation of the appearance of a body part containing a biologic identifier is the subject of parent U.S. Pat. No. 9,152,837, parent application Ser. No. 14/874,922. An additional biodynamic identification approach is presented in Matos U.S. patent application Ser. No. 14/816,382: the use of remotely controlled implanted pacemaker devices to manipulate the fine points of vascular anatomy of the person in whom the device is implanted (by the alteration of a pacing rate or by the insertion of premature cardiac impulses), and to thereby allow for the determination of who is the recipient of a message (by observation of alteration of vascular anatomy in concert with the altered heart rhythm).
Input device 2600 (e.g. a receiving device utilizing RF or microwave or other frequency, a connection [either electrical or optical] to the Internet or to a private digital communications network, or a hardwired connection to any device which is a source of secure information), provides message information to projection device 2602. 2602 projects a representation of the information contained in the message onto the retina of eye 2304. 2602 may be a laser or other source of light output. Beam directing and focusing apparatus as are known in the art will allow the message to be directly projected onto the retina. The power output of 2602 will be sufficiently low to avoid damage to the eye.
Output device 2610 allows the transmission of biologic identification information and other security information to be sent to the entity which is the source of the secure information. Options for the particular type of output device are similar to those listed hereinabove for input device 2600.
Other functions of the projection device may include:
(a) providing visual prompts as discussed hereinabove, and
(b) providing a source of illumination, for optimally viewing eye 2304 and its structures.
The illumination may comprise visible light, infrared, or both. Either or both of the prompt and illuminating functions may alternatively be performed by devices other than projection device 2602.
The other objects in
The source of the secure message for the person whose eye is denoted by 2304 may be local (as shown in
(a) as shown in
(b) as shown in
As discussed in conjunction with each of
The inventions described herein are applicable for preventing an inappropriate person from gaining access to secret or classified information in a remote computer memory. Gaining access includes copying the information and corrupting the information.
There has thus been shown and described a novel system for verifying the identity of an author and for verifying the identity of a person receiving information or using a computer system, which fulfills all the objects and advantages sought therefor. Many changes, modifications, variations and other uses and applications of the subject invention will, however, become apparent to those skilled in the art after considering this specification and the accompanying drawings which disclose the preferred embodiments thereof. All such changes, modifications, variations and other uses and applications which do not depart from the spirit and scope of the invention are deemed to be covered by the invention, which is to be limited only by the claims which follow.
Claims
1.-69. (canceled)
70. A method for identifying the presence of a particular human, at a given location, the method comprising:
- (a) storing, in a database, a first digital file including (1) information representing a plurality of registered images of a body part displaying a unique biologic feature of a registered human, and (2) alphanumeric information pertaining to said registered human;
- (b) receiving, by a processor, a request to identify a putative human as the registered human;
- (c) based on receiving the request, generating, by the processor, at least one first code, said code comprising information, and providing a first code signal representing said first code to each of (1) a code display device and (2) the database; wherein said code display device is located in the vicinity of said putative human;
- (d) receiving, by the database, the first code signal; and storing by the database, information representing said first code;
- (e) receiving, by the code display device, the first code signal;
- (f) in response to the received first code signal, producing, by the code display device, a code screen image representing the first code signal;
- (g) capturing, by a digital camera, a plurality of composite images, each image including both (1) a body part image including a visible unique biologic feature of the putative human, and (2) at least a portion of the code screen image; and providing a camera signal representing said composite images to said processor;
- (h) based on the received camera signal, comparing, by the processor, (1) information representing the putative human body part with (2) the stored information representing the registered human body part, and determining a match between the received information representing the putative human body part and the stored information representing the registered human body part;
- (i) based on the received camera signal, comparing, by the processor, (1) information representing the received code screen image with (2) the stored information representing the first code, and determining a match between the received code screen image information and the stored first code; and
- (j) generating, by the processor, an output indicating a result of said comparing steps (h) and (i).
71. The method of claim 70, wherein said step (h) further comprises determining a likelihood of a match between the putative human body part image and the stored registered human body part image.
72. The method of claim 70, wherein said step (i) further comprises determining a likelihood of a match between the code screen image and the stored first code.
73. The method of claim 70, wherein, based on determining said match in each of said step (h) and said step (i), said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
74. The method of claim 73, wherein said providing of access is limited to a period of time following a determination of said match.
75. The method of claim 73, wherein said providing of access comprises granting permission to retrieve said secure information.
76. The method of claim 73, wherein said providing of access comprises granting permission to alter said secure information.
77. The method of claim 73, wherein said providing of access comprises granting permission to input secure information to said digital memory.
78. The method of claim 73, wherein said access comprises granting permission to delete secure information from said digital memory.
79. The method of claim 70, wherein said step (b) further comprises inputting, to said processor, alphanumeric information pertaining to said putative human.
80. The method of claim 73, wherein
- said step (c) comprises repeatedly generating said first code signal, wherein each first code signal may differ from a previously generated first code signal;
- said steps (d) through (j) are each sequentially performed, following each performance of said step (c)
- and wherein, based on determining said match in both of said repeated steps (h) and (i), said repeated step (j) further comprises continued providing, by said processor, of access, for said putative human, to said secure information.
81. The method of claim 80, wherein, upon a failure to determine a match in a particular step (h) or a particular step (i), said step (j) further comprises halting of access to said secure memory.
82. The method of claim 70, wherein said body part displaying a unique biologic feature of a registered human is selected from the group consisting of:
- (A) a pattern of an iris of an eye;
- (B) a pattern of retinal veins of an eye;
- (C) a pattern of blood vessels of a sclera of an eye;
- (D) a facial image;
- (E) a fingerprint;
- (F) a palm print; and
- (G) a pattern of blood vessels of a hand.
83. The method of claim 70, wherein
- said step (a) further comprises storing, in the database, a second digital file including (1) information representing a plurality of registered images of a second body part displaying a second unique biologic feature of the registered human, and (2) alphanumeric information pertaining to the registered human;
- said step (g) further comprises capturing, by a second digital camera, a plurality of second composite images, each second image including both (1) a second body part image including a second visible unique biologic feature of the putative human, and (2) at least a second portion of the code screen image; and providing a second camera signal representing said second composite images to said processor;
- said step (h) further comprises, based on the received second camera signal, comparing, by the processor, (1) information representing the putative human second body part with (2) the stored information representing the registered human second body part, and determining a match between the putative human second body part image and the stored registered human second body part image;
- and said step (j) further comprises generating, by the processor, an output indicating a result of said comparing step (i) and each of said comparing steps (h); and based on determining a match in each of said step (i) and both of said steps (h), said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
84. The method of claim 70, wherein
- said step (a) further comprises storing, in the database, a second digital file including (1) information representing a plurality of registered images of a second body part displaying a second unique biologic feature of the registered human, and (2) alphanumeric information pertaining to the registered human;
- said step (g) further comprises capturing, by a second digital camera, a plurality of second composite images, each second image including both (1) a second body part image including a second visible unique biologic feature of the putative human, and (2) at least a second portion of the code screen image; and providing a second camera signal representing said second composite images to said processor;
- said step (h) further comprises, based on the received second camera signal, comparing, by the processor, (1) information representing the putative human second body part with (2) the stored information representing the registered human second body part, and determining a match between the putative human second body part image and the stored registered human second body part image;
- and said step (j) further comprises generating, by the processor, an output indicating a result of said comparing step (i) and each of said comparing steps (h); and based on determining a match in each of said step (i) and at least one of said steps (h), said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
85. The method of claim 70, wherein
- said step (a) further comprises storing, in the database, an additional first digital file including (1) information representing a second plurality of registered images of the body part displaying the unique biologic feature of the registered human, said second plurality comprising images obtained from at least one viewing angle differing from the viewing angle pertaining to said plurality of registered images; and (2) alphanumeric information pertaining to the registered human;
- said step (g) further comprises capturing, by an additional digital camera arranged to image said putative human from an angle differing from that of said digital camera, a plurality of additional composite images, each additional image including both (1) an additional body part image including said visible unique biologic feature of the putative human, and (2) at least an additional portion of the code screen image; and providing an additional camera signal representing said additional composite images to said processor;
- said step (h) further comprises, based on the received additional camera signal, comparing, by the processor, (1) information representing the putative human body part imaged by said additional camera, with (2) the stored second plurality of registered images, and determining a match;
- and said step (j) further comprises generating, by the processor, an output indicating a result of said comparing step (i) and each of said comparing steps (h); and based on determining a match in each of said step (i) and both of said steps (h), said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
86. The method of claim 70, wherein
- said step (a) further comprises storing, in the database, an additional first digital file including (1) information representing a second plurality of registered images of the body part displaying the unique biologic feature of the registered human, said second plurality comprising images obtained from at least one viewing angle differing from the viewing angle pertaining to said plurality of registered images; and (2) alphanumeric information pertaining to the registered human;
- said step (g) further comprises capturing, by an additional digital camera arranged to image said putative human from an angle differing from that of said digital camera, a plurality of additional composite images, each additional image including both (1) an additional body part image including said visible unique biologic feature of the putative human, and (2) at least an additional portion of the code screen image; and providing an additional camera signal representing said additional composite images to said processor;
- said step (h) further comprises, based on the received additional camera signal, comparing, by the processor, (1) information representing the putative human body part imaged by said additional camera, with (2) the stored second plurality of registered images, and determining a match;
- and said step (j) further comprises generating, by the processor, an output indicating a result of said comparing step (i) and each of said comparing steps (h); and based on determining a match in each of said step (i) and at least one of said steps (h), said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
87. The method of claim 70, wherein said step (h) further comprises generating, by said processor, information representing calculated registered images, based on the information representing the stored plurality of registered images, and utilizing said calculated registered images to determine said match.
88. The method of claim 70, wherein said step (h) further comprises generating, by said processor, information representing calculated putative human body part images, based on the received information representing the putative human body part, and utilizing said calculated registered images to determine said match.
89. The method of claim 70, wherein the method of generation of each first code is selected from the group consisting of:
- (A) generation of at least one pseudorandom number, and
- (B) generation of at least one random number.
90. The method of claim 70, wherein one of said at least one first code comprises a message to said putative human.
91. The method of claim 70, wherein
- said step (c) further comprises generating, by the processor, at least one first security code, said first security code comprising information, and providing a first security code signal representing said first security code to each of (1) a first security code display device and (2) the database; wherein said first security code display device is attached to a housing of said digital camera;
- said step (d) further comprises receiving, by the database, the first security code signal; and storing, by the database, information representing said first security code;
- said step (e) further comprises receiving, by the first security code display device, the first security code signal;
- said step (f) further comprises producing by the first security code display device, in response to the received first security code signal, a first security code screen image representing the first security code signal;
- said step (g) further comprises capturing, by a security camera, at least one first security image including at least a portion of said first security code screen image, and providing a first security camera signal representing said at least one first security image to said processor;
- said step (i) further comprises comparing, by the processor, (1) information representing the received first security code screen image, based on the received first security camera signal, with (2) the stored information representing the first security code, and determining a match between the received first security code screen image information and the stored first security code; and
- said step (j) comprises generating, by the processor, an output indicating a result of said comparing step (h) and each of said two comparing steps (i).
92. The method of claim 91, wherein, based on determining said match for each of
- (1) said step (h) pertaining to said body part information,
- (2) said step (i) pertaining to said first code information, and
- (3) said step (i) pertaining to said first security code information,
- said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
93. The method of claim 91, wherein
- said step (g) further comprises capturing, by said security camera, at least one composite first security image including both (1) a body part image including a visible unique biologic feature of the putative human, and (2) at least a portion of said first security code screen image, and providing a composite first security camera signal representing said at least one composite first security image to said processor;
- said step (h) further comprises, based on the received composite first security camera signal, comparing, by the processor, (1) information representing the putative human body part with (2) the stored information representing the registered human body part, and determining a match between the received information representing the putative human body part and the stored information representing the registered human body part; and
- said step (j) comprises generating, by the processor, an output indicating a result of each of said two comparing steps (h) and each of said two comparing steps (i).
94. The method of claim 93, wherein, based on determining said match for each of
- (1) said step (h) pertaining to said body part information of said composite images,
- (2) said step (h) pertaining to said body part information of said composite first security images,
- (3) said step (i) pertaining to said first code information, and
- (4) said step (i) pertaining to said first security code information,
- said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
95. The method of claim 91, wherein
- said step (g) further comprises capturing, by said security camera, at least one additional composite first security image including both (1) at least a portion of said code screen image, and (2) at least a portion of said first security code screen image, and providing an additional composite first security camera signal representing said at least one additional composite first security image to said processor;
- said step (i) further comprises comparing, by the processor, (1) information representing the received first security code screen image, based on the received first security camera signal, with (2) the stored information representing the first security code, and determining a match between the received first security code screen image information and the stored first security code; and
- said step (i) still further comprises comparing, by the processor, (1) information representing the received code screen image, based on the received first security camera signal, with (2) the stored information representing the first code, and determining a match between the received code screen image information and the stored first code;
- said step (j) comprises generating, by the processor, an output indicating a result of said step (h) and each of said three comparing steps (i).
96. The method of claim 95, wherein, based on determining said match for each of
- (1) said step (h) pertaining to said body part information of said composite images captured by said digital camera,
- (2) said step (i) pertaining to said first code information, of said composite images captured by said digital camera,
- (3) said step (i) pertaining to said first code information of said at least one additional composite first security image, captured by said security camera, and
- (4) said step (i) pertaining to said first security code information of said at least one additional composite first security information, captured by said security camera,
- said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
97. The method of claim 91, wherein
- said step (c) further comprises generating, by the processor, at least one second security code, said second security code comprising information, and providing a second security code signal representing said second security code to each of (1) a second security code display device and (2) the database; wherein said second security code display device is attached to a housing of said security camera;
- said step (d) further comprises receiving, by the database, the second security code signal; and storing, by the database, information representing said second security code;
- said step (e) further comprises receiving, by the second security code display device, the second security code signal;
- said step (f) further comprises producing by the second security code display device, in response to the received second security code signal, a second security code screen image representing the second security code signal;
- said step (g) further comprises capturing, by the digital camera, a plurality of supplemental composite images, each of said images including each of (1) a body part image including a visible unique biologic feature of the putative human, (2) at least a portion of the code screen image, and (3) at least a portion of said second security code screen image; and providing a supplemental camera signal representing said supplemental composite images to said processor;
- said step (i) still further comprises comparing, by the processor, (1) information representing the received second security code screen image, based on the received supplemental camera signal, with (2) the stored information representing the second security code, and determining a match between the received second security code screen image information and the stored second security code;
- said step (j) comprises generating, by the processor, an output indicating a result of said step (h) and each of said two comparing steps (i).
98. The method of claim 97, wherein, based on determining said match for each of
- (1) said step (h) pertaining to said body part information of said supplemental composite images, captured by said digital camera,
- (2) said step (i) pertaining to said first code information, of said supplemental composite images captured by said digital camera,
- (3) said step (i) pertaining to said second security code information of said supplemental composite images, captured by said digital camera, and
- (4) said step (i) pertaining to said first security code information of said at least one first security image, captured by said security camera,
- said step (j) further comprises providing, by said processor, of access, for said putative human, to secure information in a digital memory.
99. The method of claim 70, further comprising:
- (g2) following said step (g) based on receiving the request, generating, by the processor, at least one prompt, said prompt comprising information, and providing a prompt signal representing said prompt to a prompt producing device;
- wherein said prompt device is located in the vicinity of said putative human and arranged to provide said prompt to said putative human;
- wherein said prompt causes a visible change in the appearance of said unique biologic feature of said putative human;
- (g3) following said step (g2) receiving, by the prompt producing device, the prompt signal;
- (g4) following said step (g3) in response to the received prompt signal, producing, by the prompt device, a prompt;
- (g5) following said step (g4) capturing, by the digital camera, a plurality of post-prompt composite images, each image including both (1) a body part image including the visible unique biologic feature of the putative human, and (2) at least a portion of the code screen image; and providing a post-prompt camera signal representing said post-prompt composite images to said processor; and
- (g6) following said step (g5) based on the received post-prompt camera signal, comparing, by the processor, (1) information representing the post-prompt appearance of the putative human body part with (2) the stored information representing the registered human body part, and determining a match between the received post-prompt information representing the putative human body part and the stored information representing the registered human body part;
- wherein said step (j) further comprises generating, by the processor, an output indicating a result of said comparing step (g6).
100. The method of claim 99, wherein said prompt causes an involuntary change in the appearance of a body part of the putative human.
101. The method of claim 100, wherein said step (a) comprises storing, in said database, a plurality of registered images of an iris of a registered human, said plurality including images obtained under a plurality of ambient light conditions;
- said step (g2) comprises providing said prompt signal to an electronically controllable light source, arranged to cause a light output to be directed to at least one iris of an eye of said putative human;
- said step (g3) comprises receiving said prompt signal by said light source;
- said step (g4) comprises producing, by said light source, a light output specified by said prompt signal; and
- said step (g6) includes comparing post-prompt iris image information provided by said post-prompt camera signal to post-prompt iris image information stored in said database.
102. The method of claim 99, wherein said prompt is an instruction requesting a voluntary action by the putative human.
Type: Application
Filed: Mar 17, 2020
Publication Date: Jul 9, 2020
Inventor: JEFFREY A. MATOS (New Rochelle, NY)
Application Number: 16/821,221