Apparatus and method for detecting images within spam
A method is described that includes comparing a characteristic of a present image to stored characteristics of spam images. The method also includes generating a signature of the present image. The method further includes comparing the signature of the present image to stored signatures of spam images. The method also includes determining the spam features corresponding to the stored signatures of spam images that match the signature of the present image.
This application is a divisional of U.S. patent application Ser. No. 11/652,715, filed on Jan. 11, 2007, now U.S. Pat. No. 8,290,203, which is hereby incorporated by reference in its entirety into this application.
BACKGROUND OF THE INVENTION

Field of the Invention
This invention relates to electronic message analysis and filtering. More particularly, the invention relates to a system and method for improving a spam filtering feature set.
Description of the Related Art
“Spam” is commonly defined as unsolicited bulk e-mail, i.e., email that was not requested (unsolicited) and sent to multiple recipients (bulk). Although spam has been in existence for quite some time, the amount of spam transmitted over the Internet and corporate local area networks (LANs) has increased significantly in recent years. In addition, the techniques used by “spammers” (those who generate spam) have become more advanced in order to circumvent existing spam filtering products.
Spam represents more than a nuisance to corporate America. Significant costs are associated with spam including, for example, lost productivity and the additional hardware, software, and personnel required to combat the problem. In addition, many users are bothered by spam because it interferes with the amount of time they spend reading legitimate e-mail. Moreover, because spammers send spam indiscriminately, pornographic messages may show up in e-mail inboxes of workplaces and children—the latter being a crime in some jurisdictions.
Spam filters attempt to remove spam without removing valid e-mail messages from incoming traffic. For example, spam filters scan email message headers, metatag data, and/or the body of messages for words that are predominantly used in spam, such as “Viagra” or “Enlargement.” Current email filters may also search for images which are known to be used in spam messages. Hashing algorithms such as MD5 are used to generate image “fingerprints” which uniquely identify known spam images.
Over the years, spammers have become more creative in disguising their messages, e-mails, or advertisements as legitimate incoming traffic to avoid detection by spam filters. Specifically, spammers typically obfuscate words which would normally be identified by spam filters. For example, “Viagra” may be spelled “V!agra” or “Enlargement” may be spelled “En!@rgement.” With respect to images, spammers often embed random data within spam images to modify the image fingerprint, and thereby avoid detection.
Thus, improved mechanisms for detecting obfuscated images within email messages are needed.
A better understanding of the present invention can be obtained from the following detailed description in conjunction with the accompanying drawings.
Described below is a system and method for detecting images used in spam. Throughout the description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the present invention.
Message Filtering Apparatus

When an email 105 including an image is received by the message filtering apparatus 101, the features of the message, including any image features, are identified.
Once the features of an email message have been identified, a mathematical model 103 is used to apply “weights” to each of the features. Features which are known to be a relatively better indicator of spam are given a relatively higher weight than other features. The feature weights are determined via “training” of classification algorithms such as Naïve Bayes, Logistic Regression, Neural Networks, etc.
The combined weights are then used to arrive at a spam “score” 108. If the score is above a specified threshold value, then the email is classified as spam and filtered out of the email stream. By contrast, if the score is below the specified value, then the email is forwarded on to a user's email account.
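By way of illustration only, the following Python sketch shows how such weighted scoring might work; the feature names, weights, and threshold value are hypothetical and not taken from the patent, which leaves them to the trained model.

```python
# Minimal sketch of weighted feature scoring (all names and values hypothetical).
FEATURE_WEIGHTS = {
    "image_oversized": 1.5,
    "image_corrupt": 2.0,
    "fuzzy80_match": 4.0,
    "obfuscated_word": 3.0,
}

SPAM_THRESHOLD = 5.0  # assumed threshold; the text does not specify a value

def classify(features):
    """Sum the weights of the fired features and compare against the threshold."""
    score = sum(FEATURE_WEIGHTS.get(f, 0.0) for f in features)
    return ("spam" if score >= SPAM_THRESHOLD else "legitimate"), score

print(classify(["fuzzy80_match", "obfuscated_word"]))  # ('spam', 7.0)
```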
The embodiments of the invention described below focus on the manner in which the image analyzer 104 identifies image features within email messages. It should be noted that not all of the specific operations set forth below are needed for complying with the underlying principles of the invention. Furthermore, the discussion below is not inclusive of all methods, steps, or processes covered by the present invention.
The Image Analyzer

If the image from email 105 is oversized, then an “oversized” feature is fired by the image analyzer 104 at 202 and no further image analysis is performed. If the image is not “oversized”, then the image is pre-processed at 203. In one embodiment, pre-processing includes obtaining the image format, image width, image height and/or image size. Then the image analyzer 104 determines whether the image is in a supported format at 204 (e.g., a format which the image analyzer is capable of analyzing). Examples of supported formats are Graphic Interchange Format (“GIF”) and Joint Photographic Experts Group (“JPEG”) images. If so, the image data is read by the image processing application at 206. If the image is unsupported, then an “unsupported” feature is fired at 205 and the process terminates.
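A minimal sketch of this gating logic, assuming a byte-length cutoff for “oversized” (the text does not specify one) and hypothetical feature names:

```python
# Sketch of the gating steps at 202-206; size cap and feature names are assumptions.
SUPPORTED_FORMATS = {"GIF", "JPEG"}
MAX_IMAGE_BYTES = 100_000  # hypothetical "oversized" cutoff

def gate_image(image_bytes, image_format):
    """Return a fired feature name, or None if the image should be analyzed further."""
    if len(image_bytes) > MAX_IMAGE_BYTES:
        return "oversized"        # fired at 202; analysis stops
    if image_format not in SUPPORTED_FORMATS:
        return "unsupported"      # fired at 205; analysis stops
    return None                   # proceed to read the image data at 206
```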
As described below, in one embodiment of the invention, the image format and image width are used as a hash key to an image fingerprint hash table.
For GIFs and JPEGs, the width is relatively straightforward to determine. Once the image is pre-processed at 203, then the image data of the image is read and analyzed at 206. In one embodiment of the present invention, ImageMagick™ is the program used to read the image data. However, various other image processing programs may be used while still complying with the underlying principles of the invention (e.g., Adobe Photoshop, Corel Draw, Paint Shop Pro).
After the image data is read at 206, the image analyzer 104 determines if the image is corrupted at 207. Spammers may “corrupt” an image by making the image unreadable by filters. For example, a spammer may change the format or embed data within the image so that the reader in 206 is unable to read the image data.
In one embodiment of the invention, if the image is corrupted, then the image analyzer 104 fires a “corrupt” feature at 208 in response to being unable to read the image data. At 209, the image analyzer 104 performs a “GIF/JPEG Feature Detection Algorithm” (hereinafter “GIF80/JPEG80”) to create a signature for the image and to search for a match for the signature. The “GIF80/JPEG80” algorithm is described in detail below.
By contrast, if the image is not corrupted, the image analyzer 104 executes a “Fuzzy Feature Detection Algorithm” (hereinafter “Fuzzy80”) to create a different signature for the image and to search for a match for the signature. In this embodiment, computing resources are conserved because the GIF80/JPEG80 algorithm is more computationally intensive than the Fuzzy80 algorithm (although the GIF80/JPEG80 algorithm is more suitable for corrupted images). Therefore, running the Fuzzy80 algorithm instead of the GIF80/JPEG80 algorithm, whenever possible, saves processing power.
In one embodiment of the invention, both the Fuzzy80 algorithm and the GIF80/JPEG80 algorithm may be run in conjunction with each other in order to cross-validate the algorithm results. In yet another embodiment of the invention, only the GIF80/JPEG80 algorithm is performed on the image and the Fuzzy80 algorithm is omitted. In this embodiment, the image analyzer 104 may not need to determine whether the image is corrupted at 207.
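The selection logic described above might be sketched as follows; the helper functions are stubs standing in for the algorithms detailed later, and the dictionary-based image representation is purely illustrative.

```python
# Sketch of algorithm selection; the helpers are stubs for the real algorithms.
def is_corrupted(image):      # stand-in for the readability check at 207
    return image.get("corrupt", False)

def fuzzy80(image):           # stand-in for the Fuzzy80 vector match
    return set(image.get("fuzzy_features", []))

def gif80_jpeg80(image):      # stand-in for the GIF80/JPEG80 signature match
    return set(image.get("hash_features", []))

def detect_features(image, cross_validate=False):
    if is_corrupted(image):
        # GIF80/JPEG80 can still fingerprint images whose data is unreadable.
        return gif80_jpeg80(image)
    features = fuzzy80(image)  # cheaper, so preferred when the image is readable
    if cross_validate:
        # One possible cross-check: keep only features both algorithms agree on.
        features &= gif80_jpeg80(image)
    return features
```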
Returning to the first described embodiment, if the image analyzer 104 does not match the signatures of the current image to any signatures of the previously encountered spam images, then the image analyzer 104 fires a “No Match” feature at 212 to indicate that the image does not have any matching spam features. If, however, the image analyzer 104 matches at least one signature of the current image with at least one signature of a previously encountered spam image, then the image analyzer 104 fires the features 107 corresponding to the matched signatures at 213. The features 107 may be identified by title, type, a special list, or any other technique for identifying data objects.
As previously stated, once the image features 107 are sent to the model 103, the model 103 attaches weights to the image features 107 (along with the weights of the other features of the email 105), and computes a score 108 to determine if the email 105 is spam (e.g., if the score is above a specified threshold value).
Image Signatures

A “Fuzzy80” algorithm and a “GIF80/JPEG80” algorithm were briefly mentioned above. Different embodiments of these two algorithms will now be described in detail.
An image “signature” is a unique code created by performing a hash on the image data. In one embodiment of the present invention, a Message Digest 5 (MD5) hash function is performed on the image data to create a 128-bit signature. In another embodiment of the invention, a Secure Hash Algorithm (SHA), such as SHA-1, may be used. The present invention should not be limited, though, to a specific algorithm, as almost any hash function (or one-way hash function) may be used to create the image signature.
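A minimal sketch of this signature step using Python's hashlib; the function name and the placeholder byte strings are ours, and either MD5 or SHA-1 can be selected as described above.

```python
import hashlib

def image_signature(image_data: bytes, algorithm: str = "md5") -> str:
    """Hash raw image data into a fixed-length signature (MD5 yields 128 bits)."""
    return hashlib.new(algorithm, image_data).hexdigest()

sig_md5 = image_signature(b"...raw image bytes...")           # MD5, per one embodiment
sig_sha1 = image_signature(b"...raw image bytes...", "sha1")  # SHA-1 alternative
```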
In order to trick signature-based spam filters, spammers manipulate images in various ways including, for example:
- randomly changing the values of unused entries in the GIF color table;
- appending random data within the image file after the image data; and
- randomly changing the image data in the last few rows of the image.
In contrast to prior art spam filters, the “Fuzzy80” algorithm and the “GIF80/JPEG80” algorithm described below produce a recognizable signature for images that have been manipulated by spammers.
Fuzzy Feature Detection Algorithm (Fuzzy80)

Through cropping of the bottom percentage 501 of the image 500, the width 503 of the image 500 is kept intact. The width 503 may be the original width of the image of email 105 or may be the modified width of the image after pre-processing at 203.
By cropping the image as described above, the effects of random data appended to the end of the image and/or modifications to the data at the end of the image are removed. Thus, performing a hash function on the remaining image data still produces a recognizable signature.
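As an illustration, the cropping step might look like the following sketch, here using Pillow in place of ImageMagick; the exact crop fraction 501 is not specified in the text, so the 20% used here is an assumption.

```python
from PIL import Image  # Pillow stands in for ImageMagick in this sketch

BOTTOM_CROP_FRACTION = 0.20  # assumed value; the percentage 501 is not specified

def crop_bottom(image: Image.Image) -> Image.Image:
    """Drop the bottom fraction of the image; the width is left intact."""
    width, height = image.size
    kept_height = max(1, int(height * (1 - BOTTOM_CROP_FRACTION)))
    return image.crop((0, 0, width, kept_height))  # (left, upper, right, lower)
```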
Referring back to the Fuzzy80 algorithm, the cropped image is then depixelated, i.e., converted into a much smaller image, such as a 4×4 image.
In other embodiments, the image may be converted to a size different from a 4×4 image, such as a 10×10 image. The larger the converted image, the more accurate the signature results will be, but more computing resources will be required because more points exist within the image. Thus, the underlying principles of the invention do not depend on the size of the converted image, as different applications may require different sizes.
Referring back to the Fuzzy80 algorithm, a vector is then created from the converted image, for example by listing the red, green, and blue values of each of the 16 pixels to produce a 48-number vector.
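A sketch of the depixelation and vector-creation steps, again using Pillow for illustration; the choice of resampling filter and the RGB channel ordering are assumptions.

```python
from PIL import Image

def fuzzy_vector(image: Image.Image, size: int = 4) -> list[int]:
    """Depixelate to size x size, then flatten the RGB values into a vector.

    A 4x4 result yields 16 pixels x 3 channels = a 48-number vector.
    """
    small = image.convert("RGB").resize((size, size))  # shrink/blur to size x size
    return [value for pixel in small.getdata() for value in pixel]
```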
Once a vector is created for the converted image, the vector is matched against vectors of known spam images at 304. For the present embodiment of the Fuzzy80 algorithm, the vector of an image is considered the image's “signature.”
The image analyzer 104 determines whether any known spam images have the same width as the present image 605. In one embodiment, the image widths 605 are contained within a hash table indexed by width. If no spam images of the same width exist, then the image analyzer 104 does not need to compare vectors.
Then, the image analyzer 104 crops the present image for each x-offset, y-offset, x-limit, y-limit entry listed in the list 609, and the vectors of spam images categorized under that entry are compared to the vector of the cropped present image.
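One possible layout for such a width-indexed store, with entirely hypothetical widths, crop regions, vectors, and feature names:

```python
# Hypothetical layout of the stored-vector index described above: a hash table
# keyed by image width 605, each entry listing crop regions 609 and known vectors.
SPAM_VECTORS = {
    640: [
        {"region": (0, 0, 640, 380),  # x-offset, y-offset, x-limit, y-limit
         "vectors": [([12, 200, 31] * 16, ["spam_pharma_img"])]},  # (vector, features)
    ],
}

def candidate_entries(width):
    """Return the crop regions and vectors to try, or [] if the width is unknown."""
    return SPAM_VECTORS.get(width, [])
```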
In one embodiment, error introduced by depixelation is accounted for by testing whether two vectors fall within a range of similarity rather than requiring them to be identical. For example, in one embodiment of the present invention, the image analyzer 104 determines whether the absolute differences between the numbers of the two vectors are cumulatively below a specified threshold. In comparing two 48-number vectors, the first number of each vector is compared, then the second, then the third, and so on, until all positions of both vectors have been compared. The absolute differences at each position are then summed. If the sum is less than a specified numerical threshold, the two vectors are considered a “match”; if the sum is greater than the threshold, the two vectors are considered not to match.
In one embodiment of the invention, the threshold is five times the size of the vector. Therefore, on average, each position of the present vector must be within five of the equivalent position of a compared vector for a match to exist. In other embodiments, multiple thresholds may apply to segments of the vectors, and other means of comparing the vectors may be used.
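The comparison described above can be sketched directly; this mirrors the summed-absolute-difference test and the five-times-vector-length threshold.

```python
def vectors_match(v1, v2, per_position_tolerance=5):
    """Compare vectors by summed absolute difference, as described above.

    The threshold is five times the vector length, so each position may
    differ by five on average before the match fails.
    """
    if len(v1) != len(v2):
        return False
    threshold = per_position_tolerance * len(v1)
    return sum(abs(a - b) for a, b in zip(v1, v2)) < threshold
```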
A present vector may match multiple vectors of known spam images. Therefore, the present vector is compared to all of the vectors categorized under the same list 609. When a match is determined, the features corresponding to the matching vector are determined.
Once all of the matching features are determined, the image analyzer sends the features as image features (at 213, described above) to the model 103.
The underlying principles of the invention do not depend on the bit level of color. Therefore, any number of bits for color depth may be used (e.g., 16, 32, 64, 128, 256, etc., shades of red, green, and blue). In addition, any format of pixel definition may be used, such as YUV (creating a 48-number vector for 16 pixels) or a static color table (creating a 16-number vector for 16 pixels).
GIF/JPEG Feature Detection Algorithm (GIF80/JPEG80)

In one embodiment of the present invention, the image analyzer 104 first determines whether the present image is a GIF or a JPEG. If the image is a GIF, the image analyzer 104 runs the GIF80 algorithm; if the image is a JPEG, the image analyzer 104 runs the JPEG80 algorithm. A GIF file includes overall attributes, such as an overall width 803, an overall height 804, an overall color field 805, and an overall color table 808, as well as attributes of each image contained within the file, such as a width 814, a height 815, a color field 816, and a color table 817.
If the image is a “moving” image or multiple images shown in succession to simulate motion, the above information is typically universal to all of the images within the sequence of images for the “moving” image. In one embodiment, the image analyzer 104 may analyze only the first image in the stack of images. Alternatively, the image analyzer 104 may analyze any image in the stack or may analyze any number of the images in the stack.
In one embodiment of the invention, the image analyzer determines if a GIF is manipulated by comparing the overall width 803 to the width 814, the overall height 804 to the height 815, the overall color field 805 to the color field 816, and/or the overall color table 808 to the color table 817. Compared fields that do not match may indicate that the GIF has been manipulated.
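As one illustration of such a field comparison, the following byte-level sketch parses a GIF's logical screen descriptor and first image descriptor and compares their dimensions. It is a simplified stand-in for the check described above: it ignores the color fields and color tables, skips extension blocks, and assumes a well-formed file.

```python
import struct

def gif_screen_vs_first_image(data: bytes):
    """Compare the GIF logical screen size to the first image descriptor's size.

    A mismatch may indicate the kind of manipulation described above.
    """
    if data[:6] not in (b"GIF87a", b"GIF89a"):
        raise ValueError("not a GIF")
    screen_w, screen_h, packed = struct.unpack("<HHB", data[6:11])
    pos = 13  # header (6 bytes) + logical screen descriptor (7 bytes)
    if packed & 0x80:                      # global color table present
        pos += 3 * (2 << (packed & 0x07))  # 3 bytes per entry, 2^(N+1) entries
    while pos < len(data):
        block = data[pos]
        if block == 0x2C:                  # image descriptor introducer
            _, _, img_w, img_h = struct.unpack("<HHHH", data[pos + 1:pos + 9])
            return (screen_w, screen_h) == (img_w, img_h)
        if block == 0x21:                  # extension: label byte then sub-blocks
            pos += 2
            while data[pos] != 0:          # skip length-prefixed sub-blocks
                pos += data[pos] + 1
            pos += 1                       # block terminator
        else:
            break
    raise ValueError("no image descriptor found")
```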
Referring back to the GIF80 algorithm, the image analyzer 104 first determines whether a signature exists for any previously encountered spam image of the same size as the present image.
If no signature exists for a spam image of the same size as the present image, then the GIF80 algorithm ends. If, however, a signature exists for a spam image of the same size as the present image, then the image analyzer 104 crops the image at 703. The image may be cropped as previously discussed under the Fuzzy80 algorithm.
Once the image is cropped, the image analyzer determines the signature for the image data at 704. As previously described, the signature is determined by performing a hash (e.g., the MD5 algorithm) on the effective image data 819 of the GIF. Once the signature is determined for the image, the image analyzer 104 compares the determined signature to signatures of previously encountered spam images to identify features of the present image.
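A sketch of this signature step, under the assumption that “cropping” the effective image data 819 amounts to keeping its leading bytes (the text says only that the image is cropped as under the Fuzzy80 algorithm); the crop fraction is likewise assumed.

```python
import hashlib

def gif80_signature(effective_image_data: bytes, crop_fraction: float = 0.20) -> str:
    """Hash the cropped effective image data 819, as in the steps at 703-704.

    Keeping the leading bytes of the data approximates dropping the bottom
    rows of the image; both the approach and the fraction are assumptions.
    """
    kept_length = max(1, int(len(effective_image_data) * (1 - crop_fraction)))
    return hashlib.md5(effective_image_data[:kept_length]).hexdigest()
```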
First, the image analyzer 104 compares the width of the present image to the widths used to categorize the stored signatures 901. As described above, the signatures may be stored in the form of a hash table indexed by width. The illustrated widths are n1 902, n2 903, n3 904, etc. The image analyzer 104 uses the image width array 901 to determine whether the present image has the same width as any previously encountered spam image. As an alternative to width, other size measurements, such as height or overall number of pixels, may be used to categorize the stored signatures 901.
For the GIF80 algorithm, the determined signature of the present image may be compared with spam signatures for an exact match.
By way of example, and not limitation, if the present image is of size n1 902, the signature of the present image may be compared to all of the signatures 906-908, etc., identified by that size. In another embodiment of the present invention, the image analyzer 104 may stop comparing signatures once an exact match is found.
Once all of the matches for the signature of the present image have been determined, the features 915 corresponding to the matching signatures are fired by the image analyzer 104 and corresponding weights are assigned by the model 103.
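One way the width-categorized signature store and feature lookup might be laid out, with hypothetical widths, signatures, and feature names; the optional early-out corresponds to the embodiment that stops at the first exact match.

```python
# Hypothetical store mirroring the structure described above: signatures
# grouped by image width (n1, n2, ...), each paired with its feature names.
SPAM_SIGNATURES = {
    640: [("d41d8cd98f00b204...", ["known_spam_gif"]),
          ("9e107d9d372bb682...", ["pharma_spam"])],
}

def features_for(width, signature, stop_at_first=False):
    """Collect the features of every stored signature that exactly matches."""
    matches = []
    for stored_sig, features in SPAM_SIGNATURES.get(width, []):
        if stored_sig == signature:   # GIF80 requires an exact match
            matches.extend(features)
            if stop_at_first:
                break                 # optional embodiment: stop at first match
    return matches
```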
Referring to the JPEG80 algorithm, similar steps are performed for JPEG images. In other embodiments of the present invention, the image analyzer 104 categorizes signatures of JPEGs by image width.
Embodiments of the invention may include various steps as set forth above. The steps may be embodied in machine-executable instructions which cause a general-purpose or special-purpose computer processor to perform certain steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.
Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable media suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Throughout the foregoing description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. Moreover, the underlying principles of the invention may be implemented within virtually any type of computing platform including standalone personal computer configurations and server configurations.
Claims
1. A computer-implemented method for determining spam features of a present image, comprising:
- determining whether a format of the present image is in a supported format of an image analyzer;
- reading the present image by an image reader of the image analyzer when the format of the present image is determined to be in the supported format;
- analyzing the present image by the image analyzer to determine whether the present image is corrupted, wherein the present image is determined to be corrupted when the image reader of the image analyzer determines that the format of the present image has been changed;
- selecting between at least a first feature detection algorithm and a second feature detection algorithm based on the determination of whether the present image is corrupted;
- when the present image is determined not to be corrupted, executing the first feature detection algorithm by the image analyzer to create a first signature to determine if the present image is a spam image, wherein the first feature detection algorithm comprises depixelating at least a first portion of the present image and processing the portion of the present image after the depixelation to create the first signature, wherein processing the portion of the present image comprises comparing a vector associated to the portion of the present image with a set of vectors of known spam images, and determining if the vector is within a range of similarity with one of the set of vectors; and
- when the present image is determined to be corrupted, executing the second feature detection algorithm by the image analyzer to create a second signature to determine if the present image is a spam image, wherein the second feature detection algorithm comprises analyzing at least a second portion of the present image to create the second signature, and comparing the second signature to stored signatures of known spam images.
2. The computer-implemented method as in claim 1 wherein, if the first feature detection algorithm is executed and does not detect that the image is a spam image, then executing the second feature detection algorithm to validate the determination of the first feature detection algorithm.
3. The computer-implemented method as in claim 2 wherein the second feature detection algorithm computes a precise signature of the second portion of the present image and wherein the first feature detection algorithm computes a signature of the first portion of the present image which is relatively less precise than the signature calculated by the second feature detection algorithm.
4. The computer-implemented method as in claim 1 wherein the first feature detection algorithm comprises creating the first signature for the present image after the depixelation and searching for a match for the first signature, and wherein the second feature detection algorithm comprises creating the second signature for the present image and searching for a match for the second signature.
5. The computer-implemented method as in claim 1 wherein the first portion of the present image is selected through cropping, wherein the cropping removes a percentage of at least one side of the present image.
6. The computer-implemented method as in claim 1 wherein the depixelation comprises blurring of the first portion of the present image until the present image reaches a certain size.
7. The computer-implemented method as in claim 4 wherein the first signature for the present image is the vector for the first portion of the present image.
8. A non-transitory machine-readable medium having machine-executable instructions stored thereon, which when executed by a computer processing system, causes the computer processing system to perform a method for determining spam features of a present image, the method comprising:
- determining whether a format of the present image is in a supported format of an image analyzer;
- reading the present image by an image reader of the image analyzer when the format of the present image is determined to be in the supported format;
- analyzing the present image by the image analyzer to determine whether the present image is corrupted, wherein the present image is determined to be corrupted when the image reader of the image analyzer determines that the format of the present image has been changed;
- selecting between at least a first feature detection algorithm and a second feature detection algorithm based on the determination of whether the present image is corrupted;
- when the present image is determined not to be corrupted, executing the first feature detection algorithm by the image analyzer to create a first signature to determine if the present image is a spam image, wherein the first feature detection algorithm comprises depixelating at least a first portion of the present image and processing the portion of the present image after the depixelation to create the first signature, wherein processing the portion of the present image comprises comparing a vector associated to the portion of the present image with a set of vectors of known spam images, and determining if the vector is within a range of similarity with one of the set of vectors; and
- when the present image is determined to be corrupted, executing the second feature detection algorithm by the image analyzer to create a second signature to determine if the present image is a spam image, wherein the second feature detection algorithm comprises analyzing at least a second portion of the present image to create the second signature, and comparing the second signature to stored signatures of known spam images.
5465353 | November 7, 1995 | Hull et al. |
5721788 | February 24, 1998 | Powell et al. |
5872865 | February 16, 1999 | Normile et al. |
5893095 | April 6, 1999 | Jain et al. |
5898779 | April 27, 1999 | Squilla et al. |
5899999 | May 4, 1999 | De Bonet |
5913205 | June 15, 1999 | Jain et al. |
5937084 | August 10, 1999 | Crabtree et al. |
5963670 | October 5, 1999 | Lipson et al. |
6175829 | January 16, 2001 | Li et al. |
6307955 | October 23, 2001 | Zank et al. |
6389424 | May 14, 2002 | Kim et al. |
6415282 | July 2, 2002 | Mukherjea et al. |
6779021 | August 17, 2004 | Bates et al. |
6847733 | January 25, 2005 | Savakis et al. |
6931591 | August 16, 2005 | Brown |
7089241 | August 8, 2006 | Alspector et al. |
7437408 | October 14, 2008 | Schwartz et al. |
7475061 | January 6, 2009 | Bargeron et al. |
7624274 | November 24, 2009 | Alspector et al. |
7715059 | May 11, 2010 | Advocate et al. |
7716297 | May 11, 2010 | Wittel et al. |
7817861 | October 19, 2010 | Lee |
8103048 | January 24, 2012 | Sheinin et al. |
8175387 | May 8, 2012 | Hsieh et al. |
8290203 | October 16, 2012 | Myers et al. |
8290311 | October 16, 2012 | Myers et al. |
8356076 | January 15, 2013 | Wittel et al. |
8489689 | July 16, 2013 | Sharma et al. |
8792728 | July 29, 2014 | Tang |
20010004739 | June 21, 2001 | Sekiguchi et al. |
20010039563 | November 8, 2001 | Tian |
20020032672 | March 14, 2002 | Keith, Jr. |
20020076089 | June 20, 2002 | Muramatsu |
20020180764 | December 5, 2002 | Gilbert et al. |
20030053718 | March 20, 2003 | Yamamoto |
20030123737 | July 3, 2003 | Mojsilovic et al. |
20030174893 | September 18, 2003 | Sun et al. |
20030215135 | November 20, 2003 | Caron et al. |
20040148330 | July 29, 2004 | Alspector et al. |
20040153517 | August 5, 2004 | Gang et al. |
20040177110 | September 9, 2004 | Rounthwaite et al. |
20040213437 | October 28, 2004 | Howard et al. |
20040221062 | November 4, 2004 | Starbuck et al. |
20040260776 | December 23, 2004 | Starbuck et al. |
20050030589 | February 10, 2005 | El-Gazzar et al. |
20050050150 | March 3, 2005 | Dinkin |
20050060643 | March 17, 2005 | Glass et al. |
20050076220 | April 7, 2005 | Zhang et al. |
20050097179 | May 5, 2005 | Orme |
20050120042 | June 2, 2005 | Shuster et al. |
20050120201 | June 2, 2005 | Benaloh |
20050216564 | September 29, 2005 | Myers et al. |
20060092292 | May 4, 2006 | Matsuoka et al. |
20060093221 | May 4, 2006 | Kasutani |
20060123083 | June 8, 2006 | Goutte et al. |
20060168006 | July 27, 2006 | Shannon et al. |
20060168041 | July 27, 2006 | Mishra et al. |
20060184574 | August 17, 2006 | Wu et al. |
20060190481 | August 24, 2006 | Alspector et al. |
20070130350 | June 7, 2007 | Alperovitch et al. |
20070130351 | June 7, 2007 | Alperovitch |
20070211964 | September 13, 2007 | Agam et al. |
20080002914 | January 3, 2008 | Vincent et al. |
20080002916 | January 3, 2008 | Vincent et al. |
20080063279 | March 13, 2008 | Vincent et al. |
20080091765 | April 17, 2008 | Gammage et al. |
20080127340 | May 29, 2008 | Lee |
20080130998 | June 5, 2008 | Maidment et al. |
20080159585 | July 3, 2008 | True et al. |
20080159632 | July 3, 2008 | Oliver et al. |
20080175266 | July 24, 2008 | Alperovitch et al. |
20080177691 | July 24, 2008 | Alperovitch et al. |
20080178288 | July 24, 2008 | Alperovitch et al. |
20080219495 | September 11, 2008 | Hulten et al. |
20090043853 | February 12, 2009 | Wei et al. |
20090077476 | March 19, 2009 | Hua et al. |
20090100523 | April 16, 2009 | Harris |
20090110233 | April 30, 2009 | Lu et al. |
20090113003 | April 30, 2009 | Lu et al. |
20090141985 | June 4, 2009 | Sheinin et al. |
20090220166 | September 3, 2009 | Choi et al. |
20090245635 | October 1, 2009 | Yehezkel et al. |
20100158395 | June 24, 2010 | Sathish et al. |
20110052074 | March 3, 2011 | Hayaishi |
20110244919 | October 6, 2011 | Aller et al. |
20120087586 | April 12, 2012 | Sheinin et al. |
20120240228 | September 20, 2012 | Alperovitch et al. |
20130212090 | August 15, 2013 | Sperling et al. |
2005011325 | January 2005 | JP |
- Yan Gao, Ming Yang and Xiaonan Zhao, “Image Spam Hunter”, Jun. 15, 2006, http://www.cs.northwestern.edu/˜yga751/ML/ISH.htm, p. 1-8.
- Author Unknown, “Tumbleweed advances the war on spam with new adaptive image filtering technology”, Nov. 7, 2006, http://www.axway.com/press-releases/2006/tumbleweed-advances-war-spam-new-adaptive-image-filtering-technology, p. 1-2.
- Garrettson, Cara, “IronPort: Image-spam filters produce high catch rate”, Nov. 17, 2006, “http://www.computerworld.com.au/article/166513/ironport_image-spam_filters_produce_high_catch_rates/”, p. 1-3.
- Cormen, T., et al., “Introduction to Algorithms”, 2000, McGraw-Hill Book Company, p. 221-226.
- “The New Wave of Spam”, http://www.symantec.com/business/resources/articles/article.jsp?aid=new_wave_of_spam, (May 7, 2007).
- Aradhye, H.B. , et al., “Image Analysis for Efficient Categorization of Image-Based Spam e-mail”, Proceedings of the 2005 Eighth International Conference on Document Analysis and Recognition (ICDAR '05), (2005).
- Burton, Brian, “Spam Probe—Bayesian Spam Filtering Tweaks”, http://web.archive.org/web/20060207003157/http://spamprobe.sourceforge.net/, (2006), 1-5.
- Johnson, Neil F., “In Search of the Right Image: Recognition and Tracking of Images in Image Databases, Collections, and the Internet”, Center for Secure Information Systems Technical Report CSIS-TR-99-05NFJ., (Apr. 1999), 1-13.
- Schwartz, Randal , “Finding Similar Images, Perl of Wisdom”, Linux Magazine, Aug. 14, 2003, www.linux-mag.com/index2.php?option=com_content&task=view&id=Itemid=221, (Aug. 15, 2003), 6 pages.
- Wang, Zhe, et al., “Filtering Image Spam with Near-Duplicate Detection”, CEAS 2007—The Third Conference on Email and Anti-Spam, 2007, (2007), 1-10.
- Wu, C.T., et al., “Using Visual Features for Anti-Spam Filtering”, IEEE International Conference on Image Processing, vol. 3, (2005), 4 pages.
- Notice of Allowance, U.S. Appl. No. 11/652,715, dated Jun. 12, 2012, 9 pages.
- Final Office Action, U.S. Appl. No. 11/652,715, dated Nov. 28, 2011, 25 pages.
- Non-Final Office Action, U.S. Appl. No. 11/652,715, dated Mar. 10, 2011, 35 pages.
- Final Office Action, U.S. Appl. No. 11/652,715, dated Nov. 23, 2010, 25 pages.
- Non-Final Office Action, U.S. Appl. No. 11/652,715, dated Jun. 11, 2010, 31 pages.
- Notice of Allowance, U.S. Appl. No. 11/652,716, dated Jun. 13, 2012, 9 pages.
- Final Office Action, U.S. Appl. No. 11/652,716, dated Nov. 28, 2011, 9 pages.
- Non-Final Office Action, U.S. Appl. No. 11/652,716, dated Mar. 18, 2011, 34 pages.
- Final Office Action, U.S. Appl. No. 11/652,716, dated Oct. 28, 2010, 41 pages.
- Non-Final Office Action, U.S. Appl. No. 11/652,716, dated May 12, 2010, 30 pages.
Type: Grant
Filed: Oct 11, 2012
Date of Patent: Oct 9, 2018
Patent Publication Number: 20130039582
Assignee: Proofpoint, Inc. (Sunnyvale, CA)
Inventors: John Gardiner Myers (Santa Clara, CA), Yanyan Yang (Sunnyvale, CA)
Primary Examiner: Gandhi Thirugnanam
Application Number: 13/650,107
International Classification: G06K 9/54 (20060101); G06K 9/00 (20060101); G06Q 10/10 (20120101);