Extracted From Alphanumeric Characters Patents (Class 382/198)
  • Patent number: 10210423
    Abstract: Object identification through image matching can utilize ratio and other data to accurately identify objects having relatively few feature points otherwise useful for identifying objects. An initial image analysis attempts to locate a “scalar” in the image, such as may include a label, text, icon, or other identifier that can help to narrow a classification of the search, as well as to provide a frame of reference for relative measurements obtained from the image. By comparing the ratios of dimensions of the scalar with other dimensions of the object, it is possible to discriminate between objects containing that scalar in a way that is relatively robust to changes in viewpoint. A ratio signature can be generated for an object for use in matching, while in other embodiments a classification can identify priority ratios that can be used to more accurately identify objects in that classification.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: February 19, 2019
    Assignee: A9.com, Inc.
    Inventors: Ismet Zeki Yalniz, Colin Jon Taylor, Mehmet Nejat Tek, Shanghsuan Tsai
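    Illustrative example: a minimal Python sketch of the ratio-signature idea in the abstract above, assuming the "scalar" (e.g., a label) and the object have already been localized as bounding boxes. The box format, function names, and nearest-match scoring are assumptions for demonstration, not the patented method.
      from typing import Dict, Tuple

      Box = Tuple[float, float, float, float]  # (x, y, width, height)

      def ratio_signature(scalar_box: Box, object_box: Box) -> Tuple[float, float]:
          # Express the object's dimensions relative to the scalar's dimensions.
          _, _, sw, sh = scalar_box
          _, _, ow, oh = object_box
          return (ow / sw, oh / sh)

      def closest_match(query: Tuple[float, float],
                        catalog: Dict[str, Tuple[float, float]]) -> str:
          # Return the catalog item whose stored ratio signature is nearest to the query.
          return min(catalog, key=lambda name: sum((q - c) ** 2
                                                   for q, c in zip(query, catalog[name])))

      # A 40x20 label detected on a 120x300 object, matched against known signatures.
      sig = ratio_signature((10, 10, 40, 20), (0, 0, 120, 300))
      print(closest_match(sig, {"bottle_a": (3.0, 15.0), "box_b": (1.5, 2.0)}))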
  • Patent number: 10127199
    Abstract: The visual similarity between fonts is determined using visual descriptors of character images in the fonts. A model used to generate the visual descriptors may include a set of letterforms, keypoint locations on each letterform, and detail shapes at zero, one, or more detail areas on the letterform. In some instances, the model may also set forth one or more geometric measurements. Based on the model, a visual descriptor may be generated for a character image from a font by identifying a letterform of the character image, identifying keypoint locations on the character image, and identifying a detail shape at any detail areas on the character image. Additionally, the visual descriptor may include any geometric measurement defined by the model. The visual similarity between two fonts may be determined as a function of the differences between pairs of visual descriptors for the fonts that correspond with the same letterform.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: November 13, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: R. David Arnold, Zhihong Ding, Judy S. Lee, Eric Muller, Timothy Wojtaszek
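    Illustrative example: a minimal Python sketch of comparing fonts through per-letterform visual descriptors, as the abstract above describes. Representing each descriptor as a flat numeric vector and averaging Euclidean differences are assumptions made here for brevity.
      import numpy as np

      def font_distance(font_a: dict, font_b: dict) -> float:
          # Each dict maps a letterform id (e.g. 'a', 'g') to a numeric descriptor
          # vector of keypoint locations, detail-shape codes, and measurements.
          shared = set(font_a) & set(font_b)
          if not shared:
              return float("inf")
          diffs = [np.linalg.norm(np.asarray(font_a[k]) - np.asarray(font_b[k]))
                   for k in shared]
          return float(np.mean(diffs))

      font_1 = {"a": [0.10, 0.40, 0.90], "o": [0.20, 0.20, 0.50]}
      font_2 = {"a": [0.12, 0.45, 0.85], "o": [0.25, 0.20, 0.45]}
      print(font_distance(font_1, font_2))  # smaller value = more visually similar fonts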
  • Patent number: 10127447
    Abstract: Described are methods and systems for determining authenticity. For example, the method may include providing an object of authentication, capturing characteristic data from the object of authentication, deriving authentication data from the characteristic data of the object of authentication, and comparing the authentication data with an electronic database comprising reference authentication data to provide an authenticity score for the object of authentication. The reference authentication data may correspond to one or more reference objects of authentication other than the object of authentication.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: November 13, 2018
    Inventors: Gary L. Duerksen, Seth A. Miller
  • Patent number: 10096382
    Abstract: A medical imaging system (10) comprises one or more displays (66). A viewer device (86) generates an interactive user interface screen (80) on the display (66) that enables a user to simultaneously inspect selected image data of multiple patients or multiple images.
    Type: Grant
    Filed: August 24, 2012
    Date of Patent: October 9, 2018
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Yang-Ming Zhu, Xiangyu Wu, Charles A. Nortmann, Ronald W. Sukalac, Steven M. Cochoff, L. Alan Love, Richard Cheng-Hsiu Chen, Chris A. Dauterman, Madhavi Ahuja, Dawn M. Maniawski
  • Patent number: 10049291
    Abstract: According to the present disclosure, an image-processing apparatus identifies, for each gradation value, a connected component of pixels not less than (or not more than) that gradation value which neighbor and connect to each other in an input image, thereby generating hierarchical structure data of a hierarchical structure including the connected components. Based on the hierarchical structure data, it extracts a connected component satisfying character likelihood as a character-like region, acquires a threshold value of binarization used exclusively for the character-like region, acquires a corrected region where the character-like region is binarized, acquires a background region where the gradation value of each pixel of the input image outside the corrected region is changed to a background gradation value, and acquires binary image data of a binary image composed of the corrected region and the background region.
    Type: Grant
    Filed: November 17, 2016
    Date of Patent: August 14, 2018
    Assignee: PFU LIMITED
    Inventors: Mitsuru Nishikawa, Kiyoto Kosaka
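    Simplified example of the region-wise binarization described above, assuming the hierarchical connected-component analysis has already produced a character-like region mask; Otsu's method stands in here for the patent's threshold acquisition, so this is a sketch of the idea rather than the claimed procedure.
      import numpy as np
      from skimage.filters import threshold_otsu

      def binarize_character_region(gray: np.ndarray, mask: np.ndarray) -> np.ndarray:
          # gray: 2-D grayscale page; mask: boolean array marking the character-like
          # region. A threshold computed only from that region binarizes it, and
          # every other pixel is set to a uniform background gradation value.
          out = np.full(gray.shape, 255, dtype=np.uint8)   # background region
          t = threshold_otsu(gray[mask])                   # region-specific threshold
          out[mask] = np.where(gray[mask] > t, 255, 0)     # corrected (binarized) region
          return out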
  • Patent number: 9836819
    Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. The mobile image capture device is operable to input an image into at least one neural network and to receive at least one descriptor of the desirability of a scene depicted by the image as an output of the at least one neural network. The mobile image capture device is operable to determine, based at least in part on the at least one descriptor of the desirability of the scene of the image, whether to store a second copy of such image and/or one or more contemporaneously captured images in a non-volatile memory of the mobile image capture device or to discard a first copy of such image from a temporary image buffer without storing the second copy of such image in the non-volatile memory.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: December 5, 2017
    Assignee: Google LLC
    Inventors: Aaron Michael Donsbach, Benjamin Vanik, Jon Gabriel Clapper, Alison Lentz, Joshua Denali Lovejoy, Robert Douglas Fritz, III, Krzysztof Duleba, Li Zhang, Juston Payne, Emily Anne Fortuna, Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Hugh Brendan McMahan, Oliver Fritz Lange, Jess Holbrook
  • Patent number: 9721362
    Abstract: Auto-completion of an input partial line pattern. Upon detecting that the user has input the partial line pattern, the scope of the input partial line pattern is matched against corresponding line patterns from a collection of line pattern representations to form a scoped match set of line pattern representations. For one or more of the line pattern representations in the scoped match set, a visualization of completion options is then provided. For example, the corresponding line pattern representation might be displayed in a distinct portion of the display as compared to the input partial line pattern, or perhaps in the same portion, in which case the remaining portion of the line pattern representation might extend off of the input partial line pattern representation.
    Type: Grant
    Filed: April 24, 2013
    Date of Patent: August 1, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Smolinski, Michael John Ebstyne
  • Patent number: 9582739
    Abstract: A system and method that transforms data formats into contour metrics and further transforms each contour of that mapping into contour pattern metric sets, so that each metric created represents one level of contour presentation at each iteration of the learning contour identification system defined herein. This transformation of a data instance to contour metrics permits a user to take relevant data of a data set, as determined by a learning contour identification system, to machines of other types and functions, for the purpose of further analysis of the patterns found and labeled by said system.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: February 28, 2017
    Inventor: Harry Friedbert Padubrin
  • Patent number: 9373048
    Abstract: The present disclosure relates to a method and a system for recognizing characters. In one embodiment, an input image comprising one or more characters to be recognized is received and processed to extract one or more nodes and edges of each character in the input image. Using the extracted nodes and edges, a graphical representation and adjacency matrix of each character is generated and compared with a predetermined graphical representation and adjacency matrix to determine a match. Based on the comparison, a matching probability is determined, from which one or more characters in the input image are recognized and displayed as output. The proposed recognition method and system recognizes characters with greater accuracy and speed. Further, the present disclosure is simple, cost-effective and reduces the complexity involved in automatic recognition of characters.
    Type: Grant
    Filed: March 3, 2015
    Date of Patent: June 21, 2016
    Assignee: Wipro Limited
    Inventors: Raghavendra Hosabettu, Anil Kumar Lenka
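    Illustrative example: a minimal Python sketch of the graph-based comparison in the abstract above, assuming nodes and edges have already been extracted into 0/1 adjacency matrices. Scoring the match as the fraction of agreeing matrix entries is an assumption, not the patent's exact probability measure.
      import numpy as np

      def match_probability(adjacency: np.ndarray, reference: np.ndarray) -> float:
          # Fraction of entries on which the character's adjacency matrix agrees
          # with the reference character's adjacency matrix.
          if adjacency.shape != reference.shape:
              return 0.0
          return float(np.sum(adjacency == reference)) / adjacency.size

      glyph = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])      # nodes/edges of the input
      template = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # predetermined reference
      print(match_probability(glyph, template))                # 7 of 9 entries agree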
  • Patent number: 9224196
    Abstract: Described are methods and systems for determining authenticity. For example, the method may include providing an object of authentication, capturing characteristic data from the object of authentication, deriving authentication data from the characteristic data of the object of authentication, and comparing the authentication data with an electronic database comprising reference authentication data to provide an authenticity score for the object of authentication. The reference authentication data may correspond to one or more reference objects of authentication other than the object of authentication.
    Type: Grant
    Filed: March 12, 2015
    Date of Patent: December 29, 2015
    Assignee: ClearMark Systems, LLC
    Inventors: Gary L. Duerksen, Seth A. Miller
  • Patent number: 9208401
    Abstract: A system and method uses one or more images provided to an image-recognition-capable search engine to obtain search results. The image recognition system may use one or more image match algorithms to create one or more possible product match sets. In the event multiple product match sets are created, the search results may be limited to products that appear in one or more of the plural possible product match sets.
    Type: Grant
    Filed: May 22, 2013
    Date of Patent: December 8, 2015
    Assignee: W.W. Grainger, Inc.
    Inventor: Geoffry A. Westphal
  • Patent number: 9014481
    Abstract: A method for Arabic and Farsi font recognition for determining the font of text using a nearest neighbor classifier, where the classifier uses a combination of features including: box counting dimension, center of gravity, the number of vertical and horizontal extrema, the number of black and white components, the smallest black component, the Log baseline position, concave curvature features, convex curvature features, direction and direction length features, Log-Gabor features, and segmented Log-Gabor features. The method is tested using various combinations of features on various text fonts, sizes, and styles. It is observed that the segmented Log-Gabor features produce a 99.85% font recognition rate, and the combination of all non-Log-Gabor features produces a 97.96% font recognition rate.
    Type: Grant
    Filed: April 22, 2014
    Date of Patent: April 21, 2015
    Assignees: King Fahd University of Petroleum and Minerals, King Abdulaziz City for Science and Technology
    Inventors: Hamzah Abdullah Luqman, Sabri Abdullah Mohammed
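    Illustrative example: a minimal Python nearest-neighbour classifier in the spirit of the abstract above. The actual feature extraction (box-counting dimension, Log-Gabor responses, curvature counts, etc.) is out of scope here; the vectors and font names below are placeholders.
      import numpy as np

      def classify_font(features: np.ndarray, training: dict) -> str:
          # Nearest-neighbour decision: the font whose stored feature vector is
          # closest (Euclidean distance) to the extracted features wins.
          return min(training, key=lambda font: np.linalg.norm(features - training[font]))

      training_set = {"FontA": np.array([0.62, 4.0, 11.0]),
                      "FontB": np.array([0.75, 2.0, 18.0])}
      print(classify_font(np.array([0.64, 3.8, 12.0]), training_set))  # -> "FontA"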
  • Patent number: 9014480
    Abstract: A difference in intensities of a pair of pixels in an image is repeatedly compared to a threshold, with the pair of pixels being separated by at least one pixel (“skipped pixel”). When the threshold is found to be exceeded, a selected position of a selected pixel in the pair, and at least one additional position adjacent to the selected position are added to a set of positions. The comparing and adding are performed multiple times to generate multiple such sets, each set identifying a region in the image, e.g. an MSER. Sets of positions, identifying regions whose attributes satisfy a test, are merged to obtain a merged set. Intensities of pixels identified in the merged set are used to generate binary values for the region, followed by classification of the region as text/non-text. Regions classified as text are supplied to an optical character recognition (OCR) system.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: April 21, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Pawan Kumar Baheti, Kishor K. Barman, Raghuraman Krishnamoorthi, Bojan Vrcelj
  • Patent number: 8977058
    Abstract: According to one embodiment, an image processing apparatus includes the following units. The correlation calculation unit calculates correlations between a first region and predetermined first basis vectors. The distance calculation unit calculates distances between the first region and second regions on a subspace generated by the second basis vectors selected from the first basis vectors. The feature quantity calculation unit calculates a feature quantity based on the correlations. The weight calculation unit calculates weights based on the distances and the feature quantity. The pixel value calculation unit calculates a weighted average of pixel values according to the weights to generate an output pixel value.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: March 10, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Satoshi Kawata, Takuma Yamamoto, Yasunori Taguchi, Nobuyuki Matsumoto
  • Patent number: 8942484
    Abstract: A method includes receiving an indication of a set of image regions identified in image data. The method further includes, selecting image regions from the set of image regions for text extraction at least partially based on image region stability.
    Type: Grant
    Filed: March 6, 2012
    Date of Patent: January 27, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Hyung-Il Koo, Kisun You
  • Patent number: 8917939
    Abstract: A client captures at least one identification indicator of an individual arriving at a threshold representing themself as a vendor of an organization. The client extracts vendor indicia and organization indicia from the captured identification. The client sends a query comprising the organization indicia to an identification service. Responsive to the client receiving a response from the identification service with a network address of a particular identity verification service associated with the at least one organization indicia, the client sends a query comprising the vendor indicia and the current location of the threshold to the particular identity verification service. Responsive to the verification client receiving a response from the particular identity verification service indicating that the user is validated, the client notifies the user that the individual is validated as the vendor of the organization.
    Type: Grant
    Filed: February 21, 2013
    Date of Patent: December 23, 2014
    Assignee: International Business Machines Corporation
    Inventor: Michael P. Clarke
  • Patent number: 8903175
    Abstract: A system and method for script and orientation detection of images are disclosed. In one example, textual content in the image is extracted. Further, a vertical component run (VCR) and a horizontal component run (HCR) are obtained by vectorizing each connected component in the extracted textual content. Furthermore, a concatenated vertical document vector (VDV) and a horizontal document vector (HDV) are computed. In addition, a substantially matching script and orientation is obtained by comparing the computed concatenated VDV and HDV of the image with reference VDV and HDV associated with each script and orientation, respectively. Also, the substantially matching script and orientation are declared as the script and orientation of the image if the computed concatenated VDV and HDV of the image substantially match the reference VDV and HDV of the matching script and orientation, respectively.
    Type: Grant
    Filed: August 29, 2011
    Date of Patent: December 2, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Chirag Jain, Srinidhi Kadagattur, Yifeng Wu
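    Illustrative example: a minimal Python sketch of the comparison step above, assuming the concatenated document vector has already been computed for the input image. The distance metric and acceptance threshold are assumptions, not the patent's matching criterion.
      import numpy as np

      def detect_script_orientation(doc_vec: np.ndarray, references: dict,
                                    max_distance: float = 0.5):
          # references maps (script, orientation) pairs to reference document vectors.
          best = min(references, key=lambda key: np.linalg.norm(doc_vec - references[key]))
          if np.linalg.norm(doc_vec - references[best]) <= max_distance:
              return best        # substantially matching script and orientation
          return None            # nothing close enough to declare a match

      refs = {("Latin", 0): np.array([0.9, 0.1, 0.3]),
              ("Latin", 180): np.array([0.1, 0.9, 0.3])}
      print(detect_script_orientation(np.array([0.85, 0.15, 0.3]), refs))  # ("Latin", 0)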
  • Patent number: 8879827
    Abstract: Systems and methods may include utilizing a structured light pattern that may be, among other things, decoded in three directions (e.g., vertical, horizontal, and diagonal). In one example, the method may include detecting a first feature of a target image in a return image and designating a feature type of the first feature with a letter and an index, wherein the index is associated with the pattern slide. The method may also include calculating a horizontal position in the pattern slide of the first feature, calculating a vertical position in the pattern slide of the first feature, and calculating a depth of the first feature.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: November 4, 2014
    Assignee: Intel Corporation
    Inventors: Ziv Aviv, David Stanhill, Ron Ferens, Roi Ziss
  • Patent number: 8866820
    Abstract: A difference of coordinate values stored adjacent to each other is compressed by means of a statistical coding system when reading out outline font data that stores the coordinate values necessary for drawing a contour of a character in the order of drawing the contour in a clockwise or counterclockwise direction, together with a category of the line connecting a pair of coordinates, followed by compressing the coordinate values of the outline font data. If the difference of coordinate values is equal to or greater than a certain value A, the value obtained by subtracting "A-1" from the difference is determined to be the difference value of the coordinates, and a code expressing the difference value of "0" is added in front of the codes of difference values smaller than the value A in the case where the category of the line connecting adjacent coordinates is a straight line.
    Type: Grant
    Filed: February 28, 2007
    Date of Patent: October 21, 2014
    Assignee: Fujitsu Limited
    Inventors: Kohei Terazono, Yoshiyuki Okada, Masashi Takechi
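    Illustrative example: a minimal Python sketch of the delta step that underlies the abstract above, where consecutive outline coordinates are replaced by their differences before statistical coding. The straight-line "A-1" adjustment and the entropy coder itself are omitted here.
      def delta_encode(coords):
          # coords: integer coordinate values in the order the contour is drawn.
          deltas = [coords[0]]
          for prev, cur in zip(coords, coords[1:]):
              deltas.append(cur - prev)
          return deltas

      def delta_decode(deltas):
          coords = [deltas[0]]
          for d in deltas[1:]:
              coords.append(coords[-1] + d)
          return coords

      outline = [120, 124, 130, 131, 131, 128]
      assert delta_decode(delta_encode(outline)) == outline  # round-trips exactly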
  • Patent number: 8867828
    Abstract: A method for detecting a text region in an image is disclosed. The method includes detecting a candidate text region from an input image. A set of oriented gradient images is generated from the candidate text region, and one or more detection window images of the candidate text region are captured. A sum of oriented gradients is then calculated for a region in one of the oriented gradient images. Each detection window image is classified as containing text or not by comparing the associated sum of oriented gradients with a threshold. Based on the classifications of the detection window images, it is determined whether the candidate text region is a true text region.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: October 21, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Chunghoon Kim, Hyung-Il Koo, Kyu Woong Hwang
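    Illustrative example: a minimal Python sketch of classifying detection windows by gradient energy, as described above. Summing raw gradient magnitudes (rather than orientation-binned sums) and the majority-vote aggregation are simplifying assumptions.
      import numpy as np

      def window_contains_text(window: np.ndarray, threshold: float) -> bool:
          # Sum the gradient magnitudes over the detection window and compare to a
          # threshold; high gradient energy is treated as evidence of text.
          gy, gx = np.gradient(window.astype(float))
          return float(np.sum(np.hypot(gx, gy))) > threshold

      def region_is_text(windows, threshold: float, min_fraction: float = 0.5) -> bool:
          # A candidate region is accepted when enough of its windows classify as text.
          votes = [window_contains_text(w, threshold) for w in windows]
          return sum(votes) / len(votes) >= min_fraction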
  • Publication number: 20140270539
    Abstract: Systems and methods read machine readable symbols by capturing multiple images of the symbol, and can locate symbol data region(s) from an image even when the symbol data is corrupted and not decodable. Binary matrices are generated from the symbol data regions obtained from the multiple images and can be accumulated to generate a decodable image. A correspondence can be established among multiple images acquired of the same symbol when the symbol has moved from one image to the next.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Xianju Wang, Xiangyun Ye
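    Illustrative example: a minimal Python sketch of the accumulation idea above, assuming the binary matrices from the individual captures have already been aligned to one another. The per-module majority vote is an assumption standing in for the publication's accumulation scheme.
      import numpy as np

      def accumulate_symbol(matrices):
          # matrices: equally sized 0/1 arrays of the same symbol data region taken
          # from several captures; a per-module majority vote yields a cleaner matrix.
          stack = np.stack(matrices).astype(float)
          return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

      captures = [np.array([[1, 0], [1, 1]]),
                  np.array([[1, 0], [0, 1]]),   # one module corrupted in this capture
                  np.array([[1, 1], [1, 1]])]   # another corrupted here
      print(accumulate_symbol(captures))        # -> [[1 0] [1 1]]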
  • Patent number: 8792724
    Abstract: A system and method for script and orientation detection of images are disclosed. In one example, textual content in the image is extracted. Further, a vertical component run (VCR) and a horizontal component run (HCR) are obtained by vectorizing each connected component in the extracted textual content. Furthermore, a concatenated vertical document vector (VDV) and a horizontal document vector (HDV) are computed. In addition, a substantially matching script and orientation is obtained by comparing the computed concatenated VDV and HDV of the image with reference VDV and HDV associated with each script and orientation, respectively. Also, the substantially matching script and orientation are declared as the script and orientation of the image if the computed concatenated VDV and HDV of the image substantially match the reference VDV and HDV of the matching script and orientation, respectively.
    Type: Grant
    Filed: August 29, 2011
    Date of Patent: July 29, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Chirag Jain, Srinidhi Kadagattur, Yifeng Wu
  • Patent number: 8705862
    Abstract: The object of this invention is to provide an image processing apparatus in which, in processing of a document image read by a document reading device, an inclination of a character string in the document image which is recognized in character recognition is obtained more accurately. The image processing apparatus includes a similar character extraction portion which extracts and outputs a character group comprised of characters having a shape and a size that are the same as or similar to each other from among characters constituting a character string comprised of characters recognized in optical character recognition from a document image read by a document reading device; and an inclination calculation portion which calculates an inclination value of the character string based on position information of each character of the character group output from the similar character extraction portion.
    Type: Grant
    Filed: March 12, 2012
    Date of Patent: April 22, 2014
    Assignee: Sharp Kabushiki Kaisha
    Inventor: Takeshi Kutsumi
  • Patent number: 8670623
    Abstract: There are provided a labeling portion that extracts a character included in raster format image data, a complexity calculation portion that obtains a degree of complexity indicating complexity of the character, and an approximation method determination portion that determines whether or not to use curve approximation to convert the character based on the degree of complexity thus obtained. In the case where it has been determined to use the curve approximation, the character is converted into a vector format by performing straight-line approximation or curve approximation on each part of a contour of the character, whereas in the case where it has been determined not to use the curve approximation, the character is converted into a vector format by performing the straight-line approximation on each part of the contour of the character without performing the curve approximation.
    Type: Grant
    Filed: March 18, 2009
    Date of Patent: March 11, 2014
    Assignee: Konica Minolta Business Technologies, Inc.
    Inventor: Yuko Oota
  • Publication number: 20130188875
    Abstract: A vector graphics classification engine and associated method for classifying vector graphics in a fixed format document is described herein and illustrated in the accompanying figures. The vector graphics classification engine defines a pipeline for categorizing vector graphics parsed from the fixed format document as font, text, paragraph, table, and page effects, such as shading, borders, underlines, and strikethroughs. Vector graphics that are not otherwise classified are designated as basic graphics. By sequencing the detection operations in a selected order, misclassification is minimized or eliminated.
    Type: Application
    Filed: January 23, 2012
    Publication date: July 25, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Milan Sesum, Milos Raskovic, Drazen Zaric, Milos Lazarevic, Aljosa Obuljen
  • Patent number: 8494278
    Abstract: A method and computer program product for recognizing handwriting. A handwritten character is captured as an image of black pixels and white pixels. The image is partitioned into segments, each of which has a pixel ratio of the total number of black pixels in the segment to the total number of black pixels in the image. A reference character has segments corresponding to the image segments. Each reference character segment has a value range for the pixel ratio of the total number of black pixels in the segment of the reference character to the total number of black pixels in the reference character. It is ascertained that the pixel ratios of more than a predetermined number of segments in the image are within the value ranges of the pixel ratios of the corresponding segments of the reference character, from which the handwritten character is recognized as the reference character.
    Type: Grant
    Filed: January 4, 2013
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Choudhary Khushboo, Shiva C T Kumar, Mukundan Sundararajan
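    Illustrative example: a minimal Python sketch of the pixel-ratio scheme described above (and in the related entries below), assuming the character image has already been binarized. The 4x4 grid and the acceptance count are assumptions; the patent's segmentation and value ranges may differ.
      import numpy as np

      def pixel_ratios(binary: np.ndarray, grid=(4, 4)):
          # binary: 2-D array with 1 = black pixel. Returns one ratio per segment:
          # black pixels in the segment divided by black pixels in the whole image.
          total = binary.sum() or 1
          rows = np.array_split(binary, grid[0], axis=0)
          return [seg.sum() / total
                  for row in rows
                  for seg in np.array_split(row, grid[1], axis=1)]

      def matches_reference(ratios, ranges, min_matches=12):
          # ranges: one (low, high) interval per segment for a reference character.
          hits = sum(low <= r <= high for r, (low, high) in zip(ratios, ranges))
          return hits >= min_matches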
  • Patent number: 8494277
    Abstract: A system for recognizing handwriting. A handwritten character is captured as an image of black pixels and white pixels. The image is partitioned into segments, each of which has a pixel ratio of the total number of black pixels in the segment to the total number of black pixels in the image. A reference character has segments corresponding to the image segments. Each reference character segment has a value range for the pixel ratio of the total number of black pixels in the segment of the reference character to the total number of black pixels in the reference character. It is ascertained that the pixel ratios of more than a predetermined number of segments in the image are within the value ranges of the pixel ratios of the corresponding segments of the reference character, from which the handwritten character is recognized as the reference character.
    Type: Grant
    Filed: January 4, 2013
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Choudhary Khushboo, Shiva C T Kumar, Mukundan Sundararajan
  • Publication number: 20130163881
    Abstract: A system and method for identifying characters-of-interest from markings on a surface of an object. The system includes a vector-generating module configured to receive and analyze an image of the markings to provide a feature vector having a vector address. The system also includes a sparse distributed memory (SDM) module. The SDM module includes hard locations having stored vector location addresses within an address space and stored content counters. The location addresses form multiple concentrated groups within the address space. The concentrated groups are associated with different characters of an identification system. The system also includes an identification module that is configured to identify the character(s)-of-interest using the SDM module.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 27, 2013
    Applicant: General Electric Company
    Inventors: Joseph Salvo, John Carbone, Lynn Ann DeRose, Adam McCann, William Leonard
  • Patent number: 8411313
    Abstract: In an image forming apparatus, a reader reads an image of one page of an original thereby obtaining image data. A determining unit determines whether embedded data has been embedded in the image data. An extracting unit extracts the embedded data, acquires a target page number of a reading unnecessary page based on the embedded data, and saves the page number in a storage unit. A page number determining unit determines whether a page number of a next page of the original matches with the target page number. Upon the page number of the next page matching with the target page number, the reader does not read image data of the next page.
    Type: Grant
    Filed: September 12, 2008
    Date of Patent: April 2, 2013
    Assignee: Ricoh Company, Limited
    Inventor: Yuka Kihara
  • Patent number: 8363947
    Abstract: A method, system and computer program product for recognizing cursive and non-cursive handwriting. The invention comprises capturing a handwritten character as an image of pixels, partitioning the image into a plurality of segments each having a pixel ratio of the number of pixels in the segment divided by the total number of pixels in the image, and comparing the pixel ratio for each segment to a value range associated with a corresponding segment of a reference character. The handwritten character is recognized as the reference character if more than a predetermined number of the segments in the image have pixel ratios within the respective value ranges of the reference character.
    Type: Grant
    Filed: July 31, 2010
    Date of Patent: January 29, 2013
    Assignee: International Business Machines Corporation
    Inventors: Choudhary Khushboo, Shiva C T Kumar, Mukundan Sundararajan
  • Patent number: 8355578
    Abstract: Even when captions of a plurality of objects use an identical anchor expression, the present invention can associate an appropriate explanatory text in a body text, as metadata, with the objects.
    Type: Grant
    Filed: March 3, 2011
    Date of Patent: January 15, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hidetomo Sohma, Tomotoshi Kanatsu, Ryo Kosaka, Reiji Misawa
  • Patent number: 8325386
    Abstract: The transfer of a duplicate electronic document between image forming devices is done with an electronic document that is formed of only resolution-independent vector data and the like, and normal printing of the duplicate electronic document, or printing at a different resolution, is performed by developing the vector data. In high-quality printing of the duplicate electronic document, the Fill Map included in the original document, which is specified by information embedded in metadata indicating the location of the copy-source electronic document, is obtained, and the printing is made using the Fill Map.
    Type: Grant
    Filed: May 11, 2009
    Date of Patent: December 4, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hisashi Koike
  • Patent number: 8320676
    Abstract: A wide range of digital devices either have or are provided with imaging devices which are capable of imaging externally provided information in the form of special codes that contain setup and/or configuration information. Processors within these devices, which include cell phones, cameras, PDAs and personal computers, to name just a few, recognize the image and convert it to the desired configuration and/or setup information.
    Type: Grant
    Filed: January 31, 2008
    Date of Patent: November 27, 2012
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey E. Bisti, Eli M. Dow
  • Patent number: 8301363
    Abstract: The invention relates to a method and a system for identifying moving objects by employing a tag, said tag comprising at least alphanumeric characters and said tag being extracted from pictures taken by cameras located at at least two different points within a certain distance, comprising: extracting alphanumeric characters of said tag from the pictures taken by at least two cameras; converting said alphanumeric characters into other new characters of another representation space; creating a string of said new characters for each of the tags extracted from the pictures taken by the cameras at different locations, said cameras being synchronized and said pictures taken by the cameras within a predetermined time interval; comparing the strings by associating a correlation score; inputting a threshold score; and identifying the moving object if the correlation score is over the predetermined threshold score.
    Type: Grant
    Filed: October 16, 2007
    Date of Patent: October 30, 2012
    Assignee: Eng Celeritas S.R.L.
    Inventor: Nicola Grassi
  • Patent number: 8289581
    Abstract: An image processing apparatus which is capable of managing a large number of transfer jobs without using a large-capacity storage device. A transfer job for transferring image information input to the image processing apparatus to at least one destination is performed, and character information is extracted from the input image information. History information indicative of the execution result of the transfer job is generated, and recorded in association with the extracted character information in a storage device. At least one of the at least one piece of history information and the at least one piece of character information recorded in association with the history information is perused.
    Type: Grant
    Filed: December 23, 2005
    Date of Patent: October 16, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kenji Hara
  • Patent number: 8290274
    Abstract: A method is provided that includes capturing an input handwritten character with a parameter representation for each stroke and applying a polygonal approximation thereto; assuming each approximated polygonal line segment to be a vector that reaches an end point from a start point, and obtaining an angle between a reference axis and each line segment as a polygonal line segment angle sequence; obtaining an exterior angle sequence of vertices of the line segments; making a sum of exterior angles of the same sign, where the same sign of plus or minus in the exterior angle sequence continues, to be a winding angle sequence; extracting a global feature according to each obtained sequence and a localized or quasi-localized feature in each curved portion divided corresponding to the winding angle sequence, hierarchically and divisionally; and performing character recognition by comparing the extracted result with a template of an object character.
    Type: Grant
    Filed: February 15, 2006
    Date of Patent: October 16, 2012
    Assignee: Kite Image Technologies Inc.
    Inventors: Shunji Mori, Tomohisa Matsushita
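    Illustrative example: a minimal Python sketch of the angle sequences named above, assuming the polygonal approximation of a stroke is already available as a list of vertices. The normalization and grouping details are assumptions; the template comparison step is omitted.
      import math

      def segment_angles(points):
          # points: (x, y) vertices of the polygonally approximated stroke.
          return [math.atan2(y2 - y1, x2 - x1)
                  for (x1, y1), (x2, y2) in zip(points, points[1:])]

      def exterior_angles(angles):
          # Signed turn at each vertex, normalized into [-pi, pi).
          return [(b - a + math.pi) % (2 * math.pi) - math.pi
                  for a, b in zip(angles, angles[1:])]

      def winding_angles(exterior):
          # Sum consecutive exterior angles that share the same sign.
          sums = []
          for e in exterior:
              if sums and (e >= 0) == (sums[-1] >= 0):
                  sums[-1] += e
              else:
                  sums.append(e)
          return sums

      stroke = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 3)]
      print(winding_angles(exterior_angles(segment_angles(stroke))))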
  • Patent number: 8260006
    Abstract: A system and method is provided for aligning images of one resolution based on both features contained in the images and the alignment of related images of a different resolution.
    Type: Grant
    Filed: March 14, 2008
    Date of Patent: September 4, 2012
    Assignee: Google Inc.
    Inventors: Francesco Callari, Michael Weiss-Malik
  • Patent number: 8249357
    Abstract: A system and a method for automatic restoration of isotropic degradations of a digital image, based on receiving a blurred image by an image capture assembly, automatically finding a proper step edge, calculating the PSF from the step edge, and restoring the blurred image by means of a processor, with the option to display the restored image by means of an output assembly.
    Type: Grant
    Filed: October 22, 2007
    Date of Patent: August 21, 2012
    Assignee: Ben Gurion University of the Negev, Research and Development Authority
    Inventors: Yitzhak Yitzhaky, Omri Shacham, Oren Haik
  • Patent number: 8194983
    Abstract: The present invention provides method and system for preprocessing an image including one or more of Arabic text and non-text items for Optical Character Recognition (OCR). The method includes determining a plurality of components associated with one or more of the Arabic text and the non-text items, wherein a component includes a set of connected pixels. A first set of characteristic parameters is then calculated for the plurality of components. The plurality of components are subsequently merged based on the first set of characteristic parameters to form one or more of one or more sub-words and one or more words.
    Type: Grant
    Filed: May 13, 2010
    Date of Patent: June 5, 2012
    Inventors: Hussein Khalid Al-Omari, Mohammad Sulaiman Khorsheed
  • Patent number: 8189961
    Abstract: An image deskew system and techniques are used in the context of optical character recognition. An image is obtained of an original set of characters in an original linear (horizontal) orientation. An acquired set of characters, which is skewed relative to the original linear orientation by a rotation angle, is represented by pixels of the image. The rotation angle is estimated, and a confidence value may be associated with the estimation, to determine whether to deskew the image. In connection with rotation angle estimation, an edge detection filter is applied to the acquired set of characters to produce an edge map, which is input to a linear Hough transform filter to produce a set of output lines in parametric form. The output lines are assigned scores, and based on the scores, at least one output line is determined to be a dominant line with a slope approximating the rotation angle.
    Type: Grant
    Filed: June 9, 2010
    Date of Patent: May 29, 2012
    Assignee: Microsoft Corporation
    Inventors: Djordje Nijemcevic, Sasa Galic
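    Illustrative example: a minimal Python/OpenCV sketch of the edge-map-plus-Hough-transform pipeline described above. The Canny and Hough parameters, the angle sign convention, and the use of the strongest line as the dominant line are assumptions; the patent's scoring and confidence logic are not reproduced.
      import cv2
      import numpy as np

      def estimate_rotation_angle(gray: np.ndarray) -> float:
          edges = cv2.Canny(gray, 50, 150)                    # edge map
          lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)  # (rho, theta) lines
          if lines is None:
              return 0.0
          theta = lines[0][0][1]                  # strongest line as the dominant line
          return float(np.degrees(theta) - 90.0)  # skew relative to horizontal text

      def deskew(gray: np.ndarray) -> np.ndarray:
          angle = estimate_rotation_angle(gray)
          h, w = gray.shape
          rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
          return cv2.warpAffine(gray, rot, (w, h), borderValue=255)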
  • Patent number: 8103132
    Abstract: A method for correcting the results of OCR or other scanned symbols. The method initially scans and performs OCR classification on a document, clusters the character/symbol classifications resulting from the OCR based on shape, and creates super-symbols based on at least a first difference in the shapes of the clustered characters/symbols exceeding a first threshold. A carpet of super-symbols, emphasizing localized differences in similar symbols, is displayed for analysis testing.
    Type: Grant
    Filed: March 31, 2008
    Date of Patent: January 24, 2012
    Assignee: International Business Machines Corporation
    Inventors: Asaf Tzadok, Eugeniusz Walach
  • Patent number: 8094939
    Abstract: Described is searching directly based on digital ink input to provide a result set of one or more items. Digital ink input (e.g., a handwritten character, sketched shape, gesture, drawing picture) is provided to a search engine and interpreted thereby, with a search result (or results) returned. Different kinds of digital ink can be used as search input without changing modes. The search engine includes a unified digital ink recognizer that recognizes digital ink as a character or another type of digital ink. When the recognition result is a character, the character may be used in a keyword search to find one or more corresponding non-character items, e.g., from a data store. When the recognition result is a non-character item, the non-character item is provided as the result, without keyword searching. The search result may appear as one or more item representations, such as in a user interface result panel.
    Type: Grant
    Filed: June 26, 2007
    Date of Patent: January 10, 2012
    Assignee: Microsoft Corporation
    Inventors: Dongmei Zhang, Xiaohui Hou, Yingjun Qiu, Jian Wang
  • Patent number: 8023741
    Abstract: Aspects of the present invention are related to systems and methods for determining the location of numerals in an electronic document image.
    Type: Grant
    Filed: May 23, 2008
    Date of Patent: September 20, 2011
    Assignee: Sharp Laboratories of America, Inc.
    Inventors: Ahmet Mufit Ferman, Richard John Campbell
  • Patent number: 7929772
    Abstract: A method for generating typographical lines is provided. In the present method, an asymptote of an upper or a lower edge of a line of printed words is obtained first. Then, two typographical lines of the other edge of the line of printed words are obtained according to the asymptote. Two typographical lines of the present edge of the line of printed words are obtained based on the previously obtained typographical lines. Finally, the relations of these typographical lines and the edge reference points of the line of printed words are used for removing useless typographical lines. Therefore, the typographical lines obtained by the present invention can provide a means of recognizing word direction, large or small character writing, and punctuation marks, so as to increase the efficiency and accuracy of character recognition.
    Type: Grant
    Filed: June 27, 2007
    Date of Patent: April 19, 2011
    Assignee: Compal Electronics Inc.
    Inventors: Wen-Hann Tsai, Hsin-Te Lue
  • Patent number: 7864194
    Abstract: Methods and systems for motion adaptive filtering detect movement of text or areas of high spatial frequency in one frame to another frame of an image. When such movement is detected and meets a certain level or threshold, the subpixel rendering processing of such text or areas of high spatial frequency may be changed.
    Type: Grant
    Filed: January 19, 2007
    Date of Patent: January 4, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Thomas Lloyd Credelle, Stuart Philip Kaler
  • Patent number: 7844115
    Abstract: An information processing apparatus includes feature extraction means for extracting features of a designated image in plural images which are associated with each other, image determination means for determining whether the designated image is an image of the face of a certificate, a receipt, a ticket or a note on which a character string is written based on the extracted features, and metadata addition means for adding the first metadata which is character string data of the character string to another image in the plural images, when it is determined that the designated image is an image of the face of a certificate, a receipt, a ticket or a note on which a character string is written.
    Type: Grant
    Filed: March 27, 2007
    Date of Patent: November 30, 2010
    Assignee: Sony Corporation
    Inventors: Tsunayuki Ohwa, Misa Tamura, Satoshi Akagawa
  • Patent number: 7840072
    Abstract: Pattern matching can be achieved by considering only the position numbers of a source pattern and a target pattern within ordered sequences of possible source patterns and target patterns respectively. The position numbers of source patterns containing the target pattern form a number of groups. The number of source patterns within each group and the number of source patterns in the gaps between groups depend on the position of the target pattern within the source pattern, the length of the target pattern and the number of elements in the alphabet set. Each group also has a position number, its position within an ordered sequence of groups. The group position number of an input source pattern is compared to a series derived from the position number of the target pattern, the length of the target pattern and the number of elements in the alphabet set (9). If the group position number is a member of the series (10), then the source pattern contains the target pattern (11).
    Type: Grant
    Filed: March 13, 2003
    Date of Patent: November 23, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Sriram K. N. V. Kumar, Rama Shankar Mantha, Chandrasekhar Sarasvat Revur
  • Patent number: 7792369
    Abstract: A form processing apparatus extracts layout information and character information from a form document. A candidate extracting unit extracts word candidates from the character information. A frequency digitizing unit calculates emission probability of a word candidate from each element. A relation digitizing unit calculates transition probability that relationship between word candidates is established. An evaluating unit calculates an evaluation value indicative of a probability of appearance of word candidates in respective logical elements. A determining unit determines the element and a word candidate thereof as the element and a character string thereof in the form document, based on the evaluation value.
    Type: Grant
    Filed: November 15, 2006
    Date of Patent: September 7, 2010
    Assignee: Fujitsu Limited
    Inventors: Akihiro Minagawa, Hiroaki Takebe, Katsuhito Fujimoto
  • Patent number: 7724957
    Abstract: Systems and methods that exploit unique properties of a language script (e.g., condition joining rules for Arabic language) to enable a two tier text recognition. In such two tier system, one tier can recognize predetermined groups of linked letters that are connected based on joining rules of a language associated with the text, and another tier dissects (and recognizes) such linked letters to respective constituent letters that form the predetermined group of linked letters. Various classifiers and artificial intelligence components can further facilitate text recognition at each level.
    Type: Grant
    Filed: July 31, 2006
    Date of Patent: May 25, 2010
    Assignee: Microsoft Corporation
    Inventor: Ahmad A. Abdulkader
  • Patent number: RE45406
    Abstract: Unique encoding of each of a substantial number of distribution video copies of a program such as a motion picture is produced by altering the images slightly at several pre-selected locations in the program in a uniquely coded pattern. Suspected counterfeits can be compared with an unaltered master video to determine the encoded number for the copy which was counterfeited, to enable tracking the source of the counterfeit. Preferably, each frame of several whole scenes is altered at each location by shifting an image so as to make the alterations largely undetectable by counterfeiters but easily detected by comparison with an unaltered master video. Artifacts are inserted in patterns representing a unique number for the program. These supplement the encoding by alteration of images and give added means to aid in tracing counterfeit copies.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: March 3, 2015
    Assignee: Deluxe Laboratories, Inc.
    Inventor: Jeffrey H. Dewolde