Template Matching (e.g., Specific Devices That Determine The Best Match) Patents (Class 382/209)
  • Patent number: 8929606
    Abstract: A machine may be configured as a vehicle identification machine to identify a model of a vehicle based on an image that depicts a dashboard of the vehicle. As configured, the machine may receive an image of the dashboard, where the image depicts a layout of instrumentation within the dashboard. The machine may identify the layout of instrumentation by processing the image. For example, the machine may process the image by determining a position of an instrument within the layout of instrumentation, determining an outline of the instrument, or both. The machine may access a data record that correlates a model of the vehicle with the identified layout of instrumentation and, based on the data record, identify the model of the vehicle. The machine may then provide a notification that references the vehicle, references the identified model of the vehicle, or references both.
    Type: Grant
    Filed: June 17, 2014
    Date of Patent: January 6, 2015
    Assignee: eBay Inc.
    Inventor: Jonathan Leonard Conradt
  • Publication number: 20150003745
    Abstract: A method and an apparatus for recognizing characters using an image are provided. A camera is activated according to a character recognition request and a preview mode is set for displaying an image photographed through the camera in real time. An auto focus of the camera is controlled and an image having a predetermined level of clarity is obtained for character recognition from the images obtained in the preview mode. The image for character recognition is character-recognition-processed so as to extract recognition result data. A final recognition character row is drawn that excludes non-character data from the recognition result data. A first word is combined including at least one character of the final recognition character row and a predetermined maximum number of characters. A dictionary database that stores dictionary information on various languages using the first word is searched, so as to provide the user with the corresponding word.
    Type: Application
    Filed: September 12, 2014
    Publication date: January 1, 2015
    Inventors: Hyun-Soo Kim, Seong-Taek Hwang, Sang-Wook Oh, Sang-Ho Kim, Yun-Je Oh, Hee-Won Jung, Sung-Cheol Kim
  • Patent number: 8925057
    Abstract: Completely automated tests that exploit capabilities of human vision to tell humans apart from automated entities are disclosed herein. Persistence of vision and simultaneous contrasts are some of the properties of human vision that can be used in these tests. A video of an image is generated in colors that are distinguishable to the human eye but are not easily distinguished numerically. The image includes text manipulated such that positive image data and negative whitespace data occur at equal rates along with a noise component included in each of the video frames. Thus, raw data is made ambiguous while qualities of human visual interpretation are relied upon for extracting relevant meaning from the video.
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: December 30, 2014
    Assignee: New Jersey Institute of Technology
    Inventors: Nirwan Ansari, Christopher Andrew Neylan, Amey Bhaskar Shevtekar
  • Patent number: 8918277
    Abstract: A method is provided for recognizing road signs in the vicinity of a vehicle and for synchronizing them to road sign information from a digital map, in which road-sign recognition data are generated when road signs are recognized, navigation data are provided for localizing the vehicle in digital map data, and the road-sign recognition data are synchronized to the map data. To provide unambiguous, as accurate as possible, and thus improved road sign information to the driver in the context of such a method, in the case of a discrepancy between the map data and the road-sign recognition data, reliability factors for the camera and the map are used to decide whether the data from the digital map or the road-sign recognition data are output in the vehicle.
    Type: Grant
    Filed: December 7, 2011
    Date of Patent: December 23, 2014
    Assignee: Robert Bosch GmbH
    Inventors: Wolfgang Niem, Philipp Ibele
  • Patent number: 8917937
    Abstract: Methods and apparatus for identifying primary media content in a post-production media content presentation are disclosed. An example computer-implemented method to detect primary media content included in a secondary media content presentation disclosed herein comprises determining a first image corresponding to the secondary media content presentation, the first image comprising a plurality of image subregions, each image subregion representative of an inter-frame variation associated with a corresponding subregion of the secondary media content presentation, selecting a region of the first image comprising a plurality of connected image subregions of the first image together exhibiting a first type of inter-frame variation, and when a shape of the selected region of the first image corresponds to a predefined shape, processing a region of a captured image of the presentation corresponding to the selected region of the first image to identify the primary media content.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: December 23, 2014
    Assignee: The Nielsen Company (US), LLC
    Inventors: David Howell Wright, Ronan Heffernan, Michael McLean, Kevin Keqiang Deng, Paul Bulson
  • Patent number: 8917942
    Abstract: An information processing apparatus for matching a position and/or orientation of a measurement object with that of a model of the measurement object includes an acquisition unit configured to acquire a captured image of the measurement object, a calculation unit configured to calculate information indicating a surface shape of the measurement object based on the captured image, and a limitation unit configured to limit a position and/or orientation of the model based on the information indicating the surface shape.
    Type: Grant
    Filed: June 21, 2010
    Date of Patent: December 23, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Takayuki Saruta, Masato Aoba, Masakazu Matsugu
  • Patent number: 8913837
    Abstract: An image matching device includes: a mixed image generation portion generating a mixed image in an operation satisfying linearity, the mixed image being obtained by multiplying each of two or more recorded images by a different phase component of a complex plane and adding the multiplied recorded images; a complex similarity image generation portion generating a complex similarity image through a similarity operation between one or more input images and the mixed image; and a similarity obtaining portion obtaining similarity from a projected component of the complex similarity image toward a vector of the phase component.
    Type: Grant
    Filed: February 16, 2011
    Date of Patent: December 16, 2014
    Assignee: Fujitsu Limited
    Inventor: Takahiro Aoki
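The phase-multiplexing idea above can be illustrated compactly. The sketch below is a minimal interpretation, not the patented implementation: two recorded images are folded into one complex "mixed image" with orthogonal phase factors, a single FFT-based correlation is computed against the input, and the complex result is projected back onto each phase to recover a per-image similarity. All function names and parameters are illustrative.

```python
# Minimal sketch: complex phase multiplexing for matching against several
# recorded images with one correlation (illustrative, not the patented method).
import numpy as np

def mixed_image(images, phases):
    # Linear combination: each recorded image is tagged with its own phase factor.
    return sum(np.exp(1j * p) * img.astype(np.complex128) for img, p in zip(images, phases))

def complex_similarity(input_img, mix):
    # FFT-based cross-correlation (an operation satisfying linearity).
    return np.fft.ifft2(np.fft.fft2(input_img) * np.conj(np.fft.fft2(mix)))

def per_image_scores(corr, phases):
    # Project the complex correlation onto each phase vector; the real part of
    # the projection is the similarity contributed by that recorded image.
    return [float(np.max(np.real(corr * np.exp(1j * p)))) for p in phases]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((32, 32)), rng.random((32, 32))
    a, b = a - a.mean(), b - b.mean()   # zero-mean makes the peaks easier to compare
    mix = mixed_image([a, b], [0.0, np.pi / 2])
    print(per_image_scores(complex_similarity(a, mix), [0.0, np.pi / 2]))
    # the score for image "a" (first entry) should dominate
```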
  • Patent number: 8913823
    Abstract: An image processing method for removing a moving object includes an input step of inputting input images; a matching step of matching the input images according to corresponding positions; a determining step of determining a background image from the input images; a marking step of marking at least one moving object from at least one of the input images; and a replacing step of replacing a region occupied by the moving object in at least one of the input images with a corresponding regional background in another input image.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: December 16, 2014
    Assignee: East West Bank
    Inventors: Qiao-Ling Bian, Yan-Qing Lu, Jin Wang
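A minimal sketch of the general pipeline in the abstract, assuming grayscale input frames that are already matched and aligned (the matching step is omitted): the background is estimated as the per-pixel median across frames, strongly deviating pixels in the target frame are marked as the moving object, and they are replaced from the background built out of the other frames. The function name and threshold are illustrative.

```python
# Minimal sketch: determine background, mark moving pixels, replace them.
import numpy as np

def remove_moving_object(frames, frame_index, threshold=30.0):
    stack = np.stack([f.astype(np.float64) for f in frames])   # aligned frames assumed
    background = np.median(stack, axis=0)                       # determining step
    target = stack[frame_index]
    moving_mask = np.abs(target - background) > threshold       # marking step
    result = target.copy()
    result[moving_mask] = background[moving_mask]               # replacing step
    return result.astype(frames[frame_index].dtype), moving_mask
```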
  • Patent number: 8908902
    Abstract: A device is provided for verifying the integrity of an item of data displayed on a display device controlled by a video controller, the video controller being connected to the display device by an appropriate connection and transmitting a video signal to it. The device includes an input interface which allows the device to be connected at a branch of the connection between the video controller and the display device; a reconstruction device which is capable, from the derived video signal, of reconstructing an image corresponding to the image displayed on the display device; an analysis device which is capable of extracting an item of reconstructed data from the reconstructed image; a comparison device which is capable of comparing the reconstructed data with a reference value of the item of data to be displayed; and an alarm means which is capable of activating a malfunction alarm in accordance with the result at the output of the comparison device.
    Type: Grant
    Filed: July 20, 2010
    Date of Patent: December 9, 2014
    Assignee: ALSTOM Transport SA
    Inventors: Jacques Fifis, Christian Louis Georges Henri Euvrard
  • Patent number: 8908930
    Abstract: A primary object of the present invention is to extract a difference in shading of a blood vessel image in a picked up image as information to be used for authentication, and to acquire a larger number of pieces of biometric information from one image. An individual authentication device to be used to authenticate an individual using feature information of a vascular pattern acquired from a living body includes an imaging unit that images a region of the living body serving as an object of authentication, and an arithmetic unit that acquires the picked up image as an authentication image. The arithmetic unit extracts a vascular pattern from the authentication image, and acquires a degree of shading of the vascular pattern as the feature information.
    Type: Grant
    Filed: September 6, 2011
    Date of Patent: December 9, 2014
    Assignee: Hitachi, Ltd.
    Inventors: Yusuke Matsuda, Akio Nagasaka, Naoto Miura, Harumi Kiyomizu, Takafumi Miyatake
  • Patent number: 8908949
    Abstract: A system for detecting post-operative retained foreign bodies has a data storage unit adapted to receive and store a reference image of a surgical object, and a data processor in communication with the data storage unit. The data processor is configured to receive an image of an internal region of a patient and to receive the reference image from the data storage unit, and the data processor is configured to perform operations based on an algorithm to compare the reference image to at least a portion of the image of the internal region of the patient and determine whether a retained foreign body is present in the patient.
    Type: Grant
    Filed: February 22, 2011
    Date of Patent: December 9, 2014
    Assignee: The Johns Hopkins University
    Inventors: Bolanle Asiyanbola, Ralph Etienne-Cummings, Chao-Cheng Wu
  • Patent number: 8908975
    Abstract: An apparatus and method for automatically recognizing a QR code without a need to control the distance for recognition in relation to one QR code or two or more QR codes. The apparatus includes a photographing unit obtaining a surrounding image of the QR code including recognition points and surroundings, and a QR code recognition unit converting the surrounding image into a grayscale image of a pixel unit, converting the grayscale image into a histogram indicative of a distribution map according to the luminosity of each pixel, extracting only pixels having a luminosity value concentration level corresponding to a threshold or higher based on the histogram, setting the extracted pixels as a candidate pixel group, searching the set candidate pixel group for recognition points through a recognition marker and, when the recognition points are detected, recognizing a region in which the detected recognition points are placed as a QR code.
    Type: Grant
    Filed: October 9, 2012
    Date of Patent: December 9, 2014
    Inventors: So Woon Bae, Byoung Sun Kim, Sung Ho Yang
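The candidate-pixel step above can be sketched as follows; this is an illustrative interpretation, not the patented method, and the subsequent search for recognition points (finder patterns) is not shown. The threshold and function name are assumptions.

```python
# Minimal sketch: keep only pixels whose gray level is "concentrated" in the
# luminosity histogram, producing the candidate pixel group to be searched
# for QR recognition points.
import numpy as np

def candidate_pixel_group(gray, concentration_threshold=0.01):
    # gray: 2-D uint8 image
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    frequent_levels = np.flatnonzero(hist >= concentration_threshold * gray.size)
    return np.isin(gray, frequent_levels)   # boolean map of candidate pixels
```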
  • Patent number: 8908903
    Abstract: Image recognition methods, apparatus and articles of manufacture to support shelf auditing for consumer research are disclosed herein. An example method disclosed herein comprises comparing an input image depicting a shelf to be audited with a set of reference images depicting reference shelves displaying respective sets of reference items, identifying a first reference image from the set of reference images that has been determined to match the input image, and determining an initial audit result for the shelf depicted in the input image based on a first set of reference items associated with the first reference image.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: December 9, 2014
    Assignee: The Nielsen Company (US), LLC
    Inventors: Kevin Keqiang Deng, Wei Xie
  • Patent number: 8908974
    Abstract: An image capturing device capable of simplifying characteristic value sets of captured images and a control method thereof. The image capturing device comprises a characteristic conversion module, a data storage module, a characteristic simplification module, a template storage module and a recognition module. The characteristic conversion module converts an image captured by the image capturing device into a characteristic image, and the characteristic image includes a group of first characteristic value sets. The data storage module stores a lookup table which comprises second characteristic value sets. The characteristic simplification module performs a simplification process according to the lookup table to produce a simplified group of characteristic value sets. Finally, the recognition module compares the simplified group of characteristic value sets with the plurality of templates stored in the template storage module to recognize a specific object in the image.
    Type: Grant
    Filed: March 27, 2012
    Date of Patent: December 9, 2014
    Assignee: Altek Corporation
    Inventor: Tai-Chang Yang
  • Publication number: 20140355889
    Abstract: Aspects of the present invention include feature point matching systems and methods. In embodiments, a tree model is used to find candidate matching features for query feature points. In embodiments, the tree model may be pre-learned using a set of sample images, or alternatively, the tree model may be constructed using one or more of the input images. In embodiments, features in one of the stereo images are registered with the tree model, and then features from the other stereo image are queried through the tree model to identify their correspondences in the registered stereo image. As compared to prior brute force matching methodologies, embodiments disclosed herein reduce the complexity and calculation time for determining matching feature points in stereo images.
    Type: Application
    Filed: May 30, 2013
    Publication date: December 4, 2014
    Inventors: Yuanyuan Ding, Jing Xiao
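A minimal sketch of tree-based correspondence search in the same spirit, using a KD-tree as a stand-in for the publication's tree model: descriptors from one stereo image are registered in the tree, and descriptors from the other image are queried against it instead of brute-force comparing every pair. The ratio test is an added heuristic, not part of the abstract.

```python
# Minimal sketch: register left-image features in a KD-tree, query right-image
# features to find candidate correspondences.
import numpy as np
from scipy.spatial import cKDTree

def match_features(left_desc, right_desc, ratio=0.8):
    tree = cKDTree(left_desc)                     # registration step
    dists, idx = tree.query(right_desc, k=2)      # query step: two nearest candidates
    matches = []
    for j, (d, i) in enumerate(zip(dists, idx)):
        if d[0] < ratio * d[1]:                   # keep only clearly best candidates
            matches.append((int(i[0]), j))        # (left index, right index)
    return matches
```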
  • Patent number: 8903124
    Abstract: The present invention relates to an object learning method that minimizes time required for learning an object, an object tracking method using the object learning method, and an object learning and tracking system. The object learning method includes: receiving an image to be learned through a camera to generate a front image by a terminal; generating m view points used for object learning and generating first images obtained when viewing the object from the m view points using the front image; generating second images by performing radial blur on the first images; separating an area used for learning from the second images to obtain reference patches; and storing pixel values of the reference patches.
    Type: Grant
    Filed: December 14, 2010
    Date of Patent: December 2, 2014
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Woon Tack Woo, Won Woo Lee, Young Min Park
  • Patent number: 8903180
    Abstract: A mechanism is provided for security screening image analysis simplification through object pattern identification. Popular consumer electronics and other items are scanned in a control system, which creates an electronic signature for each known object. The system may reduce the signature to a hash value and place each signature for each known object in a “known good” storage set. For example, popular mobile phones, laptop computers, digital cameras, and the like may be scanned for the known good signature database. At the time of scan, such as at an airport, objects in a bag may be rotated to a common axis alignment and transformed to the same signature or hash value to match against the known good signature database. If an item matches, the scanning system marks it as a known safe object.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: December 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Joaquin Madruga, Barry L. Minor, Michael A. Paolini
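A hypothetical sketch of the "known good" lookup described above: an object scan is rotated to a canonical axis alignment, reduced to a hash value, and checked against a set built from control-system scans. The alignment and hashing shown here are assumptions for illustration; a deployed system would need signatures tolerant to scan noise, rounding, and axis-sign ambiguity, which this sketch glosses over.

```python
# Minimal sketch: canonicalize an object's pixel footprint, hash it, and test
# membership in a known-good signature set.
import hashlib
import numpy as np

def canonical_signature(object_mask):
    ys, xs = np.nonzero(object_mask)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)
    # Principal-axis alignment as a crude "common axis" normalization.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    aligned = np.round(pts @ vt.T).astype(int)
    canonical = aligned[np.lexsort((aligned[:, 1], aligned[:, 0]))]
    return hashlib.sha256(canonical.tobytes()).hexdigest()

known_good = set()   # filled from control-system scans of known-safe items

def is_known_safe(object_mask):
    return canonical_signature(object_mask) in known_good
```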
  • Patent number: 8897490
    Abstract: A vision-based user interface includes an image input unit for capturing frame images, an image processor for recognizing a posture in at least one of the captured frame images, and generating a recognized gesture according to the posture, and a control unit for generating a control command corresponding to the recognized gesture.
    Type: Grant
    Filed: March 23, 2011
    Date of Patent: November 25, 2014
    Assignee: ArcSoft (Hangzhou) Multimedia Technology Co., Ltd.
    Inventors: ZhiWei Zhang, JianFeng Li, Li Mei, Jin Wang
  • Patent number: 8896885
    Abstract: A printer with integral scanner for obtaining a digital signature from a sheet of paper or other article as it is printed. The integral scanner has a coherent source which directs a light beam to illuminate the article and a detector arrangement to collect data points from light scattered from many different parts of the article to collect a large number of independent data points, typically 500 or more. The digital signature derived from the data points is stored in a database with an image of what was printed on the article. At a later time, the authenticity of an article purported to be the originally printed article can be verified by scanning the purported genuine article to obtain its digital signature. The database is then searched, to establish whether there is a match. If a match is found, the image stored in the database with the matched digital signature is displayed to the user to allow a further visual check that the article is genuine.
    Type: Grant
    Filed: March 9, 2005
    Date of Patent: November 25, 2014
    Assignee: Ingenia Holdings Limited
    Inventor: Russell Paul Cowburn
  • Patent number: 8891881
    Abstract: A method for identifying an optimal image frame is presented. The method includes receiving a selection of an anatomical region of interest in an object of interest. Furthermore, the method includes obtaining a plurality of image frames corresponding to the selected anatomical region of interest. The method also includes determining a real-time indicator corresponding to the plurality of acquired image frames, wherein the real-time indicator is representative of quality of an image frame. In addition, the method includes communicating the real-time indicator to aid in selecting an optimal image frame. Systems and non-transitory computer readable medium configured to perform the method for identifying an optimal image frame are also presented.
    Type: Grant
    Filed: January 25, 2012
    Date of Patent: November 18, 2014
    Assignee: General Electric Company
    Inventors: Mithun Das Gupta, Kajoli Banerjee Krishnan, Pavan Kumar Veerabhadra Annangi, Xiaoming Liu, Sri Kaushik Pavani, Navneeth Subramanian, Jyotirmoy Banerjee
  • Patent number: 8891858
    Abstract: Methods, systems and apparatus for refining image relevance models. In general, one aspect of the subject matter described in this specification can be implemented in methods that include re-training an image relevance model by generating a first re-trained model based on content feature values of first images of a first portion of training images in a set of training images, receiving, from the first re-trained model, image relevance scores for second images of a second portion of the set of training images, removing, from the set of training images, some of the second images identified as outlier images for which the image relevance score received from the first re-trained model is below a threshold score, and generating a second re-trained model based on content feature values of the first images of the first portion and the second images of the second portion that remain following removal of the outlier images.
    Type: Grant
    Filed: July 10, 2012
    Date of Patent: November 18, 2014
    Assignee: Google Inc.
    Inventors: Arcot J. Preetham, Thomas J. Duerig, Charles J. Rosenberg, Yangli Hector Yee, Samy Bengio
  • Patent number: 8891877
    Abstract: A data processing apparatus that executes determining processing, using a plurality of stages, for determining whether or not a partial image sequentially extracted from an image of each frame of a moving image corresponds to a specific pattern, assigns a plurality of discriminators to each stage such that a plurality of partial images are processed in parallel. The data processing apparatus divides an image into a plurality of regions, and, for the image of each region, calculates a passage rate or accumulated passage rate from a ratio between the number of partial images input to a stage and the number of partial images determined to correspond to the specific pattern. The assignment of the discriminators to each stage is changed based on the passage rate or accumulated passage rate, in the immediately preceding processed image, of the region to which the partial image extracted from the image being processed belongs.
    Type: Grant
    Filed: July 14, 2011
    Date of Patent: November 18, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ryoko Natori, Shinji Shiraga
  • Patent number: 8891816
    Abstract: A parse module calibrates an interior space by parsing objects and words out of an image of the scene and comparing each parsed object with a plurality of stored objects. The parse module further selects a parsed object that is differentiated from the stored objects as the first object and stores the first object with a location description. A search module can detect the same objects from the scene and use them to determine the location of the scene.
    Type: Grant
    Filed: May 3, 2013
    Date of Patent: November 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: James Billingham, Helen Bowyer, Kevin Brown, Edward Jellard, Graham White
  • Patent number: 8885920
    Abstract: An image processing apparatus is provided. A silhouette extractor may extract a silhouette image of a target object from an input depth image. A first calculator may determine a location of at least one limb of the target object and a location of at least one joint connecting the at least one limb by applying a rectangle fitting algorithm with respect to the silhouette image.
    Type: Grant
    Filed: September 22, 2010
    Date of Patent: November 11, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hwa Sup Lim, Byong Min Kang
  • Patent number: 8885911
    Abstract: A computer-based method for the development of an image analysis protocol for analyzing image data, the image data containing images including image objects, in particular biological image objects such as biological cells. The image analysis protocol, once developed, is operable in an image analysis software system to report on one or more measurements conducted on selected ones of the image objects. The development process includes defining target identification settings to identify at least two different target sets of image objects, defining one or more pair-wise linking relationships between the identified target sets, and defining one or more measurements to be performed using said pair-wise linking relationship(s).
    Type: Grant
    Filed: October 20, 2011
    Date of Patent: November 11, 2014
    Assignee: GE Healthcare Niagara, Inc.
    Inventors: Xudong Li, Louis Ernest Dagenais, William Mark Durksen, Del Archer
  • Patent number: 8885951
    Abstract: A system and method are provided to identify and extract data from data forms by identifying data containment locations on the form, classifying the data containment locations to identify data containment locations of interest, and performing a match between recognition results and a predefined set of known labels or data formats to classify and return a data containment label, coordinates and the recognition value of interest.
    Type: Grant
    Filed: December 14, 2012
    Date of Patent: November 11, 2014
    Inventors: Tony Cristofano, Vladimir Laskin
  • Patent number: 8885950
    Abstract: Provided is a template matching method and a template matching apparatus, where the degree of matching between a template and the actual image upon template matching is maintained at a high level, without depending on a partial appearance of a lower layer. Proposed as one embodiment is a method and an apparatus for template matching, where either an area is set in which comparison of the template and the image is not conducted, or a second area is set inside the template where comparison different from comparison conducted in a first comparison area is to be conducted, and the template matching is conducted on the basis either of comparison excluding the non-comparison area, or of comparison using the first and second areas.
    Type: Grant
    Filed: October 6, 2010
    Date of Patent: November 11, 2014
    Assignee: Hitachi High-Technologies Corporation
    Inventors: Wataru Nagatomo, Yuichi Abe, Mitsuji Ikeda
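A minimal sketch of template matching with a non-comparison area, in the spirit of the abstract (brute-force and unoptimized, not the patented method): pixels outside the supplied mask are simply excluded from the normalized-correlation score, so an unstable lower-layer appearance does not affect the match.

```python
# Minimal sketch: masked normalized cross-correlation.
# mask: True where pixels take part in the comparison; False marks the
# non-comparison area inside the template.
import numpy as np

def masked_ncc(image, template, mask):
    th, tw = template.shape
    m = mask.astype(bool)
    t = template[m].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)
    best_score, best_pos = -np.inf, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw][m].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = float(np.mean(t * w))          # correlation over compared pixels only
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```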
  • Patent number: 8878906
    Abstract: Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map.
    Type: Grant
    Filed: November 28, 2012
    Date of Patent: November 4, 2014
    Assignee: Microsoft Corporation
    Inventors: Jamie D. J. Shotton, Mark J. Finocchio, Richard E. Moore, Alexandru O. Balan, Kyungsuk David Lee
  • Patent number: 8879804
    Abstract: A system, method and computer program product for automatically identifying coordinates of facial features in digital images. The facial images are detected and pupil coordinates are calculated. First, a face identification method is applied. Then, the centers of the pupils are identified. The image is rotated and scaled, and a portion of the image is cut out, so that the pupils lie on a horizontal line at fixed coordinates. Facial features can subsequently be identified in the original image.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: November 4, 2014
    Inventors: Alexey Konoplev, Yury Volkov
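A minimal sketch of the rotate/scale/crop normalization described above, assuming the pupil centers have already been detected. OpenCV is used only to apply the affine warp; the target pupil coordinates and output size are arbitrary illustrative values, not values from the patent.

```python
# Minimal sketch: build a similarity transform that places the detected pupils
# on a horizontal line at fixed target coordinates, then warp the image.
import numpy as np
import cv2

def align_by_pupils(image, left_pupil, right_pupil,
                    target_left=(60.0, 80.0), target_right=(140.0, 80.0),
                    out_size=(200, 200)):
    (x1, y1), (x2, y2) = left_pupil, right_pupil
    (u1, v1), (u2, v2) = target_left, target_right
    angle = np.arctan2(y2 - y1, x2 - x1)                       # current eye-line angle
    scale = np.hypot(u2 - u1, v2 - v1) / np.hypot(x2 - x1, y2 - y1)
    c, s = scale * np.cos(-angle), scale * np.sin(-angle)
    # Rotation + scale, with translation chosen so the left pupil lands on target_left.
    M = np.array([[c, -s, u1 - (c * x1 - s * y1)],
                  [s,  c, v1 - (s * x1 + c * y1)]])
    return cv2.warpAffine(image, M, out_size)
```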
  • Patent number: 8873890
    Abstract: A reading machine that operates in various modes and includes image correction processing is described. The reading device pre-processes an image for optical character recognition by receiving the image and determining whether text in the image is too large or small for optical character recognition processing by determining that text height falls outside of a range in which optical character recognition software will recognize text in a digitized image. If necessary, the image is resized according to whether the text is too large or too small.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: October 28, 2014
    Assignee: K-NFB Reading Technology, Inc.
    Inventors: Raymond C. Kurzweil, Paul Albrecht, Lucy Gibson
  • Patent number: 8873809
    Abstract: The invention relates to a method which uses a series of images sensed by an image sensor including at least one preceding image and one following image to estimate movement. A first estimated movement is initially obtained by estimating the total movement from the preceding image to the following image. Next, an image compensated according to the first estimated movement is obtained from either one of the preceding and following images. Then, a second estimated movement is obtained by estimating dense movement between the compensated image and the other from the preceding and following images. Next, a residual value of global movement is determined. Finally, if the residual value is lower than a threshold value the second estimated movement is provided; otherwise, the preceding steps are repeated.
    Type: Grant
    Filed: January 4, 2011
    Date of Patent: October 28, 2014
    Assignee: Sagem Defense Securite
    Inventors: Frédéric Jacquelin, Joël Budin, Youssef Benchekroun
  • Patent number: 8873861
    Abstract: In one embodiment, a method is disclosed for performing a video processing. The method can extract one or more common video segments. The method can select a common summarization segment based on a first summarization score. The method can extract one or more individual video segments in which the number of segments included therein is not more than a third threshold value that is less than a second threshold value. The method can select an individual summarization segment based on a second summarization score. In addition, the method can integrate the common summarization segment and the individual summarization segment to create the summary video.
    Type: Grant
    Filed: December 5, 2012
    Date of Patent: October 28, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Shigeru Motoi, Koji Yamamoto
  • Patent number: 8873798
    Abstract: A method, non-transitory computer readable medium, and apparatus that tracks an object includes utilizing random projections to represent an object in a region of an initial frame in a transformed space with at least one fewer dimension. One of a plurality of regions in a subsequent frame with the closest similarity between the represented object and one or more of a plurality of templates is identified as the location for the object in the subsequent frame. A learned distance is applied for template matching, and techniques that incrementally update the distance metric online are utilized in order to model the appearance of the object and increase the discrimination between the object and the background. A hybrid template library, with stable templates and hybrid templates that contain appearances of the object during the initial stage of tracking as well as more recent ones, is utilized to achieve robustness with respect to pose variation and illumination changes.
    Type: Grant
    Filed: February 7, 2011
    Date of Patent: October 28, 2014
    Assignee: Rochester Institute of Technology
    Inventors: Grigorios Tsagkatakis, Andreas Savakis
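A minimal sketch of tracking with random projections; the learned distance metric and the stable/hybrid template library from the abstract are omitted, with plain Euclidean distance and a single template vector standing in for them. The window stride and projection dimension are illustrative.

```python
# Minimal sketch: project candidate windows with a fixed random matrix and pick
# the window closest to the template in the low-dimensional space.
import numpy as np

rng = np.random.default_rng(42)

def make_projection(patch_size, dim=64):
    d = patch_size[0] * patch_size[1]
    return rng.normal(size=(dim, d)) / np.sqrt(dim)

def project(patch, P):
    return P @ patch.astype(float).ravel()

def track(frame, template_vec, P, patch_size, stride=4):
    ph, pw = patch_size
    best_d, best_pos = np.inf, None
    for y in range(0, frame.shape[0] - ph + 1, stride):
        for x in range(0, frame.shape[1] - pw + 1, stride):
            v = project(frame[y:y + ph, x:x + pw], P)
            d = np.linalg.norm(v - template_vec)   # Euclidean stand-in for the learned metric
            if d < best_d:
                best_d, best_pos = d, (y, x)
    return best_pos, best_d
```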
  • Patent number: 8872677
    Abstract: A compression method applies a selection rule to input symbols and generates a reduced partial set of symbols. The partial set is checked against a dictionary-index for a match. A match identifies a range of matching symbols in a dictionary. The length of the matching range is iteratively increased by checking previous and next symbols in the input data and the dictionary until a matching range length meets a threshold limit or the length of the matching range cannot be increased further. Compressed data corresponding to the input symbols is provided where input symbols are copied over and symbols in a matched range of data are replaced with a representation of their corresponding start location and length in the dictionary.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 28, 2014
    Assignee: Dialogic Networks (Israel) Ltd.
    Inventors: Oleg Litvak, Amir Ilan
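A simplified sketch in the spirit of the abstract: a short anchor of the input (standing in for the "reduced partial set of symbols") is looked up in an index built over the dictionary, and a hit is grown over previous and next symbols until it stops matching or reaches a length limit. The anchor length, index layout, and limits are assumptions, and emitting the final compressed stream of copied symbols and (start, length) references is not shown.

```python
# Minimal sketch: anchor lookup in a dictionary index, then bidirectional
# extension of the matching range.
ANCHOR = 4

def build_index(dictionary: bytes):
    index = {}
    for i in range(len(dictionary) - ANCHOR + 1):
        index.setdefault(dictionary[i:i + ANCHOR], i)   # keep first occurrence
    return index

def find_match(data: bytes, pos: int, dictionary: bytes, index, max_len=255):
    start = index.get(data[pos:pos + ANCHOR])
    if start is None:
        return None
    d, p = start, pos
    # Extend backwards over previous symbols.
    while d > 0 and p > 0 and dictionary[d - 1] == data[p - 1]:
        d, p = d - 1, p - 1
    length = 0
    # Extend forwards over next symbols up to the length limit.
    while (d + length < len(dictionary) and p + length < len(data)
           and length < max_len and dictionary[d + length] == data[p + length]):
        length += 1
    return p, d, length   # (input start, dictionary start, match length)
```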
  • Patent number: 8872979
    Abstract: Techniques are presented for analyzing audio-video segments, usually from multiple sources. A combined similarity measure is determined from text similarities and video similarities. The text and video similarities measure similarity between audio-video scenes for text and video, respectively. The combined similarity measure is then used to determine similar scenes in the audio-video segments. When the audio-video segments are from multiple audio-video sources, the similar scenes are common scenes in the audio-video segments. Similarities may be converted to or measured by distance. Distance matrices may be determined by using the similarity matrices. The text and video distance matrices are normalized before the combined similarity matrix is determined. Clustering is performed using distance values determined from the combined similarity matrix.
    Type: Grant
    Filed: May 21, 2002
    Date of Patent: October 28, 2014
    Assignee: Avaya Inc.
    Inventors: Amit Bagga, Jianying Hu, Jialin Zhong
  • Patent number: 8873835
    Abstract: Methods and apparatus for disparity map correction through statistical analysis on local neighborhoods. A disparity map correction technique may be used to correct mistakes in a disparity or depth map. The disparity map correction technique may detect and mark invalid pixel pairs in a disparity map, segment the image, and perform a statistical analysis of the disparities in each segment to identify outliers. The invalid and outlier pixels may then be corrected using other disparity values in the local neighborhood. Multiple iterations of the disparity map correction technique may be performed to further improve the output disparity map.
    Type: Grant
    Filed: November 13, 2012
    Date of Patent: October 28, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Paul Asente, Scott D. Cohen, Brian L. Price, Lesley Ann Northam
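A minimal sketch of the correction idea; a fixed tile grid stands in here for the image segmentation described in the abstract, and a median/MAD rule stands in for its statistical outlier analysis. Invalid pixels and detected outliers are refilled from the median of valid values in their local neighborhood.

```python
# Minimal sketch: flag invalid and outlier disparities per local region, then
# refill them from nearby valid disparities.
import numpy as np

def correct_disparity(disp, invalid_value=-1, tile=32, k=3.0):
    d = disp.astype(float)
    bad = disp == invalid_value
    h, w = d.shape
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            block = d[y0:y0 + tile, x0:x0 + tile]
            block_bad = bad[y0:y0 + tile, x0:x0 + tile]
            valid = block[~block_bad]
            if valid.size == 0:
                continue
            med = np.median(valid)
            mad = np.median(np.abs(valid - med)) + 1e-6
            bad[y0:y0 + tile, x0:x0 + tile] |= np.abs(block - med) > k * mad
    fixed = d.copy()
    for y, x in zip(*np.nonzero(bad)):
        nb = d[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
        nb_ok = nb[~bad[max(0, y - 2):y + 3, max(0, x - 2):x + 3]]
        if nb_ok.size:
            fixed[y, x] = np.median(nb_ok)   # fill from the local neighborhood
    return fixed
```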
  • Patent number: 8873863
    Abstract: The present disclosure relates to a system and a method for fingerprinting for comics. A system for searching comics according to the present disclosure includes: a fingerprint database storing fingerprints extracted from comics, a comics fingerprint extraction unit extracting fingerprints configured of at least one of box frames, cuts, and speech bubbles included in input comic images, a fingerprint based candidate group search unit searching candidate groups among comics stored in the fingerprint database using the extracted fingerprints, and a similarity measuring unit measuring similarity between the searched candidate groups and the comic images corresponding to the extracted fingerprints.
    Type: Grant
    Filed: October 12, 2012
    Date of Patent: October 28, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jee Hyun Park, Yong Suk Yoon, Sang Kwang Lee, Jung Hyun Kim, Seung Jae Lee, Sung Min Kim, Yong Seok Seo, Jung Ho Lee, Young Ho Suh, Won Young Yoo
  • Patent number: 8873841
    Abstract: Methods and apparatuses are provided for facilitating gesture recognition. A method may include constructing a matrix based at least in part on an input gesture and a template gesture. The method may further include determining whether a relationship determined based at least in part on the constructed matrix satisfies a predefined threshold. In an instance in which the relationship does not satisfy the predefined threshold, the method may also include eliminating the template gesture from further consideration for recognition of the input gesture. In an instance in which the relationship satisfies the predefined threshold, the method may further include determining a rotation matrix based at least in part on the constructed matrix.
    Type: Grant
    Filed: April 21, 2011
    Date of Patent: October 28, 2014
    Assignee: Nokia Corporation
    Inventors: Jun Yang, Hawk-Yin Pang, Zhigang Liu
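A sketch along the lines of the abstract, using a standard Procrustes/Kabsch construction as a stand-in for the patented details: a matrix is built from corresponding samples of the input and template gestures, a cheap score prunes templates below a threshold, and the rotation matrix is recovered by SVD for the survivors. The score definition and threshold value are assumptions.

```python
# Minimal sketch: matrix construction from both gestures, threshold test,
# then rotation recovery via SVD (Kabsch-style).
import numpy as np

def gesture_match(input_pts, template_pts, threshold=0.5):
    X = input_pts - input_pts.mean(axis=0)
    Y = template_pts - template_pts.mean(axis=0)
    H = X.T @ Y                                   # matrix constructed from both gestures
    score = np.trace(H) / (np.linalg.norm(X) * np.linalg.norm(Y) + 1e-9)
    if score < threshold:
        return None                               # eliminate this template from consideration
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])   # guard against reflections
    return Vt.T @ D @ U.T                         # rotation aligning input to template
```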
  • Publication number: 20140314327
    Abstract: Systems and methods for identifying that a non-digital object, specifically a plush toy, has been brought into the presence of a digital device, which can provide content in reaction to the presence of that plush toy.
    Type: Application
    Filed: March 13, 2014
    Publication date: October 23, 2014
    Applicant: Build-A-Bear Workshop, Inc.
    Inventor: Brandon Elliott
  • Patent number: 8867847
    Abstract: A method and system for probe-based pattern matching including an apparatus for synthetic training of a model of a pattern. The apparatus comprises a sensor for obtaining an image of the pattern and a processor for receiving the image of the pattern from the sensor and running a program. In the steps performed by the program a boundary of the pattern in the image is identified. A plurality of positive probes are placed at selected points along the boundary of the pattern and at least one straight segment of the boundary of the pattern is identified. The at least one straight segment of the boundary is extended to provide an imaginary straight segment and a plurality of negative probes are placed at selected points along the imaginary straight segment, where each negative probe has a negative weight.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: October 21, 2014
    Assignee: Cognex Technology and Investment Corporation
    Inventors: William M. Silver, E. John McGarry, Sanjay Nichani, Adam Wagman
  • Patent number: 8867866
    Abstract: A method, system and computer program product for matching images is provided. The images to be matched are represented by feature points and feature vectors and orientations associated with the feature points. First, putative correspondences are determined by using feature vectors. A subset of putative correspondences is selected and the topological equivalence of the subset is determined. The topologically equivalent subset of putative correspondences is used to establish a motion estimation model. An orientation consistency test is performed on the putative correspondences and the corresponding motion estimation transformation that is determined, to avoid an infeasible transformation. A coverage test is performed on the matches that satisfy orientation consistency test. The candidate matches that do not cover a significant portion of one of the images are rejected. The final match images are provided in the order of decreasing matching, in case of multiple images satisfying all the test requirements.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: October 21, 2014
    Assignee: A9.com, Inc.
    Inventors: Mark R. Ruzon, Donald Tanguay
  • Patent number: 8867799
    Abstract: A fingerprint sensing module includes a sensor substrate having a sensing side and a circuit side, an image sensor including conductive traces on the circuit side of the sensor substrate, and a sensor circuit including at least one integrated circuit mounted on the circuit side of the sensor substrate and electrically connected to the image sensor. The sensor substrate may be a flexible substrate. The module may include a velocity sensor on the sensor substrate or on a separate substrate. The module may further include a rigid substrate, and the sensor substrate may be affixed to the rigid substrate.
    Type: Grant
    Filed: April 25, 2012
    Date of Patent: October 21, 2014
    Assignee: Synaptics Incorporated
    Inventor: Fred G. Benkley, III
  • Patent number: 8867798
    Abstract: Digital image data including discrete photographic images of a variety of different subjects, times, and so forth, are collected and analyzed to identify specific features in the photographs. In an embodiment of the invention, distinctive markers are distributed to aid in the identification of particular subject matter. Facial recognition may also be employed. The digital image data is maintained in a database and queried in response to search requests. The search requests include criteria specifying any feature category or other identifying information, such as the date, time, and location at which each photograph was taken, associated with each photograph. Candidate images are provided for review by requesters, who may select desired images for purchase or downloading.
    Type: Grant
    Filed: February 25, 2013
    Date of Patent: October 21, 2014
    Assignee: Intellectual Ventures I LLC
    Inventor: Gary Stephen Shuster
  • Patent number: 8861867
    Abstract: A content editor application receives a reference map that indicates which portions of a first image are foreground, and which portions of the first image are background. The content editor compares a coloration of regions in the first image to a coloration of regions in the second image. For regions in the second image that match a coloration of corresponding regions in the first image, or that are within a threshold range of coloration, the content editor uses the reference map to mark regions of the second image that are foreground and to mark which regions of the second image are background. Accordingly, the reference map of the first image can be used to identify whether regions of a second image or subsequent images in a sequence are foreground and which are background.
    Type: Grant
    Filed: August 10, 2012
    Date of Patent: October 14, 2014
    Assignee: Adobe Systems Incorporated
    Inventor: Pavan Kumar Bvn
  • Patent number: 8861801
    Abstract: According to one embodiment, a facial image search system includes an attribute discrimination module configured to discriminate an attribute based on an extracted facial feature; a plurality of search modules configured to store facial features as a database in advance, add the extracted facial feature to the database, and calculate a degree of similarity between the extracted facial feature and facial features contained in the database; a setting module configured to generate setting information by associating any attribute with information indicating a search module; and a control module configured to identify one or a plurality of search modules based on the setting information and the attribute and transmit the facial feature extracted by the feature extraction module to the identified search modules.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: October 14, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hiroshi Sukegawa, Osamu Yamaguchi
  • Patent number: 8855429
    Abstract: A method and an apparatus for recognizing characters using an image are provided. A camera is activated according to a character recognition request and a preview mode is set for displaying an image photographed through the camera in real time. An auto focus of the camera is controlled and an image having a predetermined level of clarity is obtained for character recognition from the images obtained in the preview mode. The image for character recognition is character-recognition-processed so as to extract recognition result data. A final recognition character row is drawn that excludes non-character data from the recognition result data. A first word is combined including at least one character of the final recognition character row and a predetermined maximum number of characters. A dictionary database that stores dictionary information on various languages using the first word is searched, so as to provide the user with the corresponding word.
    Type: Grant
    Filed: September 4, 2013
    Date of Patent: October 7, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyun-Soo Kim, Seong-Taek Hwang, Sang-Wook Oh, Sang-Ho Kim, Yun-Je Oh, Hee-Won Jung, Sung-Cheol Kim
  • Patent number: 8854395
    Abstract: A method for storing a pre-designed digital template having a number of digital openings for displaying at least one digital image within each opening. Each opening has stored required image attributes associated with it for finding appropriate images to place in the opening. At least one of the non-primary image files has a satisfactory required image attribute score as computed by the computer system which is based on a required image attribute for an associated opening where an image of the at least one other of the non-primary image files will be displayed. The required image attribute itself is based upon at least one of the image attributes of the primary image file.
    Type: Grant
    Filed: July 30, 2009
    Date of Patent: October 7, 2014
    Assignee: Intellectual Ventures Fund 83 LLC
    Inventors: Raymond W. Ptucha, Laura R. Whitby, William Bogart
  • Patent number: 8855363
    Abstract: In accordance with one embodiment, a method to track persons includes generating a first and second set of facial coefficient vectors by: (i) providing a first and second image containing a plurality of persons; (ii) locating faces of persons in each image; and (iii) generating a facial coefficient vector for each face by extracting from the images coefficients sufficient to locally identify each face, then tracking the persons within the images, the tracking including comparing the first set of facial coefficient vectors to the second set of facial coefficient vectors to determine for each person in the first image if there is a corresponding person in the second image. Optionally, the method includes using estimated locations in combination with the vector distance between facial coefficient vectors to track persons.
    Type: Grant
    Filed: March 21, 2010
    Date of Patent: October 7, 2014
    Assignee: Rafael Advanced Defense Systems Ltd.
    Inventors: Erez Berkovich, Dror Shapira
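A minimal sketch of the cross-image association step: given one facial coefficient vector per detected face in each image, faces are paired across the two images by minimizing total vector distance, with optimal assignment standing in for the abstract's comparison and estimated-location logic. Pairs whose distance exceeds a threshold are treated as having no corresponding person; the threshold is an illustrative assumption.

```python
# Minimal sketch: associate faces across two images by facial coefficient
# vector distance using optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def track_faces(vectors_a, vectors_b, max_distance=0.6):
    cost = np.linalg.norm(vectors_a[:, None, :] - vectors_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_distance]
```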
  • Patent number: 8849043
    Abstract: A system for automatically selecting a template and a number of secondary images for display with a primary preselected image based on analyzing the primary image's attribute information and comparing the template's required image attributes and secondary image's attribute information. The attribute information is used to evaluate and arithmetically score a compatibility of the images and template so that a best compatibility fit can be obtained when displaying the image.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: September 30, 2014
    Assignee: Intellectual Ventures Fund 83 LLC
    Inventors: Raymond W. Ptucha, Laura R. Whitby, William Bogart
  • Patent number: 8849065
    Abstract: A system for making an image product includes a computer including a processor and a memory, a template stored in the memory, the template including a template graphic and a plurality of openings in the template graphic, an image stored in the memory, and the processor compositing the image into two or more of the plurality of openings, so that two different portions of the image are located in two different openings and the two different portions have the same relative locations in the composition as in the user image.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: September 30, 2014
    Assignee: Kodak Alaris Inc.
    Inventors: Ronald Steven Cok, John Randall Fredlund