Feature Extraction Patents (Class 382/190)
  • Patent number: 8768063
    Abstract: An image processing apparatus includes: face detection means for detecting a subject face contained in each of images continuously captured by capturing means and extracting attribute information on each detected face; score determination means for determining a score for each of the continuously captured images on the basis of the attribute information on the detected face extracted by the face detection means; and representative image selection means for selecting a representative image from the continuously captured images on the basis of the score determined by the score determination means.
    Type: Grant
    Filed: December 18, 2009
    Date of Patent: July 1, 2014
    Assignee: Sony Corporation
    Inventor: Yusuke Sugita
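The selection scheme this abstract describes can be sketched in a few lines: score each frame of a burst from the attributes of its detected faces, then pick the highest-scoring frame as the representative image. The attribute names (`eyes_open`, `smile`, `blur`) and weights below are illustrative assumptions, not the patent's actual scoring.

```python
# Hypothetical sketch of score-based representative image selection:
# score each burst frame from its detected faces' attributes, then
# pick the highest-scoring frame. Attributes and weights are assumed.

def frame_score(faces):
    """Score one frame from the attribute dicts of its detected faces."""
    if not faces:
        return 0.0
    # Reward open eyes and smiles; penalize blur (assumed attributes).
    return sum(
        0.5 * f["eyes_open"] + 0.3 * f["smile"] + 0.2 * (1.0 - f["blur"])
        for f in faces
    ) / len(faces)

def select_representative(burst):
    """burst: list of (frame_id, faces). Returns the best frame_id."""
    return max(burst, key=lambda item: frame_score(item[1]))[0]
```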
  • Patent number: 8768069
    Abstract: The present invention relates to an image enhancement apparatus for enhancing an input image of a sequence of input images. To provide the ability to increase the resolution of an input image and/or to temporally reduce artifacts and/or noise in an input image, the apparatus comprises a motion compensation unit, a weighted selection unit, a feature analysis unit, an image model unit configured to generate a modelled image by applying an image model on said input image and/or said weighted selection image, a spatio-temporal detail signal generation unit configured to generate a detail signal from said input image and said weighted selection image, and a combination unit configured to generate said enhanced output image from said input image, said detail signal and said modelled image.
    Type: Grant
    Filed: February 17, 2012
    Date of Patent: July 1, 2014
    Assignee: Sony Corporation
    Inventors: Paul Springer, Toru Nishi, Martin Richter, Matthias Brueggemann
  • Publication number: 20140177963
    Abstract: A user touches a touch sensitive display or otherwise provides input comprising “stroke” gestures to trace areas which are to be the subject of post-processing functions. The stroke area is highlighted and can be added to or corrected by additional stroke and “erase” gestures. Pixel objects are detected proximate to the stroke area, with a precision based on the zoom level. Stroke gesture input may be received and pixel object determination may be performed in series or in parallel.
    Type: Application
    Filed: March 15, 2013
    Publication date: June 26, 2014
    Applicant: menschmaschine Publishing GmbH
    Inventors: Friedemann WACHSMUTH, Richard CASE
  • Publication number: 20140177964
    Abstract: Techniques disclosed herein provide for conducting an image search of video frames using a captured image of a display or a screen capture of a media item during playback. Results of the image search may be used to play back a corresponding video from the point in the video at which the captured image was taken, initiate a second-screen user experience, and/or perform other functions. Techniques are also disclosed for building a library of video frames with which image searches may be conducted.
    Type: Application
    Filed: February 27, 2014
    Publication date: June 26, 2014
    Applicant: Unicorn Media, Inc.
    Inventors: Michael Edmund Godlewski, Albert John McGowan, Matthew A. Johnson
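The frame-library idea above can be illustrated with a tiny perceptual hash: index each video frame by a hash of a thumbnail, then match a captured image against the library to recover the video and playback position. The 4-pixel average hash and all names here are assumptions for illustration, not the patent's actual indexing.

```python
# Illustrative sketch of a video-frame library searched by image:
# frames are indexed by an average hash; a captured image is matched
# by Hamming distance to recover (video_id, timestamp).

def average_hash(pixels):
    """pixels: flat list of grayscale values (e.g. a small thumbnail).
    Returns a bit string: 1 where the pixel exceeds the mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_library(frames):
    """frames: list of (video_id, timestamp, pixels)."""
    return [(average_hash(p), vid, t) for vid, t, p in frames]

def search(library, pixels, max_dist=2):
    """Return (video_id, timestamp) of the closest frame, or None."""
    h = average_hash(pixels)
    best = min(library, key=lambda e: hamming(e[0], h))
    return (best[1], best[2]) if hamming(best[0], h) <= max_dist else None
```

A hit gives the timestamp at which playback can resume, matching the abstract's "play back from the point at which the captured image was taken".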
  • Patent number: 8761962
    Abstract: Provided are a system and method of controlling an in-vehicle device using augmented reality. A system for controlling an in-vehicle device using augmented reality includes a mobile device configured to identify a vehicle object unit as an image and receive a vehicle control command through implementation of augmented reality on the image, and a driving control unit configured to transmit vehicle type information to the mobile device and, upon receiving a command signal from the mobile device, to control the in-vehicle device that corresponds to the command signal. Accordingly, by remotely controlling an in-vehicle device using the augmented reality of a mobile device, user convenience may be improved.
    Type: Grant
    Filed: November 29, 2010
    Date of Patent: June 24, 2014
    Assignee: Hyundai Motor Company
    Inventor: Dong Cheol Seok
  • Patent number: 8761518
    Abstract: According to one embodiment, a pattern inspection apparatus includes a first inspection data creation section, a first delay section, a first recognition section, a first extraction section, a first and a second level difference calculation section, a first and a second determination section, and a first logic OR calculation section. The first extraction section extracts data of a sub-resolution pattern from the first inspection data and the first delay data. The first and second level difference calculation sections calculate differences between an average output level of a surrounding region for a target pixel of the extracted data from the first inspection data or the first delay data and an output level of the extracted data. The first and second determination sections determine presence or absence of a defect. The first logic OR calculation section calculates the logic OR of the determination results of the first and second determination sections.
    Type: Grant
    Filed: July 15, 2011
    Date of Patent: June 24, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hiromu Inoue, Takeshi Fujiwara, Hiroshi Tsukada, Takashi Hirano
  • Patent number: 8761452
    Abstract: A method for fingerprinting a video involves identifying motion within the video and using a measure of the identified motion as a fingerprint. Once videos are fingerprinted, these fingerprints can be used in a method for identifying video. This involves creating a motion fingerprint for the unidentified video, comparing the fingerprints of the known and unknown videos, and identifying whether the unknown video is a copy of the known video based on the comparison.
    Type: Grant
    Filed: October 15, 2013
    Date of Patent: June 24, 2014
    Assignee: The University Court of the University of St. Andrews
    Inventor: Martin Bateman
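A minimal sketch of the motion-fingerprint comparison described above: measure inter-frame motion (here, mean absolute pixel difference), use the resulting series as the fingerprint, and compare fingerprints to decide whether an unknown video copies a known one. The distance measure and threshold are illustrative assumptions.

```python
# Hedged sketch of motion fingerprinting: one motion value per
# consecutive frame pair forms the fingerprint; fingerprints are
# compared by mean absolute difference (assumed metric).

def motion_fingerprint(frames):
    """frames: list of equal-length pixel lists. Returns one motion
    value per consecutive frame pair."""
    return [
        sum(abs(a - b) for a, b in zip(f0, f1)) / len(f0)
        for f0, f1 in zip(frames, frames[1:])
    ]

def is_copy(fp_known, fp_unknown, threshold=5.0):
    """True if aligned fingerprints differ by at most the threshold."""
    if len(fp_known) != len(fp_unknown):
        return False
    d = sum(abs(a - b) for a, b in zip(fp_known, fp_unknown))
    return d / len(fp_known) <= threshold
```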
  • Patent number: 8761448
    Abstract: Techniques are disclosed for processing a video stream to reduce platform power by employing a stepped and distributed pipeline process, wherein CPU-intensive processing is selectively performed. The techniques are particularly well-suited for hand-based navigational gesture processing. In one example case, for instance, the techniques are implemented in a computer system wherein initial threshold detection (image disturbance) and optionally user presence (hand image) processing components are proximate to or within the system's camera, and the camera is located in or proximate to the system's primary display. In some cases, image processing, and communication of pixel information between the various processing stages, is suppressed for pixels lying outside a markered region. In some embodiments, the markered region is aligned with a mouse pad, a designated desk area, or a user input device such as a keyboard. Pixels evaluated by the system can be limited to a subset of the markered region.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: June 24, 2014
    Assignee: Intel Corporation
    Inventor: Jeremy Burr
  • Patent number: 8761515
    Abstract: A method for creating measurement codes automatically using an electronic device. In the method, a directory tree is created to display a plurality of feature elements. A selected feature element in the directory tree is determined; and output axes of the selected feature element are determined, according to an attribute type and a measurement type of the selected feature element. A marked number of the selected feature element is received; and a reference value, an upper tolerance, and a lower tolerance of the selected feature element are obtained. Measurement codes of the selected feature element are created according to the above-described obtained information, and the measurement codes are stored in a storage device of the electronic device.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: June 24, 2014
    Assignees: Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd., Hon Hai Precision Industry Co., Ltd.
    Inventors: Chih-Kuang Chang, Xin-Yuan Wu, Zheng-Zhi Zhang, Jin-Gang Rao
  • Patent number: 8761964
    Abstract: In a method for controlling an unmanned aerial vehicle (UAV) in a flight space using a computing device, a 3D sample database is created and stored in a storage device of the computing device. The computing device includes a depth-sensing camera that captures a 3D scene image of a scene in front of a user, and senses a depth distance between the user and the depth-sensing camera. A 3D person image of the user is detected from the 3D scene image, and gesture information of the user is obtained by comparing the 3D person image with human gesture data stored in the 3D sample database. The method converts the gesture information of the user into one or more flight control commands, and drives a driver of the UAV to control the UAV to fly in a flight space according to the flight control commands.
    Type: Grant
    Filed: November 13, 2012
    Date of Patent: June 24, 2014
    Assignee: Hon Hai Precision Industry Co., Ltd.
    Inventors: Hou-Hsien Lee, Chang-Jung Lee, Chih-Ping Lo
  • Patent number: 8761500
    Abstract: A method for automatically recognizing Arabic text includes building an Arabic corpus comprising Arabic text files written in different writing styles and ground truths corresponding to each of the Arabic text files, storing writing-style indices in association with the Arabic text files, digitizing a line of Arabic characters to form an array of pixels, dividing the line of the Arabic characters into line images, forming a text feature vector from the line images, training a Hidden Markov Model using the Arabic text files and ground truths in the Arabic corpus in accordance with the writing-style indices, and feeding the text feature vector into the trained Hidden Markov Model to recognize the line of Arabic characters.
    Type: Grant
    Filed: May 12, 2013
    Date of Patent: June 24, 2014
    Assignee: King Abdulaziz City for Science and Technology
    Inventors: Mohammad S. Khorsheed, Hussein K. Al-Omari, Majed Ibrahim Bin Osfoor, Adbulaziz Obaid Alobaid, Hussam Abdulrahman Alfaleh, Arwa Ibrahem Bin Asfour
  • Patent number: 8761498
    Abstract: A computer implemented system for identifying license plates and faces in street-level images is disclosed. The system includes an object detector configured to determine a set of candidate objects in the image, a feature vector module configured to generate a set of feature vectors using the object detector to generate a feature vector for each candidate object in the set of candidate objects, a composite feature vector module to generate a set of composite feature vectors by combining each generated feature vector with a corresponding road or street description of the object in question, and an identifier module configured to identify objects of a particular type using a classifier that takes a set of composite feature vectors as input and returns a list of candidate objects that are classified as being of the particular type as output.
    Type: Grant
    Filed: January 26, 2012
    Date of Patent: June 24, 2014
    Assignee: Google Inc.
    Inventor: Bo Wu
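The composite-feature step above is simple to sketch: each candidate object's feature vector is concatenated with a descriptor of its road/street context, and a classifier scores the composite vector. The linear classifier and weights below are assumed placeholders, not the patented system's actual model.

```python
# Minimal sketch of composite feature vectors for object identification:
# per-object features are concatenated with scene context and scored
# by a (placeholder) linear classifier.

def composite_vector(object_features, context_features):
    """Concatenate per-object features with the scene context."""
    return list(object_features) + list(context_features)

def classify(candidates, weights, bias=0.0):
    """candidates: list of (obj_id, features, context).
    Returns the ids whose linear score is positive."""
    hits = []
    for obj_id, feats, ctx in candidates:
        v = composite_vector(feats, ctx)
        score = sum(w * x for w, x in zip(weights, v)) + bias
        if score > 0:
            hits.append(obj_id)
    return hits
```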
  • Publication number: 20140168414
    Abstract: Methods and systems for determining information about an object are described. In one aspect, a method includes illuminating an object with a plurality of lines of light, the lines being spaced-apart along an axis, and acquiring a sequence of images of the lines of light while rotating the object about the axis. The method further includes, for each image, determining a location of an extremum for each of the lines of light. Furthermore, the method includes establishing a reference line based on the location of the extrema for a first plurality of the lines, calculating a deviation between the extrema of a second plurality of the lines and the reference line, and determining information about the shape of the object based on the calculated deviations.
    Type: Application
    Filed: December 19, 2012
    Publication date: June 19, 2014
    Applicant: TENARIS CONNECTIONS LIMITED
    Inventor: TENARIS CONNECTIONS LIMITED
  • Publication number: 20140169677
    Abstract: This invention, which relates to retrieving an object from a video or a photo where the object matches a hand-drawn sketch, discloses a method for automatically estimating a perceptual bias level with respect to a feature of the sketch. The method allows estimation based on the sketch alone without involving an extra database. In one embodiment, the method comprises using an expectation-maximization tensor voting (EMTV) method to analyze a statistical distribution of the feature. The statistical distribution is analyzed by forming an objective function having the statistical distribution's information parameterized by the perceptual bias level, and then maximizing the objective function according to a set of iterative update rules. In another embodiment, the method for automatically estimating a perceptual bias level is incorporated into a method for retrieving one or more objects from an image or video database where the one or more objects match a hand-drawn sketch.
    Type: Application
    Filed: December 19, 2012
    Publication date: June 19, 2014
    Applicant: Hong Kong Applied Science and Technology Research Institute Company Limited
    Inventor: Hong Kong Applied Science and Technology Research Institute Company Limited
  • Publication number: 20140169679
    Abstract: A video processing apparatus includes a first storage means unit which stores, in correspondence to a viewer, frame feature values to characterize each frame of scenes constituted by a series of frames in a video content viewed by the viewer; a second storage means unit which stores, as scene groups classified by attributes of the scenes, the frame feature values of scenes constituted by the series of frames; an interest level accumulation means unit which compares the frame feature values stored in the first storage means unit with the frame feature values stored in the second storage means unit, and when the compared frame feature values match, increases a score about the viewer which represents the interest level with respect to the scene group of which the frame feature values match; and a viewer preference determination means unit which determines that the scene groups of which the scores are higher are the scene groups preferred by the viewer.
    Type: Application
    Filed: July 31, 2012
    Publication date: June 19, 2014
    Inventors: Hiroo Harada, Naotake Fujita
  • Patent number: 8755609
    Abstract: A method of processing a viewport within an image arranged as a matrix of tiles from a container file is provided. The method includes reading data of the viewport starting from a point of origin of the viewport, the viewport having a pixel width and a pixel height, the viewport being a portion of an image stored in a record within the container file; reading record metadata of the record; computing column numbers or row numbers, or both, of the tiles containing the viewport; and launching parallel or asynchronous read requests for each row or each column of the viewport.
    Type: Grant
    Filed: August 9, 2013
    Date of Patent: June 17, 2014
    Assignee: Pixia Corp.
    Inventors: Rahul C. Thakkar, Scott L. Pakula, Rudolf O. Ernst
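The tile arithmetic this abstract describes can be sketched directly: given a viewport's origin and size in pixels and the image's tile size, compute the inclusive ranges of tile columns and rows that must be read. Parameter names are illustrative.

```python
# Sketch of computing which tiles of a tiled image cover a viewport.
# Integer division maps pixel coordinates to tile indices.

def tiles_for_viewport(x, y, width, height, tile_w, tile_h):
    """Returns ((first_col, last_col), (first_row, last_row)),
    inclusive tile index ranges covering the viewport."""
    first_col = x // tile_w
    last_col = (x + width - 1) // tile_w
    first_row = y // tile_h
    last_row = (y + height - 1) // tile_h
    return (first_col, last_col), (first_row, last_row)
```

One read request could then be launched per row (or column) of tiles in these ranges, matching the abstract's parallel or asynchronous reads.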
  • Patent number: 8755603
    Abstract: An information processing apparatus includes an identifying unit, a character recognition unit, an obtaining unit, a correcting unit, and an output unit. The identifying unit identifies a still image included in a moving image. The character recognition unit performs character recognition on the still image identified by the identifying unit. The obtaining unit obtains information about the moving image. The correcting unit corrects, on the basis of the information obtained by the obtaining unit, a character recognition result generated by the character recognition unit. The output unit outputs the character recognition result corrected by the correcting unit in association with the moving image.
    Type: Grant
    Filed: February 17, 2012
    Date of Patent: June 17, 2014
    Assignee: Fuji Xerox Co., Ltd.
    Inventors: Takeshi Nagamine, Tsutomu Abe
  • Patent number: 8755623
    Abstract: Disclosed are an image enhancement method, an image enhancement device, an object detection method, and an object detection device. The image enhancement method comprises steps of (a) letting an input image be a waiting-for-enhancement image and detecting specific objects in the waiting-for-enhancement image; (b) determining, based on an image feature of an object area including the detected specific objects, an image enhancement parameter so that an after-enhancement image enhanced according to the image enhancement parameter points out the image feature; (c) enhancing the waiting-for-enhancement image; (d) detecting the specific objects in the after-enhancement image; and (e) determining whether a predetermined stopping condition is satisfied.
    Type: Grant
    Filed: April 19, 2011
    Date of Patent: June 17, 2014
    Assignee: Ricoh Company, Ltd.
    Inventor: Cheng Du
  • Patent number: 8755568
    Abstract: A hand gesture from a camera input is detected using an image processing module of a consumer electronics device. The detected hand gesture is identified from a vocabulary of hand gestures. The electronics device is controlled in response to the identified hand gesture. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: June 17, 2014
    Assignee: Sony Corporation
    Inventor: Suranjit Adhikari
  • Patent number: 8755083
    Abstract: An image checking device checks a printed image printed out on a printing medium.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: June 17, 2014
    Assignee: Ricoh Company, Limited
    Inventor: Hiroyuki Kawamoto
  • Publication number: 20140161354
    Abstract: A method, apparatus and computer program product are provided for extracting semantic information from user-generated media content to create a video remix which is semantically enriched. An exemplary method comprises extracting media content data and sensor data from a plurality of media content, wherein the sensor data comprises a plurality of data modalities. The method may also include classifying the extracted media content data and the sensor data. The method may further include detecting predefined objects or events utilizing the sensor data to create remix video.
    Type: Application
    Filed: December 6, 2012
    Publication date: June 12, 2014
    Applicant: NOKIA CORPORATION
    Inventors: Igor Danilo Diego Curcio, Sujeet Shyamsundar Mate, Francesco Cricri, Mikko Joonas Roininen, Sailesh Sathish
  • Publication number: 20140161356
    Abstract: Disclosed are systems, devices and techniques that generate a set of media portions associated with a set of message inputs for a multimedia message based on an emoticon or an acronym. A text based message can be received having an emoticon or acronym. The emoticon or acronym is identified in the text based message. A splicing component extracts a set of media content portions from media content, in which the media content portions correspond to the emoticon or acronym received. A multimedia message is generated with the media content portions to convey the text based message as a multimedia message.
    Type: Application
    Filed: December 10, 2012
    Publication date: June 12, 2014
    Applicant: RAWLLIN INTERNATIONAL INC.
    Inventors: Måns Anders Tesch, Johan Magnus Tesch, Vsevolod Kuznetsov
  • Patent number: 8749654
    Abstract: An image is input on a frame unit basis, the input image is sequentially reduced, and an object is detected from the input image and the reduced image at a frame rate according to a reduction ratio of the reduced image to the input image, thereby decreasing an amount of calculations necessary to detect the object from the image.
    Type: Grant
    Filed: October 12, 2010
    Date of Patent: June 10, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Fumitada Nagashima
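The scheduling idea above can be sketched as follows: the detector scans each pyramid level at a frame rate tied to that level's reduction ratio, so not every level is scanned on every frame, cutting total computation. The specific rule below (a level with ratio r is scanned every r-th frame) is an assumption for illustration, not the patent's exact schedule.

```python
# Hedged sketch of multi-scale detection at ratio-dependent frame
# rates: a pyramid level with reduction ratio r is scanned only on
# frames where the frame index is a multiple of r (assumed rule).

def detection_schedule(frame_count, reduction_ratios):
    """For each frame index, list which reduction ratios (pyramid
    levels) the detector should scan on that frame."""
    schedule = []
    for f in range(frame_count):
        levels = [r for r in reduction_ratios if f % r == 0]
        schedule.append(levels)
    return schedule
```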
  • Patent number: 8750616
    Abstract: In an extracting step, the extracting portion obtains a linked component composed of a plurality of mutually linking pixels from a character string region composed of a plurality of characters, and extracts section elements from the character string region, the section elements each being surrounded by a figure circumscribing the linked component. In the first altering step, the first altering portion combines section elements having at least a mutually overlapping part among the extracted section elements so as to prepare a new section element. In the first selecting step, the first selecting portion determines a reference size in advance and selects section elements having a size greater than the reference size, from among the section elements altered in the first altering step.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: June 10, 2014
    Assignee: Sharp Kabushiki Kaisha
    Inventors: Bo Wu, Jianjun Dou, Ning Le, Yadong Wu, Jing Jia
  • Patent number: 8751214
    Abstract: An information processor includes: a character recognizing unit; a recognized character feature obtaining unit; a translation deciding unit; a translating unit; a translated result feature obtaining unit; an output deciding unit; an image receiving unit; and an output unit, wherein the character recognizing unit recognizes a character in a character image of the image data received by the image receiving unit, and the recognized character feature obtaining unit, in a case where a picture image other than the character is recognized, obtains a third feature related to a character included in the picture image.
    Type: Grant
    Filed: September 16, 2008
    Date of Patent: June 10, 2014
    Assignee: Fuji Xerox Co., Ltd.
    Inventor: Masahiro Kato
  • Patent number: 8749555
    Abstract: A processing method of interfacing a 3D image and a camera image is provided. In the processing method, a specific image pattern defined by a user is recognized, the recognized pattern is traced within an image, and a camera image and a 3D image are interfaced based on the tracing result. A 3D object is animated and rendered using a 3D graphic engine. The rendered image of the 3D object and the camera image are integrated and displayed.
    Type: Grant
    Filed: April 7, 2009
    Date of Patent: June 10, 2014
    Assignee: LG Electronics Inc.
    Inventors: Tae Seong Kim, Min Jeong Lee, Hang Shin Cho
  • Patent number: 8750636
    Abstract: Some embodiments allow a video editor to remove unwanted camera motion from a sequence of video images (e.g., video frames). Some embodiments are implemented in a video editing application. Some of these embodiments distinguish unwanted camera motion from the intended underlying motion of a camera (e.g., panning and zooming) and/or motion of objects within the video sequence.
    Type: Grant
    Filed: June 20, 2011
    Date of Patent: June 10, 2014
    Assignee: Apple Inc.
    Inventor: Christophe Souchard
  • Patent number: 8750620
    Abstract: Described embodiments include a system, method, and program product. A described system includes a circuit that determines a substantial correspondence between (x) a perceivable feature included in a border region of a selected digital image and (y) a perceivable feature included in each other digital image of a plurality of digital images. A circuit gathers the determined substantial correspondences. A circuit generates data indicative of a border region-overlap status of the selected digital image. A circuit adds the data to an omitted-coverage list. A circuit iteratively designates a next digital image from the plurality of digital images as the selected digital image until each digital image has been designated. This circuit initiates processing of each of the iteratively designated next digital images. A circuit identifies a possible non-imaged portion of the region of interest. A circuit outputs informational data indicative of the possible non-imaged portion.
    Type: Grant
    Filed: December 7, 2011
    Date of Patent: June 10, 2014
    Assignee: Elwha LLC
    Inventors: Roderick A. Hyde, Jordin T. Kare, Eric C. Leuthardt, Erez Lieberman, Dennis J. Rivet, Elizabeth A. Sweeney, Lowell L. Wood, Jr.
  • Publication number: 20140157212
    Abstract: A method of designing an IC design layout having similar patterns filled with a plurality of indistinguishable dummy features in a way that distinguishes all the patterns, and an IC design layout so designed. To distinguish each pattern in the layout, deviations in size and/or position from some predetermined equilibrium values are encoded into a set of selected dummy features in each pattern at the time of creating dummy features during the design stage. By identifying such encoded dummy features and measuring the deviations from image information provided by, for example, a SEM picture of a wafer or photomask, the corresponding pattern can be located in the IC layout. For quicker and easier identification of the encoded dummy features from a given pattern, a set of predetermined anchor dummy features may be used.
    Type: Application
    Filed: December 3, 2012
    Publication date: June 5, 2014
    Applicant: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Shih-Ming Chang, Tzu-Chin Lin, Jen-Chieh Lo, Yu-Po Tang, Tsong-Hua Ou
  • Publication number: 20140153831
    Abstract: The present invention is a collation/retrieval system that collates a product manufactured by or delivered from a producer or a distributor with a product to be collated, the system comprising: a storage unit that stores an image feature of a predetermined collation area of the product, determined in advance at a position relative to a reference section common to every product; a to-be-collated product feature extraction unit that receives an image of the product to be collated and detects the reference section of the product from the received image to extract an image feature of the collation area determined by reference to the reference section; and a collation unit that collates the stored image feature with the image feature of the collation area of the product to be collated.
    Type: Application
    Filed: July 25, 2012
    Publication date: June 5, 2014
    Applicant: NEC CORPORATION
    Inventor: Rui Ishiyama
  • Publication number: 20140153830
    Abstract: In one embodiment, a method includes receiving an image of a tender document; performing optical character recognition (OCR) on the image; extracting an identifier of the tender document from the image based at least in part on the OCR; comparing the extracted identifier with content from one or more data sources; requesting complementary information from at least one of the one or more data sources based at least in part on the extracted identifier; receiving the complementary information; and outputting at least some of the complementary information for display on a mobile device. Exemplary systems and computer program products are also described.
    Type: Application
    Filed: February 7, 2014
    Publication date: June 5, 2014
    Applicant: Kofax, Inc.
    Inventors: Jan W. Amtrup, Stephen Michael Thompson
  • Patent number: 8744193
    Abstract: The image signature extraction device includes an extraction unit and a generation unit. The extraction unit extracts region features from respective sub-regions in an image in accordance with a plurality of pairs of sub-regions in the image, the pairs of sub-regions including at least one pair of sub-regions in which both a combination of shapes of two sub-regions of the pair and relative position between the two sub-regions of the pair differ from those of at least one of other pairs of sub-regions. The generation unit generates an image signature to be used for identifying the image based on the extracted region features of the respective sub-regions, using, for at least one pair of sub-regions, a method different from that used for another pair of sub-regions.
    Type: Grant
    Filed: March 12, 2010
    Date of Patent: June 3, 2014
    Assignee: NEC Corporation
    Inventors: Kota Iwamoto, Ryoma Oami
  • Patent number: 8744131
    Abstract: Provided are a pedestrian-crossing marking detecting method and a pedestrian-crossing marking detecting device, wherein the existence of pedestrian crossing markings and the positions thereof can be detected accurately from within a picked-up image, even when detection of the intensity edges of painted sections is difficult. In the pedestrian-crossing marking detecting device (100), a road-surface distance of a predetermined range is calculated with respect to image data picked up from the periphery of the vehicle including the road, using camera installation information or a stereo camera's distance information, and the period of the pedestrian crossing markings is calculated on the basis of the road-surface distance of the predetermined range, and furthermore, a power of frequency is calculated using an even function and odd function of a square wave of the period as the basis function.
    Type: Grant
    Filed: September 22, 2010
    Date of Patent: June 3, 2014
    Assignee: Panasonic Corporation
    Inventors: Takuya Nanri, Hirofumi Nishimura
  • Patent number: 8744176
    Abstract: Techniques for segmenting an object at a self-checkout are provided. The techniques include capturing an image of an object at a self-checkout, dividing the image into one or more blocks, computing a confidence value for each of the one or more blocks, and eliminating one or more blocks from consideration based on the confidence value for each of the one or more blocks, wherein the one or more blocks remaining map to a region of the image containing the object.
    Type: Grant
    Filed: April 17, 2013
    Date of Patent: June 3, 2014
    Assignee: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Charles A. Otto, Sharathchandra Pankanti, Duan D. Tran
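The block-elimination step above can be illustrated briefly: divide the image into blocks, compute a confidence that each block belongs to the object, and drop low-confidence blocks; the survivors map out the object region. The confidence measure here (mean intensity against a background estimate) is an assumption, not the patented method's actual measure.

```python
# Illustrative sketch of confidence-based block elimination for
# object segmentation: keep blocks whose mean intensity differs
# sufficiently from an assumed background level.

def segment_blocks(blocks, background, threshold=30.0):
    """blocks: dict mapping (row, col) -> list of pixel values.
    Keeps blocks whose mean differs from the background by more
    than the threshold; returns their sorted positions."""
    kept = []
    for pos, pixels in blocks.items():
        mean = sum(pixels) / len(pixels)
        confidence = abs(mean - background)
        if confidence > threshold:
            kept.append(pos)
    return sorted(kept)
```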
  • Patent number: 8743440
    Abstract: A method for classifying a document (3) to be associated with at least one service (Si), including a step in which a scanner (1) having a processor (6) scans (E1) the document (3). The method also includes steps in which the processor (6): develops (E2) at least one structure (?j) representing the document (3), determines (E3) for each service (Si) at least one similitude value (?ij) between the structure (?j) representing the document and a reference structure (Rij) of the same kind and representing the service (Si), deduces (E4) from the similitude value (?ij) the service (Si) with which the document (3) is to be associated, and processes (E5) the document (3) according to the service (Si) thus associated. The invention also relates to a scanner for implementing the method.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: June 3, 2014
    Assignee: Sagemcom Documents SAS
    Inventor: Stéphane Manac'h
  • Patent number: 8744190
    Abstract: A system for efficient image feature extraction comprises a buffer for storing a slice of at least n lines of gradient direction pixel values of a directional gradient image. The buffer has an input for receiving the first plurality n of lines and an output for providing a second plurality m of columns of gradient direction pixel values of the slice to an input of a score network, which comprises comparators for comparing the gradient direction pixel values of the second plurality of columns with corresponding reference values of a reference directional gradient pattern of a shape and adders for providing partial scores depending on output values of the comparators to score network outputs which are coupled to corresponding inputs of an accumulation network having an output for providing a final score depending on the partial scores.
    Type: Grant
    Filed: January 5, 2009
    Date of Patent: June 3, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Norbert Stoeffler, Martin Raubuch
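The comparator/adder network above, which matches gradient-direction pixels against a reference pattern and accumulates partial scores, can be approximated in software. This is a simplified sketch: angles are in degrees, and the match tolerance is an assumed parameter, not taken from the patent.

```python
def shape_score(grad_dir, ref, tol=15):
    """Slide a reference gradient-direction pattern over a gradient-
    direction image (both 2-D lists of angles in degrees) and return
    the best match score found at any position."""
    H, W = len(grad_dir), len(grad_dir[0])
    h, w = len(ref), len(ref[0])
    best = 0
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = 0
            for j in range(h):
                for i in range(w):
                    d = abs(grad_dir[y + j][x + i] - ref[j][i]) % 360
                    if min(d, 360 - d) <= tol:   # comparator hit
                        score += 1               # partial score
            best = max(best, score)              # accumulation network
    return best
```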
  • Patent number: 8744665
    Abstract: The present invention relates to a control method for the localization and navigation of a mobile robot and a mobile robot using the same. More specifically, the localization and navigation of the mobile robot are controlled using inertial sensors and images: local direction descriptors are employed, the driving mode of the mobile robot is changed according to its conditions, and errors in localization are thereby minimized.
    Type: Grant
    Filed: July 28, 2009
    Date of Patent: June 3, 2014
    Assignee: Yujin Robot Co., Ltd.
    Inventors: Kyung Chul Shin, Seong Ju Park, Hee Kong Lee, Jae Young Lee, Hyung O Kim
  • Patent number: 8744144
    Abstract: A feature point generation system capable of generating a feature point that satisfies a preferred condition from a three-dimensional shape model is provided. Image group generation means 31 generates a plurality of images obtained by varying conditions with respect to the three-dimensional shape model. Evaluation means 33 calculates a first evaluation value that decreases steadily as a feature point group is distributed more uniformly on the three-dimensional shape model and a second evaluation value that decreases steadily as extraction of a feature point in an image corresponding to a feature point on the three-dimensional shape model becomes easier, and calculates an evaluation value relating to a designated feature point group as a weighted sum of the respective evaluation values. Feature point arrangement means 32 arranges the feature point group on the three-dimensional shape model so that the evaluation value calculated by the evaluation means 33 is minimized.
    Type: Grant
    Filed: March 12, 2010
    Date of Patent: June 3, 2014
    Assignee: NEC Corporation
    Inventors: Rui Ishiyama, Hidekata Hontani, Fumihiko Sakaue
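The weighted-sum evaluation described above (a uniformity term plus an ease-of-extraction term, both "lower is better") reduces to a small optimization over candidate arrangements. The cost callables here are hypothetical stand-ins for the two evaluation values.

```python
def evaluate(points, uniformity_cost, extraction_cost, w=0.5):
    """Combined evaluation for a candidate feature-point set:
    weighted sum of the two evaluation values (lower is better)."""
    return w * uniformity_cost(points) + (1 - w) * extraction_cost(points)

def best_arrangement(candidates, uniformity_cost, extraction_cost, w=0.5):
    """Pick the candidate arrangement that minimizes the evaluation."""
    return min(candidates,
               key=lambda p: evaluate(p, uniformity_cost, extraction_cost, w))
```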
  • Patent number: 8744195
    Abstract: A perimeter around a detected object in a frame of image data can be generated in a first coordinate system. The perimeter can be converted from the first coordinate system into a second coordinate system having the same aspect ratio as the first coordinate system. A first metadata entry can include dimensions of image data in the second coordinate system. A second metadata entry can provide a location and dimensions of the converted perimeter in the second coordinate space. Additional metadata can indicate matching objects between frames, position of an object relative to other objects in a frame, a probability that an object is correctly detected, and a total number of objects detected across multiple frames of image data.
    Type: Grant
    Filed: August 7, 2013
    Date of Patent: June 3, 2014
    Assignee: Apple Inc.
    Inventors: David William Singer, Courtney Ann Kennedy
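Because the two coordinate systems share an aspect ratio, converting the perimeter is a per-axis scale. A minimal sketch, assuming the perimeter is stored as an (x, y, width, height) box and each system is described by its (width, height) dimensions:

```python
def convert_perimeter(box, src_dims, dst_dims):
    """Map a detected-object box (x, y, w, h) from a source
    coordinate system into a destination system that has the
    same aspect ratio."""
    sx = dst_dims[0] / src_dims[0]
    sy = dst_dims[1] / src_dims[1]
    x, y, w, h = box
    return (x * sx, y * sy, w * sx, h * sy)

# e.g. a face box from a 1920x1080 frame expressed in a 960x540 space
print(convert_perimeter((960, 540, 192, 108), (1920, 1080), (960, 540)))
# → (480.0, 270.0, 96.0, 54.0)
```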
  • Publication number: 20140147048
    Abstract: Systems and methods are disclosed herein for ranking the quality of documents, such as documents shared or referenced in postings by users. For a first set of documents quality attributes that are indicative of quality or lack of quality are identified. Ratings of the quality of the first set of documents are received. Classifiers are associated with each document and the ratings and quality attributes for each attribute used to train class-specific models corresponding to the classifiers. Subsequently received documents are then classified and corresponding quality attributes are evaluated using the corresponding class-specific model in order to rank the quality of the document.
    Type: Application
    Filed: November 26, 2012
    Publication date: May 29, 2014
    Applicant: Wal-Mart Stores, Inc.
    Inventors: Fan Yang, Digvijay Singh Lamba
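Once a document is classified, its quality attributes are evaluated with the model trained for that class. The sketch below uses a linear model per class, which is a hypothetical simplification; the patent does not fix the model family.

```python
def quality_score(attributes, doc_class, class_models):
    """Score a document's quality attributes with its
    class-specific model (a linear model here, assumed)."""
    weights = class_models[doc_class]
    return sum(weights.get(name, 0.0) * value
               for name, value in attributes.items())

models = {"review": {"length": 0.1, "spelling_errors": -2.0}}
print(quality_score({"length": 40, "spelling_errors": 1}, "review", models))
```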
  • Publication number: 20140147049
    Abstract: A corresponding point candidate determiner (108) determines whether plural correlation peaks appear, based on a correlation value calculated by a corresponding point determiner (107). In the case where the corresponding point candidate determiner (108) determines that plural correlation peaks appear, the corresponding point determiner (107) calculates a ratio between the correlation values as represented by the correlation peaks, determines one or more corresponding points, based on the calculated ratio, and notifies the determination result to an initial position setter (106). In the case where the corresponding point determiner (107) searches plural corresponding points, the initial position setter (106) sets an initial search position with respect to each of the corresponding points in a reference image of a layer immediately higher than a target layer.
    Type: Application
    Filed: May 22, 2012
    Publication date: May 29, 2014
    Applicant: KONICA MINOLTA, INC.
    Inventors: Hironori Sumitomo, Osamu Toyama
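The peak-ratio logic above resembles a ratio test on correlation maxima: if the second-best correlation peak is close to the best, the match is ambiguous and both candidates are carried forward. A sketch under that reading, with a hypothetical threshold:

```python
def corresponding_point(correlations, peak_ratio=0.8):
    """Pick correspondence candidate(s) from {position: correlation}.
    If the second-best peak reaches `peak_ratio` of the best, both
    candidates are returned for refined search in the higher layer;
    otherwise only the single best point is returned."""
    ranked = sorted(correlations.items(), key=lambda kv: kv[1], reverse=True)
    (p1, c1), (p2, c2) = ranked[0], ranked[1]
    if c2 / c1 >= peak_ratio:
        return [p1, p2]      # plural peaks: keep both candidates
    return [p1]
```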
  • Patent number: 8737697
    Abstract: Facial feature point reliability generating means generates a reliability map of each facial feature point from a facial image. Initial facial feature point position calculating means calculates the position of each facial feature point in the facial image based on the reliability map. Off-position facial feature point judgment means judges whether or not each facial feature point is an off-position facial feature point not satisfying a prescribed condition. Facial feature point difference calculating means calculates the difference between the position of each facial feature point, excluding those judged as the off-position facial feature points, and the position of a corresponding point of the facial feature point. Facial feature point position correcting means corrects the determined positions of the facial feature points based on the results of the judgment by the off-position facial feature point judgment means and the calculation by the facial feature point difference calculating means.
    Type: Grant
    Filed: May 18, 2011
    Date of Patent: May 27, 2014
    Assignee: NEC Corporation
    Inventor: Yusuke Morishita
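The first stage of the pipeline above, computing an initial position per feature point from its reliability map, is essentially an argmax over each map. A minimal sketch with maps as 2-D lists:

```python
def initial_positions(reliability_maps):
    """Initial position of each facial feature point: the (x, y)
    cell with the highest value in that feature's reliability map."""
    positions = []
    for m in reliability_maps:
        best, pos = float("-inf"), (0, 0)
        for y, row in enumerate(m):
            for x, v in enumerate(row):
                if v > best:
                    best, pos = v, (x, y)
        positions.append(pos)
    return positions
```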
  • Patent number: 8737726
    Abstract: When photographing images including a person's face at an event, photographers tend to photograph images so that features of the event appear in a region around the person's face. An image data processing device of the present invention extracts image feature information so that an image feature calculated based on pixels in the region around the person's face, which tends to represent features of an event, is reflected more than that calculated based on pixels in a region remote from the person's face, which tends not to represent features of an event. This allows the image data processing device to calculate image feature information reflecting features of an event more than that calculated by a conventional image data processing device. The image data processing device therefore improves classification precision compared to the conventional device when classifying images using the image feature information calculated by the image data processing device.
    Type: Grant
    Filed: January 24, 2012
    Date of Patent: May 27, 2014
    Assignee: Panasonic Corporation
    Inventor: Koichiro Yamaguchi
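The proximity weighting described above can be sketched as a weighted mean in which pixels near the face contribute more than remote ones. The Gaussian falloff is an assumption; the patent only requires that the region around the face be reflected more strongly.

```python
import math

def face_weighted_mean(image, face_center, sigma):
    """Mean pixel value of a 2-D image, with weights decaying
    with distance from the face centre (Gaussian kernel, assumed)."""
    num = den = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            d2 = (x - face_center[0]) ** 2 + (y - face_center[1]) ** 2
            w = math.exp(-d2 / (2 * sigma ** 2))
            num += w * v
            den += w
    return num / den
```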
  • Patent number: 8737730
    Abstract: Color management using a vector-based color difference metric. A color difference map is comprised of color difference vectors for each of a plurality of pixels of an original image. The color difference vector for each pixel includes both a magnitude and a directionality representing a difference for color data in each pixel in the original image, relative to color data in a corresponding mapped pixel in a color mapped image. Pixels in the color difference map having large color differences in color movement relative to nearby pixels are identified in the color difference map, by applying an edge-detection algorithm to the color difference map. For each pixel that is identified in a smooth area in the original image and is identified as having a large color difference in the color difference map, a correction algorithm is applied, so as to provide a corrected color mapped image.
    Type: Grant
    Filed: April 19, 2010
    Date of Patent: May 27, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Rocklin James Sloan
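The color difference map above stores a vector per pixel, with both a magnitude and a directionality. A sketch of constructing it, reducing "directionality" to the angle of the difference in the red-green plane (an illustrative simplification, not the patent's definition):

```python
import math

def color_difference_map(original, mapped):
    """Per-pixel colour difference between an original and a
    colour-mapped image (H x W lists of (r, g, b) tuples):
    returns a magnitude map and a direction-angle map."""
    mags, dirs = [], []
    for row_o, row_m in zip(original, mapped):
        mrow, drow = [], []
        for (r0, g0, b0), (r1, g1, b1) in zip(row_o, row_m):
            dr, dg, db = r1 - r0, g1 - g0, b1 - b0
            mrow.append(math.sqrt(dr * dr + dg * dg + db * db))
            drow.append(math.atan2(dg, dr))  # angle in the r-g plane
        mags.append(mrow)
        dirs.append(drow)
    return mags, dirs
```

An edge detector applied to the magnitude map would then locate pixels whose color movement differs sharply from their neighbours', as the abstract describes.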
  • Patent number: 8738647
    Abstract: The present invention provides a method and system for image matching. The method includes receiving a query image at a query-server. Further, the method includes sending a request to one or more image-matching servers of a set of distributed image-matching servers to conduct an image-search corresponding to the query image. Furthermore, the method includes receiving, at the query-server, a list of identified matches from the image-matching servers corresponding to the query image, wherein the list of matches is identified based on the image-search at the image-matching servers. Moreover, the method includes selecting one or more matches from the list of identified matches based on a score corresponding to the identified matches.
    Type: Grant
    Filed: February 18, 2009
    Date of Patent: May 27, 2014
    Assignee: A9.com, Inc.
    Inventors: Keshav Menon, Max Delgadillo, Sunil Ramesh, Gd Ramkumar
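The fan-out/merge flow above is straightforward to sketch: send the query to each image-matching server, merge the candidate lists, and keep the best-scoring matches. The server interface (a callable returning (match_id, score) pairs) is a hypothetical stand-in.

```python
def search_distributed(query, servers, top_k=3):
    """Fan a query image out to image-matching servers, merge
    their candidate lists, and select the top matches by score."""
    candidates = []
    for server in servers:
        candidates.extend(server(query))
    candidates.sort(key=lambda m: m[1], reverse=True)
    return candidates[:top_k]
```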
  • Patent number: 8737745
    Abstract: Scene-based people metering for audience measurement is disclosed. An example method disclosed herein to perform people metering for audience measurement comprises segmenting image frames depicting a location in which an audience is expected to be present to form a sequence of scenes, for a first scene in the sequence of scenes, identifying a key frame representing a first sequence of image frames corresponding to the first scene, and processing the key frame to identify an audience depicted in the first scene.
    Type: Grant
    Filed: March 27, 2012
    Date of Patent: May 27, 2014
    Assignee: The Nielsen Company (US), LLC
    Inventor: Kevin Keqiang Deng
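The segmentation step above can be sketched with simple frame differencing: cut the sequence into scenes wherever consecutive frames differ strongly, then represent each scene by its middle frame as the key frame. The difference metric and threshold are assumptions; the patented method's metrics may differ.

```python
def segment_scenes(frames, cut_threshold=30.0):
    """Split a frame sequence (each frame a flat list of pixel
    values) into scenes at large inter-frame differences, and
    pick each scene's middle frame as its key frame."""
    scenes, start = [], 0
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        if diff / len(frames[i]) > cut_threshold:
            scenes.append((start, i - 1))
            start = i
    scenes.append((start, len(frames) - 1))
    key_frames = [(a + b) // 2 for a, b in scenes]
    return scenes, key_frames
```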
  • Patent number: 8737771
    Abstract: Disclosed are a method and a system for adding annotations into an input medium file. The method comprises a step of creating annotation detection models based on training samples formed by existing media files having annotations; a step of extracting coexistence coefficients of any two annotations based on appearance frequencies of the annotations in the training samples; a step of inputting the input medium file; a step of extracting sense-of-vision features from the input medium file; a step of obtaining initial annotations of the input medium file; a step of acquiring candidate annotations based on the initial annotations and the coexistence coefficients of the annotations in the training samples; and a step of selecting a final annotation set from the candidate annotations based on the sense-of-vision features of the input medium file and the coexistence coefficients by using the annotation detection models.
    Type: Grant
    Filed: January 12, 2011
    Date of Patent: May 27, 2014
    Assignee: Ricoh Company, Ltd.
    Inventors: Yuan Liu, Tao Li, Yinghui Xu, Yi Chen, Lei Zhang
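The coexistence-coefficient step above, extracting how often annotation pairs appear together in the training samples, can be sketched with simple counting. Normalizing the joint frequency by the smaller single frequency is one plausible choice; the patent does not fix the normalization.

```python
from collections import Counter
from itertools import combinations

def coexistence_coefficients(training_annotations):
    """Co-occurrence coefficient per annotation pair: joint document
    frequency divided by the smaller single frequency (assumed)."""
    single, joint = Counter(), Counter()
    for tags in training_annotations:
        tags = set(tags)
        single.update(tags)
        joint.update(frozenset(p) for p in combinations(sorted(tags), 2))
    return {pair: joint[pair] / min(single[a] for a in pair)
            for pair in joint}
```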
  • Patent number: 8738678
    Abstract: The value of a median or other rank of interest in a dataset is efficiently determined. Each active bit of the dataset is serially processed to compute one bit of the output value from each bit of the input dataset. If any sample in the dataset has an active bit that differs from the determined output value for that bit, then that sample can be marked as no longer in consideration. After an active bit has been processed, the data for that bit may be discarded or subsequently ignored. These techniques allow the rank value to be efficiently determined using pipelined logic in a configurable gate array (CGA) or the like. Further implementations may be enhanced to compute clipped means, to identify “next highest” or “next lowest” values, to reduce quantization errors through less-significant bit interpolation, to simultaneously process multiple values in a common pipeline, or for any other purpose.
    Type: Grant
    Filed: September 14, 2011
    Date of Patent: May 27, 2014
    Assignee: Raytheon Company
    Inventor: Darin S. Williams
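The bit-serial rank selection described above has a compact software analogue: process the samples one bit plane at a time from the MSB down, decide each output bit by counting still-eligible samples, and drop samples whose bit disagrees with the decided output bit. A sketch for non-negative integers below the given bit width (the hardware pipelining and the clipped-mean/interpolation enhancements are omitted):

```python
def rank_select(values, rank, bits=8):
    """Return the value of the given rank (0 = minimum) in `values`
    by deciding one output bit per pass, MSB first, and eliminating
    samples that disagree with each decided bit."""
    eligible = list(values)
    result = 0
    for b in range(bits - 1, -1, -1):
        mask = 1 << b
        zeros = [v for v in eligible if not v & mask]
        if rank < len(zeros):            # rank-th sample has a 0 here
            eligible = zeros
        else:                            # it has a 1; skip the zeros
            rank -= len(zeros)
            eligible = [v for v in eligible if v & mask]
            result |= mask
    return result

data = [17, 3, 250, 42, 99]
print(rank_select(data, len(data) // 2))  # median → 42
```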
  • Publication number: 20140140623
    Abstract: Techniques for searching in an image for a particular block of pixels that represents a feature are described herein. The techniques may include generating feature quality information indicating a quality of the feature with respect to blocks of pixels of the image. The feature quality information may be utilized to locate a block of pixels in a subsequent image that corresponds to the feature. For example, the feature quality information may be utilized to determine whether a block of pixels that has a threshold amount of similarity to the feature actually corresponds to the feature.
    Type: Application
    Filed: November 21, 2012
    Publication date: May 22, 2014
    Applicant: GRAVITY JACK, INC.
    Inventors: Benjamin William Hamming, Shawn David Poindexter
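One way to read the feature quality above is as distinctiveness: how much better the best-matching block is than the second-best, since a low margin means a merely similar block could be mistaken for the feature. A sketch using SSD as the similarity measure (an assumed choice, not prescribed by the publication):

```python
def feature_quality(image, block):
    """Distinctiveness of a feature block within a 2-D image:
    ratio of the second-best SSD to the best SSD over all
    positions (higher = more reliably matchable)."""
    H, W = len(image), len(image[0])
    h, w = len(block), len(block[0])
    ssds = []
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            s = sum((image[y + j][x + i] - block[j][i]) ** 2
                    for j in range(h) for i in range(w))
            ssds.append(s)
    ssds.sort()
    return ssds[1] / (ssds[0] + 1e-9)
```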
  • Patent number: 8731305
    Abstract: Map data are overlaid on satellite imagery. A road segment within the map data is identified, and the satellite imagery indicates that the road segment is at a different geographic position than a geographic position indicated by the map data. The endpoints of the road segment in the map data are aligned with the corresponding positions of the endpoints in the satellite imagery. A road template is applied at an endpoint of the road segment in the satellite imagery, and the angle of the road template that matches the angle of the road segment indicated by the satellite imagery is determined by optimizing a cost function. The road template is iteratively shifted along the road segment in the satellite imagery. The geographic position of the road segment within the map data is updated responsive to the positions and angles of the road template.
    Type: Grant
    Filed: March 9, 2011
    Date of Patent: May 20, 2014
    Assignee: Google Inc.
    Inventor: Anup Mantri
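The iterative loop above, shifting a road template along the segment and optimizing its angle at each step, can be sketched as a skeleton. The cost callable (point, angle) → float stands in for the imagery matcher, which the abstract does not specify.

```python
def align_road_segment(cost, start, end, angles, steps=10):
    """Shift a road template along the segment from `start` to
    `end`; at each step keep the candidate angle that minimizes
    the matching cost against the imagery."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        point = (start[0] + t * (end[0] - start[0]),
                 start[1] + t * (end[1] - start[1]))
        angle = min(angles, key=lambda a: cost(point, a))
        path.append((point, angle))
    return path
```

The resulting (point, angle) path would then drive the update of the road segment's geographic position in the map data.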