Directional Codes And Vectors (e.g., Freeman Chains, Compasslike Codes) Patents (Class 382/197)
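As context for this class title, a toy sketch of the Freeman 8-direction chain code, which encodes a contour as the sequence of compass directions between successive boundary pixels (the direction table and function names here are illustrative, not taken from any patent below):

```python
# Freeman 8-direction chain code, with y growing downward:
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(points):
    """Encode a pixel contour (list of (x, y), 8-connected) as Freeman codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRECTIONS.index((x1 - x0, y1 - y0)))
    return codes

# A unit square traversed E, S, W, N:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
codes = chain_code(square)  # [0, 6, 4, 2]
```

The code sequence is translation-invariant, which is what makes such directional codes useful for shape matching.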
-
Publication number: 20150030242
Abstract: A method and system are provided for combining information from a plurality of source images to form a fused image. The fused image is generated by the combination of the source images based on both local features and global features computed from the source images. Local features are computed for local regions in each source image. For each source image, the computed local features are further processed to form a local weight matrix. Global features are computed for the source images. For each source image, the computed global features are further processed to form a global weight vector. For each source image, its corresponding local weight matrix and its corresponding global weight vector are combined to form a final weight matrix. The source images are then weighted by the final weight matrices to generate the fused image.
Type: Application
Filed: July 26, 2013
Publication date: January 29, 2015
Inventor: Rui Shen
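A minimal sketch of the general weighted-fusion idea the abstract above describes: per-pixel local weights scaled by per-image global weights, normalized so the weights sum to one at every pixel (pure illustration; the actual local/global feature computations are not shown):

```python
def fuse(images, local_weights, global_weights):
    """Fuse same-sized grayscale images (2-D lists): each output pixel is a
    weighted average using local per-pixel weights scaled by per-image
    global weights, normalized at every pixel."""
    h, w = len(images[0]), len(images[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            weights = [lw[y][x] * gw for lw, gw in zip(local_weights, global_weights)]
            total = sum(weights)
            fused[y][x] = sum(wt * img[y][x] for wt, img in zip(weights, images)) / total
    return fused

a = [[10.0, 10.0], [10.0, 10.0]]
b = [[20.0, 20.0], [20.0, 20.0]]
ones = [[1.0, 1.0], [1.0, 1.0]]
fused = fuse([a, b], [ones, ones], [1.0, 3.0])  # every pixel: (10 + 3*20)/4 = 17.5
```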
-
Publication number: 20150030238
Abstract: A system may be configured as an image recognition machine that utilizes an image feature representation called local feature embedding (LFE). LFE enables generation of a feature vector that captures salient visual properties of an image to address both the fine-grained aspects and the coarse-grained aspects of recognizing a visual pattern depicted in the image. Configured to utilize image feature vectors with LFE, the system may implement a nearest class mean (NCM) classifier, as well as a scalable recognition algorithm with metric learning and max margin template selection. Accordingly, the system may be updated to accommodate new classes with very little added computational cost. This may have the effect of enabling the system to readily handle open-ended image classification problems.
Type: Application
Filed: July 29, 2013
Publication date: January 29, 2015
Applicant: Adobe Systems Incorporated
Inventors: Jianchao Yang, Guang Chen, Jonathan Brandt, Hailin Jin, Elya Shechtman, Aseem Omprakash Agarwala
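The nearest class mean (NCM) classifier mentioned above is simple to sketch, and its appeal for open-ended problems is visible in the code: adding a class is just adding one mean vector (toy example, Euclidean distance rather than a learned metric):

```python
def nearest_class_mean(x, class_means):
    """Assign feature vector x to the class whose mean is closest in
    Euclidean distance. New classes are added by inserting a new mean."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(class_means, key=lambda c: dist2(x, class_means[c]))

means = {"cat": [0.0, 0.0], "dog": [10.0, 10.0]}
label = nearest_class_mean([1.0, 2.0], means)  # "cat"
means["bird"] = [0.0, 20.0]  # extending to a new class costs one dict entry
```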
-
Patent number: 8942489
Abstract: A vector graphics classification engine and associated method for classifying vector graphics in a fixed format document is described herein and illustrated in the accompanying figures. The vector graphics classification engine defines a pipeline for categorizing vector graphics parsed from the fixed format document as font, text, paragraph, table, and page effects, such as shading, borders, underlines, and strikethroughs. Vector graphics that are not otherwise classified are designated as basic graphics. By sequencing the detection operations in a selected order, misclassification is minimized or eliminated.
Type: Grant
Filed: January 23, 2012
Date of Patent: January 27, 2015
Assignee: Microsoft Corporation
Inventors: Milan Sesum, Milos Raskovic, Drazen Zaric, Milos Lazarevic, Aljosa Obuljen
-
Patent number: 8938125
Abstract: A motion estimation method is provided, which includes the following steps: dividing a first frame to be estimated into a plurality of area units, in which each of the area units includes a plurality of blocks; and assigning a set of motion vector values to each of the area units, in which the set of motion vector values includes a plurality of predetermined motion vector values, and each of the predetermined motion vector values is assigned to at least one block in each of the area units.
Type: Grant
Filed: June 8, 2012
Date of Patent: January 20, 2015
Assignee: Novatek Microelectronics Corp.
Inventors: Yen-Sung Chen, Tsui-Chin Chen, Jiande Jiang, Yu-Tsung Hu
-
Patent number: 8923609
Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
Type: Grant
Filed: April 2, 2013
Date of Patent: December 30, 2014
Assignee: Behavioral Recognition Systems, Inc.
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
-
Patent number: 8923635
Abstract: An image processing apparatus includes a first path information calculating unit, a second path information calculating unit, and a path selecting unit. The first path information calculating unit calculates first path information which is information representing a first path for separating areas from an image. The second path information calculating unit calculates second path information representing a second path for separating the areas from the image, the second path being the reverse of the first path. The path selecting unit selects one of the first path information calculated by the first path information calculating unit and the second path information calculated by the second path information calculating unit.
Type: Grant
Filed: September 9, 2010
Date of Patent: December 30, 2014
Assignee: Fuji Xerox Co., Ltd.
Inventor: Eiichi Tanaka
-
Patent number: 8913835
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A set of key video frames are selected based on the determined video frame clusters.
Type: Grant
Filed: August 3, 2012
Date of Patent: December 16, 2014
Assignee: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Patent number: 8908976
Abstract: An image information processing apparatus comprising: an extraction unit that extracts an object from a photographed image; a calculation unit that calculates an orientation of the object as exhibited in the image; and a provision unit that provides a tag to the image according to the orientation of the object.
Type: Grant
Filed: April 15, 2011
Date of Patent: December 9, 2014
Assignee: Panasonic Intellectual Property Corporation of America
Inventors: Ryouichi Kawanishi, Tsutomu Uenoyama, Tomohiro Konuma
-
Publication number: 20140355880
Abstract: Technologies are generally presented for employing enhanced expectation maximization (EEM) in image retrieval and authentication. Using a uniform distribution as the initial condition, the EEM may converge iteratively to a global optimality. If a realization of the uniform distribution is used as the initial condition, the process may also be repeatable. In some examples, a positive perturbation scheme may be used to avoid boundary overflow, often occurring with the conventional EM algorithms. To reduce computation time and resource consumption, a histogram of a one-dimensional Gaussian Mixture Model (GMM) with two components and wavelet decomposition of an image may be employed.
Type: Application
Filed: March 8, 2012
Publication date: December 4, 2014
Inventors: Guorong Xuan, Yun-Qing Shi
-
Publication number: 20140341474
Abstract: The techniques and systems described herein are directed to isolating part-centric motion in a visual scene and stabilizing (e.g., removing) motion in the visual scene that is associated with camera-centric motion and/or object-centric motion. By removing the motion that is associated with the camera-centric motion and/or the object-centric motion, the techniques are able to focus motion feature extraction mechanisms (e.g., temporal differencing) on the isolated part-centric motion. The extracted motion features may then be used to recognize and/or detect the particular type of object and/or estimate a pose or position of a particular type of object.
Type: Application
Filed: May 16, 2013
Publication date: November 20, 2014
Applicant: Microsoft Corporation
Inventors: Piotr Dollar, Charles Lawrence Zitnick, III, Dennis I. Park
-
Patent number: 8891878
Abstract: Scale-invariant features are extracted from an image. The features are projected to a lower-dimensional subspace by multiplying the features by a matrix of random entries. The matrix of random projections is quantized to produce a matrix of quantization indices, which form a query vector for searching a database of images to retrieve metadata related to the image.
Type: Grant
Filed: June 15, 2012
Date of Patent: November 18, 2014
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Shantanu Rane, Petros T Boufounos, Mu Li
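A toy sketch of the random-projection-then-quantize idea described above (illustrative only: Gaussian random matrix, uniform scalar quantization, and a fixed seed so the query side can reproduce the same projections):

```python
import random

def random_projection_quantize(feature, dim, step, seed=0):
    """Project a feature vector onto `dim` pseudo-random directions and
    quantize each projection to an integer index with uniform step size.
    The seed stands in for the shared projection matrix."""
    rng = random.Random(seed)
    indices = []
    for _ in range(dim):
        row = [rng.gauss(0.0, 1.0) for _ in feature]  # one random direction
        proj = sum(r * f for r, f in zip(row, feature))
        indices.append(int(proj // step))
    return indices

q = random_projection_quantize([0.5, -1.2, 3.1, 0.0], dim=8, step=0.25)
```

Because both sides derive the same projection matrix, the quantization indices can be compared directly as a compact query vector.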
-
Publication number: 20140334737
Abstract: Methods for characterizing two-dimensional concept drawings are disclosed. The concept drawings comprise cross-sections intersecting at cross-hairs. The method comprises: determining, for each cross-section: a plane on which the cross-section is located, the plane having a normal vector in a three-dimensional coordinate system; and, for each cross-hair on the cross-section, a tangent vector in the three-dimensional coordinate system which is tangent to the cross-section at the cross-hair. For each cross-hair comprising i-th and j-th intersecting cross-sections, one or more constraints are satisfied, the constraints comprising: the normal vector n_i of the plane on which the i-th cross-section is located is at least approximately orthogonal to the normal vector n_j of the plane on which the j-th cross-section is located; and the tangent vector t_ij to the i-th cross-section at the cross-hair is at least approximately orthogonal to the tangent vector t_ji to the j-th cross-section at the cross-hair.
Type: Application
Filed: May 8, 2014
Publication date: November 13, 2014
Inventors: Cloud Yunfei SHAO, Adrien BOUSSEAU, Alla SHEFFER, Karansher SINGH
-
Publication number: 20140334719
Abstract: In an object identification device, each score calculator extracts a feature quantity from the image, and calculates a score using the extracted feature quantity and a model of the specified object. The score represents a reliability that the specified object is displayed in the image. A score-vector generator generates a score vector having the scores as elements thereof. A cluster determiner determines, based on previously determined clusters in which the score vector is classifiable, one of the clusters to which the score vector belongs as a target cluster. An object identifier identifies whether the specified object is displayed in the image based on one of the identification conditions. The one of the identification conditions is previously determined for the target cluster determined by the cluster determiner.
Type: Application
Filed: May 7, 2014
Publication date: November 13, 2014
Applicant: DENSO CORPORATION
Inventors: Kazuhito TAKENAKA, Takashi BANDO, Hossein TEHRANI NIKNEJAD, Masumi EGAWA
-
Patent number: 8879125
Abstract: An image processing apparatus of the present disclosure includes: a step detecting unit which detects a first step pixel and a second step pixel in the binary image and step directions of the first step pixel and the second step pixel; and an enlargement processing unit which inverts pixel values of pixels from the first pixel value to the second pixel value in a pixel area corresponding to the first step pixel in the enlarged image, and inverts pixel values of pixels from the second pixel value to the first pixel value in a pixel area corresponding to the second step pixel. The pixels of which pixel values are inverted are located at positions corresponding to its step direction. The number of the pixels of which pixel values are inverted corresponds to its step length in the binary image.
Type: Grant
Filed: June 19, 2013
Date of Patent: November 4, 2014
Assignee: KYOCERA Document Solutions Inc.
Inventors: Seiki Satomi, Toshiaki Mutsuo
-
Publication number: 20140321755
Abstract: The size of a feature descriptor is reduced with the accuracy of object identification maintained. A local feature descriptor extracting apparatus includes a feature point detecting unit configured to detect feature points in an image, a local region acquiring unit configured to acquire a local region for each of the feature points, a subregion dividing unit configured to divide each local region into a plurality of subregions, a subregion feature vector generating unit configured to generate a feature vector with a plurality of dimensions for each of the subregions in each local region, and a dimension selecting unit configured to select dimensions from the feature vector in each subregion so as to reduce a correlation between the feature vectors in proximate subregions based on positional relations among the subregions in each local region and output elements of the selected dimensions as a feature descriptor of the local region.
Type: Application
Filed: November 15, 2012
Publication date: October 30, 2014
Applicant: NEC CORPORATION
Inventors: Kota Iwamoto, Ryota Mase
-
Patent number: 8873112
Abstract: A signal value representing at least one of a plurality of types of optical characteristics is calculated for each pixel from the read signal obtained and output by reading light reflected by a document placed on a document table and a document table cover while the document is covered with the cover. It is determined, based on the signal value calculated, whether or not a target pixel is a pixel in a document region. A document region is detected from the determination result.
Type: Grant
Filed: September 11, 2012
Date of Patent: October 28, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Tetsuya Suwa, Minako Kato, Yugo Mochizuki, Takashi Nakamura, Masao Kato
-
Publication number: 20140314323
Abstract: Methods and systems of recognizing images may include an apparatus having a hardware module with logic to, for a plurality of vectors in an image, determine a first intermediate computation based on even pixels of an image vector, and determine a second intermediate computation based on odd pixels of an image vector. The logic can also combine the first and second intermediate computations into a Hessian matrix computation.
Type: Application
Filed: June 30, 2014
Publication date: October 23, 2014
Inventors: Yong Zhang, Ravishankar Iyer, Rameshkumar G. Illikkal, Donald K. Newell, Jianping Zhou
-
Publication number: 20140314324
Abstract: Most large-scale image retrieval systems are based on the Bag-of-Visual-Words model. However, the traditional Bag-of-Visual-Words model does not well capture the geometric context among local features in images, which plays an important role in image retrieval. In order to fully explore the geometric context of all visual words in images, efficient global geometric verification methods have been attracting lots of attention. Unfortunately, current existing global geometric verification methods are too computationally expensive to ensure real-time response. To solve the above problems, a novel geometric coding algorithm is used to encode the spatial context among local features for large scale partial duplicate image retrieval. With geometric square coding and geometric fan coding, our geometric coding scheme encodes the spatial relationships of local features into three geo-maps, which are used for global verification to remove spatially inconsistent matches.
Type: Application
Filed: November 9, 2012
Publication date: October 23, 2014
Inventors: Qi Tian, Wengang Zhou, Houqiang Li, Yijuan Lu
-
Patent number: 8866820
Abstract: A difference of coordinate values stored adjacent to each other is compressed by means of a statistical coding system when reading out outline font data storing coordinate values necessary for drawing a contour of a character in order of drawing the contour in a clockwise or counterclockwise direction and also a category of a line connecting a pair of coordinates simultaneously, followed by compressing the coordinate values of the outline font data. A value of a result of subtracting "A−1" from a difference of coordinate values is determined to be a difference value of coordinates if the difference of coordinate values is equal to or greater than a certain value A, and a code expressing the difference value of "0" is added in front of the codes of difference values that are smaller than the value A in the case of a category of line connecting adjacent coordinates to each other being a straight line.
Type: Grant
Filed: February 28, 2007
Date of Patent: October 21, 2014
Assignee: Fujitsu Limited
Inventors: Kohei Terazono, Yoshiyuki Okada, Masashi Takechi
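The underlying idea of coding adjacent coordinate differences is plain delta coding: along a contour, successive coordinates change little, so differences cluster near zero and compress well under a statistical coder. A minimal sketch of the delta step only (the offset-by-A and statistical-coding details above are not modeled):

```python
def delta_encode(coords):
    """Store the first coordinate absolutely, then successive differences,
    which are small for smooth contours and compress well."""
    deltas = [coords[0]]
    for prev, cur in zip(coords, coords[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation."""
    coords = [deltas[0]]
    for d in deltas[1:]:
        coords.append(coords[-1] + d)
    return coords

xs = [100, 102, 105, 105, 101]
encoded = delta_encode(xs)  # [100, 2, 3, 0, -4]
```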
-
Patent number: 8867843
Abstract: The present invention discloses a method of image denoising and a method of generating a motion vector data structure thereof. The method comprises: providing an image sequential capturing module to capture and to receive the plurality of images; generating a global motion vector based on the plurality of images in accordance with a first algorithm; reducing each image to generate reduced images; dividing each of the first reduced images into a plurality of first areas and generating a first local motion vector based on each of the first areas in accordance with a second algorithm, and via the similar way generating a second local motion vector; finally, obtaining motion vector data in the plurality of images according to the global motion vector, each of the first local motion vectors and each of the second local motion vectors.
Type: Grant
Filed: March 5, 2013
Date of Patent: October 21, 2014
Assignee: Altek Semiconductor Corporation
Inventors: Chia-Yu Wu, Shih-Yuan Peng
-
Patent number: 8867842
Abstract: According to one embodiment, an image processing system includes a decoder, a corresponding area detector and an image corrector. The decoder is configured to decode an input image signal obtained by encoding a plurality of images viewed from a plurality of viewing points different from each other, to generate a first image signal, a second image signal, and a motion vector for referring to the first image from the second image. The corresponding area detector is configured to detect a corresponding area in the second image, the corresponding area corresponding to a target block in the first image. The image corrector is configured to mix each pixel in the target block with each pixel in the corresponding area according to a degree of similarity between the target block and the corresponding area, to generate a third image signal.
Type: Grant
Filed: March 20, 2012
Date of Patent: October 21, 2014
Assignee: Kabushiki Kaisha Toshiba
Inventor: Atsushi Mochizuki
-
Patent number: 8861054
Abstract: Provided are an image processing apparatus, an image processing method, a computer-readable medium storing a computer program and an image processing system improving edge detection precision at the time of flare occurrence. The image processing apparatus includes an input unit for taking image data including a document region, a first candidate detector for detecting a first candidate of an edge point constituting a boundary line of the document region by scanning a binarized image of the image data with a predetermined pattern along a line, a second candidate detector for detecting a second candidate of an edge point based on differential values of pixels adjoining each other, and an edge point determination unit for determining the second candidate as an edge point when the second candidate is positioned more inside of the document region than the first candidate, and for otherwise determining the first candidate as an edge point.
Type: Grant
Filed: July 6, 2012
Date of Patent: October 14, 2014
Assignee: PFU Limited
Inventors: Akira Iwayama, Toshihide Suzumura
-
Publication number: 20140301636
Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.
Type: Application
Filed: June 23, 2014
Publication date: October 9, 2014
Inventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
-
Patent number: 8849040
Abstract: An image combining apparatus obtains information of an object area upon photographing of images, executes a preparation operation preceding the photographing on the basis of the obtained information, measures an elapsed time from the preparation operation to a photographing instruction, divides each of the photographed images into a plurality of areas, detects a motion vector of each divided area, weights the detected motion vector by using the elapsed time and the information of the object area, and calculates a position adjustment vector by using the weighted motion vector.
Type: Grant
Filed: May 31, 2012
Date of Patent: September 30, 2014
Assignee: Canon Kabushiki Kaisha
Inventor: Yasushi Ohwa
-
Patent number: 8849039
Abstract: A method of comparing two object poses, wherein each object pose is expressed in terms of position, orientation and scale with respect to a common coordinate system, the method comprising: calculating a distance between the two object poses, the distance being calculated using the distance function: d_sRt(X, Y) = d_s²(X, Y)/σ_s² + d_r²(X, Y)/σ_r² + d_t²(X, Y)/σ_t².
Type: Grant
Filed: February 28, 2012
Date of Patent: September 30, 2014
Assignee: Kabushiki Kaisha Toshiba
Inventors: Minh-Tri Pham, Oliver Woodford, Frank Perbet, Atsuto Maki, Bjorn Stenger, Roberto Cipolla
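A distance of this general shape combines per-component pose distances (scale, rotation, translation), each normalized by its own scale parameter. A toy sketch, where the component distances and sigma values are assumed inputs rather than anything computed by the patented method:

```python
def pose_distance(ds, dr, dt, sigma_s, sigma_r, sigma_t):
    """Combine scale (ds), rotation (dr) and translation (dt) distances
    between two poses, each squared and normalized by its own scale
    parameter, into a single scalar distance."""
    return ds**2 / sigma_s**2 + dr**2 / sigma_r**2 + dt**2 / sigma_t**2

# Illustrative numbers: translation distance is down-weighted by sigma_t=2.
d = pose_distance(ds=0.5, dr=0.2, dt=1.0, sigma_s=1.0, sigma_r=1.0, sigma_t=2.0)
# 0.25 + 0.04 + 0.25 = 0.54
```

The sigmas let heterogeneous quantities (unitless scale ratios, angles, lengths) be compared on a common footing.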
-
Patent number: 8849041
Abstract: The disclosure relates to recognizing data such as items or entities in content. In some aspects, content may be received and feature information, such as face recognition data and voice recognition data may be generated. Scene segmentation may also be performed on the content, grouping the various shots of the video content into one or more shot collections, such as scenes. For example, a decision lattice representative of possible scene segmentations may be determined and the most probable path through the decision lattice may be selected as the scene segmentation. Upon generating the feature information and performing the scene segmentation, one or more items or entities that are present in the scene may be identified.
Type: Grant
Filed: June 4, 2012
Date of Patent: September 30, 2014
Assignee: Comcast Cable Communications, LLC
Inventors: Jan Neumann, Evelyne Tzoukermann, Amit Bagga, Oliver Jojic, Bageshree Shevade, David Houghton, Corey Farrell
-
Patent number: 8842918
Abstract: To improve the precision of a motion vector of a pixel included in an image by appropriately performing region division of the image. A plurality of images is obtained, any of the plurality of the obtained images is analyzed and a feature point of the image is extracted. Feature points are added to the corners of the image, and at least one feature point is added to any of positions on the four sides formed by the feature points located at the corners of the image. Then, based on the extracted feature point and the added feature points, a motion vector of a pixel included in the image with respect to another image included in the plurality of images is determined.
Type: Grant
Filed: October 15, 2013
Date of Patent: September 23, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Manabu Yamazoe, Naoki Sumi
-
Publication number: 20140270508
Abstract: The present invention is an automated and extensible system for the analysis and retrieval of images based on region-of-interest (ROI) analysis of one or more true objects depicted by an image. The system uses an ROI database that is a relational or analytical database containing searchable vectors that represent the images stored in a repository. Entries in the database are created by an image locator and ROI classifier that work to locate images within the repository and extract relevant information that will be stored in the ROI database. The ROI classifier analyzes objects in an image to identify actual features of the true object. Graphical searches are performed by the collaborative workings of an image retrieval module, an image search requestor and an ROI query module. The image search requestor is an abstraction layer that translates user or agent search requests into the language understood by the ROI query module.
Type: Application
Filed: May 29, 2014
Publication date: September 18, 2014
Applicant: GOOGLE INC.
Inventors: Jamie E. Retterath, Robert A. Laumeyer
-
Publication number: 20140270538
Abstract: A moving image processing apparatus cumulatively sums for each reference block and in a predetermined sequence, values representing differences between corresponding pixels in a first block of a reduced image of a given image and a reference block within a search range in a reduced reference image; detects a motion vector of the first block, based on a calculation result; compares the amounts of increase among intervals of the summing process when the evaluation value is calculated for the reference block represented by the motion vector; and based on the comparison, determines a sequence to be used when the evaluation value of the reference block is calculated by cumulatively summing the values that represent differences between corresponding pixels in a second block in the given image and corresponding to the first block, and in a reference block within a search range in the reference image indicated by the motion vector.
Type: Application
Filed: February 17, 2014
Publication date: September 18, 2014
Applicant: FUJITSU LIMITED
Inventor: Hidetoshi Matsumura
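The cumulative summing described above is the sum-of-absolute-differences (SAD) evaluation used in block matching; ordering the summation well pays off because the sum can be abandoned as soon as it exceeds the best match found so far. A generic sketch of SAD with early termination (not the sequence-selection logic claimed above):

```python
def sad(block_a, block_b, best_so_far=float("inf")):
    """Sum of absolute differences between two equal-sized blocks
    (2-D lists), aborting early once the running sum can no longer
    beat the best candidate seen so far."""
    total = 0
    for row_a, row_b in zip(block_a, block_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            if total >= best_so_far:
                return total  # cannot beat the current best; stop summing
    return total

a = [[1, 2], [3, 4]]
b = [[1, 3], [6, 4]]
score = sad(a, b)  # |1-1| + |2-3| + |3-6| + |4-4| = 4
```

Summing the most-differing pixels first (the role of the sequence determined above) makes this cutoff trigger sooner on poor candidates.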
-
Publication number: 20140270491
Abstract: In one embodiment, image detection is improved or accelerated using an approximate range query to classify images. A controller is trained on a set of training feature vectors. The training feature vectors represent an image. The feature vectors are normalized to a uniform length. The controller defines a matching space that includes the set of training feature vectors. The controller is configured to identify whether an input vector for a tested image falls within the matching space based on a range query. When the input vector falls within the matching space, the tested image substantially matches the portion of the image used to train the controller.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: NAVTEQ B.V.
Inventor: Victor Lu
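A toy sketch of such a range-query match: normalize vectors to unit length, then ask whether the input lies within a radius of any training vector (a linear scan here; a real system would use a spatial index, and the radius value is an assumption):

```python
def in_matching_space(input_vec, training_vecs, radius):
    """Range query: the tested vector matches if, after normalization to
    unit length, it lies within `radius` (Euclidean) of any normalized
    training vector."""
    def normalize(v):
        length = sum(x * x for x in v) ** 0.5
        return [x / length for x in v]
    q = normalize(input_vec)
    for t in training_vecs:
        tn = normalize(t)
        dist = sum((a - b) ** 2 for a, b in zip(q, tn)) ** 0.5
        if dist <= radius:
            return True
    return False

matched = in_matching_space([2.0, 0.0], [[1.0, 0.0]], radius=0.5)  # True
```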
-
Patent number: 8830539
Abstract: A signal value representing at least one of a plurality of types of optical characteristics is calculated for each pixel from the read signal obtained and output by reading light reflected by a document placed on a document table and a document table cover while the document is covered with the cover. It is determined, based on the signal value calculated, whether or not a target pixel is a pixel in a document region. A document region is detected from the determination result.
Type: Grant
Filed: September 11, 2012
Date of Patent: September 9, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Tetsuya Suwa, Minako Kato, Yugo Mochizuki, Takashi Nakamura, Masao Kato
-
Publication number: 20140247994
Abstract: Automated analysis of video data for determination of human behavior includes segmenting a video stream into a plurality of discrete individual frame image primitives which are combined into a visual event that may encompass an activity of concern as a function of a hypothesis. The visual event is optimized by setting a binary variable to true or false as a function of one or more constraints. The visual event is processed in view of associated non-video transaction data and the binary variable by associating the visual event with a logged transaction if associable, issuing an alert if the binary variable is true and the visual event is not associable with the logged transaction, and dropping the visual event if the binary variable is false and the visual event is not associable.
Type: Application
Filed: May 13, 2014
Publication date: September 4, 2014
Applicant: International Business Machines Corporation
Inventors: Lei Ding, Quanfu Fan, Sharathchandra U. Pankanti
-
Publication number: 20140241635
Abstract: A method and/or system for screenshot orientation detection may include performing an initial optical character recognition (OCR) and/or an initial face recognition technique on a screenshot of an application. A determination of whether the screenshot orientation is correct may be made based on, for example, the initial OCR and/or the initial face recognition technique. In an event when the screenshot orientation is not correct, a determination of a correct screenshot orientation may be made. In this regard, the screenshot may be rotated (e.g., by a predetermined number of degrees). A subsequent OCR and/or a subsequent face recognition technique may be performed on the rotated screenshot. A determination may be made whether the screenshot orientation of the rotated screenshot is correct based on, for example, the subsequent OCR and/or the subsequent face recognition technique.
Type: Application
Filed: February 25, 2013
Publication date: August 28, 2014
Applicant: Google Inc.
Inventor: Google Inc.
-
Patent number: 8818075
Abstract: The invention relates to a device and method for estimating defects potentially present in an object comprising an outer surface, wherein the method comprises the steps of: a) illuminating the outer surface of the object with an inductive wave field at a predetermined frequency; b) measuring an induced wave field (H⃗) at the outer surface of the object; c) developing from the properties of the object's material a coupling matrix T associated with a depth Z of the object from the outer surface; d) solving the matrix system ([H⃗ 0⃗ 0⃗] = T · J⃗) to determine a vector (J⃗) at depth Z; e) extracting a sub-vector (J⃗_S) from the vector (J⃗) corresponding to a potential defect on the object at depth Z; and f) quantitatively estimating the potential defect from the sub-vector (J⃗_S) at depth Z, wherein the method is performed using a computer or processor.
Type: Grant
Filed: December 28, 2010
Date of Patent: August 26, 2014
Assignee: Centre National de la Recherche Scientifique (CNRS)
Inventors: Dominique Placko, Pierre-Yves Joubert, Alain Rivollet
-
Publication number: 20140233859
Abstract: An electronic device and a method of determining brightness gradients of an electronic device are provided. The electronic device includes a memory configured to store a digital image, and a processor configured to process the digital image, wherein the processor is configured to recognize at least one object of the digital image, to determine a descriptor used to recognize the object, to determine the descriptor by using at least one of a location, a direction, and a scale of a feature point on the digital image, to determine brightness gradients of pixels located within an area surrounding the feature point, and to determine the brightness gradients of the pixels based on two or more non-orthogonal fixed directions.
Type: Application
Filed: February 12, 2014
Publication date: August 21, 2014
Applicant: Samsung Electronics Co., Ltd.
Inventors: Ik-Hwan CHO, Kyu-Sung CHO, Oleksiy Seriovych PANFILOV, Gennadiy Yaroslavovich KIS
-
Publication number: 20140226906Abstract: Methods and apparatus are provided for image matching. A first image is received via an external input. One or more feature points are extracted from the first image. One or more descriptors are generated for the first image based on the one or more feature points. The first image is matched with a second image by comparing the one or more descriptors for the first image with one or more descriptors for the second image.Type: ApplicationFiled: February 14, 2013Publication date: August 14, 2014Applicant: Samsung Electronics Co., Ltd.Inventor: Woo-Sung KANG
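The comparison step in the abstract leaves the distance measure open; a minimal sketch, assuming Euclidean distance between descriptors and a greedy nearest-neighbour pairing with a hypothetical `max_dist` acceptance threshold:

```python
def match_images(desc_a, desc_b, max_dist=0.5):
    """Greedy nearest-descriptor matching between two images.
    Each descriptor is a tuple of floats; returns (i, j) index pairs."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    matches = []
    for i, da in enumerate(desc_a):
        # Nearest descriptor in the second image, by Euclidean distance.
        j, d = min(((j, dist(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j))
    return matches

pairs = match_images([(0.0, 0.0), (1.0, 1.0)],
                     [(1.1, 1.0), (0.1, 0.0)])
```

A production matcher would typically add a ratio test or cross-check to reject ambiguous matches; this sketch only shows the compare-descriptors step.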
-
Publication number: 20140226907Abstract: A system and method are disclosed for detecting when an image provided by a user for inclusion in a web page may be distracting to viewers of the web page. In one implementation, a computer system determines a measure of visual complexity of an image (e.g., a measure based on how often neighboring pixels in the image differ in intensity by at least a given percentage, etc.). The image is then included in a web page only when the determined measure of visual complexity is below a threshold.Type: ApplicationFiled: February 14, 2013Publication date: August 14, 2014Applicant: Google Inc.Inventor: David Kosslyn
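The example measure in the abstract, how often neighbouring pixels differ in intensity by at least a given percentage, reduces to counting adjacent-pixel pairs over a threshold. A sketch under those assumptions (the 10% difference and 0.3 complexity cut-offs are illustrative, not from the patent):

```python
def visual_complexity(img, pct=0.10):
    """Fraction of horizontally adjacent pixel pairs whose intensities
    differ by at least `pct` of the full 0-255 range."""
    thresh = pct * 255
    pairs = busy = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            pairs += 1
            busy += abs(a - b) >= thresh
    return busy / pairs if pairs else 0.0

def allow_in_page(img, max_complexity=0.3):
    """Include the image only when its complexity is below the threshold."""
    return visual_complexity(img) < max_complexity
```

A flat image scores 0.0 and is allowed; a checkerboard of 0/255 pixels scores 1.0 and is rejected.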
-
Patent number: 8798375Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for labeling images. In one aspect, a method includes automatically identifying an object in an image using a deep model-based and data-driven hybrid architecture.Type: GrantFiled: May 10, 2013Date of Patent: August 5, 2014Assignee: Google Inc.Inventors: Edward Y. Chang, Zhiyu Wang, Dingyin Xia
-
Patent number: 8798374Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.Type: GrantFiled: August 26, 2009Date of Patent: August 5, 2014Assignees: The Regents of the University of California, The Research Foundation of State University of New YorkInventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
-
Patent number: 8787676Abstract: An image processing apparatus includes a receiving unit, a path calculation unit, and a separation unit. The receiving unit receives an image including at least a character image. The path calculation unit calculates a separation path in the image received by the receiving unit. The separation path is a line segment that separates a character image from the image. The separation unit separates the image received by the receiving unit into plural character images using a separation path calculated by the path calculation unit. The path calculation unit calculates a separation path within a predetermined range including a portion of a character image in the image so that a cumulative value of luminance values of pixels along the separation path satisfies a predetermined condition.Type: GrantFiled: February 17, 2011Date of Patent: July 22, 2014Assignee: Fuji Xerox Co., Ltd.Inventor: Eiichi Tanaka
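A path whose cumulative luminance satisfies a condition is the classic setting for a seam-style dynamic programme: within a restricted column range, find the top-to-bottom path of minimum summed luminance. This is a sketch of that general technique, not the patent's specific condition; the function and parameter names are hypothetical:

```python
def separation_path(lum, col_range):
    """Top-to-bottom path through `lum` (rows of luminance values),
    restricted to columns in [col_range[0], col_range[1]), minimising
    cumulative luminance; dynamic programme over 8-connected moves."""
    lo, hi = col_range
    INF = float("inf")
    h, w = len(lum), len(lum[0])
    cost = [[INF] * w for _ in range(h)]
    back = [[0] * w for _ in range(h)]
    for c in range(lo, hi):
        cost[0][c] = lum[0][c]
    for r in range(1, h):
        for c in range(lo, hi):
            # Cheapest predecessor among the three columns above.
            prev = [(cost[r - 1][p], p) for p in (c - 1, c, c + 1)
                    if lo <= p < hi]
            best, back[r][c] = min(prev)
            cost[r][c] = best + lum[r][c]
    # Walk back from the cheapest bottom cell to recover the path.
    c = min(range(lo, hi), key=lambda c: cost[-1][c])
    path = [c]
    for r in range(h - 1, 0, -1):
        c = back[r][c]
        path.append(c)
    return path[::-1]  # column index of the path in each row
```

On an image with a dark gap between two bright character strokes, the path follows the gap, which is exactly the cut line wanted for character segmentation.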
-
Patent number: 8787677Abstract: Function approximation performed when a raster image is converted into a vector image is carried out simply and with high accuracy, without using feedback. Anchor points are extracted from a coordinate point sequence obtained from the raster image, and function approximation is performed on the coordinate point sequence between anchor points: an appropriate point is selected from among the coordinate points defined in a unit approximation section partitioned by the anchor points, the direction at that coordinate point is set as the tangential direction, and correction is then performed so that the control point obtained from the tangent line does not intersect another control point.Type: GrantFiled: August 24, 2011Date of Patent: July 22, 2014Assignee: Canon Kabushiki KaishaInventor: Hiroshi Oto
-
Patent number: 8787678Abstract: Disclosed is a method of visual search for objects that include straight lines. A two-step process is used, which includes detecting straight line segments in an image. The lines are generally characterized by their length, midpoint location, and orientation. Hypotheses that a particular straight line segment belongs to a known object are generated and tested. The set of hypotheses is constrained by spatial relationships in the known objects. The speed and robustness of the method and apparatus disclosed makes it immediately applicable to many computer vision applications.Type: GrantFiled: November 8, 2010Date of Patent: July 22, 2014Assignee: Recognition RoboticsInventor: Simon Melikian
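The three segment features named in the abstract, length, midpoint, and orientation, follow directly from the segment's endpoints. A minimal sketch (the 0°–180° orientation convention is an assumption; undirected lines are usually folded into that half-range):

```python
import math

def segment_features(x1, y1, x2, y2):
    """Length, midpoint, and orientation (degrees in [0, 180)) of a
    straight line segment given by its endpoints."""
    length = math.hypot(x2 - x1, y2 - y1)
    midpoint = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    # Fold into [0, 180) since a line segment has no preferred direction.
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    return length, midpoint, angle

L, mid, ang = segment_features(0, 0, 3, 4)
```

For the 3-4-5 segment above, length is 5.0, the midpoint is (1.5, 2.0), and the orientation is about 53.13°.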
-
Publication number: 20140198988Abstract: An image processing device includes a control unit which controls the read-out of a reference image, which is referred to when performing motion compensation on an image, over a target range based on the maximum vertical motion amount of the image's motion vector; and a motion compensation processing unit which performs motion compensation on the image using the motion vector and the reference image read out under the control of the control unit.Type: ApplicationFiled: November 26, 2013Publication date: July 17, 2014Applicant: SONY CORPORATIONInventor: Toshinori IHARA
-
Publication number: 20140198989Abstract: A method for determining values which are suitable for distortion correction of an image, including the following steps: a step of splitting a vector field, which is suitable for distortion correction of the image, into a sum of vector products, and a step of determining terms of the vector products as suitable values for distortion correction of the image.Type: ApplicationFiled: February 28, 2012Publication date: July 17, 2014Inventor: Stefan Weber
-
Publication number: 20140193081Abstract: When encoding a set of texture data elements for use in a graphics processing system, the direction along which the data values of the set of texture data elements in question exhibit the greatest variance in the colour space is estimated by using one or more infinite planes to divide the texture data elements in the colour space. For each such plane, the texture data element values on each side of the plane are summed to give respective sum points, and the vector between these two sum points is determined. The direction in the data space of one of the determined vectors is then used to derive endpoint colour values to use when encoding the set of texture data elements.Type: ApplicationFiled: July 2, 2013Publication date: July 10, 2014Applicant: ARM LIMITEDInventor: Jorn Nystad
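The plane-split estimate can be sketched in a few lines: split the texels by a plane through their mean with a chosen normal, sum each side, and take the difference of the sums as the dominant-direction vector. The plane-through-the-mean placement and the single fixed normal are assumptions for illustration; the patent considers one or more planes:

```python
def dominant_direction(texels, normal):
    """Estimate the direction of greatest colour variance: split the
    texels by the plane through their mean with the given normal, sum
    the texel values on each side, and return the vector between the
    two sums."""
    n = len(texels)
    mean = [sum(t[i] for t in texels) / n for i in range(3)]
    lo = [0.0, 0.0, 0.0]
    hi = [0.0, 0.0, 0.0]
    for t in texels:
        # Signed distance of the texel from the splitting plane.
        side = sum((t[i] - mean[i]) * normal[i] for i in range(3))
        dst = hi if side >= 0 else lo
        for i in range(3):
            dst[i] += t[i]
    return [hi[i] - lo[i] for i in range(3)]

# Texels spread along the red axis: the estimate points along red.
direction = dominant_direction(
    [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)], (1, 0, 0))
```

The resulting vector's direction (not its magnitude) is what would feed the endpoint-colour derivation.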
-
Publication number: 20140193080Abstract: A method of image denoising and a method of generating its motion vector data structure are disclosed. The method comprises: providing an image capturing module to sequentially capture and receive a plurality of images; generating a global motion vector from the plurality of images in accordance with a first algorithm; reducing each image to first and second reduced images; dividing each of the first reduced images into a plurality of first areas and generating a first local motion vector for each of the first areas in accordance with a second algorithm, and generating a second local motion vector in the same manner from the second reduced images; and finally obtaining motion vector data for the plurality of images according to the global motion vector, each of the first local motion vectors, and each of the second local motion vectors.Type: ApplicationFiled: March 5, 2013Publication date: July 10, 2014Applicant: ALTEK SEMICONDUCTOR CORPORATIONInventors: CHIA-YU WU, SHIH-YUAN PENG
-
Patent number: 8774524Abstract: An image region extracting unit binarizes image data input from an image inputting unit and labels the binarized image data based on a feature value of each pixel so as to segment it into a plurality of image regions. A contour connection determining unit determines whether or not connection points exist that connect the contour lines of the respective image regions extracted by a contour extracting unit to one another. When the contour connection determining unit determines that such connection points exist, a contour modifying unit connects the contour lines to one another at the connection points to modify the contour line.Type: GrantFiled: March 9, 2011Date of Patent: July 8, 2014Assignee: Canon Kabushiki KaishaInventor: Yuuichi Tsunematsu
-
Patent number: 8768066Abstract: An image processing method and an apparatus using the same are provided. The method includes the following steps: deriving a global motion vector between a first image and a second image and providing the first image, the second image and the global motion vector to a first application process, wherein the first image is the previous image of the second image; deriving a first compensated image and a second compensated image by performing a lens distortion compensation process on the first image and the second image respectively; deriving a compensated global motion vector corresponding to the first compensated image and the second compensated image by transforming and correcting the global motion vector; and providing the first compensated image, the second compensated image, and the compensated global motion vector to a second application process.Type: GrantFiled: March 20, 2012Date of Patent: July 1, 2014Assignee: Altek CorporationInventors: Chia-Yu Wu, Hong-Long Chou, Chia-Chun Tseng
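The "transforming and correcting" of the global motion vector can be pictured as undistorting both endpoints of the vector and re-taking their difference. A sketch assuming a simple one-parameter radial distortion model (the model, `k1`, and the function names are assumptions; the patent does not specify the lens model):

```python
def undistort_point(x, y, k1, cx, cy):
    """One-parameter radial model: p_u = c + (p_d - c) * (1 + k1 * r^2),
    where c is the distortion centre and r the distance from it."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    return cx + dx * (1 + k1 * r2), cy + dy * (1 + k1 * r2)

def compensate_motion_vector(p0, p1, k1, center):
    """Transform a motion vector (p0 -> p1) into distortion-compensated
    coordinates by undistorting both endpoints."""
    ux0, uy0 = undistort_point(*p0, k1, *center)
    ux1, uy1 = undistort_point(*p1, k1, *center)
    return ux1 - ux0, uy1 - uy0

# With zero distortion the vector is unchanged; with k1 != 0 it is
# stretched more the farther its endpoints lie from the centre.
mv = compensate_motion_vector((1, 1), (2, 1), 0.0, (0, 0))
```

This matches the abstract's ordering: the raw global motion vector serves the first application process, while the compensated images get a correspondingly compensated vector.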
-
Patent number: 8761517Abstract: Automated analysis of video data for determination of human behavior includes segmenting a video stream into a plurality of discrete individual frame image primitives which are combined into a visual event that may encompass an activity of concern as a function of a hypothesis. The visual event is optimized by setting a binary variable to true or false as a function of one or more constraints. The visual event is processed in view of associated non-video transaction data and the binary variable by associating the visual event with a logged transaction if associable, issuing an alert if the binary variable is true and the visual event is not associable with the logged transaction, and dropping the visual event if the binary variable is false and the visual event is not associable.Type: GrantFiled: May 23, 2013Date of Patent: June 24, 2014Assignee: International Business Machines CorporationInventors: Lei Ding, Quanfu Fan, Sharathchandra U. Pankanti
-
Publication number: 20140169680Abstract: Examples disclosed herein relate to image object recognition based on a feature vector with context information. A processor may create an expanded feature vector related to a first area of an image including context information related to the first area. The processor may determine the presence of an object in the image based on the feature vector and output information about the determined object.Type: ApplicationFiled: December 18, 2012Publication date: June 19, 2014Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.