Neural Networks Patents (Class 382/156)
-
Publication number: 20140177946
Abstract: Disclosed herein are an apparatus and method for detecting a person from an input video image with high reliability by using gradient-based feature vectors and a neural network. The human detection apparatus includes an image preprocessing unit for modeling a background image from an input image. A moving object area setting unit sets a moving object area in which motion is present by obtaining the difference between the input image and the background image. A human region detection unit extracts gradient-based feature vectors for a whole body and an upper body from the moving object area, and detects a human region in which a person is present by using the gradient-based feature vectors for the whole body and the upper body as input to a neural network classifier. A decision unit decides whether an object in the detected human region is a person or a non-person.
Type: Application
Filed: August 5, 2013
Publication date: June 26, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kil-Taek LIM, Yun-Su CHUNG, Byung-Gil HAN, Eun-Chang CHOI, Soo-In LEE
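The pipeline this abstract describes (background modeling, difference-based moving-object area, gradient-based feature vectors fed to a classifier) can be illustrated with a toy sketch. This is not the patented method; the threshold, bin count, and function names are illustrative assumptions:

```python
import numpy as np

def moving_object_mask(frame, background, thresh=25):
    """Moving object area: pixels whose absolute difference from the
    modeled background image exceeds a threshold."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff > thresh

def gradient_orientation_histogram(patch, bins=8):
    """Gradient-based feature vector: a magnitude-weighted histogram of
    gradient orientations over the patch (a HOG-like descriptor)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientation folded into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist
```

In the patented system such descriptors (for the whole body and the upper body) would be concatenated and passed to a trained neural network classifier; here they are shown only up to the feature stage.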
-
Patent number: 8761514
Abstract: A character recognition apparatus and method based on character orientation are provided, in which an input image is binarized, at least one character area is extracted from the binarized image, a slope value of the extracted character area is calculated, the calculated slope value is set as a character feature value, and a character is recognized by a neural network that recognizes a plurality of characters from the set character feature value. Accordingly, the probability of wrongly recognizing a similar character decreases, and the recognition ratio of each character increases.
Type: Grant
Filed: February 28, 2011
Date of Patent: June 24, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jeong-Wan Park, Sang-Wook Oh, Do-Hyeon Kim, Hee-Bum Ahn
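One simple way to compute a slope value for a binarized character area, as this abstract describes, is a least-squares line fit to the foreground pixel coordinates. A minimal sketch (the fitting choice and function name are assumptions, not taken from the patent):

```python
def character_slope(binary_char):
    """Estimate the slope of a binarized character area by least-squares
    fitting a line to the coordinates of its foreground (truthy) pixels;
    the slope then serves as one feature value fed to the recognizer."""
    pts = [(x, y) for y, row in enumerate(binary_char)
                  for x, v in enumerate(row) if v]
    if not pts:
        return 0.0
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    var_x = sum((x - mx) ** 2 for x, _ in pts)
    cov = sum((x - mx) * (y - my) for x, y in pts)
    return cov / var_x if var_x else 0.0
```

A diagonal stroke yields a slope near 1, a horizontal stroke a slope near 0; similar characters that differ mainly in stroke inclination become easier to separate.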
-
Patent number: 8744190
Abstract: A system for efficient image feature extraction comprises a buffer for storing a slice of at least n lines of gradient direction pixel values of a directional gradient image. The buffer has an input for receiving the first plurality n of lines and an output for providing a second plurality m of columns of gradient direction pixel values of the slice to an input of a score network. The score network comprises comparators for comparing the gradient direction pixel values of the second plurality of columns with corresponding reference values of a reference directional gradient pattern of a shape, and adders for providing partial scores, depending on the output values of the comparators, to score network outputs, which are coupled to corresponding inputs of an accumulation network having an output for providing a final score depending on the partial scores.
Type: Grant
Filed: January 5, 2009
Date of Patent: June 3, 2014
Assignee: Freescale Semiconductor, Inc.
Inventors: Norbert Stoeffler, Martin Raubuch
-
Patent number: 8731300
Abstract: A wordspotting system and method are disclosed for processing candidate word images extracted from handwritten documents. In response to a user inputting a selected query string, such as a word to be searched in one or more of the handwritten documents, the system automatically generates at least one computer-generated image based on the query string in a selected font or fonts. A model is trained on the computer-generated image(s) and is thereafter used in scoring the candidate handwritten word images. The candidate or candidates with the highest scores, and/or the documents containing them, can be presented to the user, tagged, or otherwise processed differently from other candidate word images/documents.
Type: Grant
Filed: November 16, 2012
Date of Patent: May 20, 2014
Assignee: Xerox Corporation
Inventors: Jose A. Rodriguez Serrano, Florent C. Perronnin
-
Patent number: 8724866
Abstract: Described herein is a framework for automatically classifying a structure in digital image data. In one implementation, a first set of features is extracted from the digital image data and used to learn a discriminative model. The discriminative model may be associated with at least one conditional probability of a class label given an image data observation. Based on the conditional probability, at least one likelihood measure of the structure co-occurring with another structure in the same sub-volume of the digital image data is determined. A second set of features may then be extracted from the likelihood measure.
Type: Grant
Filed: December 8, 2010
Date of Patent: May 13, 2014
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: Dijia Wu, Le Lu, Jinbo Bi, Yoshihisa Shinagawa, Marcos Salganicoff
-
Patent number: 8693765
Abstract: The invention includes a method for recognizing shapes using a preprocessing mechanism, which decomposes a source signal into basic components called atoms, and a recognition mechanism based on the result of that decomposition. In the method, the preprocessing mechanism includes at least one learning phase, which uses a database of signals representative of the source to be processed and culminates in a set of signals called kernels, the kernels being adapted to minimize a cost function representing their capacity to correctly reconstruct the signals from the database while guaranteeing a sparse decomposition of the source signal; and a coding phase for decomposing the source signal into atoms, the atoms being generated by shifting the kernels according to their index, each atom being associated with a decomposition coefficient. The invention also includes a shape recognition system for implementing the method.
Type: Grant
Filed: August 13, 2009
Date of Patent: April 8, 2014
Assignee: Commissariat a l'Energie Atomique et aux Energies Alternatives
Inventors: David Mercier, Anthony Larue
-
Patent number: 8687879
Abstract: Provides quantitative data about a two-or-more-dimensional image. Classifies and counts the number of entities an image contains. Each entity comprises a structure or some other type of identifiable portion having definable characteristics. The entities located within an image may differ in shape, color, texture, etc., but still belong to the same classification. Alternatively, entities with a similar color/texture may be classified as one type while entities with a different color/texture are classified as another type. May quantify image data according to a set of changing criteria and derive one or more classifications for the entities in an image. That is, provides a way for a computer to determine what kinds of entities are in an image and to count the total number of entities visually identified in it. Information gathered during a training process may be stored and applied across different images.
Type: Grant
Filed: October 6, 2011
Date of Patent: April 1, 2014
Inventors: Carl W. Cotman, Charles F. Chubb, Yoshiyuki Inagaki, Brian Cummings
-
Publication number: 20140086480
Abstract: There is provided a signal processing apparatus including a learning unit that learns a plurality of base signals whose coefficients become sparse, for each feature of the signals, such that the signals are represented by a linear operation of the plurality of base signals.
Type: Application
Filed: September 10, 2013
Publication date: March 27, 2014
Applicant: SONY CORPORATION
Inventors: Jun LUO, Liqing ZHANG, Haohua ZHAO, Weizhi XU, Zhenbang SUN, Wei SHI, Takefumi NAGUMO
-
Publication number: 20140086479
Abstract: There is provided a signal processing apparatus including a learning unit that learns a plurality of base signals whose coefficients become sparse, using a cost function including a term expressing a correspondence between the coefficients, such that signals are represented by a linear operation of the plurality of base signals.
Type: Application
Filed: September 10, 2013
Publication date: March 27, 2014
Applicant: SONY CORPORATION
Inventors: Jun LUO, Liqing ZHANG, Haohua ZHAO, Weizhi XU, Zhenbang SUN, Wei SHI, Takefumi NAGUMO
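The two abstracts above describe learning base signals whose coefficients become sparse, so that signals are represented as a linear combination of those bases. The coefficient-inference half of that idea (with the base signals held fixed) can be sketched with iterative soft-thresholding (ISTA); this is a generic sparse-coding sketch, not the filed method, and the names, λ, and iteration count are assumptions:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Find sparse coefficients a with x ≈ D @ a by iterative
    soft-thresholding on the lasso objective
    0.5 * ||x - D a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)         # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a
```

Dictionary learning as in the filings would alternate this coefficient step with an update of the base signals D; only the sparse step is shown here.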
-
Patent number: 8649613
Abstract: A classifier training system trains unified classifiers for categorizing videos representing different categories of a category graph. The unified classifiers unify the outputs of a number of separate initial classifiers trained from disparate subsets of a training set of media items. The training process divides the training set into a number of bags, and applies a boosting algorithm to the bags, thus enhancing the accuracy of the unified classifiers.
Type: Grant
Filed: November 3, 2011
Date of Patent: February 11, 2014
Assignee: Google Inc.
Inventors: Thomas Leung, Yang Song, John Zhang
-
Patent number: 8644624
Abstract: Embodiments include a scene classification system and method. In one embodiment, a method includes forming a first plurality of image features from an input image and processing the first plurality of image features in a first scene classifier.
Type: Grant
Filed: July 28, 2009
Date of Patent: February 4, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventors: Li Tao, Yeong-Taeg Kim
-
Patent number: 8644599
Abstract: A method and apparatus for processing image data are provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications.
Type: Grant
Filed: May 20, 2013
Date of Patent: February 4, 2014
Assignee: Edge 3 Technologies, Inc.
Inventor: Tarek El Dokor
-
Publication number: 20140016858
Abstract: Apparatus and methods for detecting salient features. In one implementation, an image processing apparatus utilizes latency coding and a spiking neuron network to encode image brightness into spike latency. The spike latency is compared to a saliency window in order to detect early-responding neurons. Salient features of the image are associated with the early-responding neurons. A dedicated inhibitory neuron receives the salient feature indication and provides an inhibitory signal to the remaining neurons within the network. The inhibition signal reduces the probability of responses by the remaining neurons, thereby facilitating salient feature detection within the image by the network. Salient feature detection can be used, for example, for image compression, background removal, and content distribution.
Type: Application
Filed: July 12, 2012
Publication date: January 16, 2014
Inventor: Micah Richert
-
Patent number: 8630482
Abstract: A bit code converter transforms a learning feature vector using a transformation matrix updated by a transformation matrix update unit, and converts the transformed learning feature vector into a bit code. When the transformation matrix update unit substitutes a substitution candidate for an element of the transformation matrix, a cost function calculator fixes, as that element, the substitution candidate that minimizes a cost function. The transformation matrix update unit selects the elements in sequence, and the cost function calculator fixes each selected element in turn, thereby finally fixing the optimum transformation matrix.
Type: Grant
Filed: February 27, 2012
Date of Patent: January 14, 2014
Assignee: Denso IT Laboratory, Inc.
Inventors: Mitsuru Ambai, Yuichi Yoshida
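The update loop this abstract describes — substitute each candidate for one matrix element, keep the candidate that minimizes the cost, and sweep over the elements in sequence — is a greedy coordinate search over a discrete candidate set. A minimal sketch under those assumptions (the cost function, candidate set, and sweep count are illustrative, not from the patent):

```python
def greedy_matrix_search(cost_fn, rows, cols, candidates, n_sweeps=3):
    """Sequentially select each matrix element, substitute every candidate
    value for it, and fix the candidate that minimizes cost_fn(T)."""
    T = [[candidates[0]] * cols for _ in range(rows)]
    for _ in range(n_sweeps):
        for i in range(rows):
            for j in range(cols):
                best_c, best_cost = T[i][j], cost_fn(T)
                for c in candidates:
                    T[i][j] = c           # try this substitution candidate
                    cost = cost_fn(T)
                    if cost < best_cost:
                        best_c, best_cost = c, cost
                T[i][j] = best_c          # fix the minimizing candidate
    return T
```

In the patented converter the candidates would be drawn from a small discrete set (suited to producing bit codes) and the cost would measure how well the transformed feature vectors binarize; here a toy squared-error cost stands in.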
-
Patent number: 8626686
Abstract: Simulated neural circuitry is trained using sequences of images representing moving objects, such that the simulated neural circuitry recognizes objects by having the presence of lower level object features that occurred in temporal sequence in the images representing moving objects trigger the activation of higher level object representations. Thereafter, an image of an object that includes lower level object features is received, the trained simulated neural circuitry activates a higher level representation of the object in response to the lower level object features from the image, and the object is recognized using the trained simulated neural circuitry.
Type: Grant
Filed: June 20, 2007
Date of Patent: January 7, 2014
Assignee: Evolved Machines, Inc.
Inventor: Paul A. Rhodes
-
Patent number: 8625884
Abstract: Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert.
Type: Grant
Filed: August 18, 2009
Date of Patent: January 7, 2014
Assignee: Behavioral Recognition Systems, Inc.
Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
-
Patent number: 8625885
Abstract: Systems and methods for automated pattern recognition and object detection. The method can be rapidly developed and improved using a minimal number of algorithms for the data content to fully discriminate details in the data, while reducing the need for human analysis. The system includes a data analysis system that recognizes patterns and detects objects in data without requiring adaptation of the system to a particular application, environment, or data content. The system evaluates the data in its native form independent of the form of presentation or the form of the post-processed data.
Type: Grant
Filed: February 28, 2011
Date of Patent: January 7, 2014
Assignee: Intelliscience Corporation
Inventors: Robert M. Brinson, Jr., Nicholas Levi Middleton, Bryan Glenn Donaldson
-
Patent number: 8620084
Abstract: Shape recognition is performed based on determining whether one or more ink strokes is not part of a shape or a partial shape. Ink strokes are divided into segments and the segments analyzed employing a relative angular distance histogram. The histogram analysis yields stable, incremental, and discriminating featurization results. Neural networks may also be employed along with the histogram analysis to determine complete shapes from partial shape entries and autocomplete suggestions provided to users for conversion of the shape into a known object.
Type: Grant
Filed: June 26, 2008
Date of Patent: December 31, 2013
Assignee: Microsoft Corporation
Inventors: Alexander Kolmykov-Zotov, Sashi Raghupathy, Xin Wang
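A relative angular distance histogram of the kind this abstract mentions can be sketched by splitting a stroke polyline into segments and histogramming the turn angles between consecutive segments; using relative rather than absolute angles keeps the feature stable under rotation and translation of the whole shape. A toy version (bin count and function name are assumptions, not the patented featurization):

```python
import math

def relative_angle_histogram(points, bins=8):
    """Histogram of relative angles between consecutive stroke segments
    of a polyline given as a list of (x, y) points."""
    angs = [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
    hist = [0] * bins
    for a1, a2 in zip(angs, angs[1:]):
        rel = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
        b = min(int((rel + math.pi) / (2 * math.pi) * bins), bins - 1)
        hist[b] += 1
    return hist
```

Because the histogram depends only on turns between segments, translating the ink leaves it unchanged, which is one reason such features suit incremental partial-shape matching.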
-
Publication number: 20130343641
Abstract: A system and method for labelling aerial images. A neural network generates predicted map data. The parameters of the neural network are trained by optimizing an objective function which compensates for noise in the map images. The function compensates for both omission noise and registration noise.
Type: Application
Filed: June 21, 2013
Publication date: December 26, 2013
Inventors: Volodymyr Mnih, Geoffrey E. Hinton
-
Patent number: 8611675
Abstract: Techniques are described herein for generating and displaying a confusion matrix wherein a data item belonging to one or more actual classes is predicted into a class. The classes into which the data item may be predicted (the "predicted classes") are ranked according to a score that, in one embodiment, indicates the confidence of the prediction. If the data item is predicted into a class that is one of the top K ranked predicted classes, the prediction is considered accurate and an entry is created in a cell of the confusion matrix indicating the accurate prediction. If the data item is not predicted into any of the top K ranked predicted classes, the prediction is considered inaccurate and an entry is created in a cell of the confusion matrix indicating the inaccurate prediction.
Type: Grant
Filed: December 22, 2006
Date of Patent: December 17, 2013
Assignee: Yahoo! Inc.
Inventors: Jyh-Herng Chow, Byron Dom, Dao-I Lin
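The top-K accounting this abstract describes can be sketched in a few lines: a prediction counts as accurate when the actual class appears among the top K ranked predicted classes, and each record contributes one confusion-matrix cell either way. A minimal sketch (data layout and names are assumptions, not from the patent):

```python
from collections import Counter

def topk_confusion(records, k=1):
    """records: list of (actual_class, ranked_predictions), with the
    predictions sorted by descending confidence score. Returns a sparse
    confusion matrix keyed by (actual, predicted) plus top-k accuracy."""
    matrix = Counter()
    accurate = 0
    for actual, ranked in records:
        if actual in ranked[:k]:
            accurate += 1
            matrix[(actual, actual)] += 1   # accurate: count on the diagonal
        else:
            matrix[(actual, ranked[0])] += 1  # inaccurate: top-1 gets the blame
    return matrix, accurate / len(records)
```

With k=1 this reduces to an ordinary confusion matrix; larger k credits any prediction whose true class made the top K of the ranking.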
-
Patent number: 8577130
Abstract: Described herein is a technology for facilitating deformable model-based segmentation of image data. In one implementation, the technology includes receiving training image data (202) and automatically constructing a hierarchical structure (204) based on the training image data. At least one spatially adaptive boundary detector is learned based on a node of the hierarchical structure (206).
Type: Grant
Filed: March 15, 2010
Date of Patent: November 5, 2013
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: Maneesh Dewan, Yiqiang Zhan, Xiang Sean Zhou, Zhao Yi
-
Patent number: 8571298
Abstract: A system and method for tallying objects presented for purchase preferably images the objects with a machine vision system while the objects are still, or substantially still. Images of the objects may be used to recognize the objects and to collect information about each object, such as the price. A pre-tally list may be generated and displayed to a customer showing the customer the cost of the recognized objects. A prompt on a customer display may be given to urge a customer to re-orient unrecognized objects to assist the machine vision system with recognizing such unrecognized objects. A tallying event, such as removing a recognized object from the machine vision system's field of view, preferably automatically tallies recognized objects so it is not necessary for a cashier to scan or otherwise input object information into a point of sale system.
Type: Grant
Filed: December 11, 2009
Date of Patent: October 29, 2013
Assignee: DATALOGIC ADC, Inc.
Inventors: Alexander M. McQueen, Craig D. Cherry
-
Publication number: 20130266214
Abstract: A method for training an image processing neural network without human selection of features may include providing a training set of images labeled with two or more classifications, providing an image processing toolbox with image transforms that can be applied to the training set, generating a random set of feature extraction pipelines, where each feature extraction pipeline includes a sequence of image transforms randomly selected from the image processing toolbox and randomly selected control parameters associated with the sequence of image transforms. The method may also include coupling a first stage classifier to an output of each feature extraction pipeline and executing a genetic algorithm to conduct genetic modification of each feature extraction pipeline and train each first stage classifier on the training set, and coupling a second stage classifier to each of the first stage classifiers in order to increase classification accuracy.
Type: Application
Filed: April 5, 2013
Publication date: October 10, 2013
Applicant: Brigham Young University
Inventors: Kirt Dwayne Lillywhite, Dah-Jye Lee
-
Patent number: 8548231
Abstract: First-order predicate logics are provided, extended with a bilattice-based uncertainty handling formalism, as a means of formally encoding pattern grammars, to parse a set of image features and detect the presence of different patterns of interest implemented on a processor. Information from different sources, and uncertainties from detections, are integrated within the bilattice framework. Automated logical rule weight learning in the computer vision domain applies a rule weight optimization method which casts the instantiated inference tree as a knowledge-based neural network, to converge upon a set of rule weights that give optimal performance within the bilattice framework. Applications are in (a) detecting the presence of humans under partial occlusions, (b) detecting large complex man-made structures in satellite imagery, (c) detection of spatio-temporal human and vehicular activities in video, and (d) parsing of Graphical User Interfaces.
Type: Grant
Filed: March 16, 2010
Date of Patent: October 1, 2013
Assignee: Siemens Corporation
Inventors: Vinay Damodar Shet, Maneesh Kumar Singh, Claus Bahlmann, Visvanathan Ramesh, Stephen P. Masticola, Jan Neumann, Toufiq Parag, Michael A. Gall, Roberto Antonio Suarez
-
Patent number: 8543525
Abstract: The present invention addresses the problem of the automatic detection of events on a sport field, in particular Goal/NoGoal events, by signalling them to the match management, which can autonomously take the final decision upon the event. The system is not invasive for the field structures, nor does it require interrupting the game or modifying its rules; it aims only at detecting the event occurrence objectively and at supporting the referees' decisions by means of specific signalling of the detected events.
Type: Grant
Filed: February 28, 2007
Date of Patent: September 24, 2013
Assignee: Consiglio Nazionale delle Ricerche (CNR)
Inventors: Arcangelo Distante, Ettore Stella, Massimiliano Nitti, Liborio Capozzo, Tiziana Rita D'Orazio, Massimo Ianigro, Nicola Mosca, Marco Leo, Paolo Spagnolo, Pier Luigi Mazzeo
-
Patent number: 8529446
Abstract: In a method for determining a parameter in an automatic study and data management system, data is gathered in a knowledge database, and a parameter is determined based on the data gathered in the knowledge database. The data is correlated to at least one of a configuration and an implementation of a previous clinical study. The parameter is usable for configuring a future clinical study.
Type: Grant
Filed: May 31, 2007
Date of Patent: September 10, 2013
Assignee: Siemens Aktiengesellschaft
Inventors: Markus Schmidt, Siegfried Schneider, Gudrun Zahlmann
-
Publication number: 20130216126
Abstract: A user emotion detection method for a handwriting input electronic device is provided. The method includes steps of: obtaining at least one handwriting input characteristic parameter; determining a user emotion parameter by an artificial neural network of the handwriting input electronic device according to the handwriting input characteristic value and at least one associated linkage value; displaying the user emotion parameter on a touch display panel of the handwriting input electronic device; receiving a user feedback parameter; determining whether to adjust the at least one associated linkage value and if yes, adjusting the at least one associated linkage value according to the user feedback parameter to construct and adjust the artificial neural network.
Type: Application
Filed: December 17, 2012
Publication date: August 22, 2013
Applicant: WISTRON CORPORATION
Inventor: Wistron Corporation
-
Patent number: 8515160
Abstract: A bio-inspired actionable intelligence method and system is disclosed. The actionable intelligence method comprises recognizing entities in an imagery signal, detecting and classifying anomalous entities, and learning new hierarchical relationships between different classes of entities. A knowledge database is updated after each new learning experience to aid in future searches and classification. The method can accommodate incremental learning via Adaptive Resonance Theory (ART).
Type: Grant
Filed: December 17, 2008
Date of Patent: August 20, 2013
Assignee: HRL Laboratories, LLC
Inventors: Deepak Khosla, Suhas E. Chelian
-
Patent number: 8509537
Abstract: A wordspotting system and method are disclosed. The method includes receiving a keyword and, for each of a set of typographical fonts, synthesizing a word image based on the keyword. A keyword model is trained based on the synthesized word images and the respective weights for each of the set of typographical fonts. Using the trained keyword model, handwritten word images of a collection of handwritten word images which match the keyword are identified. The weights allow a large set of fonts to be considered, with the weights indicating the relative relevance of each font for modeling a set of handwritten word images.
Type: Grant
Filed: August 5, 2010
Date of Patent: August 13, 2013
Assignee: Xerox Corporation
Inventors: Florent C. Perronnin, Thierry Lehoux, Francois Ragnet
-
Patent number: 8509523
Abstract: A plurality of features determined from at least a portion of an image containing information about an object are processed with an inclusive neural network, and with a plurality of exclusive neural networks, so as to provide a plurality of inclusive probability values representing probabilities that the portion of the image corresponds to at least one of at least two different classes of objects, and, for each exclusive neural network, so as to provide first and second exclusive probability values representing probabilities that the portion of the image respectively corresponds, or not, to at least one class of objects. The plurality of inclusive probability values, and the first and second exclusive probability values from each of the exclusive neural networks, provide for identifying whether the portion of the image corresponds, or not, to any of the at least two different classes of objects.
Type: Grant
Filed: November 1, 2011
Date of Patent: August 13, 2013
Assignee: TK Holdings, Inc.
Inventor: Gregory G. Schamp
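One plausible way to combine the two kinds of outputs this abstract describes — a multi-class inclusive network plus per-class exclusive (one-vs-rest) networks — is to accept a class only when both networks agree it is probable. This decision rule and its threshold are illustrative assumptions, not the patented logic:

```python
def combine_inclusive_exclusive(incl_probs, excl_probs, p_min=0.5):
    """incl_probs[c]: inclusive-network probability that the image portion
    belongs to class c. excl_probs[c]: class c's exclusive-network
    probability that the portion belongs to that class (vs. not).
    Returns the accepted class index, or -1 when no class qualifies."""
    c = max(range(len(incl_probs)), key=lambda i: incl_probs[i])
    if incl_probs[c] >= p_min and excl_probs[c] >= p_min:
        return c
    return -1
```

Requiring agreement between the shared multi-class view and the dedicated binary view is what lets the system report "none of the known classes" instead of forcing a label.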
-
Patent number: 8509561
Abstract: A technique for determining a characteristic of a face or certain other object within a scene captured in a digital image including acquiring an image and applying a linear texture model that is constructed based on a training data set and that includes a class of objects including a first subset of model components that exhibit a dependency on directional lighting variations and a second subset of model components which are independent of directional lighting variations. A fit of the model to the face or certain other object is obtained including adjusting one or more individual values of one or more of the model components of the linear texture model. Based on the obtained fit of the model to the face or certain other object in the scene, a characteristic of the face or certain other object is determined.
Type: Grant
Filed: February 27, 2008
Date of Patent: August 13, 2013
Assignees: DigitalOptics Corporation Europe Limited, National University of Ireland
Inventors: Mircea Ionita, Peter Corcoran, Ioana Bacivarov
-
Publication number: 20130188863
Abstract: A method for context-aware text recognition employing two neuromorphic computing models: an auto-associative neural network and cogent confabulation. The neural network model performs character recognition from the input image and produces one or more candidates for each character in the text image input. The confabulation models perform context-aware text extraction and completion, based on the character recognition outputs and the word and sentence knowledge bases.
Type: Application
Filed: December 17, 2012
Publication date: July 25, 2013
Inventors: Richard Linderman, Qinru Qiu, Qing Wu, Morgan Bishop
-
Patent number: 8494257
Abstract: Data set generation and data set presentation for image processing are described. The processing includes determining a location for each of one or more musical artifacts in an image, identifying a corresponding label for each of the musical artifacts, generating a training file that associates the identified labels and determined locations of the musical artifacts with the image, and presenting the training file to a neural network for training.
Type: Grant
Filed: February 13, 2009
Date of Patent: July 23, 2013
Assignee: Museami, Inc.
Inventors: Robert Taub, George Tourtellot
-
Patent number: 8494256
Abstract: The present invention relates to an image processing apparatus and method, a learning apparatus and method, and a program which allow reliable evaluation of whether or not the subject appears sharp. A subject extraction unit 21 uses an input image to generate a subject map representing a region including the subject in the input image, and supplies the subject map to a determination unit 22. The determination unit 22 uses the input image and the subject map from the subject extraction unit 21 to determine the blur extent of the region of the subject on the input image, and calculates the score of the input image on the basis of the blur extent. This score is regarded as an index for evaluating the degree to which the subject appears sharp in the input image. The present invention can be applied to an image capture apparatus.
Type: Grant
Filed: August 26, 2009
Date of Patent: July 23, 2013
Assignee: Sony Corporation
Inventors: Kazuki Aisaka, Masatoshi Yokokawa, Jun Murayama
-
Publication number: 20130163858
Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and to detect a component region by using the extracted component edges; a feature extracting unit configured to extract a component feature from the detected component region and create a feature vector from the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network, trained in advance to recognize a component category through a plurality of component image samples, and to recognize the component category according to the result.
Type: Application
Filed: July 10, 2012
Publication date: June 27, 2013
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kye Kyung KIM, Woo Han YUN, Hye Jin KIM, Su Young CHI, Jae Yeon LEE, Mun Sung HAN, Jae Hong KIM, Joo Chan SOHN
-
Patent number: 8467599
Abstract: A method and apparatus for processing image data are provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications.
Type: Grant
Filed: August 31, 2011
Date of Patent: June 18, 2013
Assignee: Edge 3 Technologies, Inc.
Inventor: Tarek El Dokor
-
Patent number: 8463025
Abstract: A cell phone having distributed artificial intelligence services is provided. The cell phone includes a neural network for performing a first pass of object recognition on an image to identify objects of interest therein based on one or more criterion. The cell phone also includes a patch generator for deriving patches from the objects of interest. Each of the patches includes a portion of a respective one of the objects of interest. The cell phone additionally includes a transmitter for transmitting the patches to a server for further processing in place of an entirety of the image to reduce network traffic.
Type: Grant
Filed: April 26, 2011
Date of Patent: June 11, 2013
Assignee: NEC Laboratories America, Inc.
Inventors: Iain Melvin, Koray Kavukcuoglu, Akshat Aranya, Bing Bai
-
Patent number: 8442820
Abstract: The present invention provides a combined lip reading and voice recognition multimodal interface system, which can issue a navigation operation instruction only by voice and lip movements, thus allowing a driver to look ahead during a navigation operation and reducing vehicle accidents related to navigation operations during driving.
Type: Grant
Filed: December 1, 2009
Date of Patent: May 14, 2013
Assignees: Hyundai Motor Company, Kia Motors Corporation
Inventors: Dae Hee Kim, Dai-Jin Kim, Jin Lee, Jong-Ju Shin, Jin-Seok Lee
-
Patent number: 8422768
Abstract: A method for processing images consisting of pixels generated by an image sensor with a view to supplying input data to a simulated or wired neural process. The method includes reading pixels pixel-by-pixel in real time and constructing prototype vectors during the pixel-by-pixel reading process on the basis of the values read, the prototype vectors constituting the input data of the neural process.
Type: Grant
Filed: November 12, 2009
Date of Patent: April 16, 2013
Assignee: Institut Franco-Allemand de Recherches de Saint-Louis
Inventors: Pierre Raymond, Alexander Pichler
-
Patent number: 8422767Abstract: The present invention discloses a system and method of transforming a sample of content data by utilizing known samples in a learning phase to best determine coefficients for a linear combination of non-linear filter functions and applying the coefficients to the content data in an operational phase.Type: GrantFiled: April 23, 2008Date of Patent: April 16, 2013Assignee: Gabor LigetiInventor: Gabor Ligeti
-
Patent number: 8401313Abstract: An image processing method is provided for an image processing apparatus which executes processing by allocating a plurality of weak discriminators to form a tree structure having branches corresponding to types of objects so as to detect objects included in image data. Each weak discriminator calculates a feature amount to be used in a calculation of an evaluation value of the image data, and discriminates whether or not the object is included in the image data by using the evaluation value. The weak discriminator allocated to a branch point in the tree structure further selects a branch destination using at least some of the feature amounts calculated by weak discriminators included in each branch destination.Type: GrantFiled: October 30, 2008Date of Patent: March 19, 2013Assignee: Canon Kabushiki KaishaInventors: Takahisa Yamamoto, Masami Kato, Yoshinori Ito, Katsuhiko Mori
-
Patent number: 8385631Abstract: A calculation processing apparatus, which executes calculation processing based on a network composed by hierarchically connecting a plurality of processing nodes, assigns a partial area of a memory to each of the plurality of processing nodes, stores a calculation result of a processing node in a storable area of the partial area assigned to that processing node, and sets, as storable areas, areas that store the calculation results whose reference by all processing nodes connected to the subsequent stage of that processing node is complete. The apparatus determines, based on the storage states of calculation results in partial areas of the memory assigned to the processing node designated to execute the calculation processing of the processing nodes, and to processing nodes connected to the previous stage of the designated processing node, whether or not to execute a calculation of the designated processing node.Type: GrantFiled: June 8, 2011Date of Patent: February 26, 2013Assignee: Canon Kabushiki KaishaInventors: Takahisa Yamamoto, Masami Kato, Yoshinori Ito
-
Patent number: 8385662Abstract: Clustering algorithms such as k-means clustering algorithm are used in applications that process entities with spatial and/or temporal characteristics, for example, media objects representing audio, video, or graphical data. Feature vectors representing characteristics of the entities are partitioned using clustering methods that produce results sensitive to an initial set of cluster seeds. The set of initial cluster seeds is generated using principal component analysis of either the complete feature vector set or a subset thereof. The feature vector set is divided into a desired number of initial clusters and a seed determined from each initial cluster.Type: GrantFiled: April 30, 2009Date of Patent: February 26, 2013Assignee: Google Inc.Inventors: Sangho Yoon, Jay Yagnik, Mei Han, Vivek Kwatra
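The seed-initialization idea above (principal component analysis of the feature vectors, division into the desired number of initial clusters, one seed per cluster) can be sketched as below. The function name and the equal-size split along the first principal component are assumptions; the patent does not fix these details.

```python
import numpy as np

def pca_seeds(X, k):
    """Choose k initial k-means seeds: project the feature vectors onto
    their first principal component, split the ordered projections into
    k near-equal groups, and take each group's centroid as a seed."""
    Xc = X - X.mean(axis=0)
    # First principal component via SVD of the centered data.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ vt[0]
    order = np.argsort(proj)
    # Each near-equal slice of the ordering is one initial cluster.
    seeds = [X[chunk].mean(axis=0) for chunk in np.array_split(order, k)]
    return np.vstack(seeds)
```

Because k-means is sensitive to its starting seeds, spreading them along the direction of greatest variance tends to give more stable partitions than random initialization.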
-
Patent number: 8379969Abstract: The invention relates to a method for matching an object model to a three-dimensional point cloud, wherein the point cloud is generated from two images by means of a stereo method and a clustering method is applied to the point cloud in order to identify points belonging to respectively one cluster, wherein model matching is subsequently carried out, with at least one object model being superposed on at least one cluster and an optimum position of the object model with respect to the cluster being determined, and wherein a correction of false assignments of points is carried out by means of the matched object model. A classifier, trained by means of at least one exemplary object, is used to generate an attention map from at least one of the images. A number and/or a location probability of at least one object, which is similar to the exemplary object, is determined in the image using the attention map, and the attention map is taken into account in the clustering method and/or in the model matching.Type: GrantFiled: April 8, 2010Date of Patent: February 19, 2013Assignee: Pilz GmbH & Co. KGInventors: Bjoern Barrois, Lars Krueger, Christian Woehler
-
Patent number: 8380011Abstract: Described is a technology in which a low resolution image is processed into a high-resolution image, including via two interpolation passes. In the first pass, missing in-block pixels, which are the pixels within a block formed by four neighboring original pixels, are given values by gradient diffusion based upon interpolation of the surrounding original pixels. In the second interpolation pass, missing on-block pixels, which are the pixels on a block edge formed by two adjacent original pixels, are given values by gradient diffusion based upon interpolation of the values of those adjacent original pixels and the previously interpolated values of their adjacent in-block pixels. Also described is a difference projection process that varies the values of the interpolated pixels according to a computed difference projection.Type: GrantFiled: September 30, 2008Date of Patent: February 19, 2013Assignee: Microsoft CorporationInventors: Yonghua Zhang, Zhiwei Xiong, Feng Wu
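The two-pass structure above can be sketched for a 2x upscale: originals sit on even coordinates, in-block pixels (odd, odd) are filled first from their four corner originals, then on-block pixels are filled from their two adjacent originals plus the already-interpolated in-block neighbors. This sketch uses plain averaging in place of the patent's gradient-diffusion weighting, and omits the difference-projection step.

```python
import numpy as np

def upscale2x(img):
    """Two-pass 2x upscaling sketch (averaging stands in for
    gradient diffusion)."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[0::2, 0::2] = img                      # original pixels
    # Pass 1: in-block pixels (odd, odd) from the four corner originals.
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:] +
                       img[1:, :-1] + img[1:, 1:]) / 4.0
    # Pass 2: on-block pixels on horizontal edges (even row, odd col),
    # from two adjacent originals plus in-block neighbors above/below.
    for r in range(0, 2 * h - 1, 2):
        for c in range(1, 2 * w - 1, 2):
            vals = [out[r, c - 1], out[r, c + 1]]
            if r > 0:
                vals.append(out[r - 1, c])
            if r < 2 * h - 2:
                vals.append(out[r + 1, c])
            out[r, c] = sum(vals) / len(vals)
    # ...and on-block pixels on vertical edges (odd row, even col).
    for r in range(1, 2 * h - 1, 2):
        for c in range(0, 2 * w - 1, 2):
            vals = [out[r - 1, c], out[r + 1, c]]
            if c > 0:
                vals.append(out[r, c - 1])
            if c < 2 * w - 2:
                vals.append(out[r, c + 1])
            out[r, c] = sum(vals) / len(vals)
    return out
```

The second pass depends on the first because on-block pixels draw on interpolated in-block values, which is why the two passes cannot be merged.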
-
Publication number: 20130034298Abstract: Contact-less remote-sensing crack detection and/or quantification methodologies are described, which are based on three-dimensional (3D) scene reconstruction, image processing, and pattern recognition. The systems and methodologies can utilize depth perception for detecting and/or quantifying cracks. These methodologies can provide the ability to analyze images captured from any distance and using any focal length or resolution. This adaptive feature may be especially useful for incorporation into mobile systems, such as unmanned aerial vehicles (UAV) or mobile autonomous or semi-autonomous robotic systems such as wheel-based or track-based radio controlled robots, as utilizing such structural inspection methods onto those mobile platforms may allow inaccessible regions to be properly inspected for cracks.Type: ApplicationFiled: August 6, 2012Publication date: February 7, 2013Applicant: UNIVERSITY OF SOUTHERN CALIFORNIAInventors: Mohammad R. JAHANSHAHI, Sami MASRI
-
Publication number: 20130011050Abstract: The invention relates to forming an image using binary pixels. Binary pixels are pixels that have only two states, a white state when the pixel is exposed and a black state when the pixel is not exposed. The binary pixels have color filters on top of them, and the setup of color filters may be initially unknown. A setup making use of a statistical approach may be used to determine the color filter setup to produce correct output images. Subsequently, the color filter information may be used with the binary pixel array to produce images from the input images that the binary pixel array records.Type: ApplicationFiled: December 23, 2009Publication date: January 10, 2013Applicant: Nokia CorporationInventors: Tero P. Rissa, Tuomo Maki-Marttunen, Matti Viikinkoski
-
Patent number: 8345984Abstract: Systems and methods are disclosed to recognize human action from one or more video frames by performing 3D convolutions to capture motion information encoded in multiple adjacent frames and extracting features from spatial and temporal dimensions therefrom; generating multiple channels of information from the video frames, combining information from all channels to obtain a feature representation for a 3D CNN model; and applying the 3D CNN model to recognize human actions.Type: GrantFiled: June 11, 2010Date of Patent: January 1, 2013Assignee: NEC Laboratories America, Inc.Inventors: Shuiwang Ji, Wei Xu, Ming Yang, Kai Yu
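The core operation above, a 3D convolution over a stack of adjacent frames that mixes spatial and temporal dimensions, can be sketched as a single-channel "valid" convolution. This is an illustrative kernel loop, not the patent's multi-channel CNN; the function name and shapes are assumptions.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """'Valid' 3D convolution of a (frames, height, width) clip with a
    (kt, kh, kw) kernel; the temporal extent kt is what lets the filter
    capture motion encoded across adjacent frames."""
    t, h, w = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(
                    volume[i:i + kt, j:j + kh, k:k + kw] * kernel)
    return out
```

A 2D convolution applied frame by frame would discard the temporal axis; extending the kernel over kt frames is what distinguishes the 3D CNN feature maps described in the abstract.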
-
Patent number: 8345963Abstract: Disclosed herein are systems and methods for facilitating the usage of an online workforce to remotely monitor security-sensitive sites and report potential security breaches. In some embodiments, cameras are configured to monitor critical civilian infrastructure, such as water supplies and nuclear reactors. The cameras are operatively connected to a central computer or series of computers, and images captured by the cameras are transmitted to the central computer. After initially registering with the central computer, Guardians “log on” to a central website hosted by the central computer and monitor the images, thereby earning compensation. Site owners compensate the operator of the computer system for this monitoring service, and the operator in turn compensates Guardians based on, for example, (i) the amount of time spent monitoring, and/or (ii) the degree of a given Guardian's responsiveness to real or fabricated security breaches.Type: GrantFiled: November 8, 2011Date of Patent: January 1, 2013Assignee: Facebook, Inc.Inventors: Daniel E. Tedesco, James A. Jorasch, Geoffrey M. Gelman, Jay S. Walker, Stephen C. Tulley, Vincent M. O'Neil, Dean P. Alderucci
-
Patent number: 8345962Abstract: A method and system for training a neural network of a visual recognition computer system, extracts at least one feature of an image or video frame with a feature extractor; approximates the at least one feature of the image or video frame with an auxiliary output provided in the neural network; and measures a feature difference between the extracted at least one feature of the image or video frame and the approximated at least one feature of the image or video frame with an auxiliary error calculator. A joint learner of the method and system adjusts at least one parameter of the neural network to minimize the measured feature difference.Type: GrantFiled: November 25, 2008Date of Patent: January 1, 2013Assignee: NEC Laboratories America, Inc.Inventors: Kai Yu, Wei Xu, Yihong Gong
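The training objective described above, a main classification loss combined with an "auxiliary error" between the network's auxiliary output and an externally extracted feature, can be sketched as a single scalar loss. The function name, the squared-error feature term, and the weighting are assumptions for illustration; the patent does not commit to these specifics.

```python
import numpy as np

def joint_loss(class_probs, label, aux_out, extracted_feat, weight=0.5):
    """Combined objective sketch: cross-entropy on the main output plus
    a squared-error auxiliary term that pushes the network's auxiliary
    output toward the feature produced by the external extractor."""
    ce = -np.log(class_probs[label] + 1e-12)              # main task loss
    feat_diff = np.mean((aux_out - extracted_feat) ** 2)  # auxiliary error
    return ce + weight * feat_diff
```

Minimizing this joint quantity is one way a "joint learner" can adjust the network's parameters so that classification accuracy and feature approximation improve together.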