Learning Systems Patents (Class 382/155)
  • Patent number: 8724891
    Abstract: An apparatus and method for detecting abnormal motion in a video stream are provided, having a training phase for defining normal motion and a detection phase for detecting abnormal motion in the video stream. Motion is detected according to motion vectors and motion features extracted from video frames.
    Type: Grant
    Filed: August 31, 2004
    Date of Patent: May 13, 2014
    Assignees: Ramot at Tel-Aviv University Ltd., Nice Systems Ltd.
    Inventors: Nahum Kiryati, Tamar Riklin-Raviv, Yan Ivanchenko, Shay Rochel, Igal Dvir, Daniel Harari
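A minimal sketch of the two-phase idea in 8724891 above: fit a statistical model of "normal" motion features during a training phase, then flag frames whose features deviate from it. The Gaussian/Mahalanobis model, the feature layout, and the threshold are illustrative assumptions, not details taken from the patent.
```python
import numpy as np

class AbnormalMotionDetector:
    """Two-phase detector: fit a model of 'normal' motion features,
    then flag feature vectors that deviate from it."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # Mahalanobis distance cutoff (illustrative)
        self.mean = None
        self.cov_inv = None

    def train(self, normal_features):
        """Training phase: estimate mean/covariance of normal motion features.
        normal_features: (n_samples, n_features), e.g. motion-vector statistics per frame."""
        X = np.asarray(normal_features, dtype=float)
        self.mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
        self.cov_inv = np.linalg.inv(cov)

    def is_abnormal(self, features):
        """Detection phase: flag a frame whose motion features are far from normal."""
        d = np.asarray(features, dtype=float) - self.mean
        mahalanobis = float(np.sqrt(d @ self.cov_inv @ d))
        return mahalanobis > self.threshold

# Usage with synthetic per-frame motion features (e.g. mean/variance of motion vectors).
rng = np.random.default_rng(0)
detector = AbnormalMotionDetector(threshold=3.0)
detector.train(rng.normal(0.0, 1.0, size=(500, 4)))          # normal motion
print(detector.is_abnormal(rng.normal(0.0, 1.0, size=4)))     # likely False
print(detector.is_abnormal(np.array([8.0, 8.0, 8.0, 8.0])))   # likely True
```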
  • Patent number: 8724866
    Abstract: Described herein is a framework for automatically classifying a structure in digital image data. In one implementation, a first set of features is extracted from digital image data and used to learn a discriminative model. The discriminative model may be associated with at least one conditional probability of a class label given an image data observation. Based on the conditional probability, at least one likelihood measure of the structure co-occurring with another structure in the same sub-volume of the digital image data is determined. A second set of features may then be extracted from the likelihood measure.
    Type: Grant
    Filed: December 8, 2010
    Date of Patent: May 13, 2014
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Dijia Wu, Le Lu, Jinbo Bi, Yoshihisa Shinagawa, Marcos Salganicoff
  • Patent number: 8724890
    Abstract: A method is provided for training and using an object classifier to identify a class object from a captured image. A plurality of still images is obtained from training data and a feature generation technique is applied to the plurality of still images for identifying candidate features from each respective image. A subset of features is selected from the candidate features using a similarity comparison technique. Identifying candidate features and selecting a subset of features is iteratively repeated a predetermined number of times for generating a trained object classifier. An image is captured from an image capture device. Features are classified in the captured image using the trained object classifier. A determination is made whether the image contains a class object based on the trained object classifier associating an identified feature in the image with the class object.
    Type: Grant
    Filed: April 6, 2011
    Date of Patent: May 13, 2014
    Assignee: GM Global Technology Operations LLC
    Inventors: Dan Levi, Aharon Bar Hillel
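The entry above (8724890) describes iteratively generating candidate features, pruning them with a similarity comparison, and training an object classifier on the survivors. The sketch below follows that outline under stated assumptions: random projections stand in for the feature generation technique, absolute correlation for the similarity comparison, and scikit-learn's LogisticRegression for the classifier.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_dissimilar_features(X, max_corr=0.9):
    """Similarity comparison: greedily keep feature columns whose absolute
    correlation with every already-kept column is below max_corr."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < max_corr for k in kept):
            kept.append(j)
    return kept

def train_object_classifier(X_train, y_train, n_rounds=3):
    """Iteratively generate candidate features (here: random projections of the
    raw inputs), prune similar ones, and fit a classifier on the kept subset."""
    rng = np.random.default_rng(0)
    features, projections = [], []
    for _ in range(n_rounds):
        P = rng.normal(size=(X_train.shape[1], 8))   # candidate feature generator (illustrative)
        F = X_train @ P
        keep = select_dissimilar_features(F)
        features.append(F[:, keep])
        projections.append(P[:, keep])
    X_feat = np.hstack(features)
    clf = LogisticRegression(max_iter=1000).fit(X_feat, y_train)
    return clf, projections

def classify_image(clf, projections, x):
    """Apply the same feature generators to a new image descriptor and classify it."""
    feats = np.hstack([x @ P for P in projections])
    return clf.predict(feats.reshape(1, -1))[0]

# Toy usage: 200 training 'images' described by 16-dim raw vectors, binary class label.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf, projs = train_object_classifier(X, y)
print(classify_image(clf, projs, rng.normal(size=16)))
```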
  • Patent number: 8718380
    Abstract: A shape of an object is represented by a set of points inside and outside the shape. A decision function is learned from the set of points of the object. Feature points in the set of points are selected using the decision function, or a gradient of the decision function, and then a local descriptor is determined for each feature point.
    Type: Grant
    Filed: February 14, 2011
    Date of Patent: May 6, 2014
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Fatih Porikli, Hien Nguyen
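A rough illustration of the representation in 8718380 above, assuming an RBF-kernel SVM as the decision function learned from inside/outside points, a numerical gradient for feature-point selection, and a ring of decision-function values as the local descriptor; these concrete choices are assumptions for illustration, not the patent's.
```python
import numpy as np
from sklearn.svm import SVC

# Label 2-D points as inside (+1) or outside (-1) a unit circle (the 'shape').
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(2000, 2))
labels = np.where(np.linalg.norm(pts, axis=1) < 1.0, 1, -1)

# Learn a decision function that implicitly represents the shape.
clf = SVC(kernel="rbf", gamma=2.0).fit(pts, labels)

def decision(p):
    return clf.decision_function(p.reshape(1, -1))[0]

def numerical_gradient(p, eps=1e-3):
    """Central-difference gradient of the decision function at point p."""
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = eps
        g[i] = (decision(p + e) - decision(p - e)) / (2 * eps)
    return g

# Select feature points: sample candidates and keep those with the largest
# decision-function gradient magnitude (i.e. near the shape boundary).
candidates = rng.uniform(-2, 2, size=(300, 2))
grad_mags = np.array([np.linalg.norm(numerical_gradient(p)) for p in candidates])
feature_points = candidates[np.argsort(grad_mags)[-10:]]

# A trivial local descriptor: decision values sampled on a small ring around each point.
def local_descriptor(p, radius=0.1, n=8):
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ring = p + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.array([decision(q) for q in ring])

print(local_descriptor(feature_points[0]))
```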
  • Patent number: 8718358
    Abstract: The present invention is a filler metal installation position checking method and a filler metal installation position checking system for confirming the difference between the installation position and the designed position of a filler metal embedded in a wall surface. A standard surface target 16, with known actual dimensions and a known position relative to a wall surface 12, is provided on the wall surface 12 where a filler metal 14 is installed. The wall surface 12 is photographed together with the standard surface target 16 to create image data 18. Using the actual position and dimensions of the standard surface target 16 and its position and dimensions in the image displayed by the image data 18, the image data 18 is converted into corrected image data 20 displaying an image corrected as if it had been photographed from the front of the wall surface.
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: May 6, 2014
    Assignee: Hitachi, Ltd.
    Inventors: Hiroshi Yokoyama, Yuichi Yamamoto, Shinichi Ebata
  • Patent number: 8705851
    Abstract: A method for training a pattern recognition algorithm including the steps of identifying the known location of a pattern of repeating elements within a fine resolution image, using the fine resolution image to train a model associated with the fine image, using the model to examine the fine resolution image to generate a score space, examining the score space to identify a repeating pattern frequency, using a coarse image that is coarser than the fine resolution image to train a model associated with the coarse image, using the model associated with the coarse image to examine the coarse image and thereby generate a location error, comparing the location error to the repeating pattern frequency, and determining whether the coarse image resolution is suitable for locating the pattern within a fraction of one pitch of the repeating elements.
    Type: Grant
    Filed: January 3, 2013
    Date of Patent: April 22, 2014
    Assignee: Cognex Corporation
    Inventors: Simon Barker, Adam Wagman, Aaron Wallack, David J Michael
  • Patent number: 8705849
    Abstract: A system for object recognition in which a multi-dimensional scanner generates a temporal sequence of multi-dimensional output data of a scanned object. That data is then coupled as an input signal to a trainable dynamic system. The system, exemplified by a general-purpose recurrent neural network, is previously trained to generate an output signal representative of the class of the object in response to a temporal sequence of multi-dimensional data.
    Type: Grant
    Filed: November 24, 2008
    Date of Patent: April 22, 2014
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventor: Danil V. Prokhorov
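A toy sketch of the pipeline in 8705849 above: a temporal sequence of multi-dimensional scanner frames is fed to a recurrent network whose final output indicates the object class. The Elman-style architecture and the random stand-in weights are assumptions; the patent presumes a network that was previously trained offline.
```python
import numpy as np

class ElmanRNNClassifier:
    """Tiny recurrent network: consumes a temporal sequence of multi-dimensional
    scanner frames and emits class scores from the final hidden state.
    Weights would normally come from offline training; random values stand in here."""

    def __init__(self, n_in, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.1, size=(n_classes, n_hidden))

    def classify(self, sequence):
        """sequence: iterable of frames, each a length-n_in vector."""
        h = np.zeros(self.W_rec.shape[0])
        for frame in sequence:                      # integrate the temporal sequence
            h = np.tanh(self.W_in @ frame + self.W_rec @ h)
        scores = self.W_out @ h                     # output representative of the class
        return int(np.argmax(scores))

# Usage: a 20-frame sequence of 16-dimensional scanner outputs.
rng = np.random.default_rng(1)
net = ElmanRNNClassifier(n_in=16, n_hidden=32, n_classes=3)
print(net.classify(rng.normal(size=(20, 16))))
```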
  • Patent number: 8699767
    Abstract: Described is a system for optimizing rapid serial visual presentation (RSVP). A similarity metric is computed for RSVP images, and the images are sequenced according to the similarity metrics. The sequenced images are presented to a user, and neural signals are received to detect a P300 signal. A neural score for each image is computed, and the system is optimized to model the neural scores. The images are resequenced according to a predictive model to output a sequence prediction which does not cause a false P300 signal. Additionally, the present invention describes computing a set of motion surprise maps from image chips. The image chips are labeled as static or moving and prepared into RSVP datasets. Neural signals are recorded in response to the RSVP datasets, and an EEG score is computed from the neural signals. Each image chip is then classified as containing or not containing an item of interest.
    Type: Grant
    Filed: December 21, 2010
    Date of Patent: April 15, 2014
    Assignee: HRL Laboratories, LLC
    Inventors: Deepak Khosla, David J. Huber, Rajan Bhattacharyya
  • Patent number: 8693765
    Abstract: The invention includes a method for recognizing shapes using a preprocessing mechanism that decomposes a source signal into basic components called atoms and a recognition mechanism that is based on the result of the decomposition performed by the preprocessing mechanism. In the method, the preprocessing mechanism includes at least one learning phase culminating in a set of signals called kernels, the kernels being adapted to minimize a cost function representing the capacity of the kernels to correctly reconstruct the signals from the database while guaranteeing a sparse decomposition of the source signal while using a database of signals representative of the source to be processed and a coding phase for decomposing the source signal into atoms, the atoms being generated by shifting of the kernels according to their index, each of the atoms being associated with a decomposition coefficient. The invention also includes a shape recognition system for implementing the method.
    Type: Grant
    Filed: August 13, 2009
    Date of Patent: April 8, 2014
    Assignee: Commissariat a l'Energie Atomique et aux Energies Alternatives
    Inventors: David Mercier, Anthony Larue
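The coding phase of 8693765 above decomposes a source signal into atoms generated by shifting learned kernels, each atom carrying a decomposition coefficient. The sketch below shows one plausible reading using greedy matching pursuit over shifted, unit-norm kernels; the kernel shapes and the pursuit algorithm are illustrative assumptions, and the patent's learning phase (which produces the kernels) is not shown.
```python
import numpy as np

def matching_pursuit_shift(signal, kernels, n_atoms=5):
    """Greedy decomposition of a 1-D signal into atoms, where each atom is a
    kernel shifted to some index. Returns (kernel_id, shift, coefficient) triples.
    The kernels themselves are assumed to have been learned beforehand."""
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        best = None
        for k_id, kernel in enumerate(kernels):
            # Correlate the kernel with the residual at every valid shift.
            corr = np.correlate(residual, kernel, mode="valid")
            shift = int(np.argmax(np.abs(corr)))
            coef = corr[shift]
            if best is None or abs(coef) > abs(best[2]):
                best = (k_id, shift, coef)
        k_id, shift, coef = best
        kernel = kernels[k_id]
        residual[shift:shift + len(kernel)] -= coef * kernel   # remove the atom's contribution
        atoms.append((k_id, shift, float(coef)))
    return atoms, residual

# Toy usage: two unit-norm kernels and a signal built from shifted copies of them.
k1 = np.hanning(16); k1 /= np.linalg.norm(k1)
k2 = np.sin(np.linspace(0, 3 * np.pi, 16)); k2 /= np.linalg.norm(k2)
x = np.zeros(128)
x[10:26] += 2.0 * k1
x[60:76] -= 1.5 * k2
atoms, residual = matching_pursuit_shift(x, [k1, k2], n_atoms=2)
print(atoms)                      # expected to recover the two planted atoms
print(np.linalg.norm(residual))   # small residual norm
```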
  • Publication number: 20140079314
    Abstract: An adequate solution for computer vision applications is arrived at more efficiently and with more automation, enabling users with limited or no special image processing and pattern recognition knowledge to create reliable vision systems for their applications. Computer rendering of CAD models is used to automate the dataset acquisition process and labeling process. In order to speed up the training data preparation while maintaining the data quality, a number of processed samples are generated from one or a few seed images.
    Type: Application
    Filed: September 18, 2012
    Publication date: March 20, 2014
    Inventors: Yury Yakubovich, Ivo Moravec, Yang Yang, Ian Clarke, Lihui Chen, Eunice Poon, Mikhail Brusnitsyn, Arash Abadpour, Dan Rico, Guoyi Fu
  • Patent number: 8666122
    Abstract: A biometric sample training device, a biometric sample quality assessment device, a biometric fusion recognition device, an integrated biometric fusion recognition system and example processes in which each may be used are described. Wavelets and a boosted classifier are used to assess the quality of biometric samples, such as facial images. The described biometric sample quality assessment approach provides accurate and reliable quality assessment values that are robust to various degradation factors, e.g., such as pose, illumination, and lighting in facial image biometric samples. The quality assessment values allow biometric samples of different sample types to be combined to support complex recognition techniques used by, for example, biometric fusion devices, resulting in improved accuracy and robustness in both biometric authentication and biometric recognition.
    Type: Grant
    Filed: February 6, 2013
    Date of Patent: March 4, 2014
    Assignee: Lockheed Martin Corporation
    Inventors: Weizhong Yan, Frederick W. Wheeler, Peter H. Tu, Xiaoming Liu
  • Patent number: 8666148
    Abstract: Techniques are disclosed relating to automatically adjusting images. In one embodiment, an image may be automatically adjusted based on a regression model trained with a database of raw and adjusted images. In one embodiment, an image may be automatically adjusted based on a model trained by both a database of raw and adjusted images and a small set of images adjusted by a different user. In one embodiment, an image may be automatically adjusted based on a model trained by a database of raw and adjusted images and predicted differences between a user's adjustment to a small set of images and a predicted adjustment based on the database of raw and adjusted images.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: March 4, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Sylvain Paris, Frederic P. Durand, Vladimir Leonid Bychkovsky, Eric Chan
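A compact sketch of the first embodiment in 8666148 above: a regression model trained on a database of raw and adjusted images predicts an adjustment for a new image. The luminance features, the single exposure-gain parameter, and the Ridge regressor are assumptions made for illustration, not the patent's feature set or model.
```python
import numpy as np
from sklearn.linear_model import Ridge

def luminance_features(img):
    """Simple global features of a grayscale image in [0, 1]: mean, std, percentiles."""
    return np.array([img.mean(), img.std(),
                     np.percentile(img, 5), np.percentile(img, 95)])

# Fake 'database of raw and adjusted images': each raw image was brightened by
# a gain chosen by a (simulated) photographer; the regressor learns to predict it.
rng = np.random.default_rng(0)
raw_images, gains = [], []
for _ in range(300):
    base = np.clip(rng.normal(rng.uniform(0.2, 0.6), 0.1, size=(32, 32)), 0, 1)
    raw_images.append(base)
    gains.append(0.5 / max(base.mean(), 1e-3))   # adjustment that normalizes brightness

X = np.stack([luminance_features(im) for im in raw_images])
model = Ridge(alpha=1.0).fit(X, np.array(gains))

# Automatically adjust a new raw image with the predicted gain.
new_raw = np.clip(rng.normal(0.25, 0.1, size=(32, 32)), 0, 1)
predicted_gain = float(model.predict(luminance_features(new_raw).reshape(1, -1))[0])
adjusted = np.clip(new_raw * predicted_gain, 0, 1)
print(predicted_gain, new_raw.mean(), adjusted.mean())
```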
  • Patent number: 8660371
    Abstract: In one embodiment, there is provided a method for an Optical Character Recognition (OCR) system. The method comprises: recognizing an input character based on a plurality of classifiers, wherein each classifier generates an output by comparing the input character with a plurality of trained patterns; grouping the plurality of classifiers based on a classifier grouping criterion; and combining the output of each of the plurality of classifiers based on the grouping.
    Type: Grant
    Filed: May 6, 2010
    Date of Patent: February 25, 2014
    Assignee: ABBYY Development LLC
    Inventor: Diar Tuganbaev
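A minimal sketch of the combination scheme in 8660371 above: per-classifier character scores are grouped by a grouping criterion and the grouped outputs are combined. Averaging within groups and summing across groups, along with the group names and score values, are illustrative assumptions rather than the patent's specific combination rule.
```python
from collections import defaultdict

def combine_grouped_classifiers(classifier_outputs, groups):
    """classifier_outputs: {classifier_name: {character: score}} per-classifier scores
    for an input character.  groups: {classifier_name: group_id} grouping criterion.
    Scores are averaged within each group, then the group averages are summed."""
    group_scores = defaultdict(lambda: defaultdict(list))
    for name, scores in classifier_outputs.items():
        for char, s in scores.items():
            group_scores[groups[name]][char].append(s)

    combined = defaultdict(float)
    for per_char in group_scores.values():
        for char, vals in per_char.items():
            combined[char] += sum(vals) / len(vals)   # group average, then sum over groups
    return max(combined, key=combined.get), dict(combined)

# Toy usage: three classifiers, two of which share a group (e.g. same feature family).
outputs = {
    "raster":     {"O": 0.8, "0": 0.6},
    "contour":    {"O": 0.7, "0": 0.9},
    "structural": {"O": 0.4, "0": 0.3},
}
groups = {"raster": "image-based", "contour": "image-based", "structural": "feature-based"}
print(combine_grouped_classifiers(outputs, groups))
```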
  • Patent number: 8655018
    Abstract: A method, system and computer program product for detecting presence of an object in an image are disclosed. According to an embodiment, a method for detecting a presence of an object in an image comprises: receiving multiple training image samples; determining a set of adaptive features for each training image sample, the set of adaptive features matching the local structure of each training image sample; integrating the sets of adaptive features of the multiple training image samples to generate an adaptive feature pool; determining a general feature based on the adaptive feature pool; and examining the image using a classifier determined based on the general feature to detect the presence of the object.
    Type: Grant
    Filed: January 19, 2012
    Date of Patent: February 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Rogerio S. Feris, Arun Hampapur, Ying-Li Tian
  • Patent number: 8649594
    Abstract: A method for assessing events detected by a surveillance system includes assessing the likelihood that the events correspond to events being monitored from feedback in response to a condition set by a user. Classifiers are created for the events from the feedback. The classifiers are applied to allow the surveillance system to improve its accuracy when processing new video data.
    Type: Grant
    Filed: June 3, 2010
    Date of Patent: February 11, 2014
    Assignee: Agilence, Inc.
    Inventors: Wei Hua, Juwei Lu, Jinman Kang, Jon Cook, Haisong Gu
  • Patent number: 8648816
    Abstract: An information processing apparatus includes: a recognition section which recognizes the shape of an object being in contact with an operation screen of an operating section; a pressure detecting section which detects the pressure of the object on the operation screen; a threshold value setting section which sets a threshold value of the pressure, which is a value for determining a pressure operation on the operation screen, on the basis of the shape of the object recognized by the recognition section; and a determination section which determines whether or not a pressure operation has been performed on the operation screen on the basis of the pressure detected by the pressure detecting section and the threshold value set by the threshold value setting section.
    Type: Grant
    Filed: March 2, 2010
    Date of Patent: February 11, 2014
    Assignee: Sony Corporation
    Inventors: Fuminori Homma, Tatsushi Nashida
  • Patent number: 8649613
    Abstract: A classifier training system trains unified classifiers for categorizing videos representing different categories of a category graph. The unified classifiers unify the outputs of a number of separate initial classifiers trained from disparate subsets of a training set of media items. The training process divides the training set into a number of bags, and applies a boosting algorithm to the bags, thus enhancing the accuracy of the unified classifiers.
    Type: Grant
    Filed: November 3, 2011
    Date of Patent: February 11, 2014
    Assignee: Google Inc.
    Inventors: Thomas Leung, Yang Song, John Zhang
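The training process in 8649613 above divides the training set into bags, trains initial classifiers on the disparate subsets, and unifies their outputs with a boosting algorithm. The sketch below follows that shape under stated assumptions: logistic-regression initial classifiers, scikit-learn's AdaBoostClassifier as the booster, and toy feature vectors in place of video features.
```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training set: 'video' feature vectors with a binary category label.
X = rng.normal(size=(600, 10))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

# 1) Divide the training set into disjoint bags and train an initial classifier per bag.
n_bags = 4
bags = np.array_split(rng.permutation(len(X)), n_bags)
initial = [LogisticRegression(max_iter=1000).fit(X[idx], y[idx]) for idx in bags]

# 2) Unify: the outputs of the initial classifiers become features for a boosted model.
def initial_outputs(X_):
    return np.column_stack([clf.decision_function(X_) for clf in initial])

unified = AdaBoostClassifier(n_estimators=50).fit(initial_outputs(X), y)

# Classify a new item with the unified classifier.
x_new = rng.normal(size=(1, 10))
print(unified.predict(initial_outputs(x_new))[0])
```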
  • Patent number: 8648863
    Abstract: A method for a computer system for inspecting an animation sequence having a series of specified poses of an animated character includes receiving a mathematical performance style model associated with the animated character, receiving the series of specified poses of the animated character, each specified pose comprising respective values for the plurality of animation variables, determining an associated quality factor for each specified pose in response to the respective values for the plurality of animation variables and in response to the mathematical performance style model associated with the animated character, and providing feedback to a user in response to an associated quality factor for at least one specified pose.
    Type: Grant
    Filed: May 20, 2008
    Date of Patent: February 11, 2014
    Assignee: Pixar
    Inventors: John Anderson, Andrew P. Witkin, Robert Cook
  • Publication number: 20140037195
    Abstract: An approach is described for automatically tagging a single image or multiple images. The approach, in one example embodiment, is based on a graph-based framework that exploits both visual similarity between images and tag correlation within individual images. The problem is formulated in the context of semi-supervised learning, where a graph modeled as a Gaussian Markov Random Field (MRF) is solved by minimizing an objective function (the image tag score function) using an iterative approach. The iterative approach, in one embodiment, comprises: (1) fixing tags and propagating image tag likelihood values from labeled images to unlabeled images, and (2) fixing images and propagating image tag likelihood based on tag correlation.
    Type: Application
    Filed: August 3, 2012
    Publication date: February 6, 2014
    Applicant: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan Brandt
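A small sketch of the iterative propagation step in 20140037195 above: tag scores spread from labeled to unlabeled images over a visual-similarity graph while the labeled images stay clamped. The row-normalized propagation rule, the damping factor alpha, and the toy affinity matrix are assumptions; the publication formulates the problem as a Gaussian MRF objective minimized iteratively.
```python
import numpy as np

def propagate_tags(similarity, labeled_scores, labeled_mask, n_iters=50, alpha=0.85):
    """Iterative tag-score propagation over an image-similarity graph.
    similarity: (n, n) symmetric affinity matrix between images.
    labeled_scores: (n, n_tags) initial tag scores (rows of unlabeled images are zero).
    labeled_mask: (n,) bool, True where the image has human-provided tags (clamped)."""
    W = similarity / similarity.sum(axis=1, keepdims=True)   # row-normalize the graph
    F = labeled_scores.astype(float).copy()
    for _ in range(n_iters):
        F = alpha * (W @ F) + (1 - alpha) * labeled_scores   # spread scores to neighbors
        F[labeled_mask] = labeled_scores[labeled_mask]        # clamp the labeled images
    return F

# Toy usage: 5 images, 2 tags; images 0 and 4 are labeled, the rest inherit scores.
similarity = np.array([
    [1.0, 0.9, 0.1, 0.0, 0.0],
    [0.9, 1.0, 0.2, 0.0, 0.0],
    [0.1, 0.2, 1.0, 0.3, 0.2],
    [0.0, 0.0, 0.3, 1.0, 0.8],
    [0.0, 0.0, 0.2, 0.8, 1.0],
])
labels = np.zeros((5, 2))
labels[0] = [1.0, 0.0]   # image 0 tagged 'cat'
labels[4] = [0.0, 1.0]   # image 4 tagged 'dog'
mask = np.array([True, False, False, False, True])
print(np.round(propagate_tags(similarity, labels, mask), 2))
```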
  • Patent number: 8644624
    Abstract: Embodiments include a scene classification system and method. In one embodiment, a method includes forming a first plurality of image features from an input image and processing the first plurality of image features in a first scene classifier.
    Type: Grant
    Filed: July 28, 2009
    Date of Patent: February 4, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Li Tao, Yeong-Taeg Kim
  • Patent number: 8640038
    Abstract: Systems and methods receive, from a building, an inventory identifying a plurality of electronic devices associated with a building automation system. The systems and methods compare the inventory to an additional inventory associated with an additional building automation system of an additional building to determine that the inventory is the same as or similar to the additional inventory, wherein the additional inventory identifies a plurality of additional electronic devices. The systems and methods further identify an automation scene that coordinates operation of at least a portion of the plurality of additional electronic devices and provide the automation scene to the building for implementation in the building automation system.
    Type: Grant
    Filed: July 12, 2013
    Date of Patent: January 28, 2014
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Andrew Reeser, Shawn M. Call, Stacy L. Kennedy, Lee C. Drinan, Lisa Ann Frey, Kevin Payne, Michael Jacob
  • Patent number: 8639026
    Abstract: A background model learning system for lighting change adaptation for video surveillance is provided. The system includes a background model estimation unit that estimates a background model for a scene of interest; a foreground map construction unit that constructs a reference foreground map for the current time instance; and a lighting change processing unit that revises the reference foreground map by reducing false foreground regions resulting from lighting changes. The revised foreground map is then sent back to both the background model estimation unit and the lighting change processing unit as feedback for model learning rate tuning in background model estimation and map integration in lighting change processing, respectively, for the next time instance.
    Type: Grant
    Filed: October 28, 2010
    Date of Patent: January 28, 2014
    Assignee: QNAP Systems, Inc.
    Inventor: Horng-Horng Lin
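A toy version of the feedback loop in 8639026 above: a running-average background model whose per-pixel learning rate is tuned by a revised foreground map, so regions judged to be lighting changes adapt quickly instead of lingering as false foreground. The specific rates, the difference threshold, and the class structure are assumptions for illustration.
```python
import numpy as np

class AdaptiveBackgroundModel:
    """Running-average background model whose per-pixel learning rate is tuned by a
    feedback map: pixels flagged as false foreground (e.g. lighting change) adapt faster."""

    def __init__(self, first_frame, base_rate=0.01, fast_rate=0.2, threshold=25.0):
        self.background = first_frame.astype(float)
        self.base_rate = base_rate
        self.fast_rate = fast_rate
        self.threshold = threshold

    def step(self, frame, false_foreground=None):
        frame = frame.astype(float)
        foreground = np.abs(frame - self.background) > self.threshold   # raw foreground map
        if false_foreground is not None:
            foreground = foreground & ~false_foreground                  # revise the map
        # Feedback: use a fast learning rate where the change was judged to be lighting,
        # the base rate elsewhere, and a near-zero rate where true foreground remains.
        rate = np.full(frame.shape, self.base_rate)
        if false_foreground is not None:
            rate[false_foreground] = self.fast_rate
        rate[foreground] = 0.001
        self.background += rate * (frame - self.background)
        return foreground

# Toy usage on 8x8 'frames': a global brightness jump is fed back as false foreground.
rng = np.random.default_rng(0)
frame0 = rng.uniform(0, 255, size=(8, 8))
model = AdaptiveBackgroundModel(frame0)
bright = frame0 + 60.0                                # sudden lighting change
fg = model.step(bright)                               # everything looks like foreground
fg2 = model.step(bright, false_foreground=fg)         # feedback: adapt quickly instead
print(fg.mean(), fg2.mean())
```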
  • Patent number: 8634923
    Abstract: An apparatus includes: an input configured to receive information indicative of sensed light locations; memory coupled to the input and storing indicia of receptive fields forming a mosaic, each of the receptive fields corresponding to an electrode, the mosaic including first and second receptive fields having first and second shapes that are different, the memory further storing instructions; a processor coupled to the input and the memory and configured to read and execute the instructions to: analyze the information indicative of sensed light locations; determine, for each of respective ones of the sensed light locations, one or more receptive fields that include the corresponding sensed light location; and produce excitation indicia; the apparatus further including an output coupled to the processor and configured to be coupled to a retinal implant and to convey the excitation indicia toward the retinal implant.
    Type: Grant
    Filed: August 25, 2010
    Date of Patent: January 21, 2014
    Assignee: Salk Institute for Biological Studies
    Inventors: Tatyana O. Sharpee, Charles F. Stevens
  • Patent number: 8630478
    Abstract: Disclosed are methods and apparatus for automatic optoelectronic detection and inspection of objects, based on capturing digital images of a two-dimensional field of view in which an object to be detected or inspected may be located, analyzing the images, and making and reporting decisions on the status of the object. Decisions are based on evidence obtained from a plurality of images for which the object is located in the field of view, generally corresponding to a plurality of viewing perspectives. Evidence that an object is located in the field of view is used for detection, and evidence that the object satisfies appropriate inspection criteria is used for inspection. Methods and apparatus are disclosed for capturing and analyzing images at high speed so that multiple viewing perspectives can be obtained for objects in continuous motion.
    Type: Grant
    Filed: September 20, 2012
    Date of Patent: January 14, 2014
    Assignee: Cognex Technology and Investment Corporation
    Inventor: William M. Silver
  • Publication number: 20140010439
    Abstract: A method of detecting a predefined set of characteristic points of a face from an image of the face includes a step of making the shape and/or the texture of a hierarchy of statistical models of face parts converge over real data supplied by the image of the face.
    Type: Application
    Filed: February 21, 2012
    Publication date: January 9, 2014
    Applicant: FITTINGBOX
    Inventors: Ariel Choukroun, Sylvain Le Gallou
  • Patent number: 8625884
    Abstract: Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: January 7, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Patent number: 8625885
    Abstract: Systems and methods for automated pattern recognition and object detection. The method can be rapidly developed and improved using a minimal number of algorithms for the data content to fully discriminate details in the data, while reducing the need for human analysis. The system includes a data analysis system that recognizes patterns and detects objects in data without requiring adaptation of the system to a particular application, environment, or data content. The system evaluates the data in its native form independent of the form of presentation or the form of the post-processed data.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: January 7, 2014
    Assignee: Intelliscience Corporation
    Inventors: Robert M. Brinson, Jr., Nicholas Levi Middleton, Bryan Glenn Donaldson
  • Patent number: 8625886
    Abstract: Methods, and systems employing the same, for finding repeated structure for data extraction from document images are provided. A reference record and one or more reference fields thereof are identified from a document image. One or more candidate fields are generated for each of the reference fields. One or more best candidate records are selected from the candidate fields using a probabilistic model, and an optimal record set is determined from the best candidate records.
    Type: Grant
    Filed: February 8, 2011
    Date of Patent: January 7, 2014
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Evgeniy Bart, Prateek Sarkar, Eric Saund
  • Patent number: 8620028
    Abstract: Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. In this way, the system rapidly learns, in real time, normal and abnormal behaviors for any environment by analyzing movements, activities, or the absence of such in the environment, and it identifies and predicts abnormal and suspicious behavior based on what has been learned.
    Type: Grant
    Filed: March 6, 2012
    Date of Patent: December 31, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
  • Publication number: 20130343640
    Abstract: Via intuitive interactions with a user, robots may be trained to perform tasks such as visually detecting and identifying physical objects and/or manipulating objects. In some embodiments, training is facilitated by the robot's simulation of task-execution using augmented-reality techniques.
    Type: Application
    Filed: September 17, 2012
    Publication date: December 26, 2013
    Applicant: Rethink Robotics, Inc.
    Inventors: Christopher J. Buehler, Michael R. Siracusa
  • Publication number: 20130343639
    Abstract: An automatic handwriting morphing and modification system and method for digitally altering the handwriting of a user while maintaining the overall appearance and style of the user's handwriting. Embodiments of the system and method do not substitute or replace characters or words but instead morph and modify the user's handwritten strokes to retain a visual correlation between the original user's handwriting and the morphed and modified version of the user's handwriting. Embodiments of the system and method input the user's handwriting and a set of morph rules that determine what the handwritten strokes of the user can look more like after processing. Morphs, which are a specific type or appearance of a handwritten stroke, are selected based on the target handwriting. The selected morphs are applied using geometric tuning, semantic tuning, or both. The result is a morphed and modified version of the user's handwriting.
    Type: Application
    Filed: June 20, 2012
    Publication date: December 26, 2013
    Applicant: Microsoft Corporation
    Inventors: Hrvoje Benko, Benoit Barabe
  • Patent number: 8612286
    Abstract: Techniques for creating a training technique for an individual are provided. The techniques include obtaining video of one or more events and information from a transaction log that corresponds to the one or more events, wherein the one or more events relate to one or more actions of an individual, classifying the one or more events into one or more event categories, comparing the one or more classified events with an enterprise best practices model to determine a degree of compliance, examining the one or more classified events to correct one or more misclassifications, if any, and revise the one or more event categories with the one or more corrected misclassifications, if any, and using the degree of compliance to create a training technique for the individual.
    Type: Grant
    Filed: October 31, 2008
    Date of Patent: December 17, 2013
    Assignee: International Business Machines Corporation
    Inventors: Russell Patrick Bobbitt, Quanfu Fan, Arun Hampapur, Frederik Kjeldsen, Sharathchandra Umapathirao Pankanti, Akira Yanagawa, Yun Zhai
  • Patent number: 8611675
    Abstract: Techniques are described herein for generating and displaying a confusion matrix wherein a data item belonging to one or more actual classes is predicted into a class. The classes in which the data item may be predicted (the "predicted classes") are ranked according to a score that in one embodiment indicates the confidence of the prediction. If the data item is predicted into a class that is one of the top K ranked predicted classes, then the prediction is considered accurate and an entry is created in a cell of a confusion matrix indicating the accurate prediction. If the data item is predicted into a class that is not one of the top K ranked predicted classes, then the prediction is considered inaccurate and an entry is created in a cell of a confusion matrix indicating the inaccurate prediction.
    Type: Grant
    Filed: December 22, 2006
    Date of Patent: December 17, 2013
    Assignee: Yahoo! Inc.
    Inventors: Jyh-Herng Chow, Byron Dom, Dao-I Lin
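A direct sketch of the top-K bookkeeping described in 8611675 above: a prediction counts as accurate when the actual class appears among the top K ranked predicted classes, and otherwise the top-ranked class is recorded as the inaccurate prediction. The toy class names and rankings are assumptions for illustration.
```python
import numpy as np

def topk_confusion_matrix(actual, ranked_predictions, classes, k=2):
    """Build a confusion matrix where a prediction counts as accurate if the actual
    class appears among the top-K ranked predicted classes; otherwise the top-1
    prediction is recorded as the (inaccurate) predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    matrix = np.zeros((len(classes), len(classes)), dtype=int)
    for truth, ranking in zip(actual, ranked_predictions):
        if truth in ranking[:k]:
            matrix[idx[truth], idx[truth]] += 1          # accurate: credit the diagonal
        else:
            matrix[idx[truth], idx[ranking[0]]] += 1     # inaccurate: blame the top guess
    return matrix

# Toy usage: rankings are classes ordered by descending confidence score.
classes = ["cat", "dog", "bird"]
actual = ["cat", "dog", "bird", "bird"]
ranked = [["cat", "dog", "bird"],
          ["bird", "dog", "cat"],   # accurate at K=2: 'dog' is in the top 2
          ["cat", "dog", "bird"],   # inaccurate at K=2: counted as a 'cat' prediction
          ["bird", "cat", "dog"]]
print(topk_confusion_matrix(actual, ranked, classes, k=2))
```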
  • Patent number: 8605995
    Abstract: An eigenprojection matrix (#14) is generated by a projection computation (#12) using a local relationship from a studying image group (#10) including a pair of a high quality image and a low quality image to create a projection core tensor (#16) defining a correspondence between the low quality image and an intermediate eigenspace and a correspondence between the high quality image and the intermediate eigenspace. A first sub-core tensor is created (#24) from the projection core tensor based on a first setting, and an inputted low quality image (#20) is projected (#30) based on the eigenprojection matrix and the first sub-core tensor to calculate a coefficient vector in the intermediate eigenspace. The coefficient vector is projected (#34) based on a second sub-core tensor (#26) created by a second setting from the projection core tensor and based on the eigenprojection matrix to obtain a high quality image (#36).
    Type: Grant
    Filed: July 26, 2010
    Date of Patent: December 10, 2013
    Assignee: FUJIFILM Corporation
    Inventor: Hirokazu Kameyama
  • Publication number: 20130322739
    Abstract: Techniques are disclosed relating to automatically adjusting images. In one embodiment, an image may be automatically adjusted based on a regression model trained with a database of raw and adjusted images. In one embodiment, an image may be automatically adjusted based on a model trained by both a database of raw and adjusted images and a small set of images adjusted by a different user. In one embodiment, an image may be automatically adjusted based on a model trained by a database of raw and adjusted images and predicted differences between a user's adjustment to a small set of images and a predicted adjustment based on the database of raw and adjusted images.
    Type: Application
    Filed: August 2, 2013
    Publication date: December 5, 2013
    Applicant: Adobe Systems Incorporated
    Inventors: Sylvain P. Paris, Frederic P. Durand, Vladimir L. Bychkovsky, Eric Chan
  • Patent number: 8600120
    Abstract: Systems and methods are provided for control of a personal computing device based on user face detection and recognition techniques.
    Type: Grant
    Filed: March 6, 2008
    Date of Patent: December 3, 2013
    Assignee: Apple Inc.
    Inventors: Jeff Gonion, Duncan Robert Kerr
  • Publication number: 20130315476
    Abstract: Techniques are disclosed relating to modifying an automatically predicted adjustment. In one embodiment, the automatically predicted adjustment may be adjusted, for example, based on a rule. The automatically predicted adjustment may be based on a machine learning prediction. A new image may be globally adjusted based on the modified automatically predicted adjustment.
    Type: Application
    Filed: August 2, 2013
    Publication date: November 28, 2013
    Applicant: Adobe Systems Incorporated
    Inventors: Sylvain P. Paris, Jen-Chan Chien, Vladimir L. Bychkovsky
  • Patent number: 8594410
    Abstract: An image-based biomarker is generated using image features obtained through object-oriented image analysis of medical images. The values of a first subset of image features are measured and weighted. The weighted values of the image features are summed to calculate the magnitude of a first image-based biomarker. The magnitude of the biomarker for each patient is correlated with a clinical endpoint, such as a survival time, that was observed for the patient whose medical images were analyzed. The correlation is displayed on a graphical user interface as a scatter plot. A second subset of image features is selected that belong to a second image-based biomarker such that the magnitudes of the second image-based biomarker for the patients better correlate with the clinical endpoints observed for those patients. The second biomarker can then be used to predict the clinical endpoint of other patients whose clinical endpoints have not yet been observed.
    Type: Grant
    Filed: January 18, 2011
    Date of Patent: November 26, 2013
    Assignee: Definiens AG
    Inventors: Guenter Schmidt, Gerd Binnig, Ralf Schoenmeyer, Arno Schaepe
  • Patent number: 8577130
    Abstract: Described herein is a technology for facilitating deformable model-based segmentation of image data. In one implementation, the technology includes receiving training image data (202) and automatically constructing a hierarchical structure (204) based on the training image data. At least one spatially adaptive boundary detector is learned based on a node of the hierarchical structure (206).
    Type: Grant
    Filed: March 15, 2010
    Date of Patent: November 5, 2013
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Maneesh Dewan, Yiqiang Zhan, Xiang Sean Zhou, Zhao Yi
  • Patent number: 8577152
    Abstract: A method and apparatus that classify an image. The method extracts a feature vector from the image, wherein the feature vector includes a plurality of first features. The extracting of each of the first features includes: acquiring a difference between sums or mean values of pixels of the plurality of first areas in the corresponding combination to obtain a first difference vector in the direction of the first axis, and obtaining a second difference vector in the direction of the second axis. A first projection difference vector is acquired together with a second projection difference vector. A sum of the magnitudes of the first projection difference vector and the second projection difference vector is obtained as the first feature; and the image is classified according to the extracted feature vector.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: November 5, 2013
    Assignee: Sony Corporation
    Inventors: Lun Zhang, Weiguo Wu
  • Patent number: 8565518
    Abstract: In an image processing device and method, program, and recording medium of the present invention, high frequency components of a low quality image and a high quality image included in a studying image set are extracted, and an eigenprojection matrix and a projection core tensor of the high frequency components are generated in a studying step. In a restoration step, a first sub-core tensor and a second sub-core tensor are generated based on the eigenprojection matrix and the projection core tensor of the high frequency components, and a tensor projection process is applied to the high frequency components of an input image to generate a high quality image of the high frequency components. The high quality image of the high frequency components is added to an enlarged image obtained by enlarging the input image to the same size as an output image.
    Type: Grant
    Filed: July 26, 2010
    Date of Patent: October 22, 2013
    Assignee: FUJIFILM Corporation
    Inventor: Hirokazu Kameyama
  • Patent number: 8548230
    Abstract: A tentative eigenprojection matrix (#12-a) is provisionally generated from a studying image group (#10) including a pair of a high quality image and a low quality image, and a tentative projection core tensor (#12-b) defining a correspondence between the low quality image and an intermediate eigenspace and a correspondence between the high quality image and the intermediate eigenspace is created. A first sub-core tensor is created from the tentative projection core tensor based on a first setting (#12-c), and the studying image group is projected based on the tentative eigenprojection matrix and the first sub-core tensor (#15-a) to calculate an intermediate eigenspace coefficient vector (#15-b).
    Type: Grant
    Filed: July 26, 2010
    Date of Patent: October 1, 2013
    Assignee: FUJIFILM Corporation
    Inventor: Hirokazu Kameyama
  • Patent number: 8542905
    Abstract: Described are methods and apparatuses, including computer program products, for determining model uniqueness with a quality metric of a model of an object in a machine vision application. Determining uniqueness involves receiving a training image, generating a model of an object based on the training image, generating a modified training image based on the training image, determining a set of poses that represent possible instances of the model in the modified training image, and computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
    Type: Grant
    Filed: December 29, 2010
    Date of Patent: September 24, 2013
    Assignee: Cognex Corporation
    Inventors: Xiaoguang Wang, Lowell Jacobson
  • Patent number: 8542913
    Abstract: A technique for determining a characteristic of a face or certain other object within a scene captured in a digital image including acquiring an image and applying a linear texture model that is constructed based on a training data set and that includes a class of objects including a first subset of model components that exhibit a dependency on directional lighting variations and a second subset of model components which are independent of directional lighting variations. A fit of the model to the face or certain other object is obtained including adjusting one or more individual values of one or more of the model components of the linear texture model. Based on the obtained fit of the model to the face or certain other object in the scene, a characteristic of the face or certain other object is determined.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: September 24, 2013
    Assignees: DigitalOptics Corporation Europe Limited, National University of Ireland
    Inventors: Mircea Ionita, Ioana Bacivarov, Peter Corcoran
  • Patent number: 8538203
    Abstract: A method for interpolation includes receiving an input image having a plurality of pixels. The edge direction proximate a first pixel of the input image is estimated using a first technique from a plurality of discrete potential directions. An edge direction is selected based upon estimating the edge direction proximate the first pixel of the input image using a second technique. The pixels proximate the first pixel are interpolated based upon the selected edge direction. The pixels proximate the first pixel are interpolated based upon another technique. An output image is determined having more pixels than the plurality of pixels.
    Type: Grant
    Filed: November 30, 2007
    Date of Patent: September 17, 2013
    Assignee: Sharp Laboratories of America, Inc.
    Inventor: Hao Pan
  • Patent number: 8538071
    Abstract: The Target Separation Algorithms (TSAs) are used to improve the results of Automated Target Recognition (ATR). The task of the TSAs is to separate two or more closely spaced targets in Regions of Interest (ROIs), to separate targets from objects like trees, buildings, etc., in a ROI, or to separate targets from clutter and shadows. The outputs of the TSA separations are inputs to ATR, which identify the type of target based on a template database. TSA may include eight algorithms. These algorithms may use average signal magnitude, support vector machines, rotating lines, and topological grids for target separation in ROI. TSA algorithms can be applied together or separately in different combinations depending on case complexity, required accuracy, and time of computation.
    Type: Grant
    Filed: March 18, 2009
    Date of Patent: September 17, 2013
    Assignee: Raytheon Company
    Inventor: Gary I. Asnis
  • Patent number: 8538100
    Abstract: In the image acquisition mode, desired past images are registered in an image table. A past image selected from the past images registered in the table is displayed as a reference image together with a live image in a predetermined form. Selecting another image registered in the image table at an arbitrary timing will display another past image upon replacing the reference image.
    Type: Grant
    Filed: January 29, 2008
    Date of Patent: September 17, 2013
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Medical Systems Corporation
    Inventor: Yoshitaka Mine
  • Publication number: 20130236090
    Abstract: A dictionary of atoms for coding data is learned by first selecting samples from a set of samples. Similar atoms in the dictionary are clustered, and if a cluster has multiple atoms, the atoms in that cluster are merged into a single atom. The samples can be acquired online.
    Type: Application
    Filed: March 12, 2012
    Publication date: September 12, 2013
    Inventors: Fatih Porikli, Nikhil Rao
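A small sketch of 20130236090 above: atoms of a learned dictionary are clustered by similarity and each multi-atom cluster is merged into a single atom. Greedy clustering by absolute cosine similarity and merging by the normalized mean are illustrative assumptions; the sample-selection and online-acquisition aspects of the publication are not shown.
```python
import numpy as np

def merge_similar_atoms(dictionary, sim_threshold=0.95):
    """Greedily cluster unit-norm atoms whose absolute cosine similarity exceeds the
    threshold and replace each multi-atom cluster with a single re-normalized mean atom."""
    atoms = [a / np.linalg.norm(a) for a in dictionary]
    unassigned = list(range(len(atoms)))
    merged = []
    while unassigned:
        seed = unassigned.pop(0)
        cluster = [seed]
        for j in unassigned[:]:
            if abs(np.dot(atoms[seed], atoms[j])) >= sim_threshold:
                cluster.append(j)
                unassigned.remove(j)
        mean_atom = np.mean([atoms[j] for j in cluster], axis=0)
        merged.append(mean_atom / np.linalg.norm(mean_atom))
    return merged

# Toy usage: three atoms, two of which are nearly identical and get merged.
rng = np.random.default_rng(0)
a = rng.normal(size=8)
dictionary = [a, a + 0.01 * rng.normal(size=8), rng.normal(size=8)]
print(len(merge_similar_atoms(dictionary)))   # expect 2 atoms after merging
```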
  • Patent number: 8529446
    Abstract: In a method for determining a parameter in an automatic study and data management system, data is gathered in a knowledge database, and a parameter is determined based on the data gathered in the knowledge database. The data is correlated to at least one of a configuration and implementation of a previous clinical study. The parameter is usable for configuring a future clinical study.
    Type: Grant
    Filed: May 31, 2007
    Date of Patent: September 10, 2013
    Assignee: Siemens Aktiengesellschaft
    Inventors: Markus Schmidt, Siegfried Schneider, Gudrun Zahlmann
  • Patent number: 8527439
    Abstract: In a pattern identification method in which input data is classified into predetermined classes by sequentially executing a combination of a plurality of classification processes, at least one of the classification processes includes a mapping step of mapping the input data in an N (N≥2) dimensional feature space as corresponding points, a determination step of determining whether or not to execute the next classification process based on the corresponding points, and a selecting step of selecting a classification process to be executed next based on the corresponding points when it is determined in the determination step that the next classification process should be executed.
    Type: Grant
    Filed: September 5, 2008
    Date of Patent: September 3, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kan Torii, Katsuhiko Mori, Yusuke Mitarai, Hiroshi Sato, Yuji Kaneda, Takashi Suzuki