Abstract: An image processing sensor system functions as a standalone unit to capture images and process the resulting signals to detect objects or events of interest. The processing significantly improves the selectivity and specificity of detecting objects and events, such as a series of motions that may precede a fall by a patient at elevated risk of falling.
Abstract: Cascaded object detection techniques are described. In one or more implementations, cascaded coarse-to-dense object detection techniques are utilized to detect objects in images. In a first stage, coarse features are extracted from an image, and non-object regions are rejected. Then, in one or more subsequent stages, dense features are extracted from the remaining non-rejected regions of the image to detect one or more objects in the image.
Type:
Grant
Filed:
November 15, 2013
Date of Patent:
February 23, 2016
Assignee:
Adobe Systems Incorporated
Inventors:
Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li
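The coarse-to-dense cascade in the abstract above can be sketched in a few lines. Everything here (the feature functions, thresholds, and region representation) is a hypothetical stand-in for illustration, not the patent's actual features or implementation:

```python
# Minimal sketch of a coarse-to-dense detection cascade: a cheap coarse
# score rejects most regions first, so only survivors pay for the more
# expensive dense-feature scoring.

def coarse_score(region):
    # Stand-in for a cheap coarse feature, e.g. mean intensity.
    return sum(region) / len(region)

def dense_score(region):
    # Stand-in for an expensive dense-feature classifier.
    return sum(x * x for x in region) / len(region)

def cascade_detect(regions, coarse_thresh, dense_thresh):
    # Stage 1: reject non-object regions using the coarse score.
    survivors = [r for r in regions if coarse_score(r) >= coarse_thresh]
    # Stage 2: extract dense features only from the remaining regions.
    return [r for r in survivors if dense_score(r) >= dense_thresh]
```

The design point is that stage 1 runs on every candidate region while stage 2 runs only on the small surviving fraction, which is what makes the cascade cheap overall.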
Abstract: A system and method for receiving an image of a product's packaging and extracting information (e.g., a set of facts) associated with a product from the image. The extracted information associated with the product may be added to a product profile if a confidence score associated with the extracted information is greater than or equal to a threshold.
Type:
Grant
Filed:
January 6, 2014
Date of Patent:
February 16, 2016
Assignee:
Amazon Technologies, Inc.
Inventors:
Derek Cole Singer, Sunil Ramesh, Sebastian Lehmann, Andrea C. Steves, Nathan John Condie
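The confidence-gated update described in the abstract above reduces to a simple rule: accept an extracted fact only when its confidence score meets the threshold. The sketch below is illustrative only; the function name, fact representation, and default threshold are assumptions, not details from the patent:

```python
def maybe_add_fact(profile, fact, confidence, threshold=0.8):
    """Add an extracted (key, value) fact to the product profile only
    when its confidence score is greater than or equal to the threshold.
    Returns True if the fact was added (hypothetical helper)."""
    if confidence >= threshold:
        key, value = fact
        profile[key] = value
        return True
    return False
```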
Abstract: Disclosed are various embodiments for adjusting the encoding of a video signal into a video stream based on user attention. A video signal is encoded into a video stream. A temporary lapse of attention by a user of the interactive application is predicted. The encoding of the video signal into the video stream is adjusted from an initial state to a conservation state in response to predicting the temporary lapse of attention by the user. The conservation state is configured to conserve one or more resources used for the video stream relative to the initial state.
Abstract: An image processing apparatus includes a character recognition unit configured to perform character recognition of a character region where characters exist in an image to generate character code, a detection unit configured to detect a region of the image where a feature change in the image is small, and a placement unit configured to place data obtained from the character code in the detected region.
Abstract: An apparatus and method of controlling a mobile terminal by detecting a face or an eye in an input image are provided. The method includes performing face recognition on an input image facing and being captured by an image input unit equipped on the front face of the mobile terminal; determining, based on the face recognition, user state information that includes whether a user exists, a direction of the user's face, a distance from the mobile terminal, and/or a position of the user's face; and performing a predetermined function of the mobile terminal according to the user state information. According to the method, functions of the mobile terminal may be controlled without direct inputs from the user.
Type:
Grant
Filed:
May 2, 2013
Date of Patent:
January 19, 2016
Assignee:
Samsung Electronics Co., Ltd
Inventors:
Byung-Jun Son, Hong-Il Kim, Tae-Hwa Hong
Abstract: The recognition of text in an acquired image is improved by using general and type-specific heuristics that can determine the likelihood that a portion of the text is truncated at an edge of an image, frame, or screen. Truncated text can be filtered such that the user is not provided with an option to perform an undesirable task, such as to dial an incorrect number or connect to an incorrect Web address, based on recognizing an incomplete text string. The general and type-specific heuristics can be combined to improve confidence, and the image data can be pre-processed on the device before processing with an optical character recognition (OCR) engine. Multiple frames can be analyzed to attempt to recognize words or characters that might have been truncated in one or more of the frames.
Type:
Grant
Filed:
September 24, 2014
Date of Patent:
January 19, 2016
Assignee:
Amazon Technologies, Inc.
Inventors:
Matthew Joseph Cole, Yue Liu, David Paul Ramos, Avnish Sikka
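The general and type-specific truncation heuristics described above can be illustrated with two toy checks: a geometric test for text whose bounding box touches the image edge, and a type-specific test for a string that is too short to be a complete phone number. Both functions, their names, and the margin and digit-count values are hypothetical examples, not the patent's heuristics:

```python
def likely_truncated(bbox, image_size, margin=2):
    """General heuristic: a text bounding box that touches (or nearly
    touches) an image edge may be cut off.
    bbox = (x0, y0, x1, y1); image_size = (width, height)."""
    x0, y0, x1, y1 = bbox
    w, h = image_size
    return x0 <= margin or y0 <= margin or x1 >= w - margin or y1 >= h - margin

def phone_truncated(digits):
    """Type-specific heuristic (example): a US phone number should have
    10 digits; fewer suggests the recognized string is incomplete."""
    return len(digits) < 10
```

A caller could require both heuristics to agree before suppressing an action such as offering to dial the recognized number, which matches the abstract's idea of combining heuristics to improve confidence.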
Abstract: Apparatus and methods for processing inputs by one or more neurons of a network. The neuron(s) may generate spikes based on receipt of multiple inputs. Latency of spike generation may be determined based on an input magnitude. Inputs may be scaled using, for example, a non-linear concave transform. Scaling may increase neuron sensitivity to lower-magnitude inputs, thereby improving latency encoding of small-amplitude inputs. The transformation function may be configured to be compatible with existing non-scaling neuron processes and used as a plug-in to existing neuron models. Use of input scaling may allow for improved network operation and reduced task simulation time.
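As a rough illustration of the abstract above: a concave transform compresses large inputs more than small ones, so a latency rule driven by the scaled magnitude resolves small inputs better. The choice of `log1p` as the concave transform and the reciprocal latency rule are assumptions for the sketch, not the patent's specific functions:

```python
import math

def scale_input(magnitude):
    # Hypothetical non-linear concave transform: log1p grows quickly for
    # small magnitudes and flattens for large ones, boosting sensitivity
    # to low-magnitude inputs.
    return math.log1p(magnitude)

def spike_latency(magnitude, c=1.0):
    # Larger scaled inputs yield shorter spike latency; the concave
    # scaling widens the latency resolution among small inputs.
    return c / scale_input(magnitude)
```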
Abstract: An image editing apparatus includes an input unit configured to input a moving image stream that includes a plurality of frames, an extracting unit configured to extract a specific subject from at least one of the frames included in the input moving image stream, and an image processor configured to determine whether or not to perform mask processing on the specific subject included in the at least one frame of the input moving image stream according to a predetermined output condition, which is defined based on at least an output resolution of the moving image stream to be output, and to perform the mask processing on the specific subject based on the determination result.
Abstract: This disclosure generally relates to systems and methods that facilitate employing exemplar Histogram of Oriented Gradients Linear Discriminant Analysis (HOG-LDA) models along with Localizer Hidden Markov Models (HMM) to train a classification model to classify actions in videos by learning poses and transitions between the poses associated with the actions in a view of a continuous state represented by bounding boxes corresponding to where the action is located in frames of the video.
Abstract: An apparatus is provided including a camera; and a processing circuitry configured to: capture an image of a scanning device by using the camera; and output a first scannable item based on the captured image of the scanning device.
Abstract: An image processing device configured to detect a detection target, which is all or a part of a predetermined main body in an image, has: a detection target detection unit that detects, from the image, an estimated detection target that the image processing device assumes to be the detection target; a heterogeneous target determination unit that determines whether the estimated detection target detected by the detection target detection unit is an estimated heterogeneous target that the image processing device assumes to be a heterogeneous target, i.e., all or a part of a main body different in class from the main body; and a detection target determination unit that determines whether the estimated detection target detected by the detection target detection unit is the detection target, based on a determination result of the heterogeneous target determination unit.
Abstract: A method for determining a salient region of an image is disclosed. For a plurality of different saliency cue functions, a single saliency value is calculated for each pixel in a plurality of adjacent pixels in an image using the saliency cue function, wherein one of the saliency cue functions is based on whether the pixel is in a region of the image whose colors contrast with the region's background, and another of the saliency cue functions is based on foreground and background color models of the image. A classifier is used to calculate a combined single saliency value for each pixel based on the single saliency values for the pixel. The salient region of the pixels is determined with a subwindow search based on the combined single saliency values.
Type:
Grant
Filed:
April 24, 2014
Date of Patent:
December 1, 2015
Assignee:
Google Inc.
Inventors:
Luca Bertelli, Dennis Strelow, Sally A. Goldman
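The per-pixel combination step of the saliency patent above can be sketched as a logistic combination of cue values. The weights and bias below stand in for a trained classifier and are purely illustrative, as is the function name:

```python
import math

def combine_saliency(cue_values, weights, bias=0.0):
    """Combine the single saliency values from several cue functions for
    one pixel into a combined saliency score in (0, 1), using a logistic
    model (a hypothetical stand-in for the trained classifier)."""
    z = bias + sum(w * v for w, v in zip(weights, cue_values))
    return 1.0 / (1.0 + math.exp(-z))
```

Running this over every pixel yields the combined saliency map on which a subwindow search (e.g. maximizing the summed score inside a rectangle) would then locate the salient region.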
Abstract: An image processing method for identifying a region in an input image by character recognition, the region coinciding with a predetermined search condition, includes receiving the search condition, the search condition including assignments of plural format character strings, each format character string including an assignment of a character type or a specific character for each character of a recognition target, extracting a character string region becoming a candidate from the input image, calculating a similarity between a character recognition result and the plural format character strings with respect to each group of plural character string regions, the character recognition result being of each character string region included in each group, and determining the group coinciding with the search condition among the groups of plural character string regions according to the calculated similarity.
Abstract: A camera according to the present invention, which is capable of continuous shooting before and after a still image shot according to a photographer's operation, comprises: an imaging section converting an object image into image data; a still image shooting section obtaining image data of the still image according to a release operation; a continuous shooting section obtaining image data by continuous shooting before and after the obtaining of the still image in the still image shooting section; an image processing section performing, on the image data obtained by the continuous shooting section, image processing that is different from that applied to the image data obtained by the still image shooting section and that is changed sequentially; and a recording section recording the image data image-processed by the image processing section.
Abstract: A volume identification system identifies a set of unlabeled spatio-temporal volumes within each of a set of videos, each volume representing a distinct object or action. The volume identification system further determines, for each of the videos, a set of volume-level features characterizing the volume as a whole. In one embodiment, the features are based on a codebook and describe the temporal and spatial relationships of different codebook entries of the volume. The volume identification system uses the volume-level features, in conjunction with existing labels assigned to the videos as a whole, to label with high confidence some subset of the identified volumes, e.g., by employing consistency learning or training and application of weak volume classifiers. The labeled volumes may be used for a number of applications, such as training strong volume classifiers, improving video search (including locating individual volumes), and creating composite videos based on identified volumes.
Abstract: Technologies are generally presented for employing enhanced expectation maximization (EEM) in image retrieval and authentication. Using a uniform distribution as the initial condition, the EEM may converge iteratively to a global optimality. If a realization of the uniform distribution is used as the initial condition, the process may also be repeatable. In some examples, a positive perturbation scheme may be used to avoid boundary overflow, which often occurs with conventional EM algorithms. To reduce computation time and resource consumption, a histogram-based one-dimensional Gaussian Mixture Model (GMM) with two components and a wavelet decomposition of the image may be employed.
Abstract: A method and system for detection of video segments in compressed digital video streams is presented. The compressed digital video stream is examined to determine synchronization points, and the compressed video signal is analyzed following detection of the synchronization points to create video fingerprints that are subsequently compared against a library of stored fingerprints.
Type:
Grant
Filed:
January 20, 2014
Date of Patent:
September 29, 2015
Assignee:
RPX Corporation
Inventors:
Rainer W. Lienhart, Charles A. Eldering
Abstract: There is provided an image processing device including an image acquisition part acquiring an image; a depth acquisition part acquiring a depth in association with a pixel included in the image; a target object detection part detecting a region of a predetermined target object in the image; a target object detection distance selection part selecting the depth corresponding to the pixel included in the detected region as a target object detection distance; a local maximum distance selection part selecting the depth having a local maximum frequency in a frequency distribution of the depths as a local maximum distance; and a determination part determining whether the image is a target object image obtained by shooting the target object depending on whether a degree of closeness between a value of the target object detection distance and a value of the local maximum distance is higher than a predetermined value.
Abstract: Systems, methods, and articles of manufacture for automatic target recognition. A hypothesis about a target's classification, position, and orientation relative to a LADAR sensor that generates range image data of a scene including the target is simulated, and a synthetic range image is generated. The range image and synthetic range image are then electronically processed to produce a match score indicating whether the hypothesized model, position, and orientation are correct. If the score is sufficiently high, the hypothesis is declared correct; otherwise a new hypothesis is formed according to a search strategy.
Type:
Grant
Filed:
July 15, 2013
Date of Patent:
September 8, 2015
Assignee:
The United States of America as Represented by the Secretary of the Navy
Abstract: Apparatuses and methods for prefetching data are disclosed. A method may include receiving a read request at a data storage device, determining a meta key in an address map that includes a logical block address (LBA) of the read request, wherein the meta key includes a beginning LBA and a size field corresponding to a number of consecutive sequential LBAs stored on the data storage device, calculating a prefetch operation to prefetch data based on addresses included in the meta key, and reading data corresponding to the prefetch operation and the read request. An apparatus may include a processor configured to receive a read request, determine a first meta key and a second meta key in an address map, calculate a prefetch operation based on addresses included in the first meta key and the second meta key, and read data corresponding to the prefetch operation and the read request.
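The meta-key-driven prefetch in the abstract above can be illustrated with a small planner: if the requested LBA falls inside the run of consecutive sequential LBAs that the meta key describes, prefetch the remainder of the run. The function name, tuple layout, and return convention are assumptions for this sketch, not the patent's data structures:

```python
def plan_prefetch(read_lba, read_len, meta_key):
    """Given a meta key (begin_lba, size) describing a run of consecutive
    sequential LBAs on the device, return (prefetch_start, prefetch_count)
    covering the rest of the run beyond the read; count is 0 when the
    read is outside the run or already reaches its end."""
    begin, size = meta_key
    end = begin + size                    # one past the last LBA in the run
    read_end = read_lba + read_len
    if begin <= read_lba < end and read_end < end:
        return read_end, end - read_end   # prefetch the remaining run
    return read_end, 0
```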
Abstract: Disclosed are various embodiments that facilitate translation of applications. An image is obtained, and text shown within the image is recognized. Translated text is generated by translating the text from one language to another. The translated text is incorporated into the image. The image is then sent to another computing device.
Abstract: The recognition of user input to a computing device is enhanced. The user input is either speech, or handwriting data input by the user making screen-contacting gestures, or a combination of one or more prescribed words that are spoken by the user and one or more prescribed screen-contacting gestures that are made by the user, or a combination of one or more prescribed words that are spoken by the user and one or more prescribed non-screen-contacting gestures that are made by the user.
Abstract: A signal processing circuit supplies a gradation signal specifying a gradation to be displayed on pixels, comprising: a conversion unit that extracts an extraction signal specifying a gradation to be displayed on a predetermined number of pixels including a certain pixel for each of RGBW, from the video signal specifying a gradation to be displayed on a pixel for each of RGBW; a storage unit that stores a predetermined number of coefficients for each of RGBW; a first selection unit that selects, from the extraction signal, a single color signal specifying a gradation to be displayed on a block with regard to a display color of the certain pixel; a second selection unit that acquires a predetermined number of coefficients corresponding to the display color of the certain pixel; and a calculation unit that generates the gradation signal based on the outputs from the first selection unit and the second selection unit.
Abstract: An image generation device includes a video image generation means for generating a video image of a target photographic subject within a photographic subject on the basis of a plurality of captured images that are acquired by capturing images of the photographic subject in time series, and a background image generation means for generating a background image of the target photographic subject on the basis of the captured images.
Abstract: A method for processing data includes receiving a depth map of a scene containing at least an upper body of a humanoid form. The depth map is processed so as to identify a head and at least one arm of the humanoid form in the depth map. Based on the identified head and at least one arm, and without reference to a lower body of the humanoid form, an upper-body pose, including at least three-dimensional (3D) coordinates of shoulder joints of the humanoid form, is extracted from the depth map.
Abstract: A proof information processing apparatus adds a plurality of types of annotative information to a proof image by use of a plurality of input modes for inputting respective different types of annotative information. A proof information processing method is carried out by using the proof information processing apparatus. A recording medium stores a program for performing the functions of the proof information processing apparatus. An electronic proofreading system includes the proof information processing apparatus and a remote server. At least one of input modes including a text input mode, a stylus input mode, a color information input mode, and a speech input mode is selected depending on characteristics of an image in a region of interest which is indicated.
Abstract: Embodiments describe an image retrieval apparatus. The image retrieval apparatus includes an unlabelled image selector for selecting one or more unlabelled image(s) from an image database; and a main learner for training in each feedback round of the image retrieval, estimating relevance of images in the image database and a user's intention, and determining retrieval results, wherein the main learner makes use of the unlabelled image(s) selected by the unlabelled image selector in the estimation. In addition, the image retrieval apparatus may also include an active selector for selecting, in each feedback round and according to estimation results of the main learner, one or more unlabelled image(s) from the image database for the user to label.
Abstract: A method, computer readable storage device, and apparatus for determining the distance a computing device is located from a user's face. An image of an individual is obtained. A first pupil location and a second pupil location are identified based on the obtained image. A first distance between the identified first and second pupil location is determined. A second distance between the individual and the computing device is determined based on the determined first distance between the identified first and second pupil locations.
Type:
Grant
Filed:
February 15, 2013
Date of Patent:
May 26, 2015
Assignee:
Google Inc.
Inventors:
Richard Gossweiler, Gregory Sean Corrado
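The pupil-based distance estimate in the abstract above follows from similar triangles: the farther the face, the fewer pixels separate the pupils. The sketch below assumes a pinhole-camera model; the focal length in pixels and the typical interpupillary distance are illustrative values, not figures from the patent:

```python
def distance_from_pupils(pupil_px_dist, focal_px=600.0, ipd_mm=63.0):
    """Estimate camera-to-face distance (mm) by similar triangles:
    distance = focal_length_px * real_interpupillary_distance_mm
               / pixel_distance_between_pupils.
    focal_px and ipd_mm are assumed typical values."""
    return focal_px * ipd_mm / pupil_px_dist
```

For example, with these assumed constants, pupils 63 pixels apart put the face at about 600 mm, and halving the pixel distance doubles the estimated distance.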
Abstract: A spreadsheet application associates data obtained from a captured image with a spreadsheet. For example, one or more images of physical data may be captured and translated into electronic data that is automatically associated with one or more spreadsheets. The formatting and underlying formulas of the data included within the captured image may be represented within a spreadsheet (e.g. highlighted data remains highlighted within the electronic spreadsheet). The data may also be compared with existing electronic data. For example, differences between the data in the captured image and the data in an existing spreadsheet may be used to update the existing spreadsheet. A display of a captured image may also be augmented using data that is obtained from the captured image. For example, a chart may be created and displayed using data that is obtained from the captured image.
Type:
Grant
Filed:
January 24, 2011
Date of Patent:
May 26, 2015
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Amy Lin, Shahar Prish, Sherman Der, John Campbell
Abstract: Detecting a pattern in an image by receiving the image of a pattern and storing the image in a memory, where the pattern is composed of shapes that have geometrical properties that are invariant under near projective transforms. In some embodiments the process detects shapes in the image using the geometrical properties of the shapes, determines the alignment of the various shapes, and corresponds or matches the shapes in the image with the shapes in the pattern. This pattern detection process may be used for calibration or distortion correction in optical devices.
Type:
Grant
Filed:
January 24, 2012
Date of Patent:
May 19, 2015
Assignee:
Cognitech, Inc.
Inventors:
Leonid I. Rudin, Pablo Musé, Pascal Monasse
Abstract: This invention relates to a secured identification medium and a method for securing such a medium. The secured identification medium comprises an integrated circuit and, printed on one side, identification information (Ip) about the holder of the medium. It further comprises a set of characteristic attributes Att(Ipi) of the identification information, generated from a capture (Ipi) of the identification information and an extraction algorithm. The set of characteristic attributes of the printed analog image is stored in the integrated circuit and is designed to be compared, during an authentication stage, with a second set of characteristic attributes of the same printed image on the medium.
Abstract: The present invention provides a method, system and/or a digital camera providing a geometrical transformation of deformed images of documents comprising text, by text line tracking, resulting in an image comprising parallel text lines. The transformed image is provided as an input to an OCR program either running in a computer system or in a processing element comprised in said digital camera.
Type:
Grant
Filed:
May 19, 2006
Date of Patent:
May 19, 2015
Assignee:
LUMEX AS
Inventors:
Hans Christian Meyer, Mats Stefan Carlin, Knut Tharald Fosseide
Abstract: In a method and apparatus for identifying an embossed character, light of one color is directed in one direction across the embossed character to illuminate certain character parts and light of another color is directed in another direction across the embossed character to illuminate other character parts. Image data for the two colors are captured and are subjected to separate image processing to detect edges highlighted by the directed light. The processed images are combined and supplemented with OCR analysis before being compared with predicted characters. Based on the comparison, a determination is made as to the probable identity of the character.
Abstract: A noninvasive, quantitative imaging technique is presented for detecting and diagnosing liver disease, such as cirrhosis. The technique includes: capturing scan data from a subject using computed tomography or another type of imaging method and extracting image data representing the liver from the scan data. Various measures of the liver may be obtained from image data and then used to compute random variables of a statistical model, where the model is predictive of a medical condition of the liver and comprised of random variables that are indicative of at least one of a shape or texture of the liver. Output from the statistical model provides an indication of an undesirable condition of the liver.
Type:
Grant
Filed:
January 9, 2012
Date of Patent:
May 19, 2015
Assignee:
The Regents of The University of Michigan
Inventors:
Grace L. Su, Stewart Wang, Hannu Huhdanpaa
Abstract: Provided are an age estimation apparatus, an age estimation method, and an age estimation program capable of obtaining a recognition result closely matching the result perceived by humans. An age estimation apparatus 10 for estimating the age of a person in image data includes a dimension compressor 11 for applying dimension compression to the image data to output low-dimensional data; and an identification device 12 for estimating the age of a person on the basis of a learning result using a feature amount contained in the low-dimensional data, wherein a parameter used for the dimension compression by the dimension compressor 11 and the feature amount used for age estimation by the identification device 12 are set on the basis of a result of an evaluation of generalization capability using a weighting function that expresses the degree of seriousness of an age estimation error for every age, and learning of the identification device 12 is performed on the basis of the weighting function.
Type:
Grant
Filed:
April 14, 2010
Date of Patent:
May 19, 2015
Assignees:
NEC Solution Innovators, Ltd., TOKYO INSTITUTE OF TECHNOLOGY
Abstract: An image-classification apparatus includes: a first feature extraction unit to acquire a feature value of each of block images obtained by segmenting an input image; an area-segmentation unit to assign each of the block images to any one of K areas based on the feature value; a second feature extraction unit to acquire, based on an area-segmentation result, a feature vector whose elements include the number of adjacent spots, each including two block images adjacent to each other in the input image, for each combination of the areas to which the two block images are assigned, or a ratio of the number of block images assigned to each of the K areas to the total number of block images adjacent to a block image assigned to each of the K areas; and a classification unit to classify to which of a plurality of categories the input image belongs.
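The adjacency-count feature in the abstract above can be made concrete with a small helper that scans a grid of area assignments and counts, for each unordered pair of area labels, how many adjacent block pairs carry that pair. The grid representation and function name are illustrative assumptions, not the patent's formulation:

```python
from collections import Counter

def adjacency_features(area_grid):
    """Count adjacent block-image pairs for each (unordered) combination
    of area labels, scanning each right and down neighbour exactly once.
    area_grid is a list of rows of area indices."""
    counts = Counter()
    rows, cols = len(area_grid), len(area_grid[0])
    for r in range(rows):
        for c in range(cols):
            a = area_grid[r][c]
            if c + 1 < cols:  # right neighbour
                counts[tuple(sorted((a, area_grid[r][c + 1])))] += 1
            if r + 1 < rows:  # down neighbour
                counts[tuple(sorted((a, area_grid[r + 1][c])))] += 1
    return dict(counts)
```

The resulting counts (or ratios derived from them) would form the feature vector handed to the classification unit.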
Abstract: A system for performing an image categorization procedure includes an image manager with a keypoint generator, a support region filter, an orientation filter, and a matching module. The keypoint generator computes initial descriptors for keypoints in a test image. The support region filter and the orientation filter perform respective filtering procedures upon the initial descriptors to produce filtered descriptors. The matching module compares the filtered descriptors to one or more database image sets for categorizing said test image. A processor of an electronic device typically controls the image manager to effectively perform the image categorization procedure.
Abstract: A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criterion of the player.
Type:
Grant
Filed:
June 1, 2011
Date of Patent:
May 5, 2015
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Stephen Latta, Relja Markovic, Kevin Geisner, A. Dylan Vance, Brian Scott Murphy, Matt Coohill
Abstract: According to one embodiment, an electronic apparatus includes a line recognition module, a character recognition module and a generator. The line recognition module recognizes lines in a handwritten document. The character recognition module recognizes character codes corresponding to handwritten characters in a first line and a second line which follows the first line. The generator generates, if the first and second lines satisfy a condition, document data using first character codes corresponding to the first line and second character codes corresponding to the second line, the formed document data including either one of the first character codes at a position of the second line or including at least one of the second character codes at a position of the first line.
Abstract: An image recognition method and an image recognition system can be applied to fetal ultrasound images. The image recognition method includes: (a) adjusting the image with a filter operator to decrease noise and to homogenize the image expression level of the pixel units within an individual object structure; (b) analyzing the image with a statistical information function and determining foreground object pixel units and background pixel units according to a maximum information entropy state of the statistical information function; and (c) searching by a profile setting value and recognizing a target object image among the foreground object pixel units. The image recognition method can not only increase the efficiency of identifying and measuring the object of interest within the image, but also improve the precision of measurements of the object of interest.
Type:
Grant
Filed:
March 14, 2013
Date of Patent:
April 28, 2015
Assignee:
National Taiwan University of Science and Technology
Abstract: An image management device clusters acquired images (S201) and generates blocks by grouping the images (S202). Next, the image management device calculates an intra-block importance degree of each cluster in each generated block (S204), calculates cluster importance degrees by accumulating the calculated intra-block importance degrees of each cluster (S205), and calculates an image importance degree based on the calculated cluster importance degrees (S206).
Type:
Grant
Filed:
November 16, 2011
Date of Patent:
April 28, 2015
Assignee:
Panasonic Intellectual Property Corporation of America
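The accumulation steps (S204-S206) in the image-management abstract above reduce to summing intra-block importances per cluster and then scoring an image by the clusters it contains. The data layout below is an illustrative assumption, not the patent's structures:

```python
def image_importance(blocks, image_clusters):
    """Accumulate per-block intra-block importance degrees into cluster
    importance degrees (S204-S205), then score an image by summing the
    importances of the clusters it contains (S206).
    blocks: list of {cluster_id: intra-block importance} dicts."""
    cluster_importance = {}
    for block in blocks:
        for cluster, imp in block.items():
            cluster_importance[cluster] = cluster_importance.get(cluster, 0.0) + imp
    return sum(cluster_importance.get(c, 0.0) for c in image_clusters)
```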
Abstract: An embodiment generally relates to systems and methods for estimating heart rates of individuals using non-contact imaging. A processing module can process multi-spectral video images of individuals and detect skin blobs within different images of the multi-spectral video images. The skin blobs can be converted into time series signals and processed with a band pass filter. Further, the time series signals can be processed to separate pulse signals from unnecessary signals. The heart rate of the individual can be estimated according to the resulting time series signal processing.
Abstract: A method, system and apparatus are shown for identifying non-language speech sounds in a speech or audio signal. An audio signal is segmented and feature vectors are extracted from the segments of the audio signal. The segment is classified using a hidden Markov model (HMM) that has been trained on sequences of these feature vectors. Post-processing components can be utilized to enhance classification. An embodiment is described in which the hidden Markov model is used to classify a segment as a language speech sound or one of a variety of non-language speech sounds. Another embodiment is described in which the hidden Markov model is trained using discriminative learning.
Abstract: Input information of a multidimensional array is divided into a plurality of divided areas, accumulated information is generated by calculating accumulated values at respective element positions of the input information from a corresponding reference location for each of the plurality of divided areas, and the generated accumulated information is held in a memory for each divided area. Calculation using the accumulated information is executed for a predetermined processing range. The input information is divided into the plurality of divided areas so that two neighboring divided areas have an overlapping area, and the overlapping area is at least large enough that the whole processing range fits within it.
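The accumulated information described above is, in the 2-D case, a summed-area (integral) table: once a divided area has been accumulated, the sum over any processing window inside it costs four lookups. The sketch below shows that core mechanism for a single area; the overlap between neighbouring areas (which guarantees every window fits wholly inside some area) is assumed handled by the caller, and the function names are illustrative:

```python
def accumulate(block):
    """Summed-area table of a 2-D block: acc[r][c] = sum of block[0..r][0..c]."""
    rows, cols = len(block), len(block[0])
    acc = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        run = 0.0
        for c in range(cols):
            run += block[r][c]
            acc[r][c] = run + (acc[r - 1][c] if r > 0 else 0.0)
    return acc

def region_sum(acc, r0, c0, r1, c1):
    """Sum over the inclusive rectangle [r0..r1] x [c0..c1] using at most
    four table lookups, i.e. O(1) per processing window."""
    total = acc[r1][c1]
    if r0 > 0:
        total -= acc[r0 - 1][c1]
    if c0 > 0:
        total -= acc[r1][c0 - 1]
    if r0 > 0 and c0 > 0:
        total += acc[r0 - 1][c0 - 1]
    return total
```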
Abstract: A method of detection of numbered captions in a document includes receiving a document including a sequence of document pages and identifying illustrations on pages of the document. For each identified illustration, associated text is identified. An imitation page is generated for each of the identified illustrations, each imitation page comprising a single illustration and its associated text. For a sequence of the imitation pages, a sequence of terms is identified. Each term is derived from a text fragment of the associated text of a respective imitation page. The terms of the sequence comply with at least one predefined numbering scheme, which defines a form and an incremental state of the terms in the sequence. The terms of the identified sequence of terms are construed as being at least a part of a numbered caption for a respective illustration in the document.
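A numbering scheme of the kind described pairs a form (e.g. "FIG. <n>") with an incremental state (the expected next number). A minimal sketch of that matching step, with an invented pattern and example fragments:

```python
import re

# Hypothetical numbering scheme: "FIG. <n>" with strictly incrementing n.
PATTERN = re.compile(r"FIG\.\s*(\d+)", re.IGNORECASE)

def numbered_captions(fragments):
    """Keep caption terms whose number matches the expected next state."""
    sequence, expected = [], 1
    for text in fragments:
        m = PATTERN.search(text)
        if m and int(m.group(1)) == expected:
            sequence.append(m.group(0))
            expected += 1  # advance the incremental state
    return sequence

fragments = ["Fig. 1 shows a scanner", "unrelated body text",
             "Fig. 2 is a block diagram", "Fig. 7 breaks the sequence"]
caps = numbered_captions(fragments)
```

"Fig. 7" is rejected because the incremental state expects 3, mirroring how the method distinguishes true numbered captions from stray figure references.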
Abstract: Methods, systems, and apparatus are disclosed which enable flexible insertion of forensic watermarks into a digital content signal using a common customization function. The common customization function flexibly employs a range of different marking techniques that are applicable to a wide range of forensic marking schemes. These customization functions are also applicable to pre-processing and post-processing operations that may be necessary for enhancing the security and transparency of the embedded marks, as well as improving the computational efficiency of the marking process. The common customization function supports a well-defined set of operations specific to the task of forensic mark customization that can be carried out with a modest and preferably bounded effort on a wide range of devices. This is accomplished through the use of a generic transformation technique for use as a “customization” step for producing versions of content forensically marked with any of a multiplicity of mark messages.
Abstract: A system that incorporates teachings of the present disclosure may include, for example, sampling a variable effect distribution of viewing preference data to determine a first set of effects comprising a plurality of first distortion type effects associated with a first distortion type of a first image, and to determine a second set of effects comprising a plurality of second distortion type effects associated with a second distortion type of a second image; calculating a preference estimate from a logistic regression model of the viewing preference data according to the first set of effects and the second set of effects, wherein the preference estimate comprises a probability that the first image is preferred over the second image; and selecting one of the first distortion type or the second distortion type according to the preference estimate. Other embodiments are disclosed.
Type:
Grant
Filed:
September 13, 2013
Date of Patent:
April 14, 2015
Assignee:
AT&T Intellectual Property I, LP
Inventors:
Amy R. Reibman, Kenneth Shirley, Chao Tian
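The preference estimate in the abstract above is a logistic (Bradley-Terry-style) probability computed from the two images' sets of effects. A minimal sketch with invented effect values (a real system fits these to viewer data by logistic regression):

```python
import math

def preference_probability(effects_a, effects_b):
    """P(image A preferred over image B) from summed distortion effects."""
    score = sum(effects_a) - sum(effects_b)
    return 1.0 / (1.0 + math.exp(-score))  # logistic link

# Invented per-effect values sampled for each image's distortion type.
blur_effects = [0.4, -0.1, 0.3]    # first image, first distortion type
noise_effects = [-0.2, 0.1, -0.3]  # second image, second distortion type

p = preference_probability(blur_effects, noise_effects)
choice = "first distortion type" if p >= 0.5 else "second distortion type"
```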
Abstract: A camera system (10) is provided for generating an image presegmented into regions (106a-b) of interest and of no interest, having an evaluation unit (20) which is designed to divide the raw image into part regions (106a-b), to calculate a contrast value for each part region (106a-b), and to decide with reference to the contrast value whether the respective part region (106a-b) is a region of interest (106a) or a region of no interest (106b). In this respect, the evaluation unit (20) has a preprocessing unit (22), implemented on an FPGA, which respectively accesses the pixels of a part region (106a-b) and generates summed values (a, b) for the respective part region (106a-b), and has a structure recognition unit (24) which calculates the contrast value of the part region (106a-b) from its summed values (a, b) without accessing pixels of the part region (106a-b).
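One common way to realize "contrast from summed values without re-accessing pixels" is to take a as the pixel sum and b as the sum of squares, from which the variance follows directly. This is an assumed interpretation for illustration (the patent does not fix the formula here), and the tile data and threshold are invented:

```python
import numpy as np

def tile_sums(tile):
    """Single pixel pass per tile (the FPGA preprocessing stage)."""
    t = tile.astype(np.float64)
    return t.sum(), (t * t).sum()      # summed values a, b

def contrast_from_sums(a, b, n):
    """Variance from (a, b) alone: E[x^2] - E[x]^2. No pixel access."""
    return b / n - (a / n) ** 2

flat = np.full((4, 4), 7.0)                # uniform tile -> no structure
textured = np.tile([[0.0, 10.0]], (4, 2))  # alternating tile -> structure

flat_c = contrast_from_sums(*tile_sums(flat), flat.size)
tex_c = contrast_from_sums(*tile_sums(textured), textured.size)
regions_of_interest = [c > 1.0 for c in (flat_c, tex_c)]  # invented threshold
```

The uniform tile yields zero contrast and is classified as of no interest; the textured tile yields a large contrast, all from two scalars per tile.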
Abstract: A system including an image capturing unit configured to capture an image of at least one medical device monitoring a patient, a database including images of a plurality of medical devices, where each image corresponds to a particular medical device, and a data collection server configured to receive the at least one image, receive patient identification data corresponding to the patient, and identify the medical device in the image by comparing the received image with the images stored in the database and matching the received image with the images stored in the database.
Type:
Grant
Filed:
February 15, 2013
Date of Patent:
April 7, 2015
Assignee:
Covidien LP
Inventors:
David Fox, Robert T. Boyer, William A. Jordan, II
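The identification step in the abstract above amounts to nearest-neighbor matching of a captured image against a reference database. A toy sketch using grayscale-histogram distance (a deliberately simple stand-in; real systems would use robust image features, and all device names and images here are synthetic):

```python
import numpy as np

def histogram(img, bins=16):
    """Normalized grayscale histogram as a crude image descriptor."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def identify(captured, database):
    """Return the database entry whose histogram is closest (L1) to the capture."""
    cap_h = histogram(captured)
    return min(database,
               key=lambda name: np.abs(histogram(database[name]) - cap_h).sum())

rng = np.random.default_rng(0)
pump = rng.integers(0, 100, (32, 32))           # darker reference image
ventilator = rng.integers(150, 256, (32, 32))   # brighter reference image
database = {"infusion pump": pump, "ventilator": ventilator}

# Captured image: the pump under slight sensor noise.
captured = np.clip(pump + rng.integers(-5, 6, (32, 32)), 0, 255)
device = identify(captured, database)
```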