Patents Issued on December 1, 2011
-
Publication number: 20110293142
Abstract: Method for improving the visibility of objects and recognizing objects in a set of images recorded by one or more cameras, the images of said set of images being made from mutually different geometric positions, the method comprising the steps of recording a set or subset of images by means of one camera which is moved rather freely and which makes said images during its movement, thus providing an array of subsequent images, estimating the camera movement between subsequent image recordings, also called ego-motion hereinafter, based on features of those recorded images, registering the camera images using a synthetic aperture method, and recognizing said objects.
Type: Application
Filed: December 1, 2009
Publication date: December 1, 2011
Inventors: Wannes van der Mark, Peter Tettelaar, Richard Jacobus Maria den Hollander
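Illustrative sketch (not from the filing): one generic way to register frames from a freely moving camera and accumulate them is feature-based homography alignment followed by averaging, a crude stand-in for the ego-motion estimation and synthetic-aperture registration described above. OpenCV and NumPy are assumed; all function names here are my own.

```python
# Hedged sketch: align a burst of frames to the first one via ORB features and
# RANSAC homographies (stand-in for ego-motion estimation), then average the
# warped frames (synthetic-aperture style accumulation).
import cv2
import numpy as np

def synthetic_aperture_average(frames):
    """Register frames[1:] to frames[0] and return their mean image."""
    ref = frames[0]
    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    acc = ref.astype(np.float64)
    count = 1
    for frame in frames[1:]:
        kp, des = orb.detectAndCompute(frame, None)
        if des is None or des_ref is None:
            continue
        matches = matcher.match(des, des_ref)
        if len(matches) < 4:
            continue
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            continue
        warped = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
        acc += warped
        count += 1
    return (acc / count).astype(np.uint8)
```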
-
Publication number: 20110293143
Abstract: A method includes generating a kinetic parameter value for a VOI in a functional image of a subject based on motion corrected projection data using an iterative algorithm, including determining a motion correction for projection data corresponding to the VOI based on the VOI, motion correcting the projection data corresponding to the VOI to generate the motion corrected projection data, and estimating the at least one kinetic parameter value based on the motion corrected projection data or image data generated with the motion corrected projection data. In another embodiment, a method includes registering functional image data indicative of tracer uptake in a scanned patient with image data from a different imaging modality, identifying a VOI in the image based on the registered images, generating at least one kinetic parameter for the VOI, and generating a feature vector including the at least one generated kinetic parameter and at least one bio-marker.
Type: Application
Filed: January 12, 2010
Publication date: December 1, 2011
Applicant: KONINKLIJKE PHILIPS ELECTRONICS N.V.
Inventors: Manoj V. Narayanan, Jens-Christoph Georgi, Frank O. Thiele, Ralph Brinks, Michael Perkuhn
-
Publication number: 20110293144
Abstract: Systems and methods for rendering an entertainment animation. The system can comprise a user input unit for receiving a non-binary user input signal; an auxiliary signal source for generating an auxiliary signal; a classification unit for classifying the non-binary user input signal with reference to the auxiliary signal; and a rendering unit for rendering the entertainment animation based on classification results from the classification unit.
Type: Application
Filed: August 20, 2009
Publication date: December 1, 2011
Applicant: AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH
Inventors: Susanto Rahardja, Farzam Farbiz, Zhiyong Huang, Ee Ping Ong, Corey Mason Manders, Ti Eu Chan, Bryan Jyh Herng Chong
-
Publication number: 20110293145
Abstract: Provided are a driving support device, a driving support method, and a program, with which the driver can more intuitively and accurately determine the distance to another vehicle to the side and rear. A driving support device (100) is provided with an image-capturing section (110) for capturing side-rear images of the vehicle, a distance measuring section (120) for measuring the distance between the vehicle and the other vehicle, and a vehicle detection section (130) for detecting the other vehicle in the captured images, wherein a superimposed image generating section (140) calculates the distance in units of the vehicle length on the basis of information concerning the vehicle length stored in a vehicle length information storage section (150) and the distance to the other vehicle detected by the distance measuring section (120), and generates a superimposed image on the basis of the calculated distance.
Type: Application
Filed: April 15, 2010
Publication date: December 1, 2011
Applicant: PANASONIC CORPORATION
Inventors: Tateaki Nogami, Yuki Waki, Kazuhiko Iwai
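Illustrative sketch (not from the filing): the core arithmetic is simply expressing a measured gap in multiples of the driver's own vehicle length, which is what the superimposed image would visualize. The names and the 4.5 m default are my own illustrative choices.

```python
# Hedged sketch of the distance-in-vehicle-lengths idea.
def distance_in_vehicle_lengths(gap_m: float, own_vehicle_length_m: float = 4.5) -> float:
    """Return the measured gap expressed in units of the vehicle length."""
    if own_vehicle_length_m <= 0:
        raise ValueError("vehicle length must be positive")
    return gap_m / own_vehicle_length_m

# Example: a 13.5 m gap behind a 4.5 m car is 3.0 vehicle lengths.
print(distance_in_vehicle_lengths(13.5))  # -> 3.0
```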
-
Publication number: 20110293146
Abstract: Methods and systems for estimating peak location on a sampled surface (e.g., a correlation surface generated from pixelated images) utilize one or more processing techniques to determine multiple peak location estimates for at least one sampled data set at a resolution smaller than the spacing of the data elements. Estimates selected from the multiple peak location estimates are combined (e.g., a group of estimates is combined by determining a weighted average of the estimates selected for the group) to provide one or more refined estimates. In example embodiments, multiple refined estimates are combined to provide an estimate of overall displacement (e.g., of an image or other sampled data representation of an object).
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Inventor: Thomas J. Grycewicz
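Illustrative sketch (not the patented estimator): a common way to obtain a sub-pixel peak location from a sampled correlation surface is a 1-D parabolic fit around the integer maximum in each axis; several such estimates can then be combined with a weighted average. NumPy assumed; function names are mine.

```python
# Hedged sketch: parabolic sub-pixel peak refinement plus weighted combination.
import numpy as np

def subpixel_peak(surface: np.ndarray) -> tuple[float, float]:
    """Return (row, col) of the peak with parabolic sub-pixel refinement."""
    r, c = np.unravel_index(np.argmax(surface), surface.shape)

    def refine(m1, m0, p1):
        # Vertex offset of the parabola through (-1, m1), (0, m0), (1, p1).
        denom = m1 - 2.0 * m0 + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    dr = refine(surface[r - 1, c], surface[r, c], surface[r + 1, c]) if 0 < r < surface.shape[0] - 1 else 0.0
    dc = refine(surface[r, c - 1], surface[r, c], surface[r, c + 1]) if 0 < c < surface.shape[1] - 1 else 0.0
    return r + dr, c + dc

def combine_estimates(estimates, weights):
    """Weighted average of several (row, col) peak estimates."""
    pts = np.asarray(estimates, dtype=float)
    return tuple(np.average(pts, axis=0, weights=np.asarray(weights, dtype=float)))
```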
-
Publication number: 20110293147
Abstract: According to one embodiment, a movement detection apparatus includes an arithmetic module, an edge-storing filter, a determination module, and a control module. The arithmetic module calculates a difference signal between an input image signal and an image signal of the previous frame. The filter performs smoothing processing for a signal falling within a level range provided as a threshold value, among the difference signals calculated by the arithmetic module. The determination module determines the levels of a movement component and a noise component of the signal output from the filter. The control module controls the level range supplied as a threshold value to the filter in accordance with an amplitude level of the noise component overlying the input image signal.
Type: Application
Filed: December 9, 2010
Publication date: December 1, 2011
Inventors: Toshiaki Utsumi, Tetsuo Sakurai
-
Publication number: 20110293148
Abstract: A non-transitory computer-readable medium storing a content determination program causes a computer to perform processes of detecting persons appearing in chronologically photographed images, detecting a position of each of the persons, calculating a moving velocity of each of the persons, setting a group including a part of the persons on the basis of the moving velocities and the positions, acquiring attribute information of the group on the basis of a person image corresponding to each of the part of the persons included in the group, and determining, on the basis of a correspondence relationship between the attribute information of the group and attribute information of content images stored in a storage unit, one of the content images to be projected to a position which each of the part of the persons of the group recognizes.
Type: Application
Filed: May 17, 2011
Publication date: December 1, 2011
Applicant: FUJITSU LIMITED
Inventor: Sachio KOBAYASHI
-
Publication number: 20110293149
Abstract: Certain embodiments of the present invention provide a method and apparatus for identifying vascular structure in an image including: receiving at least one image including a vascular network; identifying at least one seed point corresponding to the vascular network; identifying automatically at least a portion of the vascular network to form an original vascular identification based at least in part on the at least one seed point; and allowing a dynamic user interaction with the vascular identification to form an iterative vascular identification. In an embodiment, the iterative vascular identification is formable in real-time. In an embodiment, the iterative vascular identification is displayable in real-time. In an embodiment, the iterative vascular identification is formable without re-identifying substantially unaltered portions of the vascular identification.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Applicant: GENERAL ELECTRIC COMPANY
Inventors: Renaud Capolunghi, Laurent Launay, Laurent Stefani, Ruben Laramontalvo
-
Publication number: 20110293150
Abstract: Certain embodiments of the present invention provide a method and apparatus for identifying and segmenting vascular structure in an image including: receiving at least one image including a vascular network; identifying at least one seed point corresponding to the vascular network; identifying automatically at least a portion of the vascular network to form an original vascular identification based at least in part on the at least one seed point; and allowing a dynamic user interaction with the vascular identification to form an iterative vascular identification. In an embodiment, the iterative vascular identification is formable in real-time. In an embodiment, the iterative vascular identification is displayable in real-time. In an embodiment, the iterative vascular identification is formable without re-identifying substantially unaltered portions of the vascular identification.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Applicant: GENERAL ELECTRIC COMPANY
Inventors: Renaud Capolunghi, Laurent Launay, Laurent Stefani, Ruben Laramontalvo
-
Publication number: 20110293151
Abstract: The invention concerns a new method for quantifying particulate contaminants with increased reliability and that makes it possible to detect all particulate sizes according to standard ISO 14644-9. According to the invention, blowing/suction is done on a sampled surface S?, followed by a combination of analysis through optical counting of the suctioned particulate contaminants with a unit size below a value X2 and digital computation between two digital photos taken of the surface S? before and after blowing/suction, respectively, to quantify the particulates with a unit size above a value X1.
Type: Application
Filed: June 24, 2009
Publication date: December 1, 2011
Applicant: Commissariat a L'Energie Atomique et aux Energies Alternatives
Inventor: Isabelle Tovena-Pecault
-
Publication number: 20110293152
Abstract: A medical imaging system and an image processing method for producing an optimized image from an input image are provided. The medical imaging system comprises: a parameter accumulator configured to accumulate a preset number of basic parameters; a parameter determiner configured to produce new reference parameters based on current reference parameters and the accumulated basic parameters and to replace the current reference parameters with the new reference parameters; an image processor configured to process an input image to generate an optimized image according to an image processing algorithm based on the reference parameters sent from the parameter determiner; and a controller configured to control overall operation of the medical imaging system.
Type: Application
Filed: May 9, 2011
Publication date: December 1, 2011
Inventors: Doo Hyun CHOI, Jae Yoon Shim
-
Publication number: 20110293153
Abstract: The invention relates to a method for quantitative determination of test results from diagnosis methods with the aid of an optoelectronic evaluation appliance, and to the evaluation appliance itself, characterised in that the digital pixel information per colour level or grey level is represented in its intensity in the microprocessor as one column per pixel, wherein the column height corresponds to the intensity, and these columns are displayed alongside one another on one plane, such that the intensity distribution is displayed over the test area as a surface contour or surface profile, the height profile of which corresponds to the intensity profile of the colour intensity received by the CCD. Fields of application for the invention are test methods in biochemical laboratories, such as medical diagnosis, forensic medicine, foodstuff diagnosis, molecular biology, biochemistry, gene technology and all other related fields, as well as patient monitoring for home users or in pharmacies.
Type: Application
Filed: November 16, 2009
Publication date: December 1, 2011
Applicant: OPTRICON GMBH
Inventors: Volker Plickert, Lutz Melchior, Wilko Hein, Thorsten Jödicke
-
Publication number: 20110293154
Abstract: A method for characterizing a sample by imaging fluorescence microscopy includes detecting fluorescence intensity in a time-resolved fashion after switching off excitation radiation to establish a decay function, representing the decay of the fluorescence intensity over time, for a multiplicity of pixels, comparing the decay functions associated with the pixels to at least one reference decay function to establish an error value for one or more pixels, the error value associated with a pixel being a measure for a deviation of the decay function associated with the pixel from the reference decay function, and generating an image of the sample using the error values.
Type: Application
Filed: January 14, 2010
Publication date: December 1, 2011
Applicant: Eberhard Karls Universität Tübingen
Inventors: Alfred Meixner, Frank Schleifenbaum
-
Publication number: 20110293155
Abstract: As an illustration of generating a motion map, although cardiac CT is described for selecting an optimal phase, the disclosure is not limited to cardiac CT. For cardiac CT, the cardiac phase map is efficiently generated based upon helical scan data, and the optimal phase is selected within a reasonable time. At the same time, the optimal phase is accurately determined based upon complementary rays as indexes for minimal movement so as to select the projection data for minimizing artifacts in reconstructed cardiac images. The helically scanned data reflect motion within the same cardiac cycle or over continuous cardiac cycles. The application of the complementary ray technique to the helically scanned data is accomplished by three-dimensionally determining a pair of the complementary rays in order to take into account motion within the same cardiac cycle or over continuous cardiac cycles.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Applicants: TOSHIBA MEDICAL SYSTEMS CORPORATION, KABUSHIKI KAISHA TOSHIBA
Inventors: Satoru NAKANISHI, Be-Shan S. CHIANG
-
Publication number: 20110293156
Abstract: A computer for aiding determination of Obstructive Sleep Apnea (OSA) includes a storage device storing a medical image and a central processing unit (CPU). The CPU executes a method for aiding determination of OSA. The method for aiding determination of OSA includes the following steps. The medical image is obtained. An upper airway model is established. A narrowest cross-section and a nasopharyngeal boundary cross-section are defined in the airway model. A cross-sectional area of the narrowest cross-section and a cross-sectional area of the nasopharyngeal boundary cross-section are calculated. A stenosis rate is calculated according to the cross-sectional area of the narrowest cross-section and the cross-sectional area of the nasopharyngeal boundary cross-section. The stenosis rate is provided. In addition, in the method for aiding determination of OSA, a respiratory flow field simulation may be further performed to obtain and provide a flow field pressure distribution of the upper airway model.
Type: Application
Filed: October 27, 2010
Publication date: December 1, 2011
Applicant: NATIONAL APPLIED RESEARCH LABORATORIES
Inventors: Hung Ta Hsiao, Sheng Chuan Wang, Franco Lin, Lung Cheng Lee, Chih Min Yao, Ning Hung Chen, Chung Chih Yu
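Illustrative sketch (the abstract does not give the exact formula): one natural reading of the stenosis rate is the relative narrowing of the smallest cross-section with respect to the nasopharyngeal boundary cross-section, which this hypothetical helper computes.

```python
# Hedged sketch of the stenosis-rate arithmetic; the formula is an assumption.
def stenosis_rate(narrowest_area_mm2: float, boundary_area_mm2: float) -> float:
    """Fractional narrowing of the upper airway (0 = no narrowing, 1 = fully closed)."""
    if boundary_area_mm2 <= 0:
        raise ValueError("boundary cross-sectional area must be positive")
    return 1.0 - narrowest_area_mm2 / boundary_area_mm2

# Example: a 60 mm^2 narrowest section against a 240 mm^2 boundary section -> 0.75.
print(stenosis_rate(60.0, 240.0))
```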
-
Publication number: 20110293157
Abstract: A segmentation method comprises clustering spatial, intensity and volumetric shape index features to automatically segment a medical lesion. The algorithm has the following steps: (1) calculating a volumetric shape index (SI) for each voxel in the image; (2) combining the SI features with the intensity range and the spatial position (x, y, z) to form a 5-dimensional feature vector set; (3) grouping the 5-dimensional feature vector set into clusters; (4) employing a modified expectation-maximization (EM) algorithm, considering not only spatial but also shape features, on an intensity mode map from the clustering algorithm to merge the neighbouring regions or modes. The joint spatial-intensity-shape feature provides rich information for the segmentation of the anatomic structures of interest, such as lesions or tumours.
Type: Application
Filed: July 2, 2009
Publication date: December 1, 2011
Applicant: Medicsight PLC
Inventors: Xujiong Ye, Gregory Gibran Slabaugh, Gareth Richard Beddoe, Xinyu Lin, Abdel Douri
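Illustrative sketch of steps (2) and (3) only: assemble a 5-D (x, y, z, intensity, shape index) feature vector per voxel and cluster the vectors. KMeans here is a stand-in for whichever clustering the patent uses, and the shape index is taken as a precomputed volume rather than derived from the Hessian. NumPy and scikit-learn assumed.

```python
# Hedged sketch: build the 5-D feature vectors and cluster them.
import numpy as np
from sklearn.cluster import KMeans

def cluster_voxels(intensity: np.ndarray, shape_index: np.ndarray, n_clusters: int = 5):
    """intensity, shape_index: 3-D arrays of the same shape. Returns a label volume."""
    z, y, x = np.meshgrid(*[np.arange(s) for s in intensity.shape], indexing="ij")
    features = np.stack(
        [x.ravel(), y.ravel(), z.ravel(), intensity.ravel(), shape_index.ravel()], axis=1
    ).astype(float)
    # Normalise each feature so spatial, intensity and shape scales are comparable.
    features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(intensity.shape)
```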
-
Publication number: 20110293158
Abstract: A method and an image-reconstruction apparatus are disclosed for reconstructing image data on the basis of measurement data from an imaging system. In at least one embodiment of the process, initial image data, initially reconstructed from the measurement data, is optimized in an iterative optimization method utilizing a substantially edge-maintaining noise regularization term and an additional sparsity regularization term.
Type: Application
Filed: May 19, 2011
Publication date: December 1, 2011
Applicant: SIEMENS AKTIENGESELLSCHAFT
Inventor: Stefan Popescu
-
Publication number: 20110293159
Abstract: A method is disclosed for reconstruction of image data of an object under examination from measurement data, with the measurement data having been captured during a relative rotational movement between a radiation source of a computed tomography system and the object under examination. In at least one embodiment, first image data is computed by modifying the measurement data to obtain a specific gray value characteristic of the first image data to be reconstructed, and then computing the first image data by way of an iterative algorithm using the modified measurement data. Second image data is also computed by reconstructing a series of chronologically consecutive images and processing the series of images to reduce temporal noise. Finally, a combination of the first and the second image data is carried out.
Type: Application
Filed: May 19, 2011
Publication date: December 1, 2011
Applicant: SIEMENS AKTIENGESELLSCHAFT
Inventors: Herbert Bruder, Rainer Raupach, Karl Stierstorfer
-
Publication number: 20110293160
Abstract: A method is disclosed for reconstructing image data of an examination object from measured data, wherein the measured data was captured previously during a relative rotary motion between a radiation source of a computed tomography system and the examination object. In at least one embodiment, the measured data is modified to achieve a particular grayscale characteristic of the image data to be reconstructed. The image data is calculated by way of an iterative algorithm using the modified measured data, wherein no arithmetic step for reducing noise is employed in the iterations.
Type: Application
Filed: May 19, 2011
Publication date: December 1, 2011
Applicant: SIEMENS AKTIENGESELLSCHAFT
Inventors: Herbert Bruder, Martin Petersilka, Rainer Raupach, Karl Schwarz
-
Publication number: 20110293161
Abstract: Techniques for background subtraction in computed tomography include determining voxels in a slice of interest in a three-dimensional computed tomography scan of the interior of a body based on a first set of measurements of radiation transmitted through the body. Based on the first set of measurements, a first background image for radiation transmitted through the body in a first direction is determined without the effects of the voxels in the slice of interest. A current image is determined based on a different current measurement of radiation transmitted through the body in the first direction. A first difference is determined between the current image and the first background image. The result is a high-contrast image in the slice of interest even from a single current projection image.
Type: Application
Filed: May 31, 2011
Publication date: December 1, 2011
Applicant: UNIVERSITY OF MARYLAND, BALTIMORE
Inventors: Byong Yong Yi, X. Cedric Yu, Jin Zhang, Giovanni Lasio
-
Publication number: 20110293162
Abstract: An image data subtraction system enhances visualization of vessels subject to movement using an imaging system. The imaging system acquires data representing first and second anatomical image sets comprising multiple temporally sequential individual mask (without contrast agent) and fill (with contrast agent) images of vessels, respectively, during multiple heart cycles. An image data processor automatically identifies temporally corresponding pairs of images comprising a mask image and a contrast enhanced image and, for the corresponding pairs, automatically determines a shift of a contrast enhanced image relative to a mask image to compensate for motion-induced image misalignment. The image data processor automatically shifts a contrast enhanced image relative to a mask image in response to the determined shift.
Type: Application
Filed: January 24, 2011
Publication date: December 1, 2011
Applicant: SIEMENS MEDICAL SOLUTIONS USA, INC.
Inventor: Michael Pajeau
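Illustrative sketch (not the filing's method): a minimal version of shift-compensated digital subtraction searches a small window of integer pixel shifts, picks the one that best aligns the contrast-enhanced (fill) frame with the mask frame, and then subtracts; a real system would likely use sub-pixel registration. NumPy assumed.

```python
# Hedged sketch: brute-force integer shift search, then shifted subtraction.
import numpy as np

def best_shift(mask: np.ndarray, fill: np.ndarray, search: int = 8) -> tuple[int, int]:
    """Return (dy, dx) minimizing the sum of squared differences after shifting fill."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(fill, dy, axis=0), dx, axis=1)
            err = np.sum((shifted.astype(float) - mask.astype(float)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def subtract_with_shift(mask: np.ndarray, fill: np.ndarray) -> np.ndarray:
    """Shift the fill image by the estimated offset, then subtract the mask."""
    dy, dx = best_shift(mask, fill)
    aligned = np.roll(np.roll(fill, dy, axis=0), dx, axis=1)
    return aligned.astype(float) - mask.astype(float)
```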
-
Publication number: 20110293163
Abstract: A system identifies a stent in an image using luminance density and anatomical information. An X-ray imaging system automatically detects and indicates the location of an invasive anatomical device in an image. An interface acquires data representing X-ray images of patient vessels and data identifying a particular vessel containing a medical device. An image data processor employs a model of anatomical vessels to select a region of interest in a vessel identified by the acquired data and automatically determines a location of the medical device in an acquired image by determining at least a portion of an outline of the medical device by detecting a luminance transition in the acquired image using an image edge detector. A display processor initiates generation of data depicting the location of the medical device in the acquired image in response to determining the at least a portion of the outline of the medical device.
Type: Application
Filed: January 28, 2011
Publication date: December 1, 2011
Applicant: SIEMENS MEDICAL SOLUTIONS USA, INC.
Inventors: Soroosh Kargar, Weng Lei
-
Publication number: 20110293164
Abstract: An X-ray imaging apparatus is configured to subtract a first X-ray image from a second X-ray image to generate a first subtraction image showing information on a blood vessel, calculate an amount of pixel shift between the first X-ray image and a third X-ray image, subtract the first X-ray image from the third X-ray image to generate a second subtraction image showing information on an insertion instrument, and combine the first subtraction image with the second subtraction image to generate a synthetic image by performing a pixel shift correction based on the amount of pixel shift.
Type: Application
Filed: May 27, 2011
Publication date: December 1, 2011
Applicants: TOSHIBA MEDICAL SYSTEMS CORPORATION, Kabushiki Kaisha Toshiba
Inventors: Naotaka SATO, Shingo Abe, Yoshinori Shimizu, Kunio Shiraishi
-
Publication number: 20110293165
Abstract: A method for training a classifier to be operative as an epithelial texture classifier includes obtaining a plurality of training micrograph areas of biopsy tissue and, for each of the training micrograph areas, identifying probable locations of nuclei that form epithelia, generating a skeleton graph from the probable locations of the nuclei that form the epithelia, manually drawing walls on the skeleton graph outside of the epithelia to divide the epithelia from one another, and manually selecting points that lie entirely inside the epithelia to generate open and/or closed geodesic paths in the skeleton graph between pairs of the selected points. Data is obtained from points selected from the walls and the paths and applied to a classifier to train the classifier as the epithelial texture classifier.
Type: Application
Filed: May 11, 2011
Publication date: December 1, 2011
Applicants: NEC CORPORATION, NEC LABORATORIES AMERICA, INC.
Inventors: Christopher D. Malon, Atsushi Marugame, Eric Cosatto
-
Publication number: 20110293166
Abstract: Thermographic imaging is used to monitor quality parameters of pharmaceutical products (108) in a manufacturing process.
Type: Application
Filed: February 4, 2010
Publication date: December 1, 2011
Applicant: D.I.R. Technologies (Detection IR) Ltd.
Inventors: Eran Sinbar, Yoav Weinstein
-
Publication number: 20110293167
Abstract: According to one embodiment, a defect inspecting method includes: separately detecting an amount of first secondary electrons emitted from a semiconductor substrate at a first elevation angle and an amount of second secondary electrons emitted at a second elevation angle different from the first elevation angle; creating potential contrast images respectively from the detected amounts of the first and second secondary electrons; determining a combination ratio of the created respective potential contrast images; combining the potential contrast images respectively created from the first and second secondary electrons at the determined combination ratio; and extracting a defect based on the combined potential contrast image.
Type: Application
Filed: March 21, 2011
Publication date: December 1, 2011
Applicant: Kabushiki Kaisha Toshiba
Inventor: Hiroyuki HAYASHI
-
Publication number: 20110293168
Abstract: A transparent component locally includes one or both of a plurality of thick parts that have a large thickness in the vertical direction and a plurality of thin parts that have a small thickness in the vertical direction. A component recognition camera captures an image of the transparent component held by a mount head, from above or below the transparent component, while a single spotlight or a plurality of spotlights are being irradiated onto the transparent component.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Inventors: Yoshihiko Matsushima, Tadashi Kawakami, Tohru Imai, Montien Thuencharoen
-
Publication number: 20110293169
Abstract: Thread parameters for a threaded object are determined. Spatial reference systems (X, Y, Z) and (X′, Y′, Z′) are respectively identified for a position sensor and the threaded object. A transformation matrix describing a quadratic form representing the threaded object in (X, Y, Z) may be determined to relate the reference systems. For example, a sensor trajectory on the threaded object may be determined, along with measurement points on the threaded object. The measurement points may be selected so the matrix, evaluated on these values, has maximum rank. Position data at measurement points in the second reference system may be transformed into the first reference system, yielding first results. After coating the threaded object, position data at the measurement points may be acquired again and transformed into the first reference system, yielding second results. Comparisons between the first and second results may provide the thickness of the coating and quality verification.
Type: Application
Filed: June 1, 2011
Publication date: December 1, 2011
Applicant: Tenaris Connections Limited
Inventors: Nicolas Hernan Bonadeo, Sebastian Berra, Javier Ignacio Etcheverry
-
Publication number: 20110293170
Abstract: The format of an input image is determined appropriately, and an appropriate output image adapted to a format that can be displayed on a display section is displayed. An image format determining section 120 determines whether an image inputted to an image input section 110 is in the image format for a plane display image or is in the image format for a stereo display image. A displayable format determining section 160 determines a displayable format of an output image which can be displayed on an image display section 170. An operational input accepting section 150 accepts the output format of the output image displayed on the image display section 170 as a designated display format. An output format determining section 130 determines the output format of the output image on the basis of the image format, the displayable format, and the designated display format.
Type: Application
Filed: January 6, 2010
Publication date: December 1, 2011
Applicant: SONY CORPORATION
Inventors: Yasunari Hatasawa, Kazuhiko Ueda, Masami Ogata
-
Publication number: 20110293171
Abstract: A continuous scanning method employs one or more moveable sensors and one or more reference sensors deployed in the environment around a test subject. Each sensor is configured to sense an attribute of the test subject (e.g., sound energy, infrared energy, etc.) while continuously moving along a path and recording the sensed attribute, the position, and the orientation of each of the moveable sensors and each of the reference sensors. The system then constructs a set of transfer functions corresponding to points in space between the moveable sensors, wherein each of the transfer functions relates the test data of the moveable sensors to the test data of the reference sensors. In this way, a graphical representation of the attribute in the vicinity of the test subject can be produced.
Type: Application
Filed: December 30, 2010
Publication date: December 1, 2011
Applicant: ATA ENGINEERING, INC.
Inventors: Havard I. Vold, Paul G. Bremner, Parthiv N. Shah
-
Publication number: 20110293172
Abstract: An image processing apparatus 100 includes a parallax calculating unit 1. The parallax calculating unit 1 receives input of a pair of image input data Da1 and Db1 forming a three-dimensional video, calculates parallax amounts of respective regions obtained by dividing the pair of image input data Da1 and Db1 into a plurality of regions, and outputs the parallax amounts as parallax data T1 of the respective regions. The parallax calculating unit 1 includes a correlation calculating unit 10, a high-correlation-region extracting unit 11, a denseness detecting unit 12, and a parallax selecting unit 13. The correlation calculating unit 10 outputs correlation data T10 and pre-selection parallax data T13 of the respective regions. The high-correlation-region extracting unit 11 determines a level of correlation among the correlation data T10 of the regions and outputs high-correlation region data T11.
Type: Application
Filed: May 27, 2011
Publication date: December 1, 2011
Inventors: Hirotaka SAKAMOTO, Noritaka Okuda, Satoshi Yamanaka, Toshiaki Kubo
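Illustrative sketch of region-wise parallax estimation in general (not the specific unit structure above): divide the left image into blocks and, for each block, find the horizontal offset in the right image with the highest normalised correlation. NumPy assumed; names are mine.

```python
# Hedged sketch: generic block-matching disparity for a rectified stereo pair.
import numpy as np

def region_parallax(left: np.ndarray, right: np.ndarray, block: int = 16, max_disp: int = 32):
    """left, right: 2-D grayscale images. Returns per-block disparities in pixels."""
    h, w = left.shape
    rows, cols = h // block, w // block
    disparity = np.zeros((rows, cols))
    for by in range(rows):
        for bx in range(cols):
            y0, x0 = by * block, bx * block
            patch = left[y0:y0 + block, x0:x0 + block].astype(float)
            patch -= patch.mean()
            best_d, best_score = 0, -np.inf
            for d in range(0, min(max_disp, x0) + 1):
                cand = right[y0:y0 + block, x0 - d:x0 - d + block].astype(float)
                cand -= cand.mean()
                denom = np.linalg.norm(patch) * np.linalg.norm(cand)
                score = (patch * cand).sum() / denom if denom > 0 else -np.inf
                if score > best_score:
                    best_score, best_d = score, d
            disparity[by, bx] = best_d
    return disparity
```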
-
Publication number: 20110293173
Abstract: A classifier for detecting objects in images is constructed from a set of training images. For each training image, features are extracted from a window in the training image, wherein the window contains the object, and coefficients c of the features are then randomly sampled. N-combinations for each possible set of the coefficients are determined. For each possible combination of the coefficients, a Boolean-valued proposition is determined using relational operators to generate a propositional space. Complex hypotheses of a classifier are defined by applying combinatorial functions of the Boolean operators to the propositional space to construct all possible logical propositions in the propositional space. Then, the complex hypotheses of the classifier can be applied to features in a test image to detect whether the test image contains the object.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Inventors: Fatih M. Porikli, Vijay Venkatarman
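Illustrative sketch of the flavour of the construction only: build Boolean-valued propositions from pairwise relational comparisons of sampled feature coefficients and compose pairs of propositions with AND/OR into "complex hypotheses". Thresholds, combination depth and the learning step are omitted; names are mine.

```python
# Hedged sketch: relational propositions over coefficient pairs, then pairwise
# AND/OR compositions.
from itertools import combinations
import numpy as np

def relational_propositions(coeffs: np.ndarray) -> np.ndarray:
    """Boolean vector: one truth value per ordered relation over coefficient pairs."""
    props = []
    for i, j in combinations(range(len(coeffs)), 2):
        props.append(coeffs[i] < coeffs[j])
        props.append(coeffs[i] > coeffs[j])
    return np.array(props, dtype=bool)

def complex_hypotheses(props: np.ndarray) -> np.ndarray:
    """Compose pairs of propositions with AND and OR."""
    out = []
    for a, b in combinations(range(len(props)), 2):
        out.append(props[a] & props[b])
        out.append(props[a] | props[b])
    return np.array(out, dtype=bool)

# Example: 4 sampled coefficients -> 12 propositions -> 132 pairwise hypotheses.
h = complex_hypotheses(relational_propositions(np.array([0.3, 1.2, -0.5, 0.9])))
print(len(h))
```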
-
Publication number: 20110293174
Abstract: According to example embodiments, an image processing method includes estimating, using a location-based multi-illuminant estimation unit, candidate correlated color temperature (CCT) values and location values of sub-units of an image, calculating, using the location-based multi-illuminant estimation unit, a CCT matrix based on the candidate CCT values and the location values, and performing, using a color processing unit, color processing by using the CCT matrix.
Type: Application
Filed: May 23, 2011
Publication date: December 1, 2011
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Duck-Soo Kim, Ilya Vol, Tae-Chan Kim
-
Publication number: 20110293175
Abstract: Provided is an image processing apparatus. When a plurality of images acquired by photographing different directions is input, a region search unit of the image processing apparatus may search for an overlapping region within the plurality of images. An outlier removal unit may remove an outlier within the retrieved overlapping region.
Type: Application
Filed: March 14, 2011
Publication date: December 1, 2011
Applicants: Gwangju Institute of Science and Technology, Samsung Electronics Co., Ltd.
Inventors: Kuk Jin YOON, Young Sun Jeon, Young Su Moon, Yong-Ho Shin, Shi Hwa Lee, Min Gyu Park
-
Publication number: 20110293176
Abstract: An apparatus for detecting a cut change based on a similarity between a first image and a second image includes a unit for generating one of a luminance histogram and a color histogram of each of the first image and the second image, a unit for generating a spatial correlation image representing a correlation between spatial layouts of the first image and the second image, a unit for calculating a histogram similarity representing a similarity between the histogram of the first image and the histogram of the second image, a unit for calculating a spatial correlation image similarity representing a similarity between the spatial correlation image of the first image and the spatial correlation image of the second image, and a unit for determining whether a border between the first image and the second image is a cut change based on the histogram similarity and the spatial correlation image similarity.
Type: Application
Filed: August 4, 2011
Publication date: December 1, 2011
Inventors: Mototsugu Abe, Masayuki Nishiguchi
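Illustrative sketch (not the patented decision rule): score a potential cut between two consecutive frames with a luminance-histogram similarity plus a global spatial correlation, and combine them with simple thresholds. The thresholds are illustrative, and the global correlation stands in for the spatial correlation image described above. NumPy assumed.

```python
# Hedged sketch: histogram intersection + spatial correlation for cut detection.
import numpy as np

def histogram_similarity(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Histogram intersection of two grayscale frames, in [0, 1]."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 256), density=True)
    return float(np.minimum(ha, hb).sum() / ha.sum())

def spatial_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between the two frames' pixel layouts."""
    af = a.astype(float).ravel() - a.mean()
    bf = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(af) * np.linalg.norm(bf)
    return float(af @ bf / denom) if denom > 0 else 0.0

def is_cut_change(a, b, hist_thresh=0.5, corr_thresh=0.3) -> bool:
    # Declare a cut only when both similarity measures are low.
    return histogram_similarity(a, b) < hist_thresh and spatial_correlation(a, b) < corr_thresh
```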
-
Publication number: 20110293177
Abstract: Colors of images and videos are modified to make differences in the colors more perceptible to colorblind users. An exemplary recoloring process utilizes a color space transformation, a local color rotation and a global color rotation to transform colors of visual objects from colors which may not be distinguishable by the colorblind user to colors which may be distinguishable by the colorblind user.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Applicant: Microsoft Corporation
Inventors: Meng Wang, Linjun Yang, Xian-Sheng Hua, Bo Liu
-
Publication number: 20110293178
Abstract: An image processing device (100) includes a gradation correction value acquiring unit (12) that acquires a gradation correction value representing a ratio of a luminance component of an input image and a luminance component of an output image, a chroma analyzing unit (13) that calculates a chroma correction value, in which the total sum of degrees of chroma discrepancy between an analysis image equal to or different from the input image and a corrected image obtained by correcting a luminance component of the analysis image on the basis of one or more gradation correction values is the minimum, in correspondence with the gradation correction value, and an image output unit (14) that outputs as the output image an image obtained by correcting the input image received by the image input unit (11) on the basis of the gradation correction value acquired by the gradation correction value acquiring unit (12) and the chroma correction value correlated with the gradation correction value.
Type: Application
Filed: December 17, 2009
Publication date: December 1, 2011
Applicant: NEC CORPORATION
Inventor: Masato Toda
-
Publication number: 20110293179
Abstract: An RGB color image and an infrared intensity image of a live video are received. The RGB color image is converted to a colorspace image comprising a channel corresponding to a brightness value. Each pixel of the converted colorspace image is evaluated to determine whether the brightness channel of the pixel exceeds a threshold value. If the brightness channel of the pixel exceeds the threshold value, the infrared intensity value of a corresponding pixel from the infrared intensity image is mixed into the pixel's channel value that corresponds to brightness. The converted colorspace image is converted back to an RGB color image.
Type: Application
Filed: May 31, 2011
Publication date: December 1, 2011
Inventors: Mert Dikmen, Sanjay J. Patel, Dennis Lin, Quang J. Nguyen, Minh N. Do
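Illustrative sketch of the brightness/IR mixing step: convert RGB to HSV, and wherever the V (brightness) channel exceeds a threshold, blend in the co-registered infrared intensity before converting back to RGB. The 0.5 mixing weight and the threshold are my illustrative choices, not values from the filing. OpenCV and NumPy assumed.

```python
# Hedged sketch: threshold on brightness, mix IR into bright pixels.
import cv2
import numpy as np

def mix_ir_into_bright_pixels(rgb: np.ndarray, ir: np.ndarray,
                              threshold: int = 200, weight: float = 0.5) -> np.ndarray:
    """rgb: HxWx3 uint8; ir: HxW uint8 aligned with rgb. Returns uint8 RGB."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).astype(np.float32)
    v = hsv[:, :, 2]
    bright = v > threshold
    # Blend the IR intensity into the brightness channel of bright pixels only.
    v[bright] = (1.0 - weight) * v[bright] + weight * ir.astype(np.float32)[bright]
    hsv[:, :, 2] = np.clip(v, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```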
-
Publication number: 20110293180
Abstract: Foreground and background image segmentation is described. In an example, a seed region is selected in a foreground portion of an image, and a geodesic distance is calculated from each image element to the seed region. A subset of the image elements having a geodesic distance less than a threshold is determined, and this subset of image elements is labeled as foreground. In another example, an image element from an image showing at least a user, a foreground object in proximity to the user, and a background is applied to trained decision trees to obtain probabilities of the image element representing one of these items, and a corresponding classification is assigned to the image element. This is repeated for each image element. Image elements classified as belonging to the user are labeled as foreground, and image elements classified as foreground objects or background are labeled as background.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Applicant: Microsoft Corporation
Inventors: Antonio Criminisi, Jamie Daniel Joseph Shotton, Andrew Fitzgibbon, Toby Sharp, Matthew Darius Cook
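Illustrative sketch of the first example only, under assumptions: run Dijkstra over the 4-connected pixel grid, where stepping between neighbours costs their intensity difference plus a small constant, then keep pixels whose geodesic distance from the seed region is under a threshold. The cost function is a simplified stand-in, not the filing's.

```python
# Hedged sketch: geodesic distance from a seed region via Dijkstra, thresholded.
import heapq
import numpy as np

def geodesic_foreground(gray: np.ndarray, seeds, threshold: float) -> np.ndarray:
    """gray: 2-D float image; seeds: iterable of (row, col). Returns a bool mask."""
    h, w = gray.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = abs(float(gray[nr, nc]) - float(gray[r, c])) + 1e-3
                nd = d + step
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist < threshold
```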
-
Publication number: 20110293181
Abstract: A character classification system is disclosed. The character classification system has an input device for receiving a handwritten input character, and a processor. The processor is configured to, for each character model, each character model being associated with an output character and defining a model-specific segmentation scheme for that output character and an associated segment model, the model-specific segmentation scheme defining a minimum length corresponding to a number of points in a stroke of the output character: (i) decompose the handwritten input character into one or more segments in accordance with the model-specific segmentation scheme of the respective character model; and (ii) evaluate the one or more segments against the segment model of the respective character model to produce a score indicative of the conformity of the one or more segments with the segment model.
Type: Application
Filed: August 7, 2011
Publication date: December 1, 2011
Inventor: Jonathon Leigh Napper
-
Publication number: 20110293182
Abstract: The present invention provides a method and system for resolving complete free space orientation of an active receiving device at high speed using simple optics, trigonometry, simple circuitry, and minimal processing power. The rapid triangulation of distance from the emitters, as well as resolution of rotational orientation, are determined by changing the contrast level in an infrared spectrum and by using wide angle lenses with greater than 180 degree hemispherical viewing angles. Furthermore, the system consists of an optional accelerometer, resident on the receiver, to dynamically adjust the image sensor frame rate.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Inventor: MARCUS KRIETER
-
Publication number: 20110293183
Abstract: A system includes an imaging device and an acquisition layer. The imaging device acquires an image. The acquisition layer is logically located between a source manager and the imaging device, the source manager being called by an application when a user of the system requests to acquire the image. The acquisition layer includes imaging acquisition logic that receives the image from the imaging device and performs optical character recognition (OCR) that extracts machine-editable text from the image. The acquisition layer forwards the image to the application and makes the machine-editable text available to the user.
Type: Application
Filed: May 26, 2010
Publication date: December 1, 2011
Inventor: Hin Leong Tan
-
Publication number: 20110293184
Abstract: A method of identifying a physical page containing printed text from a plurality of page fragment images captured by a camera. The method includes the steps of: placing a handheld electronic device in contact with a surface of the physical page; moving the device across the physical page and capturing the plurality of page fragment images at a plurality of different capture points; measuring a displacement or direction of movement; performing OCR on each captured page fragment image; creating a glyph group key for each page fragment image; looking up each created glyph group key in an inverted index of glyph group keys; comparing a displacement or direction between glyph group keys in the inverted index with a measured displacement or direction between the capture points for corresponding glyph group keys created using OCR; and identifying a page identity corresponding to the physical page using the comparison.
Type: Application
Filed: March 18, 2011
Publication date: December 1, 2011
Inventors: Kia Silverbrook, Paul Lapstun, Jonathon Leigh Napper
-
Publication number: 20110293185
Abstract: A hybrid system for identifying a printed page. The system includes: (i) the printed page having human-readable content and a coding pattern printed in every interstitial space between portions of human-readable content, the coding pattern being either absent from the human-readable content or unreadable when superimposed with the human-readable content; and (ii) a handheld device for overlaying and contacting the printed page. The handheld device includes: a camera for capturing page fragment images; and a processor configured for: decoding the coding pattern and determining the page identity in the event that the coding pattern is visible in and decodable from the captured page fragment image; and otherwise initiating OCR or SIFT techniques to identify the page.
Type: Application
Filed: March 18, 2011
Publication date: December 1, 2011
Inventors: Kia Silverbrook, Jonathon Leigh Napper, Robert Dugald Gates, Paul Lapstun
-
Publication number: 20110293186
Abstract: A method of classifying a character string formed from a known number of hand-written characters is disclosed. The method starts by determining character probabilities for each hand-written character in the character string. Each character probability represents a likelihood of the respective hand-written character being a respective one of a plurality of predetermined characters. Each predetermined character has a respective character type. Character templates having the known number of characters are next identified. Each character template has a respective predetermined probability and represents a respective combination of character types. Character sequence probabilities corresponding to each of the character templates having the known number of characters are next determined. The character sequence probabilities are a function of the predetermined probability of the respective character template and the character probabilities of the hand-written characters in the character string.
Type: Application
Filed: August 7, 2011
Publication date: December 1, 2011
Inventor: Jonathon Leigh Napper
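Illustrative sketch of the template scoring idea, under assumptions: given per-character probability distributions over character types and a prior for each template (a fixed sequence of character types of the same length), score each template as its prior times the product of the matching per-character probabilities. The toy types and priors below are illustrative only.

```python
# Hedged sketch: combine character probabilities with template priors.
import numpy as np

def template_scores(char_probs, templates, priors):
    """
    char_probs: list of dicts, one per written character, mapping type -> probability.
    templates:  list of type sequences, each as long as char_probs.
    priors:     list of prior probabilities, one per template.
    Returns normalised scores over templates.
    """
    scores = []
    for template, prior in zip(templates, priors):
        p = prior
        for probs, ctype in zip(char_probs, template):
            p *= probs.get(ctype, 0.0)
        scores.append(p)
    scores = np.array(scores, dtype=float)
    return scores / scores.sum() if scores.sum() > 0 else scores

# Example: 3 characters, templates "DDD" (all digits) vs "LDD" (letter then digits).
char_probs = [{"D": 0.6, "L": 0.4}, {"D": 0.9, "L": 0.1}, {"D": 0.8, "L": 0.2}]
print(template_scores(char_probs, [("D", "D", "D"), ("L", "D", "D")], [0.7, 0.3]))
```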
-
Publication number: 20110293187
Abstract: The present application is a method and system of interpreting an image by finding a configuration of multiple variables which optimizes an objective function with a factorizable upper bound, by applying an iterative algorithm that relies on efficient dynamic ordering of candidate configurations, in a priority queue, in descending order of an upper-bound score. As an example, consider a constellation model for an object. It specifies the appearance models for individual parts of objects, as well as spatial relations among these parts. These are combined into a single function whose value represents the likeness of the object in an image. To find the configuration in which the object is present in the image, we maximize this function over all candidate configurations. The purpose of the iterative algorithm mentioned above is to find such optimal configurations efficiently.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: PALO ALTO RESEARCH CENTER INCORPORATED
Inventors: Prateek Sarkar, Evgeniy Bart
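Illustrative sketch of priority-queue best-first search in general (not the filing's specific factorizable bound): pop the candidate with the highest upper bound, expand it, and stop when a complete configuration comes off the queue, since its true score then dominates the bounds of everything remaining. The expansion, bound and score functions are caller-supplied stand-ins; the stopping argument assumes the bound is admissible and equals the true score on complete configurations.

```python
# Hedged sketch: best-first search over candidate configurations by upper bound.
import heapq

def best_first_search(root, expand, upper_bound, score, is_complete):
    """Return (best complete configuration, its score), or (None, -inf)."""
    # heapq is a min-heap, so push negative bounds to pop the largest bound first.
    heap = [(-upper_bound(root), 0, root)]
    counter = 1  # tie-breaker so configurations themselves are never compared
    while heap:
        _, _, config = heapq.heappop(heap)
        if is_complete(config):
            # All remaining bounds are <= this configuration's bound, which for a
            # complete configuration equals its true score under the assumptions above.
            return config, score(config)
        for child in expand(config):
            heapq.heappush(heap, (-upper_bound(child), counter, child))
            counter += 1
    return None, float("-inf")
```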
-
Publication number: 20110293188
Abstract: A method for processing image data comprises generating face data representing a set of detected faces from image data representing a set of images forming a library of images, using the face data to determine a sub-set of the images from the library in which at least the same person appears, and generating at least one clothing signature for the person representing clothes worn by the person in at least one of the images in the sub-set, and using the or each clothing signature, identifying further images of the person in the library.
Type: Application
Filed: June 1, 2010
Publication date: December 1, 2011
Inventors: Wei Zhang, Tong Zhang
-
Publication number: 20110293189
Abstract: Described herein are techniques for obtaining compact face descriptors and using pose-specific comparisons to deal with different pose combinations for image comparison.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Applicant: Microsoft Corporation
Inventors: Jian Sun, Zhimin Cao, Qi Yin
-
Publication number: 20110293190
Abstract: Image data defining a reference image and each input image in a sequence of images is processed to detect changes in the images. A value is calculated for each pixel defining the spatial rate of change of homogeneity of the image data at that pixel. Different regions of pixels in each image are selected and the values within each region are concatenated to define a vector for each region. The vectors are then processed to compare corresponding regions in each image. The results of the comparison define a correlation map identifying areas in the images in which change has occurred.
Type: Application
Filed: July 16, 2007
Publication date: December 1, 2011
Applicant: Mitsubishi Denki Kabushiki Kaisha
Inventor: Robert James O'Callaghan
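Illustrative sketch under assumptions: use the gradient magnitude as a stand-in for the "spatial rate of change of homogeneity", concatenate it over non-overlapping blocks into one vector per region, and correlate corresponding vectors between the reference frame and an input frame; low correlation marks blocks where change has occurred. NumPy assumed.

```python
# Hedged sketch: per-block correlation of gradient-magnitude vectors.
import numpy as np

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def change_correlation_map(reference: np.ndarray, current: np.ndarray, block: int = 16) -> np.ndarray:
    """Return per-block correlation between reference and current gradient vectors."""
    gr, gc = gradient_magnitude(reference), gradient_magnitude(current)
    rows, cols = gr.shape[0] // block, gr.shape[1] // block
    corr = np.zeros((rows, cols))
    for by in range(rows):
        for bx in range(cols):
            sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            a = gr[sl].ravel() - gr[sl].mean()
            b = gc[sl].ravel() - gc[sl].mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            corr[by, bx] = a @ b / denom if denom > 0 else 1.0
    return corr  # values near 1 = unchanged, low or negative = changed
```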
-
Publication number: 20110293191
Abstract: Provided are an apparatus and method for more accurately extracting edge portions from an image in various conditions. The apparatus includes: a brightness calculation unit calculating a representative brightness value of the neighborhood of a current pixel; a threshold value calculation unit calculating a threshold value, based on which it is determined whether the current pixel is an edge, by using the calculated representative brightness value; a mask application unit calculating a masking value by applying a mask to an area containing at least the current pixel; and an edge determination unit determining whether the current pixel is an edge by comparing the masking value and the threshold value.
Type: Application
Filed: October 15, 2010
Publication date: December 1, 2011
Inventors: Hyunchul SHIN, Sungchul JUNG, Jeakwon SEO
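Illustrative sketch under assumptions: take the local mean brightness around each pixel as the representative brightness, derive a per-pixel threshold from it, compute a Sobel gradient magnitude as the masking value, and call a pixel an edge when the masking value exceeds its threshold. The linear threshold rule (base + k * brightness) is my illustrative choice, not the filing's formula. NumPy and SciPy assumed.

```python
# Hedged sketch: brightness-adaptive edge determination with a Sobel mask.
import numpy as np
from scipy import ndimage

def adaptive_edges(gray: np.ndarray, window: int = 7, base: float = 20.0, k: float = 0.25) -> np.ndarray:
    img = gray.astype(float)
    # Representative brightness: mean over a window around each pixel.
    brightness = ndimage.uniform_filter(img, size=window)
    # Masking value: Sobel gradient magnitude.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    masking_value = np.hypot(gx, gy)
    # Per-pixel threshold grows with local brightness.
    threshold = base + k * brightness
    return masking_value > threshold
```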