Point Features (e.g., Spatial Coordinate Descriptors) Patents (Class 382/201)
-
Patent number: 7764838
Abstract: A system and method for extracting an object of interest from an image using a robust active shape model are provided. A method for extracting an object of interest from an image comprises: generating an active shape model of the object; extracting feature points from the image; and determining an affine transformation and shape parameters of the active shape model to minimize an energy function of a distance between a transformed and deformed model of the object and the feature points.
Type: Grant
Filed: September 8, 2005
Date of Patent: July 27, 2010
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: Marie-Pierre Jolly, Julien Abi-Nahed
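The alignment step described in this abstract — finding an affine transformation that minimizes a distance between transformed model points and image feature points — can be sketched as a linear least-squares problem. This is a hypothetical simplification (shape parameters held fixed, correspondences already known), not the patented method itself:

```python
import numpy as np

def fit_affine(model_pts, feature_pts):
    """Least-squares affine map (A, t) minimizing sum ||A @ m_i + t - f_i||^2."""
    m = np.asarray(model_pts, dtype=float)
    n = len(m)
    # Design matrix for the 6 affine parameters [a00, a01, a10, a11, tx, ty].
    X = np.zeros((2 * n, 6))
    X[0::2, 0:2] = m          # x-equations
    X[0::2, 4] = 1.0
    X[1::2, 2:4] = m          # y-equations
    X[1::2, 5] = 1.0
    y = np.asarray(feature_pts, dtype=float).ravel()
    p, *_ = np.linalg.lstsq(X, y, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = p[4:6]
    return A, t
```

In a real active-shape-model fit this solve would alternate with re-estimating correspondences and shape parameters until the energy stops decreasing.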
-
Publication number: 20100177966
Abstract: A method, system and computer program product for representing an image is provided. The image that needs to be represented is represented in the form of a Gaussian pyramid, which is a scale-space representation of the image and includes several pyramid images. The feature points in the pyramid images are identified and a specified number of feature points are selected. The orientations of the selected feature points are obtained by using a set of orientation calculating algorithms. A patch is extracted around each feature point in the pyramid images based on the orientations of the feature point and the sampling factor of the pyramid image. The boundary patches in the pyramid images are extracted by padding the pyramid images with extra pixels. The feature vectors of the extracted patches are defined. These feature vectors are normalized so that the components in the feature vectors are less than a threshold.
Type: Application
Filed: January 14, 2009
Publication date: July 15, 2010
Inventors: Mark A. Ruzon, Raghavan Manmatha, Donald Tanguay
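The Gaussian pyramid (scale-space representation) this abstract builds on can be sketched in plain NumPy — a minimal illustration under assumed parameters (`levels`, `sigma`), not the patented descriptor pipeline:

```python
import numpy as np

def blur(img, sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Separable Gaussian: filter columns, then rows.
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)

def gaussian_pyramid(img, levels=3, sigma=1.0):
    """Scale-space stack: blur, then subsample by 2, at every level."""
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1], sigma)[::2, ::2])
    return pyr
```

Feature points would then be detected per level, and the level's sampling factor (here a power of 2) maps patch coordinates back to the original image.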
-
Publication number: 20100166319
Abstract: Unnatural deformation of the outlines of specific subjects is prevented while reducing the amount of calculation. A reference image setting section sets a reference image, which is to become a reference, from among a plurality of images. A specific subject detecting section detects a specific subject from within the reference image. A feature point extracting section extracts a plurality of feature points from within the reference image such that the average density of the feature points becomes higher in the vicinity of the outline of the specific subject. A corresponding point obtaining section obtains corresponding points within the other images that correspond to the extracted feature points. A coordinate converting section converts the coordinates of the positions of each pixel within the reference image and/or the other images such that the positions of the feature points and the positions of the corresponding points match.
Type: Application
Filed: December 24, 2009
Publication date: July 1, 2010
Applicant: FUJIFILM Corporation
Inventor: Yitong Zhang
-
Publication number: 20100166299
Abstract: Corresponding points or a motion vector is generated with reduced positional error even if incoming images have been shot under different illumination conditions. An image processing apparatus (100, 120) includes an initial corresponding point computing section (113) for computing and outputting multiple sets of corresponding points across multiple images, and a corresponding point recomputing section (104), which selects reference corresponding points, consisting of multiple sets of initial corresponding points with small error, from the multiple sets of initial corresponding points by using a photometric constraint equation and a geometric constraint equation, newly computes aimed corresponding points associated with those reference corresponding points, and then outputs the reference corresponding points and the aimed corresponding points as corresponding points.
Type: Application
Filed: February 28, 2008
Publication date: July 1, 2010
Inventor: Kunio Nobori
-
Publication number: 20100158371
Abstract: A method of detecting a facial image includes pre-processing an image; and detecting a face region from the pre-processed image to create facial records of the detected face region. Further, the method of detecting the facial image includes detecting the facial image by creating coordinates of the face and eyes in the input image by using the facial records.
Type: Application
Filed: June 15, 2009
Publication date: June 24, 2010
Applicant: Electronics and Telecommunications Research Institute
Inventors: Sung Uk JUNG, Yun Su CHUNG, Jang-Hee YOO, Ki Young MOON
-
Publication number: 20100158319
Abstract: A fake-face detection method using range information includes: detecting face range information and face features from an input face image; matching the face image with the range information; and distinguishing a fake face by analyzing the matched range information.
Type: Application
Filed: July 27, 2009
Publication date: June 24, 2010
Applicant: Electronics and Telecommunications Research Institute
Inventors: Sung Uk Jung, Yun Su Chung, Ki Young Moon
-
Publication number: 20100150408
Abstract: The invention makes it possible to determine whether or not objects appearing in various temporal positions in an input video are identical to each other. Identity between a plurality of objects detected from an input video is determined by using an object identity probability determined based on an interframe distance, the interframe distance being a distance between frames from which the respective objects are detected.
Type: Application
Filed: December 25, 2007
Publication date: June 17, 2010
Applicant: NEC CORPORATION
Inventor: Masumi Ishikawa
-
Publication number: 20100142758
Abstract: A system for providing a mobile user with object related information about an object visible to the user. The system includes a camera directable toward the object, a local interest points and semi global geometry (LIPSGG) extraction processor, and a remote LIPSGG identifier. The camera acquires an image of at least a portion of the object. The LIPSGG extraction processor, coupled with the camera, extracts an LIPSGG model of the object from the image. The remote LIPSGG identifier, coupled with the LIPSGG extraction processor via a network, receives the LIPSGG model from the LIPSGG extraction processor, identifies the object according to the LIPSGG model, retrieves the object related information, and provides the object related information to the mobile user operating the camera.
Type: Application
Filed: March 4, 2008
Publication date: June 10, 2010
Inventors: Adi Pinhas, Michael Chertok
-
Patent number: 7734087
Abstract: A face recognition apparatus and method using Principal Component Analysis (PCA) learning per subgroup. The face recognition apparatus includes: a learning unit which performs Principal Component Analysis (PCA) learning on each of a plurality of subgroups constituting a training data set, and then performs Linear Discriminant Analysis (LDA) learning on the training data set, thereby generating a PCA-based LDA (PCLDA) basis vector set for each subgroup; a feature vector extraction unit which projects the PCLDA basis vector set of each subgroup onto an input image and extracts a feature vector set of the input image with respect to each subgroup; a feature vector storing unit which projects the PCLDA basis vector set of each subgroup onto each of a plurality of face images to be registered, thereby generating a feature vector set of each registered image with respect to each subgroup, and storing the feature vector set in a database; and a similarity calculation unit which calculates a similarity between the input image
Type: Grant
Filed: December 3, 2004
Date of Patent: June 8, 2010
Assignee: Samsung Electronics Co., Ltd.
Inventors: Wonjun Hwang, Taekyun Kim
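The projection step underlying this abstract — building a basis from training data and projecting an image onto it — can be sketched for the plain PCA case. This is only the PCA half of the described PCLDA pipeline (no LDA step), shown as a hedged illustration:

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions of row-vector data X (one image per row)."""
    mean = X.mean(axis=0)
    # SVD of centered data; rows of Vt are orthonormal principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, basis):
    """Feature vector of image x in the learned subspace."""
    return basis @ (x - mean)
```

In the per-subgroup scheme, a separate `(mean, basis)` pair would be learned for each subgroup and the per-subgroup feature vectors compared independently at match time.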
-
Publication number: 20100128971
Abstract: A pair of images subjected to image processing is divided. Next, based on mutually-corresponding divided images, mutually-corresponding matching images are respectively set. When a corresponding point of a characteristic point in one matching image is not extracted from the other matching image, adjoining divided images are joined together, and based on the joined divided image, a new matching image is set.
Type: Application
Filed: November 25, 2008
Publication date: May 27, 2010
Applicant: NEC System Technologies, Ltd.
Inventors: Toshiyuki Kamiya, Hiroyuki Yagyuu, Hirokazu Koizumi
-
Publication number: 20100111364
Abstract: This invention aims to create a three-dimensional model of satisfactory accuracy even when adopting a method of performing three-dimensional measurement by arbitrarily changing a positional relationship between an actual model and a camera, and aligning and integrating three-dimensional information restored by each measurement. A polygonal mark M having a shape in which a direction can be uniquely specified is attached to a predetermined area of an actual model WM of a work to be three-dimensionally recognized. A process of changing the orientation of the actual model WM such that a state in which the mark M is contained in the view of each camera is maintained, and then performing the three-dimensional measurement, is executed a plurality of times.
Type: Application
Filed: November 3, 2009
Publication date: May 6, 2010
Inventors: Toyoo IIDA, Kengo Ichimura, Junichiro Ueki, Kazuhiro Shono, Yoshizo Togawa, Akira Ishida
-
Publication number: 20100104184
Abstract: The described methods and systems provide for the representation and matching of video content, including spatio-temporal matching of different video sequences. A particular method of determining temporal correspondence between different sets of video data inputs the sets of video data and represents the video data as ordered sequences of visual nucleotides. Temporally corresponding subsets of video data are determined by aligning the sequences of visual nucleotides.
Type: Application
Filed: January 6, 2009
Publication date: April 29, 2010
Applicant: Novafora, Inc.
Inventors: Alexander Bronstein, Michael Bronstein, Shlomo Selim Rakib
-
Publication number: 20100104155
Abstract: A method for linear structure detection in mammographic images, comprising: locating a plurality of microcalcification candidate clusters in digital mammographic images; extracting a region of interest that encloses each of the candidate clusters; processing the region of interest to generate feature points that reveal geometric properties in the region; applying a line detection algorithm to the feature points to produce a line model; and analyzing the line model to determine whether a true linear structure is present in the region of interest.
Type: Application
Filed: November 24, 2009
Publication date: April 29, 2010
Inventors: Shoupu Chen, Lawrence A. Ray
-
Publication number: 20100104158
Abstract: A method includes matching at least portions of first and second signals using local self-similarity descriptors of the signals. The matching includes computing a local self-similarity descriptor for each one of at least a portion of points in the first signal, forming a query ensemble of the descriptors for the first signal, and seeking an ensemble of descriptors of the second signal which matches the query ensemble of descriptors. This matching can be used for image categorization, object classification, object recognition, image segmentation, image alignment, video categorization, action recognition, action classification, video segmentation, video alignment, signal alignment, multi-sensor signal alignment, multi-sensor signal matching, optical character recognition, image and video synthesis, correspondence estimation, signal registration and change detection. It may also be used to synthesize a new signal with elements similar to those of a guiding signal, synthesized from portions of the reference signal.
Type: Application
Filed: December 20, 2007
Publication date: April 29, 2010
Inventors: Eli Shechtman, Michal Irani
-
Publication number: 20100098321
Abstract: Feature points (41, 42, 43) in the heat image (10) of a casting die (1) are extracted, and a predetermined geometrical conversion processing is performed on the heat image (10) such that the feature points are superimposed on the reference feature points (61, 62, 63) set in a reference heat image (30) picked up previously, to generate a corrected heat image (20). A difference image (40) is generated by superimposing the corrected heat image (20) and the reference heat image (30) such that the corrected feature points (51, 52, 53) in the corrected heat image (20) are superimposed on the corresponding reference feature points (61, 62, 63). With such an arrangement, a highly reliable difference image can be generated even when the imaging field of view shifts among a plurality of heat images.
Type: Application
Filed: March 25, 2008
Publication date: April 22, 2010
Applicants: TOYOTA JIDOSHA KABUSHIKI KAISHA, MEIWA E-TEC CO., LTD.
Inventors: Yuichi Furukawa, Shingo Nakamura, Yuji Okada, Fumio Kawahara
-
Publication number: 20100098324
Abstract: A recognition processing method and an image processing device end recognition of an object within a predetermined time while maintaining the recognition accuracy. The device extracts combinations of three points defining a triangle whose side lengths satisfy predetermined criterion values from feature points of the model of a recognition object, registers the extracted combinations as model triangles, and similarly extracts combinations of three points defining a triangle whose side lengths satisfy predetermined criterion values from feature points of the recognition object. The latter combinations are used as comparison object triangles and associated with the respective model triangles.
Type: Application
Filed: March 5, 2008
Publication date: April 22, 2010
Inventor: Shiro Fujieda
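The extraction step this abstract describes — enumerating 3-point combinations whose side lengths meet a criterion — can be sketched directly. The side-length bounds here are assumed parameters, not values from the patent:

```python
import numpy as np
from itertools import combinations

def candidate_triangles(points, min_side, max_side):
    """Index triples whose three side lengths all fall in [min_side, max_side]."""
    pts = np.asarray(points, dtype=float)
    out = []
    for i, j, k in combinations(range(len(pts)), 3):
        sides = [np.linalg.norm(pts[a] - pts[b])
                 for a, b in ((i, j), (j, k), (i, k))]
        if all(min_side <= s <= max_side for s in sides):
            out.append((i, j, k))
    return out
```

Running this once on the model's feature points and once on the scene's feature points yields the two triangle sets the method then associates with each other.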
-
Publication number: 20100097398
Abstract: To allow a viewer to easily understand the details of a moving image shot by an image capturing apparatus when the moving image is browsed. A camerawork detecting unit 120 detects the amount of movement of an image capturing apparatus at the time of shooting a moving image input from a moving-image input unit 110, and, on the basis of the amount of movement of the image capturing apparatus, calculates affine transformation parameters for transforming an image on a frame-by-frame basis. An image transforming unit 160 performs an affine transformation of at least one of the captured image and a history image held in an image memory 170, on the basis of the calculated affine transformation parameters. An image combining unit 180 combines, on a frame-by-frame basis, the captured image and the history image, at least one of which has been transformed, and causes the image memory 170 to hold a composite image.
Type: Application
Filed: August 22, 2008
Publication date: April 22, 2010
Applicant: Sony Corporation
Inventor: Shingo Tsurumi
-
Publication number: 20100086175
Abstract: An image processing apparatus includes a detector, a setting unit, and an image generator. The detector detects a target object image region from a first image. When one or more predetermined parameters are applicable to a target object within the region detected by the detector, the setting unit sets the relevant target object image region as a first region. The image generator then generates a second image by applying predetermined processing to either the image portion within the first region, or to the image portions in a second region containing image portions within the first image that are not contained in the first region.
Type: Application
Filed: October 1, 2009
Publication date: April 8, 2010
Inventors: Jun YOKONO, Yuichi Hasegawa
-
Patent number: 7693334
Abstract: A pathological diagnosis support device, a pathological diagnosis support program, a pathological diagnosis support method, and a pathological diagnosis support system extract a pathological tissue for diagnosis from a pathological image and diagnose the pathological tissue. A tissue collected in a pathological inspection is stained using, for example, hematoxylin and eosin. In consideration of the state of the tissue, in which a cell nucleus and its peripheral constituent items are stained in respective colors unique thereto, subimages such as a cell nucleus, a pore, cytoplasm, and interstitium are extracted from the pathological image, and color information of the cell nucleus is also extracted. The subimages and the color information are stored as feature candidates so that presence or absence of a tumor, and benignity or malignity of the tumor, are determined.
Type: Grant
Filed: November 30, 2005
Date of Patent: April 6, 2010
Assignee: NEC Corporation
Inventors: Maki Ogura, Akira Saitou, Kenichi Kamijo, Kenji Okajima, Tomoharu Kiyuna
-
Publication number: 20100080469
Abstract: System and method of generating feature descriptors for image identification. An input image is Gaussian-blurred at different scales. A difference-of-Gaussian space is obtained from differences of adjacent Gaussian-blurred images. Key points are identified in the difference-of-Gaussian space. For each key point, primary sampling points are defined with three-dimensional relative positions from the key point, reaching into planes of different scales. Secondary sampling points are identified for each primary sampling point. Secondary image gradients are obtained between the image at a primary sampling point and the images at the secondary sampling points corresponding to that primary sampling point. The secondary image gradients form the components of primary image gradients at the primary sampling points. The primary image gradients are concatenated to obtain a descriptor vector for the input image.
Type: Application
Filed: September 24, 2009
Publication date: April 1, 2010
Applicants: FUJI XEROX CO., LTD., FUJIFILM Corporation
Inventors: Qiong Liu, Hironori Yano
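The first stages of this abstract — blurring at several scales, differencing adjacent blurs, and picking key points in the difference-of-Gaussian space — can be sketched as follows. The scale list and threshold are assumptions, and the extrema test here is 3x3 within one layer rather than the full 3x3x3 scale-space test:

```python
import numpy as np

def gauss_blur(img, sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Separable convolution: columns, then rows.
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56), thresh=0.05):
    """Key points = strong 3x3-local maxima of |difference of Gaussians|."""
    blurred = [gauss_blur(np.asarray(img, dtype=float), s) for s in sigmas]
    keypoints = []
    for coarse, fine in zip(blurred[1:], blurred):
        a = np.abs(coarse - fine)
        for y in range(1, a.shape[0] - 1):
            for x in range(1, a.shape[1] - 1):
                if a[y, x] > thresh and a[y, x] == a[y-1:y+2, x-1:x+2].max():
                    keypoints.append((y, x))
    return keypoints
```

The patented descriptor would then place primary and secondary sampling points around each key point across the scale planes; that stage is not reproduced here.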
-
Patent number: 7689043
Abstract: A camera A produces a 3D natural image B which is re-oriented and repositioned at N to predetermined parameters. Different processes C, D, E extract different features from the image to provide different processed images. The data space occupied by the processed images is reduced, for example by Principal Component Analysis, at F, G, H, and the reduced processed images are combined at O to provide an image key I representative of the image B. The image key I is compared at J with stored image keys L of known images, and the output comparison is sorted at K to produce a final list M of potential matches. In a verification process, just a single image key L may be compared. The processed images at C, D, E may alternatively be combined prior to a single subspace reduction and/or optimisation method. 2D data may be combined with or used instead of the 3D data.
Type: Grant
Filed: October 11, 2004
Date of Patent: March 30, 2010
Assignee: University of York
Inventors: James Austin, Nicholas Pears, Thomas Heseltine
-
Publication number: 20100074516
Abstract: A defect detecting apparatus captures an image of a protein chip formed on each die of a wafer with a low magnification for every first division region obtained by dividing each die in plurality; stores each obtained image as an inspection target image together with an ID for identifying each first division region; creates a model image for every first division region by calculating an average luminance value of pixels of each inspection target image; extracts a difference between the model image and each inspection target image as a difference image; determines presence of a defect by extracting a Blob having an area larger than a preset value from the difference image; captures a high-magnification image of every second division region; creates a model image again and extracts a Blob; and determines the kind of the defect based on a feature point of the defect.
Type: Application
Filed: November 30, 2007
Publication date: March 25, 2010
Applicant: TOKYO ELECTRON LIMITED
Inventor: Hiroshi Kawaragi
-
Publication number: 20100074531
Abstract: An image processing apparatus includes: an image transformation parameter calculation device which calculates an image transformation parameter for matching an acquired first image and a second image with each other from a detected plurality of corresponding points; an image transformation device which transforms the second image using the calculated image transformation parameter and acquires the transformed image as a third image; and a feature point existing region determination device which determines whether or not a feature point extracted from the first image is positioned in an invalid image region at an edge of the image, the invalid image region being generated by execution of a predetermined filtering process on the first image. The corresponding point detection device tracks the feature point determined to be positioned in the invalid image region, using the first image and the third image.
Type: Application
Filed: September 11, 2009
Publication date: March 25, 2010
Applicant: FUJIFILM CORPORATION
Inventor: Koichi TANAKA
-
Patent number: 7684916
Abstract: An imaging device collects color image data to facilitate distinguishing crop image data from background data. A definer defines a series of scan line segments generally perpendicular to a transverse axis of the vehicle or of the imaging device. An intensity evaluator determines scan line intensity data for each of the scan line segments. An alignment detector identifies a preferential heading of the vehicle that is generally aligned with respect to a crop feature, associated with the crop image data, based on the determined scan line intensity meeting or exceeding a maximum value or minimum threshold value. A reliability estimator estimates a reliability of the vehicle heading based on compliance with an intensity level criteria associated with one or more crop rows.
Type: Grant
Filed: December 16, 2005
Date of Patent: March 23, 2010
Assignees: Deere & Company, Kansas State University Research Foundation
Inventors: Jiantao Wei, Shufeng Han
-
Publication number: 20100067804
Abstract: A measuring method and equipment detect, quickly and with high precision, feature points (peak points or trough points) of a waveform, even for waveform signals with irregular feature point values or irregular distances between feature points, such as the density waveform signals obtained from tree ring images of wood specimens.
Type: Application
Filed: February 26, 2009
Publication date: March 18, 2010
Inventor: Takayuki Okochi
-
Publication number: 20100067805
Abstract: A device for identifying a traffic sign in an image includes a Hough transformer implemented to identify a plurality of line sections running in different directions through the image, in the image or in an edge image derived from it. The device further includes a shape detector implemented to detect a predefined shape in the image, or in the edge image derived from it, based on the identified line sections. The device also includes a pattern identifier implemented to select an image section corresponding to the detected predefined shape, and to identify a traffic sign based on the selected image section.
Type: Application
Filed: December 18, 2007
Publication date: March 18, 2010
Applicant: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventor: Frank Klefenz
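The Hough transform for lines that this abstract relies on can be sketched with a plain NumPy accumulator. This is the textbook (theta, rho) voting scheme, not the device's specific implementation; bin counts are assumed parameters:

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180, n_rho=200):
    """Vote in (theta, rho) space; the peak bin gives the dominant line
    x*cos(theta) + y*sin(theta) = rho."""
    h, w = shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    cos, sin = np.cos(thetas), np.sin(thetas)
    for y, x in edge_points:
        rho = x * cos + y * sin                       # one rho per theta
        bins = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], ri / (n_rho - 1) * 2 * diag - diag, acc
```

A shape detector for triangular or rectangular signs would then look for small sets of accumulator peaks whose angles differ by the expected amounts.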
-
Patent number: 7680761
Abstract: The present invention is directed to a method and mechanism for partitioning using information not directly located in the object being partitioned. According to an embodiment of the invention, foreign key-primary key relationships are utilized to create join conditions between multiple database tables to implement partitioning of a database object. Also disclosed are methods and mechanisms to perform partition pruning.
Type: Grant
Filed: September 27, 2004
Date of Patent: March 16, 2010
Assignee: Oracle International Corporation
Inventors: Mohamed Zait, Benoit Dageville
-
Publication number: 20100061639
Abstract: A method for identifying symbolic points on the image of a face including images of a right eye, left eye and mouth, includes: detecting and identifying elements with strong contrasts such as the irises, nostrils, or mouth; selecting zones of the image with respect to the elements with strong contrasts including a priori two sought-after symbolic points interrelated by a morphological criterion; searching within the zones for natural points through the convergence of lines of the image and, for each natural point, determining a signature, determining a score with respect to pre-established signatures and selecting the natural points having a score above a threshold value; in each zone, identifying pairs of natural points and determining for each identified pair a score with respect to pairs of standard symbolic points and selecting the pair of natural points having the best score as symbolic points of the zone.
Type: Application
Filed: December 5, 2007
Publication date: March 11, 2010
Applicant: CONTINENTAL AUTOMOTIVE FRANCE
Inventor: Bertrand Godreau
-
Publication number: 20100061637
Abstract: An image processing apparatus includes a binarization unit that binarizes a selected image region, based on prescribed criteria, into an image trimming target white pixel region and a background black pixel region; a hole filling unit that fills a hole of the white pixel region by converting a black pixel surrounded by the white pixel region into a white pixel; a distance conversion unit that calculates a distance value from each white pixel to the black pixel region; a skeleton extraction unit that extracts points indicating a skeleton of the white pixel region according to each distance value; and a trimming outline determination unit that excludes less significant points based on prescribed conditions, draws an oval or perfect circle according to each distance value, centered at a non-excluded point, and determines an outer edge of the ovals or perfect circles as an outline for trimming a trimming target image.
Type: Application
Filed: August 31, 2009
Publication date: March 11, 2010
Inventors: Daisuke Mochizuki, Akiko Terayama, Takuro Noda, Takuo Ikeda, Eijiro Mori
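The distance-conversion and skeleton-extraction steps above can be sketched on a binary mask. This uses a brute-force distance computation for clarity (real systems use an O(n) distance transform such as `scipy.ndimage.distance_transform_edt`), and approximates the skeleton as the local maxima of the distance map:

```python
import numpy as np

def distance_map(mask):
    """Distance from each foreground (True) pixel to the nearest background pixel.

    Brute force for illustration only: O(fg * bg) comparisons.
    """
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    d = np.zeros(mask.shape)
    for y, x in fg:
        d[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return d

def skeleton_points(d):
    """3x3 local maxima of the distance map approximate the region's skeleton."""
    pts = []
    for y in range(1, d.shape[0] - 1):
        for x in range(1, d.shape[1] - 1):
            if d[y, x] > 0 and d[y, x] == d[y-1:y+2, x-1:x+2].max():
                pts.append((y, x))
    return pts
```

The trimming outline would then be the envelope of circles centered at the surviving skeleton points, each with radius equal to its distance value.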
-
Publication number: 20100061623
Abstract: A position measuring apparatus including a first irradiating part that irradiates a first beam to an object, a second irradiating part that irradiates a second beam to the object, a capturing part that captures images of the object, a processing part that generates a first difference image and a second difference image by processing the images captured by the capturing part, an extracting part that extracts a contour and a feature point of the object from the first difference image, a calculating part that calculates three-dimensional coordinates of a reflection point located on the object based on the second difference image, and a determining part that determines a position of the object by matching the contour, the feature point, and the three-dimensional coordinates with respect to predetermined modeled data of the object.
Type: Application
Filed: September 3, 2009
Publication date: March 11, 2010
Applicant: FUJITSU LIMITED
Inventors: Hironori Yokoi, Toshio Endoh
-
Publication number: 20100054593
Abstract: An image processing apparatus executes smoothing processing (reduction conversion) of an input image to acquire a smoothed image (reduced image), acquires a normalization parameter for normalization from the smoothed image, and normalizes pixel values of the input image based on the normalization parameter.
Type: Application
Filed: September 2, 2009
Publication date: March 4, 2010
Applicant: CANON KABUSHIKI KAISHA
Inventors: Masahiro Matsushita, Hirotaka Shiiyama, Hidetomo Sohma, Koichi Magai
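One common concrete form of the idea above — derive a normalization parameter from a reduced (smoothed) copy of the image and apply it to the full-resolution pixels — is local-mean normalization. This is a hedged stand-in, not the patent's parameterization; the block size is an assumption:

```python
import numpy as np

def normalize_by_smoothing(img, block=8):
    """Divide each pixel by the local mean taken from a coarse (reduced) copy.

    The block-wise means play the role of the 'smoothed image' from which
    the normalization parameter is derived.
    """
    h, w = img.shape
    img = np.asarray(img, dtype=float)
    # Crop to a multiple of the block size, then average each block.
    local = img[:h - h % block, :w - w % block]
    means = local.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # Broadcast the coarse means back to full resolution and normalize.
    norm = np.kron(means, np.ones((block, block)))
    return local / np.maximum(norm, 1e-6)
```

A perfectly uniform image normalizes to all ones; shading and illumination gradients larger than the block size are flattened out.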
-
Publication number: 20100046840
Abstract: An image processing method includes sending test image data to a plurality of image recognition units configured to detect a recognition object from an image, setting an evaluation condition for evaluating a recognition result, evaluating a recognition result of the test image data by each of the plurality of image recognition units under the evaluation condition, and selecting from the plurality of image recognition units an image recognition unit to be used based on an evaluation result by the evaluation.
Type: Application
Filed: August 19, 2009
Publication date: February 25, 2010
Applicant: CANON KABUSHIKI KAISHA
Inventors: Noriyasu Hashiguchi, Kinya Osa
-
Publication number: 20100040292
Abstract: The enhanced detection of a waving engagement gesture, in which a shape is defined within motion data, the motion data is sampled at points that are aligned with the defined shape, and, based on the sampled motion data, positions of a moving object along the defined shape are determined over time. It is determined whether the moving object is performing a gesture based on a pattern exhibited by the determined positions, and an application is controlled if determining that the moving object is performing the gesture.
Type: Application
Filed: July 24, 2009
Publication date: February 18, 2010
Applicant: GESTURETEK, INC.
Inventor: IAN CLARKSON
-
Patent number: 7660441
Abstract: Automatic conflation systems and techniques which provide vector-imagery conflation and map-imagery conflation. Vector-imagery conflation is an efficient approach that exploits knowledge from multiple data sources to identify a set of accurate control points. Vector-imagery conflation provides automatic and accurate alignment of various vector datasets and imagery, and is appropriate for GIS applications, for example, requiring alignment of vector data and imagery over large geographical regions. Map-imagery conflation utilizes common vector datasets as "glue" to automatically integrate street maps with imagery. This approach provides automatic, accurate, and intelligent images that combine the visual appeal and accuracy of imagery with the detailed attribution information often contained in such diverse maps. Both conflation approaches are applicable for GIS applications requiring, for example, alignment of vector data, raster maps, and imagery.
Type: Grant
Filed: June 28, 2005
Date of Patent: February 9, 2010
Assignee: University of Southern California
Inventors: Ching-Chien Chen, Craig A. Knoblock, Cyrus Shahabi, Yao-Yi Chiang
-
Patent number: 7653247
Abstract: A system and method for extracting a corner point in a space using pixel information obtained from a camera are provided. The corner point extracting system includes a light generation module emitting light in a predetermined form (such as a plane form), an image acquisition module acquiring an image of a reflector reflecting the light emitted from the light generation module, and a control module obtaining distance data between the light generation module and the reflector using the acquired image and extracting a corner point by performing split-merge using a threshold proportional to the distance data. The threshold is a value proportional to the distance data which corresponds to pixel information of the image acquisition module.
Type: Grant
Filed: September 29, 2005
Date of Patent: January 26, 2010
Assignee: Samsung Electronics Co., Ltd.
Inventors: Myung-Jin Jung, Dong-Ryeol Park, Seok-Won Bang, Hyoung-Ki Lee
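The split step of the split-merge corner extraction mentioned above can be sketched with a recursive farthest-point test (the same idea as the Ramer-Douglas-Peucker split). Here the threshold is a plain constant; in the patent it would scale with the measured distance:

```python
import numpy as np

def split_corners(points, threshold):
    """Recursive split: the point farthest from the chord joining the
    endpoints becomes a corner when its perpendicular distance exceeds
    the threshold, and both halves are processed recursively."""
    pts = np.asarray(points, dtype=float)
    a, b = pts[0], pts[-1]
    ab = b - a
    # Perpendicular distance of each point to the line through a and b.
    d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0]))
    d /= np.linalg.norm(ab) + 1e-12
    i = int(d.argmax())
    if d[i] <= threshold:
        return []
    return (split_corners(pts[:i + 1], threshold)
            + [tuple(pts[i])]
            + split_corners(pts[i:], threshold))
```

The complementary merge step (not shown) would re-join adjacent segments whose combined fit error stays below the same distance-dependent threshold.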
-
Publication number: 20100014841
Abstract: A video recording apparatus optimally records partial images made up of a partial area from an original image and containing a feature point such as a person's face. An imaging unit acquires the original image by imaging at a resolution higher than the pixel count used for video recording. A feature point detector detects one or more feature points from the original image. A partial image clipper clips, from the original image, a partial image containing a feature point selected by the user. An encoder encodes the clipped partial image. A recording unit then records the encoded partial image to a recording medium. In so doing, the user is able to select a recording subject from a summary of detected feature points before or during video recording, and thus record a desired face.
Type: Application
Filed: July 7, 2009
Publication date: January 21, 2010
Applicant: SONY CORPORATION
Inventors: Shinya KANO, Shunji Okada
-
Patent number: 7643966Abstract: A computer model of a physical structure (or object) can be generated using context-based hypothesis testing. For a set of point data, a user selects a context specifying a geometric category corresponding to the structure shape. The user specifies at least one seed point from the set that lies on a surface of the structure of interest. Using the context and point data, the system loads points in a region near the seed point(s), and determines the dimensions and orientation of an initial surface component in the context that corresponds to those points. If the selected component is supported by the points, that component can be added to a computer model of the surface. The system can repeatedly find points near a possible extension of the surface model, using the context and current surface component(s) to generate hypotheses for extending the surface model to these points.Type: GrantFiled: March 8, 2005Date of Patent: January 5, 2010Assignee: Leica Geosystems AGInventors: Jeffrey Minoru Adachi, Mark Damon Wheeler, Jonathan Apollo Kung, Richard William Bukowski, Laura Michele Downs
-
Publication number: 20090324070Abstract: An information processing apparatus inputs image data and calculates a relative magnitude between coefficient or pixel values of the input image data. The image processing apparatus generates verification data of the image data using the calculated relative magnitude.Type: ApplicationFiled: June 22, 2009Publication date: December 31, 2009Applicant: CANON KABUSHIKI KAISHAInventor: Junichi Hayashi
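A minimal sketch of verification data built from relative magnitudes, in the spirit of this abstract; the pair selection and the one-bit-per-pair encoding are assumptions:

```python
def magnitude_signature(values, pairs):
    """Verification data from relative magnitudes: for each (i, j) pair,
    record 1 if values[i] > values[j], else 0. The ordering survives
    magnitude-preserving operations such as uniform scaling, so the
    signature can later be recomputed and compared against the original."""
    return [1 if values[i] > values[j] else 0 for i, j in pairs]
```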
-
Patent number: 7639258Abstract: Methods and apparatus, including computer program products, that implement a method for determining a winding order for a glyph associated with a font. The glyph has an outline that has an outside path. In one aspect, a method includes identifying four extrema points of the outline, each being an intersection of two vectors obtained from the outline; and for each of the points, calculating a cross product of the two vectors intersecting at the point. A positive result indicates that the outside path is wound in a first direction, and a negative result indicates that the outside path is wound in an opposite, second direction. The winding order of the outside path is determined based on the cross products calculated. In a particular implementation, the method determines that the outside path is wound counter clockwise when three or four of the results are positive.Type: GrantFiled: August 15, 2006Date of Patent: December 29, 2009Assignee: Adobe Systems IncorporatedInventors: Terence S. Dowling, R. David Arnold
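The cross-product test this abstract describes can be sketched as below, assuming the four extrema and the two outline vectors meeting at each are already known (names are illustrative):

```python
def winding_order(extrema_vectors):
    """Determine outline winding from 2D cross products at extrema points.

    extrema_vectors: four (v_in, v_out) pairs, each vector a (dx, dy) tuple
    of the two outline vectors intersecting at one extremum.
    Returns "ccw" when three or four cross products are positive, else "cw".
    """
    positives = 0
    for (ax, ay), (bx, by) in extrema_vectors:
        if ax * by - ay * bx > 0:   # z-component of the 2D cross product
            positives += 1
    return "ccw" if positives >= 3 else "cw"
```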
-
Publication number: 20090316990Abstract: An object recognition device includes: a model image processing unit having a feature point set decision unit setting a feature point set in a model image, and detecting the feature quantity of the feature point set, and a segmentation unit segmenting the model image; a processing-target image processing unit having a feature point setting unit setting a feature point in a processing-target image and detecting the feature quantity of the feature point; a matching unit comparing the feature quantities of the feature points set in the model image and in the processing-target image so as to detect the feature point corresponding to the feature point set, and executing a matching; and a determination unit determining the processing result in the matching unit so as to determine presence/absence of a model object in the processing-target image.Type: ApplicationFiled: June 19, 2009Publication date: December 24, 2009Inventors: Akira NAKAMURA, Yoshiaki Iwai, Takayuki Yoshigahara
-
Publication number: 20090310869Abstract: A system, apparatus, method, and computer program product for evaluating an object disposed on an upper surface of an object holder. At least one first frame representing a captured portion of the object is acquired, while the object holder is positioned at each of multiple locations. At least one second frame representing a captured portion of at least one other surface of the object holder is acquired, while the object holder is positioned at each of the locations. At least one spatial characteristic associated with the captured portion of the object is determined based on at least one of the acquired frames. Values of multiple optical markers captured in each second frame are determined, where at least two of the optical markers have different characteristics. At least one of coordinates associated with the values and an orientation of the captured portion of the at least one other surface are determined.Type: ApplicationFiled: November 26, 2008Publication date: December 17, 2009Applicant: Sirona Dental Systems GmbHInventors: Frank Thiel, Bjorn Popilka, Gerrit Kocherscheidt
-
Patent number: 7630555Abstract: A position and orientation measuring apparatus includes an inside-out index detecting section that can obtain an image from an imaging apparatus and detect an index provided on a measuring object body, an outside-in index detecting section that can observe and detect indices provided on the imaging apparatus and on the measuring object body based on an image obtained from an outside-in camera provided in an environment, and a position and orientation calculating section that can calculate positions and orientations of the imaging apparatus and the measuring object body based on information relating to image coordinates of the detected indices.Type: GrantFiled: May 4, 2006Date of Patent: December 8, 2009Assignee: Canon Kabushiki KaishaInventors: Kiyohide Satoh, Shinji Uchiyama
-
Publication number: 20090296986Abstract: An image processing device includes: a tracking unit to track a predetermined point on an image as a tracking point, in response to an operation by a user; a display control unit to display tracking point candidates, which are greater in number than the objects moving on the image and fewer in number than the pixels of the image, on the image; and a setting unit to set the tracking point candidates as the tracking points in the next frame of the tracking unit, in response to an operation by a user.Type: ApplicationFiled: May 6, 2009Publication date: December 3, 2009Applicant: SONY CORPORATIONInventor: Michimasa OBANA
-
Patent number: 7627178Abstract: In an image recognition apparatus, feature point extraction sections extract feature points from a model image and an object image. Feature quantity retention sections extract a feature quantity for each of the feature points and retain them along with positional information of the feature points. A feature quantity comparison section compares the feature quantities with each other to calculate the similarity or the dissimilarity and generates a candidate-associated feature point pair having a high possibility of correspondence. A model attitude estimation section repeats an operation of projecting an affine transformation parameter determined by three pairs randomly selected from the candidate-associated feature point pair group onto a parameter space. The model attitude estimation section assumes each member in a cluster having the largest number of members formed in the parameter space to be an inlier.Type: GrantFiled: April 22, 2004Date of Patent: December 1, 2009Assignee: Sony CorporationInventors: Hirotaka Suzuki, Kohtaro Sabe, Masahiro Fujita
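Determining an affine transformation parameter from three randomly selected point pairs, as in this abstract, can be sketched by solving two 3x3 linear systems (the x- and y-equations decouple); the function names are illustrative, and degenerate (collinear) triples are not handled:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def affine_from_three_pairs(src, dst):
    """Affine parameters (a, b, tx, c, d, ty) with u = a*x + b*y + tx and
    v = c*x + d*y + ty for the three pairs (x, y) -> (u, v). Both systems
    share the same 3x3 matrix, solved here by Cramer's rule."""
    M = [[x, y, 1.0] for x, y in src]
    d0 = det3(M)
    def solve(rhs):
        out = []
        for j in range(3):
            Mj = [row[:] for row in M]   # replace column j with rhs
            for i in range(3):
                Mj[i][j] = rhs[i]
            out.append(det3(Mj) / d0)
        return out
    a, b, tx = solve([u for u, _ in dst])
    c, d, ty = solve([v for _, v in dst])
    return a, b, tx, c, d, ty
```

The RANSAC-style voting in the abstract would repeat this over many random triples and cluster the resulting parameter vectors.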
-
Patent number: 7623736Abstract: A method for determining a translation of a three-dimensional pre-operative image data set to obtain a registration of the three-dimensional image data with a patient positioned in a projection imaging system. In one embodiment the user identifies an initial three-dimensional organ center from projections and extreme contour landmark points of the object on a set of projections. A set of contour points for the image object in each of a plurality of three-dimensional cross-section planes is obtained and the points projecting nearest to the user-identified landmark points are selected. A three-dimensional grid having a predetermined number of intervals at a predetermined interval spacing centered at the user-identified organ center is defined. The three-dimensional image data contour points as centered onto each grid point are projected for evaluation and selection of the grid point leading to contour points projecting nearest to the user-identified landmark points.Type: GrantFiled: May 5, 2006Date of Patent: November 24, 2009Assignee: Stereotaxis, Inc.Inventor: Raju R. Viswanathan
-
Publication number: 20090279740Abstract: A distance measuring device (1) includes a feature point detecting element (31) for detecting a set of feature points from a plurality of images (2) picked up at a plurality of places (a, b) by a pickup element, a change detecting element (33) for detecting relative changes among images with respect to the detected set of the feature points, and a first specifying element (8) for specifying rotation quantities of the image pickup element among the places in accordance with the detected changes. Here, the feature point detecting element detects a set of points (P1, P2, P3, P4) composing lines (310, 320) that belong to a stationary object and that are substantially perpendicular to the optical axis of the image pickup element as a set of feature points.Type: ApplicationFiled: September 13, 2006Publication date: November 12, 2009Applicant: PIONEER CORPORATIONInventor: Osamu Yamazaki
-
Publication number: 20090274373Abstract: The invention provides an image processing apparatus for generating coordination calibration points. The image processing apparatus includes a subtracting module, an edge detection module and an intersection point generation module. The subtracting module subtracts a first image from a second image to generate a first subtracted image, and subtracts the first image from a third image to generate a second subtracted image. The edge detection module performs an edge detection process for the first subtracted image to generate a first edge image, and performs the edge detection process for the second subtracted image to generate a second edge image, wherein the first edge image includes a first edge and the second edge image includes a second edge. The intersection point generation module generates, according to the first and second edges, an intersection point pixel, serving as the coordination calibration point corresponding to the first edge and the second edge.Type: ApplicationFiled: October 24, 2008Publication date: November 5, 2009Inventors: Yi-Ming Huang, Ching-Chun Chiang, Yun-Cheng Liu
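Once the two edges have been detected, the intersection point generation step reduces to intersecting two lines; a minimal sketch, assuming each edge is represented by two points on it (the representation is an assumption, since the abstract works on edge images):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 (first edge) with the line
    through p3, p4 (second edge); the result serves as the coordination
    calibration point. Returns None for parallel edges."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None
    a = x1 * y2 - y1 * x2          # cross products of each line's two points
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / denom,
            (a * (y3 - y4) - (y1 - y2) * b) / denom)
```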
-
Publication number: 20090268264Abstract: An image scanner apparatus includes an automatic image acquiring unit having a feature point detecting unit, an inclination calculating unit, a feature point rotation calculating unit, and a rectangular area calculating unit. The feature point detecting unit is arranged to detect a plurality of feature points of an original document outline from image data acquired by scanning an original document. The inclination calculating unit is arranged to calculate values regarding an original document inclination. The feature point rotation calculating unit is arranged to calculate positions of rotated feature points acquired by rotating the plurality of feature points detected through the feature point detecting unit around a prescribed center point by an inclination angle in a direction in which the original document inclination is corrected.Type: ApplicationFiled: March 9, 2009Publication date: October 29, 2009Applicant: MURATA MACHINERY, LTD.Inventor: Katsushi MINAMINO
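The feature point rotation calculation, rotating the detected outline points around a prescribed center by the correction angle, is a plain 2D rotation; a minimal sketch with illustrative names:

```python
import math

def rotate_points(points, center, angle_deg):
    """Rotate feature points around `center` by `angle_deg` degrees
    counter-clockwise, as when correcting a detected document inclination."""
    cx, cy = center
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]
```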
-
Publication number: 20090257636Abstract: A method is disclosed for use in eye registrations and examinations, some embodiments of the method including acquiring scan pattern scans; forming images from the scans; registering the images; generating a feature map from the scan pattern scans; identifying features on the feature map; and recording positional information of features.Type: ApplicationFiled: April 14, 2009Publication date: October 15, 2009Inventors: Jay Wei, Ben Kwei Jang, Shuw-Shenn Luh, Yuanmu Deng
-
Publication number: 20090252377Abstract: A mobile object recognizing device comprises a camera (2) for taking time-series images, a feature point extracting unit (23) for extracting the feature points of the individual time-series images taken by the camera (2), an optical flow creating unit (24) for comparing the feature points of the time-series images between different images, to create an optical flow joining the feature points having the same pattern, and a grouping operation unit (25) for selecting an optical flow as one belonging to one mobile object when its prolonged line intersects with one vanishing point within a predetermined error range, the external ratio being taken of the segment joining the other end point and the vanishing point, with one end point of the optical flow as the externally dividing point in the extension of the optical flow.Type: ApplicationFiled: October 4, 2007Publication date: October 8, 2009Applicant: AISIN SEIKI KABUSHIKI KAISHAInventors: Tokihiko Akita, Keita Hayashi
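The vanishing-point membership test in the grouping step can be sketched as a point-to-line distance check (the tolerance and representation of the flow as two endpoints are assumptions):

```python
import math

def flow_passes_vanishing_point(flow_start, flow_end, vanishing_point, tol):
    """True when the prolonged line of the optical flow passes within `tol`
    of the vanishing point: flows belonging to one rigidly moving object
    share a common vanishing point."""
    (x1, y1), (x2, y2) = flow_start, flow_end
    vx, vy = vanishing_point
    length = math.hypot(x2 - x1, y2 - y1)
    if length == 0:
        return False   # degenerate flow: no direction to prolong
    # perpendicular distance from the vanishing point to the prolonged line
    dist = abs((x2 - x1) * (y1 - vy) - (x1 - vx) * (y2 - y1)) / length
    return dist <= tol
```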