Local Or Regional Features Patents (Class 382/195)
  • Patent number: 9002055
    Abstract: A method (100) and system (300) are described for processing video data comprising a plurality of images. The method (100) comprises obtaining (104, 106), for each of the plurality of images, a segmentation into a plurality of regions and a set of keypoints, and tracking (108) at least one region between a first image and a subsequent image, resulting in a matched region in the subsequent image, taking into account a matching between keypoints in the first image and the subsequent image. The latter results in accurate tracking of regions. Furthermore, the method may optionally also perform label propagation taking into account keypoint tracking.
    Type: Grant
    Filed: October 13, 2008
    Date of Patent: April 7, 2015
    Assignees: Toyota Motor Europe NV, Cambridge Enterprise Limited
    Inventors: Ryuji Funayama, Hiromichi Yanagihara, Julien Fauqueur, Gabriel Brostow, Roberto Cipolla
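    A minimal sketch of the keypoint-matching step that this kind of region tracking relies on, using OpenCV ORB features between two consecutive frames. The function and variable names below are illustrative, not taken from the patent.
    ```python
    import cv2
    import numpy as np

    def match_keypoints(frame_a, frame_b, max_matches=200):
        """Detect ORB keypoints in two frames and return matched point pairs."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, desc_a = orb.detectAndCompute(frame_a, None)
        kp_b, desc_b = orb.detectAndCompute(frame_b, None)
        if desc_a is None or desc_b is None:
            return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
        # Brute-force Hamming matching with cross-check keeps only mutual best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)[:max_matches]
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
        return pts_a, pts_b

    def track_region(region_mask, pts_a, pts_b):
        """Estimate how a segmented region moved from the keypoints that fall inside it."""
        inside = [i for i, (x, y) in enumerate(pts_a) if region_mask[int(y), int(x)]]
        if not inside:
            return None
        # The median keypoint displacement is a robust estimate of the region's motion.
        return np.median(pts_b[inside] - pts_a[inside], axis=0)
    ```
    A full tracker would combine this displacement with the segmentation of the subsequent image to pick the matched region, and could propagate region labels along the same keypoint matches.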
  • Patent number: 9002114
    Abstract: Methods, apparatus, and articles of manufacture to measure geographical features using an image of a geographical location are disclosed. An example method includes dividing, with a processor, an image of a geographic area of interest into a plurality of geographical zones, the geographical zones being representative of different geographical areas having approximately equal physical areas, measuring, with the processor, a geographical feature represented in the image for corresponding ones of the plurality of geographical zones, storing descriptions for the geographical zones in a computer memory, and storing values representative of the geographical feature of the geographical zones.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: April 7, 2015
    Assignee: The Nielsen Company (US), LLC
    Inventors: David Miller, Pawel Mikolaj Bedynski, Mainak Mazumdar, Ludo Daemen
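    A toy sketch of the zoning idea above: split an image of a geographic area into a grid of zones and store a description and a measured feature value per zone. Mean pixel intensity stands in for the geographical feature, and equal pixel areas stand in for approximately equal physical areas (which a real system would obtain via the map projection); all names are illustrative.
    ```python
    import numpy as np

    def measure_by_zone(image, rows, cols):
        """Divide an image into a rows x cols grid of zones and measure a feature per zone."""
        h, w = image.shape[:2]
        zones = {}
        for r in range(rows):
            for c in range(cols):
                y0, y1 = r * h // rows, (r + 1) * h // rows
                x0, x1 = c * w // cols, (c + 1) * w // cols
                zones[(r, c)] = {
                    "bounds": (y0, y1, x0, x1),                    # zone description to store
                    "feature": float(image[y0:y1, x0:x1].mean()),  # measured value to store
                }
        return zones

    # Example: a synthetic 600x800 aerial image split into 3x4 zones.
    zones = measure_by_zone(np.random.rand(600, 800), rows=3, cols=4)
    ```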
  • Patent number: 9002085
    Abstract: Embodiments disclose systems and methods that aid in screening, diagnosis and/or monitoring of medical conditions. The systems and methods may allow, for example, for automated identification and localization of lesions and other anatomical structures from medical data obtained from medical imaging devices, computation of image-based biomarkers including quantification of dynamics of lesions, and/or integration with telemedicine services, programs, or software.
    Type: Grant
    Filed: April 30, 2014
    Date of Patent: April 7, 2015
    Assignee: Eyenuk, Inc.
    Inventors: Kaushal Mohanlal Solanki, Chaithanya Amai Ramachandra, Sandeep Bhat Krupakar
  • Patent number: 9002117
    Abstract: Methods, systems, and computer program products for parsing objects in a video are provided herein. A method includes producing a plurality of versions of an image of an object, wherein each version has a different resolution of said image of said object, and computing an appearance score at each of a plurality of regions on the lowest resolution version for at least one attribute for said object. Such a method also includes analyzing one or more other versions to compute a resolution context score for each of the plurality of regions in the lowest resolution version, and determining a configuration of the at least one semantic attribute in the lowest resolution version based on the appearance score and the resolution context score.
    Type: Grant
    Filed: March 7, 2014
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Lisa Marie Brown, Rogerio Schmidt Feris, Arun Hampapur, Daniel Andre Vaquero
  • Patent number: 9002115
    Abstract: A dictionary data registration apparatus includes a dictionary configured to register a local feature amount for each region of an image with respect to each of a plurality of categories, an extraction unit configured to extract the local feature amounts from a plurality of regions of an input image, a selection unit configured to select, for each of the plurality of categories, a plurality of the local feature amounts for each region according to a distribution of the local feature amounts extracted by the extraction unit from a plurality of regions of a plurality of input images belonging to the category, and a registration unit configured to register the selected plurality of local feature amounts in the dictionary as the local feature amounts for each region with respect to the category.
    Type: Grant
    Filed: June 19, 2012
    Date of Patent: April 7, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kiyotaka Takahashi, Kotaro Yano, Takashi Suzuki, Hiroshi Sato
  • Patent number: 9002116
    Abstract: One exemplary embodiment involves identifying feature matches between each of a plurality of object images and a test image, each feature match being between a feature of a respective object image and a matching feature of the test image, wherein there is a spatial relationship between each respective object image feature and a test image feature, and wherein the object depicted in the test image comprises a plurality of attributes. Additionally, the embodiment involves estimating, for each attribute in the test image, an attribute value based at least in part on information stored in metadata associated with each of the object images.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: April 7, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan Brandt, Xiaohui Shen
  • Publication number: 20150092997
    Abstract: A person recognition apparatus, method, and non-transitory computer readable recording medium are provided which can perform accurate person recognition that follows time-dependent changes in a face. A sorting section sorts a plurality of images by shooting date and time. A group division section divides the plurality of images into a plurality of groups according to a predetermined shooting date and time range. A face recognition section extracts feature amounts by face recognition for each group. An in-group person determination section determines a person having a similarity of a predetermined reference threshold value or higher as the same person and integrates the feature amounts relevant to the person for each group. An inter-group person recognition section recognizes persons having a similarity of a predetermined recognition threshold value or higher as the same person between two groups based on the feature amounts integrated in adjacent groups.
    Type: Application
    Filed: September 29, 2014
    Publication date: April 2, 2015
    Inventor: Yoshihiro YAMAGUCHI
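    A small sketch of the grouping-and-merging flow described in this abstract: sort photos by shooting date and time, divide them into date-range groups, merge face feature vectors judged to belong to the same person within each group, then recognize the same person across adjacent groups when the similarity of the integrated features exceeds a recognition threshold. The embeddings, thresholds, and helper names are placeholders, not the publication's actual values.
    ```python
    import numpy as np
    from datetime import timedelta

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def group_by_date(photos, span_days=30):
        """photos: list of (shooting_datetime, face_embedding); returns date-range groups."""
        photos = sorted(photos, key=lambda p: p[0])          # sort by shooting date and time
        groups, current, start = [], [], None
        for when, emb in photos:
            if start is None or when - start > timedelta(days=span_days):
                if current:
                    groups.append(current)
                current, start = [], when                    # open a new date-range group
            current.append(emb)
        if current:
            groups.append(current)
        return groups

    def integrate_group(embeddings, same_person_threshold=0.8):
        """Within a group, merge embeddings judged to be the same person into one mean vector."""
        persons = []
        for emb in embeddings:
            for person in persons:
                if cosine(person["mean"], emb) >= same_person_threshold:
                    person["members"].append(emb)
                    person["mean"] = np.mean(person["members"], axis=0)
                    break
            else:
                persons.append({"members": [emb], "mean": emb})
        return [p["mean"] for p in persons]

    def link_adjacent_groups(groups, recognition_threshold=0.7):
        """Recognize the same person across adjacent groups via the integrated features."""
        integrated = [integrate_group(g) for g in groups]
        links = []
        for g in range(len(integrated) - 1):
            for i, a in enumerate(integrated[g]):
                for j, b in enumerate(integrated[g + 1]):
                    if cosine(a, b) >= recognition_threshold:
                        links.append(((g, i), (g + 1, j)))   # same person across the two groups
        return links
    ```
    Dividing by shooting date before matching lets the recognition tolerate gradual, time-dependent changes in a face, since only adjacent groups are compared directly.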
  • Publication number: 20150093033
    Abstract: A method and apparatus for generating a dewarped document using a document image captured using a camera are provided. The method includes obtaining the document image captured using the camera, extracting text lines from the document image captured using the camera, determining a projection formula to convert positions of respective points constituting the extracted text lines to coordinates projected on a plane of the dewarped document, determining a target function used to calculate a difference between text lines projected on the plane of the dewarped document using the projection formula and real text lines, calculating parameters that minimize the target function, and converting the document image to the dewarped document by substituting the calculated parameters into the projection formula.
    Type: Application
    Filed: September 29, 2014
    Publication date: April 2, 2015
    Inventors: Mu-sik KWON, Nam-ik CHO, Sang-ho KIM, Beom-su KIM, Won-kyo SEO
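    A compact sketch of the optimization step the abstract describes: choose parameters of a projection formula so that projected text-line points come out as straight horizontal lines, i.e. minimize a target function measuring how far the projected lines are from being flat. The simple quadratic warp model and the use of scipy here are illustrative assumptions, not the method actually disclosed.
    ```python
    import numpy as np
    from scipy.optimize import minimize

    def project(points, params):
        """Toy projection formula: undo a vertical displacement quadratic in x.

        points: (N, 2) array of (x, y) positions on a detected text line.
        params: (a, b) coefficients of an assumed warp y_offset = a*x**2 + b*x.
        """
        a, b = params
        x, y = points[:, 0], points[:, 1]
        return np.stack([x, y - (a * x ** 2 + b * x)], axis=1)

    def target(params, text_lines):
        """Target function: variance of projected y per line; straight lines give zero."""
        return sum(np.var(project(line, params)[:, 1]) for line in text_lines)

    def dewarp_params(text_lines):
        """text_lines: list of (N_i, 2) point arrays, one per extracted text line."""
        result = minimize(target, x0=np.zeros(2), args=(text_lines,), method="Nelder-Mead")
        return result.x

    # Two synthetically curved text lines; the optimizer recovers roughly a=0.002, b=0.
    x = np.linspace(0, 100, 50)
    lines = [np.stack([x, 10 + 0.002 * x ** 2], axis=1),
             np.stack([x, 40 + 0.002 * x ** 2], axis=1)]
    params = dewarp_params(lines)
    ```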
  • Publication number: 20150093032
    Abstract: An image processing apparatus includes a subject area detector and a subject area determinator. The subject area detector is configured to perform subject detection processing to detect a subject area from an input image. The subject area determinator is configured to determine a final subject area by majority decision processing that is based on the subject areas detected in the subject detection processing performed a plurality of times.
    Type: Application
    Filed: August 14, 2014
    Publication date: April 2, 2015
    Applicant: Sony Corporation
    Inventor: Yuta NAKAO
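    A minimal sketch of the majority-decision step: run subject detection several times (for example on consecutive frames), let each detected rectangle vote for its pixels, and keep the pixels supported by more than half of the runs as the final subject area. The helper name and rectangle format are illustrative.
    ```python
    import numpy as np

    def majority_subject_area(detections, image_shape):
        """detections: list of (x, y, w, h) rectangles from repeated subject detection.

        Returns a boolean mask of pixels voted for by a majority of the detection runs.
        """
        votes = np.zeros(image_shape, dtype=np.int32)
        for x, y, w, h in detections:
            votes[y:y + h, x:x + w] += 1          # each detection votes for its pixels
        return votes > len(detections) / 2        # majority decision per pixel

    # Three detections on a 100x100 image; the outlier rectangle is voted out.
    mask = majority_subject_area([(20, 30, 40, 40), (22, 28, 40, 42), (60, 10, 10, 10)],
                                 image_shape=(100, 100))
    ```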
  • Publication number: 20150093015
    Abstract: An image processor generates a Super-Resolution (SR) frame by upscaling. A Human Visual Preference Model (HVPM) helps detect random texture regions, where visual artifacts and errors are tolerated to allow for more image details, and immaculate regions having flat areas, corners, or regular structures, where details may be sacrificed to prevent annoying visual artifacts that seem to stand out more. A regularity or isotropic measurement is generated for each input pixel. More regular and less anisotropic regions are mapped as immaculate regions. Higher weights for blurring, smoothing, or blending from a single frame source are assigned for immaculate regions to reduce the likelihood of generated artifacts. In the random texture regions, multiple frames are used as sources for blending, and sharpening is increased to enhance details, but more artifacts are likely. These artifacts are more easily tolerated by humans in the random texture regions than in the regular-structure immaculate regions.
    Type: Application
    Filed: January 24, 2014
    Publication date: April 2, 2015
    Applicant: Hong Kong Applied Science & Technology Research Institute Company Limited
    Inventors: Luhong LIANG, Peng LUO, King Hung CHIU, Wai Keung CHEUNG
  • Patent number: 8995758
    Abstract: According to an embodiment, a method for filtering descriptors for visual object recognition is provided. The method includes identifying false positive descriptors having a local match confidence that exceeds a predetermined threshold and a global image match confidence that is less than a second threshold. The method also includes training at least one classifier to discriminate between the false positive descriptors and other descriptors. The method further includes filtering feature point matches using the at least one classifier. According to another embodiment, the filtering step may further include removing one or more feature point matches from a result set. According to a further embodiment, a system for filtering feature point matches for visual object recognition is provided. The system includes a hard false positive identifier, a classifier trainer and a hard false positive filter.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: March 31, 2015
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Ulrich Buddemeier, Hartmut Neven
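    A sketch of the filtering idea in this abstract, using scikit-learn: label descriptor matches whose local match confidence exceeds a threshold but whose global image-match confidence falls below a second threshold as hard false positives, train a classifier to separate them from the other matches, and then remove the matches the classifier flags from a result set. The feature layout, thresholds, and classifier choice are assumptions.
    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def train_false_positive_filter(descriptors, local_conf, global_conf,
                                    local_thresh=0.8, global_thresh=0.3):
        """Train a classifier that discriminates hard false positives from other matches."""
        # Hard false positives: locally convincing matches that did not support the image match.
        labels = ((local_conf > local_thresh) & (global_conf < global_thresh)).astype(int)
        if labels.min() == labels.max():
            raise ValueError("need examples of both classes to train the filter")
        clf = LinearSVC(C=1.0, max_iter=10000)
        clf.fit(descriptors, labels)
        return clf

    def filter_result_set(clf, result_descriptors):
        """Remove feature point matches that the classifier predicts to be false positives."""
        return result_descriptors[clf.predict(result_descriptors) == 0]

    rng = np.random.default_rng(0)
    clf = train_false_positive_filter(rng.normal(size=(500, 16)),
                                      rng.random(500), rng.random(500))
    filtered = filter_result_set(clf, rng.normal(size=(50, 16)))
    ```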
  • Patent number: 8995772
    Abstract: The subject disclosure is directed towards a face detection technology in which image data is classified as being a non-face image or a face image. Image data is processed into an image pyramid. Features, comprising pixel pairs of the image pyramid, are provided to stages of a cascading classifier to remove sub-window candidates that are classified as non-face sub-windows within each stage. The face detection technology continues with one or more subsequent stages to output a result as to whether the image contains a face.
    Type: Grant
    Filed: November 9, 2012
    Date of Patent: March 31, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Wolf Kienzle
  • Patent number: 8995771
    Abstract: Implementations for identifying duplicate images in an image space are described. An image space is partitioned into a plurality of coarse clusters based on signatures of the images within the image space. The signatures are determined from compact descriptors of the images. Refined clusters that include one or more images of an individual coarse cluster are created based on pair-wise comparisons of the compact descriptors of images in the coarse cluster, and the refined clusters are identified as sets of duplicate images. The refined clusters are grown by searching in similar coarse clusters for images to add to the refined clusters.
    Type: Grant
    Filed: April 30, 2012
    Date of Patent: March 31, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lei Zhang, Xin-Jing Wang, Wei-Ying Ma
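    A small sketch of the two-stage clustering in this abstract: compute a compact descriptor per image, hash it into a coarse signature to form coarse clusters, then compare descriptors pair-wise within each coarse cluster and treat near-identical pairs as duplicates. Using a tiny grayscale thumbnail as the compact descriptor is a placeholder assumption.
    ```python
    import numpy as np
    from collections import defaultdict
    from itertools import combinations

    def compact_descriptor(image, size=8):
        """Sample the grayscale image down to a size x size vector (the compact descriptor)."""
        h, w = image.shape
        ys = np.arange(size) * h // size
        xs = np.arange(size) * w // size
        return image[np.ix_(ys, xs)].astype(np.float32).ravel()

    def signature(descriptor):
        """Coarse signature: the descriptor thresholded against its own mean, as a bit tuple."""
        return tuple((descriptor > descriptor.mean()).astype(int))

    def find_duplicates(images, dup_thresh=5.0):
        descs = [compact_descriptor(im) for im in images]
        coarse = defaultdict(list)
        for idx, d in enumerate(descs):
            coarse[signature(d)].append(idx)                  # coarse clusters by signature
        duplicates = []
        for members in coarse.values():
            for i, j in combinations(members, 2):             # pair-wise comparison per cluster
                if np.linalg.norm(descs[i] - descs[j]) < dup_thresh:
                    duplicates.append((i, j))                 # refined cluster: duplicate pair
        return duplicates
    ```
    The growing step the abstract mentions would extend this by also searching coarse clusters with similar (not identical) signatures for further members of each refined cluster.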
  • Publication number: 20150086119
    Abstract: An image processing apparatus includes: an obtaining member to obtain an image; a copying member to copy the image obtained by the obtaining member; a detecting member to detect a face area from the copied image or the obtained image; a correcting member to perform correction to adjust a brightness and a color shade of a whole of the obtained or copied image from which the face area is detected by the detecting member so that a flesh color component of the face area detected by the detecting member is corrected to become a predetermined status; and a synthesizing member to make a transparency of the face area detected by the detecting member different from that of other areas and to synthesize the image corrected by the correcting member and the obtained or copied image from which the face area is not detected by the detecting member.
    Type: Application
    Filed: September 15, 2014
    Publication date: March 26, 2015
    Applicant: CASIO COMPUTER CO., LTD.
    Inventor: Takeshi SATO
  • Publication number: 20150086118
    Abstract: A method for recognizing a visual context in an image includes at least one step of extracting a plurality of local descriptors from the image and at least one step of coding the plurality of local descriptors by developing a coding matrix, associating each local descriptor with one or a plurality of visual words in a codebook according to at least one similarity criterion. The method is characterized in that said coding step results from a compromise between the similarity of a given local descriptor to the visual words of the codebook and its resemblance to the visual words associated with the local descriptors that are spatially near to it in the domain of the image.
    Type: Application
    Filed: April 11, 2013
    Publication date: March 26, 2015
    Inventors: Aymen Shabou, Herve Le Borgne
  • Publication number: 20150085118
    Abstract: The invention concerns a method for detecting raindrops on a windscreen of a vehicle, in which an image of at least an area of the windscreen is captured, wherein at least one object is extracted from the captured image, and wherein ambient light conditions are determined (S12). At least one of at least two ways of object extraction (S14, S18) is performed in dependence on the ambient light conditions. Moreover, the invention concerns a camera assembly for detecting raindrops on a windscreen of a vehicle.
    Type: Application
    Filed: September 7, 2011
    Publication date: March 26, 2015
    Applicant: VALEO SCHALTER UND SENSOREN GMBH
    Inventors: Samia Ahiad, Caroline Robert-Landry
  • Publication number: 20150086120
    Abstract: In an image processing apparatus, an image acquiring section acquires one or more images. An image analysis information acquiring section acquires image analysis information on each of the one or more images. A theme determining section determines a main theme representing a theme of each group of images related to each other among the one or more images and a subtheme representing a theme of each of the one or more images based on information on photography tendencies of images associated with each of one or more themes and the image analysis information on each of the one or more images. A theme information output section outputs information on the main theme and information on the subtheme.
    Type: Application
    Filed: September 24, 2014
    Publication date: March 26, 2015
    Inventors: Kei YAMAJI, Daisuke YAMADA, Kazuma TSUKAGOSHI, Yohei MOMOKI
  • Patent number: 8989452
    Abstract: A method for authenticating the identity of a handset user is provided. The method includes: obtaining a login account and a password from the user; judging whether the login account and the password are correct; if the login account or the password is incorrect, refusing the user access to an operating system of the handset; if the login account and the password are correct, sending the login account and the password to a cloud server, wherein the login account and the password correspond to a face sample image library of the user stored on the cloud server; acquiring an input face image of the user; sending the input face image to the cloud server; and authenticating, by the cloud server, the identity of the user according to the login account, the password and the input face image.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: March 24, 2015
    Assignee: Dongguan Ruiteng Electronics Technologies Co., Ltd
    Inventors: Xiaojun Liu, Dongxuan Gao
  • Patent number: 8988190
    Abstract: A portable information handling system includes a top cover, a base, and an electronic latch. The top cover is connected to the base. The top cover has a gesture sensitive surface configured to receive a trace. The electronic latch is in communication with the gesture sensitive surface, and is configured to latch the top cover and the base together. The electronic latch is further configured to unlatch the top cover from the base in response to receiving a signal representing that the trace received on the gesture sensitive surface is proper.
    Type: Grant
    Filed: September 3, 2009
    Date of Patent: March 24, 2015
    Assignee: Dell Products, LP
    Inventors: Bradley M. Lawrence, Keith A. Kozak, Nicolas A. Denhez
  • Patent number: 8989502
    Abstract: A computer-implemented method of providing georeferenced information regarding a location of capture of an image is provided. The method includes receiving a first image at an image-based georeferencing system, the first image comprising digital image information and identifying a cataloged second image that correlates to the first image. The method further includes automatically determining reference features common to both the second image and the first image, accessing geographic location information related to the common reference features, utilizing the geographic location information related to the common features to determine a georeferenced location of capture of the first image and providing the georeferenced location of capture for access by a user of the image-based georeferencing system.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: March 24, 2015
    Assignee: Trimble Navigation Limited
    Inventors: James M. Janky, Michael V. McCusker, Harold L. Longaker, Peter G. France
  • Publication number: 20150078667
    Abstract: A method for providing object information for a scene in a wearable computer is disclosed. In this method, an image of the scene is captured. Further, the method includes determining a current location of the wearable computer and a view direction of an image sensor of the wearable computer and extracting at least one feature from the image indicative of at least one object. Based on the current location, the view direction, and the at least one feature, information on the at least one object is determined. Then, the determined information is output.
    Type: Application
    Filed: September 17, 2013
    Publication date: March 19, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Sungrack Yun, Kyu Woong Hwang, Jun-Cheol Cho, Taesu Kim, Minho Jin, Yongwoo Cho, Kang Kim
  • Patent number: 8983201
    Abstract: The techniques discussed herein discover three-dimensional (3-D) visual phrases for an object based on a 3-D model of the object. The techniques then describe the 3-D visual phrases. Once described, the techniques use the 3-D visual phrases to detect the object in an image (e.g., object recognition).
    Type: Grant
    Filed: July 30, 2012
    Date of Patent: March 17, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rui Cai, Zhiwei Li, Lei Zhang, Qiang Hao
  • Patent number: 8983142
    Abstract: A set of silhouette attributes is determined for a class of objects, where each silhouette attribute corresponds to a discriminative feature that is not associated with any other silhouette attribute in the set. An image content item depicting an object of the class is analyzed. A discriminative feature is identified for the object. The silhouette attribute associated with the determined discriminative feature is associated with the object as provided in the image content item.
    Type: Grant
    Filed: November 10, 2011
    Date of Patent: March 17, 2015
    Assignee: Google Inc.
    Inventors: Wei Zhang, Emilio Rodriguez Antunez, III, Salih Burak Gokturk, Baris Sumengen
  • Patent number: 8983145
    Abstract: A method for authenticating the identity of a handset user is provided. The method includes: obtaining a login account and a password from the user; judging whether the login account and the password are correct; if the login account or the password is incorrect, refusing the user access to an operating system of the handset; if the login account and the password are correct, sending the login account and the password to a cloud server, wherein the login account and the password correspond to a face sample image library of the user stored on the cloud server; acquiring an input face image of the user; sending the input face image to the cloud server; and authenticating, by the cloud server, the identity of the user according to the login account, the password and the input face image.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: March 17, 2015
    Assignee: Shenzhen Junshenghuichuang Technologies Co., Ltd
    Inventors: Xiaojun Liu, Dongxuan Gao
  • Patent number: 8983157
    Abstract: System and method are provided for determining hair tail positions. An image containing hair which is received from an image acquisition device is processed to find a coarse hair tail position. The coarse hair tail position is refined through further processing. The refined hair tail position may be used for accurate positioning, for example, of hair transplantation tools in various hair transplantation applications.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: March 17, 2015
    Assignee: Restoration Robotics, Inc.
    Inventor: Hui Zhang
  • Patent number: 8983202
    Abstract: Systems and methods of smile detection are disclosed. An exemplary method comprises generating a search map (400) for a subset of an image (300). The method also comprises identifying a plurality of candidates (400a-f) representing mouth corners. The method also comprises generating parabolas (410) between each pair of candidates representing mouth corners. The method also comprises analyzing contour of at least one of the parabolas to determine whether the mouth curves substantially upward to form a smile or curves substantially downward to form a frown.
    Type: Grant
    Filed: September 13, 2010
    Date of Patent: March 17, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Dan L. Dalton, Daniel Bloom, David Staudacher
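    A tiny sketch of the parabola test at the heart of this abstract: fit y = ax² + bx + c through the two mouth-corner candidates and a mid-mouth point, then read the sign of the curvature to decide smile versus frown. Image coordinates are assumed (y grows downward), and the point names are illustrative.
    ```python
    import numpy as np

    def classify_mouth(left_corner, right_corner, mid_mouth):
        """Fit a parabola through the mouth corners and a point between them.

        In image coordinates y increases downward, so a smiling mouth (corners pulled
        up, middle below the corners) yields a negative quadratic coefficient.
        """
        pts = np.array([left_corner, mid_mouth, right_corner], dtype=float)
        a, b, c = np.polyfit(pts[:, 0], pts[:, 1], deg=2)   # y = a*x**2 + b*x + c
        if a < -1e-3:
            return "smile"    # mouth curves substantially upward
        if a > 1e-3:
            return "frown"    # mouth curves substantially downward
        return "neutral"

    print(classify_mouth((100, 200), (160, 200), (130, 212)))   # -> "smile"
    ```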
  • Patent number: 8983235
    Abstract: Disclosed is a pupil detection device capable of improving the pupil detection accuracy even if a detection target image is a low-resolution image. In a pupil detection device (100), an eye area actual size calculation unit (102) acquires an actual scale value of an eye area, a pupil state prediction unit (103) calculates an actual scale prediction value of a pupil diameter, a necessary resolution estimation unit (105) calculates a target value of resolution on the basis of the calculated actual scale prediction value, an eye area image normalization unit (107) calculates a scale-up/scale-down factor on the basis of the calculated target value of resolution and the actual scale value of the eye area, and normalizes the image of the eye area on the basis of the calculated scale-up/scale-down factor, and a pupil detection unit (108) detects a pupil image from the normalized eye area image.
    Type: Grant
    Filed: September 22, 2011
    Date of Patent: March 17, 2015
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Sotaro Tsukizawa, Kenji Oka
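    The resolution-normalization step in this abstract reduces to a ratio: from the predicted pupil diameter (an actual scale value in millimetres) and the number of pixels needed for reliable pupil detection, derive a target resolution in pixels per millimetre, compare it with the eye area's current resolution, and scale the eye area image by that factor. A worked sketch with illustrative numbers:
    ```python
    def eye_scale_factor(eye_width_px, eye_width_mm, predicted_pupil_mm, min_pupil_px=20):
        """Scale-up/scale-down factor that gives the pupil at least min_pupil_px pixels."""
        current_px_per_mm = eye_width_px / eye_width_mm          # actual scale of the eye area
        target_px_per_mm = min_pupil_px / predicted_pupil_mm     # target value of resolution
        return target_px_per_mm / current_px_per_mm

    # Eye area: 60 px wide for a 30 mm region (2 px/mm); predicted pupil diameter: 4 mm.
    # Detection needs 5 px/mm, so the eye area image is scaled up by a factor of 2.5.
    print(eye_scale_factor(eye_width_px=60, eye_width_mm=30, predicted_pupil_mm=4))
    ```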
  • Publication number: 20150071547
    Abstract: Systems and methods for improving automatic selection of keeper images from a commonly captured set of images are described. A combination of image type identification and image quality metrics may be used to identify one or more images in the set as keeper images. Image type identification may be used to categorize the captured images into, for example, three or more categories. The categories may include portrait, action, or “other.” Depending on the category identified, the images may be analyzed differently to identify keeper images. For portrait images, an operation may be used to identify the best set of faces. For action images, the set may be divided into sections such that keeper images selected from each section tell the story of the action. For the “other” category, the images may be analyzed such that those having higher quality metrics for an identified region of interest are selected.
    Type: Application
    Filed: September 9, 2013
    Publication date: March 12, 2015
    Applicant: Apple Inc.
    Inventors: Brett Keating, Vincent Wong, Todd Sachs, Claus Molgaard, Michael Rousson, Elliott Harris, Justin Titi, Karl Hsu, Jeff Brasket, Marco Zuliani
  • Publication number: 20150071548
    Abstract: An image pickup device transmits to a server a transmission sample including a detection image detected by a first detection section, from a transmitting/receiving section under the control of a transmission sample control section. The server performs, in a second detection section, detection processing that requires more resources than those of the first detection section on the detection image transmitted from the image pickup device, and determines whether or not the detection image in question is spurious, based on a second detection score which is thereby obtained. A transmission frequency deciding section generates transmission frequency control information such as to raise the transmission frequency of an image pickup device that has a high frequency of spurious detection; a transmitting/receiving section transmits the transmission frequency control information to the image pickup device.
    Type: Application
    Filed: April 18, 2013
    Publication date: March 12, 2015
    Applicant: PANASONIC CORPORATION
    Inventors: Hirofumi Fujii, Sumio Yokomitsu, Takeshi Watanabe, Masataka Sugiura, Michio Miwa
  • Publication number: 20150071550
    Abstract: An apparatus for detecting an afterimage candidate region includes: a comparison unit which compares gradation data of an n-th frame with integrated gradation data of an (n−1)-th frame and generates integrated gradation data of the n-th frame, where n is a natural number; a memory which provides the integrated gradation data of the (n−1)-th frame to the comparison unit and stores the integrated gradation data of the n-th frame; and an afterimage candidate region detection unit which detects an afterimage candidate region based on the integrated gradation data of the n-th frame, where each of the integrated gradation data of the n-th frame and the integrated gradation data of the (n−1)-th frame comprises a comparison region and a gradation region.
    Type: Application
    Filed: March 11, 2014
    Publication date: March 12, 2015
    Applicant: Samsung Display Co., Ltd.
    Inventors: Yong Jun JANG, Nam Gon CHOI, Joon Chul GOH, Gi Geun KIM, Geun Jeong PARK, Cheol Woo PARK, Yun Ki BAEK, Jeong Hun SO, Dong Gyu LEE
  • Patent number: 8977049
    Abstract: A method for estimating signal-dependent noise includes defining a plurality of pixel groups from among the image pixels. The method further includes computing, for one or more signal levels of the image, a difference value between two pixel groups, whereby a respective one or more difference values are computed collectively. The method determines an estimated noise response of the image as a function of the one or more computed difference values.
    Type: Grant
    Filed: January 8, 2010
    Date of Patent: March 10, 2015
    Assignee: NVIDIA Corporation
    Inventors: Timo Aila, Samuli Laine
  • Patent number: 8977076
    Abstract: An input image (7) having a first pixel resolution is acquired from an image capture system (2). A respective characterization of each of at least one visual quality feature of the input image (7) is determined. An output thumbnail image (9) is produced from the input image (7). The output thumbnail image (9) reflects the respective characterization of each visual quality feature. The output thumbnail image (9) has a second pixel resolution lower than the first pixel resolution. The output thumbnail image (9) is output in association with operation of the image capture system (2).
    Type: Grant
    Filed: March 20, 2008
    Date of Patent: March 10, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ramin Samadani, Daniel R. Tretter, Keith Moore
  • Patent number: 8977005
    Abstract: Provided is a carried item region extraction device for accurately extracting a carried item region from an image. This carried item region extraction device has: a string region processing unit for extracting a string region including a string of a carried item from image information; and a carried item region processing unit for extracting a carried item region including a carried item from the image information on the basis of the string region.
    Type: Grant
    Filed: September 15, 2011
    Date of Patent: March 10, 2015
    Assignee: NEC Corporation
    Inventor: Yasufumi Hirakawa
  • Publication number: 20150063704
    Abstract: Information corresponding to a face image preferred by a user as a whole is presented while considering a face element preferred by the user. An information processing apparatus identifies a priority of each of a plurality of elements included in a face in a reference face image. The priority is according to specification by the user. The information processing apparatus extracts, from among a plurality of face images, face images whose similarities of an area including the plurality of elements to the reference face image are greater than or equal to a first threshold value. The information processing apparatus decides, on the basis of the similarities of each element between the reference face image and the extracted face images and the identified priority of each element, the presentation order of presentation information presented as search results corresponding to the extracted face images.
    Type: Application
    Filed: February 28, 2013
    Publication date: March 5, 2015
    Applicant: Rakuten, Inc.
    Inventor: Hideaki Tobinai
  • Publication number: 20150063705
    Abstract: A method for content aware multimedia resizing includes selecting at least one Region Of Interest (ROI) in an input multimedia, resizing the at least one ROI, and generating an output multimedia with the resized at least one ROI. An electronic device for content aware multimedia resizing includes a processor configured to select ROI in an input multimedia, resize the at least one ROI, and generate an output multimedia with the resized at least one ROI. A computer-readable medium storing a program for content aware multimedia resizing, the program which when executed by a processor causes the processor to perform operations including selecting ROI in an input multimedia, resizing the at least one ROI, and generating an output multimedia with the resized at least one ROI.
    Type: Application
    Filed: August 29, 2014
    Publication date: March 5, 2015
    Inventors: Nandan Hosaagrahara Shankaramurthy, Sanjay Narasimha Murthy, Pavan Sudheendra, Rajaram Hanumantacharya Naganur
  • Patent number: 8971636
    Abstract: Disclosed is an image creating device including a first obtaining unit which obtains an image including a face, a first extraction unit which extracts a face component image relating to main components of the face in the image and a direction of the face, a second obtaining unit which obtains a face contour image associated with the face in the image, and a second extraction unit which extracts a direction of a face contour in the face contour image. The image creating device further includes a converting unit which converts at least one of the face component image and the face contour image based on both the direction of the face and the direction of the face contour, and a creating unit which creates a portrait image by using at least one of the face component image and the face contour image converted by the converting unit.
    Type: Grant
    Filed: June 20, 2013
    Date of Patent: March 3, 2015
    Assignee: Casio Computer Co., Ltd.
    Inventors: Keisuke Shimada, Shigeru Kafuku, Hirokiyo Kasahara
  • Patent number: 8971635
    Abstract: Disclosed herein is an image processing apparatus including an upper body feature data storage unit (110) which stores upper body directional feature data, which indicates the upper body of a person and indicates that the upper body is facing a specific direction, for each of a plurality of directions in conjunction with directional data indicative of the direction of the upper body directional feature data; and an upper body detection unit (140) which extracts upper body image data indicative of the upper body of the person from image data by reading the plurality of upper body directional feature data stored in the upper body feature data storage unit (110) in conjunction with the directional data and using each of the plurality of upper body directional feature data.
    Type: Grant
    Filed: January 20, 2011
    Date of Patent: March 3, 2015
    Assignee: NEC Solution Innovators, Ltd.
    Inventors: Takayuki Kodaira, Satoshi Imaizumi
  • Publication number: 20150055872
    Abstract: An information processing apparatus includes a detector that detects a symbol image representing a symbol from content including an image, a determining unit that determines, on the basis of detail of image processing that changes a display form of the content, a display form of the symbol image in the content subjected to the image processing, and an addition indicating unit that indicates addition of the symbol image in the display form determined by the determining unit to the content subjected to the image processing.
    Type: Application
    Filed: June 4, 2014
    Publication date: February 26, 2015
    Applicant: FUJI XEROX CO., LTD.
    Inventor: Kohshiro INOMATA
  • Publication number: 20150055871
    Abstract: A computer implemented method and apparatus for analyzing image content and associating behaviors to the analyzed image content. The method comprises accessing a digital image; determining one or more patterns in the digital image; associating, based on the one or more determined patterns, a set of pre-defined behaviors with each determined pattern; and storing interactions with the digital image, wherein the interactions are associated with the behaviors.
    Type: Application
    Filed: August 26, 2013
    Publication date: February 26, 2015
    Applicant: Adobe Systems Incorporated
    Inventor: Pillai Subbiah Muthuswamy
  • Patent number: 8960906
    Abstract: An image processing apparatus includes an identification unit configured to identify periodicity of a fundus image obtained by capturing an image of a fundus of an eye, and an information acquisition unit configured to acquire information indicating an imaging state of photoreceptor cells in the fundus image based on the periodicity.
    Type: Grant
    Filed: September 6, 2012
    Date of Patent: February 24, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Keiko Yonezawa, Kazuhide Miyata
  • Patent number: 8965130
    Abstract: Methods and apparatus for image matching using local features, in particular a method and apparatus for flexible interest point computation. The method involves producing multiple octaves of a digital image, wherein each octave of said multiple scale octaves comprises multiple layers; initiating a process comprising detection and description of interest points, wherein said process is programmed to progress layer-by-layer over said multiple layers of each of said multiple octaves, and to continue to a next octave of said multiple octaves upon completion of all layers of a current octave of said multiple octaves; upon the detection and the description of each interest point of said interest points during said process, recording an indication associated with said interest point in a memory, such that said memory accumulates indications during said process; and upon interruption to said process, returning a result being based at least on said indications.
    Type: Grant
    Filed: November 9, 2011
    Date of Patent: February 24, 2015
    Assignee: Bar-Ilan University
    Inventors: Gal Kaminka, Eran Sadeh-Or
  • Patent number: 8963960
    Abstract: A system and method for performing content aware cropping/expansion may be applied to resize an image or to resize a selected object therein. An image object may be selected using an approximate bounding box of the object. The system may receive input indicating a lowest priority edge or corner of the image or object to be resized (e.g., using a drag operation). Respective energy values for some pixels of the image and/or of the object to be resized may be weighted based on their distance from the lowest priority edge/corner and/or on a cropping or expansion graph, and relative costs may be determined for seams of the image dependent on the energy values. Low cost seams may be removed or replicated in different portions of the image and/or the object to modify the image. The selected object may be resized using interpolated scaling and patched over the modified image.
    Type: Grant
    Filed: May 20, 2009
    Date of Patent: February 24, 2015
    Assignee: Adobe Systems Incorporated
    Inventor: Anant Gilra
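    A condensed sketch of the seam-removal machinery this kind of content-aware resizing builds on: compute a per-pixel energy map, optionally weight it so that pixels near a lowest-priority edge are cheapest to remove, find the minimum-cost vertical seam by dynamic programming, and drop it. This is generic seam carving with an ad-hoc edge weighting, not the patent's specific weighting or object-patching scheme.
    ```python
    import numpy as np

    def energy_map(gray, low_priority_edge=None):
        """Gradient-magnitude energy, optionally biased so one edge is cheapest to remove."""
        gy, gx = np.gradient(gray.astype(float))
        energy = np.abs(gx) + np.abs(gy)
        if low_priority_edge == "right":                  # pixels near the right edge cost less
            energy *= np.linspace(1.0, 0.1, gray.shape[1])[None, :]
        elif low_priority_edge == "left":
            energy *= np.linspace(0.1, 1.0, gray.shape[1])[None, :]
        return energy

    def min_vertical_seam(energy):
        """Dynamic programming: cheapest top-to-bottom seam through the energy map."""
        h, w = energy.shape
        cost = energy.copy()
        for y in range(1, h):
            left = np.r_[np.inf, cost[y - 1, :-1]]
            right = np.r_[cost[y - 1, 1:], np.inf]
            cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
        seam = np.zeros(h, dtype=int)
        seam[-1] = int(np.argmin(cost[-1]))
        for y in range(h - 2, -1, -1):                    # backtrack the seam upward
            x = seam[y + 1]
            lo, hi = max(x - 1, 0), min(x + 2, w)
            seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
        return seam

    def remove_seam(gray, seam):
        """Drop one pixel per row along the seam, shrinking the image by one column."""
        h, w = gray.shape
        keep = np.ones((h, w), dtype=bool)
        keep[np.arange(h), seam] = False
        return gray[keep].reshape(h, w - 1)

    img = np.random.rand(50, 80)
    img = remove_seam(img, min_vertical_seam(energy_map(img, low_priority_edge="right")))
    ```
    Expansion works the same way except that low-cost seams are replicated instead of removed, and, as the abstract notes, a selected object can afterwards be rescaled by ordinary interpolated scaling and patched over the modified image.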
  • Patent number: 8963951
    Abstract: To allow a viewer to easily understand the details of a moving image shot by an image capturing apparatus in the case where the moving image is browsed. A camerawork detecting unit 120 detects the amount of movement of an image capturing apparatus at the time of shooting a moving image input from a moving-image input unit 110, and, on the basis of the amount of movement of the image capturing apparatus, calculates affine transformation parameters for transforming an image on a frame-by-frame basis. An image transforming unit 160 performs an affine transformation of at least one of the captured image and a history image held in an image memory 170, on the basis of the calculated affine transformation parameters. An image combining unit 180 combines, on a frame-by-frame basis, the captured image and the history image, at least one of which has been transformed, and causes the image memory 170 to hold a composite image.
    Type: Grant
    Filed: August 22, 2008
    Date of Patent: February 24, 2015
    Assignee: Sony Corporation
    Inventor: Shingo Tsurumi
  • Patent number: 8965133
    Abstract: An image processing apparatus obtains highly reliable local feature points and local feature amounts. With the number of local feature points as a factor of the image's local feature amount description size, the reproducibility of each local feature point and local feature amount is estimated, and the description is filled up to the description size, starting with the local feature points and local feature amounts of highest reproducibility. This makes it possible to bound the local feature amount description size while ensuring search accuracy.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: February 24, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hirotaka Shiiyama
  • Patent number: 8965140
    Abstract: A method and apparatus for encoding a frame from a mixed content image sequence. In one embodiment, the method, executed under the control of a processor configured with computer executable instructions, comprises (i) generating, by an encoding processor, an image type mask that divides the frame into an unchanged portion, an object portion and a picture portion; (ii) producing lossless encoded content, by the encoding processor, from the object portion and the image type mask; (iii) generating, by the encoding processor, a filtered facsimile from the frame, the filtered facsimile generated by retaining the picture portion and filling the unchanged portion and the object portion with neutral image data; and (iv) producing, by the encoding processor, lossy encoded content from the filtered facsimile.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: February 24, 2015
    Assignee: Teradici Corporation
    Inventors: Zhan Xu, David Victor Hobbs
  • Publication number: 20150049952
    Abstract: Systems and methods for measuring facial characteristics of patients. In various embodiments, the system uses a geometric pattern to determine a reference scale for an image that includes the geometric pattern and at least a portion of the patient's face. The system may determine the reference scale based at least in part on a known measurement within the geometric pattern. The known measurement may include a distance between two geometric attributes of the geometric pattern. The system may be further configured to correct for errors caused by an orientation of the geometric pattern within the image and/or distortion of the geometric pattern within the image. The geometric pattern may be disposed on a reference device that may be configured to enable a user to attach the reference device to the head of the patient or a pair of eyewear worn by the patient.
    Type: Application
    Filed: August 14, 2013
    Publication date: February 19, 2015
    Applicant: VSP LABS, INC.
    Inventors: SAMEER CHOLAYIL, BRIAN HUNG DOAN, PHUONG THI XUAN PHAM
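    The reference-scale idea in this abstract comes down to a ratio plus an orientation correction: the known physical distance between two attributes of the geometric pattern, divided by their apparent pixel distance in the image, gives millimetres per pixel, which then converts any facial measurement made in pixels. A minimal sketch with illustrative numbers; the simple cosine tilt correction is an assumption standing in for the error correction described.
    ```python
    import math

    def mm_per_pixel(pattern_pt_a, pattern_pt_b, known_distance_mm, tilt_deg=0.0):
        """Reference scale from two pattern attributes a known physical distance apart.

        tilt_deg crudely corrects for the pattern being rotated out of the image
        plane, which foreshortens its apparent pixel length.
        """
        dx = pattern_pt_b[0] - pattern_pt_a[0]
        dy = pattern_pt_b[1] - pattern_pt_a[1]
        pixel_dist = math.hypot(dx, dy) / math.cos(math.radians(tilt_deg))
        return known_distance_mm / pixel_dist

    def measure_feature_mm(pt_a, pt_b, scale_mm_per_px):
        """Convert a pixel-space facial measurement (e.g. pupillary distance) to millimetres."""
        return math.hypot(pt_b[0] - pt_a[0], pt_b[1] - pt_a[1]) * scale_mm_per_px

    # Pattern marks 50 mm apart appear 200 px apart, giving 0.25 mm/px.
    scale = mm_per_pixel((100, 120), (300, 120), known_distance_mm=50)
    print(measure_feature_mm((240, 210), (488, 210), scale))   # ~62 mm pupillary distance
    ```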
  • Publication number: 20150049195
    Abstract: An image processing unit includes a first feature amount storage in which a certain feature amount of a candidate of an object to be recognized in an image is stored, the image being input in a unit of frame from an imager, an image processor including a second feature amount storage in which the certain feature amount is stored when the candidate of the object corresponds to the certain feature amount stored in the first feature amount storage, and a target object detector to detect, on the basis of the second feature amount storage, the candidate of the object as a target object from the image of frames subsequent to a frame including the certain feature amount stored in the second feature amount storage, and when the target object is not detected, to detect the target object on the basis of the first feature amount storage.
    Type: Application
    Filed: August 5, 2014
    Publication date: February 19, 2015
    Inventors: Tomoko ISHIGAKI, Soichiro YOKOTA, Xue LI
  • Patent number: 8957907
    Abstract: A surface definition module of a hair/fur pipeline may be used to generate a shape defining a surface and an associated volume. A control hair module may be used to fill the volume with control hairs and an interpolation module may be used to interpolate final hair strands from the control hairs.
    Type: Grant
    Filed: May 11, 2007
    Date of Patent: February 17, 2015
    Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
    Inventors: Armin Walter Bruderlin, Francois Chardavoine, Clint Chun, Gustav Melich
  • Patent number: 8958625
    Abstract: An image analysis embodiment comprises generating a bulge mask from a digital image, the bulge mask comprising potential convergence hubs for spiculated anomalies, detecting ridges in the digital image to generate a detected ridges map, projecting the detected ridges map onto a set of direction maps having different directional vectors to generate a set of ridge direction projection maps, determining wedge features for the potential convergence hubs from the set of ridge direction projection maps, selecting ridge convergence hubs from the potential convergence hubs having strongest wedge features, extracting classification features for each of the selected ridge convergence hubs, and classifying the selected ridge convergence hubs based on the extracted classification features.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: February 17, 2015
    Assignee: Vucomp, Inc.
    Inventors: Jeffrey C. Wehnes, David S. Harding
  • Patent number: 8958634
    Abstract: The image acquisition unit 41 acquires an image including an object. By comparing information related to the shape of a relevant natural object that is included as the object in the target image acquired by the image acquisition unit 41 with information related to the respective shapes of a plurality of types prepared in advance, the primary selection unit 42 selects at least one flower type for the natural object in question. The secondary selection unit 43 then selects data of a representative image from among data of a plurality of images of different color, of the same flower type, prepared in advance, for each of the at least one flower type selected by the primary selection unit 42, based on information related to the color of the relevant natural object included as the object in the image acquired by the image acquisition unit 41.
    Type: Grant
    Filed: March 20, 2013
    Date of Patent: February 17, 2015
    Assignee: Casio Computer Co., Ltd.
    Inventors: Kouichi Nakagome, Shigeru Kafuku, Kazuhisa Matsunaga, Michihiro Nihei