Local Or Regional Features Patents (Class 382/195)
  • Patent number: 9619734
    Abstract: Land classification based on analysis of image data. Feature extraction techniques may be used to generate a feature stack corresponding to the image data to be classified. A user may identify training data from the image data, from which a classification model may be generated using one or more machine learning techniques applied to one or more features of the image. In this regard, the classification model may in turn be used to classify pixels from the image data other than the training data. Additionally, quantifiable metrics regarding the accuracy and/or precision of the models may be provided for model evaluation and/or comparison. Further, the generation of models may be performed in a distributed system such that model creation and/or application may be distributed in a multi-user environment for collaborative and/or iterative approaches.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: April 11, 2017
    Assignee: DigitalGlobe, Inc.
    Inventors: Giovanni B. Marchisio, Carsten Tusk, Krzysztof Koperski, Mark D. Tabb, Jeffrey D. Shafer
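    A minimal sketch of the workflow this abstract describes, assuming a scikit-learn-style classifier; the feature choices, array shapes, and function names are illustrative, not the patented method.
      # Build a per-pixel feature stack, train on user-labeled pixels, classify the rest.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def build_feature_stack(image):
          """Stack simple per-pixel features (raw bands plus a local mean) into (H, W, F)."""
          bands = [image[..., i].astype(float) for i in range(image.shape[-1])]
          local_mean = image.mean(axis=-1, keepdims=True)
          return np.dstack(bands + [local_mean])

      def train_and_classify(image, train_mask, train_labels):
          """train_mask marks user-selected training pixels; train_labels gives their classes."""
          stack = build_feature_stack(image)
          X_train = stack[train_mask]                  # (N_train, F)
          y_train = train_labels[train_mask]
          model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
          classified = model.predict(stack.reshape(-1, stack.shape[-1]))
          return classified.reshape(image.shape[:2])

      if __name__ == "__main__":
          img = np.random.rand(64, 64, 4)                      # fake 4-band image
          mask = np.zeros((64, 64), dtype=bool); mask[:8] = True
          labels = np.zeros((64, 64), dtype=int); labels[:4] = 1
          print(train_and_classify(img, mask, labels).shape)   # (64, 64)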
  • Patent number: 9613457
    Abstract: Provided are a multi-primitive fitting method that includes acquiring point cloud data by collecting data for each input point, obtaining a segment for the points using the point cloud data, and performing primitive fitting using the point cloud data and the data of the points included in the segment, as well as a multi-primitive fitting device that performs the method.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: April 4, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young Mi Cha, Chang Woo Chu, Jae Hean Kim
  • Patent number: 9600892
    Abstract: A non-parametric method of, and system for, dimensioning an object of arbitrary shape, captures a three-dimensional (3D) point cloud of data points over a field of view containing the object and a base surface on which the object is positioned, detects a base plane indicative of the base surface from the point cloud, extracts the data points of the object from the point cloud, processes the extracted data points of the object to obtain a convex hull, and fits a bounding box of minimum volume to enclose the convex hull. The bounding box has a pair of mutually orthogonal planar faces, and the fitting is performed by orienting one of the faces to be generally perpendicular to the base plane, and by simultaneously orienting the other of the faces to be generally parallel to the base plane.
    Type: Grant
    Filed: November 6, 2014
    Date of Patent: March 21, 2017
    Assignee: Symbol Technologies, LLC
    Inventors: Ankur R Patel, Kevin J O'Connell, Cuneyt M Taskiran, Jay J Williams
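    An illustrative sketch of the box-fitting constraint described above, assuming the object points have already been extracted and rotated so the detected base plane is z = 0; with side faces held perpendicular to the base, only a rotation about the vertical axis needs to be searched. The convex-hull step is skipped here because the axis-aligned extents per rotation are the same when computed directly from the points.
      import numpy as np

      def fit_bounding_box(points, angle_steps=180):
          """points: (N, 3) array of object points above the base plane z = 0."""
          best = None
          xy, z = points[:, :2], points[:, 2]
          height = z.max()                          # box bottom sits on the base plane
          for theta in np.linspace(0.0, np.pi / 2, angle_steps):
              c, s = np.cos(theta), np.sin(theta)
              rot = xy @ np.array([[c, -s], [s, c]])
              extent = rot.max(axis=0) - rot.min(axis=0)
              volume = extent[0] * extent[1] * height
              if best is None or volume < best[0]:
                  best = (volume, theta, extent)
          volume, theta, (length, width) = best
          return {"volume": volume, "yaw": theta, "dims": (length, width, height)}

      if __name__ == "__main__":
          pts = np.random.rand(500, 3) * [2.0, 1.0, 0.5]    # fake box-shaped point cloud
          print(fit_bounding_box(pts))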
  • Patent number: 9589203
    Abstract: A processor implemented system and method for identification of an activity performed by a subject based on sensor data analysis is described herein. In an implementation, the method includes capturing movements of the subject in real-time using a sensing device. At least one action associated with the subject is ascertained from a predefined set of actions. From the predefined set of actions, a plurality of actions can collectively form at least one activity. The ascertaining is based on captured movements of the subject and at least one predefined action rule. The at least one action rule is based on context-free grammar (CFG) and is indicative of a sequence of actions for occurrence of the at least one activity. Further, a current activity performed by the subject is dynamically determined, based on the at least one action and an immediately preceding activity, using a non-deterministic push-down automata (NPDA) state machine.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: March 7, 2017
    Assignee: TATA Consultancy Services Limited
    Inventors: Dipti Prasad Mukherjee, Tamal Batabyal, Tanushyam Chattopadhyay
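    A toy sketch of the action-sequence idea only: each activity is defined by an ordered rule of atomic actions (a stand-in for the CFG rules), and a stack of expected actions plays the role of the push-down state. The action and activity names are hypothetical; this is not the patented NPDA construction.
      ACTIVITY_RULES = {
          "drink": ["reach", "grasp", "raise", "lower"],
          "wave":  ["raise", "oscillate", "lower"],
      }

      def recognize(actions):
          """Return activities whose action sequence appears, in order, in the observed stream."""
          recognized = []
          for activity, rule in ACTIVITY_RULES.items():
              stack = list(reversed(rule))          # next expected action sits on top of the stack
              for action in actions:
                  if stack and action == stack[-1]:
                      stack.pop()
              if not stack:                         # every expected action was consumed in order
                  recognized.append(activity)
          return recognized

      if __name__ == "__main__":
          observed = ["reach", "grasp", "raise", "oscillate", "lower"]
          print(recognize(observed))                # ['drink', 'wave']: both sequences occur in order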
  • Patent number: 9575004
    Abstract: Systems and methods for inspecting a surface are disclosed. A source, detector, a base, a controller, and a processing device are used to collect image data related to the surface and information relating to the location of the image data on the surface. The image data and information relating to location are correlated and stored in a processing device to create a map of surface condition.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: February 21, 2017
    Assignee: THE BOEING COMPANY
    Inventors: Gary E. Georgeson, Scott W. Lea, James J. Troy
  • Patent number: 9547866
    Abstract: Methods and apparatus to estimate demography based on aerial images are disclosed. An example method includes analyzing a first aerial image of a first geographic area to detect a first plurality of objects, analyzing a second aerial image of a second geographic area to detect a second plurality of objects, associating first demographic information to the second plurality of objects, the first demographic information obtained by a sampling of the second geographic area, and comparing the second plurality of objects to the first plurality of objects to estimate a demographic characteristic of the first geographic area based on the comparison.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: January 17, 2017
    Assignee: THE NIELSEN COMPANY (US), LLC
    Inventors: Alejandro Terrazas, Michael Himmelfarb, David Miller, Paul Donato
  • Patent number: 9544565
    Abstract: Data for making calculations from a three dimensional observation is derived from a recording device. The data is combined with information developed by a three dimensional remote sensing platform to create measurement points in space for an object. Descriptive information from at least one object model is used to direct at least one of a resolution resource, results gained from group measurements and an object-deployed resolution asset. Order is thereafter found in two dimensional to three dimensional observations in a subject area.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: January 10, 2017
    Assignee: Vy Corporation
    Inventor: Thomas Martel
  • Patent number: 9524432
    Abstract: The subject technology provides embodiments for performing fast corner detection in a given image for augmented reality applications. Embodiments disclose a high-speed test that examines intensities of pairs of pixels around a candidate center pixel. In one example, the examined pairs are comprised of pixels at diametrically opposite ends of a circle formed around the candidate center pixel. Further, a pyramid of images including four rings of surrounding pixels is generated. An orientation of the pixels from the four rings is determined and a vector of discrete values of the pixels is provided. Next, a forest of trees is generated for the vector of discrete values corresponding to a descriptor for a first image. For a second image including a set of descriptors, approximate nearest neighbors are determined from the forest of trees, representing the closest matching descriptors from the first image.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: December 20, 2016
    Assignee: A9.com, Inc.
    Inventors: William Brendel, Nityananda Jayadevaprakash, David Creighton Mott, Jie Feng
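    A minimal sketch of the paired-pixel pre-test described above: for a candidate center, compare the intensities of diametrically opposite points on a surrounding circle against the center. The radius-3 offsets, the threshold, and the acceptance rule are illustrative assumptions, and the descriptor/forest stages are omitted.
      import numpy as np

      # Eight points on a radius-3 circle, listed so that OFFSETS[i] and OFFSETS[i + 4]
      # are diametrically opposite.
      OFFSETS = [(0, 3), (2, 2), (3, 0), (2, -2), (0, -3), (-2, -2), (-3, 0), (-2, 2)]

      def is_corner_candidate(image, y, x, threshold=20):
          """Keep the pixel only if enough opposite pairs are both much brighter or much darker."""
          center = float(image[y, x])
          strong_pairs = 0
          for (dy1, dx1), (dy2, dx2) in zip(OFFSETS[:4], OFFSETS[4:]):
              a = float(image[y + dy1, x + dx1]) - center
              b = float(image[y + dy2, x + dx2]) - center
              if (a > threshold and b > threshold) or (a < -threshold and b < -threshold):
                  strong_pairs += 1
          return strong_pairs >= 3

      if __name__ == "__main__":
          img = np.full((9, 9), 100, dtype=np.uint8)
          img[4, 4] = 10                               # dark pixel on a bright background
          print(is_corner_candidate(img, 4, 4))        # True: every opposite pair is far brighter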
  • Patent number: 9523772
    Abstract: In scenarios involving the capturing of an environment, it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. However, techniques involving the evaluation of visual images to identify and remove objects may be imprecise, e.g., failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. However, such capturing scenarios often involve capturing a lidar point cloud, which may identify the presence and shapes of objects with higher precision. The lidar data may also enable a movement classification of respective objects differentiating moving and stationary objects, which may facilitate an accurate removal of the objects from the rendering of the environment (e.g., identifying the object in a first image may guide the identification of the object in sequentially adjacent images).
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: December 20, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec
  • Patent number: 9478063
    Abstract: Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: October 25, 2016
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, Yang Bai
  • Patent number: 9477885
    Abstract: An image processing apparatus according to one embodiment includes a first extraction unit, a second extraction unit, and a specifying unit. The first extraction unit performs stroke width transform on an image and thereby extracts a SWT region from the image. The second extraction unit performs clustering based on pixel values on the image and thereby extracts a single-color region from the image. The specifying unit specifies a pixel group included in a candidate text region based at least on the single-color region when a ratio of the number of pixels in an overlap part between the SWT region and the single-color region to the number of pixels in the single-color region is equal to or more than a first reference value, or more than the first reference value.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: October 25, 2016
    Assignee: Rakuten, Inc.
    Inventor: Naoki Chiba
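    A sketch of just the overlap decision in this abstract, assuming the stroke-width-transform (SWT) region and the single-color region are already available as boolean masks; the 0.5 reference value is an assumed parameter.
      import numpy as np

      def candidate_text_pixels(swt_region, color_region, first_reference=0.5):
          """Both inputs are boolean masks of identical shape; returns the selected pixel group."""
          color_count = int(color_region.sum())
          if color_count == 0:
              return np.zeros_like(color_region, dtype=bool)
          overlap = int((swt_region & color_region).sum())
          ratio = overlap / color_count
          # When the SWT region covers enough of the single-color region, use the
          # single-color region as the pixel group for the candidate text region.
          return color_region if ratio >= first_reference else np.zeros_like(color_region, dtype=bool)

      if __name__ == "__main__":
          swt = np.zeros((4, 4), dtype=bool);   swt[1:3, 1:4] = True
          color = np.zeros((4, 4), dtype=bool); color[1:3, 0:3] = True
          print(candidate_text_pixels(swt, color).sum())    # 6: overlap ratio 4/6 >= 0.5, keep the region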
  • Patent number: 9471832
    Abstract: Automated analysis of video data for determination of human behavior includes segmenting a video stream into a plurality of discrete individual frame image primitives which are combined into a visual event that may encompass an activity of concern as a function of a hypothesis. The visual event is optimized by setting a binary variable to true or false as a function of one or more constraints. The visual event is processed in view of associated non-video transaction data and the binary variable by associating the visual event with a logged transaction if associable, issuing an alert if the binary variable is true and the visual event is not associable with the logged transaction, and dropping the visual event if the binary variable is false and the visual event is not associable.
    Type: Grant
    Filed: May 13, 2014
    Date of Patent: October 18, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lei Ding, Quanfu Fan, Sharathchandra U. Pankanti
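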
  • Patent number: 9465992
    Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training a training image set, where one local detector in the multiple local detectors corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature, which is based on a local area of the target, of the to-be-recognized scene; and recognizing the to-be-recognized scene according to the feature, which is based on the local area of the target, of the to-be-recognized scene.
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue
  • Patent number: 9460357
    Abstract: Embodiments disclosed facilitate robust, accurate, and reliable recovery of words and/or characters in the presence of non-uniform lighting and/or shadows. In some embodiments, a method to recover text from an image may comprise: expanding a Maximally Stable Extremal Region (MSER) in the image into a neighborhood comprising a plurality of sub-blocks; thresholding a subset of the plurality of sub-blocks in the neighborhood, the subset comprising sub-blocks with text, wherein each sub-block in the subset is thresholded using a corresponding threshold associated with the sub-block; and obtaining a thresholded neighborhood.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: October 4, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Hemanth P. Acharya, Pawan Kumar Baheti, Kishor K. Barman
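    A simplified sketch of per-sub-block thresholding on an expanded MSER neighborhood, under stated assumptions: the neighborhood is split into fixed-size sub-blocks, a contrast check stands in for the text test, and the block mean stands in for the per-sub-block threshold.
      import numpy as np

      def threshold_neighborhood(neighborhood, block=16, contrast_min=20):
          """neighborhood: 2-D grayscale array covering the expanded MSER region."""
          out = np.zeros(neighborhood.shape, dtype=np.uint8)
          h, w = neighborhood.shape
          for y in range(0, h, block):
              for x in range(0, w, block):
                  sub = neighborhood[y:y + block, x:x + block].astype(float)
                  if sub.max() - sub.min() < contrast_min:
                      continue                       # low-contrast sub-block: assume no text
                  local_threshold = sub.mean()       # per-sub-block threshold (mean as a stand-in)
                  out[y:y + block, x:x + block] = (sub > local_threshold).astype(np.uint8)
          return out

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          shaded = np.tile(np.linspace(40, 200, 64), (64, 1)) + rng.normal(0, 30, (64, 64))
          print(threshold_neighborhood(shaded).mean())   # roughly half the pixels pass their local threshold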
  • Patent number: 9448716
    Abstract: A process for the management of a graphical user interface (10) includes application software graphical components (26, 40, 40′), such as windows that display computer applications, displaying the data of an associated application software function. The process includes the stages of: tracing (E102), on the interface, a graphical shape (31, 31′) in such a way as to create a graphical component (30, 30′), combining (E128) an application software function with the graphical component that was created for assigning the graphical component to the display of the application software function, and determining (E110) a direction (S) of the graphical component that is created in such a way as to display data of the associated application software function according to the determined direction.
    Type: Grant
    Filed: October 28, 2010
    Date of Patent: September 20, 2016
    Assignee: Orange
    Inventors: François Coldefy, Mohammed Belatar
  • Patent number: 9444990
    Abstract: The present disclosure provides a system and method of setting the focus of a digital image based on social relationship. In accordance with embodiments of the present disclosure, a scene is imaged with an electronic device and a face present in the imaged scene is detected. An identity of an individual having the detected face is recognized by determining that the detected face is the face of an individual having a social relationship with the user of the electronic device. The focus of the image is set to focus on the face of the recognized individual.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: September 13, 2016
    Assignee: Sony Mobile Communications Inc.
    Inventors: Mathias Jensen, Vishal Kondabathini, Sten Wendel, Stellan Nordström
  • Patent number: 9438891
    Abstract: Aspects of the present invention comprise holocam systems and methods that enable the capture and streaming of scenes. In embodiments, multiple image capture devices, which may be referred to as “orbs,” are used to capture images of a scene from different vantage points or frames of reference. In embodiments, each orb captures three-dimensional (3D) information, which is preferably in the form of a depth map and visible images (such as stereo image pairs and regular images). Aspects of the present invention also include mechanisms by which data captured by two or more orbs may be combined to create one composite 3D model of the scene. A viewer may then, in embodiments, use the 3D model to generate a view from a different frame of reference than was originally created by any single orb.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: September 6, 2016
    Assignee: Seiko Epson Corporation
    Inventors: Michael Mannion, Sujay Sukumaran, Ivo Moravec, Syed Alimul Huda, Bogdan Matei, Arash Abadpour, Irina Kezele
  • Patent number: 9406138
    Abstract: In one embodiment, a technique is provided for semi-automatically extracting a polyline from a linear feature in a point cloud. The user may provide initial parameters, including a point about the linear feature and a starting direction. A linear feature extraction process may automatically follow the linear feature beginning in the starting direction from about the selected point. The linear feature extraction process may attempt to follow a linear segment of the linear feature. If some points may be followed that constitute a linear segment, a line segment modeling the linear segment is created. The linear feature extraction process then determines whether the end of the linear feature has been reached. If the end has not been reached, the linear feature extraction process may repeat. If the end has been reached, the linear feature extraction process may return the line segments and create a polyline from them.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: August 2, 2016
    Assignee: Bentley Systems, Incorporated
    Inventor: Mathieu St-Pierre
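    A rough sketch of the "follow the linear feature" loop this abstract outlines, assuming 2-D points, a user-supplied seed point, and a starting direction; each step gathers the points just ahead of the current position, takes their centroid as a stand-in for a fitted segment end, and advances until no points remain. All parameters are illustrative.
      import numpy as np

      def extract_polyline(points, seed, direction, step=1.0, radius=0.5, max_steps=100):
          points = np.asarray(points, dtype=float)
          direction = np.asarray(direction, dtype=float)
          direction /= np.linalg.norm(direction)
          vertices = [np.asarray(seed, dtype=float)]
          for _ in range(max_steps):
              current = vertices[-1]
              perp = np.array([-direction[1], direction[0]])
              ahead = points - current
              along = ahead @ direction                     # signed distance along the direction
              lateral = np.abs(ahead @ perp)                # distance off the current line
              nearby = (along > 0) & (along <= step) & (lateral <= radius)
              if not nearby.any():
                  break                                     # end of the linear feature reached
              segment_end = points[nearby].mean(axis=0)     # centroid stands in for a fitted segment end
              direction = segment_end - current
              direction /= np.linalg.norm(direction)
              vertices.append(segment_end)
          return np.array(vertices)

      if __name__ == "__main__":
          t = np.linspace(0, 10, 200)
          noisy_line = np.c_[t, 0.3 * t] + np.random.normal(0, 0.05, (200, 2))
          print(extract_polyline(noisy_line, seed=(0, 0), direction=(1, 0)).shape)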
  • Patent number: 9406107
    Abstract: An imaging system includes a computer programmed to estimate noise in computed tomography (CT) imaging data, correlate the noise estimation with neighboring CT imaging data to generate a weighting estimation based on the correlation, de-noise the CT imaging data based on the noise estimation and on the weighting, and reconstruct an image using the de-noised CT imaging data.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: August 2, 2016
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Jiahua Fan, Meghan L. Yue, Jiang Hsieh, Roman Melnyk, Masatake Nukui, Yujiro Yazaki
  • Patent number: 9397844
    Abstract: Embodiments of the present disclosure relate to automatic generation of dynamically changing layouts for a graphical user-interface. Specifically, embodiments of the present disclosure employ analysis of an image associated with the view (e.g., either the current view or a future view) of the graphical user-interface to determine colors that are complementary to the image. The colors are applied to the view, such that the color scheme of the view matches the image.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: July 19, 2016
    Assignee: APPLE INC.
    Inventors: Joe R. Howard, Brian R. Frick, Timothy B. Martin, Christopher John Sanders
  • Patent number: 9377298
    Abstract: A method for surveying an object for and/or using a geodetic surveying device that includes a derivation of an item of surface information at least for one object region, at least one geodetically precise single point determination for the object region, wherein a position of at least one object point is determined geodetically precisely, and an update of the item of surface information based on the determined position of the at least one object point. In some embodiments a scan to derive the item of surface information may be performed using object-point-independent scanning of the object region by progressive alignment changes of the measuring radiation, with a determination of a respective distance and of a respective alignment of the measuring radiation emitted for the distance measurement for scanning points lying within the object region, and having a generation of a point cloud which represents the item of surface information.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: June 28, 2016
    Assignee: LEICA GEOSYSTEMS AG
    Inventors: Hans-Martin Zogg, Norbert Kotzur
  • Patent number: 9349072
    Abstract: The use of local feature descriptors of an image to generate compressed image data and reconstruct the image using image patches that are external to the image based on the compressed image data may increase image compression efficiency. A down-sampled version of the image is initially compressed to produce an encoded visual descriptor. The local feature descriptors of the image and the encoded visual descriptor are then obtained. A set of differential feature descriptors are subsequently determined based on the differences between the local feature descriptors of the input image and the encoded visual descriptor. At least some of the differential feature descriptors are compressed to produce encoded feature descriptors, which are then combined with the encoded visual feature descriptor to produce image data. The image data may be used to select image patches from an image database to reconstruct the image.
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: May 24, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xiaoyan Sun, Feng Wu
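    A schematic sketch of the differential-descriptor step only, assuming descriptors are fixed-length float vectors matched by nearest neighbor between the original image's descriptors and those derived from the encoded, down-sampled version; the residuals are what a later stage would compress.
      import numpy as np

      def differential_descriptors(original_desc, encoded_desc):
          """original_desc: (N, D) and encoded_desc: (M, D) float arrays of local feature descriptors."""
          residuals = np.empty_like(original_desc)
          matches = np.empty(len(original_desc), dtype=int)
          for i, d in enumerate(original_desc):
              dists = np.linalg.norm(encoded_desc - d, axis=1)
              j = int(np.argmin(dists))              # closest descriptor from the encoded version
              matches[i] = j
              residuals[i] = d - encoded_desc[j]     # difference to be compressed
          return matches, residuals

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          enc = rng.normal(size=(50, 32))
          orig = enc[rng.integers(0, 50, 80)] + rng.normal(scale=0.05, size=(80, 32))
          matches, residuals = differential_descriptors(orig, enc)
          print(residuals.std())                     # residual energy is small compared to the descriptors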
  • Patent number: 9330333
    Abstract: A method and apparatus for automatic image brightness detection. The method includes determining region of interest (ROI) candidates in an image, extracting features from each of the ROI candidates, selecting the optimum ROI based on a weighted score of each of the ROI candidates, and calculating a brightness value of the selected optimum ROI candidate as brightness feedback. The method and apparatus for automatic image brightness detection according to embodiments of the present invention can automatically detect the point of interest for clinicians and then provide more accurate feedback to the imaging system, in order to provide more efficient dose management in the imaging system and thus achieve constant image quality without wasting any dose, thereby further optimizing the dose/IQ performance and the highly efficient utilization of the system.
    Type: Grant
    Filed: March 30, 2011
    Date of Patent: May 3, 2016
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Xiao Xuan, Romain Areste, Vivek Walimbe
  • Patent number: 9323981
    Abstract: Disclosed is a face component extraction apparatus including an eye detection unit which detects a plurality of combinations of eye regions, each combination forming a pair, a first calculation unit which calculates a first evaluation value for each pair of eye regions, a fitting unit which fits a plurality of extraction models for extracting a plurality of face components in the image based on a number of pairs of eye regions whose first evaluation values are equal to or greater than a predetermined value, a second calculation unit which calculates a second evaluation value for each of a number of pairs of eye regions, and a deciding unit which decides a fitting mode of the plurality of extraction models to be fitted by the fitting unit based on calculation results of a number of second evaluation values by the second calculation unit.
    Type: Grant
    Filed: October 10, 2013
    Date of Patent: April 26, 2016
    Assignee: CASIO COMPUTER CO., LTD.
    Inventors: Hirokiyo Kasahara, Keisuke Shimada
  • Patent number: 9311523
    Abstract: A method for supporting object recognition is disclosed. The method includes the steps of: setting calculation blocks, each of which includes one or more pixels in an image, acquiring respective average values of the pixels included in the respective calculation blocks, and matching information on the respective calculation blocks with the respective average values or respective adjusted values derived from the respective average values; referring to information on windows, each of which includes information on one or more reference blocks which are different in at least either positions or sizes and information on corresponding relations between the calculation blocks and the average values or the adjusted values, to thereby assign the respective average values or the respective adjusted values to the respective reference blocks; and acquiring necessary information by using the respective average values or the respective adjusted values assigned to the respective reference blocks.
    Type: Grant
    Filed: July 29, 2015
    Date of Patent: April 12, 2016
    Assignee: StradVision Korea, Inc.
    Inventor: Woonhyun Nam
  • Patent number: 9300321
    Abstract: Methods and apparatus for lossless LiDAR LAS file compression and decompression are provided that include predictive coding, variable-length coding, and arithmetic coding. The predictive coding uses four different predictors including three predictors for x, y, and z coordinates and a constant predictor for scalar values, associated with each LiDAR data point.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: March 29, 2016
    Assignee: University of Maribor
    Inventors: Borut Zalik, Domen Mongus
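    A small sketch of the predictive-coding idea for LiDAR point records, using a previous-point predictor for the coordinates (a stand-in for the three coordinate predictors named above; their exact form is not reproduced here) and a constant predictor for scalar attributes. The residuals are what a variable-length or arithmetic coder would then compress, and the scheme is lossless by construction.
      import numpy as np

      def predict_residuals(xyz, intensity, constant=0):
          """xyz: (N, 3) integer coordinates; intensity: (N,) scalar attribute per point."""
          coord_residuals = np.empty_like(xyz)
          coord_residuals[0] = xyz[0]                      # first point stored as-is
          coord_residuals[1:] = xyz[1:] - xyz[:-1]         # previous-point predictor
          scalar_residuals = intensity - constant          # constant predictor for scalars
          return coord_residuals, scalar_residuals

      def reconstruct(coord_residuals, scalar_residuals, constant=0):
          """Inverse of predict_residuals."""
          return np.cumsum(coord_residuals, axis=0), scalar_residuals + constant

      if __name__ == "__main__":
          pts = np.cumsum(np.random.randint(-3, 4, size=(1000, 3)), axis=0)   # smooth-ish scan path
          inten = np.random.randint(0, 256, size=1000)
          cr, sr = predict_residuals(pts, inten)
          xyz, back = reconstruct(cr, sr)
          assert np.array_equal(xyz, pts) and np.array_equal(back, inten)     # lossless round trip
          print(np.abs(cr).mean(), np.abs(pts).mean())     # residuals are much smaller than raw values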
  • Patent number: 9280223
    Abstract: An imaging apparatus includes an imaging part for capturing a subject image, a touch panel for acquiring a touch position input by a user, and a control part for controlling an imaging operation performed by the imaging part. The control part acquires the touch position and causes the imaging part to perform the imaging operation repeatedly, each time the touch position is displaced on the touch panel by a predetermined amount during a continuous touch input by the user.
    Type: Grant
    Filed: January 24, 2013
    Date of Patent: March 8, 2016
    Assignee: Olympus Corporation
    Inventors: Maki Toida, Izumi Sakuma, Kensei Ito
  • Patent number: 9275308
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting objects in images. One of the methods includes receiving an input image. A full object mask is generated by providing the input image to a first deep neural network object detector that produces a full object mask for an object of a particular object type depicted in the input image. A partial object mask is generated by providing the input image to a second deep neural network object detector that produces a partial object mask for a portion of the object of the particular object type depicted in the input image. A bounding box is determined for the object in the image using the full object mask and the partial object mask.
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: March 1, 2016
    Assignee: Google Inc.
    Inventors: Christian Szegedy, Dumitru Erhan, Alexander Toshkov Toshev
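    A sketch of the final box-determination step only, assuming the two networks have already produced binary masks; the combination rule used here (take the extent of every pixel set in either mask) is illustrative, not the patented procedure.
      import numpy as np

      def bounding_box_from_masks(full_mask, partial_mask):
          """Returns (y_min, x_min, y_max, x_max), or None when both masks are empty."""
          combined = full_mask | partial_mask
          ys, xs = np.nonzero(combined)
          if ys.size == 0:
              return None
          return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

      if __name__ == "__main__":
          full = np.zeros((10, 10), dtype=bool);    full[2:6, 3:7] = True
          partial = np.zeros((10, 10), dtype=bool); partial[4:8, 5:9] = True   # e.g. the object's lower part
          print(bounding_box_from_masks(full, partial))    # (2, 3, 7, 8)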
  • Patent number: 9277357
    Abstract: Methods and systems for map generation for location and navigation with user sharing/social networking may comprise a premises-based crowd-sourced database that receives images and location data from a plurality of users of wireless communication devices, and for each of said plurality of users: receiving a determined position of a wireless communication device (WCD), where the position is determined by capturing images of the surroundings of the WCD. Data associated with objects in the surroundings of the WCD may be extracted from the captured images, positions of the objects may be determined, and the determined positions and the data may then update the premises-based crowd-sourced database. The position of the WCD may be determined utilizing global navigation satellite system (GNSS) signals. The elements may comprise structural and/or textual features in the surroundings of the WCD. The position may be determined utilizing sensors that measure a distance from a known position.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: March 1, 2016
    Assignee: Maxlinear, Inc.
    Inventor: Curtis Ling
  • Patent number: 9269022
    Abstract: Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: February 23, 2016
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, Yang Bai, Tony F. Rodriguez, Eliot Rogers, Ravi K. Sharma, John D. Lord, Scott Long, Brian T. MacIntosh, Kurt M. Eaton
  • Patent number: 9268795
    Abstract: Implementations consistent with the principles described herein relate to ranking a set of images based on features of the images to determine the most representative and/or highest quality images in the set. In one implementation, an initial set of images is obtained and ranked based on a comparison of each image in the set of images to other images in the set of images. The comparison is performed using at least one predetermined feature of the images.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: February 23, 2016
    Assignee: Google Inc.
    Inventors: Shumeet Baluja, Yushi Jing
  • Patent number: 9258482
    Abstract: A facial expression recognition apparatus (10) detects a face image of a person from an input image, calculates a facial expression evaluation value corresponding to each facial expression from the detected face image, updates, based on the face image, the relationship between the calculated facial expression evaluation value and a threshold for determining a facial expression set for the facial expression evaluation value, and determines the facial expression of the face image based on the updated relationship between the facial expression evaluation value and the threshold for determining a facial expression.
    Type: Grant
    Filed: August 20, 2015
    Date of Patent: February 9, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yuji Kaneda
  • Patent number: 9256802
    Abstract: An information representation method for representing an object or a shape includes: dividing a contour shape of an entirety or a part of the object or the shape into one or a plurality of curves; and representing the contour shape of the object or the shape by parameters including a degree of curvature and a positional relationship of each curve obtained by the dividing. Therefore, there is provided an information representation method for an object or a shape that enables object recognition robust against image changes caused by geometric transformations and occlusions.
    Type: Grant
    Filed: November 11, 2011
    Date of Patent: February 9, 2016
    Assignees: NEC CORPORATION, TOHOKU UNIVERSITY
    Inventors: Yuma Matsuda, Masatsugu Ogawa, Masafumi Yano, Susumu Kawakami
  • Patent number: 9218064
    Abstract: A computing device comprises a processor and an authoring tool executing on the processor. The processor receives demonstration data representative of at least one demonstration of a multi-finger gesture and declaration data specifying one or more constraints for the multi-finger gesture. The processor generates, in accordance with the demonstration data and the declaration data, a module to detect the multi-finger gesture within a computer-generated user interface.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: December 22, 2015
    Assignee: Google Inc.
    Inventors: Yang Li, Hao Lu
  • Patent number: 9208379
    Abstract: An image processing apparatus connectable to a terminal which captures an image includes an acquisition unit configured to acquire augmented information and attribute information from feature information extracted from a captured image, a processing unit configured to generate, if a plurality of pieces of the feature information is extracted, at least one piece of new augmented information by using a plurality of pieces of the augmented information acquired by the acquisition unit, based on the attribute information, and a transmission unit configured to transmit the new augmented information generated by the processing unit to the terminal.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: December 8, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yumiko Uchida
  • Patent number: 9188676
    Abstract: A system uses range and Doppler velocity measurements from a lidar system and images from a video system to estimate a six degree-of-freedom trajectory of a target. The system may determine a skin area or face contour based on the 3D measurements from a lidar subsystem and information regarding the location of various facial features from the 2D video images of a video subsystem.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 17, 2015
    Assignee: Digital Signal Corporation
    Inventor: Anatoley T. Zheleznyak
  • Patent number: 9183638
    Abstract: A method and apparatus for identifying a position of a platform. Features are identified in a series of images generated by a camera system associated with the platform while the platform is moving. A shift in a perspective of the camera system is identified from a shift in a position of the features in the series of images. A change in the position of the platform is identified based on the shift in the perspective.
    Type: Grant
    Filed: August 9, 2011
    Date of Patent: November 10, 2015
    Assignee: THE BOEING COMPANY
    Inventors: Carson Reynolds, Emad William Saad, John Lyle Vian, Masatoshi Ishikawa
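    A compact sketch of the "shift in feature positions implies a change in platform position" idea, assuming features are already matched between two consecutive frames; it estimates a 2-D image-plane shift as the median displacement and ignores rotation, scale, and 3-D geometry.
      import numpy as np

      def estimate_shift(features_prev, features_curr):
          """Both inputs are (N, 2) arrays of matched feature positions; returns the median shift."""
          displacements = np.asarray(features_curr, float) - np.asarray(features_prev, float)
          return np.median(displacements, axis=0)          # median is robust to a few bad matches

      if __name__ == "__main__":
          prev = np.random.rand(40, 2) * 100
          curr = prev + [3.0, -1.5]                        # the whole scene shifted by (3, -1.5)
          curr[0] = [0, 0]                                 # one outlier match
          print(estimate_shift(prev, curr))                # close to [3.0, -1.5]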
  • Patent number: 9165390
    Abstract: Provided is an object frame display device (100) in which: an object detection frame computation unit (102) derives a first object detection frame which denotes a region of an object to be detected by carrying out a pattern recognition process on an inputted image, and derives a second object detection frame by integrating first object detection frames which are inferred to be object detection frames relating to the same object to be detected; a containment frame computation unit (103) derives, for each second object detection frame, a third object detection frame which contains the first object detection frame upon which the second object detection frame is based; and a display frame forming unit (105) forms an object detection frame which is displayed on the basis of a relation between the size of the second object detection frame and the size of the third object detection frame.
    Type: Grant
    Filed: May 15, 2012
    Date of Patent: October 20, 2015
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: Yuichi Matsumoto
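    A sketch of the frame relationships described above: overlapping first detection frames are grouped (IoU-based grouping is an assumption), each group is averaged into a second frame, and the rectangle enclosing the whole group plays the role of the third (containment) frame.
      import numpy as np

      def iou(a, b):
          """Boxes are (x1, y1, x2, y2)."""
          ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
          ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
          inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
          area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
          return inter / (area(a) + area(b) - inter) if inter else 0.0

      def integrate_frames(first_frames, iou_threshold=0.3):
          groups = []
          for frame in first_frames:
              for group in groups:
                  if any(iou(frame, member) >= iou_threshold for member in group):
                      group.append(frame)
                      break
              else:
                  groups.append([frame])
          results = []
          for group in groups:
              g = np.asarray(group, dtype=float)
              second = g.mean(axis=0)                                          # averaged detection frame
              third = [g[:, 0].min(), g[:, 1].min(), g[:, 2].max(), g[:, 3].max()]  # containing frame
              results.append((second, third))
          return results

      if __name__ == "__main__":
          frames = [(10, 10, 50, 50), (12, 8, 52, 48), (200, 200, 230, 240)]
          for second, third in integrate_frames(frames):
              print(np.round(second, 1), third)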
  • Patent number: 9165468
    Abstract: A system for longitudinal and lateral control of a vehicle. The system includes a camera generating a plurality of images representative of a field of view in front of the vehicle, and a controller receiving the plurality of images. The controller detects an object in the plurality of images, extracts a plurality of features from the plurality of images, generates a reference feature depiction based on the plurality of features from one or more of the plurality of images, generates a comparison feature depiction based on the plurality of features from one of the plurality of images, compares the reference feature depiction and the comparison feature depiction, and determines a longitudinal position of the vehicle relative to the object based on differences between the reference feature depiction and the comparison feature depiction.
    Type: Grant
    Filed: April 12, 2010
    Date of Patent: October 20, 2015
    Assignee: Robert Bosch GmbH
    Inventors: Yun Luo, Dieter Hoetzer
  • Patent number: 9154769
    Abstract: A measurement apparatus for automatic three-dimensional measurement of space includes a camera sensor array that is configured to generate low-resolution video recordings. The camera sensor array is further configured to automatically generate high-resolution images at geometrically suitable positions in the space. Automatic recording of the high-resolution images is based on a three-dimensional real-time reconstruction of the video recordings. A measurement system includes the measurement apparatus and a corresponding method is implemented for the automatic three-dimensional measurement of the space.
    Type: Grant
    Filed: May 30, 2011
    Date of Patent: October 6, 2015
    Assignee: Robert Bosch GmbH
    Inventors: Matthias Roland, Sebastian Jackisch, Alexander Fietz, Benjamin Pitzer
  • Patent number: 9135711
    Abstract: A method for controlling a video segmentation apparatus is provided. The method includes receiving an image corresponding to a frame of a video; estimating a motion of an object in the received image to be extracted from the received image, determining a plurality of positions of windows corresponding to the object; adjusting at least one of a size and a spacing of at least one window located at a position of the plurality of determined positions of the windows based on an image characteristic; and extracting the object from the received image based on the at least one window of which the at least one of the size and the spacing is adjusted.
    Type: Grant
    Filed: September 30, 2013
    Date of Patent: September 15, 2015
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Se-Hoon Kim, Young-Ho Moon, Soo-Chahn Lee, Han-Tak Kwak, Woo-Sung Shim, Ji-Hwan Woo
  • Patent number: 9131246
    Abstract: An artifact in a discrete cosine transform based decoder output may be detected by developing a set of templates. An average intensity within each block in a reconstructed picture is calculated (34). The differences of each pixel value from the average output intensity within each block are determined (36). Each difference is then multiplied by one of the templates within each block and the results of the multiplications are summed to obtain a calculated result (38). The calculated result is then compared to a threshold to detect artifacts such as ringing or mosquito noise (40).
    Type: Grant
    Filed: December 1, 2009
    Date of Patent: September 8, 2015
    Assignee: Intel Corporation
    Inventors: Andrey Turlikov, Marat Gilmutdinov, Anton Veselov
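    A direct sketch of the calculation described above: for each 8x8 block of the reconstructed picture, subtract the block's average intensity, multiply the differences by a template, sum, and compare the result to a threshold. The border-emphasizing template and the threshold value are illustrative stand-ins.
      import numpy as np

      BLOCK = 8
      TEMPLATE = np.zeros((BLOCK, BLOCK))
      TEMPLATE[0, :] = TEMPLATE[-1, :] = TEMPLATE[:, 0] = TEMPLATE[:, -1] = 1.0   # emphasize block borders

      def detect_artifacts(picture, threshold=200.0):
          """Returns a boolean map with one entry per 8x8 block flagged as containing an artifact."""
          h, w = picture.shape[0] // BLOCK * BLOCK, picture.shape[1] // BLOCK * BLOCK
          flags = []
          for y in range(0, h, BLOCK):
              row = []
              for x in range(0, w, BLOCK):
                  block = picture[y:y + BLOCK, x:x + BLOCK].astype(float)
                  diff = block - block.mean()                   # difference from the block average
                  score = float(np.abs((diff * TEMPLATE).sum()))
                  row.append(score > threshold)
              flags.append(row)
          return np.array(flags)

      if __name__ == "__main__":
          img = np.full((32, 32), 128, dtype=np.uint8)
          block = img[8:16, 8:16]
          block[0, :] = block[-1, :] = block[:, 0] = block[:, -1] = 200   # ringing-like halo on one block
          print(detect_artifacts(img).astype(int))              # only the block at (1, 1) is flagged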
  • Patent number: 9122958
    Abstract: Object recognition systems, methods, and devices are provided. Candidate objects may be detected. The candidate objects may be verified as depicting objects of a predetermined object type with verification tests that are based on comparisons with reference images known to include such objects and/or based on context of the candidate objects. The object recognition system may identify images in a social networking service that may include objects of a predetermined type.
    Type: Grant
    Filed: February 14, 2014
    Date of Patent: September 1, 2015
    Assignee: Social Sweepster, LLC
    Inventors: Tod Joseph Curtis, Thomas Ryan McGrath, Kenneth Edward Jagacinski Schweickert
  • Patent number: 9092877
    Abstract: An imaging device includes an imaging unit capturing an image of a subject, and tracks, through images captured in time series, an area in which a specific target appears. The device includes a parameter acquiring unit acquiring a photographic parameter from the imaging unit, a target area determining unit determining an area of a captured image including the specific target as a target area, a track area adjusting unit setting a track frame for a track area to track the target area including the specific target and adjusting a size of the track frame based on the photographic parameter, and a track area searching unit searching the captured image for the track area, while moving the size-adjusted track frame, based on a similarity between a characteristic amount of the track area of a current captured image and that of the target area of a previous captured image.
    Type: Grant
    Filed: February 14, 2011
    Date of Patent: July 28, 2015
    Assignee: RICOH COMPANY, LTD.
    Inventor: Haike Guan
  • Patent number: 9082004
    Abstract: Methods and apparatus to capture images are disclosed. An example apparatus includes a resolution determiner to determine that a first frame of image data is to undergo processing at a first resolution and that a second frame of image data is to undergo processing at a second resolution lower than the first resolution; and a controller to activate an illuminator when an image sensor is to capture the first frame and to deactivate the illuminator when the image sensor is to capture the second frame.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: July 14, 2015
    Assignee: THE NIELSEN COMPANY (US), LLC.
    Inventor: Christen V. Nielsen
  • Patent number: 9053540
    Abstract: An image matching apparatus includes a bilateral filter that filters a left image and a right image to output a second left image and a second right image; a census cost calculation unit performing census transform on a window based on a first pixel of the second left image and a window based on a second pixel of the second right image to calculate a census cost corresponding to a pair of pixels of the first and second pixels; a support weight calculation unit obtaining support weights of the left and right images or the second left and second right images; a cost aggregation unit obtaining energy values of nodes corresponding to the pair of pixels of the first and second pixels using the census cost and the support weights; and a dynamic programming unit performing image matching using dynamic programming by the energy values of each node obtained.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: June 9, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ji Ho Chang, Eul Gyoon Lim, Ho Chul Shin, Jae Il Cho
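    A sketch of the census cost only: census-transform a window around a pixel in each image (each neighbor contributes one bit recording whether it is darker than the center) and take the Hamming distance between the two bit strings as the matching cost. The 5x5 window is an illustrative choice; the bilateral filter, support weights, and dynamic-programming stages are omitted.
      import numpy as np

      def census(image, y, x, radius=2):
          """Return the census bit vector over the (2*radius+1)^2 - 1 neighbors of (y, x)."""
          center = image[y, x]
          bits = []
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  if dy == 0 and dx == 0:
                      continue
                  bits.append(1 if image[y + dy, x + dx] < center else 0)
          return np.array(bits, dtype=np.uint8)

      def census_cost(left, right, y, x_left, x_right, radius=2):
          """Hamming distance between the census strings of the two candidate pixels."""
          return int(np.count_nonzero(census(left, y, x_left, radius) != census(right, y, x_right, radius)))

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          left = rng.integers(0, 256, (20, 20)).astype(np.uint8)
          right = np.roll(left, -3, axis=1)                 # right view shifted by a disparity of 3
          print(census_cost(left, right, 10, 10, 7))        # 0: the true disparity matches exactly
          print(census_cost(left, right, 10, 10, 10))       # large: a wrong disparity mismatches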
  • Publication number: 20150146987
    Abstract: The present disclosure relates to a method and terminal device for processing an image. The method includes: acquiring face information from a template image upon receiving a request to process an image containing a face; and applying a photo makeover to the face according to the face information acquired from the template image. By acquiring face information from a template image and automatically applying a photo makeover according to that information, manual setting of makeover parameters is avoided and efficiency is improved.
    Type: Application
    Filed: August 19, 2014
    Publication date: May 28, 2015
    Inventors: Mingyong Tang, Bo Zhang, Xiao Liu, Lin Wang
  • Patent number: 9042595
    Abstract: A proof information processing apparatus adds a plurality of types of annotative information to a proof image by use of a plurality of input modes for inputting respective different types of annotative information. A proof information processing method is carried out by using the proof information processing apparatus. A recording medium stores a program for performing the functions of the proof information processing apparatus. An electronic proofreading system includes the proof information processing apparatus and a remote server. At least one of input modes including a text input mode, a stylus input mode, a color information input mode, and a speech input mode is selected depending on characteristics of an image in a region of interest which is indicated.
    Type: Grant
    Filed: April 12, 2012
    Date of Patent: May 26, 2015
    Assignee: FUJIFILM Corporation
    Inventor: Akira Watanabe
  • Patent number: 9042656
    Abstract: The image signature extraction device includes an extraction unit and a generation unit. The extraction unit extracts region features from respective sub-regions in an image in accordance with a plurality of pairs of sub-regions in the image, the pairs of sub-regions including at least one pair of sub-regions in which both a combination of shapes of two sub-regions of the pair and a relative position between the two sub-regions of the pair differ from those of at least one of other pairs of sub-regions. The generation unit generates, based on the extracted region features of the respective sub-regions, an image signature to be used for identifying the image.
    Type: Grant
    Filed: January 14, 2010
    Date of Patent: May 26, 2015
    Assignee: NEC CORPORATION
    Inventors: Kota Iwamoto, Ryoma Oami
  • Patent number: 9042658
    Abstract: An image processing device that generates a pixel value of a pixel and interpolates the pixel with the pixel value, the image processing device including: a periodicity determining unit that determines whether an area including the pixel is a periodic area; a boundary determining unit that determines whether the pixel belongs to the periodic area or a non-periodic area; a first pixel value generating unit that generates a first pixel value; a second pixel value generating unit that generates a second pixel value; a control unit that determines whether the first pixel value generating unit is to be used or the second pixel value generating unit is to be used, based on determination results of the periodicity determining unit and the boundary determining unit; a pixel value inputting unit that inputs one of the first pixel value and the second pixel value to the pixel.
    Type: Grant
    Filed: January 11, 2012
    Date of Patent: May 26, 2015
    Assignee: RICOH COMPANY, LTD.
    Inventor: Satoshi Nakamura