Local Or Regional Features Patents (Class 382/195)
  • Patent number: 9842282
    Abstract: An approach is provided for classifying objects that are present at a geo-location and providing an uncluttered presentation of images of some of the objects in an application such as a map application. The approach includes determining one or more regions of interest associated with at least one geo-location, wherein the one or more regions of interest are at least one textured three-dimensional representation of one or more objects that may be present at the at least one geo-location. The approach also includes processing and/or facilitating a processing of the at least one textured three-dimensional representation to determine at least one two-dimensional footprint and three-dimensional geometry information for the one or more objects.
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: December 12, 2017
    Assignee: HERE Global B.V.
    Inventors: Xiaoqing Liu, Jeffrey Adachi, Antonio Haro, Jane MacFarlane
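The footprint step described in the abstract above can be sketched with the simplest possible projection, dropping each vertex's height; `footprint` and `height` are illustrative names, not the patent's API:

```python
def footprint(vertices):
    """Ground-plane footprint of a textured 3D model: project every
    (x, y, z) vertex onto the ground by dropping z, then keep the unique
    points. A full pipeline would typically take their convex hull."""
    return sorted({(x, y) for x, y, z in vertices})

def height(vertices):
    """Simple 3D geometry summary to pair with the 2D footprint."""
    zs = [z for _, _, z in vertices]
    return max(zs) - min(zs)
```

For a box-shaped building model this recovers the four ground-plane corners plus the model's height.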
  • Patent number: 9836835
    Abstract: A technique is disclosed for helping prevent image quality of a three-dimensional image from becoming poor due to fluctuations in the rotation speed of an imaging core. For this purpose, if data is obtained from the imaging core by moving and rotating the imaging core, a cross-sectional image is generated at each movement position. Then, a direction where a guidewire is present in each of the cross-sectional images is detected. An angular difference between the direction of the detected guidewire and a preset direction is obtained so as to rotate each of the cross-sectional images in accordance with the angular difference. Then, the cross-sectional images which are previously rotated in this way are connected to one another, thereby generating the three-dimensional image.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: December 5, 2017
    Assignee: TERUMO KABUSHIKI KAISHA
    Inventors: Junya Furuichi, Kouichi Inoue
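The per-slice de-twisting step can be sketched as computing, for each cross-sectional image, the signed rotation that takes the detected guidewire direction onto the preset direction; the function names are illustrative, and image resampling itself is omitted:

```python
import math

def angular_correction(detected_deg, preset_deg):
    """Smallest signed rotation (degrees) taking the detected guidewire
    direction onto the preset direction, wrapped to (-180, 180]."""
    diff = (preset_deg - detected_deg) % 360.0
    if diff > 180.0:
        diff -= 360.0
    return diff

def align_slices(detected_angles, preset_deg=0.0):
    # One correction per cross-sectional image; applying these rotations
    # cancels the frame-to-frame twist caused by rotation-speed jitter.
    return [angular_correction(a, preset_deg) for a in detected_angles]
```

Connecting the rotated slices along the pullback axis then yields the corrected 3D volume.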
  • Patent number: 9805279
    Abstract: A method for determining user liveness is provided that includes calculating, by a device, eye openness measures for a frame included in captured authentication data, and storing the eye openness measures in a buffer of the device. Moreover, the method includes calculating confidence scores from the eye openness measures stored in the buffer, and detecting an eye blink when a maximum confidence score is greater than a threshold score.
    Type: Grant
    Filed: February 25, 2016
    Date of Patent: October 31, 2017
    Assignee: DAON HOLDINGS LIMITED
    Inventor: Mircea Ionita
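The buffer-and-threshold logic can be sketched as follows; the confidence formula (drop below an open-eye baseline) is an assumption for illustration, not the patented score:

```python
from collections import deque

class BlinkDetector:
    """Keeps a rolling buffer of per-frame eye-openness measures and
    flags a blink when the peak 'closure confidence' clears a threshold."""
    def __init__(self, buffer_size=8, threshold=0.6, baseline=1.0):
        self.buffer = deque(maxlen=buffer_size)
        self.threshold = threshold
        self.baseline = baseline  # openness of a fully open eye

    def add_frame(self, openness):
        self.buffer.append(openness)

    def confidence_scores(self):
        # Confidence that the eye was closed in a frame: how far the
        # measured openness dropped below the open-eye baseline.
        return [max(0.0, 1.0 - o / self.baseline) for o in self.buffer]

    def blink_detected(self):
        scores = self.confidence_scores()
        return bool(scores) and max(scores) > self.threshold
```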
  • Patent number: 9784587
    Abstract: A method includes applying a correlation rule defining a correlation relationship between a first and second object and determining, using a processor, whether a first motion vector of the first object is correlated at a threshold level of correlation with a second motion vector of the second object, the correlation relationship between the first and second objects identifying the threshold level of correlation between the first and second motion vectors. The method also includes, in response to determining that the first motion vector is not correlated at the threshold level with the second motion vector, determining a convergence point for the first and second objects in accordance with a policy. The method further includes transmitting instructions for arriving at the convergence point to the first and second objects.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: October 10, 2017
    Assignee: CA, Inc.
    Inventors: Steven L. Greenspan, Maria C. Velez-Rojas, Serguei Mankovskii
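A minimal sketch of the correlation rule, using cosine similarity as the correlation measure and a midpoint rendezvous as the policy; both choices are assumptions, since the abstract leaves them open:

```python
import math

def correlated(v1, v2, threshold=0.9):
    """Cosine-similarity test between two 2D motion vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return False
    return dot / (n1 * n2) >= threshold

def convergence_point(p1, p2):
    # Simplest policy: meet halfway between the two objects.
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

def coordinate(p1, v1, p2, v2):
    if correlated(v1, v2):
        return None  # already moving together; no rendezvous needed
    return convergence_point(p1, p2)
```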
  • Patent number: 9779497
    Abstract: Measuring the number of glomeruli in the entire, intact kidney using non-destructive techniques is of immense importance in studying several renal and systemic diseases. In particular, a recent Magnetic Resonance Imaging (MRI) technique, based on injection of a contrast agent, cationic ferritin, has been effective in identifying glomerular regions in the kidney. In various embodiments, a low-complexity, high-accuracy method for obtaining the glomerular count from such kidney MRI images is described. This method employs a patch-based approach for identifying a low-dimensional embedding that enables the separation of glomerular regions from the rest. By using only a few images marked by an expert for learning the model, the method provides an accurate estimate of the glomerular number for any kidney image obtained with the contrast agent. In addition, an implementation of the method runs in near real time, processing about 5 images per second.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: October 3, 2017
    Assignee: ARIZONA BOARD OF REGENTS, A BODY CORPORATE OF THE STATE OF ARIZONA, ACTING FOR AND ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jayaraman J. Thiagarajan, Karthikeyan Ramamurthy, Andreas Spanias, David Frakes
  • Patent number: 9766717
    Abstract: There is provided an optical pointing system including at least one reference beacon, an image sensor, a storage unit and a processing unit. The image sensor is configured to capture an image frame containing a beacon image associated with the at least one reference beacon. The storage unit is configured to save image data of at least one object image in the image frame. The processing unit is configured to sequentially process every pixel of the image frame to identify the object image and to remove or merge, in real time, the image data saved in the storage unit that is associated with two object images within a pixel range of the image frame, thereby reducing the memory space used.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: September 19, 2017
    Assignee: PixArt Imaging Inc.
    Inventor: Chia-Cheun Liang
  • Patent number: 9754176
    Abstract: The present invention is directed to a method of extracting data from fields in an image of a document. In one implementation, a text representation of the image of the document is obtained. A graph is constructed for storing features of the text fragments in the text representation and their links. A cascade classification is run to compute the features of the text fragments and their links. Hypotheses about which text fragments belong to which fields in the image of the document are generated, and combinations of the hypotheses are formed. A combination of the hypotheses is selected, and data from the fields in the image of the document is extracted based on the selected combination.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: September 5, 2017
    Assignee: ABBYY PRODUCTION LLC
    Inventor: Mikhail Kostyukov
  • Patent number: 9741171
    Abstract: An image processing device includes a memory and a processor configured to execute a plurality of instructions stored in the memory, the instructions comprising: recognizing a target object from a first image, which is a captured image including the target object in the real world; controlling a second image, which is an augmented image including information about the target object from the first image, and a third image, which is an augmented image of the second image formed so as to inscribe an outer edge surrounding the second image and cover the center of the user's visual field relative to the second image; and displaying, in a state where the user directly visually recognizes the target object in the real world, the second image and the third image such that they are caused to correspond to a position.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: August 22, 2017
    Assignee: FUJITSU LIMITED
    Inventor: Nobuyuki Hara
  • Patent number: 9734411
    Abstract: A method assists in locating objects using their images. One or more processors receive a set of one or more machine readable reference images of an object, and then distribute the set of one or more machine readable reference images to a plurality of computing devices, where each computing device from the plurality of computing devices is capable of capturing an image. Each computing device from the plurality of computing devices captures a set of one or more images. For each set of one or more images in each computing device from the plurality of computing devices, machine logic within each computing device determines whether each set of one or more images includes an image portion corresponding to the object.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: August 15, 2017
    Assignee: International Business Machines Corporation
    Inventors: Simon A. S. Briggs, James K. Hook, Hamish C. Hunt, Nicholas K. Lincoln
  • Patent number: 9727978
    Abstract: A method is provided for extracting outer space feature information from spatial geometric data. The method comprises: an input step S10 of inputting spatial geometric data for a target region; a sampling step S20 of determining a sample by selecting an arbitrary area for the spatial geometric data input in the input step using a preset selection method; a feature extraction step S30 of acquiring feature information for a corresponding sampling plane using a convex hull method based on sampling information including sampling plane information of the spatial geometric data for a sampling plane selected in the sampling step. The sampling step and the feature extraction step are repeatedly performed in a preset manner.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: August 8, 2017
    Assignee: Korea University Research and Business Foundation
    Inventors: Chang Hyun Jun, Nakju Lett Doh
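The convex hull computation at the heart of the feature-extraction step (S30) can be sketched for 2D sampling-plane points with Andrew's monotone-chain algorithm, a standard stand-in for whatever hull method the patent employs:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D points; returns hull
    vertices in counter-clockwise order starting at the leftmost point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

Interior points of the sampled plane drop out, leaving only the outer-space boundary as feature information.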
  • Patent number: 9706131
    Abstract: First, images are captured with three exposure levels, and pixel levels of a low exposure image and an intermediate exposure image are amplified to be matched to those of a high exposure image. Next, a brightness combining ratio for each image is calculated based on the low exposure image that has been matched in brightness. Then, images having brightness combining ratios that are not 0% in a region of interest are selected, and only the selected images are used to generate a combined image in the region of interest, and the low exposure image is used as a substitute, for example, in a region other than the region of interest.
    Type: Grant
    Filed: May 27, 2015
    Date of Patent: July 11, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventor: Satoru Kobayashi
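The brightness-matching and blending steps can be sketched as below; treating gain as the exposure ratio and clipping at 8-bit full scale are simplifying assumptions, and the function names are illustrative:

```python
def match_exposure(pixels, exposure, reference_exposure):
    """Amplify a lower-exposure image so its pixel levels are comparable
    to the reference (high) exposure, clipping at the 8-bit maximum."""
    gain = reference_exposure / exposure
    return [min(255, round(p * gain)) for p in pixels]

def combine(images, ratios):
    """Weighted per-pixel blend of exposure-matched images using the
    brightness combining ratio assigned to each image."""
    out = []
    for px in zip(*images):
        out.append(round(sum(p * r for p, r in zip(px, ratios)) / sum(ratios)))
    return out
```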
  • Patent number: 9684386
    Abstract: There is provided an optical pointing system including at least one reference beacon, an image sensor, a storage unit and a processing unit. The image sensor is configured to capture an image frame containing a beacon image associated with the at least one reference beacon. The storage unit is configured to save image data of at least one object image in the image frame. The processing unit is configured to sequentially process every pixel of the image frame to identify the object image and to remove or merge, in real time, the image data saved in the storage unit that is associated with two object images within a pixel range of the image frame, thereby reducing the memory space used.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: June 20, 2017
    Assignee: PIXART IMAGING INC.
    Inventor: Chia-Cheun Liang
  • Patent number: 9684817
    Abstract: Disclosed is a method for automatically optimizing point cloud data quality, including the following steps: acquiring initial point cloud data for a target to be reconstructed, to obtain an initial discrete point cloud; performing preliminary data cleaning on the obtained initial discrete point cloud to obtain a Locally Optimal Projection operator (LOP) sampling model; obtaining a Poisson reconstruction point cloud model by using a Poisson surface reconstruction method on the obtained initial discrete point cloud; performing iterative closest point algorithm registration on the obtained Poisson reconstruction point cloud model and the obtained initial discrete point cloud; and, for each point on the currently registered model, calculating a weight from the surrounding points within a certain radius of the corresponding position on the obtained LOP sampling model, and comparing the weight with a threshold to determine whether the region where the point is located requires repair.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: June 20, 2017
    Assignee: Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences
    Inventor: Hui Huang
  • Patent number: 9679221
    Abstract: An input image showing a same object as an object shown in a reference image is identified more accurately. A difference area in the input image is determined by converting a difference area in the reference image, on a basis of geometric transformation information calculated by an analysis using a local descriptor. By matching a descriptor extracted from the difference area in the input image with the difference area in the reference image, fine differences that cannot be identified by conventional matching using only a local descriptor can be distinguished and images showing a same object can be exclusively identified.
    Type: Grant
    Filed: May 21, 2013
    Date of Patent: June 13, 2017
    Assignee: NEC Corporation
    Inventor: Ryota Mase
  • Patent number: 9659349
    Abstract: A system identifies a scaling position in a captured image, and identifies red subpixels adjacent to the scaling position. The system computes a scaled red subpixel for the scaling position based on the identified red subpixels according to constraints. The system further computes a scaled blue subpixel based on identified adjacent blue subpixels, according to constraints, and computes a scaled green subpixel based on Gr and Gb subpixels adjacent to the scaling position according to certain constraints. The system then generates a scaled image representative of the captured image, the scaled image including at least the scaled red subpixel value, the scaled blue subpixel value, and the scaled green subpixel value.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: May 23, 2017
    Assignee: GoPro, Inc.
    Inventors: Bruno Cesar Douady-Pleven, Michael Serge André Kraak, Guillaume Matthieu Guerin, Thomas Nicolas Emmanuel Veit
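The core of computing a scaled subpixel from same-colour neighbours can be sketched as 1D linear interpolation; the patented method applies colour-specific constraints on top of this, which are omitted here:

```python
import math

def scale_subpixel(position, samples):
    """Linearly interpolate between the two same-colour subpixels that
    straddle a fractional scaling position along one row."""
    lo = math.floor(position)
    hi = min(lo + 1, len(samples) - 1)
    frac = position - lo
    return samples[lo] * (1 - frac) + samples[hi] * frac
```

Running this separately over the red, blue, and (Gr/Gb) green planes yields the scaled subpixel values assembled into the output image.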
  • Patent number: 9652688
    Abstract: Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: May 16, 2017
    Assignee: Captricity, Inc.
    Inventors: Huguens Jean, Yoriyasu Yano, Hui Peng Hu, Kuang Chen
  • Patent number: 9641755
    Abstract: One or more systems, devices, and/or methods for emphasizing objects in an image, such as a panoramic image, are disclosed. For example, a method includes receiving a depthmap generated from an optical distancing system, wherein the depthmap includes position data and depth data for each of a plurality of points. The optical distancing system measures physical data. The depthmap is overlaid on the panoramic image according to the position data. Data is received that indicates a location on the panoramic image and, accordingly, a first point of the plurality of points that is associated with the location. The depth data of the first point is compared to depth data of surrounding points to identify an area on the panoramic image corresponding to a subset of the surrounding points. The panoramic image is altered with a graphical effect that indicates the location.
    Type: Grant
    Filed: September 12, 2013
    Date of Patent: May 2, 2017
    Assignee: HERE Global B.V.
    Inventor: James D. Lynch
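The depth-comparison step, identifying the surrounding area that shares the clicked point's depth, can be sketched as a tolerance-based flood fill over the overlaid depthmap; the tolerance value and 4-connectivity are assumptions:

```python
def similar_depth_region(depths, seed, tolerance=0.5):
    """Flood-fill from the selected point, keeping 4-connected
    neighbours whose depth is within a tolerance of the seed depth --
    i.e. the pixels likely belonging to the same surface."""
    rows, cols = len(depths), len(depths[0])
    target = depths[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(depths[r][c] - target) > tolerance:
            continue
        region.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region
```

The returned pixel set is the area the graphical effect would highlight on the panorama.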
  • Patent number: 9630318
    Abstract: A robotic device may be operated by a learning controller comprising a feature-learning process configured to determine a control signal based on sensory input. The input may be analyzed to determine the occurrence of one or more features. Features in the input may be associated with the control signal during online supervised training. During training, the learning process may be adapted based on the training input and the predicted output. A combination of the predicted and the target output may be provided to a robotic device to execute a task. Feature determination may comprise online adaptation of input and sparse encoding transformations. Computations related to learning-process adaptation and feature detection may be performed on board by the robotic device in real time, thereby enabling autonomous navigation by trained robots.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: April 25, 2017
    Assignee: Brain Corporation
    Inventors: Borja Ibarz Gabardos, Andrew Smith, Peter O'Connor
  • Patent number: 9633278
    Abstract: Disclosed is an object identification device and the like for reducing identification error for a reference image that presents an object only slightly different from the object presented in an input image.
    Type: Grant
    Filed: December 25, 2013
    Date of Patent: April 25, 2017
    Assignee: NEC CORPORATION
    Inventor: Ryota Mase
  • Patent number: 9619734
    Abstract: Land classification based on analysis of image data. Feature extraction techniques may be used to generate a feature stack corresponding to the image data to be classified. A user may identify training data from the image data, from which a classification model may be generated using one or more machine learning techniques applied to one or more features of the image. In this regard, the classification model may in turn be used to classify pixels from the image data other than the training data. Additionally, quantifiable metrics regarding the accuracy and/or precision of the models may be provided for model evaluation and/or comparison. Further, the generation of models may be performed in a distributed system such that model creation and/or application may be distributed in a multi-user environment for collaborative and/or iterative approaches.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: April 11, 2017
    Assignee: DigitalGlobe, Inc.
    Inventors: Giovanni B. Marchisio, Carsten Tusk, Krzysztof Koperski, Mark D. Tabb, Jeffrey D. Shafer
  • Patent number: 9615805
    Abstract: A method of aligning at least two breast images includes aligning a relevant image part in each of the images, the relevant image parts being obtained on the basis of the result of a shape analysis procedure performed on the breast images.
    Type: Grant
    Filed: February 25, 2013
    Date of Patent: April 11, 2017
    Assignee: AGFA HEALTHCARE NV
    Inventor: Gert Behiels
  • Patent number: 9613457
    Abstract: Provided are a multi-primitive fitting method including acquiring point cloud data by collecting data for each input point, obtaining a segment for the points using the point cloud data, and performing primitive fitting using the point cloud data and the data of the points included in the segment, as well as a multi-primitive fitting device that performs the method.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: April 4, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young Mi Cha, Chang Woo Chu, Jae Hean Kim
  • Patent number: 9600892
    Abstract: A non-parametric method of, and system for, dimensioning an object of arbitrary shape, captures a three-dimensional (3D) point cloud of data points over a field of view containing the object and a base surface on which the object is positioned, detects a base plane indicative of the base surface from the point cloud, extracts the data points of the object from the point cloud, processes the extracted data points of the object to obtain a convex hull, and fits a bounding box of minimum volume to enclose the convex hull. The bounding box has a pair of mutually orthogonal planar faces, and the fitting is performed by orienting one of the faces to be generally perpendicular to the base plane, and by simultaneously orienting the other of the faces to be generally parallel to the base plane.
    Type: Grant
    Filed: November 6, 2014
    Date of Patent: March 21, 2017
    Assignee: Symbol Technologies, LLC
    Inventors: Ankur R Patel, Kevin J O'Connell, Cuneyt M Taskiran, Jay J Williams
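The 2D analogue of the box-fitting step, finding a minimum-area rectangle by aligning the box with each hull edge in turn (as the patent aligns faces with the base plane), can be sketched as:

```python
import math

def min_area_rect(hull):
    """Minimum-area bounding-rectangle area for a convex polygon, found
    by testing one candidate orientation per hull edge."""
    best = None
    n = len(hull)
    for i in range(n):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
        theta = math.atan2(y2 - y1, x2 - x1)
        # Rotate all hull points by -theta so this edge lies on the x-axis,
        # then measure the axis-aligned extent.
        xs = [x * math.cos(theta) + y * math.sin(theta) for x, y in hull]
        ys = [-x * math.sin(theta) + y * math.cos(theta) for x, y in hull]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if best is None or area < best:
            best = area
    return best
```

A key property (used by the patent in 3D) is that some edge of the convex hull always lies flush against the minimum bounding box.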
  • Patent number: 9589203
    Abstract: A processor implemented system and method for identification of an activity performed by a subject based on sensor data analysis is described herein. In an implementation, the method includes capturing movements of the subject in real-time using a sensing device. At least one action associated with the subject is ascertained from a predefined set of actions. From the predefined set of actions, a plurality of actions can collectively form at least one activity. The ascertaining is based on captured movements of the subject and at least one predefined action rule. The at least one action rule is based on context-free grammar (CFG) and is indicative of a sequence of actions for occurrence of the at least one activity. Further, a current activity performed by the subject is dynamically determined, based on the at least one action and an immediately preceding activity, using a non-deterministic push-down automata (NPDA) state machine.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: March 7, 2017
    Assignee: TATA Consultancy Services Limited
    Inventors: Dipti Prasad Mukherjee, Tamal Batabyal, Tanushyam Chattopadhyay
  • Patent number: 9575004
    Abstract: Systems and methods for inspecting a surface are disclosed. A source, detector, a base, a controller, and a processing device are used to collect image data related to the surface and information relating to the location of the image data on the surface. The image data and information relating to location are correlated and stored in a processing device to create a map of surface condition.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: February 21, 2017
    Assignee: THE BOEING COMPANY
    Inventors: Gary E. Georgeson, Scott W. Lea, James J. Troy
  • Patent number: 9547866
    Abstract: Methods and apparatus to estimate demography based on aerial images are disclosed. An example method includes analyzing a first aerial image of a first geographic area to detect a first plurality of objects, analyzing a second aerial image of a second geographic area to detect a second plurality of objects, associating first demographic information to the second plurality of objects, the first demographic information obtained by a sampling of the second geographic area, and comparing the second plurality of objects to the first plurality of objects to estimate a demographic characteristic of the first geographic area based on the comparison.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: January 17, 2017
    Assignee: THE NIELSEN COMPANY (US), LLC
    Inventors: Alejandro Terrazas, Michael Himmelfarb, David Miller, Paul Donato
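The comparison step can be sketched as the simplest rule consistent with the abstract, scaling the sampled demographic by the ratio of detected object counts; the proportionality assumption is mine, not the patent's:

```python
def estimate_demographic(objects_target, objects_sampled, demographic_sampled):
    """Transfer a demographic measured in a sampled area to an unsampled
    area in proportion to the counts of detected objects (e.g. rooftops)
    in the two aerial images."""
    per_object = demographic_sampled / objects_sampled
    return objects_target * per_object
```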
  • Patent number: 9544565
    Abstract: Data for making calculations from a three dimensional observation is derived from a recording device. The data is combined with information developed by a three dimensional remote sensing platform to create measurement points in space for an object. Descriptive information from at least one object model is used to direct at least one of a resolution resource, results gained from group measurements and an object-deployed resolution asset. Order is thereafter found in two dimensional to three dimensional observations in a subject area.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: January 10, 2017
    Assignee: Vy Corporation
    Inventor: Thomas Martel
  • Patent number: 9524432
    Abstract: The subject technology provides embodiments for performing fast corner detection in a given image for augmented reality applications. Embodiments disclose a high-speed test that examines intensities of pairs of pixels around a candidate center pixel. In one example, the examined pairs comprise pixels at diametrically opposite ends of a circle formed around the candidate center pixel. Further, a pyramid of images including four rings of surrounding pixels is generated. An orientation of the pixels from the four rings is determined and a vector of discrete values of the pixels is provided. Next, a forest of trees is generated for the vector of discrete values corresponding to a descriptor for a first image. For a second image including a set of descriptors, approximate nearest neighbors are determined from the forest of trees, representing the closest matching descriptors from the first image.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: December 20, 2016
    Assignee: A9.com, Inc.
    Inventors: William Brendel, Nityananda Jayadevaprakash, David Creighton Mott, Jie Feng
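The high-speed pair test can be sketched as below; the exact acceptance rule (here, every diametrically opposite pair must contain at least one pixel differing from the centre) is an assumption for illustration:

```python
def is_corner_candidate(center, ring, threshold):
    """High-speed test over a ring of pixel intensities surrounding a
    candidate centre pixel: examine diametrically opposite pairs and
    require each pair to contain a pixel that differs from the centre
    by more than the threshold."""
    half = len(ring) // 2
    passing = 0
    for i in range(half):
        a, b = ring[i], ring[i + half]
        if abs(a - center) > threshold or abs(b - center) > threshold:
            passing += 1
    return passing == half
```

Because a flat patch fails on the first pair, most non-corners are rejected after examining only a few pixels.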
  • Patent number: 9523772
    Abstract: In scenarios involving the capturing of an environment, it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. However, techniques involving the evaluation of visual images to identify and remove objects may be imprecise, e.g., failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. By contrast, such capturing scenarios often involve capturing a lidar point cloud, which may identify the presence and shapes of objects with higher precision. The lidar data may also enable a movement classification of respective objects differentiating moving and stationary objects, which may facilitate an accurate removal of the objects from the rendering of the environment (e.g., identifying the object in a first image may guide the identification of the object in sequentially adjacent images).
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: December 20, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec
  • Patent number: 9478063
    Abstract: Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: October 25, 2016
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, Yang Bai
  • Patent number: 9477885
    Abstract: An image processing apparatus according to one embodiment includes a first extraction unit, a second extraction unit, and a specifying unit. The first extraction unit performs stroke width transform on an image and thereby extracts a SWT region from the image. The second extraction unit performs clustering based on pixel values in the image and thereby extracts a single-color region from the image. The specifying unit specifies a pixel group included in a candidate text region based at least on the single-color region when the ratio of the number of pixels in the overlap between the SWT region and the single-color region to the number of pixels in the single-color region is equal to or more than a first reference value.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: October 25, 2016
    Assignee: Rakuten, Inc.
    Inventor: Naoki Chiba
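The overlap-ratio test performed by the specifying unit can be sketched directly, representing each region as a set of pixel coordinates:

```python
def in_candidate_text_region(swt_pixels, color_pixels, reference=0.5):
    """True when the fraction of single-colour-region pixels that also
    fall inside the SWT region meets the first reference value. Both
    regions are sets of (x, y) pixel coordinates; the 0.5 reference is
    an illustrative default."""
    overlap = len(swt_pixels & color_pixels)
    return overlap / len(color_pixels) >= reference
```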
  • Patent number: 9471832
    Abstract: Automated analysis of video data for determination of human behavior includes segmenting a video stream into a plurality of discrete individual frame image primitives which are combined into a visual event that may encompass an activity of concern as a function of a hypothesis. The visual event is optimized by setting a binary variable to true or false as a function of one or more constraints. The visual event is processed in view of associated non-video transaction data and the binary variable by associating the visual event with a logged transaction if associable, issuing an alert if the binary variable is true and the visual event is not associable with the logged transaction, and dropping the visual event if the binary variable is false and the visual event is not associable.
    Type: Grant
    Filed: May 13, 2014
    Date of Patent: October 18, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lei Ding, Quanfu Fan, Sharathchandra U. Pankanti
  • Patent number: 9465992
    Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training a training image set, where one local detector in the multiple local detectors corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature, which is based on a local area of the target, of the to-be-recognized scene; and recognizing the to-be-recognized scene according to the feature, which is based on the local area of the target, of the to-be-recognized scene.
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue
  • Patent number: 9460357
    Abstract: Embodiments disclosed facilitate robust, accurate, and reliable recovery of words and/or characters in the presence of non-uniform lighting and/or shadows. In some embodiments, a method to recover text from an image may comprise: expanding a Maximally Stable Extremal Region (MSER) in an image into a neighborhood, the neighborhood comprising a plurality of sub-blocks; thresholding a subset of the plurality of sub-blocks in the neighborhood, the subset comprising sub-blocks with text, wherein each sub-block in the subset is thresholded using a corresponding threshold associated with that sub-block; and obtaining a thresholded neighborhood.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: October 4, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Hemanth P. Acharya, Pawan Kumar Baheti, Kishor K. Barman
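The per-sub-block thresholding step can be sketched with the block mean as a simple stand-in for whatever per-block threshold the method associates with each sub-block:

```python
def threshold_sub_blocks(blocks):
    """Binarise each sub-block with its own threshold (here the block
    mean), so a shadow darkening one sub-block does not wash out text
    in a differently lit neighbour."""
    out = []
    for block in blocks:
        t = sum(block) / len(block)
        out.append([1 if p > t else 0 for p in block])
    return out
```

This is why the technique tolerates non-uniform lighting: a single global threshold would binarise shadowed and sunlit sub-blocks with the same cut-off.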
  • Patent number: 9448716
    Abstract: A process for the management of a graphical user interface (10) includes application software graphical components (26, 40, 40′), such as windows that display computer applications, displaying the data of an associated application software function. The process includes the stages of: tracing (E102), on the interface, a graphical shape (31, 31′) in such a way as to create a graphical component (30, 30′); combining (E128) an application software function with the graphical component that was created, for assigning the graphical component to the display of the application software function; and determining (E110) a direction (S) of the graphical component that is created, in such a way as to display data of the associated application software function according to the determined direction.
    Type: Grant
    Filed: October 28, 2010
    Date of Patent: September 20, 2016
    Assignee: Orange
    Inventors: François Coldefy, Mohammed Belatar
  • Patent number: 9444990
    Abstract: The present disclosure provides a system and method of setting the focus of a digital image based on a social relationship. In accordance with embodiments of the present disclosure, a scene is imaged with an electronic device and a face present in the imaged scene is detected. An identity of an individual having the detected face is recognized by determining that the detected face is the face of an individual having a social relationship with the user of the electronic device. The focus of the image is set to focus on the face of the recognized individual.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: September 13, 2016
    Assignee: Sony Mobile Communications Inc.
    Inventors: Mathias Jensen, Vishal Kondabathini, Sten Wendel, Stellan Nordström
  • Patent number: 9438891
    Abstract: Aspects of the present invention comprise holocam systems and methods that enable the capture and streaming of scenes. In embodiments, multiple image capture devices, which may be referred to as “orbs,” are used to capture images of a scene from different vantage points or frames of reference. In embodiments, each orb captures three-dimensional (3D) information, which is preferably in the form of a depth map and visible images (such as stereo image pairs and regular images). Aspects of the present invention also include mechanisms by which data captured by two or more orbs may be combined to create one composite 3D model of the scene. A viewer may then, in embodiments, use the 3D model to generate a view from a different frame of reference than was originally created by any single orb.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: September 6, 2016
    Assignee: Seiko Epson Corporation
    Inventors: Michael Mannion, Sujay Sukumaran, Ivo Moravec, Syed Alimul Huda, Bogdan Matei, Arash Abadpour, Irina Kezele
  • Patent number: 9406138
    Abstract: In one embodiment, a technique is provided for semi-automatically extracting a polyline from a linear feature in a point cloud. The user may provide initial parameters, including a point about the linear feature and a starting direction. A linear feature extraction process may automatically follow the linear feature beginning in the starting direction from about the selected point. The linear feature extraction process may attempt to follow a linear segment of the linear feature. If points constituting a linear segment can be followed, a line segment modeling that segment is created. The linear feature extraction process then determines whether the end of the linear feature has been reached. If the end has not been reached, the linear feature extraction process may repeat. If the end has been reached, the linear feature extraction process may return the line segments and create a polyline from them.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: August 2, 2016
    Assignee: Bentley Systems, Incorporated
    Inventor: Mathieu St-Pierre
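The segment-following loop can be sketched in 2-D as below. A greedy, illustrative simplification (real point clouds are 3-D and unordered; the angle-tolerance rule and all names are assumptions):

```python
import math

def extract_polyline(points, angle_tol=0.2):
    """Greedy sketch: walk ordered points of a linear feature and
    start a new line segment whenever the local direction turns by
    more than `angle_tol` radians; return the polyline vertices.
    (Angle differences are not wrap-corrected; toy data only.)"""
    verts = [points[0]]
    prev_dir = None
    for a, b in zip(points, points[1:]):
        d = math.atan2(b[1] - a[1], b[0] - a[0])
        if prev_dir is not None and abs(d - prev_dir) > angle_tol:
            verts.append(a)              # end of one linear segment
        prev_dir = d
    verts.append(points[-1])
    return verts

# An L-shaped feature: horizontal run, then a 90-degree turn upward.
pts = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
polyline = extract_polyline(pts)
```

The two straight runs collapse into two line segments joined at the corner, mirroring the repeat-until-end loop in the abstract.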
  • Patent number: 9406107
    Abstract: An imaging system includes a computer programmed to estimate noise in computed tomography (CT) imaging data, correlate the noise estimation with neighboring CT imaging data to generate a weighting estimation based on the correlation, de-noise the CT imaging data based on the noise estimation and on the weighting estimation, and reconstruct an image using the de-noised CT imaging data.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: August 2, 2016
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Jiahua Fan, Meghan L. Yue, Jiang Hsieh, Roman Melnyk, Masatake Nukui, Yujiro Yazaki
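The noise-weighted de-noising can be illustrated in 1-D as below. The inverse-noise weighting rule is an invented stand-in for the patented weighting estimation, not GE's actual method:

```python
import numpy as np

def denoise(data, noise, radius=1):
    """Hypothetical 1-D sketch: weight each neighbor inversely to its
    estimated noise and replace each sample by the weighted mean of
    its neighborhood."""
    w = 1.0 / (1.0 + noise)              # noisier samples count less
    out = np.empty_like(data, dtype=float)
    for i in range(len(data)):
        lo, hi = max(0, i - radius), min(len(data), i + radius + 1)
        out[i] = np.average(data[lo:hi], weights=w[lo:hi])
    return out

signal = np.array([1.0, 1.0, 5.0, 1.0, 1.0])     # spike at index 2
noise_est = np.array([0.0, 0.0, 9.0, 0.0, 0.0])  # spike flagged as noisy
smoothed = denoise(signal, noise_est)
```

The flagged spike is pulled toward its trusted neighbors while clean samples are left almost untouched, which is the point of weighting by the noise estimate before reconstruction.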
  • Patent number: 9397844
    Abstract: Embodiments of the present disclosure relate to automatic generation of dynamically changing layouts for a graphical user-interface. Specifically, embodiments of the present disclosure employ analysis of an image associated with the view (e.g., either the current view or a future view) of the graphical user-interface to determine colors that are complementary to the image. The colors are applied to the view, such that the color scheme of the view matches the image.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: July 19, 2016
    Assignee: APPLE INC.
    Inventors: Joe R. Howard, Brian R. Frick, Timothy B. Martin, Christopher John Sanders
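The complementary-color idea can be sketched as below. A deliberately crude illustration (real systems cluster pixels for the dominant color and work in a proper color space; both helpers are invented):

```python
def dominant_color(pixels):
    """Average the pixels as a crude stand-in for dominant-color
    extraction (real systems cluster, e.g. with k-means)."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def complementary(rgb):
    """Complement each channel about max+min of the color, which
    rotates the hue by 180 degrees while keeping lightness."""
    k = max(rgb) + min(rgb)
    return tuple(k - c for c in rgb)

artwork = [(200, 40, 40), (180, 60, 20)]   # mostly red album art
base = dominant_color(artwork)
accent = complementary(base)               # a cyan-ish accent color
```

Applying `accent` to the view's controls is what keeps the layout's color scheme matched to the image as it changes.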
  • Patent number: 9377298
    Abstract: A method for surveying an object for and/or using a geodetic surveying device that includes a derivation of an item of surface information at least for one object region, at least one geodetically precise single point determination for the object region, wherein a position of at least one object point is determined geodetically precisely, and an update of the item of surface information based on the determined position of the at least one object point. In some embodiments a scan to derive the item of surface information may be performed using object-point-independent scanning of the object region by progressive alignment changes of the measuring radiation, with a determination of a respective distance and of a respective alignment of the measuring radiation emitted for the distance measurement for scanning points lying within the object region, and having a generation of a point cloud which represents the item of surface information.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: June 28, 2016
    Assignee: LEICA GEOSYSTEMS AG
    Inventors: Hans-Martin Zogg, Norbert Kotzur
  • Patent number: 9349072
    Abstract: The use of local feature descriptors of an image to generate compressed image data and reconstruct the image using image patches that are external to the image based on the compressed image data may increase image compression efficiency. A down-sampled version of the image is initially compressed to produce an encoded visual descriptor. The local feature descriptors of the image and the encoded visual descriptor are then obtained. A set of differential feature descriptors are subsequently determined based on the differences between the local feature descriptors of the input image and the encoded visual descriptor. At least some of the differential feature descriptors are compressed to produce encoded feature descriptors, which are then combined with the encoded visual descriptor to produce image data. The image data may be used to select image patches from an image database to reconstruct the image.
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: May 24, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xiaoyan Sun, Feng Wu
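The differential-descriptor step reduces to a residual computation, sketched below. Purely illustrative (real descriptors would be, e.g., SIFT vectors, and the residuals would then be quantized and entropy coded; names are invented):

```python
import numpy as np

def differential_descriptors(local_desc, decoded_desc):
    """Sketch: residuals between descriptors of the original image and
    descriptors recovered from the compressed, down-sampled version.
    Small residuals compress well; large ones flag detail the
    thumbnail lost."""
    return local_desc - decoded_desc

orig = np.array([[0.9, 0.1], [0.2, 0.8]])     # descriptors from input
decoded = np.array([[0.8, 0.1], [0.2, 0.5]])  # from the encoded thumbnail
residual = differential_descriptors(orig, decoded)
```

Only the residuals need to be transmitted alongside the encoded thumbnail; the decoder re-derives the thumbnail descriptors and adds them back before matching database patches.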
  • Patent number: 9330333
    Abstract: A method and apparatus for automatic image brightness detection. The method includes determining region of interest (ROI) candidates in an image, extracting features from each of the ROI candidates, selecting the optimum ROI based on a weighted score of each ROI candidate, and calculating the brightness value of the selected optimum ROI candidate as brightness feedback. The method and apparatus according to embodiments of the present invention can automatically detect the point of interest for clinicians and provide more accurate feedback to the imaging system, enabling more efficient dose management and constant image quality without wasted dose, thereby further optimizing dose/IQ performance and the efficient utilization of the system.
    Type: Grant
    Filed: March 30, 2011
    Date of Patent: May 3, 2016
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Xiao Xuan, Romain Areste, Vivek Walimbe
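The select-then-measure flow can be sketched as below. The feature names, weights, and scoring rule are invented for illustration; the patent does not specify them here:

```python
def pick_roi(candidates, weights):
    """Score each ROI candidate as a weighted sum of its features and
    return the mean brightness of the best one as feedback."""
    def score(c):
        return sum(weights[k] * v for k, v in c["features"].items())
    best = max(candidates, key=score)
    return sum(best["pixels"]) / len(best["pixels"])

candidates = [
    {"features": {"contrast": 0.2, "centrality": 0.9}, "pixels": [10, 20]},
    {"features": {"contrast": 0.9, "centrality": 0.8}, "pixels": [100, 120]},
]
weights = {"contrast": 0.7, "centrality": 0.3}
feedback = pick_roi(candidates, weights)
```

The returned brightness value is what closes the loop back to the imaging system's dose control.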
  • Patent number: 9323981
    Abstract: Disclosed is a face component extraction apparatus including an eye detection unit which detects a plurality of combinations of eye regions, each combination forming a pair, a first calculation unit which calculates a first evaluation value for each pair of eye regions, a fitting unit which fits a plurality of extraction models for extracting a plurality of face components in the image based on a number of pairs of eye regions whose first evaluation values are equal to or greater than a predetermined value, a second calculation unit which calculates a second evaluation value for each of a number of pairs of eye regions, and a deciding unit which decides a fitting mode of the plurality of extraction models to be fitted by the fitting unit based on calculation results of a number of second evaluation values by the second calculation unit.
    Type: Grant
    Filed: October 10, 2013
    Date of Patent: April 26, 2016
    Assignee: CASIO COMPUTER CO., LTD.
    Inventors: Hirokiyo Kasahara, Keisuke Shimada
  • Patent number: 9311523
    Abstract: A method for supporting object recognition is disclosed. The method includes the steps of: setting calculation blocks, each of which includes one or more pixels in an image, acquiring respective average values of the pixels included in the respective calculation blocks, and matching information on the respective calculation blocks with the respective average values or respective adjusted values derived from the respective average values; referring to information on windows, each of which includes information on one or more reference blocks which are different in at least either positions or sizes and information on corresponding relations between the calculation blocks and the average values or the adjusted values, to thereby assign the respective average values or the respective adjusted values to the respective reference blocks; and acquiring necessary information by using the respective average values or the respective adjusted values assigned to the respective reference blocks.
    Type: Grant
    Filed: July 29, 2015
    Date of Patent: April 12, 2016
    Assignee: StradVision Korea, Inc.
    Inventor: Woonhyun Nam
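Computing many block averages cheaply is classically done with a summed-area (integral) image, one plausible realization of the calculation-block averages above. A minimal sketch (the patent's actual block layout and matching step are not reproduced):

```python
import numpy as np

def block_average(img, top, left, h, w):
    """Mean of an arbitrary block in O(1) lookups using a summed-area
    table (rebuilt per call here for simplicity; real code would
    precompute it once per image)."""
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    total = (ii[top + h, left + w] - ii[top, left + w]
             - ii[top + h, left] + ii[top, left])
    return total / (h * w)

img = np.array([[1, 2], [3, 4]], dtype=float)
avg = block_average(img, 0, 0, 2, 2)   # mean of the whole image
```

With the table precomputed, average values for reference blocks of any position and size come from four lookups each, which is what makes window matching over many block configurations fast.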
  • Patent number: 9300321
    Abstract: Methods and apparatus for lossless LiDAR LAS file compression and decompression are provided that include predictive coding, variable-length coding, and arithmetic coding. The predictive coding uses four different predictors including three predictors for x, y, and z coordinates and a constant predictor for scalar values, associated with each LiDAR data point.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: March 29, 2016
    Assignee: University of Maribor
    Inventors: Borut Zalik, Domen Mongus
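The previous-point predictor for x, y, z can be sketched as a delta coder. An illustrative simplification of the predictive-coding stage only (the variable-length and arithmetic coding stages are omitted; names are invented):

```python
def delta_encode(points):
    """Store each coordinate as a residual from the preceding point,
    which clusters values near zero and makes the entropy coder's
    job easy. Lossless for the integer coordinates LAS files use."""
    prev = (0, 0, 0)
    out = []
    for p in points:
        out.append(tuple(c - q for c, q in zip(p, prev)))
        prev = p
    return out

def delta_decode(residuals):
    """Exact inverse of delta_encode: accumulate the residuals."""
    prev = (0, 0, 0)
    out = []
    for r in residuals:
        prev = tuple(c + q for c, q in zip(r, prev))
        out.append(prev)
    return out

pts = [(100, 200, 50), (101, 201, 50), (103, 203, 51)]
residuals = delta_encode(pts)   # small values: cheap to entropy-code
```

Round-tripping through decode recovers the points exactly, which is the lossless property the abstract claims; scalar attributes would use the constant predictor instead.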
  • Patent number: 9280223
    Abstract: An imaging apparatus includes an imaging part for capturing a subject image, a touch panel for acquiring a touch position input by a user, and a control part for controlling an imaging operation performed by the imaging part. The control part acquires the touch position to cause the imaging part to perform the imaging operation each time the touch position is displaced on the touch panel by a predetermined amount repeatedly during a continuous touch user input.
    Type: Grant
    Filed: January 24, 2013
    Date of Patent: March 8, 2016
    Assignee: Olympus Corporation
    Inventors: Maki Toida, Izumi Sakuma, Kensei Ito
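The capture-per-displacement rule can be sketched as below. Illustrative only; the distance metric and threshold handling are assumptions:

```python
def capture_positions(touch_path, step):
    """Fire one capture each time the accumulated finger travel
    crosses another multiple of `step` (Euclidean drag distance)."""
    captures = []
    travel = 0.0
    fired = 0
    for (x0, y0), (x1, y1) in zip(touch_path, touch_path[1:]):
        travel += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        while travel >= (fired + 1) * step:
            fired += 1
            captures.append((x1, y1))   # shoot at the current position
    return captures

path = [(0, 0), (3, 0), (6, 0), (10, 0)]   # a straight 10-px drag
shots = capture_positions(path, step=5)
```

One continuous touch thus yields a burst of evenly spaced captures, one per `step` of displacement, matching the repeated-imaging behavior described above.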
  • Patent number: 9277357
    Abstract: Methods and systems for map generation for location and navigation with user sharing/social networking may comprise a premises-based crowd-sourced database that receives images and location data from a plurality of users of wireless communication devices, and for each of said plurality of users: receiving a determined position of a wireless communication device (WCD), where the position is determined by capturing images of the surroundings of the WCD. Data associated with objects in the surroundings of the WCD may be extracted from the captured images, positions of the objects may be determined, and the determined positions and the data may then update the premises-based crowd-sourced database. The position of the WCD may be determined utilizing global navigation satellite system (GNSS) signals. The elements may comprise structural and/or textual features in the surroundings of the WCD. The position may be determined utilizing sensors that measure a distance from a known position.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: March 1, 2016
    Assignee: Maxlinear, Inc.
    Inventor: Curtis Ling
  • Patent number: 9275308
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting objects in images. One of the methods includes receiving an input image. A full object mask is generated by providing the input image to a first deep neural network object detector that produces a full object mask for an object of a particular object type depicted in the input image. A partial object mask is generated by providing the input image to a second deep neural network object detector that produces a partial object mask for a portion of the object of the particular object type depicted in the input image. A bounding box is determined for the object in the image using the full object mask and the partial object mask.
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: March 1, 2016
    Assignee: Google Inc.
    Inventors: Christian Szegedy, Dumitru Erhan, Alexander Toshkov Toshev
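Deriving the bounding box from the two masks can be sketched as below. The union rule is one plausible way to combine the full and partial masks, not necessarily the patented one:

```python
import numpy as np

def bbox_from_masks(full_mask, partial_mask):
    """Combine a full-object mask with a part mask (their union here)
    and return the tight (top, left, bottom, right) bounding box."""
    combined = np.logical_or(full_mask, partial_mask)
    ys, xs = np.nonzero(combined)
    return ys.min(), xs.min(), ys.max(), xs.max()

full = np.zeros((5, 5), dtype=bool)
full[1:3, 1:3] = True                 # coarse whole-object blob
part = np.zeros((5, 5), dtype=bool)
part[3, 2] = True                     # e.g. a detected lower part
box = bbox_from_masks(full, part)
```

Using the part mask extends the box past what the coarse full-object mask alone covers, which is why combining the two networks' outputs tightens localization.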
  • Patent number: 9269022
    Abstract: Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: February 23, 2016
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, Yang Bai, Tony F. Rodriguez, Eliot Rogers, Ravi K. Sharma, John D. Lord, Scott Long, Brian T. MacIntosh, Kurt M. Eaton