Local Or Regional Features Patents (Class 382/195)
  • Patent number: 10015518
    Abstract: A system and method for imaging is disclosed wherein light that is convolved by a blade is received by an imaging sensor. The received light may be convolved by a blade moving laterally across the image plane. The received light may be recorded as light data. The light data may be processed by rotations, collapses, normalizations, and the application of one or more derivative filters to generate enhanced result images.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: July 3, 2018
    Inventor: Christopher Joseph Brittain
  • Patent number: 10013636
    Abstract: The present invention relates to an image object category recognition method and device. The recognition method comprises an off-line autonomous learning process of a computer, which mainly comprises the following steps: image feature extraction, cluster analysis, and acquisition of an average image for each object category. In addition, the method of the present invention also comprises an on-line automatic category recognition process. The present invention can significantly reduce the amount of computation, reduce computation errors, and improve recognition accuracy in the recognition process. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: July 3, 2018
    Assignees: Beijing Jingdong Shangke Information Technology Co., Ltd., Beijing Jingdong Century Trading Co., Ltd.
    Inventors: Yongzhou Gan, Zhengping Deng
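The off-line learning steps above (feature extraction, cluster analysis, and a per-category average image) lend themselves to a minimal sketch. Flattened pixels as features, k-means as the clustering step, and nearest-average matching for the on-line stage are assumptions made only for illustration, not details taken from the patent.

```python
# Minimal sketch of the off-line stage: extract features, cluster them,
# and build an average image per discovered category. The flattened-pixel
# "features" and the choice of k-means are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def learn_categories(images, n_categories=5):
    """images: list of equally sized grayscale arrays (H, W)."""
    feats = np.stack([img.ravel().astype(np.float64) for img in images])
    labels = KMeans(n_clusters=n_categories, n_init=10).fit_predict(feats)
    # Average image of each object category.
    averages = {c: np.mean([img for img, l in zip(images, labels) if l == c], axis=0)
                for c in range(n_categories)}
    return labels, averages

def recognize(image, averages):
    """On-line step: assign the category whose average image is nearest."""
    dists = {c: np.linalg.norm(image.astype(np.float64) - avg)
             for c, avg in averages.items()}
    return min(dists, key=dists.get)
```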
  • Patent number: 9990565
    Abstract: Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: June 5, 2018
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, Yang Bai, Tony F. Rodriguez, Eliot Rogers, Ravi K. Sharma, John D. Lord, Scott Long, Brian T. MacIntosh, Kurt M. Eaton
  • Patent number: 9977964
    Abstract: In the image processing device, the image processing method and the recording medium, the instruction acquiring section acquires the instruction input by the first user. The image group selecting section selects, as the second image group, a part of images from the first image group owned by the first user based on the instruction. The image analyzer carries out image analysis on images contained in the first image group. And the image group extracting section extracts, as the third image group, at least a part of images having relevance to images contained in the second image group from the first image group except the second image group based on the result of image analysis on images contained in the first image group.
    Type: Grant
    Filed: July 14, 2016
    Date of Patent: May 22, 2018
    Assignee: FUJIFILM Corporation
    Inventor: Masaki Saito
  • Patent number: 9971411
    Abstract: A method used in an interactive device and for recognizing a behavior of a user operating on the interactive device includes: capturing a plurality of images; forming a plurality of polygon images corresponding to the plurality of captured images according to a skin-tone model; and performing a function by analyzing the plurality of polygon images.
    Type: Grant
    Filed: December 10, 2013
    Date of Patent: May 15, 2018
    Assignee: HTC Corporation
    Inventor: Jing-Jo Bei
  • Patent number: 9967453
    Abstract: An autofocus method includes receiving a left image having a first blurriness value and a right image having a second blurriness value, filtering the left image so that the first blurriness value becomes the same as the second blurriness value, and generating a control signal for controlling the lens module based on a difference between a third blurriness value of a filtered left image and the second blurriness value. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: May 8, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dae Kwan Kim, Se Hwan Yun, Chae Sung Kim, Dong Ki Min
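The autofocus flow above can be sketched as: compute a blur metric per image, filter the left image until its metric matches the right image's, and derive a control signal from the residual difference. The Laplacian-variance metric, the Gaussian filter, and the brute-force sigma search below are illustrative assumptions, not the patented procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sharpness(img):
    # Variance of the Laplacian: lower values mean a blurrier image.
    return np.var(laplace(img.astype(np.float64)))

def match_blur(left, right, sigmas=np.linspace(0.0, 5.0, 51)):
    """Blur the left image until its sharpness matches the right image's,
    then derive a control signal from the remaining difference."""
    target = sharpness(right)
    best_sigma = min(sigmas, key=lambda s: abs(sharpness(gaussian_filter(left, s)) - target))
    filtered_left = gaussian_filter(left, best_sigma)
    control_signal = sharpness(filtered_left) - target   # drives the lens module
    return filtered_left, control_signal
```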
  • Patent number: 9953240
    Abstract: An image processing system includes: a first identification unit that identifies a static area from an input image captured at each of a plurality of time points; an image generation unit that generates a first image by using the static areas of respective input images captured in a first time span from a processing time point and generates a second image by using the static areas of respective images captured in a second time span from the processing time point; and a second identification unit that compares the first image and the second image and identifies an area having a difference. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: April 24, 2018
    Assignee: NEC Corporation
    Inventor: Yukie Ebiyama
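A rough reading of the two-time-span comparison above: mark pixels whose values barely change over a window as static, build one composite per window, and report where the two composites differ. The variance test and the thresholds in this sketch are assumptions.

```python
import numpy as np

def static_composite(frames, var_thresh=10.0):
    """frames: array (T, H, W). Keep pixels whose temporal variance is small."""
    frames = np.asarray(frames, dtype=np.float64)
    static_mask = frames.var(axis=0) < var_thresh
    # Non-static areas are left undefined (NaN) in the composite.
    return np.where(static_mask, frames.mean(axis=0), np.nan)

def changed_static_areas(frames_long, frames_short, diff_thresh=20.0):
    """Compare composites from a long and a short time span ending at the same moment."""
    first, second = static_composite(frames_long), static_composite(frames_short)
    diff = np.nan_to_num(np.abs(first - second), nan=0.0)
    return diff > diff_thresh   # boolean map of areas having a difference
```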
  • Patent number: 9939263
    Abstract: Some embodiments of the invention include a surveying system having a position determination unit such as, for example, a total station or a GNSS module, for determining a target position in a defined coordinate system, and having a mobile target unit for definition and/or position determination of target points in the coordinate system. In some embodiments, the surveying system may be adapted to capture and/or receive image data that is related to a task image. In some embodiments, the surveying system may include a control unit for allowing a user to control surveying tasks of the surveying system in order to acquire surveying task data that is related to the surveying tasks and comprises spatial coordinates of at least one target point, a data storage unit for storing the surveying task data, and an electronic graphical display for displaying a visualization of the surveying task data.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: April 10, 2018
    Assignee: LEICA GEOSYSTEMS AG
    Inventors: Alastair Green, Andreas Daubner, Paul Dainty
  • Patent number: 9930102
    Abstract: Emotional state data is used to tailor the user experience of an interactive software system, by monitoring and obtaining data about a user's emotional state. The resulting emotional state data is analyzed and used to dynamically modify the user's experience by selecting user experience components based on the analysis of the user's emotional state data. In this way, different types of user experience components can be utilized to provide the user with an individualized user experience that is adapted to the user's emotional state. Different types of user experience components can be utilized to adjust the user experience to adapt to the user's new emotional state, prevent the user from entering an undesirable emotional state, and/or encourage the user to enter into a desirable emotional state.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: March 27, 2018
    Assignee: Intuit Inc.
    Inventors: Wolfgang Paulus, Luis Felipe Cabrera, Mike Graves
  • Patent number: 9919706
    Abstract: Disclosed herein are methods for automatically adjusting a speed of a vehicle capable of allowing the vehicle to arrive at a destination within a time desired by a driver in consideration of a desired arrival time to the destination, a distance to the destination, a current traffic volume, or the like. A method for adjusting a speed of a vehicle in a system for adjusting a speed of a vehicle may include a driver of the vehicle setting a destination, and the system for adjusting a speed of a vehicle searching a path to the destination and calculating a distance to the destination and a current traffic volume to calculate an arrival time to the destination. The method may further include the driver setting a desired arrival time to the destination, and the system for adjusting a speed of a vehicle deciding whether or not the vehicle arrives at the destination within the desired arrival time.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: March 20, 2018
    Assignee: Hyundai Motor Company
    Inventor: Ji Hyun Yoon
  • Patent number: 9922427
    Abstract: A time-of-flight (TOF) camera system includes a radiation source, a radiation detector, a location sensor system and a processor. The radiation source is configured to generate and emit a radiation that strikes a target object. The radiation detector is configured to detect the radiation reflected from the target object and generate a sample set comprising at least two raw samples detected in succession at different times based on the reflected radiation. The location sensor system is configured to detect movements of the TOF camera during the detection and generate a movement signal having portions thereof uniquely corresponding to each of the raw samples of the sample set based on the movements of the TOF camera, wherein a portion of the movement signal is detected at a same time of generating the corresponding raw sample.
    Type: Grant
    Filed: June 6, 2014
    Date of Patent: March 20, 2018
    Assignee: Infineon Technologies AG
    Inventors: Markus Dielacher, Josef Prainsack, Martin Flatscher, Michael Mark
  • Patent number: 9886646
    Abstract: An image processing apparatus includes a unifying unit, a memory, a storing unit, a setting unit, a selecting unit, an extracting unit, and a determining unit. The unifying unit unifies images of identification target regions cut out from a learning image. The memory stores a learning model. The storing unit stores identification target images converted into images of different image sizes. The setting unit sets a position and a size of a candidate region which is likely to include an identification target object of an identification target image. The selecting unit selects an identification target image of an image size with which the size of the cut-out candidate region is closest to the fixed size. The extracting unit extracts the information. The determining unit determines a target object included in the image of the candidate region.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: February 6, 2018
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Motofumi Fukui, Ryota Ozaki, Noriji Kato
  • Patent number: 9858500
    Abstract: The subject technology provides embodiments for performing fast corner detection in a given image for augmented reality applications. Embodiments disclose a high-speed test that examines intensities of pairs of pixels around a candidate center pixel. In one example, the examined pairs consist of pixels that lie at diametrically opposite ends of a circle formed around the candidate center pixel. Further, a pyramid of images including four rings of surrounding pixels is generated. An orientation of the pixels from the four rings is determined and a vector of discrete values of the pixels is provided. Next, a forest of trees is generated for the vector of discrete values corresponding to a descriptor for a first image. For a second image including a set of descriptors, approximate nearest neighbors are determined from the forest of trees, representing the closest matching descriptors from the first image. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: January 2, 2018
    Assignee: A9.com Inc.
    Inventors: William Brendel, Nityananda Jayadevaprakash, David Creighton Mott, Jie Feng
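The high-speed test summarized above examines pairs of pixels that sit at diametrically opposite ends of a circle around a candidate center pixel. The sketch below uses a 16-point, radius-3 circle (so eight opposite pairs), a fixed intensity threshold, and a simple pass rule; all of these particulars are assumptions made for illustration.

```python
import numpy as np

# 16 offsets on a radius-3 circle around the candidate pixel; point i and
# point i + 8 are diametrically opposite ends of the circle.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def passes_pair_test(img, y, x, t=20, min_pairs=6):
    """High-speed test: a candidate survives if, for enough opposite pairs,
    both ends differ from the center intensity by more than t."""
    c = int(img[y, x])
    strong_pairs = 0
    for i in range(8):
        dy1, dx1 = CIRCLE[i]
        dy2, dx2 = CIRCLE[i + 8]
        p1 = int(img[y + dy1, x + dx1])
        p2 = int(img[y + dy2, x + dx2])
        if abs(p1 - c) > t and abs(p2 - c) > t:
            strong_pairs += 1
    return strong_pairs >= min_pairs

def detect_corners(img, t=20):
    h, w = img.shape
    return [(y, x) for y in range(3, h - 3) for x in range(3, w - 3)
            if passes_pair_test(img, y, x, t)]
```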
  • Patent number: 9842282
    Abstract: An approach is provided for classifying objects that are present at a geo-location and providing an uncluttered presentation of images of some of the objects in an application such as a map application. The approach includes determining one or more regions of interest associated with at least one geo-location, wherein the one or more regions of interest are at least one textured three-dimensional representation of one or more objects that may be present at the at least one geo-location. The approach also includes processing and/or facilitating a processing of the at least one textured three-dimensional representation to determine at least one two-dimensional footprint and three-dimensional geometry information for the one or more objects.
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: December 12, 2017
    Assignee: HERE Global B.V.
    Inventors: Xiaoqing Liu, Jeffrey Adachi, Antonio Haro, Jane MacFarlane
  • Patent number: 9836835
    Abstract: A technique is disclosed for helping prevent image quality of a three-dimensional image from becoming poor due to fluctuations in the rotation speed of an imaging core. For this purpose, if data is obtained from the imaging core by moving and rotating the imaging core, a cross-sectional image is generated at each movement position. Then, a direction where a guidewire is present in each of the cross-sectional images is detected. An angular difference between the direction of the detected guidewire and a preset direction is obtained so as to rotate each of the cross-sectional images in accordance with the angular difference. Then, the cross-sectional images which are previously rotated in this way are connected to one another, thereby generating the three-dimensional image. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: December 5, 2017
    Assignee: TERUMO KABUSHIKI KAISHA
    Inventors: Junya Furuichi, Kouichi Inoue
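The angular correction above can be sketched as: estimate the guidewire direction in each cross-sectional frame, compute its offset from a preset direction, rotate the frame by that offset, and stack the aligned frames into the volume. Taking the brightest pixel as the guidewire and using scipy's rotate are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def guidewire_angle(frame):
    """Very rough guidewire direction: angle (degrees) of the brightest pixel
    relative to the frame center."""
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    cy, cx = (frame.shape[0] - 1) / 2.0, (frame.shape[1] - 1) / 2.0
    return np.degrees(np.arctan2(y - cy, x - cx))

def align_frames(frames, preset_angle=0.0):
    """Rotate each cross-sectional image so the guidewire sits at preset_angle,
    then stack the aligned frames into a 3-D volume."""
    aligned = [rotate(f, preset_angle - guidewire_angle(f), reshape=False, order=1)
               for f in frames]
    return np.stack(aligned)
```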
  • Patent number: 9805279
    Abstract: A method for determining user liveness is provided that includes calculating, by a device, eye openness measures for a frame included in captured authentication data, and storing the eye openness measures in a buffer of the device. Moreover, the method includes calculating confidence scores from the eye openness measures stored in the buffer, and detecting an eye blink when a maximum confidence score is greater than a threshold score. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 25, 2016
    Date of Patent: October 31, 2017
    Assignee: DAON HOLDINGS LIMITED
    Inventor: Mircea Ionita
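A minimal reading of the blink logic above: keep a rolling buffer of per-frame eye-openness measures, score each buffer position for an open–closed–open dip, and report a blink when the maximum score exceeds a threshold. The buffer length, score formula, and threshold below are assumptions.

```python
from collections import deque
import numpy as np

class BlinkDetector:
    def __init__(self, buffer_len=9, score_thresh=0.5):
        self.buffer = deque(maxlen=buffer_len)   # rolling eye-openness measures
        self.score_thresh = score_thresh

    def add_frame(self, eye_openness):
        """eye_openness: scalar in [0, 1] computed for each captured frame."""
        self.buffer.append(float(eye_openness))
        return self._blink_detected()

    def _confidence_scores(self):
        vals = np.array(self.buffer)
        if len(vals) < 3:
            return np.zeros(0)
        # Score a dip at position i: open before, closed at i, open after.
        return np.array([min(vals[i - 1], vals[i + 1]) - vals[i]
                         for i in range(1, len(vals) - 1)])

    def _blink_detected(self):
        scores = self._confidence_scores()
        return scores.size > 0 and scores.max() > self.score_thresh
```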
  • Patent number: 9784587
    Abstract: A method includes applying a correlation rule defining a correlation relationship between a first and second object and determining, using a processor, whether a first motion vector of the first object is correlated at a threshold level of correlation with a second motion vector of the second object, the correlation relationship between the first and second objects identifying the threshold level of correlation between the first and second motion vectors. The method also includes, in response to determining that the first motion vector is not correlated at the threshold level with the second motion vector, determining a convergence point for the first and second objects in accordance with a policy. The method further includes transmitting instructions for arriving at the convergence point to the first and second objects.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: October 10, 2017
    Assignee: CA, Inc.
    Inventors: Steven L. Greenspan, Maria C. Velez-Rojas, Serguei Mankovskii
  • Patent number: 9779497
    Abstract: Measuring the number of glomeruli in the entire, intact kidney using non-destructive techniques is of immense importance in studying several renal and systemic diseases. In particular, a recent Magnetic Resonance Imaging (MRI) technique, based on injection of a contrast agent, cationic ferritin, has been effective in identifying glomerular regions in the kidney. In various embodiments, a low-complexity, high-accuracy method for obtaining the glomerular count from such kidney MRI images is described. This method employs a patch-based approach for identifying a low-dimensional embedding that enables the separation of glomeruli regions from the rest. By using only a few images marked by an expert for learning the model, the method provides an accurate estimate of the glomerular number for any kidney image obtained with the contrast agent. In addition, the implementation runs in near real time, processing about 5 images per second.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: October 3, 2017
    Assignee: ARIZONA BOARD OF REGENTS, A BODY CORPORATE OF THE STATE OF ARIZONA, ACTING FOR AND ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jayaraman Jayaraman Thiagarajan, Karthikeyan Ramamurthy, Andreas Spanias, David Frakes
  • Patent number: 9766717
    Abstract: There is provided an optical pointing system including at least one reference beacon, an image sensor, a storage unit and a processing unit. The image sensor is configured to capture an image frame containing a beacon image associated with the at least one reference beacon. The storage unit is configured to save image data of at least one object image in the image frame. The processing unit is configured to sequentially process every pixel of the image frame to identify the object image and to remove or merge, in real time, the image data saved in the storage unit that is associated with two object images within a pixel range of the image frame, thereby reducing the memory space used.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: September 19, 2017
    Assignee: PixArt Imaging Inc.
    Inventor: Chia-Cheun Liang
  • Patent number: 9754176
    Abstract: The present invention is directed to a method of extracting data from fields in an image of a document. In one implementation, a text representation of the image of the document is obtained. A graph for storing features of the text fragments in the text representation of the image of the document and their links is constructed. A cascade classification for computing the features of the text fragments in the text representation of the image of the document and their links is run. Hypotheses about which fields in the image of the document the text fragments belong to are generated. Combinations of the hypotheses are generated. A combination of the hypotheses is selected. Finally, data from the fields in the image of the document is extracted based on the selected combination of the hypotheses.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: September 5, 2017
    Assignee: ABBYY PRODUCTION LLC
    Inventor: Mikhail Kostyukov
  • Patent number: 9741171
    Abstract: An image processing device includes memory; and a processor configured to execute a plurality of instructions stored in the memory, the instructions comprising: recognizing a target object from a first image, which is a captured image including the target object in the real world; controlling a second image, which is an augmented image including information of the target object from the first image, and a third image, which is an augmented image of the second image and is formed so as to inscribe an outer area surrounding the second image and cover a center of the visual field of a user relative to the second image; and displaying, in a state where the user directly visually recognizes the target object in the real world, the second image and the third image such that the second image and the third image are caused to correspond to a position.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: August 22, 2017
    Assignee: FUJITSU LIMITED
    Inventor: Nobuyuki Hara
  • Patent number: 9734411
    Abstract: A method assists in locating objects using their images. One or more processors receive a set of one or more machine readable reference images of an object, and then distribute the set of one or more machine readable reference images to a plurality of computing devices, where each computing device from the plurality of computing devices is capable of capturing an image. Each computing device from the plurality of computing devices captures a set of one or more images. For each set of one or more images in each computing device from the plurality of computing devices, machine logic within each computing device determines whether each set of one or more images includes an image portion corresponding to the object.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: August 15, 2017
    Assignee: International Business Machines Corporation
    Inventors: Simon A. S. Briggs, James K. Hook, Hamish C. Hunt, Nicholas K. Lincoln
  • Patent number: 9727978
    Abstract: A method is provided for extracting outer space feature information from spatial geometric data. The method comprises: an input step S10 of inputting spatial geometric data for a target region; a sampling step S20 of determining a sample by selecting an arbitrary area for the spatial geometric data input in the input step using a preset selection method; a feature extraction step S30 of acquiring feature information for a corresponding sampling plane using a convex hull method based on sampling information including sampling plane information of the spatial geometric data for a sampling plane selected in the sampling step. The sampling step and the feature extraction step are repeatedly performed in a preset manner.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: August 8, 2017
    Assignee: Korea University Research and Business Foundation
    Inventors: Chang Hyun Jun, Nakju Lett Doh
  • Patent number: 9706131
    Abstract: First, images are captured with three exposure levels, and pixel levels of a low exposure image and an intermediate exposure image are amplified to be matched to those of a high exposure image. Next, a brightness combining ratio for each image is calculated based on the low exposure image that has been matched in brightness. Then, images having brightness combining ratios that are not 0% in a region of interest are selected, and only the selected images are used to generate a combined image in the region of interest, and the low exposure image is used as a substitute, for example, in a region other than the region of interest. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 27, 2015
    Date of Patent: July 11, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventor: Satoru Kobayashi
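The combining flow above can be roughly illustrated: scale the low and intermediate exposures up to the level of the high exposure, derive per-image combining ratios from the brightness-matched low exposure, blend inside the region of interest, and substitute the low exposure elsewhere. The gain handling and the triangular weighting below are assumptions, not the patented ratios.

```python
import numpy as np

def fuse_exposures(low, mid, high, gain_low, gain_mid, roi_mask):
    """low/mid/high: float arrays (H, W); gain_low/gain_mid: exposure ratios
    used to match the low and intermediate pixel levels to the high exposure;
    roi_mask: boolean (H, W) region of interest."""
    low_m, mid_m = low * gain_low, mid * gain_mid           # brightness matching
    images = np.stack([low_m, mid_m, high])

    # Combining ratios derived from the brightness-matched low exposure
    # (illustrative triangular weighting over normalized brightness).
    ref = np.clip(low_m / (low_m.max() + 1e-12), 0.0, 1.0)
    weights = np.stack([ref,                                 # low exposure for bright areas
                        1.0 - np.abs(ref - 0.5) * 2.0,       # intermediate for mid-tones
                        1.0 - ref])                          # high exposure for dark areas
    weights = np.clip(weights, 0.0, None)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12

    fused = (weights * images).sum(axis=0)
    # Outside the region of interest, the low exposure image is used as a substitute.
    return np.where(roi_mask, fused, low_m)
```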
  • Patent number: 9684817
    Abstract: Disclosed is a method for automatically optimizing point cloud data quality, including the following steps of: acquiring initial point cloud data for a target to be reconstructed, to obtain an initial discrete point cloud; performing preliminary data cleaning on the obtained initial discrete point cloud to obtain a Locally Optimal Projection operator (LOP) sampling model; obtaining a Poisson reconstruction point cloud model by using a Poisson surface reconstruction method on the obtained initial discrete point cloud; performing iterative closest point algorithm registration on the obtained Poisson reconstruction point cloud model and the obtained initial discrete point cloud; and for each point on a currently registered model, calculating a weight of a surrounding point within a certain radius distance region of a position corresponding to the point for the point on the obtained LOP sampling model, and comparing the weight with a threshold, to determine whether a region where the point is located requires rep
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: June 20, 2017
    Assignee: Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences
    Inventor: Hui Huang
  • Patent number: 9684386
    Abstract: There is provided an optical pointing system including at least one reference beacon, an image sensor, a storage unit and a processing unit. The image sensor is configured to capture an image frame containing a beacon image associated with the at least one reference beacon. The storage unit is configured to save image data of at least one object image in the image frame. The processing unit is configured to sequentially process every pixel of the image frame to identify the object image and to remove or merge, in real time, the image data saved in the storage unit that is associated with two object images within a pixel range of the image frame, thereby reducing the memory space used.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: June 20, 2017
    Assignee: PIXART IMAGING INC.
    Inventor: Chia-Cheun Liang
  • Patent number: 9679221
    Abstract: An input image showing the same object as an object shown in a reference image is identified more accurately. A difference area in the input image is determined by converting a difference area in the reference image, on the basis of geometric transformation information calculated by an analysis using a local descriptor. By matching a descriptor extracted from the difference area in the input image with the difference area in the reference image, fine differences that cannot be identified by conventional matching using only a local descriptor can be distinguished and images showing the same object can be exclusively identified.
    Type: Grant
    Filed: May 21, 2013
    Date of Patent: June 13, 2017
    Assignee: NEC Corporation
    Inventor: Ryota Mase
  • Patent number: 9659349
    Abstract: A system identifies a scaling position in a captured image, and identifies red subpixels adjacent to the scaling position. The system computes a scaled red subpixel for the scaling position based on the identified red subpixels according to constraints. The system further computes a scaled blue subpixel based on identified adjacent blue subpixels, according to constraints, and computes a scaled green subpixel position based on Gr and Gb subpixels adjacent to the scaling position according to certain constraints. The system then generates a scaled image representative of the captured image, the scaled image including at least the scaled red subpixel value, the scaled blue subpixel value, and the scaled green subpixel value.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: May 23, 2017
    Assignee: GoPro, Inc.
    Inventors: Bruno Cesar Douady-Pleven, Michael Serge André Kraak, Guillaume Matthieu Guerin, Thomas Nicolas Emmanuel Veit
  • Patent number: 9652688
    Abstract: Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: May 16, 2017
    Assignee: Captricity, Inc.
    Inventors: Huguens Jean, Yoriyasu Yano, Hui Peng Hu, Kuang Chen
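One ingredient of the query above, the visual word count vector, is essentially a bag-of-visual-words histogram: each local descriptor is quantized to its nearest vocabulary centroid and the counts form the vector. The precomputed vocabulary and Euclidean quantization in this sketch are assumed, not taken from the patent.

```python
import numpy as np

def visual_word_count_vector(descriptors, vocabulary):
    """descriptors: (N, D) local features from one object image;
    vocabulary: (K, D) cluster centers learned offline.
    Returns a length-K count vector usable in an image query."""
    if len(descriptors) == 0:
        return np.zeros(len(vocabulary), dtype=np.int64)
    # Squared Euclidean distances between every descriptor and every visual word.
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    nearest_word = d2.argmin(axis=1)
    return np.bincount(nearest_word, minlength=len(vocabulary))
```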
  • Patent number: 9641755
    Abstract: One or more systems, devices, and/or methods for emphasizing objects in an image, such as a panoramic image, are disclosed. For example, a method includes receiving a depthmap generated from an optical distancing system, wherein the depthmap includes position data and depth data for each of a plurality of points. The optical distancing system measures physical data. The depthmap is overlaid on the panoramic image according to the position data. Data is received that indicates a location on the panoramic image and, accordingly, a first point of the plurality of points that is associated with the location. The depth data of the first point is compared to depth data of surrounding points to identify an area on the panoramic image corresponding to a subset of the surrounding points. The panoramic image is altered with a graphical effect that indicates the location.
    Type: Grant
    Filed: September 12, 2013
    Date of Patent: May 2, 2017
    Assignee: HERE Global B.V.
    Inventor: James D. Lynch
  • Patent number: 9630318
    Abstract: A robotic device may be operated by a learning controller comprising a feature learning process configured to determine a control signal based on sensory input. An input may be analyzed in order to determine the occurrence of one or more features. Features in the input may be associated with the control signal during online supervised training. During training, the learning process may be adapted based on the training input and the predicted output. A combination of the predicted and the target output may be provided to a robotic device to execute a task. Feature determination may comprise online adaptation of input and sparse encoding transformations. Computations related to learning process adaptation and feature detection may be performed on board by the robotic device in real time, thereby enabling autonomous navigation by trained robots.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: April 25, 2017
    Assignee: Brain Corporation
    Inventors: Borja Ibarz Gabardos, Andrew Smith, Peter O'Connor
  • Patent number: 9633278
    Abstract: Disclosed is an object identification device and the like for reducing identification error in a reference image which presents an object that is only slightly different from an object presented in an input image.
    Type: Grant
    Filed: December 25, 2013
    Date of Patent: April 25, 2017
    Assignee: NEC CORPORATION
    Inventor: Ryota Mase
  • Patent number: 9615805
    Abstract: A method of aligning at least two breast images includes aligning a relevant image part in each of the images, the relevant image parts being obtained on the basis of the result of a shape analysis procedure performed on the breast images.
    Type: Grant
    Filed: February 25, 2013
    Date of Patent: April 11, 2017
    Assignee: AGFA HEALTHCARE NV
    Inventor: Gert Behiels
  • Patent number: 9619734
    Abstract: Land classification based on analysis of image data. Feature extraction techniques may be used to generate a feature stack corresponding to the image data to be classified. A user may identify training data from the image data from which a classification model may be generated using one or more machine learning techniques applied to one or more features of the image. In this regard, the classification model may in turn be used to classify pixels from the image data other than the training data. Additionally, quantifiable metrics regarding the accuracy and/or precision of the models may be provided for model evaluation and/or comparison. Further, the generation of models may be performed in a distributed system such that model creation and/or application may be distributed in a multi-user environment for collaborative and/or iterative approaches.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: April 11, 2017
    Assignee: DigitalGlobe, Inc.
    Inventors: Giovanni B. Marchisio, Carsten Tusk, Krzysztof Koperski, Mark D. Tabb, Jeffrey D. Shafer
  • Patent number: 9613457
    Abstract: Provided are a multi-primitive fitting method, which includes acquiring point cloud data by collecting data for each input point, obtaining a segment for the points using the point cloud data, and performing primitive fitting using data of the points included in the segment and the point cloud data, and a multi-primitive fitting device that performs the method.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: April 4, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young Mi Cha, Chang Woo Chu, Jae Hean Kim
  • Patent number: 9600892
    Abstract: A non-parametric method of, and system for, dimensioning an object of arbitrary shape, captures a three-dimensional (3D) point cloud of data points over a field of view containing the object and a base surface on which the object is positioned, detects a base plane indicative of the base surface from the point cloud, extracts the data points of the object from the point cloud, processes the extracted data points of the object to obtain a convex hull, and fits a bounding box of minimum volume to enclose the convex hull. The bounding box has a pair of mutually orthogonal planar faces, and the fitting is performed by orienting one of the faces to be generally perpendicular to the base plane, and by simultaneously orienting the other of the faces to be generally parallel to the base plane. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 6, 2014
    Date of Patent: March 21, 2017
    Assignee: Symbol Technologies, LLC
    Inventors: Ankur R Patel, Kevin J O'Connell, Cuneyt M Taskiran, Jay J Williams
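A rough sketch of the fitting step above, assuming the object points have already been extracted and expressed in a frame whose z = 0 plane is the detected base plane: compute the 2-D convex hull of the projected points, search hull-edge orientations for the minimum-area rectangle (so one box face stays parallel to the base plane and the others perpendicular to it), and take the box height from the maximum height above the plane. Treating the base plane as z = 0 and the edge-aligned search are simplifications.

```python
import numpy as np
from scipy.spatial import ConvexHull

def fit_bounding_box(object_points):
    """object_points: (N, 3) points of the object, expressed in a frame whose
    z = 0 plane is the detected base plane. Returns (length, width, height)."""
    pts2d = object_points[:, :2]
    hull = pts2d[ConvexHull(pts2d).vertices]          # 2-D convex hull vertices

    best_area, best_dims = np.inf, (0.0, 0.0)
    # The minimum-area rectangle has a side collinear with some hull edge,
    # so it suffices to test one orientation per edge.
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        angle = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-angle), np.sin(-angle)
        rot = hull @ np.array([[c, -s], [s, c]]).T     # rotate hull by -angle
        dims = rot.max(axis=0) - rot.min(axis=0)
        if dims[0] * dims[1] < best_area:
            best_area, best_dims = dims[0] * dims[1], (dims[0], dims[1])

    height = object_points[:, 2].max()                # extent above the base plane
    return best_dims[0], best_dims[1], height
```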
  • Patent number: 9589203
    Abstract: A processor implemented system and method for identification of an activity performed by a subject based on sensor data analysis is described herein. In an implementation, the method includes capturing movements of the subject in real-time using a sensing device. At least one action associated with the subject is ascertained from a predefined set of actions. From the predefined set of actions, a plurality of actions can collectively form at least one activity. The ascertaining is based on captured movements of the subject and at least one predefined action rule. The at least one action rule is based on context-free grammar (CFG) and is indicative of a sequence of actions for occurrence of the at least one activity. Further, a current activity performed by the subject is dynamically determined, based on the at least one action and an immediately preceding activity, using a non-deterministic push-down automata (NPDA) state machine.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: March 7, 2017
    Assignee: TATA Consultancy Services Limited
    Inventors: Dipti Prasad Mukherjee, Tamal Batabyal, Tanushyam Chattopadhyay
  • Patent number: 9575004
    Abstract: Systems and methods for inspecting a surface are disclosed. A source, detector, a base, a controller, and a processing device are used to collect image data related to the surface and information relating to the location of the image data on the surface. The image data and information relating to location are correlated and stored in a processing device to create a map of surface condition.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: February 21, 2017
    Assignee: THE BOEING COMPANY
    Inventors: Gary E. Georgeson, Scott W. Lea, James J. Troy
  • Patent number: 9547866
    Abstract: Methods and apparatus to estimate demography based on aerial images are disclosed. An example method includes analyzing a first aerial image of a first geographic area to detect a first plurality of objects, analyzing a second aerial image of a second geographic area to detect a second plurality of objects, associating first demographic information to the second plurality of objects, the first demographic information obtained by a sampling of the second geographic area, and comparing the second plurality of objects to the first plurality of objects to estimate a demographic characteristic of the first geographic area based on the comparison. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: January 17, 2017
    Assignee: THE NIELSEN COMPANY (US), LLC
    Inventors: Alejandro Terrazas, Michael Himmelfarb, David Miller, Paul Donato
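The comparison step above reduces to a proportionality argument: derive a per-object demographic rate from the sampled reference area and apply it to the object count detected in the area of interest. The sketch shows only that arithmetic; the detector, the counts, and the per-rooftop interpretation are assumed examples.

```python
def estimate_demographic(count_area1, count_area2, sampled_value_area2):
    """count_area1: objects (e.g., rooftops) detected in the area of interest;
    count_area2: objects detected in the reference area;
    sampled_value_area2: demographic value (e.g., population) measured by
    sampling the reference area."""
    rate_per_object = sampled_value_area2 / count_area2
    return rate_per_object * count_area1

# Example: 1,250 rooftops in the reference area with a sampled population of
# 4,000 gives 3.2 people per rooftop; 900 rooftops in the target area then
# suggest roughly 2,880 people.
print(estimate_demographic(900, 1250, 4000))   # 2880.0
```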
  • Patent number: 9544565
    Abstract: Data for making calculations from a three dimensional observation is derived from a recording device. The data is combined with information developed by a three dimensional remote sensing platform to create measurement points in space for an object. Descriptive information from at least one object model is used to direct at least one of a resolution resource, results gained from group measurements and an object-deployed resolution asset. Order is thereafter found in two dimensional to three dimensional observations in a subject area.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: January 10, 2017
    Assignee: Vy Corporation
    Inventor: Thomas Martel
  • Patent number: 9524432
    Abstract: The subject technology provides embodiments for performing fast corner detection in a given image for augmented reality applications. Embodiments disclose a high-speed test that examines intensities of pairs of pixels around a candidate center pixel. In one example, the examined pairs consist of pixels that lie at diametrically opposite ends of a circle formed around the candidate center pixel. Further, a pyramid of images including four rings of surrounding pixels is generated. An orientation of the pixels from the four rings is determined and a vector of discrete values of the pixels is provided. Next, a forest of trees is generated for the vector of discrete values corresponding to a descriptor for a first image. For a second image including a set of descriptors, approximate nearest neighbors are determined from the forest of trees, representing the closest matching descriptors from the first image.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: December 20, 2016
    Assignee: A9.com, Inc.
    Inventors: William Brendel, Nityananda Jayadevaprakash, David Creighton Mott, Jie Feng
  • Patent number: 9523772
    Abstract: In scenarios involving the capturing of an environment, it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. However, techniques involving the evaluation of visual images to identify and remove objects may be imprecise, e.g., failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. However, such capturing scenarios often involve capturing a lidar point cloud, which may identify the presence and shapes of objects with higher precision. The lidar data may also enable a movement classification of respective objects differentiating moving and stationary objects, which may facilitate an accurate removal of the objects from the rendering of the environment (e.g., identifying the object in a first image may guide the identification of the object in sequentially adjacent images).
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: December 20, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aaron Matthew Rogan, Benjamin James Kadlec
  • Patent number: 9477885
    Abstract: An image processing apparatus according to one embodiment includes a first extraction unit, a second extraction unit, and a specifying unit. The first extraction unit performs stroke width transform on an image and thereby extracts an SWT region from the image. The second extraction unit performs clustering based on pixel values on the image and thereby extracts a single-color region from the image. The specifying unit specifies a pixel group included in a candidate text region based at least on the single-color region when a ratio of the number of pixels in an overlap part between the SWT region and the single-color region to the number of pixels in the single-color region is equal to or greater than a first reference value. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: October 25, 2016
    Assignee: Rakuten, Inc.
    Inventor: Naoki Chiba
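The selection rule above boils down to an overlap ratio between two binary masks: a single-color region is kept as a text candidate when the pixels it shares with the SWT region make up at least a reference fraction of the single-color region. A minimal mask-based sketch, with an assumed reference value:

```python
import numpy as np

def is_text_candidate(swt_mask, color_mask, first_reference_value=0.6):
    """swt_mask, color_mask: boolean arrays of the same shape.
    Returns True when |SWT ∩ single-color| / |single-color| meets the reference."""
    color_pixels = color_mask.sum()
    if color_pixels == 0:
        return False
    overlap = np.logical_and(swt_mask, color_mask).sum()
    return overlap / color_pixels >= first_reference_value
```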
  • Patent number: 9478063
    Abstract: Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: October 25, 2016
    Assignee: Digimarc Corporation
    Inventors: Geoffrey B. Rhoads, Yang Bai
  • Patent number: 9471832
    Abstract: Automated analysis of video data for determination of human behavior includes segmenting a video stream into a plurality of discrete individual frame image primitives which are combined into a visual event that may encompass an activity of concern as a function of a hypothesis. The visual event is optimized by setting a binary variable to true or false as a function of one or more constraints. The visual event is processed in view of associated non-video transaction data and the binary variable by associating the visual event with a logged transaction if associable, issuing an alert if the binary variable is true and the visual event is not associable with the logged transaction, and dropping the visual event if the binary variable is false and the visual event is not associable.
    Type: Grant
    Filed: May 13, 2014
    Date of Patent: October 18, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lei Ding, Quanfu Fan, Sharathchandra U. Pankanti
  • Patent number: 9465992
    Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training a training image set, where one local detector in the multiple local detectors corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature, which is based on a local area of the target, of the to-be-recognized scene; and recognizing the to-be-recognized scene according to the feature, which is based on the local area of the target, of the to-be-recognized scene.
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue
  • Patent number: 9460357
    Abstract: Embodiments disclosed facilitate robust, accurate, and reliable recovery of words and/or characters in the presence of non-uniform lighting and/or shadows. In some embodiments, a method to recover text from an image may comprise: expanding a Maximally Stable Extremal Region (MSER) in the image to obtain a neighborhood, the neighborhood comprising a plurality of sub-blocks; thresholding a subset of the plurality of sub-blocks in the neighborhood, the subset comprising sub-blocks with text, wherein each sub-block in the subset is thresholded using a corresponding threshold associated with the sub-block; and obtaining a thresholded neighborhood. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: October 4, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Hemanth P. Acharya, Pawan Kumar Baheti, Kishor K. Barman
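The per-sub-block thresholding above can be sketched by tiling the expanded MSER neighborhood into sub-blocks, deciding per block whether it plausibly contains text (here, by local contrast), and binarizing each such block with its own threshold. The block size, contrast test, and mean-based threshold are assumptions made for illustration.

```python
import numpy as np

def threshold_neighborhood(neighborhood, block=16, contrast_min=25):
    """neighborhood: grayscale array around an expanded MSER.
    Each text-bearing sub-block gets its own threshold, which keeps
    recovery robust under non-uniform lighting and shadows."""
    out = np.zeros_like(neighborhood, dtype=bool)
    h, w = neighborhood.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sub = neighborhood[y:y + block, x:x + block]
            if int(sub.max()) - int(sub.min()) < contrast_min:
                continue                      # flat block: assume no text
            local_thresh = sub.mean()         # per-sub-block threshold
            out[y:y + block, x:x + block] = sub < local_thresh   # dark text on light background
    return out
```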
  • Patent number: 9448716
    Abstract: A process for the management of a graphical user interface (10) includes application software graphical components (26, 40, 40′), such as windows that display computer applications, displaying the data of an associated application software function. The process includes the stages of: tracing (E102), on the interface, a graphical shape (31, 31′) in such a way as to create a graphical component (30, 30′), combining (E128) an application software function with the graphical component that was created for assigning the graphical component to the display of the application software function, and determining (E110) a direction (S) of the graphical component that is created in such a way as to display data of the associated application software function according to the determined direction.
    Type: Grant
    Filed: October 28, 2010
    Date of Patent: September 20, 2016
    Assignee: Orange
    Inventors: François Coldefy, Mohammed Belatar
  • Patent number: 9444990
    Abstract: The present disclosure provides a system and method of setting the focus of a digital image based on social relationship. In accordance with embodiments of the present disclosure, a scene is imaged with an electronic device and a face present in the imaged scene is detected. An identity of an individual having the detected face is recognized by determining that the detected face is the face of an individual having a social relationship with the user of the electronic device. The focus of the image is set to focus on the face of the recognized individual.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: September 13, 2016
    Assignee: Sony Mobile Communications Inc.
    Inventors: Mathias Jensen, Vishal Kondabathini, Sten Wendel, Stellan Nordström
  • Patent number: 9438891
    Abstract: Aspects of the present invention comprise holocam systems and methods that enable the capture and streaming of scenes. In embodiments, multiple image capture devices, which may be referred to as “orbs,” are used to capture images of a scene from different vantage points or frames of reference. In embodiments, each orb captures three-dimensional (3D) information, which is preferably in the form of a depth map and visible images (such as stereo image pairs and regular images). Aspects of the present invention also include mechanisms by which data captured by two or more orbs may be combined to create one composite 3D model of the scene. A viewer may then, in embodiments, use the 3D model to generate a view from a different frame of reference than was originally created by any single orb.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: September 6, 2016
    Assignee: Seiko Epson Corporation
    Inventors: Michael Mannion, Sujay Sukumaran, Ivo Moravec, Syed Alimul Huda, Bogdan Matei, Arash Abadpour, Irina Kezele