Abstract: The present invention discloses a method and apparatus for storage control in depth perception computation. The method comprises: sequentially reading each part of the image data used to splice a binarized spliced image according to a preset write-mapping rule, the image data originating from each frame of a group of binarized structured-light encoded image sequences; writing, by a read/write controller, each spliced part into a memory for storage, so as to generate a complete frame of the binarized spliced image; then, by changing an address mapping, fixing the generated binarized spliced image at a certain position within the memory; and, in use, reading out one or more frames of binarized spliced images in sequence as reference encoded images for depth perception computation.
Abstract: A method of determining eyeglass fitting measurements from an image for a vision treatment analyzes a source image of an individual wearing a known pair of eyeglass frames. Dimension specifications are acquired from the manufacturer of the frames, and corresponding dimensions are measured from the image using image recognition software. Various scaling and trigonometric calculations are performed to derive patient-specific measurements for fitted eyeglass frames.
Abstract: An image processing apparatus includes: a holding unit that holds a plurality of images; a condition checking unit that checks imaging conditions of the plurality of images; a collation determining unit that determines whether to collate images among the plurality of images based on the imaging conditions of the images; a collation unit that collates the images determined to be collated by the collation determining unit to obtain a degree of similarity; and a classifying unit that classifies the collated images into the same category when the degree of similarity is equal to or greater than a predetermined threshold.
Abstract: In one embodiment, a method for detecting faces in video image frames includes comparing a current image frame to a previously processed image frame to determine similarity; discarding the current image frame if the current image frame and the previously processed image frame are similar; detecting at least one detected facial image in the current image frame; comparing the at least one detected facial image to at least one most recently stored facial image stored in a most recently used (MRU) cache to determine similarity; discarding the at least one detected facial image if the at least one detected facial image and the at least one most recently stored facial image are similar; and storing the at least one detected facial image in the MRU cache if the at least one detected facial image and the at least one most recently stored facial image are not similar.
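The frame- and face-level deduplication described above can be sketched as follows; the pixel-wise similarity measure and the cache capacity are illustrative assumptions, not taken from the patent (a real detector would use a perceptual or feature-based comparison):

```python
def frames_similar(a, b, threshold=0.9, tolerance=8):
    # Hypothetical similarity test: fraction of near-equal pixels between
    # two equal-sized grayscale frames given as flat lists of intensities.
    matches = sum(1 for x, y in zip(a, b) if abs(x - y) <= tolerance)
    return matches / len(a) >= threshold

class MRUFaceCache:
    """Most-recently-used cache that stores only novel face crops."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.faces = []  # most recently stored first

    def offer(self, face):
        # Discard the candidate if it resembles any cached face;
        # otherwise store it at the front and trim to capacity.
        if any(frames_similar(face, f) for f in self.faces):
            return False
        self.faces.insert(0, face)
        del self.faces[self.capacity:]
        return True
```

The same `frames_similar` check would also serve the first step of the claim, discarding a whole frame that matches the previously processed one before any face detection runs.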
Abstract: A search device according to an embodiment maps a feature vector onto a hyper-sphere on the basis of parameters that include an intersection and a distance: the intersection is the point at which an m-dimensional feature space meets a straight line passing through a hyper-sphere lying in a space of dimension greater than m, and the distance is measured from the north pole of the hyper-sphere to the feature space. The search device then searches for the parameters that allow the positions of the feature vectors mapped onto the hyper-sphere to be concentrated on a predetermined hemisphere of the hyper-sphere.
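The mapping described resembles an inverse stereographic projection; a minimal sketch for the unit hyper-sphere in m + 1 dimensions, projecting from the north pole (the patent's specific intersection and distance parameters are abstracted away here), is:

```python
def to_hypersphere(v):
    # Inverse stereographic projection of an m-dimensional feature vector
    # onto the unit hyper-sphere in (m + 1) dimensions, projecting from
    # the north pole (0, ..., 0, 1); the origin maps to the south pole.
    s = sum(x * x for x in v)
    denom = s + 1.0
    return [2.0 * x / denom for x in v] + [(s - 1.0) / denom]
```

Searching over an intersection point and a pole-to-plane distance, as the abstract describes, would correspond to translating and scaling `v` before projecting, which shifts where the mapped points cluster on the sphere.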
Abstract: Provided are a search system and a method of operating the same. The search system includes: a preliminary data analysis part which extracts a variety of attributes by analyzing input images, analyzes trends for a category, i.e., the information requested by a client, using the image analysis result, and stores the trend analysis result as metadata; an index part which stores the image analysis result and structures, organizes, and stores the metadata so that it can be searched easily; and a search part which extracts, from the index part, trend information matching a category input by a client and provides the trend information in a predetermined format.
Type:
Grant
Filed:
July 24, 2014
Date of Patent:
January 2, 2018
Assignee:
Hanwha Techwin Co., Ltd.
Inventors:
Dong Jun Park, Yeon Geol Ryu, Hak Chul Shin, Dong Whan Jung
Abstract: Provided are techniques for image search for a location. Street view data is extracted to identify a path for a region. Points of interest for the path are identified. Images for the points of interest for a direction of the path are identified. The images are used to create a sequence of images representing a view of the points of interest along the direction of the path. The sequence of images is displayed adjacent to a map that includes the path.
Type:
Grant
Filed:
May 18, 2015
Date of Patent:
January 2, 2018
Assignee:
International Business Machines Corporation
Abstract: Provided are an apparatus and method for detecting a key point using a high-order Laplacian of Gaussian (LoG) kernel. The high-order LoG kernel is generated based on a high-order LoG operator, which is calculated by sequentially differentiating the LoG operator with respect to the x and y coordinates of an image. A scale space is generated based on the high-order LoG kernel, and the key point is detected by comparing a current pixel in the scale space to the pixels adjacent to it.
Type:
Grant
Filed:
March 9, 2016
Date of Patent:
December 12, 2017
Assignee:
ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors:
Yong Ju Cho, Unsang Park, Joo Myoung Seok, Sang Woo Ahn, Ji Hun Cha
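The detection pipeline in the abstract above (build LoG responses over several scales, then keep pixels that dominate their neighbourhood) can be sketched as follows; this uses a plain first-order LoG kernel rather than the patent's high-order variant, and the response threshold is an illustrative assumption:

```python
import numpy as np

def log_kernel(sigma, size=9):
    # Laplacian-of-Gaussian kernel sampled on a size x size grid,
    # negated so that bright blobs produce positive peaks.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2.0 * sigma**2))
    return (2.0 * sigma**2 - r2) / sigma**4 * g

def convolve2d(img, k):
    # Naive same-size convolution with zero padding.
    pad = k.shape[0] // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(p[y:y + k.shape[0], x:x + k.shape[1]] * k[::-1, ::-1])
    return out

def detect_keypoints(image, sigmas=(1.0, 1.6, 2.4)):
    # Scale space of LoG responses; keep strong, strict 3x3 local maxima.
    points = []
    for s in sigmas:
        resp = convolve2d(image.astype(float), log_kernel(s))
        thr = resp.mean() + 3.0 * resp.std()
        for y in range(1, resp.shape[0] - 1):
            for x in range(1, resp.shape[1] - 1):
                patch = resp[y - 1:y + 2, x - 1:x + 2]
                if (resp[y, x] >= thr and resp[y, x] == patch.max()
                        and np.count_nonzero(patch == patch.max()) == 1):
                    points.append((y, x, s))
    return points
```

A full scale-space detector would also compare each pixel against its neighbours in the adjacent scales, not only the same-scale 3x3 window; that refinement is omitted for brevity.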
Abstract: Systems and methods prevent or restrict the mining of content on a mobile device. For example, a method may include identifying a mining-restriction mark in low order bits or high order bits in a frame buffer of a mobile device and determining whether the mining-restriction mark prevents mining of content. Mining includes non-transient storage of a copy or derivations of data in the frame buffer. The method may also include preventing the mining of data in the frame buffer when the mining-restriction mark prevents mining.
Type:
Grant
Filed:
December 14, 2016
Date of Patent:
December 5, 2017
Assignee:
Google Inc.
Inventors:
Alfred Zalmon Spector, David Petrou, Blaise Aguera-Arcas, Matthew Sharifi
Abstract: A system for analyzing remotely sensed photos of a forest or other areas of interest uses a computer system to increase the variation in NIR data whose values represent items of interest. In one embodiment, a computer system applies a stretching function to the NIR data to increase their variation. The objective spectrally stretched NIR data are used to differentiate types of vegetation in the remotely sensed image. Objective-based Vegetation Index (OVI) values, calculated from the stretched NIR data, allow different types of vegetation to be distinguished. In one embodiment, the OVI values are used to differentiate hardwoods from conifers in a digital aerial photo of a forest.
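The two steps above, stretching the NIR band and then computing an index from the stretched values, can be sketched as below; the percentile stretch and the NDVI-style ratio are stand-ins, since the patent's exact objective stretching function and OVI formula are not given in the abstract:

```python
import numpy as np

def stretch_nir(nir, low_pct=2, high_pct=98):
    # Percentile-based linear stretch: spreads NIR values over [0, 1],
    # amplifying the variation that separates vegetation classes.
    lo, hi = np.percentile(nir, [low_pct, high_pct])
    return np.clip((nir - lo) / (hi - lo), 0.0, 1.0)

def ovi(stretched_nir, red):
    # Hypothetical NDVI-style index from stretched NIR and red bands;
    # larger values indicate stronger NIR reflectance relative to red.
    return (stretched_nir - red) / (stretched_nir + red + 1e-9)
```

Thresholding the resulting index values per pixel would then separate vegetation classes, e.g. hardwoods from conifers, under whatever class boundaries the analysis calibrates.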
Abstract: The disclosure relates to digital watermarking, steganography, and specifically to message coding protocols used in conjunction with digital watermarking and steganographic encoding/decoding and payload interpretation methods. One claim recites a method for interpreting a data structure having fixed and variable message portions, the method comprising: processing the fixed message portion to determine a version of the variable message portion; decoding the entire payload field of the variable message portion according to the determined version; and interpreting only a portion of the decoded payload field according to the determined version. Of course, other features and claims are provided too.
Abstract: Provided in the present disclosure is a fingerprint bookmark system which may be implemented in a vehicle with one or more configurable interior settings. The fingerprint bookmark system may contain a scanner which may be configured to record a fingerprint from a vehicle occupant such that a fingerprint image showing the fingerprint and a duration data for the fingerprint image are recorded by the scanner. The system may contain one or more processors which may be configured to compare the duration data associated with the fingerprint image with a duration threshold. The one or more processors may be configured to initiate a search to obtain a bookmark for the fingerprint shown in the fingerprint image when the duration data is less than the duration threshold. The one or more processors may be further configured to create a new bookmark when the duration data exceeds the duration threshold.
Type:
Grant
Filed:
April 4, 2017
Date of Patent:
November 28, 2017
Assignee:
THUNDER POWER NEW ENERGY VEHICLE DEVELOPMENT COMPANY LIMITED
Abstract: An image processing apparatus includes an input section, a specification section, an extraction section, and a calculation section. The input section receives fluorescence image information obtained by picking up an image of fluorescence based on application of excitation light to a subject provided with a fluorescent substance with a specific effect on a living tissue and therapeutic light position image information including an application position of therapeutic light. The specification section specifies an application region of the therapeutic light. The extraction section extracts a luminance value corresponding to the application region of the therapeutic light and a luminance value corresponding to a region other than the application region of the therapeutic light. The calculation section calculates and outputs a ratio of the extracted luminance value corresponding to the application region to the luminance value corresponding to the region other than the application region.
Abstract: Provided are techniques for image search for a location. Street view data is extracted to identify a path for a region. Points of interest for the path are identified. Images for the points of interest for a direction of the path are identified. The images are used to create a sequence of images representing a view of the points of interest along the direction of the path. The sequence of images is displayed adjacent to a map that includes the path.
Type:
Grant
Filed:
November 18, 2014
Date of Patent:
October 31, 2017
Assignee:
International Business Machines Corporation
Abstract: A method for denoising a range image acquired by a time-of-flight (ToF) camera first determines the locations of edges and a confidence value for each pixel. Based on the edge locations, geodesic distances to neighboring pixels are determined; based on the confidence values, reliabilities of the neighboring pixels are determined, and scene-dependent noise is reduced using a filter.
Type:
Grant
Filed:
February 12, 2015
Date of Patent:
October 31, 2017
Assignee:
Mitsubishi Electric Research Laboratories, Inc.
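A minimal sketch of such an edge-aware, confidence-weighted filter, applied to one row of a range image, follows; blocking a neighbour whenever an edge pixel lies between it and the centre is a crude stand-in for the geodesic-distance test in the abstract above:

```python
import numpy as np

def denoise_row(depth, confidence, edges, radius=2):
    # Edge-aware, confidence-weighted smoothing of one row of a range
    # image. A neighbour contributes only if no edge pixel separates it
    # from the centre, and its contribution is scaled by its confidence.
    out = np.empty_like(depth, dtype=float)
    n = len(depth)
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            lo, hi = min(i, j), max(i, j)
            if j != i and edges[lo:hi + 1].any():
                continue  # an edge blocks the geodesic path
            num += confidence[j] * depth[j]
            den += confidence[j]
        out[i] = num / den if den > 0 else depth[i]
    return out
```

Because blocked neighbours are skipped entirely, depth values never bleed across an edge, which is the property the geodesic weighting is meant to guarantee on full 2-D images.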
Abstract: A device to extract a biometric feature vector includes a memory and circuitry. The circuitry is configured to obtain a biometric image; to generate a plurality of small region images from the biometric image so that the variability of biometric information amounts among the small region images is equal to or less than a predetermined value; to extract biometric local feature amounts from the small region images; and to generate a biometric feature vector, which indicates a feature for identifying the biometric image, by combining the biometric local feature amounts in accordance with a predetermined rule.
Abstract: Various embodiments herein each include at least one of systems, methods, and software to enable depth-based image element removal. Some embodiments may be implemented in a store checkout context, while other embodiments may be implemented in other contexts such as at price-checking kiosks or devices that may be deployed within a store or other retail establishment, a library at a checkout terminal, and the like. Some embodiments include removing elements of images based at least in part on depth data.
Abstract: An information processing apparatus obtains content data and first position information corresponding to the content data; obtains second position information corresponding to a second person having a predetermined relationship with a first person associated with creating the content data; and associates identification information corresponding to the second person with the content data based on a predetermined relationship between the first position information and the second position information.
Abstract: The location of a user's head, for purposes such as head tracking or motion input, can be determined using a two-step process. In a first step, at least one image is captured including a representation of at least a portion of the person, such as a head portion of the person. In a second step, a contour of the head portion can be determined, and a two-dimensional model, for example, an ellipse or other similar shape can be used to approximate the head portion of the person represented in the image. The ellipse, for example, can be modeled using a number of shapes, such as rectangles, and the portion of the person can be tracked by locating an ellipse that bounds a maximum intensity gradient of pixel values in each one of a series of images.
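The second step above, scoring candidate ellipses by the intensity gradient along their boundary and keeping the best, can be sketched as follows; the exhaustive search over centres, the fixed axes, and the gradient-sum score are illustrative simplifications (the patent's rectangle-based modelling of the ellipse is omitted):

```python
import math

def gradient_magnitude(img, x, y):
    # Central-difference gradient magnitude on a 2-D list-of-lists image.
    gx = img[y][x + 1] - img[y][x - 1]
    gy = img[y + 1][x] - img[y - 1][x]
    return math.hypot(gx, gy)

def ellipse_score(img, cx, cy, a, b, samples=32):
    # Sum of gradient magnitudes along the ellipse boundary: a head-shaped
    # contour should lie on strong intensity edges.
    total = 0.0
    for k in range(samples):
        t = 2.0 * math.pi * k / samples
        x = int(round(cx + a * math.cos(t)))
        y = int(round(cy + b * math.sin(t)))
        if 1 <= x < len(img[0]) - 1 and 1 <= y < len(img) - 1:
            total += gradient_magnitude(img, x, y)
    return total

def track_head(img, a=4, b=5):
    # Exhaustively locate the ellipse centre with maximal boundary gradient.
    return max(
        ((cx, cy) for cy in range(b + 1, len(img) - b - 1)
                  for cx in range(a + 1, len(img[0]) - a - 1)),
        key=lambda c: ellipse_score(img, c[0], c[1], a, b),
    )
```

Running `track_head` on each frame of a sequence, seeded near the previous frame's centre, gives the per-frame tracking the abstract describes.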