Patents Examined by Iman K Kholdebarin
  • Patent number: 9639736
    Abstract: An image processing apparatus displays objects adopted and objects not adopted by an adoption/non-adoption process in a distinguishable manner. A user designates an object whose adoption/non-adoption result should be reversed among the objects displayed by the image processing apparatus. The image processing apparatus changes an allowable range stored in a storage part so that the adoption/non-adoption result of the designated object is reversed. That is, the user reviews the adoption/non-adoption results of the objects and changes the allowable range of a parameter so that the results become correct. This makes it easy to set a proper allowable range for the parameter used in the adoption/non-adoption process.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: May 2, 2017
    Assignee: SCREEN Holdings Co., Ltd.
    Inventors: Hiroki Fujimoto, Jiro Tsumura
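    A minimal sketch of the range-adjustment idea above, assuming a single scalar parameter with an allowable range [lo, hi]; the function and its arguments are illustrative, not the patent's interface:
    ```python
    # Minimal sketch: flip one object's pass/fail result by adjusting the
    # allowable [lo, hi] range of a single inspection parameter.
    def adjust_range(lo, hi, value, currently_adopted):
        """Return a new (lo, hi) so that `value` falls on the other side."""
        eps = 1e-6
        if currently_adopted:            # shrink the range to exclude the value
            if value - lo < hi - value:
                lo = value + eps         # cut from the low end
            else:
                hi = value - eps         # cut from the high end
        else:                            # widen the range to include the value
            lo, hi = min(lo, value), max(hi, value)
        return lo, hi

    lo, hi = adjust_range(0.2, 0.8, 0.85, currently_adopted=False)
    assert lo <= 0.85 <= hi              # the object is now adopted
    ```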
  • Patent number: 9639948
    Abstract: Methods, apparatuses, and computer readable storage media are provided for determining a depth measurement of a scene using an optical blur difference between two images of the scene. Each image is captured using an image capture device with different image capture device parameters. A corresponding image patch is identified from each of the captured images, motion blur being present in each of the image patches. A kernel of the motion blur in each of the image patches is determined. The kernel of the motion blur in at least one image patch is used to generate a difference convolution kernel. A selected first image patch is convolved with the generated difference convolution kernel to generate a modified image patch. A depth measurement of the scene is determined from an optical blur difference between the modified image patch and the remaining image patch.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: May 2, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
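    A hedged sketch of the blur-equalization step, assuming the two motion-blur kernels are already estimated; the FFT-division difference kernel and the variance-based blur measure are stand-ins for the patent's actual computations:
    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def difference_kernel(k_a, k_b):
        """Kernel d such that k_a * d ~ k_b, computed by crude FFT division."""
        n = max(k_a.shape[0], k_b.shape[0]) + 8
        Ka = np.fft.rfft2(k_a, (n, n))
        Kb = np.fft.rfft2(k_b, (n, n))
        d = np.fft.irfft2(Kb / (Ka + 1e-6), (n, n))   # regularized deconvolution
        return d / d.sum()

    def _hf_energy(p):
        """Crude optical-blur measure: energy left after removing a box blur."""
        box = np.full((5, 5), 1 / 25)
        return np.var(p - fftconvolve(p, box, mode="same"))

    def blur_difference(patch_a, patch_b, k_a, k_b):
        # Convolve one patch so both carry comparable motion blur, then
        # compare remaining high-frequency content as a relative depth cue.
        mod_a = fftconvolve(patch_a, difference_kernel(k_a, k_b), mode="same")
        return _hf_energy(mod_a) / (_hf_energy(patch_b) + 1e-12)
    ```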
  • Patent number: 9638678
    Abstract: A method for monitoring crop health of a geographic region includes receiving an image comprising a set of image elements, the image corresponding to a time unit, mapping an image element of the set of image elements to a geographic sub-region of the geographic region, determining a geographic region performance value for the image element, determining a geographic region performance value change, and identifying a crop health anomaly based on the geographic region performance value change and an expected geographic region performance value change. Determining the geographic region performance value for the image element can include determining a vegetative performance value for the image element, mapping the image element to a crop type, and normalizing the vegetative performance value.
    Type: Grant
    Filed: February 1, 2016
    Date of Patent: May 2, 2017
    Assignee: AgriSight, Inc.
    Inventors: John Shriver, Mayank Agarwal
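    A rough sketch of the anomaly test, using NDVI as the vegetative performance value (the abstract does not name a specific index) and a simple per-crop normalization; `expected_change` and `tol` are illustrative parameters:
    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized difference vegetation index per image element."""
        return (nir - red) / (nir + red + 1e-9)

    def normalized_perf(ndvi_img, crop_ids):
        # Normalize each element's value against its own crop type's mean.
        perf = np.empty_like(ndvi_img)
        for crop in np.unique(crop_ids):
            m = crop_ids == crop
            perf[m] = ndvi_img[m] / (ndvi_img[m].mean() + 1e-9)
        return perf

    def crop_anomalies(ndvi_now, ndvi_prev, crop_ids, expected_change, tol=0.1):
        """Flag sub-regions whose change deviates from the expected change."""
        change = normalized_perf(ndvi_now, crop_ids) - normalized_perf(ndvi_prev, crop_ids)
        return np.abs(change - expected_change) > tol
    ```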
  • Patent number: 9639747
    Abstract: People detection can provide valuable metrics that can be used by businesses, such as retail stores. Such information can be used to influence any number of business decisions, such as employment hiring and product orders. The business value of this data hinges upon its accuracy. Thus, a method according to the principles of the current invention outputs metrics regarding people in a video frame within a stream of video frames through use of an object classifier configured to detect people. The method further comprises automatically updating the object classifier using data in at least a subset of the video frames in the stream of video frames.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 2, 2017
    Assignee: Pelco, Inc.
    Inventors: Hongwei Zhu, Farzin Aghdasi, Greg M. Millar, Stephen J. Mitchell
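    A schematic of the self-updating loop; `classifier.detect` and `classifier.update` are assumed interfaces standing in for the patent's object classifier, and harvesting high-confidence detections is one plausible reading of "automatically updating ... using data" from the stream:
    ```python
    # Run a people classifier on each frame, report counts, and periodically
    # update the classifier with high-confidence crops from the stream itself.
    def process_stream(frames, classifier, update_every=100, conf_thresh=0.9):
        harvested = []
        for i, frame in enumerate(frames):
            detections = classifier.detect(frame)          # [(box, score), ...]
            yield len(detections)                          # the people-count metric
            harvested += [frame[y0:y1, x0:x1]
                          for (x0, y0, x1, y1), s in detections if s > conf_thresh]
            if i % update_every == update_every - 1 and harvested:
                classifier.update(positives=harvested)     # online re-training step
                harvested.clear()
    ```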
  • Patent number: 9639951
    Abstract: Methods and systems for detecting and/or tracking one or more objects utilize depth data. An example method of detecting one or more objects in image data includes receiving depth image data corresponding to a depth image view point relative to the one or more objects. A series of binary threshold depth images are formed from the depth image data. Each of the binary threshold depth images is based on a respective depth. One or more depth extremal regions in which image pixels have the same value are identified for each of the binary threshold depth images. One or more depth maximally stable extremal regions are selected from the identified depth extremal regions based on change in area of the one or more respective depth extremal regions for different depths.
    Type: Grant
    Filed: October 23, 2014
    Date of Patent: May 2, 2017
    Assignee: KHALIFA UNIVERSITY OF SCIENCE, TECHNOLOGY & RESEARCH
    Inventors: Ehab Najeh Salahat, Hani Hasan Mustafa Saleh, Safa Najeh Salahat, Andrzej Stefan Sluzek, Mahmoud Al-Qutayri, Baker Mohammad, Mohammed Ismail Elnaggar
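    A compact sketch of depth-MSER selection for the region containing a single seed pixel, assuming smaller depth values are nearer; the stability criterion (smallest relative area change between consecutive thresholds) follows the abstract:
    ```python
    import numpy as np
    from scipy import ndimage

    def mser_depth(depth, seed, depths, eps=1e-9):
        """Return the threshold at which the connected region containing
        `seed` (a (row, col) tuple) is most stable across thresholds."""
        areas = []
        for d in depths:
            labeled, _ = ndimage.label(depth <= d)   # binary threshold depth image
            lbl = labeled[seed]
            areas.append((labeled == lbl).sum() if lbl else 0)
        areas = np.array(areas, float)
        rel = np.abs(np.diff(areas)) / (areas[:-1] + eps)  # relative area change
        return depths[int(np.argmin(rel))]
    ```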
  • Patent number: 9639954
    Abstract: A computer implemented method of object extraction from video images, the method comprising steps a computer is programmed to perform, the steps comprising: receiving a plurality of video images, deriving a plurality of background templates from at least one of the received video images, calculating a plurality of differences from an individual one of the received video images, each one of the differences being calculated between the individual video image and a respective and different one of the background templates, and extracting an object of interest from the individual video image, using a rule applied on the calculated differences.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: May 2, 2017
    Assignee: PLAYSIGHT INTERACTIVE LTD.
    Inventors: Evgeni Khazanov, Chen Shachar
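    A minimal sketch of the differencing step, with "foreground iff the pixel differs from every background template" as one possible rule applied to the calculated differences:
    ```python
    import numpy as np

    def extract_object(frame, templates, thresh=25):
        """Mask of pixels that differ from all background templates."""
        diffs = [np.abs(frame.astype(int) - t.astype(int)) for t in templates]
        return np.min(diffs, axis=0) > thresh   # unlike every template
    ```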
  • Patent number: 9639757
    Abstract: A system, method and computer program product cooperate to extract a building footprint from other data associated with a property. Imagery data of real property is input to a computing device, the imagery data containing a plurality of parcels. A processing circuit detects contrasts of candidate man-made structures on a parcel of the plurality of parcels. The candidate man-made structures are then associated with the parcel. A building footprint is then extracted by distinguishing a man-made structure on said parcel from natural terrain, recognizing that man-made structures when viewed from above generally show a strong contrast from background terrain. Remaining candidate man-made structures are removed by observing that they have features inconsistent with predetermined extraction logic.
    Type: Grant
    Filed: September 23, 2011
    Date of Patent: May 2, 2017
    Assignee: CoreLogic Solutions, LLC
    Inventors: Wei Du, Thomas C. Jeffery, Howard Botts
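    An illustrative sketch under strong simplifications: local gradient contrast stands in for the detected roof/terrain contrast, and size plus bounding-box fill ratio stand in for the predetermined extraction logic:
    ```python
    import numpy as np
    from scipy import ndimage

    def building_mask(gray, contrast_thresh=30, min_area=50, min_fill=0.5):
        gx, gy = np.gradient(gray.astype(float))
        edges = np.hypot(gx, gy) > contrast_thresh   # strong-contrast outlines
        filled = ndimage.binary_fill_holes(edges)
        labeled, n = ndimage.label(filled)
        keep = np.zeros_like(filled)
        for sl, lbl in zip(ndimage.find_objects(labeled), range(1, n + 1)):
            comp = labeled[sl] == lbl
            # Drop candidates inconsistent with the (stand-in) extraction logic.
            if comp.sum() >= min_area and comp.mean() >= min_fill:
                keep[sl] |= comp
        return keep
    ```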
  • Patent number: 9639773
    Abstract: Methods and systems for predicting light probes for outdoor images are disclosed. A light probe database is created to learn a mapping from the outdoor image's features to predicted outdoor light probe illumination parameters. The database includes a plurality of images, image features for each of the plurality of images, and a captured light probe for each of the plurality of images. A light probe illumination model based on a sun model and sky model is fitted to the captured light probes. The light probe for the outdoor image may be predicted based on the database and the fitted light probe models.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: May 2, 2017
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Jean-Francois Lalonde, Iain Matthews
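    A sketch of the prediction step, assuming the database already holds one feature vector and fitted sun/sky illumination parameters per image; a k-nearest-neighbour average stands in for the learned mapping:
    ```python
    import numpy as np

    def predict_light_probe(query_feat, db_feats, db_params, k=5):
        """Average the fitted sun/sky parameters of the k most similar images."""
        d = np.linalg.norm(db_feats - query_feat, axis=1)
        nearest = np.argsort(d)[:k]
        return db_params[nearest].mean(axis=0)
    ```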
  • Patent number: 9639775
    Abstract: A template matching module is configured to program a processor to apply multiple differently-tuned object detection classifier sets in parallel to a digital image to determine one or more of an object type, configuration, orientation, pose or illumination condition, and to dynamically switch between object detection templates to match a determined object type, configuration, orientation, pose, blur, exposure and/or directional illumination condition.
    Type: Grant
    Filed: March 30, 2015
    Date of Patent: May 2, 2017
    Assignee: FotoNation Limited
    Inventors: Bogdan Sultana, Stefan Petrescu, Radu Nicolau, Vlad Ionut Ursachi, Petronel Bigioi, Corneliu Zaharia, Peter Corcoran, Szabolcs Fulop, Mihnea Gangea
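    A schematic of applying differently-tuned detector sets in parallel and switching templates; `detectors` and `templates` are assumed dictionaries keyed by condition labels, not FotoNation's API:
    ```python
    from concurrent.futures import ThreadPoolExecutor

    def classify_and_switch(image, detectors, templates):
        """Run every tuned detector in parallel; switch to the template set
        matching the condition whose detector responds most strongly."""
        with ThreadPoolExecutor() as pool:
            scores = dict(zip(detectors,
                              pool.map(lambda d: detectors[d](image), detectors)))
        best = max(scores, key=scores.get)   # e.g. "profile+backlit"
        return best, templates[best]
    ```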
  • Patent number: 9639808
    Abstract: There is provided a non-transitory computer readable medium storing a program causing a computer to execute a process for attribute estimation. The process includes: extracting, for each user, feature quantities of plural pieces of image information that are associated with attributes of the user; integrating the extracted feature quantities for each user; and performing learning, input of the learning being an integrated feature quantity that has been obtained as a result of integration for each user, output of the learning being one attribute, and generating a learning model.
    Type: Grant
    Filed: October 21, 2014
    Date of Patent: May 2, 2017
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Xiaojun Ma, Yukihiro Tsuboshita, Noriji Kato
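    A sketch of the pipeline, assuming per-image feature vectors are already extracted; mean pooling stands in for the integration step and scikit-learn's LogisticRegression for the learning step:
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_attribute_model(user_image_feats, user_attrs):
        """user_image_feats: list (one per user) of (n_images, dim) arrays;
        user_attrs: one attribute label per user."""
        X = np.stack([np.mean(feats, axis=0) for feats in user_image_feats])
        y = np.array(user_attrs)
        return LogisticRegression(max_iter=1000).fit(X, y)
    ```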
  • Patent number: 9639739
    Abstract: Facial image bucketing is disclosed, whereby a query for facial image recognition compares the facial image against existing candidate images. Rather than comparing the facial image to each candidate image, the candidate images are organized or clustered into buckets according to their facial similarities, and the facial image is then compared to the image(s) in the most-likely one(s) of the buckets. The organizing uses particular selected facial features, computes distance between the facial features, and selects ones of the computed distances to determine which facial images should be organized into the same bucket.
    Type: Grant
    Filed: May 28, 2016
    Date of Patent: May 2, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Poplavski, Scott Schumacher, Prachi Snehal, Sean J. Welleck, Alan Xia, Yinle Zhou
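    A sketch of the bucketing idea with a simple greedy radius clustering; the distance threshold and centroid representation are illustrative, not the patented selection of facial-feature distances:
    ```python
    import numpy as np

    def build_buckets(face_feats, radius=0.5):
        """Greedily assign each face vector to the first bucket whose
        centroid is within `radius`; otherwise start a new bucket."""
        buckets = []                              # list of (centroid, [indices])
        for i, f in enumerate(face_feats):
            for b in buckets:
                if np.linalg.norm(f - b[0]) < radius:
                    b[1].append(i)
                    break
            else:
                buckets.append((f.copy(), [i]))
        return buckets

    def query(face, buckets):
        # Search only the nearest bucket instead of every candidate image.
        best = min(buckets, key=lambda b: np.linalg.norm(face - b[0]))
        return best[1]                            # candidate indices to compare
    ```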
  • Patent number: 9633186
    Abstract: Systems and methods for controlling output of content based on human recognition data captured by one or more sensors of an electronic device are provided. The control of the output of particular content may be based on an action of a rule defined for the particular content, and may be performed when at least one human feature detection related condition of the rule is satisfied. In some embodiments, the action may include granting access to requested content when detected human feature data satisfies at least one human feature detection related condition of a rule defined for the requested content. In other embodiments the action may include altering a presentation of content, during the presentation of the content, when detected human feature data satisfies at least one human feature detection related condition of a rule defined for the presented content.
    Type: Grant
    Filed: April 23, 2012
    Date of Patent: April 25, 2017
    Assignee: APPLE INC.
    Inventors: Michael I. Ingrassia, Jr., Nathaniel Paine Hramits
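    A schematic of the rule evaluation, assuming rules are condition lists over detected human-feature facts; all names and the example conditions are hypothetical:
    ```python
    def evaluate_rule(rule, detected):
        """rule = {"conditions": [...], "action": callable};
        detected = human-feature facts produced by the device's sensors."""
        if all(cond(detected) for cond in rule["conditions"]):
            return rule["action"]()
        return None

    rule = {"conditions": [lambda d: "owner_face" in d,          # owner detected
                           lambda d: d.get("face_count") == 1],  # single viewer
            "action": lambda: "grant_access"}
    print(evaluate_rule(rule, {"owner_face": True, "face_count": 1}))
    ```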
  • Patent number: 9576188
    Abstract: Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: February 21, 2017
    Assignee: Atheer, Inc.
    Inventor: Allen Yang Yang
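    A schematic of the matching loop; `transform`, `render`, and `score` are placeholder callables for the 3D alignment/illumination transfer, 2D rendering, and 2D comparison the abstract describes:
    ```python
    def identify(query_img, query_model, gallery_models, transform, render, score):
        """Return the gallery identity whose transformed model best matches."""
        best_id, best_score = None, float("-inf")
        for gid, g in gallery_models.items():
            g_t = transform(g, query_model)   # 3D alignment + illumination transfer
            img = render(g_t)                 # back to a 2D transformed gallery image
            s = score(img, query_img)         # 2D comparison against the query
            if s > best_score:
                best_id, best_score = gid, s
        return best_id, best_score
    ```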
  • Patent number: 9563822
    Abstract: A first extracting unit extracts partial images from a learning image. A first calculator calculates a feature amount of the partial image. A retrieving unit retrieves objects included in the partial image, and gives, as a label, a vector to the feature amount. The vector represents relative positions between a first position in the partial image and each object included in the partial image. A voting unit generates a voting histogram for each partial image. A learning unit divides the feature amount of each partial image into clusters to reduce variation of the corresponding voting histogram, so as to learn a regression model representing a relationship between the feature amount of the partial image and the relative position of the object included in the partial image. A first predicting unit predicts, for each cluster, a representative label from the label given to the feature amount belonging to the cluster.
    Type: Grant
    Filed: February 9, 2015
    Date of Patent: February 7, 2017
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Quoc Viet Pham
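    A sketch of the prediction-time voting, assuming the clusters (centroids) and per-cluster representative offset labels were already learned; a nearest-centroid lookup stands in for the regression model:
    ```python
    import numpy as np

    def vote(patch_feats, patch_positions, centroids, rep_offsets, img_shape):
        """Each patch votes for an object position via its cluster's
        representative relative-position label."""
        acc = np.zeros(img_shape)
        for f, (y, x) in zip(patch_feats, patch_positions):
            c = np.argmin(np.linalg.norm(centroids - f, axis=1))
            dy, dx = rep_offsets[c]               # representative label
            vy, vx = int(y + dy), int(x + dx)
            if 0 <= vy < img_shape[0] and 0 <= vx < img_shape[1]:
                acc[vy, vx] += 1                  # voting histogram
        return acc                                # peaks ~ object positions
    ```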
  • Patent number: 9558423
    Abstract: A computer implemented method for predicting preferences of an observer for two images, the method comprising the steps of: receiving the first image and an associated salience map indicating regions of the first image that are likely to be scrutinized by the observer; receiving a content masking map indicating differences between the first image and the second image that the observer is likely to be able to perceive; determining a number of preference measures; and processing the salience map and the content masking map to determine a distribution of a set of values of the preference measures predicting the preferences of the observer for the first image and the second image, the set of values of the preference measures having a number of degrees of freedom.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: January 31, 2017
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Stuart William Perry
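    A heavily simplified sketch: perceivable differences are weighted by salience and squashed into a two-outcome preference distribution; the pooling and the logistic mapping are assumptions, not the patent's preference measures:
    ```python
    import numpy as np

    def preference_distribution(salience, content_mask):
        """salience: likelihood each region is scrutinized; content_mask:
        perceivable differences between the two images."""
        visible_change = (salience * content_mask).sum() / (salience.sum() + 1e-9)
        p_first = 1.0 / (1.0 + np.exp(-8 * (0.5 - visible_change)))
        return {"prefers_first": p_first, "prefers_second": 1 - p_first}
    ```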
  • Patent number: 9536293
    Abstract: Deep convolutional neural networks receive local and global representations of images as inputs and learn the best representation for a particular feature through multiple convolutional and fully connected layers. A double-column neural network structure receives each of the local and global representations as two heterogeneous parallel inputs to the two columns. After some layers of transformations, the two columns are merged to form the final classifier. Additionally, features may be learned in one of the fully connected layers. The features of the images may be leveraged to boost classification accuracy of other features by learning a regularized double-column neural network.
    Type: Grant
    Filed: July 30, 2014
    Date of Patent: January 3, 2017
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Zhe Lin, Hailin Jin, Jianchao Yang
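    A minimal double-column sketch in PyTorch; layer sizes are illustrative, and the abstract's regularization and feature-learning details are omitted:
    ```python
    import torch
    import torch.nn as nn

    class DoubleColumnNet(nn.Module):
        """Local and global views feed two parallel convolutional columns,
        merged by fully connected layers into the final classifier."""
        def __init__(self, n_classes=2):
            super().__init__()
            def column():
                return nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten())
            self.local_col, self.global_col = column(), column()
            self.merge = nn.Sequential(
                nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
                nn.Linear(128, n_classes))

        def forward(self, local_view, global_view):
            z = torch.cat([self.local_col(local_view),
                           self.global_col(global_view)], dim=1)
            return self.merge(z)
    ```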
  • Patent number: 9530067
    Abstract: A method for a wearable device worn by a first user to generate or retrieve a personal contact record when encountering a second user is disclosed. The method of generating the personal contact record includes capturing a facial photograph of the second user and generating face information; capturing a card photograph of a business card and performing OCR to obtain card information; retrieving time and location information to obtain encounter information; and generating the personal contact record. The method of retrieving the personal contact record includes capturing a facial photograph of the second user; searching through a contact database comprising facial images associated with identities of persons; attempting to match the captured facial photograph with one of the facial images in the contact database to determine the identity of the second user; and providing messages. Wearable devices for performing the above methods are also disclosed.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: December 27, 2016
    Assignee: ULSee Inc.
    Inventor: Zhou Ye
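    A schematic of the two flows; `encode_face` and `ocr` are assumed callables, and the distance threshold is illustrative:
    ```python
    import time
    import numpy as np

    def generate_record(face_img, card_img, location, encode_face, ocr):
        return {"face": encode_face(face_img),       # face information
                "card": ocr(card_img),               # OCR'd business card
                "encounter": (time.time(), location)}

    def retrieve_record(face_img, contacts, encode_face, max_dist=0.6):
        if not contacts:
            return None
        q = encode_face(face_img)
        best = min(contacts, key=lambda c: np.linalg.norm(c["face"] - q))
        if np.linalg.norm(best["face"] - q) <= max_dist:
            return best                              # identity of the second user
        return None
    ```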
  • Patent number: 9530172
    Abstract: A method of encoding a hidden image in high frequency spatial frequencies of a line pattern of a host image. A set of host image spatial frequencies is generated based on a predefined mapping of a domain of a set of representative scalar values of the hidden image and a domain of the host image spatial frequencies. The line pattern of the host image is generated based on the set of host image spatial frequencies. The host image may be composed of tiles containing parallel line segments, with each tile encoding a corresponding one of the scalar values. The host image may be composed of a stochastic line pattern generated from a white noise image convolved with a space variable kernel based on the predefined domain mapping. The hidden image may be decoded algorithmically or optically in a single step.
    Type: Grant
    Filed: June 27, 2011
    Date of Patent: December 27, 2016
    Assignee: Canadian Bank Note Company, Limited
    Inventors: Silviu Crisan, Marc Gaudreau, Tadeusz Rygas
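    A sketch of the tiled variant: each tile encodes one hidden scalar (assumed in [0, 1]) as the spatial frequency of its parallel lines via a simple linear ramp; the patent's predefined domain mapping may differ:
    ```python
    import numpy as np

    def encode_tiles(hidden, tile=32, f_min=2, f_max=8):
        """hidden: 2-D array of scalars in [0, 1]; returns the host line pattern."""
        h, w = hidden.shape
        host = np.zeros((h * tile, w * tile))
        ys = np.arange(tile)
        for i in range(h):
            for j in range(w):
                freq = f_min + (f_max - f_min) * hidden[i, j]   # mapped frequency
                lines = (np.sin(2 * np.pi * freq * ys / tile) > 0).astype(float)
                host[i*tile:(i+1)*tile, j*tile:(j+1)*tile] = lines[:, None]
        return host
    ```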
  • Patent number: 9524448
    Abstract: Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object (713) tracked in a first field of view and a second object tracked (753) in a second field of view. The method determines a first area (711) in the first field of view, based on the location and size of the first object (713). The method utilizes a predetermined area relationship between the first area (711) in the first field of view and at least one area (751) in the second field of view to determine a second area (751) in the second field of view. In one embodiment, the method determines the second area (751) in the second field of view by comparing predetermined area relationships between the first area (711) and any areas (751) in the second field to determine a best match.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: December 20, 2016
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Daniel John Wedge
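    A sketch of the correspondence step, assuming areas are axis-aligned boxes and `relationships` is the predetermined table scoring area pairs across the two fields of view:
    ```python
    def corresponding_area(obj_box, areas_view1, areas_view2, relationships):
        """Find the first area containing the tracked object, then pick the
        best-related area in the second field of view."""
        def contains(area, box):
            ax0, ay0, ax1, ay1 = area
            x0, y0, x1, y1 = box
            return ax0 <= x0 and ay0 <= y0 and x1 <= ax1 and y1 <= ay1
        first = next(a for a in areas_view1 if contains(a, obj_box))
        return max(areas_view2, key=lambda a2: relationships[(first, a2)])
    ```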
  • Patent number: 9508011
    Abstract: A video visual and audio query system for quickly identifying video within a large known corpus of videos being played on any screen or display. In one embodiment, the system can record via a mobile phone camera and microphone a live video clip from the TV and transcode it into a sequence of frame-signatures. The signatures representative of the clips can then be matched against the signatures of the TV content in a corpus across a network to identify the correct TV show or movie.
    Type: Grant
    Filed: May 10, 2011
    Date of Patent: November 29, 2016
    Assignee: VIDEOSURF, INC.
    Inventors: Eitan Sharon, Asael Moshe, Praveen Srinivasan, Mehmet Tek, Eran Borenstein, Achi Brandt
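    A sketch of signature-based matching with a crude binary frame signature and brute-force alignment; production systems would use far more robust signatures and indexing:
    ```python
    import numpy as np

    def signature(frame, size=8):
        """frame: 2-D grayscale array -> flat binary signature."""
        h, w = frame.shape
        small = frame[:h - h % size, :w - w % size] \
            .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
        return (small > small.mean()).flatten()

    def best_match(query_sigs, corpus):
        """corpus: {title: [signatures, ...]}; returns (title, frame offset)."""
        q = np.array(query_sigs)
        best = (None, -1, q.size + 1)
        for title, sigs in corpus.items():
            s = np.array(sigs)
            for off in range(len(s) - len(q) + 1):
                dist = np.count_nonzero(s[off:off + len(q)] != q)
                if dist < best[2]:
                    best = (title, off, dist)
        return best[:2]
    ```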