Feature Extraction Patents (Class 382/190)
  • Patent number: 10560645
    Abstract: Various methods and systems are disclosed for near-infrared video compositing techniques and an associated immersive video environment. In an example, a video environment includes: a visible light camera and an infrared detection camera arranged to capture video from a performance area; a visible light source and infrared light source arranged to emit visible light onto the performance area; a display source to provide video output; and a display screen arranged in the performance area between the camera system and the infrared light source. The display screen is further arranged to reflect visible light originating from the display source, while permitting infrared and visible light from the performance area to reach the cameras. In a further example, the system includes a backdrop integrating the infrared light source, as infrared light from the infrared light source passes through the backdrop into the performance area.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: February 11, 2020
    Assignee: FEEDBACK, LLC
    Inventors: Hamilton Lovemelt, Geoffrey Schuler
  • Patent number: 10545092
    Abstract: Techniques are described for using an optical sensor to monitor light produced by an indicator light of a monitoring system. The optical sensor can monitor attributes of the indicator light in order to detect a blink pattern outputted by the indicator light. In some implementations, video data indicating a blink pattern of a device within a property is obtained. One or more attributes of the blink pattern are identified based on processing the obtained video data. A status for the device is determined based at least on the one or more identified attributes. A system action that corresponds to the status is determined. The system action corresponding to the determined status is performed.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: January 28, 2020
    Assignee: Alarm.com Incorporated
    Inventors: Daniel Marc Goodman, Craig Carl Heffernan, Harrison Wayne Donahue
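As a rough illustration of the blink-pattern idea in the entry above, the following Python sketch turns a per-frame brightness trace of an indicator light into on/off runs and maps them to a device status. The threshold, durations, and status names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def blink_pattern(brightness, threshold=0.5, fps=30.0):
    """Convert a per-frame indicator-light brightness trace into (state, duration_s) runs."""
    on = np.asarray(brightness) > threshold          # binarize the indicator light
    runs, start = [], 0
    for i in range(1, len(on) + 1):
        if i == len(on) or on[i] != on[start]:
            runs.append(("on" if on[start] else "off", (i - start) / fps))
            start = i
    return runs

def classify_status(runs):
    """Map a blink pattern to a hypothetical device status."""
    on_durations = [d for state, d in runs if state == "on"]
    if not on_durations:
        return "no signal"
    if max(on_durations) > 2.0:                      # long steady light
        return "armed"
    return "fault" if len(on_durations) > 5 else "idle"

trace = [0.9] * 15 + [0.1] * 15 + [0.9] * 15 + [0.1] * 45    # ~30 fps brightness samples
print(classify_status(blink_pattern(trace)))
```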
  • Patent number: 10540392
    Abstract: Processing an image includes acquiring, by the image processing apparatus, a target image, extracting a shape of a target object included in the target image, determining a category including the target object based on the extracted shape, and storing the target image by mapping the target image with additional information including at least one keyword related to the category.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: January 21, 2020
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Seong-taek Hwang, Sang-doo Yun, Ha-wook Jeong, Jin-young Choi, Byeong-ho Heo, Woo-sung Kang
  • Patent number: 10528798
    Abstract: An eye shadow analysis method adopted by a body information analysis apparatus includes the following steps: performing positioning actions on each part of a face after the face is recognized by an image recognition module of the apparatus; obtaining positions of at least a left eyebrow, a left eye, a right eyebrow, and a right eye after the positioning actions; determining a position of a left eye shadow according to the left eyebrow and the left eye; determining another position of a right eye shadow according to the right eyebrow and the right eye; analyzing average color values of the two eye shadows; comparing the two average color values of the two eye shadows or comparing each average color value respectively with a default color value; displaying a comparison result; and re-executing the above steps before an assisting function is terminated.
    Type: Grant
    Filed: January 14, 2018
    Date of Patent: January 7, 2020
    Assignee: CAL-COMP BIG DATA, INC.
    Inventors: Shyh-Yong Shen, Min-Chang Chi, Eric Budiman Gosno
  • Patent number: 10529066
    Abstract: A method, system and computer program product for assessing quality of images or videos. A quality assessment of an image or video to be processed is performed using a no-reference quality assessment algorithm. A quality measurement, such as a score, reflecting the quality of the image or video, is generated from the no-reference quality assessment algorithm. The image or video is then processed and a quality assessment of the processed image or video is performed using a reference quality assessment algorithm that is conditional on the quality measurement provided by the no-reference quality assessment algorithm. In this manner, a more accurate quality measurement of the image or video is provided by the reference quality assessment algorithm.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: January 7, 2020
    Assignee: Board of Regents, The University of Texas System
    Inventor: Alan Bovik
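A minimal sketch of the conditional pipeline described above, assuming PSNR as the reference metric and a simple variance-based stand-in for the no-reference metric (the patented algorithms are not reproduced here):

```python
import numpy as np

def no_reference_score(img):
    """Stand-in no-reference metric: horizontal-gradient variance as a crude sharpness proxy."""
    return float(np.var(np.diff(img.astype(float), axis=1)))

def psnr(reference, processed):
    """Standard full-reference PSNR in dB for 8-bit images."""
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def assess(reference, processed, nr_floor=10.0):
    nr = no_reference_score(reference)     # quality of the image to be processed
    fr = psnr(reference, processed)        # reference assessment of the processed image
    # Make the reference score conditional on the no-reference measurement:
    # discount it when the source was already low quality (one possible rule).
    return fr if nr >= nr_floor else fr * (nr / nr_floor)

ref = np.random.randint(0, 256, (64, 64))
proc = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(assess(ref, proc))
```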
  • Patent number: 10528805
    Abstract: A biometric authentication apparatus acquires biometric information of a user, extracts a boundary candidate where a state of the biometric information changes, to extract a region in a vicinity of the boundary candidate and having a threshold area or greater, extracts a state feature quantity having a value that changes according to a change in the state of the biometric information, from the extracted region, and judges the state of the biometric information using the state feature quantity of the extracted region.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: January 7, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Hajime Nada, Satoshi Semba, Soichi Hama, Satoshi Maeda, Takashi Morihara, Yukihiro Abiko
  • Patent number: 10529064
    Abstract: One aspect of the present invention includes an artificial vision system. The system includes an image system comprising a video source that is configured to capture sequential frames of image data of non-visible light and at least one processor configured as an image processing system. The image processing system includes a wavelet enhancement component configured to normalize each pixel of each of the sequential frames of image data and to decompose the normalized image data into a plurality of wavelet frequency bands. The image processing system also includes a video processor configured to convert the plurality of wavelet frequency bands in the sequential frames into respective visible color images. The system also includes a video display system configured to display the visible color images.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: January 7, 2020
    Assignee: NORTHROP GRUMMAN SYSTEMS CORPORATION
    Inventors: Bruce J. Schachter, Dustin D. Baumgartner
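A minimal sketch of the normalize-and-decompose step described above, with PyWavelets standing in for the wavelet enhancement component and an arbitrary choice of band-to-color mapping:

```python
import numpy as np
import pywt                                          # PyWavelets, assumed available

def normalize(a):
    a = a.astype(float)
    return (a - a.min()) / (np.ptp(a) + 1e-9)

def to_false_color(frame, wavelet="haar", levels=2):
    """Normalize a frame, decompose it into wavelet bands, and map bands to R, G, B planes."""
    coeffs = pywt.wavedec2(normalize(frame), wavelet, level=levels)
    approx = coeffs[0]
    h, v, d = coeffs[1]                              # coarsest-level detail sub-bands
    detail_energy = np.sqrt(h ** 2 + v ** 2 + d ** 2)
    planes = [normalize(approx), normalize(detail_energy), normalize(np.abs(h) + np.abs(v))]
    return np.stack(planes, axis=-1)                 # one illustrative band-to-color mapping

ir_frame = np.random.randint(0, 4096, (128, 128))    # synthetic non-visible-light frame
print(to_false_color(ir_frame).shape)                # (32, 32, 3)
```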
  • Patent number: 10521475
    Abstract: A method, system and computer-usable medium for performing cognitive computing operations comprising receiving streams of data from a plurality of data sources; processing the streams of data from the plurality of data sources, the processing the streams of data from the plurality of data sources performing data enriching for incorporation into a cognitive graph; defining a travel-related cognitive persona within the cognitive graph, the travel-related cognitive persona corresponding to an archetype user model, the travel-related cognitive persona comprising a set of nodes in the cognitive graph; associating a user with the travel-related cognitive persona; defining a travel-related cognitive profile within the cognitive graph, the travel-related cognitive profile comprising an instance of the travel-related cognitive persona that references personal data associated with the user; associating the user with the travel-related cognitive profile; and, performing a cognitive computing operation based upon the tra
    Type: Grant
    Filed: June 5, 2015
    Date of Patent: December 31, 2019
    Assignee: REALPAGE, INC.
    Inventors: John N. Faith, Kyle W. Kothe
  • Patent number: 10521705
    Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media that automatically select an image from a plurality of images based on the multi-context aware rating of the image. In particular, systems described herein can generate a plurality of probability context scores for an image. Moreover, the disclosed systems can generate a plurality of context-specific scores for an image. Utilizing each of the probability context scores and each of the corresponding context-specific scores for an image, the disclosed systems can generate a multi-context aware rating for the image. Thereafter, the disclosed systems can select an image from the plurality of images with the highest multi-context aware rating for delivery to the user. The disclosed system can utilize one or more neural networks to both generate the probability context scores for an image and to generate the context-specific scores for an image.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: December 31, 2019
    Assignee: Adobe Inc.
    Inventors: Xin Lu, Zejun Huang, Jen-Chan Jeff Chien
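The combination of probability context scores with context-specific scores can be illustrated with a short sketch; the expected-value weighting rule below is an assumption, and the context names and file names are made up:

```python
def multi_context_rating(context_probs, context_scores):
    """Expected context-specific score under the image's context probability distribution."""
    return sum(context_probs[c] * context_scores[c] for c in context_probs)

def select_best(images):
    """images: name -> (context_probs, context_scores); return the top-rated image name."""
    return max(images, key=lambda name: multi_context_rating(*images[name]))

images = {
    "beach.jpg":  ({"outdoor": 0.9, "portrait": 0.1}, {"outdoor": 0.8, "portrait": 0.4}),
    "selfie.jpg": ({"outdoor": 0.2, "portrait": 0.8}, {"outdoor": 0.3, "portrait": 0.7}),
}
print(select_best(images))    # -> beach.jpg (rating 0.76 vs 0.62)
```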
  • Patent number: 10521878
    Abstract: Implementations of the present disclosure include receiving a training image, providing a hash pattern that is representative of the training image, applying a plurality of filters to the training image to provide a respective plurality of filtered training images, identifying a filter to be associated with the hash pattern based on the plurality of filtered training images, and storing a mapping of the filter to the hash pattern within a set of mappings in a data store.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: December 31, 2019
    Assignee: United Services Automobile Association (USAA)
    Inventor: Reynaldo Medina, III
  • Patent number: 10523843
    Abstract: An image processing apparatus includes circuitry. The circuitry irreversibly compresses an input image to generate an irreversibly compressed image. The circuitry decompresses the irreversibly compressed image to generate a decompressed image. The circuitry corrects a surround of a target area in the decompressed image to generate a corrected image. The target area corresponds to a line drawing image included in the input image. The circuitry generates first to third image layers from the corrected image. The first image layer is a binary image including a line drawing alone. The second image layer includes a line drawing area. The third image layer includes a background area. The circuitry reversibly compresses the first image layer and irreversibly compress the second and third image layers. The circuitry generates an output file based on the first to third image layers compressed.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: December 31, 2019
    Assignee: RICOH COMPANY, LTD.
    Inventor: Takuji Kamada
  • Patent number: 10522134
    Abstract: Systems, methods, and devices for verifying a user are disclosed. A speech-controlled device captures a spoken command, and sends audio data corresponding thereto to a server. The server performs ASR on the audio data to determine ASR confidence data. The server, in parallel, performs user verification on the audio data to determine user verification confidence data. The server may modify the user verification confidence data using the ASR confidence data. In addition or alternatively, the server may modify the user verification confidence data using at least one of a location of the speech-controlled device within a building, a type of the speech-controlled device, or a geographic location of the speech-controlled device.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 31, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Spyridon Matsoukas, Aparna Khare, Vishwanathan Krishnamoorthy, Shamitha Somashekar, Arindam Mandal
  • Patent number: 10515281
    Abstract: The innovation discloses systems and methods of authenticating users using vein or blood vessel characteristics. The innovation retrieves, based on an authentication request, an authentication image associated with a user. The authentication image comprises a first predetermined point and a second predetermined point. The innovation projects the authentication image onto a body part of the user and reads a blood vessel from the body part. The user orients the blood vessel such that it corresponds to the predetermined points in the projected authentication image. The user is authenticated when the blood vessel appears to connect the predetermined points in the direction of the blood flow.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: December 24, 2019
    Assignee: WELLS FARGO BANK, N.A.
    Inventors: Saipavan K. Cherala, Rameshchandra Bhaskar Ketharaju
  • Patent number: 10516799
    Abstract: Systems and methods in accordance with the invention allow automatic recording, sharing, and communicating of different parameters associated with images and their imager to define a specific system behavior of a display device or an algorithm unit. Examples of information include imager parameters, environment parameters, image processing and enhancement parameters, coordinates of a section of wide-angle scene image content, display parameters, defined user experience, defined system behavior or any information to be recorded, shared, and communicated. To avoid loss of information, the information is encoded directly in the picture using a marker. This way, the information is robustly transferred from the imager to the display unit. According to the information, the final image can be automatically corrected and enhanced before display, and different associated parameters can be displayed on the final image or used with another output.
    Type: Grant
    Filed: March 25, 2015
    Date of Patent: December 24, 2019
    Assignee: ImmerVision, Inc.
    Inventors: Pierre Konen, Pascale Nini, Jocelyn Parent, Patrice Roulet, Simon Thibault, Hu Zhang, Marie-Eve Gosselin, Valentin Bataille, Xiaojun Du
  • Patent number: 10515289
    Abstract: Various embodiments may include a computing device analyzing an image to identify one or more elements of interest in the image, identifying concepts associated with elements of interest in the image, and identifying potential elements of interest and potential concepts that are not included in the image using other information. Various embodiments may include presenting the one or more elements of interest, the one or more potential elements of interest, and the one or more concepts, receiving a user input that selects one or more of the one or more elements of interest, the one or more potential elements of interest, and the one or more concepts identified in the identified elements of interest and concepts or any combination thereof for a target image, and generating the semantic representation of the target image based on the selected elements of interest and concepts.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: December 24, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Govindarajan Krishnamurthi, Arun Raman
  • Patent number: 10515471
    Abstract: Provided are an apparatus and method for generating a best-view image centered on an object of interest in multiple camera images. The apparatus and method for generating a best-view image centered on an object of interest in multiple camera images accurately provide a best-view image clearly showing a figure, a motion, or the like of an object of interest desired by a viewer by selecting the best-view camera image on the basis of a criterion of a judgment about the best-view image centered on the object of interest.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: December 24, 2019
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gi Mun Um, Kee Seong Cho
  • Patent number: 10509800
    Abstract: Visually interactive identification of a cohort of similar data objects is disclosed. One example is a system including a data processor to access a plurality of data objects, each data object comprising a plurality of numerical components, where each component represents a data feature of a plurality of data features, and to identify, for each data feature, a feature distribution of the numerical components. A selector selects a sub-plurality of the data features of a query object, where a given data feature is selected if the component representing the given data feature is a peak for the feature distribution. An evaluator determines a similarity measure based on the sub-plurality of the data features. An interaction processor iteratively processes selection of a sub-plurality of the data features based on domain knowledge, and identifies, based on the similarity measures, a cohort of data objects similar to the query object.
    Type: Grant
    Filed: January 23, 2015
    Date of Patent: December 17, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ming C Hao, Wei-Nchih Lee, Nelson L Chang, Michael Hund, Daniel Keim
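A minimal sketch of the peak-based feature selection and cohort retrieval described above; treating "peak" as the tail beyond the 95th percentile and using Euclidean distance are illustrative assumptions:

```python
import numpy as np

def select_peak_features(data, query, percentile=95):
    """Keep features where the query's value falls in the upper tail of that feature's distribution."""
    thresholds = np.percentile(data, percentile, axis=0)
    return np.where(query >= thresholds)[0]

def cohort(data, query, k=5):
    features = select_peak_features(data, query)
    if features.size == 0:                            # fall back to all features
        features = np.arange(data.shape[1])
    distances = np.linalg.norm(data[:, features] - query[features], axis=1)
    return np.argsort(distances)[:k]                  # indices of the most similar data objects

data = np.random.rand(200, 10)                        # 200 data objects, 10 numerical features
query = data[0]
print(cohort(data, query))                            # the query object itself ranks first
```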
  • Patent number: 10509952
    Abstract: An exemplary embodiment relates to the field of Automatic Face Recognition (AFR) systems. More specifically one exemplary embodiment relates at least to a method and a system capable of recognizing the face of a person using a device equipped with a camera of any kind and an associated computer, such as an embedded computer. The system is alternatively suitable to be implemented as an embedded system with minimal processing hardware capabilities, consuming very low power.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: December 17, 2019
    Assignee: IRIDA LABS S.A.
    Inventors: Dimitris Kastaniotis, Ilias Theodorakopoulos, Nikos Fragoulis
  • Patent number: 10506157
    Abstract: An image pickup apparatus includes an image pickup unit for picking up an image of a subject and generating a plurality of pickup images, an orientation obtaining unit for setting an image pickup position at a time of picking up one pickup image among the plurality of generated pickup images as a reference and obtaining an orientation related to the pickup image, an image combining unit for combining the plurality of generated pickup images and generating a panoramic image, a representative position calculation unit for calculating a representative position in a horizontal direction in the generated panoramic image, an orientation calculation unit for calculating an orientation at the calculated representative position on the basis of characteristic information of the image pickup unit, the calculated representative position, and the obtained orientation, and a recording control unit for recording the calculated orientation while being associated with the generated panoramic image.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: December 10, 2019
    Assignee: SONY CORPORATION
    Inventor: Yasuhito Shikata
  • Patent number: 10482351
    Abstract: Provided are a feature transformation device and others enabling feature transformation with high precision.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: November 19, 2019
    Assignee: NEC CORPORATION
    Inventor: Masato Ishii
  • Patent number: 10475159
    Abstract: A first enlargement circuit enlarges an image of a hierarchical layer on a lower side among images of hierarchical layers in which an image of a lower hierarchical layer is a reduced image generated by reducing an image of an upper hierarchical layer. A region extraction circuit extracts a region according to a size of the block of the hierarchical layer on the upper side from a region in the image enlarged by the first enlargement circuit that is at least wider than the block of the hierarchical layer on the upper side. A first hierarchical addition circuit combines an image of the region extracted by the region extraction circuit with the block of the hierarchical layer on the upper side, block by block.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: November 12, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Daisuke Wakamiya
  • Patent number: 10460470
    Abstract: Various embodiments include systems and methods structured to provide recognition of an object in an image using a learning module trained using decomposition of the object into components in a number of training images. The training can be based on an overall objectness score of the object, an objectness score of each component of the object, a pose of the object, and a pose of each component of the object for each training image input. Additional systems and methods can be implemented in a variety of applications.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: October 29, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Lifeng Liu, Xiaotian Yin, Yingxuan Zhu, Jun Zhang, Jian Li
  • Patent number: 10460210
    Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas to satisfy level 4 of autonomous vehicles. The method includes steps of: (a) a computing device, if a test image is acquired, instructing a non-object detector to acquire non-object location information for testing and class information of the non-objects for testing by detecting the non-objects for testing on the test image; (b) the computing device instructing the grid generator to generate section information by referring to the non-object location information for testing; (c) the computing device instructing a neural network to determine parameters for testing; (d) the computing device instructing the neural network to apply the neural network operations to the test image by using each of the parameters for testing, to thereby generate one or more neural network outputs.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: October 29, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10460723
    Abstract: A computer-implemented method is provided. The computer-implemented method is performed by a speech recognition system having at least a processor. The method includes estimating sound identification information from a neural network having periodic indications and components of a frequency spectrum of an audio signal data inputted thereto. The method further includes performing a speech recognition operation on the audio signal data to decode the audio signal data into a textual representation based on the estimated sound identification information. The neural network includes a plurality of fully-connected network layers having a first layer that includes a plurality of first nodes and a plurality of second nodes. The method further comprises training the neural network by initially isolating the periodic indications from the components of the frequency spectrum in the first layer by setting weights between the first nodes and a plurality of input nodes corresponding to the periodic indications to 0.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: October 29, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Takashi Fukuda, Osamu Ichikawa, Bhuvana Ramabhadran
  • Patent number: 10460487
    Abstract: Methods and apparatus for automatically synthesizing images are disclosed. The methods may include receiving a plurality of input frames with a common background. The methods may also include determining a number of the input frames. The methods may also include selecting, based on the number, a method to detect foregrounds of the input frames. The methods may further include using the selected method to generate an output frame comprising a combination of a plurality of the foregrounds.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: October 29, 2019
    Assignee: SHANGHAI XIAOYI TECHNOLOGY CO., LTD.
    Inventors: Dajun Ding, Lili Zhao
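A minimal sketch of selecting a foreground-detection method by frame count and combining the detected foregrounds, in the spirit of the entry above; both detectors, the frame-count cutoff, and the threshold are assumptions:

```python
import numpy as np

def foreground_masks(frames, threshold=30):
    floats = [f.astype(float) for f in frames]
    if len(floats) < 5:
        background = floats[0]                              # few frames: difference against the first frame
    else:
        background = np.median(np.stack(floats), axis=0)    # many frames: per-pixel median background
    return background, [np.abs(f - background) > threshold for f in floats]

def synthesize(frames):
    """Paste every detected foreground onto the estimated common background."""
    background, masks = foreground_masks(frames)
    output = background.copy()
    for frame, mask in zip(frames, masks):
        output[mask] = frame[mask]
    return output.astype(np.uint8)

frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(6)]
print(synthesize(frames).shape)
```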
  • Patent number: 10449956
    Abstract: A computing device in a vehicle can determine one or more objects based on 3D data points by determining a joint Bayesian probability of each of the one or more objects, conditioned on previously determined objects, and pilot the vehicle based on the determined one or more objects, wherein the objects have parameters including locations, sizes, poses, speeds, directions and predicted paths.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: October 22, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventor: Kevin Wyffels
  • Patent number: 10438050
    Abstract: An image analysis device according to the present invention includes a storage unit storing an image and information of a detected object included in the image, an input unit receiving a target image serving as a target in which an object is detected, a similar image search unit searching for a similar image having a feature quantity similar to a feature quantity extracted from the target image and the information of the object included in the similar image from the storage unit, a parameter deciding unit deciding a parameter used in a detection process performed on the target image based on the information of the object included in the similar image, a detecting unit detecting an object from the target image according to the decided parameter, a registering unit accumulating the target image in the storage unit, and a data output unit outputting the information of the detected object.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: October 8, 2019
    Assignee: Hitachi, Ltd.
    Inventors: Yuki Watanabe, Atsushi Hiroike
  • Patent number: 10430972
    Abstract: A method of calibrating a pan, tilt, zoom (PTZ) camera with a fixed camera utilizing an overview image of a scene captured by the fixed camera, and an image of the scene captured by the PTZ camera when directed in a first direction. By matching features in the overview image and the PTZ camera image, a first calibration is carried out by correlating the first direction to matching features in the overview image. A mapping between the PTZ camera image and the overview image is defined based on the matching features. The mapping is used to map an object from the PTZ camera image to the overview image. Based on an appearance of the mapped object, a quality of the mapping is calculated. If the quality is not good enough, the PTZ camera is redirected to a second direction, and a further calibration is carried out.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: October 1, 2019
    Assignee: Axis AB
    Inventors: Robin Dahlström, Mattias Gylin
  • Patent number: 10430015
    Abstract: Mechanisms for displaying an ordered sequence of images are provided. The mechanisms receive a search query as input from a user. The search query includes a start point and an end point of a virtual tour. The start point and the end point determine a boundary of the virtual tour. Based on the search query, images that are within the boundary of the virtual tour defined in the search query are collected. At least a subset of the collected images are displayed in an ordered sequence in accordance with the boundary of the virtual tour.
    Type: Grant
    Filed: August 9, 2013
    Date of Patent: October 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Sandeep R. Patil, Sarbajit K. Rakshit
  • Patent number: 10430951
    Abstract: The present disclosure discloses a method and device for straight line detection and image processing. The straight line detection method includes: dividing a horizontal axis and a vertical axis of a straight line parameter space equally, so as to divide the straight line parameter space into a plurality of parameter areas; voting for the plurality of parameter areas utilizing a coordinate of each sample pixel to obtain a vote amount of each of the parameter areas; extracting a straight line parameter and the vote amount of each of the parameter areas having the vote amount larger than a voting threshold, and grouping the straight line parameters into a group; and weighting and averaging the straight line parameter of each group and the vote amount respectively to obtain the straight line parameter of a detected straight line. The present disclosure also discloses a robot and a numerical control machine.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: October 1, 2019
    Assignee: BEIJING A&E TECHNOLOGIES CO., LTD.
    Inventor: Li Wang
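A minimal sketch of the parameter-space voting described above, using the standard (rho, theta) line parameterisation; the bin counts and vote threshold are illustrative, and the per-group weighted averaging is only noted in a comment:

```python
import numpy as np

def detect_lines(points, img_diag, n_theta=180, n_rho=200, vote_threshold=20):
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho_edges = np.linspace(-img_diag, img_diag, n_rho + 1)
    votes = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:                               # each sample pixel votes in every theta column
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.digitize(rhos, rho_edges) - 1
        votes[np.arange(n_theta), bins] += 1
    lines = []
    for t, r in zip(*np.where(votes > vote_threshold)):
        # The abstract groups nearby strong cells and weight-averages their parameters by
        # vote count; for brevity each strong cell is reported directly here.
        rho_centre = 0.5 * (rho_edges[r] + rho_edges[r + 1])
        lines.append((float(thetas[t]), float(rho_centre), int(votes[t, r])))
    return lines

points = [(i, i) for i in range(100)]                 # pixels on the line y = x
print(detect_lines(points, img_diag=150)[:3])
```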
  • Patent number: 10424052
    Abstract: An image representation method and processing device based on local PCA whitening. A first mapping module maps words and characteristics to a high-dimension space. A principal component analysis module conducts principal component analysis in each corresponding word space, to obtain a projection matrix. A VLAD computation module computes a VLAD image representation vector; a second mapping module maps the VLAD image representation vector to the high-dimension space. A projection transformation module conducts projection transformation on the VLAD image representation vector obtained by means of projection. A normalization processing module conducts normalization on characteristics obtained by means of projection transformation, to obtain a final image representation vector.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Mingmin Zhen, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Wen Gao
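A minimal sketch of computing a VLAD vector and applying a per-word projection, loosely following the entry above; the cluster centres, dimensions, and identity "projections" are placeholders for quantities that would be learned offline by PCA:

```python
import numpy as np

def vlad(descriptors, centres):
    """Sum of residuals of each local descriptor to its nearest visual word."""
    k, d = centres.shape
    assignment = np.argmin(((descriptors[:, None, :] - centres) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for word in range(k):
        members = descriptors[assignment == word]
        if len(members):
            v[word] = (members - centres[word]).sum(axis=0)
    return v

def local_pca_whiten(vlad_matrix, projections):
    """Apply a per-word projection (learned offline) to each VLAD row, then L2-normalize."""
    rows = [proj @ row for row, proj in zip(vlad_matrix, projections)]
    out = np.concatenate(rows)
    return out / (np.linalg.norm(out) + 1e-9)

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 16))              # local features of one image
centres = rng.normal(size=(8, 16))                    # visual words from a codebook
projections = [np.eye(16) for _ in range(8)]          # identity stands in for learned PCA whitening
print(local_pca_whiten(vlad(descriptors, centres), projections).shape)   # (128,)
```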
  • Patent number: 10424039
    Abstract: A first instance of a digital watermark may be embedded into an image. The first instance may encode information. A second instance of the digital watermark may be embedded into the image. The second instance may encode the information of the first instance. The second instance may be sized differently or may be larger than the first instance.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: September 24, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Clayton L. Holstun
  • Patent number: 10423773
    Abstract: Systems and methods are provided for calculating authenticity of a human user. One method comprises receiving, via a network, an electronic request from a user device; instantiating a video connection with the user device; generating, using a database of questions, a first question; providing, via the network, the generated question to the user device; analyzing video and audio data received via the connection to extract facial expressions; calculating, using convolutional neural networks, first data and second data corresponding to predetermined emotions based on the facial expressions and audio data; generating candidate emotion data using the first and second data; determining whether the candidate emotion data predicts a predetermined emotion; and generating a second question to collect additional data for aggregating with the first and second data, or determining the authenticity of the user and using the determined authenticity to decide on the user request.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: September 24, 2019
    Assignee: COUPANG, CORP.
    Inventor: Xiaojun Huang
  • Patent number: 10423827
    Abstract: A method and system for analyzing text in an image. Classification and localization information is identified for the image at a word and character level. A detailed profile is generated that includes attributes of the words and characters identified in the image. One or more objects representing a predicted source of the text are identified in the image. In one embodiment, neural networks are employed to determine localization information and classification information associated with the identified object of interest (e.g., a text string, a character, or a text source).
    Type: Grant
    Filed: July 5, 2017
    Date of Patent: September 24, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Jonathan Wu, Meng Wang, Wei Xia, Ranju Das
  • Patent number: 10412316
    Abstract: The present disclosure relates to systems and methods for image capture. Namely, an image capture system may include a camera configured to capture images of a field of view, a display, and a controller. An initial image of the field of view from an initial camera pose may be captured. An obstruction may be determined to be observable in the field of view. Based on the obstruction, at least one desired camera pose may be determined. The at least one desired camera pose includes at least one desired position of the camera. A capture interface may be displayed, which may include instructions for moving the camera to the at least one desired camera pose. At least one further image of the field of view from the at least one desired camera pose may be captured. Captured images may be processed to remove the obstruction from a background image.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: September 10, 2019
    Assignee: Google LLC
    Inventors: Michael Rubinstein, William Freeman, Ce Liu
  • Patent number: 10388024
    Abstract: An optical tracking system comprises a marker part, an image forming part, and a processing part. The marker part includes a pattern having particular information and a first lens which is spaced apart from the pattern and has a first focal length. The image forming part includes a second lens having a second focal length and an image forming unit which is spaced apart from the second lens and forms an image of the pattern by the first lens and the second lens. The processing part determines the posture of the marker part from a coordinate conversion formula between a coordinate on the pattern surface of the pattern and a pixel coordinate on the image of the pattern, and tracks the marker part by using the determined posture of the marker part. Therefore, the present invention can accurately track a marker part by a simpler and easier method.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: August 20, 2019
    Assignees: KOH YOUNG TECHNOLOGY INC., KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Hyun Ki Lee, You Seong Chae, Min Young Kim
  • Patent number: 10372984
    Abstract: A system and various methods for processing an image to produce a hierarchical image representation model, segmenting the image model using shape criteria to produce positive and negative training data sets as well as a search-space data set comprising shapes matched to a search query provided as input, and using the training data sets to train a machine learning model to improve recognition of shapes that are similar to an input query without being exact matches, thereby improving object recognition.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: August 6, 2019
    Assignee: DigitalGlobe, Inc.
    Inventors: Georgios Ouzounis, Kostas Stamatiou, Nikki Aldeborgh
  • Patent number: 10375009
    Abstract: An augmented reality based social network is described. In an example scenario, an augmented reality (AR) service stores user generated content upon receiving the content from the user. The content includes two-dimensional and/or three-dimensional article(s). The AR service also receives a selected location from the user as another input. The user content is next processed for an overlay on the selected location. The user content is subject to an expiration after an initial time period. In addition, the user content is provided for a presentation on an AR display in relation to the selected location. An evaluation of the user content is also received from a viewer viewing the user content on the AR display. The time period is extended or reduced based on the evaluation.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: August 6, 2019
    Inventor: Richard Fishman
  • Patent number: 10366302
    Abstract: A CNN based integrated circuit is configured with a set of pre-trained filter coefficients or weights as a feature extractor of input data. Multiple fully-connected networks (FCNs) are trained for use in a hierarchical category classification scheme. Each FCN is capable of classifying the input data via the extracted features in a specific level of the hierarchical category classification scheme. First, a root level FCN is used for classifying the input data among a set of top level categories. Then, a relevant next level FCN is used in conjunction with the same extracted features for further classifying the input data among a set of subcategories of the most probable category identified using the previous level FCN. The hierarchical category classification scheme continues for further detailed subcategories if desired.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: July 30, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
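The two-stage flow, extracting features once, classifying at the root level, then classifying again with the FCN chosen for that top-level category, can be sketched as below; the stand-in classifiers are toy callables, not the CNN integrated circuit or trained FCNs:

```python
import numpy as np

def hierarchical_classify(x, extract, root_fcn, sub_fcns):
    features = extract(x)                 # features are extracted once and reused at every level
    top = root_fcn(features)              # most probable top-level category
    return top, sub_fcns[top](features)   # next-level FCN chosen by the root decision

# Toy stand-ins over a 4-dimensional feature vector
extract = lambda x: np.asarray(x, dtype=float)
root_fcn = lambda f: "animal" if f.sum() > 0 else "vehicle"
sub_fcns = {
    "animal":  lambda f: "cat" if f[0] > 0 else "dog",
    "vehicle": lambda f: "car" if f[1] > 0 else "truck",
}
print(hierarchical_classify([0.3, -0.2, 0.5, 0.1], extract, root_fcn, sub_fcns))   # ('animal', 'cat')
```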
  • Patent number: 10360441
    Abstract: Embodiments of the present disclosure provide an image processing method and apparatus. The method includes detecting a human face region in each frame of an image in a to-be-processed video; locating a lip region in the human face region; extracting feature column pixels in the lip region from each frame of the image; building a lip change graph based on the feature column pixels; and recognizing a lip movement according to a pattern feature of the lip change graph.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: July 23, 2019
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hui Ni, Chengjie Wang
  • Patent number: 10360736
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include receiving a model definition of a virtual object. The virtual object may be rendered in a 3D space based on the model definition. The system may display the virtual object in the 3D space from a first perspective in a first display area of a user interface. The system may display the virtual object in the 3D space from a second perspective, different from the first, in a second display area of the user interface. The system may receive a user command input by a user through the first display area for adjusting the virtual object. The virtual object may be adjusted according to the user command. The system may display the adjusted virtual object in the 3D space from the first perspective in the first display area and from the second perspective in the second display area.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: July 23, 2019
    Assignee: Facebook, Inc.
    Inventors: Stef Marc Smet, Dolapo Omobola Falola, Michael Slater, Samantha P. Krug, Volodymyr Gigniak, Hannes Luc Herman Verlinde, Sergei Viktorovich Anpilov, Danil Gontovnik, Yu Hang Ng, Siarhei Hanchar, Milen Georgiev Dzhumerov
  • Patent number: 10356384
    Abstract: An image processing apparatus includes an image acquiring unit configured to acquire an input image generated by image capturing, a specifying unit configured to specify a first area in which unnecessary light contained in the input image is detected, a processing unit configured to perform, for the input image, a reducing process configured to reduce the first area specified by the specifying unit, and a providing unit configured to display the first area on a display unit, specified by the specifying unit in the input image, so that the first area can be recognized in the input image.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: July 16, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Fumihito Wachi
  • Patent number: 10349135
    Abstract: A method and program product includes receiving at least a media from a user of a social networking system. At least one or more humans captured in the media are identified. The humans that are not the user are removed from the media. An invitation is communicated to the removed humans. The invitation requests permission to be included in the media. The humans that have granted permission are added back into the media.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: July 9, 2019
    Inventor: Patrick Vazquez
  • Patent number: 10339367
    Abstract: One or more images including a user's face are captured, and at least one of these images is displayed to the user. These image(s) are used by a face-recognition algorithm to identify or recognize the face in the image(s). The face-recognition algorithm recognizes various features of the face and displays an indication of at least one of those features while performing the face-recognition algorithm. These indications of features can be, for example, dots displayed on the captured image. Additionally, an indication of progress of the face-recognition algorithm is displayed near the user's face. This indication of progress of the face-recognition algorithm can be, for example, a square or other geometric shape in which at least a portion of the user's face is located.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brigitte Evelyn Eder, Aaron Naoyoshi Sheung Yan Woo, Remi Wesley Ogundokun, Benjamin Alan Goosman
  • Patent number: 10339372
    Abstract: Analog written content of handwritten drawings or written characters can be transformed into digital ink via an analog-to-ink service. The analog-to-ink service can receive a static image of the analog written content, extract analog strokes from other information, such as background, in the static image, and then convert the analog strokes to digital ink strokes, for example, by populating an ink container with at least two parameters for defining the digital ink strokes. The at least two parameters can include a pressure, a tilt, a direction, a beginning point, an end point, a color, an order, an overlap, a language, and a time. The analog-to-ink service can provide the ink container to a content creation application that supports inking so that a user can manipulate the content in an inking environment.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: July 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ian Mikutel, Lisa C. Cherian, Nassr Albahadly, Gilles L. Peron
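A minimal sketch of an ink-container data structure holding stroke parameters like those listed above; the field names and defaults are illustrative, not the service's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InkStroke:
    points: List[Tuple[float, float]]     # beginning point ... end point
    pressure: float = 0.5
    tilt: float = 0.0
    color: str = "#000000"
    order: int = 0                        # drawing order among strokes

@dataclass
class InkContainer:
    strokes: List[InkStroke] = field(default_factory=list)

    def add(self, stroke: InkStroke) -> None:
        stroke.order = len(self.strokes)  # preserve stroke order as strokes are populated
        self.strokes.append(stroke)

container = InkContainer()
container.add(InkStroke(points=[(0.0, 0.0), (10.0, 12.0)], pressure=0.7))
print(len(container.strokes), container.strokes[0].order)
```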
  • Patent number: 10339463
    Abstract: A computerized method for creating a function model based on a non-parametric, data-based model, e.g., a Gaussian process model, includes: providing training data including measuring points having one or multiple input variables, the measuring points each being assigned an output value of an output variable; providing a basic function; modifying the training data with the aid of difference formation between the function values of the basic function and the output values at the measuring points of the training data; creating the data-based model based on the modified training data; and providing the function model as a function of the data-based model and the basic function.
    Type: Grant
    Filed: April 7, 2014
    Date of Patent: July 2, 2019
    Assignee: ROBERT BOSCH GMBH
    Inventors: Heiner Markert, Rene Diener, Ernst Kloppenburg, Felix Streichert, Michael Hanselmann
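A minimal sketch of the residual-modelling idea described above, with scikit-learn's GaussianProcessRegressor standing in for the non-parametric data-based model and a toy basic function:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor   # assumed available

def build_function_model(X, y, basic_function):
    residuals = y - basic_function(X)                    # difference formation at the measuring points
    gp = GaussianProcessRegressor().fit(X, residuals)    # data-based model trained on the residuals
    return lambda Xq: basic_function(Xq) + gp.predict(Xq)  # function model = basic function + data-based model

basic = lambda X: 2.0 * X[:, 0]                          # toy basic function
X = np.linspace(0.0, 1.0, 30).reshape(-1, 1)             # measuring points with one input variable
y = 2.0 * X[:, 0] + 0.3 * np.sin(6.0 * X[:, 0])          # output values = basic trend + structure to learn
model = build_function_model(X, y, basic)
print(model(np.array([[0.25], [0.75]])))
```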
  • Patent number: 10332634
    Abstract: Methods and apparatus for using a biomarker signature to determine whether a breast tumor is malignant by comparing imaging data for the breast tumor to the signature, the signature derived using Quantitative Textural Analysis (QTA) and expressed in the form: Y=XCx+B; where: Y is a predictive indicator ranging from 0 to 1; B is a constant; Cx is a coefficient; and X is the mean positive pixel (MPP) value associated with the breast tumor under inspection.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: June 25, 2019
    Assignee: Imaging Endpoints II LLC
    Inventor: Ronald L. Korn
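Evaluating the signature is a single affine step; the sketch below uses made-up coefficient, constant, and MPP values purely to show the arithmetic, and clips the result to the stated 0 to 1 range:

```python
def predictive_indicator(mpp, coefficient, constant):
    """Y = X*Cx + B, clipped to the 0..1 range of the predictive indicator."""
    return min(max(mpp * coefficient + constant, 0.0), 1.0)

# Hypothetical values: mean positive pixel value 42.0, Cx = 0.012, B = 0.1
print(predictive_indicator(mpp=42.0, coefficient=0.012, constant=0.1))   # about 0.604
```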
  • Patent number: 10325174
    Abstract: Systems and methods for performing Census Transforms that include an input from an image, with a support window created within the image, and a kernel within the support window. The Census Transform calculations and comparisons are performed within the kernel windows. One disclosed method allows for a previously performed comparison to be calculated and compared as an "if not equal, invert; if equal, use previous comparison" hardware design. Alternatively, a new Census Transform is disclosed which always inverts a previously made comparison. This new approach can be demonstrated to be equivalent to applying the original Census Transform on a pre-processed input kernel, where the pre-processing step adds a fractional position index to each pixel within the N×N kernel. The fractional positional index ensures that no two pixels are equal to one another, and thereby makes the original Census algorithm on the pre-processed kernel the same as the new Census algorithm on the original kernel.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: June 18, 2019
    Assignee: Texas Instruments Incorporated
    Inventors: Anish Reghunath, Hetul Sanghvi, Michael Lachmayr, Mihir Mody
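A minimal software sketch of a Census Transform over an N x N kernel, including the tie-breaking idea of adding a fractional positional index so that no two kernel pixels compare as equal; the offset magnitude and bit ordering are arbitrary choices, not the disclosed hardware design:

```python
import numpy as np

def census_transform(img, n=3):
    img = img.astype(float)
    h, w = img.shape
    r = n // 2
    # Fractional positional index: a tiny position-dependent offset added to every kernel
    # pixel so that no two pixels (centre included) are exactly equal.
    eps = np.arange(n * n).reshape(n, n) / (n * n * 1e6)
    out = np.zeros((h, w), dtype=np.uint32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1] + eps
            centre = patch[r, r]
            bits = (patch < centre).astype(np.uint8).ravel()
            bits = np.delete(bits, n * n // 2)        # the centre is not compared with itself
            out[y, x] = int("".join(map(str, bits)), 2)
    return out

img = np.random.randint(0, 256, (8, 8))
print(census_transform(img))
```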
  • Patent number: 10319131
    Abstract: A processor-implemented method and system of this disclosure are configured to correct or resolve artifacts in a received digital image, using a selection of one or more encompassment measures, each encompassing the largest artifact and an optional second smaller artifact. The processor herein is configured to calculate difference in Gaussians using blurred versions of the input digital image and optionally, the input digital image itself; to composite the resulting difference in Gaussians and the input digital image; and to determine pixels with properties of invariant values, also referred to as invariant pixels. The values in the properties of the invariant pixels are then applied to the artifacts regions to correct these artifacts, thereby generating the modified digital image in which the artifact regions are more or less differentiable to a human eye when compared with the digital image.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: June 11, 2019
    Inventor: David Sarma
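A minimal sketch of the difference-of-Gaussians and invariant-pixel steps mentioned above, using SciPy's gaussian_filter; the sigma values and the tolerance defining "invariant" are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter             # assumed available

def difference_of_gaussians(img, sigma_small=1.0, sigma_large=3.0):
    """Subtract two blurred versions of the input image."""
    img = img.astype(float)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

def invariant_pixels(img, tolerance=1.0):
    """Pixels whose value barely changes between the two blurred versions (one possible reading)."""
    return np.abs(difference_of_gaussians(img)) < tolerance

img = np.random.randint(0, 256, (64, 64))
print(invariant_pixels(img).mean())                   # fraction of approximately invariant pixels
```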
  • Patent number: 10318577
    Abstract: A method for identifying a need of a user based on an image word interpretation, the method comprising: assigning prime keywords and style keywords to a plurality of images, wherein the images having the prime and style keywords assigned thereto are stored by at least one image category of the images; receiving a target keyword from the user depending on a need of a user; displaying images within the at least one image category, and receiving a preferred image by the at least one category from the user; extracting a target image that matches the target keyword based on the received preferred image; and displaying the extracted target image in accordance with at least one way.
    Type: Grant
    Filed: April 19, 2015
    Date of Patent: June 11, 2019
    Assignee: PERCEPTION
    Inventor: Kisuk Oh