Feature Extraction Patents (Class 382/190)
  • Patent number: 11025907
    Abstract: Convolutional neural networks (CNN) that determine a mode decision (e.g., block partitioning) for encoding a block include feature extraction layers and multiple classifiers. A non-overlapping convolution operation is performed at a feature extraction layer by setting a stride value equal to a kernel size. The block has a N×N size, and a smallest partition output for the block has a S×S size. Classification layers of each classifier receive feature maps having a feature dimension. An initial classification layer receives the feature maps as an output of a final feature extraction layer. Each classifier infers partition decisions for sub-blocks of size (αS)×(αS) of the block, wherein α is a power of 2 and α=2, . . . , N/S, by applying, at some successive classification layers, a 1×1 kernel to reduce respective feature dimensions; and outputting by a last layer of the classification layers an output corresponding to a N/(αS)×N/(αS)×1 output map.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: June 1, 2021
    Assignee: GOOGLE LLC
    Inventors: Shan Li, Claudionor Coelho, Aki Kuusela, Dake He
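    Illustrative sketch (not the patented implementation): the non-overlapping convolution described in the abstract above can be reproduced by setting stride equal to the kernel size, with a 1×1-kernel head standing in for one classifier; the layer widths, the 64×64 block size, and the use of PyTorch are assumptions made for demonstration only.

      import torch
      import torch.nn as nn

      # Non-overlapping feature extraction: stride == kernel size, so every
      # output position sees a disjoint patch of the input block.
      feature_extractor = nn.Sequential(
          nn.Conv2d(1, 16, kernel_size=2, stride=2),   # 64x64 -> 32x32
          nn.ReLU(),
          nn.Conv2d(16, 32, kernel_size=2, stride=2),  # 32x32 -> 16x16
          nn.ReLU(),
      )

      # One classifier head: successive 1x1 kernels shrink the feature
      # dimension down to a single-channel partition-decision map.
      classifier = nn.Sequential(
          nn.Conv2d(32, 8, kernel_size=1),
          nn.ReLU(),
          nn.Conv2d(8, 1, kernel_size=1),
      )

      block = torch.randn(1, 1, 64, 64)     # one 64x64 block (illustrative size)
      decision_map = classifier(feature_extractor(block))
      print(decision_map.shape)             # torch.Size([1, 1, 16, 16])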
  • Patent number: 11019317
    Abstract: A method of photographing a subject includes storing a library of photographic scene designs in a computer memory, training a photographic scene detection model by a computer processing device using machine learning from sample portrait images comprising known photographic scenes defined in the library of photographic scene designs, capturing a production portrait photograph, using a digital camera, of a subject in a photographic scene that is defined by a photographic scene design in the library of photographic scene designs, automatically detecting the photographic scene in the production portrait photograph using the photographic scene detection model operating on one or more computer processors, and processing the production portrait photograph by an image processing system to personalize the photographic scene detected in the production portrait photograph.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: May 25, 2021
    Assignee: Shutterfly, LLC
    Inventors: Leo Cyrus, Keith A. Benson
  • Patent number: 11017296
    Abstract: The present invention extends to methods, systems, and computer program products for classifying time series image data. Aspects of the invention include encoding motion information from video frames in an eccentricity map. An eccentricity map is essentially a static image that aggregates apparent motion of objects, surfaces, and edges, from a plurality of video frames. In general, eccentricity reflects how different a data point is from the past readings of the same set of variables. Neural networks can be trained to detect and classify actions in videos from eccentricity maps. Eccentricity maps can be provided to a neural network as input. Output from the neural network can indicate if detected motion in a video is or is not classified as an action, such as, for example, a hand gesture.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: May 25, 2021
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Gaurav Kumar Singh, Pavithra Madhavan, Bruno Jales Costa, Gintaras Vincent Puskorius, Dimitar Petrov Filev
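    A minimal sketch of the eccentricity-map idea from the abstract above, assuming the commonly used recursive per-pixel mean/variance formulation; the frame shapes, the max-aggregation over time, and the epsilon guard are illustrative choices, not the patented method.

      import numpy as np

      def eccentricity_map(frames):
          """Aggregate per-pixel eccentricity over grayscale frames (T, H, W) in [0, 1].

          Large values mark pixels whose current intensity departs from their own
          running statistics, i.e. apparent motion folded into one static image.
          """
          mean = np.zeros(frames.shape[1:], dtype=np.float64)
          var = np.zeros_like(mean)
          ecc = np.zeros_like(mean)
          for k, frame in enumerate(frames, start=1):
              # Recursive per-pixel mean and variance updates.
              mean = (k - 1) / k * mean + frame / k
              if k > 1:
                  var = (k - 1) / k * var + (frame - mean) ** 2 / (k - 1)
                  # Eccentricity of the current reading w.r.t. the past readings.
                  ecc_k = 1.0 / k + (frame - mean) ** 2 / (k * var + 1e-12)
                  ecc = np.maximum(ecc, ecc_k)
          return ecc

      video = np.random.rand(30, 120, 160)       # stand-in for 30 video frames
      motion_map = eccentricity_map(video)       # single-channel CNN input
      print(motion_map.shape)                    # (120, 160)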
  • Patent number: 11017271
    Abstract: Examples of techniques for interactive generation of labeled data and training instances are provided. According to one or more embodiments of the present invention, a computer-implemented method for interactive generation of labeled data and training instances includes presenting, by the processing device, control labeling options to a user. The method further includes selecting, by a user, one or more of the presented control labeling options. The method further includes selecting, by a processing device, a representative set of unlabeled data samples based at least in part on the control labeling options selected by the user. The method further includes generating, by a processing device, a set of suggested labels for each of the unlabeled data samples.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: May 25, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nirmit V. Desai, Dawei Li, Theodoros Salonidis
  • Patent number: 11010643
    Abstract: A system comprising a database and a user device. The database may be configured to (i) store metadata generated in response to objects detected in a video, (ii) store a confidence level associated with the metadata, (iii) provide to a plurality of users (a) data portions of the video and (b) a request for feedback, (iv) receive the feedback and (v) update the confidence level associated with the metadata in response to the feedback. The user device may be configured to (i) view the data portions, (ii) accept input to receive the feedback from one of said plurality of users and (iii) communicate the feedback to the database. The confidence level may indicate a likelihood of correctness of the objects detected in response to video analysis performed on the video. The database may track user statistics for the plurality of users based on the feedback.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: May 18, 2021
    Assignee: WAYLENS, INC
    Inventor: Jeffery R. Campbell
  • Patent number: 11012592
    Abstract: An image analyzing method of detecting a dimension of a region of interest inside an image is applied to an image analyzing device. The image analyzing method includes positioning an initial triggering pixel unit within a detective identifying area inside the image, and assigning a first detection region via a center of the initial triggering pixel unit, positioning a first based pixel unit conforming to a first target value inside the first detection region, applying a mask via a center of the first based pixel unit to determine whether a first triggering pixel unit exists inside the mask, and utilizing a determination result of the initial triggering pixel unit and the first triggering pixel unit to decide a maximal dimension of the region of interest.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: May 18, 2021
    Assignee: VIVOTEK INC.
    Inventors: Hsiang-Sheng Wang, Shih-Hsuan Chen
  • Patent number: 11012579
    Abstract: An image processing apparatus receives destination information for use in data transmission, performs control, based on the received destination information including a destination in an email address format, so that a first screen, which is used to transmit data external to the image processing apparatus, and on which a transmission destination of the data is displayed, based on the received destination information, is displayed on the operation unit, and performs control, based on the received destination information including only a destination in a fax format so that a second screen, different from the first screen and used to perform fax transmission, on which a transmission destination of the fax transmission is displayed, based on the received destination information, is displayed on the operation unit.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: May 18, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yosui Naito
  • Patent number: 11003867
    Abstract: Approaches for cross-lingual regularization for multilingual generalization include a method for training a natural language processing (NLP) deep learning module. The method includes accessing a first dataset having a first training data entry, the first training data entry including one or more natural language input text strings in a first language; translating at least one of the one or more natural language input text strings of the first training data entry from the first language into a second language; creating a second training data entry by starting with the first training data entry and substituting the at least one of the natural language input text strings in the first language with its translation into the second language; adding the second training data entry to a second dataset; and training the deep learning module using the second dataset.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 11, 2021
    Assignee: salesforce.com, inc.
    Inventors: Jasdeep Singh, Nitish Shirish Keskar, Bryan McCann
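    A minimal sketch of the augmentation step described in the abstract above; the dataset schema and the translate callable are placeholders for whatever machine-translation system is actually used.

      def augment_with_translations(dataset, translate, target_lang="de"):
          """Build a second dataset by swapping each input text for its translation.

          dataset: list of dicts such as {"text": ..., "label": ...} (assumed schema).
          translate: callable (text, target_lang) -> translated text.
          """
          augmented = []
          for entry in dataset:
              new_entry = dict(entry)                     # start from the original entry
              new_entry["text"] = translate(entry["text"], target_lang)
              augmented.append(new_entry)                 # add to the second dataset
          return dataset + augmented                      # train on original + translated

      # Usage with a trivial stand-in translator (hypothetical):
      toy = [{"text": "the film was great", "label": "positive"}]
      print(augment_with_translations(toy, lambda text, lang: f"[{lang}] {text}"))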
  • Patent number: 11004239
    Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: May 11, 2021
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
  • Patent number: 11004205
    Abstract: A hardware accelerator for histogram of oriented gradients computation is provided that includes a gradient computation component configured to compute gradients Gx and Gy of a pixel, a bin identification component configured to determine a bin id of an angular bin for the pixel based on a plurality of representative orientation angles, Gx, and signs of Gx and Gy, and a magnitude component configured to determine a magnitude of the gradients Gmag based on the plurality of representative orientation angles and the bin id.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: May 11, 2021
    Assignee: Texas Instruments Incorporated
    Inventor: Aishwarya Dubey
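    A software sketch of the arctan-free binning and square-root-free magnitude idea described above, assuming six unsigned 30° bins whose centers serve as the representative orientation angles; the exact comparisons and bit widths of the hardware accelerator are not reproduced here.

      import numpy as np

      def hog_bin_and_magnitude(gx, gy, n_bins=6):
          """Assign an angular bin and approximate the gradient magnitude for one
          pixel without computing arctan or a square root."""
          # Fold into the upper half-plane (unsigned gradient, angle in [0, 180)).
          if gy < 0:
              gx, gy = -gx, -gy
          mirror = gx < 0                          # true angle lies in (90, 180)
          gx_a = abs(gx)

          # Bin id from comparisons against tan() of the bin edges (no arctan).
          edges = np.tan(np.deg2rad([30.0, 60.0]))
          quadrant_bin = int(gy > gx_a * edges[0]) + int(gy > gx_a * edges[1])
          bin_id = (n_bins - 1 - quadrant_bin) if mirror else quadrant_bin

          # Magnitude as a projection onto the bin's representative angle (no sqrt).
          theta = np.deg2rad(15.0 + 30.0 * bin_id)
          gmag = abs(gx * np.cos(theta) + gy * np.sin(theta))
          return bin_id, gmag

      print(hog_bin_and_magnitude(3.0, 4.0))       # (1, ~4.95) for a ~53 degree gradient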
  • Patent number: 11006046
    Abstract: Embodiments of the disclosure disclose a method and an apparatus for image processing, and a mobile terminal. The method may include: acquiring image parameters of a real-time preview image displayed in a preview interface; evaluating, based on the image parameters and a pre-established image evaluation model, the real-time preview image to obtain an evaluation result; and displaying the evaluation result. The method enables the user of the mobile terminal to obtain the evaluation result of the real-time preview image displayed in the preview interface in real time, so that the user can assess the quality of the current real-time preview image in real time and adjust the real-time preview image as needed, in order to obtain images with better evaluation results, thereby improving the overall quality of the images captured by the mobile terminal.
    Type: Grant
    Filed: August 11, 2019
    Date of Patent: May 11, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Yaoyong Liu, Yan Chen
  • Patent number: 10998211
    Abstract: In a semiconductor fabrication apparatus composed of a plurality of components, such as fluid control devices, a manager should be able to identify components intuitively, and information on the identified component should be provided to the manager in an easy-to-understand manner. In a system in which a manager terminal 3 and an information processor 2 are communicably configured via networks NW1 and NW2, the manager terminal 3 receives component information on a semiconductor fabrication apparatus 1 from the information processor 2. Upon the identification of the position of a component constituting the semiconductor fabrication apparatus 1 on the captured image of the semiconductor fabrication apparatus 1 using an identification processing unit 32, a compositing processing unit 33 creates a composite image in which component information is composited with the captured image at the position of the identified component, and an image display unit 34 displays the composite image.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: May 4, 2021
    Assignee: Fujikin Inc.
    Inventors: Ryutaro Tanno, Takahiro Mastuda, Tsutomu Shinohara
  • Patent number: 10997232
    Abstract: A system and method for automated detection of figure element reuse. The system can receive articles or other publications from a user input or an automated input. The system then extracts images from the articles and compares them to reference images from a historical database. The comparison and detection of matches occurs via a copy-move detection algorithm implemented by a processor of the system. The processor first locates and extracts keypoints from a submission image and finds matches between those keypoints and the keypoints from a reference image using a near neighbor algorithm. The matches are clustered and the clusters are compared for keypoint matching. Matched clusters are further compared for detectable transformations. The processor may additionally implement natural language processing to filter matches based on the context of the use of the submission image in the submission and a patch detector for removing false positive features.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: May 4, 2021
    Assignees: SYRACUSE UNIVERSITY, Northwestern University, Rehabilitation Institute of Chicago
    Inventors: Daniel Ernesto Acuna, Konrad Kording
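    A simplified sketch of the keypoint-matching stage described above, using ORB descriptors and DBSCAN clustering as stand-ins; the actual system's near-neighbour algorithm, transformation checks, NLP filtering, and patch detector are not reproduced, and all thresholds here are assumptions.

      import cv2
      import numpy as np
      from sklearn.cluster import DBSCAN

      def possible_reuse(submission_path, reference_path, min_matches=10):
          """Flag a possible reused figure element between a submission image and a
          reference image via keypoint matching plus spatial clustering."""
          img_a = cv2.imread(submission_path, cv2.IMREAD_GRAYSCALE)
          img_b = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

          orb = cv2.ORB_create(nfeatures=2000)             # keypoint detector/descriptor
          kp_a, des_a = orb.detectAndCompute(img_a, None)
          kp_b, des_b = orb.detectAndCompute(img_b, None)
          if des_a is None or des_b is None:
              return False

          # Nearest-neighbour matching of the binary descriptors.
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des_a, des_b)

          # Cluster matched keypoint locations in the submission image; a dense
          # cluster of consistent matches suggests a reused (possibly transformed) region.
          pts = np.float32([kp_a[m.queryIdx].pt for m in matches])
          if len(pts) < min_matches:
              return False
          labels = DBSCAN(eps=30.0, min_samples=min_matches).fit_predict(pts)
          return bool((labels >= 0).any())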
  • Patent number: 10990845
    Abstract: Disclosed is a method for determining a relational imprint between two images including the following steps: the implementation of a first image and of a second image; a phase of calculating vectors of similarity between tiles belonging respectively to the first and second images, the similarity vectors forming a field of imprint vectors, the field of imprint vectors including at least one haphazard region that is disordered in the sense of an entropy criterion; and a phase of recording, as the relational imprint, a representation of the calculated field of imprint vectors. Also disclosed is a method for authenticating a candidate image with respect to an authentic image implementing the method for determining a relational imprint.
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: April 27, 2021
    Assignee: KERQUEST
    Inventors: Yann Boutant, Thierry Fournel
  • Patent number: 10989600
    Abstract: Embodiments herein disclose automated methods and systems to fill background and interstitial space in the visual object layout with one or more colors that bleed/blend into each other. Embodiments herein automate the creation of multi-colored backgrounds for filling the interstitial space.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: April 27, 2021
    Inventors: Laurent Francois Martin, Narendra Dubey, Jean Pierre Gehrig
  • Patent number: 10984228
    Abstract: Implementations of the present specification provide an interaction behavior detection method, apparatus, system, and device. The method includes the following: obtaining a to-be-detected depth image photographed by a depth photographing device, extracting a foreground image used to represent a moving object from the to-be-detected depth image, obtaining spatial coordinate information of the moving object based on the foreground image, comparing the spatial coordinate information of the moving object with spatial coordinate information of a shelf in a rack, and determining an article touched by the moving object based on a comparison result and one or more articles on the shelf.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 20, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Kaiming Huang, Xiaobo Zhang, Chunlin Fu, Hongbo Cai, Li Chen, Le Zhou, Xiaodong Zeng, Feng Lin
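    A rough sketch of the comparison step described in the abstract above, assuming a static background depth map, pinhole-camera intrinsics, and axis-aligned shelf boxes; all of these, and the thresholds, are illustrative placeholders rather than the patented pipeline.

      import numpy as np

      def touched_articles(depth, background, shelf_boxes,
                           fx=500.0, fy=500.0, cx=160.0, cy=120.0, motion_thresh=0.05):
          """Return the articles whose shelf box contains part of the moving object.

          depth, background: (H, W) depth maps in metres.
          shelf_boxes: dict mapping article name -> ((xmin, ymin, zmin), (xmax, ymax, zmax)).
          """
          foreground = np.abs(depth - background) > motion_thresh   # moving-object pixels

          # Back-project foreground pixels to 3D camera coordinates.
          v, u = np.nonzero(foreground)
          z = depth[v, u]
          x = (u - cx) * z / fx
          y = (v - cy) * z / fy
          points = np.stack([x, y, z], axis=1)

          touched = []
          for name, (lo, hi) in shelf_boxes.items():
              inside = np.all((points >= np.array(lo)) & (points <= np.array(hi)), axis=1)
              if inside.any():
                  touched.append(name)
          return touched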
  • Patent number: 10986328
    Abstract: A device, method and system for utilizing an optical array generator to generate dynamic patterns in a dental camera for projection onto the surface of an object, while reducing noise and increasing data density for three-dimensional (3D) measurement. Projected light patterns are used to generate optical features on the surface of the object to be measured and optical 3D measuring methods which operate according to triangulation principles are used to measure the object.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: April 20, 2021
    Assignee: DENTSPLY SIRONA INC.
    Inventor: Michael Tewes
  • Patent number: 10984610
    Abstract: The present invention relates to methods for interacting with virtual objects comprising placing a flat image of an augmented reality object in the field of view of the video camera of the device for creating and viewing virtual objects of augmented reality, determining colors and recognizing patterns in the images received from the video camera of the device for creating and viewing objects of augmented reality, and coloring the augmented reality object in accordance with the colors defined on the painted image obtained from the camera of the device. A correspondence is established between the patterns and colors of the painted image and actions of the augmented reality objects, depending on the color, color combination, pattern or colored pattern in the images obtained from the video camera of the device for creating and viewing the augmented reality objects.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: April 20, 2021
    Assignee: DEVAR ENTERTAINMENT LIMITED
    Inventors: Andrei Valerievich Komissarov, Anna Igorevna Belova
  • Patent number: 10977516
    Abstract: A method for identifying objects, in particular substrates, in particular wafers, includes: a prioritization process for generating a prioritized list of identification strategies including at least one identification strategy in at least one prioritization step; and an identification process for capturing at least one image of at least one object in at least one image capturing step according to at least one highest priority identification strategy of the prioritized list and processing said image in at least one image processing step according to the highest priority identification strategy of the prioritized list.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: April 13, 2021
    Assignee: IOSS Intelligente Optische Sensoren & Systeme GmbH
    Inventors: Joachim Gaessler, Harald Richter, Christian Konz
  • Patent number: 10976549
    Abstract: Systems and methods for generating a face model for a user of a head-mounted device are disclosed. The head-mounted device can include one or more eye cameras configured to image the face of the user while the user is putting the device on or taking the device off. The images obtained by the eye cameras may be analyzed using a stereoscopic vision technique, a monocular vision technique, or a combination, to generate a face model for the user.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: April 13, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Gholamreza Amayeh, Adrian Kaehler, Douglas Lee
  • Patent number: 10977515
    Abstract: An image retrieving apparatus includes a pose estimating unit which recognizes pose information of a retrieval target including a plurality of feature points from an input image, a features extracting unit which extracts features from the pose information and the input image, an image database which accumulates the features in association with the input image, a query generating unit which generates a retrieval query from pose information specified by a user, and an image retrieving unit which retrieves images including similar poses according to the retrieval query from the image database.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: April 13, 2021
    Assignee: HITACHI, LTD.
    Inventors: Yuki Watanabe, Kenichi Morita, Tomokazu Murakami, Atsushi Hiroike, Quan Kong
  • Patent number: 10977520
    Abstract: Provided is a process that includes: determining that a training set lacks an image of an object with a given pose, context, or camera; composing, based on the determination, a video capture task; obtaining a candidate video; selecting a subset of frames of the candidate video as representative; determining that a given frame among the subset depicts the object from the given pose, context, or camera; and augmenting the training set with the given frame.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: April 13, 2021
    Assignee: Slyce Acquisition Inc.
    Inventors: Adam Turkelson, Kyle Martin, Christopher Birmingham, Sethu Hareesh Kolluru
  • Patent number: 10970522
    Abstract: The present disclosure provides a data processing method, an electronic device and a computer-readable storage medium. The method includes: acquiring first image data of images stored in a local device and second image data of images stored in another device; comparing the first image data with the second image data to determine a storage type of an image contained in the first image data and/or contained in the second image data; establishing a mapping relation between a first face group contained in the first image data and a second face group contained in the second image data according to the storage type; and processing the first image data and the second image data for the first face group and the second face group having the mapping relation with each other.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: April 6, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Deyin Chen
  • Patent number: 10970896
    Abstract: In order to select a suitable background image for an image subjected to privacy protection, an image processing apparatus acquires a captured image, and extracts a subject region corresponding to a predetermined subject from the captured image. The image processing apparatus selects a background image to be used for processing from a plurality of background images, based on the captured image, and performs processing for abstracting the extracted subject region for the selected background image.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: April 6, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Atsushi Kawano
  • Patent number: 10970863
    Abstract: The present invention generally relates to human feature analysis. Specifically, embodiments of the present invention relate to a system and method for utilizing one or more overlay grids in conjunction with imagery of a human face or breast area in order to analyze beauty and attractiveness of the face or breast area in the underlying imagery. In an exemplary embodiment, the system utilizes computerized image capture features and processing features to analyze a human face or breast area in relation to a plurality of overlay grids in order to identify and empirically measure beauty and attractiveness based on the alignment of said overlay grids with specific features of the human face or breast area and whether a successful fit exists with specifically defined facial or breast grids or by how close the individual's features align with specifically defined facial or breast grids.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: April 6, 2021
    Inventor: Andrew John-Haidukewych Hayduke
  • Patent number: 10970523
    Abstract: There is provided an application stored in a computer-readable storage medium for a first terminal to perform a method of providing a video call service, the method including: receiving a first video stream of a first user of the first terminal when the application that provides the video call service is executed; extracting facial feature points of the first user from the first video stream; predicting whether the first user is a bad user by applying distribution information of the facial feature points of the first user to a learning model for bad user identification based on facial feature points of a plurality of users; and controlling display of a component on an execution screen of the application based on a result of the predicting.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: April 6, 2021
    Assignee: HYPERCONNECT, INC.
    Inventors: Sang Il Ahn, Hyeok Choi
  • Patent number: 10969342
    Abstract: A pesticide droplet leaf transmembrane absorption observation apparatus: an outer anti-mist glass cover (13) is shrouded over a lower base plate (14) to form an outer anti-mist chamber, used for accommodating a whole plant, a support frame (12) being arranged inside an outer atomising chamber, an inner anti-mist glass cover (9) being shrouded over an upper top plate of the support frame (12) to form an observation chamber, an outer atomising nozzle (6) and an inner atomising nozzle (8) respectively being inserted into the outer anti-mist chamber and the observation chamber, a temperature controller (2) respectively being connected to the outer atomising nozzle (6) and the inner atomising nozzle (8); a temperature sensor (7) is arranged inside the observation chamber and is connected to a data collection computer (1), a leaf pressing mechanism (10) being arranged inside the observation chamber and being used for pressing the leaves of the plant; a first digital camera (3) and a first microscope (4) and a secon
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: April 6, 2021
    Assignee: JIANGSU UNIVERSITY
    Inventors: Jianmin Gao, Xu Liu
  • Patent number: 10970631
    Abstract: Provided is a method of machine learning for a convolutional neural network (CNN). The method includes: receiving input target data; determining whether to initiate incremental learning on the basis of a difference between a statistical characteristic of the target data with respect to the CNN and a statistical characteristic of previously used training data with respect to the CNN; determining a set of kernels with a high degree of mutual similarity in each convolution layer included in the CNN when the incremental learning is determined to be initiated; and updating a weight between nodes to which kernels included in the set of kernels with a high degree of mutual similarity are applied.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: April 6, 2021
    Assignee: AUTOCRYPT CO., LTD.
    Inventors: Sang Gyoo Sim, Seok Woo Lee, Seung Young Park, Duk Soo Kim
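    A compact sketch of the two decisions described in the abstract above, with a symmetric divergence over per-feature Gaussian summaries standing in for the statistical-characteristic comparison and cosine similarity over flattened kernels standing in for the mutual-similarity grouping; both choices and both thresholds are assumptions.

      import numpy as np

      def needs_incremental_learning(target_feats, train_feats, thresh=0.5):
          """Compare activation statistics on new target data vs. the original training data."""
          mu_t, mu_s = target_feats.mean(axis=0), train_feats.mean(axis=0)
          sd_t, sd_s = target_feats.std(axis=0) + 1e-8, train_feats.std(axis=0) + 1e-8
          gap = np.mean((mu_t - mu_s) ** 2 / (sd_t * sd_s)
                        + 0.5 * (sd_t / sd_s + sd_s / sd_t) - 1.0)
          return gap > thresh

      def similar_kernel_sets(layer_weights, sim_thresh=0.9):
          """Group kernels of one conv layer (out_ch, in_ch, kh, kw) whose flattened
          weights have high cosine similarity; their node weights are the ones updated."""
          flat = layer_weights.reshape(layer_weights.shape[0], -1)
          flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
          sim = flat @ flat.T
          groups = []
          for i in range(len(flat)):
              group = set(np.flatnonzero(sim[i] >= sim_thresh).tolist())
              if len(group) > 1 and group not in groups:
                  groups.append(group)
          return groups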
  • Patent number: 10965975
    Abstract: A wearable apparatus is provided for identifying a person in an environment of a user of the wearable apparatus based on non-facial information. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from the environment of the user, and a processing device programmed to analyze a first image of the plurality of images to determine that a face appears in the first image. The processing device also analyzes a second image of the plurality of images to identify an item of non-facial information appearing in the second image that was captured within a time period including a time when the first image is captured. The processing device also determines identification information of a person associated with the face based on the item of non-facial information.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: March 30, 2021
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Patent number: 10963697
    Abstract: Distributed systems and methods for generating composite media include receiving a media context that defines media that is to be generated, the media context including: a definition of a sequence of media segment specifications and an identification of a set of remote devices. For each media segment specification, a reference segment may be generated and transmitted to at least one remote device. A media segment may be received from each of the remote devices, the media segment having been recorded by a camera. Verified media sequences may replace the corresponding reference segment. The media segments may be aggregated and an updated sequence of media segments may be defined. An instance of the media context that includes a subset of the updated sequence of media segments may then be generated.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: March 30, 2021
    Inventor: Philip Martin Meier
  • Patent number: 10963759
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: March 30, 2021
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
  • Patent number: 10956783
    Abstract: An image processing method and apparatus, and a computer readable medium are provided. The method includes obtaining an image. The image is processed using a preset training model that is a function relationship model of a feature sample image and an activation function of the feature sample image. The feature sample image includes an image satisfying an image feature value extraction condition. A target image is obtained that corresponds to the image according to a processing result of the preset training model.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: March 23, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yong Sen Zheng, Kai Ning Huang
  • Patent number: 10949664
    Abstract: Methods and apparatus for training and utilizing an artificial neural network (ANN) are provided. A computing device can receive training documents including text. The computing device can parse the training documents to determine training data items. Each training data item can include a training label related to text within the training documents and location information indicating a location of text related to the training label. An ANN can be trained to recognize text using the training data items and training input that includes the training documents. After training the ANN, a request to predict text in application documents that differ from the training documents can be received. The application documents can include second text. A prediction of the second text can be determined by applying the trained ANN to the application documents. After determining the prediction of the second text, information related to the second text can be provided.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: March 16, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Dongpei Su
  • Patent number: 10948975
    Abstract: A media guidance application that allows users to associate input schemes with physical objects in an augmented reality environment is disclosed. Specifically, the media guidance application may recognize physical objects in an augmented reality environment and allow users to identify input schemes to associate with the physical objects. Such input schemes may define ways in which the users may control presentation of media content by interacting with the physical objects.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: March 16, 2021
    Assignee: ROVI GUIDES, INC.
    Inventor: Edison Lin
  • Patent number: 10948281
    Abstract: A distance information processing apparatus includes a memory and a processor that function as an acquirer configured to acquire a distance image signal constituted by a plurality of distances to an object in a depth direction in regions of an image signal. The memory and processor also function as a determiner configured to determine positional confidence of the distances for the regions of the image signal based on differences among a plurality of positions in an in-plane direction perpendicular to the depth direction, the plurality of positions respectively corresponding to the plurality of distances in the depth direction. The determiner is further configured to determine the positional confidence based on an image SN ratio of the image signal.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: March 16, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kazuya Nobayashi
  • Patent number: 10949674
    Abstract: An apparatus for video summarization using semantic information is described herein. The apparatus includes a controller, a scoring mechanism, and a summarizer. The controller is to segment an incoming video stream into a plurality of activity segments, wherein each frame is associated with an activity. The scoring mechanism is to calculate a score for each frame of each activity, wherein the score is based on a plurality of objects in each frame. The summarizer is to summarize the activity segments based on the score for each frame.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: Myung Hwangbo, Krishna Kumar Singh, Teahyung Lee, Omesh Tickoo
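    A toy sketch of the scoring-and-selection loop described above, with hand-picked per-object weights standing in for the semantic scoring mechanism; the weights, the Frame structure, and the per-activity cutoff are illustrative only.

      from collections import namedtuple

      Frame = namedtuple("Frame", ["index", "activity", "objects"])

      OBJECT_WEIGHTS = {"person": 3.0, "dog": 2.0, "ball": 1.0}   # assumed weights

      def summarize(frames, per_activity=2):
          """Score each frame by its detected objects, then keep the top-scoring
          frames of every activity segment as the summary."""
          def score(frame):
              return sum(OBJECT_WEIGHTS.get(obj, 0.5) for obj in frame.objects)

          summary = []
          for activity in {f.activity for f in frames}:
              segment = sorted((f for f in frames if f.activity == activity),
                               key=score, reverse=True)
              summary.extend(segment[:per_activity])
          return sorted(summary, key=lambda f: f.index)

      clip = [Frame(0, "walking", ["person"]), Frame(1, "walking", ["person", "dog"]),
              Frame(2, "playing", ["person", "dog", "ball"]), Frame(3, "playing", ["ball"])]
      print([f.index for f in summarize(clip, per_activity=1)])   # [1, 2]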
  • Patent number: 10943141
    Abstract: An image feature map generating unit (3) generates, on the basis of feature amounts extracted from a plurality of images successively captured by a camera (109), an image feature map which is an estimated distribution of the object likelihood on each of the images. An object detecting unit (4) detects an object on the basis of the image feature map generated by the image feature map generating unit (3).
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: March 9, 2021
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Tomoya Sawada, Hidetoshi Mishima, Hideaki Maehara, Yoshimi Moriya, Kazuyuki Miyazawa, Akira Minezawa, Momoyo Hino, Mengxiong Wang, Naohiro Shibuya
  • Patent number: 10937150
    Abstract: A method and system, the method including receiving semantic descriptions of features of an asset extracted from a first set of images; receiving a model of the asset, the model constructed based on a second set of a plurality of images of the asset; receiving, based on an optical flow-based motion estimation, an indication of a motion for the features in the first set of images; determining a set of candidate regions of interest for the asset; determining a region of interest in the first set of images; iteratively determining a matching of features in the set of candidate regions of interest and the determined region of interest in the first set of images to generate a record of matches in features between two images in the first set of images; and displaying a visualization of the matches in features between two images in the first set of images.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: March 2, 2021
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Huan Tan, Arpit Jain, Gyeong Woo Cheon, Ghulam Ali Baloch, Jilin Tu, Weina Ge, Li Zhang
  • Patent number: 10929718
    Abstract: An apparatus includes an acquisition unit that acquires a first image based on a first parameter, and a second image based on a second parameter, a segmentation unit that segments each of the first and second images into a plurality of segments, an acquisition unit that acquires feature quantities from each of the plurality of segments formed by segmenting the first and second images, respectively, a calculation unit that calculates a reliability of each of the plurality of segments of the first image based on the feature quantities acquired from the first image, a classification unit that classifies the plurality of segments of the first image into a first field having a relatively high reliability and a second field having a relatively low reliability, and a determination unit that determines categories for the first and second fields based on the feature quantities acquired from the first and second images.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: February 23, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Takamasa Tsunoda, Masakazu Matsugu
  • Patent number: 10929714
    Abstract: A method of acquiring and processing visual data is provided, which includes: directing a light of a particular color to at least one of the plurality of landmarks on an object to illuminate the at least one of the plurality of landmarks; obtaining a first image of the object when the at least one of the plurality of landmarks on the object is illuminated; and extracting coordinates of the at least one of the plurality of landmarks from the first image.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: February 23, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Iman Soltani Bozchalooi, Francis Assadian
  • Patent number: 10931761
    Abstract: A graph of combinations of entities and parameters corresponding to the combinations of entities may be stored as two tables. The first table may comprise a table that includes all entity combinations, as well as each parameter that corresponds to the entity combinations. Each entity combination may additionally be parseable, such that each entity combination may be parsed to allow for identification of each entity included within a given entity combination. The second table may include an entity combination node corresponding to (and linked to) each entity combination stored within the first table. Each given entity combination node of the second table may then be linked within the second table to each nearest neighbor node of the given node to thereby allow for identifying each entity combination within the first table that includes a particular relevant entity (or set of entities).
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: February 23, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jeffrey David Fitzgerald, Valentine Ngwabo Fontama
  • Patent number: 10929983
    Abstract: A system and method of confirming administration of medication is provided. The method comprises the steps of receiving information identifying a particular medication prescription regimen, determining one or more procedures for administering such prescription regimen and identifying one or more activity sequences associated with such procedures. Activity sequences of actual administration of such prescription regimen are captured and then compared to the identified activity sequences to determine differences therebetween. A notice is provided if differences are determined.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: February 23, 2021
    Assignee: Ai Cure Technologies LLC
    Inventors: Adam Hanina, Gordon Kessler
  • Patent number: 10930068
    Abstract: An estimation apparatus is configured to obtain shape information containing information about multiple line segments that depict a shape of an object; detect multiple feature lines in an image of the object captured by an imaging apparatus; receive a first instruction for associating a feature line selected from the multiple feature lines with a line segment selected from the multiple line segments and a second instruction for associating two points selected in the image with two end points selected from end points of the multiple line segments; generate a first line segment connecting the two points and a second line segment connecting the two end points; and estimate a position and orientation of the imaging apparatus in three-dimensional space by using a combination of the selected feature line and the selected line segment and a combination of the first line segment and the second line segment.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: February 23, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Ayu Karasudani, Tomohiro Aoyagi
  • Patent number: 10922350
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for associating still images and videos. One method includes receiving a plurality of images and a plurality of videos and determining whether the images are related to the videos. The determining includes, for an image and a video, extracting features from the image and extracting features from frames of the video, and comparing the features to determine whether the image is related to the video. The method further includes maintaining a data store storing data associating each image with each video determined to be related to the image.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: February 16, 2021
    Assignee: Google LLC
    Inventors: Ming Zhao, Yang Song, Hartwig Adam, Ullas Gargi, Yushi Jing, Henry Allan Rowley
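    A small sketch of the feature-comparison step described above, assuming each image and each video frame has already been reduced to a fixed-length feature vector; the cosine-similarity rule and threshold are stand-ins for whatever comparison the system actually uses.

      import numpy as np

      def related_videos(image_feats, video_frame_feats, sim_thresh=0.85):
          """Map every image id to the video ids whose frames it closely matches.

          image_feats: dict image_id -> (D,) feature vector.
          video_frame_feats: dict video_id -> (num_frames, D) feature matrix.
          """
          def normalize(x):
              return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

          associations = {}
          for img_id, img_f in image_feats.items():
              img_f = normalize(img_f)
              related = []
              for vid_id, frames in video_frame_feats.items():
                  sims = normalize(frames) @ img_f      # one similarity per frame
                  if sims.max() >= sim_thresh:          # any frame is close enough
                      related.append(vid_id)
              associations[img_id] = related            # data to keep in the data store
          return associations

      images = {"img1": np.random.rand(128)}
      videos = {"vidA": np.random.rand(40, 128), "vidB": np.random.rand(25, 128)}
      print(related_videos(images, videos))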
  • Patent number: 10914569
    Abstract: A system and method for measuring three-dimensional (3D) coordinate values of an environment is provided. The method includes moving a 2D scanner through the environment. A 2D map of the environment is generated using the 2D scanner. A path is defined through the environment using the 2D scanner. 3D scan locations along the path are defined using the 2D scanner. The 2D scanner is operably coupled to a mobile base unit. The mobile base unit is moved along the path based at least in part on the 2D map and the defined path. 3D coordinate values are measured at the 3D scan locations with a 3D scanner, the 3D scanner being coupled to the mobile base unit.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: February 9, 2021
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Oliver Zweigle, João Santos, Aleksej Frank, Ahmad Ramadneh, Muhammad Umair Tahir, Tobias Boehret
  • Patent number: 10915735
    Abstract: One of the aspects of the present invention discloses a feature point detection method. The method comprises: acquiring a face region in an input image; acquiring first positions of first feature points and second feature points according to a pre-generated first model; estimating second positions of the first feature points according to the first positions of the first feature points and pre-generated second models; detecting third positions of the first feature points and the second feature points according to the second positions of the first feature points, the first positions of the second feature points and pre-generated third models. According to the present invention, the finally detected face shape can approach the actual face shape much more closely.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: February 9, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Dongyue Zhao, Yaohai Huang, Xian Li
  • Patent number: 10909024
    Abstract: A system and method are provided for testing electronic visual user interface outputs. The method includes obtaining a baseline set of one or more screen shots of a user interface, the user interface comprising one or more elements; generating an updated set of one or more screen shots of the user interface, the updated set comprising one or more changes to the user interface; and comparing the baseline set to the updated set to generate a differential set of one or more images illustrating differences in how at least one of the user interface elements is rendered.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: February 2, 2021
    Assignee: Think Research Corporation
    Inventors: Ji Ping Li, Benjamin Thomas Hare
  • Patent number: 10902056
    Abstract: Processing an image includes acquiring, by an image processing apparatus, a target image, extracting a shape of a target object included in the target image, determining a category including the target object based on the extracted shape, and storing the target image by mapping the target image with additional information including at least one keyword related to the category.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: January 26, 2021
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Seong-taek Hwang, Sang-doo Yun, Ha-wook Jeong, Jin-young Choi, Byeong-ho Heo, Woo-sung Kang
  • Patent number: 10902682
    Abstract: An information processing system that acquires video data captured by an image pickup unit; detects an object from the video data; detects a condition corresponding to the image pickup unit; and controls a display to display content associated with the object at a position other than a detected position of the object based on the condition corresponding to the image pickup unit.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: January 26, 2021
    Assignee: SONY CORPORATION
    Inventors: Akihiko Kaino, Masaki Fukuchi, Tatsuki Kashitani, Kenichiro Ooi, Jingjing Guo
  • Patent number: 10902245
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for facial recognition. A specific embodiment of the method comprises: extracting a to-be-recognized dark light image captured in a dark light environment; inputting the dark light image into a pre-trained first convolutional neural network to obtain a target image after the dark light image is preprocessed, the first convolutional neural network being used to preprocess the dark light image; and inputting the target image into a pre-trained second convolutional neural network to obtain a facial recognition result, the second convolutional neural network being used to represent a corresponding relationship between the image and the facial recognition result. This embodiment improves accuracy of the facial recognition on the image captured in the dark light environment.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: January 26, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventor: Kang Du
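    A skeletal two-stage pipeline in the spirit of the abstract above, with tiny stand-in networks; the layer sizes, the number of identities, and the use of PyTorch are assumptions, and neither network reflects the architectures actually claimed.

      import torch
      import torch.nn as nn

      class EnhanceNet(nn.Module):
          """First CNN: preprocesses (brightens/denoises) a dark-light image."""
          def __init__(self):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
              )

          def forward(self, x):
              return self.body(x)

      class RecogNet(nn.Module):
          """Second CNN: maps the preprocessed image to face-identity logits."""
          def __init__(self, num_identities=128):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Linear(16, num_identities)

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      dark_image = torch.rand(1, 3, 112, 112)           # stand-in for a captured image
      logits = RecogNet()(EnhanceNet()(dark_image))     # two-stage inference
      print(logits.shape)                               # torch.Size([1, 128])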