Abstract: A method, computer system, and computer program product for consolidating and recording elements on a physical display board is provided. The embodiment may include capturing an initial image of a visual display mechanism, whereby the initial image contains elements. The embodiment may also include determining an initial state of the visual display mechanism based on the captured image. The embodiment may further include recognizing characters of the elements in the initial state. The embodiment may also include capturing a subsequent image of the visual display mechanism, wherein an auditory cue is sent to a user when there is an unsuccessful attempt to capture the subsequent image. The embodiment may further include comparing the initial image and the subsequent image of the visual display mechanism. The embodiment may include identifying updates to the visual display mechanism based on the comparison of the initial image and the subsequent image.
Type:
Grant
Filed:
April 6, 2020
Date of Patent:
September 6, 2022
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
George Blue, Doina L. Klinger, Rebecca Quaggin-Mitchell
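The capture-and-compare flow above can be sketched as a minimal diff, assuming each capture has already been reduced (e.g. by the character-recognition step) to a mapping from board position to recognized text; `diff_board` and the grid coordinates are illustrative, not from the patent.

```python
# Hypothetical sketch: diff two captures of a display board, each modeled
# as a dict mapping an element's position to its recognized text.

def diff_board(initial, subsequent):
    """Return elements added, removed, or changed between two captures."""
    added = {k: v for k, v in subsequent.items() if k not in initial}
    removed = {k: v for k, v in initial.items() if k not in subsequent}
    changed = {k: (initial[k], subsequent[k])
               for k in initial.keys() & subsequent.keys()
               if initial[k] != subsequent[k]}
    return added, removed, changed

initial = {(0, 0): "TODO", (1, 0): "ship v1"}
subsequent = {(0, 0): "DONE", (2, 1): "write tests"}
added, removed, changed = diff_board(initial, subsequent)
```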
Abstract: A driver recognition system of a vehicle identifies a person approaching the vehicle as a driver or other user. In one approach, the driver recognition system collects data using one or more cameras of the vehicle. The collected data corresponds to the person approaching the vehicle. Based on the collected data, a computing device determines (e.g., using a machine-learning model) whether the person is a user (e.g., driver) associated with the vehicle. If the person is a user associated with the vehicle, then the computing device causes one or more actions to be performed for the vehicle (e.g., controller configuration, boot up of a computing device, updating software using over-the-air update, etc.).
Type:
Grant
Filed:
July 14, 2020
Date of Patent:
September 6, 2022
Assignee:
Micron Technology, Inc.
Inventors:
Michael Tex Burk, Robert Richard Noel Bielby
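A minimal sketch of the match-then-act decision described above, assuming face embeddings compared by cosine similarity against enrolled users (both assumptions; the abstract says only that a machine-learning model may be used):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(embedding, enrolled, threshold=0.9):
    """Return the best-matching enrolled user, or None below the threshold."""
    best = max(enrolled, key=lambda user: cosine(embedding, enrolled[user]))
    return best if cosine(embedding, enrolled[best]) >= threshold else None

enrolled = {"alice": [1.0, 0.0, 0.2], "bob": [0.0, 1.0, 0.1]}
match = identify([0.98, 0.05, 0.21], enrolled)
```

On a match, the system would then trigger the configured actions (controller configuration, boot-up, and so on); on `None`, it would do nothing.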
Abstract: Managing sensed signals used to sense features of physical entities over time. A computer-navigable graph of sensed features is generated. For each sensed feature, the signal segment that was used to sense that feature is computer-associated with the sensed feature. Later, the graph of sensed features may be navigated to those features. The resulting signal segment(s) may then be accessed, allowing the signal evidence that resulted in the sensed feature to be rendered. Accordingly, the principles described herein allow for sophisticated and organized navigation to sensed features of physical entities in the physical world, and for rapid rendering of the signals that evidence those sensed features.
Type:
Grant
Filed:
November 18, 2019
Date of Patent:
August 9, 2022
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Vijay Mital, Olivier Colle, Robin Abraham
Abstract: The present invention discloses a system integrating video communication and physical sign analysis, comprising at least one front-end device. The front-end device comprises a camera device, a display device, an audio device, a button device, and a processor. The camera device, display device, audio device, and button device are all connected to the processor, and the processor can connect to the Internet and to a mobile device via wired or wireless means. The front-end device and the mobile device can perform video communication, and the front-end device can perform physical sign analysis on the images collected by the camera device. Building on applications in the prior art, the present invention combines video collection technology and human face analysis technology to perform physical sign analysis and obtain related indexes.
Type:
Grant
Filed:
April 1, 2020
Date of Patent:
July 26, 2022
Assignee:
Joyware Electronics Co., Ltd.
Inventors:
Jie Yu, Jiangfeng Yu, Weiping Zhu, Xugang Shi
Abstract: Disclosed are an apparatus and method for providing a vehicle service based on individually customized emotion recognition, in which emotion is learned for each user. The apparatus includes a processor configured to determine whether an event occurs based on information on a driving environment, and an image acquirer configured to acquire a user facial image in response to event occurrence. The processor is further configured to learn the user's facial expression based on the user facial image in response to the event occurrence, and to determine whether the user experiences a specific emotion based on the learned facial expression. The apparatus further includes a service provider configured to provide a vehicle service corresponding to the driving environment when determining that the user experiences the specific emotion.
Type:
Grant
Filed:
November 5, 2020
Date of Patent:
July 19, 2022
Assignees:
Hyundai Motor Company, Kia Motors Corporation
Abstract: An image processing apparatus includes a first processor configured to obtain, from a color image, an illumination element image and an albedo element image corresponding to the color image, and a second processor configured to divide the illumination element image into a plurality of subelement images each corresponding to the color image.
Type:
Grant
Filed:
April 4, 2020
Date of Patent:
July 12, 2022
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Inwoo Ha, Hyong Euk Lee, Young Hun Sung, Minsu Ahn
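The decomposition named above can be illustrated with the common multiplicative intrinsic-image model, color = albedo × illumination; this toy `decompose` helper and the nested-list pixel representation are assumptions, not the patented method.

```python
# Toy sketch of recovering an albedo image given the color image and an
# illumination element image, under the model color = albedo * illumination.

def decompose(color, illumination):
    """Pixelwise division; the epsilon guards against zero illumination."""
    return [[c / max(s, 1e-6) for c, s in zip(crow, srow)]
            for crow, srow in zip(color, illumination)]

color = [[0.5, 0.2], [0.9, 0.1]]
illum = [[1.0, 0.5], [0.9, 0.2]]
albedo = decompose(color, illum)
```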
Abstract: The present disclosure provides a pedestrian re-identification method and apparatus, computer device and readable medium. The method comprises: collecting a target image and a to-be-identified image including a pedestrian image; obtaining a feature expression of the target image and a feature expression of the to-be-identified image respectively, based on a pre-trained feature extraction model; wherein the feature extraction model is obtained by training based on a self-attention feature of a base image as well as a co-attention feature of the base image relative to a reference image; identifying whether a pedestrian in the to-be-identified image is the same pedestrian as that in the target image according to the feature expression of the target image and the feature expression of the to-be-identified image.
Type:
Grant
Filed:
March 12, 2020
Date of Patent:
July 5, 2022
Assignee:
BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Abstract: A system for identifying a user using microexpressions presents training media items to the user. The system captures a first set of microexpressions of the user in reaction to the training media items. The system, based on the first set of microexpressions, determines baseline features indicating reactions of the user to the training media items. The system presents test media items to a person. The system captures a second set of microexpressions of the person in reaction to the test media items. The system, based on the second set of microexpressions, determines test features indicating reactions of the person to the test media items. The system determines whether the person is the same as the user by comparing the baseline features to the test features. The system determines that the person is the same as the user if the test features correspond to the baseline features.
Type:
Grant
Filed:
March 2, 2021
Date of Patent:
June 14, 2022
Assignee:
Bank of America Corporation
Inventors:
Michael Emil Ogrinz, Mark Alan Odiorne, Gerard P. Gay, Jeremiah Wiley Fellows, Regina Peyfuss, Siddhesh Vinayak Wadikar, Allison Dolores Baker
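The baseline-versus-test comparison above can be sketched as a per-feature tolerance check; the feature names and the `tol` value are invented for illustration and are not from the patent.

```python
# Hypothetical sketch: the person matches the user if every test feature
# falls within a tolerance of the corresponding baseline feature.

def same_user(baseline, test, tol=0.15):
    """Compare reaction features measured against the same media items."""
    return all(abs(baseline[k] - test[k]) <= tol for k in baseline)

baseline = {"smile_onset": 0.30, "brow_raise": 0.10}
ok = same_user(baseline, {"smile_onset": 0.35, "brow_raise": 0.05})
bad = same_user(baseline, {"smile_onset": 0.80, "brow_raise": 0.10})
```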
Abstract: The present disclosure is related to methods and systems for image reconstruction including accelerated forward transformation with an Artificial Neural Network (ANN).
Abstract: A system that recommends cosmetic, dermatological, or fashion items from photos taken of a person, based in part or entirely on their skin color and other defining characteristics such as hair color, eye color, and/or face shape. The system includes a device with a camera and a light source capable of producing multiple intensities of light, the device running a program that instructs the user with real-time feedback on how to adjust their face, phone positioning, or location so that the application can capture a set of two or more optimal photos of their face. When optimal ambient lighting is found, the program captures multiple photos, varying the light source over the different captures. Calibrated color data is calculated by comparing how the brightness and color of the diffuse reflection on the user's skin change relative to the brightness and color of the specular reflection of the light source in the user's eye.
Abstract: An electronic device according to one embodiment of the present invention comprises: at least one communication interface; a display; a memory; and at least one processor electrically connected to the at least one communication interface, the display, and the memory, wherein the memory may store instructions which, when executed, cause the at least one processor to: acquire, in response to receiving a request for service information related to broadcast content sent from a content server, at least one image frame included in the broadcast content; determine whether at least one face region has been detected within the at least one image frame; perform, if the at least one face region is determined to have been detected, image analysis on the basis of the detected at least one face region; and provide, through the display, service information corresponding to a result of the image analysis. In addition, various embodiments are possible.
Abstract: A method and system for synthetic data generation and analysis includes generating a synthetic dataset. A set of parameters is determined and scenarios are generated from the parameters that represent three-dimensional scenes. Synthetic images are rendered for the scenarios. A synthetic dataset may be formed to have a controlled variation in attributes of synthetic images over a synthetic dataset. The synthetic dataset may be used for training or evaluating a machine learning model.
Abstract: A method and apparatus for generating a face model, a storage medium, a processor, and a terminal are provided. The method includes that: feature extraction is performed on a currently input face image from at least one dimension to obtain a plurality of facial features; classification and identification are performed according to the plurality of facial features to obtain a facial feature identification result; a mapping relationship between the multiple facial features and face pinching parameters set in a current face pinching system is acquired; and a corresponding face model is generated according to the facial feature identification result and the mapping relationship. The present disclosure solves the technical problem that a manual face pinching function provided in a game in the related art is time-consuming and laborious, and it is difficult to obtain a face pinching effect that fully meets psychological expectations.
Abstract: An example apparatus includes a memory to store a first image of a document and a second image of the document. The first image and the second image are captured under different conditions. The apparatus includes a processor coupled to the memory. The processor is to perform optical character recognition on the first image to generate a first output dataset and on the second image to generate a second output dataset. The processor is further to determine whether consensus for a character is achieved based on a comparison of the first output dataset with the second output dataset, and to generate a final output dataset based on the consensus for the character.
Type:
Grant
Filed:
July 21, 2017
Date of Patent:
May 10, 2022
Assignee:
Hewlett-Packard Development Company, L.P.
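The consensus step lends itself to a small sketch: merge two OCR outputs character by character, keeping agreements and flagging disagreements. The placeholder convention is an assumption; the patent describes only that consensus is determined by comparing the two output datasets.

```python
# Illustrative sketch of per-character OCR consensus between two captures
# of the same document taken under different conditions.

def consensus(first, second, placeholder="?"):
    """Keep characters the two OCR outputs agree on; flag the rest."""
    merged = []
    for a, b in zip(first, second):
        merged.append(a if a == b else placeholder)
    return "".join(merged)

# "l" (lowercase L) vs "I" disagree, so that position is flagged.
final = consensus("lnvoice #442", "Invoice #442")
```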
Abstract: A computer-implemented method of associating an annotation with an object in an image, comprising generating a dictionary including first vectors that associate terms of the annotation with concepts, classifying the image to generate a second vector based on classified objects and associated confidence scores for the classified objects, selecting a term of the terms associated with one of the first vectors having a shortest determined distance to the second vector, identifying a non-salient region of the image, and rendering the annotation associated with the selected term at the non-salient region.
Type:
Grant
Filed:
March 12, 2020
Date of Patent:
May 10, 2022
Assignee:
FUJIFILM Business Innovation Corp.
Inventors:
David Ayman Shamma, Lyndon Kennedy, Anthony Dunnigan
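Selecting the term "having a shortest determined distance to the second vector" can be sketched as a plain nearest-neighbor lookup; Euclidean distance and the toy concept vectors are assumptions for illustration.

```python
import math

def nearest_term(dictionary, image_vec):
    """Pick the annotation term whose concept vector is closest (Euclidean)."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, image_vec)))
    return min(dictionary, key=lambda term: dist(dictionary[term]))

dictionary = {"dog": [1.0, 0.0], "cat": [0.0, 1.0]}
term = nearest_term(dictionary, [0.9, 0.2])
```

The selected term's annotation would then be rendered at the identified non-salient region of the image.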
Abstract: A method, an apparatus and an electronic device for face liveness detection based on a neural network model are provided. The method includes: a target visible light image and a target infrared image of a target object to be detected are obtained (S101); a first face image is extracted from the target visible light image, and a second face image is extracted from the target infrared image (S102); a target image array of the target object is generated based on multiple monochromatic components of the first face image and a monochromatic component of the second face image (S103); and the target image array is fed into a pre-trained neural network model for detection, to obtain a face liveness detection result of the target object (S104).
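Step S103's image array can be illustrated by stacking the visible face image's three monochromatic components with the infrared component into one H × W × 4 array; the nested-list representation and function name are assumptions.

```python
# Hypothetical sketch of building the target image array: R, G, B planes
# from the visible face image plus the IR plane, stacked per pixel.

def build_target_array(rgb_face, ir_face):
    """Return an H x W x 4 array of (R, G, B, IR) values."""
    return [[[rgb_face[y][x][0], rgb_face[y][x][1], rgb_face[y][x][2],
              ir_face[y][x]]
             for x in range(len(rgb_face[0]))]
            for y in range(len(rgb_face))]

rgb = [[(10, 20, 30), (40, 50, 60)]]   # 1 x 2 visible face crop
ir = [[100, 200]]                      # matching 1 x 2 infrared crop
arr = build_target_array(rgb, ir)
```

This four-channel array is what would be fed to the pre-trained network in step S104.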
Abstract: An image pickup system includes an input/output modeling section 24 that creates, as a population, an image group obtained when a specific target is photographed (an access image), and generates an inference model using, as teacher data, sequential images selected from the population based on whether the specific target can be accessed. Each image of the image group is associated with date and time information and/or position information, and the input/output modeling section 24 generates an inference model for determining, based on the date and time information and/or the position information, whether a process applied to the specific target is good or bad.
Type:
Grant
Filed:
April 24, 2019
Date of Patent:
May 3, 2022
Assignee:
OM DIGITAL SOLUTIONS CORPORATION
Inventors:
Osamu Nonaka, Kazuhiko Osa, Yoichi Yoshida, Hirokazu Nozaki, Masahiro Fujimoto, Hidekazu Iwaki, Koji Sakai, Keiji Okada, Yoshihisa Ogata
Abstract: Methods and systems for guiding user data capture during a scan of a vehicle using a mobile device are disclosed. A user may scan a vehicle using a camera or other sensors of the mobile device to capture data from which a three-dimensional virtual model may be generated. During the scanning process, models may be generated and evaluated according to quality metrics. Visual cues may be determined and presented to the user during scanning to indicate areas sufficiently scanned or areas requiring additional scanning to meet data quality requirements for model generation. Damage to vehicle components may be identified by analysis of the generated model, and additional data capture or user annotation entry may be directed based upon the identified damage.
Type:
Grant
Filed:
May 20, 2020
Date of Patent:
April 26, 2022
Assignee:
State Farm Mutual Automobile Insurance Company
Inventors:
Bryan R. Nussbaum, Rebecca A. Little, Kevin L. Mitchell, Nathan C. Summers, An Ho
Abstract: An image processing apparatus comprises an image obtaining unit that obtains a captured image, an information obtaining unit that obtains analysis data recorded in correspondence with the captured image and including flag information indicating whether an object present in the captured image is a masking target, a detecting unit that detects objects from the captured image, and a mask processing unit that generates an image in which an object, among the objects detected from the captured image, which is indicated as the masking target by the flag information, is masked.
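The flag-driven masking step above can be sketched as a filter over detections; the dict shapes and field names are invented for illustration only.

```python
# Illustrative sketch: mark each detected object for masking according to
# flag information recorded with the captured image's analysis data.

def apply_masks(detections, flags):
    """Annotate detections with whether the flag marks them as masking targets."""
    return [{**d, "masked": flags.get(d["id"], False)} for d in detections]

detections = [{"id": 1}, {"id": 2}]
flags = {1: True}          # object 1 is a masking target; object 2 is not
out = apply_masks(detections, flags)
```

A downstream renderer would then blur or block out only the detections with `masked` set.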
Abstract: An image processing method comprises: acquiring an actual image of a specified target from a video stream collected by a camera; identifying an area not shielded by the VR HMD and an area shielded by the VR HMD of the face of the specified target from the actual image, and acquiring first facial image data corresponding to the area not shielded; obtaining second facial image data matching the first facial image data according to the first facial image data and a preset facial expression model, wherein the second facial image data correspond to the area shielded; and fusing the first facial image data and the second facial image data to generate a composite image. An image processing device comprises a first acquiring unit, an identifying unit, a second acquiring unit and a generating unit, and is for performing the steps of the method described above.