Patents Examined by Wesley J Tucker
-
Patent number: 11862989
Abstract: A computer determines that a mobile device requires a recharge, where the mobile device has a solar cell and an imaging device. The computer identifies an object with a low diffusion rate. The computer recharges the mobile device based on determining that the mobile device is receiving solar energy from the identified object.
Type: Grant
Filed: March 26, 2021
Date of Patent: January 2, 2024
Assignee: International Business Machines Corporation, Armonk
Inventors: Aaron K. Baughman, Shikhar Kwatra, Diwesh Pandey, Arun Joseph
-
Patent number: 11836965
Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
Type: Grant
Filed: August 10, 2021
Date of Patent: December 5, 2023
Assignee: NIANTIC, INC.
Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
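The abstract above describes intersection-based overlap factors between two boxes in an embedding space. The patent does not publish code; the following is a minimal sketch assuming axis-aligned boxes given as (min-corner, max-corner) pairs, with the enclosure and concentration factors computed as intersection volume divided by each box's own volume. The function names and box representation are illustrative assumptions.

```python
def box_volume(lo, hi):
    # Volume of an axis-aligned box; zero if the corners do not overlap.
    v = 1.0
    for a, b in zip(lo, hi):
        v *= max(b - a, 0.0)
    return v

def intersection(lo1, hi1, lo2, hi2):
    # Corner-wise max/min gives the intersection box (possibly empty).
    lo = [max(a, b) for a, b in zip(lo1, lo2)]
    hi = [min(a, b) for a, b in zip(hi1, hi2)]
    return lo, hi

def asymmetric_overlap(box_a, box_b):
    """Return (enclosure, concentration): the fraction of box A covered
    by B, and the fraction of box B covered by A (hypothetical names
    matching the abstract's enclosure/concentration factors)."""
    lo, hi = intersection(box_a[0], box_a[1], box_b[0], box_b[1])
    inter = box_volume(lo, hi)
    enclosure = inter / box_volume(*box_a)
    concentration = inter / box_volume(*box_b)
    return enclosure, concentration
```

Note the asymmetry: if box B lies entirely inside a larger box A, the concentration factor is 1.0 while the enclosure factor is less than 1.0, which is what lets the two factors distinguish "A's surface visible in B" from "B's surface visible in A".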
-
Patent number: 11830289
Abstract: Far field devices typically rely on audio only for enabling user interaction and involve only audio processing. Adding a vision-based modality can greatly improve the user interface of far field devices to make them more natural to the user. For instance, users can look at the device to interact with it rather than having to repeatedly utter a wakeword. Vision can also be used to assist audio processing, such as to improve the beamformer. For instance, vision can be used for direction of arrival estimation. Combining vision and audio can greatly enhance the user interface and performance of far field devices.
Type: Grant
Filed: June 11, 2020
Date of Patent: November 28, 2023
Assignee: ANALOG DEVICES, INC.
Inventors: Atulya Yellepeddi, Kaushal Sanghai, John Robert McCarty, Brian C. Donnelly, Nicolas Le Dortz, Johannes Traa
-
Patent number: 11830259
Abstract: State information can be determined for a subject that is robust to different inputs or conditions. For drowsiness, facial landmarks can be determined from captured image data and used to determine a set of blink parameters. These parameters can be used, such as with a temporal network, to estimate a state (e.g., drowsiness) of the subject. To improve robustness, an eye state determination network can determine eye state from the image data, without reliance on intermediate landmarks, that can be used, such as with another temporal network, to estimate the state of the subject. A weighted combination of these values can be used to determine an overall state of the subject. To improve accuracy, individual behavior patterns and context information can be utilized to account for variations in the data due to subject variation or current context rather than changes in state.
Type: Grant
Filed: August 24, 2021
Date of Patent: November 28, 2023
Assignee: Nvidia Corporation
Inventors: Yuzhuo Ren, Niranjan Avadhanam
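The abstract above fuses a landmark-based blink estimate with a direct eye-state estimate via a weighted combination. As a hedged illustration only: the patent does not name specific blink parameters, so the sketch below uses the widely known eye aspect ratio (EAR) as one plausible blink parameter, plus a simple convex fusion of the two drowsiness scores. All names and the fixed weight are assumptions.

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks (p1..p6): ratio of the two vertical
    eyelid distances to the horizontal eye width. Low values suggest a
    closed eye. A standard blink parameter, assumed for illustration."""
    p1, p2, p3, p4, p5, p6 = eye
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))

def fused_drowsiness(blink_score, eye_state_score, w_blink=0.5):
    """Weighted combination of the landmark/blink-based estimate and the
    direct eye-state estimate, both assumed to lie in [0, 1]."""
    return w_blink * blink_score + (1.0 - w_blink) * eye_state_score
```

In the patent's scheme the two scores would come from separate temporal networks; the weighting is what makes the overall estimate robust when one input (e.g. landmarks under occlusion) degrades.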
-
Patent number: 11829404
Abstract: Some implementations relate to archiving of functional images. In some implementations, a method includes accessing images and determining one or more functional labels corresponding to each of the images and one or more confidence scores corresponding to the functional labels. A functional image score is determined for each of the images based on the functional labels having a corresponding confidence score that meets a respective threshold for the functional labels. In response to determining that the functional image score meets a functional image score threshold, a functional image signal is provided that indicates that one or more of the images that meet the functional image score threshold are functional images. The functional images are determined to be archived, and are archived by associating an archive attribute with the functional images such that functional images having the archive attribute are excluded from display in views of the images.
Type: Grant
Filed: December 11, 2020
Date of Patent: November 28, 2023
Assignee: Google LLC
Inventors: Shinko Cheng, Eunyoung Kim, Shengyang Dai, Madhur Khandelwal, Kristina Eng, David Loxton
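The two-level thresholding described above (per-label confidence thresholds, then an overall functional-image score threshold) can be sketched as follows. This is not the patented implementation; scoring as the sum of qualifying confidences, the dict-based image representation, and all names are assumptions made for illustration.

```python
def functional_image_score(labels, thresholds):
    """Sum the confidences of labels whose confidence meets that
    label's own threshold (summing is an assumed scoring choice)."""
    return sum(conf for label, conf in labels.items()
               if conf >= thresholds.get(label, 1.0))

def archive_functional(images, thresholds, score_threshold):
    """Attach an archive attribute to images whose functional score
    meets the overall threshold; archived images would then be
    excluded from display views."""
    for img in images:
        score = functional_image_score(img["labels"], thresholds)
        if score >= score_threshold:
            img["archived"] = True
    return images
```

The per-label thresholds matter because a confident "screenshot" label and a weak "receipt" label should contribute differently; only confidences that clear their own label's bar count toward the image-level decision.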
-
Patent number: 11809479
Abstract: A content push method is provided. The method is applied to a terminal, which includes a first set of cameras. The method includes: when the terminal is in a screen-locked state and an off-screen is woken up, capturing a first set of face images of a user by using the first set of cameras; determining whether the first set of face images matches a registered user; if the first set of face images matches the registered user, performing, by the terminal, an unlocking operation, and determining a facial attribute of the user based on the first set of face images; determining a to-be-pushed media resource based on the facial attribute; and pushing the media resource in an interface displayed after the terminal is unlocked.
Type: Grant
Filed: September 29, 2021
Date of Patent: November 7, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Yingqiang Zhang, Yuanfeng Xiong, Xingguang Song, Tizheng Wang, Maosheng Huang
-
Patent number: 11804032
Abstract: A method for identifying one or more faces in an image may include determining a detection region in an image based on at least one of a plurality of detection scales, and identifying one or more faces in the detection region based on the at least one of the plurality of detection scales. The method may further include calibrating the detection region based on one or more identified faces.
Type: Grant
Filed: May 14, 2020
Date of Patent: October 31, 2023
Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventors: Bin Wang, Gang Wang
-
Patent number: 11798297
Abstract: The invention relates to a control device (1) for a vehicle for determining the perceptual load of a visual and dynamic driving scene. The control device is configured to:
- receive a sensor output (101) of a sensor (3), the sensor (3) sensing the visual driving scene,
- extract a set of scene features (102) from the sensor output (101), the set of scene features (102) representing static and/or dynamic information of the visual driving scene,
- determine the perceptual load (104) of the set of extracted scene features (102) based on a predetermined load model (103), the load model (103) being predetermined based on reference video scenes each being labelled with a load value,
- map the perceptual load to the sensed driving scene, and
- determine a spatial and temporal intensity distribution of the perceptual load across the sensed driving scene.
The invention further relates to a vehicle, a system and a method.
Type: Grant
Filed: March 21, 2017
Date of Patent: October 24, 2023
Assignees: TOYOTA MOTOR EUROPE NV/SA, UCL BUSINESS PLC
Inventors: Jonas Ambeck-Madsen, Ichiro Sakata, Nilli Lavie, Gabriel J. Brostow, Luke Palmer, Alina Bialkowski
-
Patent number: 11798662
Abstract: The present invention relates generally to the field of computer-based image recognition. More particularly, the invention relates to methods and systems for the identification, and optionally the quantitation of, discrete objects of biological origin such as cells, cytoplasmic structures, parasites, parasite ova, and the like which are typically the subject of microscopic analysis. The invention may be embodied in the form of a method for training a computer to identify a target biological material in a sample. The method may include accessing a plurality of training images, the training images being obtained by light microscopy of one or more samples containing a target biological material and optionally a non-target biological material. The training images are cropped by a human or a computer to produce cropped images, each of which shows predominantly the target biological material.
Type: Grant
Filed: March 5, 2019
Date of Patent: October 24, 2023
Assignee: VERDICT HOLDINGS PTY LTD
Inventors: Alistair Cumming, Luan Duong Minh Lam, Christopher McCarthy, Michelle Dunn, Luke Gavin, Samar Kattan, Antony Tang
-
Patent number: 11798026
Abstract: The disclosure is related to a method and system for evaluating advertising effects of video content. The evaluation method includes presenting video content including a character to a viewer through a display, extracting pieces of facial micro-movement data (MMD) of the character in the video content and the viewer, while the viewer watches the video content, calculating a similarity of the MMD of the character and the MMD of the viewer, and calculating an advertising effect score of the video content on the basis of the similarity.
Type: Grant
Filed: August 3, 2021
Date of Patent: October 24, 2023
Assignee: SANGMYUNG UNIVERSITY INDUSTRY-ACADEMY COOPERATION FOUNDATION
Inventors: Min Cheol Whang, A Young Cho, Hyun Woo Lee
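The abstract above scores advertising effect from the similarity between two facial micro-movement signals. The patent does not specify a similarity measure; the sketch below assumes cosine similarity over equal-length MMD time series and a linear rescaling of similarity to a 0-100 score. Both choices, and the function names, are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length micro-movement series.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def advertising_effect_score(char_mmd, viewer_mmd, scale=100.0):
    """Map similarity in [-1, 1] linearly onto [0, scale]: viewers whose
    micro-movements mirror the on-screen character score higher."""
    sim = cosine_similarity(char_mmd, viewer_mmd)
    return scale * (sim + 1.0) / 2.0
```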
-
Patent number: 11776268
Abstract: Disclosed are systems and methods of capturing sensor data associated with a retail location. The systems and methods further include a point-of-sale device for processing customer transactions and generating an event trigger for capturing sensor data from one or more of the sensors at the retail location. In response to the event trigger, the systems and methods capture data from the one or more sensors at the retail location and provide access to the stored sensor data and event triggers via a user interface of the point-of-sale device for review of the stored sensor data and event trigger.
Type: Grant
Filed: April 23, 2021
Date of Patent: October 3, 2023
Assignee: Shopify Inc.
Inventors: Daanish Maan, Ricardo Vazquez, Peter Nitsch, Zhi Hui Fang, Djoume Salvetti
-
Patent number: 11776248
Abstract: Systems and methods are configured for correcting the orientation of an image data object subject to optical character recognition (OCR) by receiving an original image data object, generating initial machine readable text for the original image data object via OCR, generating an initial quality score for the initial machine readable text via machine-learning models, determining whether the initial quality score satisfies quality criteria, upon determining that the initial quality score does not satisfy the quality criteria, generating a plurality of rotated image data objects each comprising the original image data object rotated to a different rotational position, generating a rotated machine readable text data object for each of the plurality of rotated image data objects and generating a rotated quality score for each of the plurality of rotated machine readable text data objects, and determining that one of the plurality of rotated quality scores satisfies the quality criteria.
Type: Grant
Filed: October 17, 2022
Date of Patent: October 3, 2023
Assignee: Optum, Inc.
Inventors: Rahul Bhaskar, Daryl Seiichi Furuyama, Daniel William James
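The control flow described above (OCR the original, score the text, and only if the score fails the quality criteria try rotated copies) can be sketched as a small loop. The `ocr(image, angle)` and `quality(text)` callables are hypothetical stand-ins for the OCR engine and the machine-learning quality model; the fixed rotation set and return convention are also assumptions.

```python
def correct_orientation(image, ocr, quality, threshold,
                        rotations=(90, 180, 270)):
    """Return (angle, text) for the first orientation whose OCR quality
    score meets the threshold; (None, original_text) if none does."""
    text = ocr(image, 0)
    if quality(text) >= threshold:
        return 0, text          # original orientation is fine
    for angle in rotations:
        rotated_text = ocr(image, angle)
        if quality(rotated_text) >= threshold:
            return angle, rotated_text
    return None, text           # no rotation met the quality criteria
```

The point of scoring the original first is efficiency: most scanned documents are upright, so the three extra OCR passes only run on the minority of misoriented images.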
-
Patent number: 11762082
Abstract: A method, system and computer program product for intelligent tracking and transformation between interconnected sensor devices of mixed type is disclosed. Metadata derived from image data from a camera is compared to different metadata derived from radar data from a radar device to determine whether an object in a Field of View (FOV) of one of the camera and the radar device is an identified object that was previously in the FOV of the other of the camera and the radar device.
Type: Grant
Filed: January 5, 2022
Date of Patent: September 19, 2023
Assignee: MOTOROLA SOLUTIONS, INC.
Inventors: Shervin Sabripour, John B Preston, Bert Van Der Zaag, Patrick D Koskan
-
Patent number: 11763538
Abstract: An image processing apparatus includes an area setter configured to set a target area in a first image among a plurality of captured images acquired by imaging at different times, an area detector configured to detect a corresponding area relevant to the target area in a plurality of second images among the plurality of captured images, a focus state acquirer configured to acquire focus states of the target area and the corresponding area, and a classification processor configured to perform classification processing that classifies the plurality of captured images into a plurality of groups according to the focus states.
Type: Grant
Filed: August 22, 2019
Date of Patent: September 19, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Hideyuki Hamano
-
Patent number: 11749020
Abstract: Disclosed is a method and apparatus for multi-face tracking of a face effect, and a computer readable storage medium. The method for multi-face tracking of a face effect comprises the steps of: selecting a face effect in response to an effect selection command; selecting a face tracking type of the face effect in response to a face tracking type selection command; generating a face tracking sequence based on the face tracking type; recognizing a face image captured by an image sensor; and superimposing the face effect on at least one of the face images according to the face tracking sequence. In the embodiment of the invention, for the faces that need to be tracked as specified by the effect, the number of faces on which the face effect is superimposed, the superimposition order, and the display duration of the face effect can be arbitrarily set, and different effects can be superimposed on multiple faces, so as to improve the user experience.
Type: Grant
Filed: December 25, 2018
Date of Patent: September 5, 2023
Assignee: BEIJING MICROLIVE VISION TECHNOLOGY CO., LTD
Inventors: Gao Liu, Xin Lin
-
Patent number: 11748446
Abstract: A method and system for synthetic data generation and analysis includes generating a synthetic dataset. A set of parameters is determined and scenarios are generated from the parameters that represent three-dimensional scenes. Synthetic images are rendered for the scenarios. A synthetic dataset may be formed to have a controlled variation in attributes of synthetic images over a synthetic dataset. The synthetic dataset may be used for training or evaluating a machine learning model.
Type: Grant
Filed: April 14, 2022
Date of Patent: September 5, 2023
Assignee: AURORA OPERATIONS, INC.
Inventor: Carl Magnus Wrenninge
-
Patent number: 11727280
Abstract: A compact generative neural network can be distilled from a teacher generative neural network using a training network. The compact network can be trained on the input data and output data of the teacher network. The training network trains the student network using a discrimination layer and one or more types of losses, such as perception loss and adversarial loss.
Type: Grant
Filed: March 2, 2021
Date of Patent: August 15, 2023
Assignee: Snap Inc.
Inventors: Sergey Tulyakov, Sergei Korolev, Aleksei Stoliar, Maksim Gusarov, Sergei Kotcur, Christopher Yale Crutchfield, Andrew Wan
-
Patent number: 11714892
Abstract: Image processing systems and methods are provided for authorizing the performance at a computer terminal of an age-restricted activity. An estimated human age is determined based on human characteristics of a structure detected in an image captured at the computer terminal. It is determined whether the structure exhibits at least one liveness characteristic indicating the human characteristics from which the estimated human age is determined have been captured directly from a living human at the computer terminal. A positive determination is made as to whether performance of the age-restricted activity is authorized if the estimated human age meets a predetermined age requirement and the structure is determined to exhibit at least one liveness characteristic, and a negative determination is made if: i) the estimated human age does not meet the predetermined age requirement; and/or ii) the structure is not determined to exhibit at least one liveness characteristic.
Type: Grant
Filed: December 23, 2020
Date of Patent: August 1, 2023
Assignee: Yoti Holding Limited
Inventor: Francisco Angel Garcia Rodriguez
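The authorization rule above is a conjunction: the estimated age must meet the requirement AND at least one liveness characteristic must be exhibited, with either failure producing a negative determination. A minimal sketch of just that decision logic (names and the default age are illustrative assumptions; the age estimation and liveness detection themselves are out of scope here):

```python
def authorize(estimated_age, liveness_ok, min_age=18):
    """Positive determination only if the estimated age meets the
    predetermined requirement AND at least one liveness characteristic
    was exhibited; negative otherwise."""
    return estimated_age >= min_age and liveness_ok
```

The liveness conjunct is what prevents a spoof: a photograph of an adult held up to the camera can satisfy the age estimate but not the liveness check.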
-
Patent number: 11706172
Abstract: Disclosed in the embodiments of the present disclosure are a method and device for sending information. A particular embodiment of the method comprises: acquiring user input information input to a user terminal; determining, from a target expression image set, at least one expression image to be sent to the user terminal and matching the user input information, and a presentation order of the at least one expression image; and sending presentation information to the user terminal in response to determining that, during a historical time period, the user terminal presented the at least one expression image according to the presentation order less than or equal to a target number of times, wherein the presentation information is for instructing the user terminal to present the at least one expression image according to the presentation order.
Type: Grant
Filed: December 10, 2020
Date of Patent: July 18, 2023
Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventor: Fei Tian
-
Patent number: 11700420
Abstract: Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metric. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
Type: Grant
Filed: June 12, 2020
Date of Patent: July 11, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Melissa Sue Burke, Andrew Edwin Dreisch, Graham John Page, Panu James Turcot, Evan Kodra