Abstract: A conformable sensor module may conform to skin of a user's face. The sensor module may include multiple piezoelectric strain sensors. The sensor module may measure mechanical strain of facial skin that occurs while the user makes facial gestures. To do so, the sensor module may take a time series of multiple measurements of strain of the user's facial skin at each of multiple locations on the user's face, while the user makes a facial gesture. The resulting spatiotemporal data regarding facial strain may be fed as an input into a trained machine learning algorithm. The trained machine learning algorithm may, based on this input, classify a facial gesture. A computer may determine content associated with the classification. The content may be outputted in audible or visual format. This may facilitate communication by patients with neuromuscular disorders who are unable to vocalize intelligible speech.
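The classification step described above can be sketched in code. The sensor layout, the summary features, the gesture names, and the nearest-centroid classifier below are illustrative assumptions, not the patent's trained machine learning algorithm; the sketch only shows the shape of the pipeline from spatiotemporal strain data to a gesture label.

```python
# Hypothetical sketch: classify a facial gesture from a time series of strain
# measurements taken at several skin locations. All names and numbers are
# made up for illustration.
from statistics import mean

def features(recording):
    """Reduce each sensor's strain time series to (mean, peak) summary features."""
    feats = []
    for sensor_series in recording:          # one list of strain samples per sensor
        feats.append(mean(sensor_series))
        feats.append(max(sensor_series))
    return feats

def classify(recording, centroids):
    """Assign the gesture whose feature centroid is nearest (squared Euclidean)."""
    f = features(recording)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda g: dist(centroids[g]))

# Toy "trained" centroids for two gestures over two sensors.
centroids = {
    "smile": [0.8, 1.0, 0.1, 0.2],
    "pucker": [0.1, 0.2, 0.9, 1.1],
}
recording = [[0.7, 0.9, 0.8], [0.1, 0.15, 0.2]]   # sensor 1 high, sensor 2 low
print(classify(recording, centroids))              # prints: smile
```

In practice the abstract's trained model would consume the raw spatiotemporal data directly; the hand-crafted summary features here merely stand in for learned ones.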
Abstract: A camera calibration system includes: a camera configured to acquire a first forward image from a first viewpoint and a second forward image from a second viewpoint; an event trigger module configured to determine whether to perform camera calibration; a motion estimation module configured to acquire information related to motion of a host vehicle; a three-dimensional reconstruction module configured to acquire three-dimensional coordinate values based on the first forward image and the second forward image; and a parameter estimation module configured to estimate an external parameter of the camera based on the three-dimensional coordinate values.
Abstract: An image processing device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: receive a visible image of a face; receive a near-infrared image of the face; adjust brightness of the visible image based on a frequency distribution of pixel values of the visible image and a frequency distribution of pixel values of the near-infrared image; specify a relative position at which the visible image is related to the near-infrared image; invert the adjusted brightness of the visible image; detect a region of a pupil from a synthetic image obtained by adding the brightness-inverted visible image to the near-infrared image based on the relative position; and output information on the detected pupil.
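The invert-and-add step of this abstract can be illustrated with a small numeric example. The toy 3x3 images, the fixed threshold, and the thresholding rule are assumptions for the sketch; brightness adjustment and alignment are taken as already done.

```python
# Illustrative sketch of the pupil-detection pipeline: invert the (already
# brightness-adjusted, aligned) visible image, add it to the near-infrared
# image, and take the brightest pixels of the sum as the pupil region.

def invert(img, max_val=255):
    return [[max_val - p for p in row] for row in img]

def add_images(a, b):
    return [[pa + pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def pupil_region(synthetic, threshold):
    """Return (row, col) coordinates whose synthetic value exceeds threshold."""
    return [(r, c)
            for r, row in enumerate(synthetic)
            for c, v in enumerate(row)
            if v > threshold]

# 3x3 toy images: the pupil is dark in visible light but bright (retro-
# reflective) in near-infrared, so it dominates the sum at (1, 1).
visible = [[200, 200, 200],
           [200,  20, 200],
           [200, 200, 200]]
near_ir = [[ 50,  50,  50],
           [ 50, 240,  50],
           [ 50,  50,  50]]

synthetic = add_images(invert(visible), near_ir)
print(pupil_region(synthetic, threshold=300))   # prints: [(1, 1)]
```

The point of the combination is that the pupil is the one region that is simultaneously dark in the visible band and bright in the near-infrared band, so it stands out only in the inverted-plus-NIR sum.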
Abstract: Provided are communication devices having adaptable features and methods for implementation. One device includes at least one adaptable component and a processor configured to detect an external cue relevant to operation of the at least one adaptable component, to determine a desired state for the at least one adaptable component corresponding to the external cue, and then to dynamically adapt the at least one adaptable component to substantially produce the desired state. One adaptable component comprises at least one adaptable speaker system. Another adaptable component comprises at least one adaptable antenna.
Type:
Grant
Filed:
June 13, 2022
Date of Patent:
October 3, 2023
Assignee:
Avago Technologies International Sales Pte. Limited
Abstract: A detachable remote controller and a remote controlling method for controlling an air conditioner of an autonomous vehicle are provided. The detachable remote controller may control the air conditioner and may be detached from a console box or a front console box by a detachment button.
Type:
Grant
Filed:
April 24, 2020
Date of Patent:
September 26, 2023
Assignees:
Hyundai Motor Company, Kia Motors Corporation
Abstract: Disclosed are a positron emission tomography (PET) system and an image reconstructing method using the same. The PET system includes: a collection unit collecting a PET sinogram; an image generation unit applying the PET sinogram to an MLAA with TOF and generating a first emission image, a first attenuation image, and an NAC image reconstructed without attenuation correction; and a control unit selecting at least one of the first emission image, the first attenuation image, and the NAC image generated by the image generation unit as an input image and generating and providing a final attenuation image by applying the input image to the trained deep learning algorithm.
Type:
Grant
Filed:
February 8, 2021
Date of Patent:
September 12, 2023
Assignee:
SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors:
Jae Sung Lee, Donghwi Hwang, Kyeong Yun Kim
Abstract: A method and device for sensor data fusion for a vehicle as well as a computer program and a computer-readable storage medium are disclosed. At least one sensor device (S1) is associated with the vehicle (F), and in the method, fusion object data is provided representative of a fusion object (OF) detected in an environment of the vehicle (F); sensor object data is provided representative of a sensor object (OS) detected by the sensor device (S1) in the environment of the vehicle (F); indicator data is provided representative of an uncertainty in the determination of the sensor object data; reference point transformation candidates of the sensor object (OS) are determined depending on the indicator data; and an innovated fusion object is determined depending on the reference point transformation candidates.
Type:
Grant
Filed:
October 18, 2019
Date of Patent:
September 12, 2023
Assignee:
Bayerische Motoren Werke Aktiengesellschaft
Inventors:
Michael Himmelsbach, Luca Trentinaglia, Dominik Bauch, Daniel Meissner, Josef Mehringer, Marco Baumgartl
Abstract: A method, a computer program product, and a computer system determine abnormal motion from a patient. The method includes receiving sensory data of the patient and a location in which the patient is present. The sensory data includes video data over a period of time the patient is being monitored. The method includes generating contextual information based on the sensory data. The contextual information is indicative of surroundings of the patient and characteristics of the location. The method includes generating motion information based on the sensory data. The motion information is indicative of movement of the patient in the location. The method includes generating contextual motion data by incorporating the contextual information with the motion information. The method includes determining the abnormal motion based on the contextual motion data.
Type:
Grant
Filed:
October 22, 2020
Date of Patent:
September 12, 2023
Assignee:
International Business Machines Corporation
Inventors:
Umar Asif, Stefan von Cavallar, Jianbin Tang, Stefan Harrer
Abstract: A retrieval apparatus (2000) is accessible to a storage region (50) in which a plurality of pieces of object information (100) are stored. The object information (100) includes a feature value set (104) being a set of a plurality of feature values acquired regarding an object. The retrieval apparatus (2000) acquires a feature value set (retrieval target set (60)) being a retrieval target, and determines the object information (100) having the feature value set (104) similar to the retrieval target set (60) by comparing the retrieval target set (60) with the feature value set (104). Herein, in a case where a feature value set satisfies a predetermined condition, the retrieval apparatus (2000) performs comparison between the feature value set and another feature value set by using a part of feature values within the feature value set. Further, the retrieval apparatus (2000) outputs output information relating to the determined object information (100).
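The partial-comparison optimization in this abstract can be sketched as follows. The "predetermined condition" (set size), the subset rule (first few values), the similarity measure, and the threshold are all illustrative assumptions; the abstract does not specify them.

```python
# Sketch of similarity search over feature-value sets, with the abstract's
# optimization: when a stored set satisfies a condition (here: it is large),
# compare using only a part of its feature values.

def set_similarity(query, candidate, part_size=None):
    """Fraction of query values found in a (possibly truncated) candidate set."""
    pool = candidate if part_size is None else candidate[:part_size]
    pool = set(pool)
    return sum(1 for v in query if v in pool) / len(query)

def retrieve(query, object_db, large=5, part_size=3, threshold=0.5):
    """Return IDs of stored objects whose feature set is similar to the query."""
    matches = []
    for obj_id, feature_set in object_db.items():
        part = part_size if len(feature_set) >= large else None
        if set_similarity(query, feature_set, part) >= threshold:
            matches.append(obj_id)
    return matches

db = {
    "obj-a": [1, 2, 3],               # small set: compared in full
    "obj-b": [4, 5, 6, 7, 8, 9],      # large set: only the first 3 values used
}
print(retrieve([2, 3], db))           # prints: ['obj-a']
```

Truncating the comparison for large sets trades a little recall for a roughly proportional reduction in per-candidate comparison cost, which is the apparent motivation for the condition.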
Abstract: A system and method for identifying a subject using imaging are provided. In some aspects, the method includes receiving an image depicting a subject to be identified, and applying a trained Disentangled Representation learning-Generative Adversarial Network (DR-GAN) to the image to generate an identity representation of the subject, wherein the DR-GAN comprises a discriminator and a generator having at least one of an encoder and a decoder. The method also includes identifying the subject using the identity representation, and generating a report indicative of the subject identified.
Type:
Grant
Filed:
September 18, 2018
Date of Patent:
August 22, 2023
Assignee:
BOARD OF TRUSTEES OF MICHIGAN STATE UNIVERSITY
Abstract: A system that incorporates aspects of the subject disclosure may perform operations including, for example, receiving, via an antenna, a signal generated by a communication device, detecting passive intermodulation interference in the signal, the interference generated by one or more transmitters unassociated with the communication device, and the interference determined from signal characteristics associated with a signaling protocol used by the one or more transmitters. Other embodiments are disclosed.
Abstract: An information processing device of the present invention includes: an image processing means that extracts a feature value of an object within a captured image obtained by capturing a pre-passing region of a gate, and stores matching information relating to matching of the object based on the feature value; a distance estimating means that estimates a distance from the gate to the object within the captured image; and a matching means that executes matching determination based on the estimated distance and the stored matching information of the object whose distance has been estimated.
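The distance-gated flow of this abstract can be sketched in a few lines. The pinhole-style distance estimate, the trigger distance, and the set-membership "matching" are stand-ins for the device's actual estimator and matcher, and are assumptions for illustration only.

```python
# Minimal sketch of distance-gated matching at a gate: feature values are
# cached while the person is still in the pre-passing region, and the
# matching determination is only made once the estimated distance drops
# below a trigger distance.

TRIGGER_DISTANCE_M = 1.5

def estimate_distance(face_height_px, focal_px=500.0, real_face_m=0.25):
    """Pinhole-style distance estimate from apparent face size in the image."""
    return focal_px * real_face_m / face_height_px

def process_frame(face_height_px, feature, cache, enrolled):
    cache["feature"] = feature                 # keep the freshest feature value
    if estimate_distance(face_height_px) <= TRIGGER_DISTANCE_M:
        return cache["feature"] in enrolled    # matching determination
    return None                                # too far: defer the decision

enrolled = {"alice-template"}
cache = {}
print(process_frame(40, "alice-template", cache, enrolled))   # far  -> None
print(process_frame(100, "alice-template", cache, enrolled))  # near -> True
```

Caching features early and deciding late is what lets the gate open without a visible pause when the person arrives.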
Abstract: A system and a method for automatic assessment of comparative negligence of vehicle(s) involved in an accident. The system receives one or more of video input, accelerometer data, gyroscope data, magnetometer data, GPS data, lidar data, radar data, radio navigation data, and vehicle state data for the vehicle(s). The system automatically detects the occurrence of an accident and its timestamp. The system then detects the accident type and the trajectory of the vehicle(s) based on the received data for the detected timestamp. A scenario of the accident is generated and compared with a parametrized accident guideline to generate a comparative negligence assessment for the vehicle(s) involved in the accident.
Abstract: Embodiments of this application provide a living body detection method that can include traversing a plurality of images of a to-be-detected object, and using a currently traversed image as a current image, performing face feature extraction on the current image to obtain an eigenvector corresponding to the current image, the eigenvector describing a structure of a face feature of the object in the current image. The method can further include capturing an action behavior of the to-be-detected object according to a change of the eigenvector corresponding to the current image relative to an eigenvector corresponding to a historical image in a feature sequence, the historical image being a traversed image in the plurality of images, and the feature sequence including an eigenvector corresponding to at least one historical image, and determining the to-be-detected object as a living body in response to capturing the action behavior of the object.
Type:
Grant
Filed:
October 14, 2020
Date of Patent:
August 8, 2023
Assignee:
Tencent Technology (Shenzhen) Company Limited
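The liveness idea in the abstract above (capturing an action behavior from a change in the eigenvector across frames) can be sketched simply. The two-dimensional feature vectors, the Euclidean change measure, and the fixed threshold below are stand-ins for the application's learned feature extractor and decision logic.

```python
# Sketch of living-body detection: track a per-frame face feature vector and
# flag an "action behavior" (e.g. a blink or head turn) when the vector
# changes enough relative to any historical frame in the sequence.
import math

def change(v1, v2):
    return math.dist(v1, v2)     # Euclidean distance between feature vectors

def is_living_body(feature_sequence, threshold=0.5):
    """Living if any frame's features differ enough from a historical frame."""
    history = []
    for vec in feature_sequence:
        if any(change(vec, past) > threshold for past in history):
            return True                     # action behavior captured
        history.append(vec)
    return False                            # static across frames: likely a photo

static_photo = [[0.1, 0.2]] * 4                       # no change frame to frame
blinking_face = [[0.1, 0.2], [0.1, 0.2], [0.9, 0.2]]  # feature jump on a blink
print(is_living_body(static_photo))    # prints: False
print(is_living_body(blinking_face))   # prints: True
```

A printed photograph or a replayed still produces nearly constant feature vectors, which is exactly what the change test rejects.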
Abstract: An information processing apparatus (100) includes an acquisition unit (122) that acquires a first image from which person region feature information, regarding a region including areas other than the face of a retrieval target person, is extracted, a second image in which a collation result with the person region feature information indicates a match and a facial region is detected, and result information indicating a collation result between face information stored in a storage unit and face information extracted from the facial region, and a display processing unit (130) that displays at least two of the first image, the second image, and the result information on an identical screen.
Abstract: Techniques for a perception system of a vehicle that can detect and track objects in an environment are described herein. The perception system may include a machine-learned model that includes one or more different portions, such as different components, subprocesses, or the like. In some instances, the techniques may include training the machine-learned model end-to-end such that outputs of a first portion of the machine-learned model are tailored for use as inputs to another portion of the machine-learned model. Additionally, or alternatively, the perception system described herein may utilize temporal data to track objects in the environment of the vehicle and associate tracking data with specific objects in the environment detected by the machine-learned model. That is, the architecture of the machine-learned model may include both a detection portion and a tracking portion in the same loop.
Type:
Grant
Filed:
December 3, 2021
Date of Patent:
July 25, 2023
Assignee:
Zoox, Inc.
Inventors:
Cheng-Hsin Wuu, Subhasis Das, Po-Jen Lai, Qian Song, Benjamin Isaac Zwiebel
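The tracking portion described in the abstract above can be illustrated with a classic association step. The box format, the IoU threshold, and the greedy matching strategy are assumptions chosen for brevity; the abstract's end-to-end model learns detection and tracking jointly rather than using this hand-written rule.

```python
# Illustrative sketch of track-to-detection association by greedy IoU
# matching: each existing track claims the unused detection it overlaps most.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedily pair each track id with its best-overlapping unused detection."""
    pairs, used = {}, set()
    for tid, box in tracks.items():
        best = max((d for d in range(len(detections)) if d not in used),
                   key=lambda d: iou(box, detections[d]), default=None)
        if best is not None and iou(box, detections[best]) >= min_iou:
            pairs[tid] = best
            used.add(best)
    return pairs

tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
detections = [(49, 51, 61, 59), (1, 0, 11, 10)]
print(associate(tracks, detections))   # prints: {1: 1, 2: 0}
```

Keeping detection and tracking "in the same loop", as the abstract puts it, lets the learned detector produce outputs already shaped for this association step instead of being tuned in isolation.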