Patents Examined by Md K Talukder
  • Patent number: 11869641
    Abstract: In some instances, a user device for determining whether an individual is sick is provided. The user device is configured to obtain a facial image of an individual; obtain an audio file comprising a voice recording of the individual; determine a facial recognition confidence value associated with whether the individual is sick based on inputting the facial image into a facial recognition machine learning dataset that is individualized for the individual; determine a voice recognition confidence value associated with whether the individual is sick based on inputting the audio file into a voice recognition machine learning dataset that is individualized for the individual; determine whether the individual is sick based on the facial recognition confidence value and the voice recognition confidence value; and cause display of a prompt indicating whether the individual is sick.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: January 9, 2024
    Assignee: Aetna Inc.
    Inventors: Dwayne Kurfirst, Robert E. Bates, III
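As a rough illustration of the decision flow in the abstract above, the sketch below fuses the two per-individual confidence values and compares the result to a threshold. The SicknessCheck dataclass, the averaging rule, and the 0.5 threshold are assumptions made for the sketch; the patent only states that the determination is based on both values.

```python
# Hypothetical sketch: combine per-individual model confidences to flag sickness.
# The fusion rule (simple average against a threshold) is an assumption.
from dataclasses import dataclass

@dataclass
class SicknessCheck:
    face_confidence: float   # from the individualized facial-recognition model
    voice_confidence: float  # from the individualized voice-recognition model

def is_sick(check: SicknessCheck, threshold: float = 0.5) -> bool:
    """Fuse the two confidence values and compare against a threshold."""
    fused = (check.face_confidence + check.voice_confidence) / 2.0
    return fused >= threshold

if __name__ == "__main__":
    result = is_sick(SicknessCheck(face_confidence=0.7, voice_confidence=0.4))
    print("Prompt: individual appears sick" if result else "Prompt: individual appears well")
```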
  • Patent number: 11861912
    Abstract: The present disclosure provides a method and an Internet of Things system for counting and regulating pedestrian volume in a public place of a smart city. The method includes receiving, based on the user platform, a query request for an intended place initiated by a user; transmitting, based on the service platform, the query request to the management platform, and generating, based on the management platform, a query instruction; issuing, based on the management platform, the query instruction to a corresponding sensor network sub-platform according to the regional location; sending, based on the sensor network sub-platform, the query instruction to the corresponding object platform; obtaining, based on the object platform, a query result according to the query instruction; and feeding back, based on the object platform, the query result to the user platform through the corresponding sensor network sub-platform, the management platform, and the service platform respectively.
    Type: Grant
    Filed: October 9, 2022
    Date of Patent: January 2, 2024
    Assignee: CHENGDU QINCHUAN IOT TECHNOLOGY CO., LTD.
    Inventors: Zehua Shao, Haitang Xiang, Bin Liu, Yaqiang Quan, Yongzeng Liang
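The layered relay described in this abstract can be pictured with a minimal Python sketch in which each platform forwards the query one level down and the result flows back up. All class names, methods, and the example counts below are illustrative assumptions, not the patented system.

```python
# Hypothetical relay of a pedestrian-volume query through the layered IoT platforms
# named in the abstract (user -> service -> management -> sub-platform -> object).
class ObjectPlatform:
    def __init__(self, counts_by_place):
        self.counts_by_place = counts_by_place
    def query(self, place):
        return self.counts_by_place.get(place, 0)

class SensorNetworkSubPlatform:
    def __init__(self, object_platform):
        self.object_platform = object_platform
    def forward(self, place):
        return self.object_platform.query(place)

class ManagementPlatform:
    def __init__(self, sub_platforms_by_region):
        self.sub_platforms_by_region = sub_platforms_by_region
    def handle(self, region, place):
        # Issue the query instruction to the sub-platform for the region.
        return self.sub_platforms_by_region[region].forward(place)

def user_query(management, region, place):
    # The service platform simply relays the user's request to management here.
    return management.handle(region, place)

mgmt = ManagementPlatform({"downtown": SensorNetworkSubPlatform(ObjectPlatform({"plaza": 420}))})
print(user_query(mgmt, "downtown", "plaza"))  # -> 420
```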
  • Patent number: 11855737
    Abstract: A method, an apparatus, and a computer-readable medium for wireless communication are provided. In some aspects, a base station may send a beam modification command that indicates a set of transmit beam indexes corresponding to a set of transmit beams of a base station, and each transmit beam index of the set of transmit beam indexes may indicate at least a transmit direction for transmitting a transmit beam by the base station. The base station may send, in association with the beam modification command, a reference signal using at least one transmit beam of the set of transmit beams, where a first portion of the reference signal is sent in a first set of symbols and a second portion of the reference signal is received in a second set of symbols.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: December 26, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Muhammad Nazmul Islam, Bilal Sadiq, Tao Luo, Sundar Subramanian, Junyi Li
  • Patent number: 11847840
    Abstract: A distracted driver can be informed of his or her distraction by a visual notification. It can be detected whether a driver of a vehicle is focused on a non-critical object located within the vehicle. In response to detecting that the driver of the vehicle is focused on a non-critical object located within the vehicle, an amount of time the driver is focused on the non-critical object can be determined. When the amount of time exceeds a threshold amount of time, a visual notification of distracted driving can be caused to be presented on or visually adjacent to the non-critical object.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: December 19, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Hiroshi Yasuda, Manuel Ludwig Kuehner
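A minimal sketch of the timing rule in the abstract above: track how long the driver's gaze rests on a non-critical in-cabin object and request a visual notification once a threshold is exceeded. The class name, the 2.0-second threshold, and the update interface are assumptions for illustration.

```python
# Illustrative distraction timer: notify when focus on a non-critical object
# exceeds a threshold amount of time.
class DistractionMonitor:
    def __init__(self, threshold_s: float = 2.0):
        self.threshold_s = threshold_s
        self.focus_start = None

    def update(self, focused_on_noncritical: bool, now_s: float) -> bool:
        """Return True when a notification should be shown on or near the object."""
        if not focused_on_noncritical:
            self.focus_start = None
            return False
        if self.focus_start is None:
            self.focus_start = now_s
        return (now_s - self.focus_start) > self.threshold_s

monitor = DistractionMonitor()
for t, focused in [(0.0, True), (1.0, True), (2.5, True)]:
    if monitor.update(focused, t):
        print(f"t={t}s: show distraction notification on the object")
```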
  • Patent number: 11847821
    Abstract: A method for training a deep learning network for face recognition includes: utilizing a face landmark detector to perform face alignment processing on at least one captured image, thereby outputting at least one aligned image; inputting the at least one aligned image to a teacher model to obtain a first output vector; inputting the at least one captured image to a student model corresponding to the teacher model to obtain a second output vector; and adjusting parameter settings of the student model according to the first output vector and the second output vector.
    Type: Grant
    Filed: January 2, 2022
    Date of Patent: December 19, 2023
    Assignee: Realtek Semiconductor Corp.
    Inventors: Chien-Hao Chen, Shih-Tse Chen
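The training step in this abstract pairs a teacher output vector (from the aligned image) with a student output vector (from the raw captured image) and adjusts the student from both. The sketch below shows one hedged choice of objective, a mean-squared distillation loss; the abstract does not name the loss, so this is an assumption.

```python
# Toy distillation loss between teacher and student embeddings. In practice the
# vectors come from the teacher model on the aligned image and the student model
# on the raw captured image; the values here are placeholders.
from typing import List

def distillation_loss(teacher_vec: List[float], student_vec: List[float]) -> float:
    """Mean-squared distance between teacher and student output vectors."""
    assert len(teacher_vec) == len(student_vec)
    return sum((t - s) ** 2 for t, s in zip(teacher_vec, student_vec)) / len(teacher_vec)

teacher_vec = [0.10, -0.30, 0.55]
student_vec = [0.05, -0.20, 0.60]
print(distillation_loss(teacher_vec, student_vec))  # student updates would shrink this
```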
  • Patent number: 11847838
    Abstract: Provided is a recognition device that can accurately recognize a position where a vehicle is to be stopped. The recognition device according to the present invention includes: a distance calculation unit 210 that calculates a map information distance between a host vehicle and a target feature based on map information; a stop line detection processing unit 130a that, in a case where the map information distance is equal to or less than a determination value, sets a detection area for detection of the target feature based on the map information distance, detects the target feature within the detection area, and calculates an actually-measured distance between the host vehicle and the target feature; and a stop line position unifying unit 310 that unifies data of the map information distance and data of the actually-measured distance to calculate a unification result of a distance between the host vehicle and the target feature.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: December 19, 2023
    Assignee: Hitachi Astemo, Ltd.
    Inventor: Akira Kuriyama
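One way to picture the unification step in this abstract is a weighted blend of the map-based distance and the actually-measured distance once the vehicle is within the determination value. The inverse-variance weights and all numbers below are assumptions; the abstract only says the two data are unified.

```python
# Sketch of unifying a map-information distance with an actually-measured distance
# to the stop line. The weighting scheme is assumed, not taken from the patent.
def unify_distances(map_dist_m, measured_dist_m, map_var=4.0, meas_var=1.0):
    w_map, w_meas = 1.0 / map_var, 1.0 / meas_var
    return (w_map * map_dist_m + w_meas * measured_dist_m) / (w_map + w_meas)

DETERMINATION_VALUE_M = 100.0  # illustrative determination value
map_dist = 42.0
if map_dist <= DETERMINATION_VALUE_M:
    measured = 40.5  # from detection inside the map-derived detection area
    print(f"Unified stop-line distance: {unify_distances(map_dist, measured):.2f} m")
```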
  • Patent number: 11842550
    Abstract: A smart driving posture control system includes: an image sensor installed in a vehicle, a memory for storing instructions, and a processor connected to the image sensor and the memory. The processor recognizes a user boarding the vehicle through the image sensor to extract human body feature information of the user, extracts recommended posture information based on the extracted human body feature information, and controls a convenience device based on the recommended posture information to adjust a driving posture of the user.
    Type: Grant
    Filed: September 1, 2021
    Date of Patent: December 12, 2023
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
    Inventors: Seong Gyu Kang, Hyung Kee Kim, Byeong Yeon Hwang
  • Patent number: 11836988
    Abstract: A method for recognizing an object from input data is disclosed. Raw detections are carried out in which at least one attribute in the form of a detection quality is determined for each raw detection. At least one further attribute for each raw detection is determined. A temporally or spatially resolved distance measure is determined for at least one attribute of the raw detections. Raw detections of a defined distance measure are combined to form a group of raw detections. The object is recognized from a group with at least one raw detection with the smallest distance measure of the at least one attribute in comparison with another raw detection, or from a group with at least one raw detection which were combined by combining at least two raw detections with the smallest distance measure of the at least one attribute to form said one raw detection.
    Type: Grant
    Filed: August 5, 2021
    Date of Patent: December 5, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Matthias Kirschner, Thomas Wenzel
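A hedged sketch of the grouping idea in this abstract: compute a resolved distance measure over one attribute (a timestamp gap here, purely as a stand-in), merge raw detections that fall within a defined distance, and recognize the object from a representative of each group. The attribute choice, the 0.5 s gap, and the quality-based representative are assumptions.

```python
# Illustrative grouping of raw detections by an attribute distance, then picking
# a representative per group from which the object is recognized.
from dataclasses import dataclass
from typing import List

@dataclass
class RawDetection:
    timestamp_s: float
    quality: float  # detection-quality attribute

def group_detections(dets: List[RawDetection], max_gap_s: float = 0.5) -> List[List[RawDetection]]:
    groups: List[List[RawDetection]] = []
    for det in sorted(dets, key=lambda d: d.timestamp_s):
        if groups and det.timestamp_s - groups[-1][-1].timestamp_s <= max_gap_s:
            groups[-1].append(det)
        else:
            groups.append([det])
    return groups

dets = [RawDetection(0.0, 0.6), RawDetection(0.3, 0.9), RawDetection(2.0, 0.7)]
for group in group_detections(dets):
    best = max(group, key=lambda d: d.quality)
    print(f"group of {len(group)} -> recognize object from detection at t={best.timestamp_s}s")
```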
  • Patent number: 11830253
    Abstract: A method for keypoint matching includes receiving an input image obtained by a sensor of an agent. The method also includes identifying a set of keypoints of the received image. The method further includes augmenting the descriptor of each of the keypoints with semantic information of the input image. The method also includes identifying a target image based on one or more semantically augmented descriptors of the target image matching one or more semantically augmented descriptors of the input image. The method further includes controlling an action of the agent in response to identifying the target.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: November 28, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jiexiong Tang, Rares Andrei Ambrus, Vitor Guizilini, Adrien David Gaidon
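The augmentation-and-matching step in this abstract can be sketched as concatenating a semantic vector onto each keypoint descriptor and matching over the combined vector. Concatenation, one-hot semantics, and Euclidean nearest-neighbour matching are assumptions; the abstract only says descriptors are augmented with semantic information.

```python
# Semantically augmented keypoint matching, sketched with toy 2-D descriptors and
# one-hot semantic labels. All vectors and target names are illustrative.
from typing import List, Tuple
import math

def augment(descriptor: List[float], semantic: List[float]) -> List[float]:
    return descriptor + semantic

def match(query: List[float], candidates: List[Tuple[str, List[float]]]) -> str:
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(candidates, key=lambda kv: dist(query, kv[1]))[0]

q = augment([0.2, 0.8], [1.0, 0.0])          # keypoint on a "building" pixel
db = [("target_A", augment([0.2, 0.7], [1.0, 0.0])),
      ("target_B", augment([0.2, 0.7], [0.0, 1.0]))]
print(match(q, db))  # semantics break the tie -> target_A
```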
  • Patent number: 11823498
    Abstract: The disclosed computer-implemented method may include (1) receiving a present frame of a video stream, the present frame comprising a present depiction of a multi-segment articulated body system, (2) identifying a previous frame of the video stream that comprises a previous depiction of the multi-segment articulated body system, (3) analyzing the present frame and the previous frame to determine whether the multi-segment articulated body system remained substantially rigid between the previous frame and the present frame, and (4) estimating a pose of the multi-segment articulated body system in the present frame using a first pose estimation computation that treats the multi-segment articulated body system as rigid and that is selected in contrast to a second pose estimation computation based on determining that the multi-segment articulated body system remained substantially rigid between the previous frame and the present frame. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: November 21, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Chengde Wan, Randi Cabezas, Xinqiao Liu, Ziyun Li
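The branch described in this abstract, reusing a rigid-body pose computation when the articulated system stayed substantially rigid between frames and otherwise running the full estimator, can be sketched as below. The rigidity test (spread of per-joint displacements) and the 1 cm tolerance are illustrative assumptions.

```python
# Choose between a rigid pose computation and full articulated pose estimation
# based on whether all joints moved together between the previous and present frame.
from typing import List, Tuple

Point = Tuple[float, float, float]

def is_substantially_rigid(prev: List[Point], curr: List[Point], tol_m: float = 0.01) -> bool:
    moves = [tuple(c - p for c, p in zip(cp, pp)) for cp, pp in zip(curr, prev)]
    mean = tuple(sum(m[i] for m in moves) / len(moves) for i in range(3))
    spread = max(max(abs(m[i] - mean[i]) for i in range(3)) for m in moves)
    return spread <= tol_m

def estimate_pose(prev: List[Point], curr: List[Point]) -> str:
    if is_substantially_rigid(prev, curr):
        return "rigid-body pose computation"
    return "full articulated pose computation"

prev = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]
curr = [(0.02, 0.0, 0.0), (0.12, 0.0, 0.0)]  # both joints translated together
print(estimate_pose(prev, curr))
```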
  • Patent number: 11818925
    Abstract: The invention relates to: a light-emitting device which includes a first flexible substrate having a first electrode, a light-emitting layer over the first electrode, and a second electrode with a projecting portion over the light-emitting layer and a second flexible substrate having a semiconductor circuit and a third electrode electrically connected to the semiconductor circuit, in which the projecting portion of the second electrode and the third electrode are electrically connected to each other; a method for manufacturing the light-emitting device; and a cellular phone which includes a housing incorporating the light-emitting device and having a longitudinal direction and a lateral direction, in which the light-emitting device is disposed on a horn side and in an upper portion in the longitudinal direction of the housing.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: November 14, 2023
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventors: Kaoru Hatano, Satoshi Seo, Shunpei Yamazaki
  • Patent number: 11816593
    Abstract: Embodiments may include novel techniques for Task-Adaptive Feature Sub-Space Learning (TAFSSL). For example, in an embodiment, a method may be implemented in a computer system comprising a processor, memory accessible by the processor, and computer program instructions stored in the memory and executable by the processor, the method comprising: training a machine learning system to classify features in images by: generating a sample set comprising one or a few labeled training samples and one or a few additional samples containing instances of target classes, performing dimensionality reduction computed on the samples in the sample set to form a dimension reduced sub-space, generating class representatives in the dimension reduced sub-space using clustering, and classifying features in images using the trained machine learning system.
    Type: Grant
    Filed: August 23, 2020
    Date of Patent: November 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Leonid Karlinsky, Joseph Shtok, Eliyahu Schwartz
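A rough sketch of the TAFSSL-style pipeline in this abstract: project the task's samples into a reduced sub-space, form class representatives there, and classify queries by the nearest representative. Using PCA and per-class means (instead of a specific clustering algorithm) is a simplifying assumption, as are all names and numbers below.

```python
# Task-adaptive sub-space sketch: PCA on the task samples, class representatives
# in the reduced space, nearest-representative classification.
import numpy as np

def pca_project(X: np.ndarray, dim: int) -> np.ndarray:
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:dim].T

def classify(support_X, support_y, query_X, dim=2):
    all_X = np.vstack([support_X, query_X])
    Z = pca_project(all_X, dim)
    zs, zq = Z[: len(support_X)], Z[len(support_X):]
    classes = sorted(set(support_y))
    reps = np.stack([zs[np.array(support_y) == c].mean(axis=0) for c in classes])
    d = ((zq[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
    return [classes[i] for i in d.argmin(axis=1)]

rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 1, (3, 5)), rng.normal(4, 1, (3, 5))])
labels = [0, 0, 0, 1, 1, 1]
queries = rng.normal(4, 1, (2, 5))
print(classify(support, labels, queries))  # most likely [1, 1]
```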
  • Patent number: 11816269
    Abstract: Systems, methods, devices and non-transitory, computer-readable storage mediums are disclosed for gesture recognition for a wearable multimedia device using real-time data streams. In an embodiment, a method comprises: detecting a trigger event from one or more real-time data streams running on a wearable multimedia device; taking one or more data snapshots of the one or more real-time data streams; inferring user intent from the one or more data snapshots; and selecting a service or preparing content for the user based on the inferred user intent. In an embodiment, a hand and finger pointing direction is determined from a depth image, a 2D bounding box for the hand/finger is projected into a 2D image space and compared to bounding boxes for identified/labeled objects in the 2D image space to identify an object that the hand is holding or the finger is pointing toward.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: November 14, 2023
    Assignee: Humane, Inc.
    Inventors: Imran A. Chaudhri, Bethany Bongiorno, Patrick Gates, Wangju Tsai, Monique Relova, Nathan Lord, Yanir Nulman, Ralph Brunner, Lilynaz Hashemi, Britt Nelson
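The pointing-resolution step at the end of this abstract, comparing the projected hand/finger bounding box against the boxes of labeled objects, can be sketched with a simple overlap test. Picking the object with the largest intersection-over-union is an assumption; the abstract only says the boxes are compared.

```python
# Identify which labeled object a finger bounding box points toward by overlap.
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def pointed_object(finger_box: Box, objects: Dict[str, Box]) -> str:
    return max(objects, key=lambda name: iou(finger_box, objects[name]))

objects = {"mug": (100, 100, 160, 180), "lamp": (300, 80, 380, 220)}
print(pointed_object((110, 120, 150, 170), objects))  # -> "mug"
```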
  • Patent number: 11816902
    Abstract: A vehicle external environment recognition apparatus to be applied to a vehicle includes one or more processors and one or more memories configured to be coupled to the one or more processors. The one or more processors are configured to: calculate three-dimensional positions of respective blocks in a captured image; group the blocks to put any two or more of the blocks that have the three-dimensional positions differing from each other within a predetermined range in a group and thereby determine three-dimensional objects; identify each of a preceding vehicle of the vehicle and a sidewall on the basis of the determined three-dimensional objects; and track the preceding vehicle. The one or more processors are configured to determine, upon tracking the preceding vehicle, whether the preceding vehicle to track is to be hidden by the sidewall on the basis of a border line between a blind region and a viewable region.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: November 14, 2023
    Assignee: SUBARU CORPORATION
    Inventor: Toshimi Okubo
  • Patent number: 11816880
    Abstract: A face recognition method includes: obtaining a first feature image that describes a face feature of a target face image and a first feature vector corresponding to the first feature image; obtaining a first feature value that represents a degree of difference between a face feature in the first feature image and that in the target face image; obtaining a similarity between the target face image and a template face image according to the first feature vector, the first feature value, and a second feature vector and a second feature value corresponding to a second feature image of the template face image, the second feature value describing a degree of difference between a face feature in the second feature image and that in the template face image; and determining, when the similarity is greater than a preset threshold, that the target face image matches the template face image.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: November 14, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jianqing Xu, Pengcheng Shen, Shaoxin Li
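A heavily hedged sketch of the similarity step in this abstract: combine the cosine similarity of the two feature vectors with the two per-image feature values, treated here as uncertainty penalties. The combination rule, the penalty weight, and the threshold are assumptions; the abstract only states that the similarity depends on both vectors and both values.

```python
# Toy face-matching score: cosine similarity of the feature vectors penalized by
# the per-image "degree of difference" values, compared against a preset threshold.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def face_similarity(vec1, val1, vec2, val2):
    return cosine(vec1, vec2) - 0.5 * (val1 + val2)  # penalize uncertain embeddings

THRESHOLD = 0.6
sim = face_similarity([0.2, 0.9, 0.1], 0.05, [0.25, 0.85, 0.12], 0.04)
print("match" if sim > THRESHOLD else "no match", round(sim, 3))
```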
  • Patent number: 11810373
    Abstract: Provided are a vehicle outside information acquiring unit to acquire vehicle outside information, a face information acquiring unit to acquire face information, a biological information acquiring unit to acquire biological information, a vehicle information acquiring unit to acquire vehicle information, a vehicle outside information feature amount extracting unit to extract a vehicle outside information feature amount on the basis of the vehicle outside information, a face information feature amount extracting unit to extract a face information feature amount in accordance with the vehicle outside information feature amount, a biological information feature amount extracting unit to extract a biological information feature amount in accordance with the vehicle outside information feature amount, a vehicle information feature amount extracting unit to extract a vehicle information feature amount in accordance with the vehicle outside information feature amount, and a cognitive function estimation unit to estimate a cognitive function.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: November 7, 2023
    Assignee: Mitsubishi Electric Corporation
    Inventor: Shusaku Takamoto
  • Patent number: 11808607
    Abstract: A ranging apparatus capable of suppressing reduction of ranging accuracy at the long-distance end of a distance measurement range, thereby making it possible to perform high-accuracy ranging over a wide distance range. An image pickup device receives light fluxes from a fixed focus optical system. A distance information acquisition unit acquires distance information of an object based on image signals from the image pickup device. This unit acquires the distance information based on parallax between a first image based on a light flux having passed a first region of an exit pupil, and a second image based on a light flux having passed a second region of the exit pupil. The optical system is configured such that parallax of an object existing at a predetermined distance is smaller than parallax of an object existing at a shorter distance than the predetermined distance.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: November 7, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Akinari Takagi, Kazuya Nobayashi
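For intuition only, the sketch below uses a textbook stereo-style relation in which distance is inversely proportional to parallax, which is why accuracy degrades at the long-distance end where the parallax becomes small. The formula and the constants are assumptions; the patent concerns how the fixed-focus optics shape this parallax over the measurement range.

```python
# Illustrative parallax-to-distance relation for a pupil-divided image sensor.
def distance_from_parallax(parallax_px: float, focal_px: float = 1400.0,
                           baseline_m: float = 0.004) -> float:
    if parallax_px <= 0:
        raise ValueError("parallax must be positive")
    return focal_px * baseline_m / parallax_px

for p in (8.0, 2.0, 0.5):  # smaller parallax -> larger, less certain distance
    print(f"parallax {p:>4} px -> {distance_from_parallax(p):6.2f} m")
```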
  • Patent number: 11807188
    Abstract: Vehicles and a method for initiating, maintaining, and terminating an image capture session with an external image capture component. The method includes initiating an image capture session with an image capture component that is external to the vehicle, receiving, from the image capture component, an image, determining whether image data of the image includes identifying information associated with the vehicle, and instructing the image capture component to maintain the image capture session associated with the vehicle in response to determining that the image data includes identifying information associated with the vehicle. Additionally, the method includes terminating the image capture session with the image capture component in response to determining that the image data does not include the identifying information.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: November 7, 2023
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventor: Masashi Nakagawa
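The maintain-or-terminate rule in this abstract reduces to a small per-image check, sketched below with a license-plate string standing in for the "identifying information associated with the vehicle". The function name and the plate-matching check are illustrative assumptions.

```python
# Keep the external capture session only while received images contain the
# vehicle's identifying information (a plate string here as a stand-in).
def handle_image(session_active: bool, image_text: str, vehicle_plate: str) -> bool:
    """Return the new session state after checking one received image."""
    if not session_active:
        return False
    if vehicle_plate in image_text:
        return True   # instruct the capture component to maintain the session
    return False      # terminate the session

state = True
for detected in ["plate ABC-123 visible", "no vehicle in frame"]:
    state = handle_image(state, detected, "ABC-123")
    print(detected, "->", "maintain" if state else "terminate")
```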
  • Patent number: 11798264
    Abstract: A dictionary learning method and means for zero-shot recognition can establish alignment between the visual space and the semantic space at both the category layer and the image layer, so as to realize high-precision zero-shot image recognition. The dictionary learning method includes the following steps: (1) training a cross domain dictionary of the category layer based on a cross domain dictionary learning method; (2) generating semantic attributes of an image based on the cross domain dictionary of the category layer learned in step (1); (3) training a cross domain dictionary of the image layer based on the image semantic attributes generated in step (2); (4) completing a recognition task of unseen category images based on the cross domain dictionary of the image layer learned in step (3).
    Type: Grant
    Filed: January 29, 2022
    Date of Patent: October 24, 2023
    Assignee: Beijing University of Technology
    Inventors: Lichun Wang, Shuang Li, Shaofan Wang, Dehui Kong, Baocai Yin
  • Patent number: 11798266
    Abstract: A multi-dimensional task facial beauty prediction method and system, and a storage medium are disclosed. The method includes the steps of: at a training phase, using first facial images to optimize a shared feature extraction network for extracting shared features and to train a plurality of sub-task networks for performing facial beauty classification tasks; at a testing phase, extracting shared features of second facial images; inputting the shared features to the trained plurality of sub-task networks; and obtaining a first beauty prediction result based on first output results of the plurality of sub-task networks.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: October 24, 2023
    Assignee: WUYI UNIVERSITY
    Inventors: Junying Gan, Bicheng Wu, Yikui Zhai, Guohui He
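The shared-backbone, multi-head arrangement in this abstract can be sketched as one feature extractor feeding several sub-task heads whose outputs are fused into a single beauty prediction. The toy backbone, the linear heads, and the averaging fusion are assumptions; the abstract does not specify the networks or the fusion rule.

```python
# Shared feature extractor feeding multiple sub-task heads, fused by averaging.
from typing import Callable, List

def shared_features(image: List[float]) -> List[float]:
    return [sum(image) / len(image), max(image) - min(image)]  # stand-in backbone

def make_head(w0: float, w1: float, b: float) -> Callable[[List[float]], float]:
    return lambda f: w0 * f[0] + w1 * f[1] + b

heads = [make_head(0.8, -0.2, 0.1), make_head(0.5, 0.4, 0.0)]  # sub-task networks

def beauty_prediction(image: List[float]) -> float:
    f = shared_features(image)
    return sum(h(f) for h in heads) / len(heads)  # fuse first output results

print(round(beauty_prediction([0.2, 0.5, 0.9, 0.4]), 3))
```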