Patents Examined by Shaghayegh Azima
  • Patent number: 11978263
    Abstract: A method for determining a safe state for a vehicle includes disposing a camera at a vehicle and disposing an electronic control unit (ECU) at the vehicle. Frames of image data are captured by the camera and provided to the ECU. An image processor of the ECU processes frames of image data captured by the camera. A condition is determined via processing, at the image processor of the ECU, frames of image data captured by the camera. The condition includes either a shadow present in the field of view of the camera within ten frames of image data captured by the camera, or a damaged condition of the imager within two minutes of operation of the camera. The ECU determines a safe state for the vehicle responsive to determining the condition.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: May 7, 2024
    Assignee: MAGNA ELECTRONICS INC.
    Inventors: Horst D. Diessner, Richard C. Bozich, Aleksandar Stefanovic, Anant Kumar Lall, Nikhil Gupta
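A minimal sketch of the condition logic described in the preceding abstract. The ten-frame and two-minute windows come from the abstract; the `detect_shadow`, `detect_damage`, and `enter_safe_state` callables are hypothetical stand-ins, not details from the patent.

```python
# Illustrative sketch of the safe-state condition described above; the callables
# passed in are hypothetical stand-ins, not details taken from the patent.

SHADOW_FRAME_WINDOW = 10        # shadow must appear within ten frames
DAMAGE_TIME_WINDOW_S = 120.0    # damaged imager within two minutes of operation

def monitor_camera(frames, camera_start_time, detect_shadow, detect_damage, enter_safe_state):
    """frames: iterable of (timestamp_seconds, image) pairs provided to the ECU."""
    for frame_index, (timestamp, image) in enumerate(frames):
        if frame_index < SHADOW_FRAME_WINDOW and detect_shadow(image):
            enter_safe_state(reason="shadow present in the camera's field of view")
            return
        if (timestamp - camera_start_time) < DAMAGE_TIME_WINDOW_S and detect_damage(image):
            enter_safe_state(reason="damaged imager")
            return
```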
  • Patent number: 11971482
    Abstract: A system for providing six-dimensional position data of an object in a three-dimensional (3D) space, the system including a light source configured to emit a light beam in an x-direction, a mirror including a mirror plane disposed in an x-y plane, and a beam splitter configured to reflect the light beam from the light source onto the mirror before it is directed to the object. The light beam reflected by the object is directed onto the beam splitter before passing through a lens to an image plane to form an image.
    Type: Grant
    Filed: August 18, 2023
    Date of Patent: April 30, 2024
    Assignee: MLOptic Corp.
    Inventors: Sophia Shiaoyi Wu, Gary Fu, Sean Huentelman, Wei Zhou
  • Patent number: 11967174
    Abstract: The cloud server holds the main database for storing all the registration data handled in the present system, and the edge server arranged close to the sensor holds a sub-database for storing part of the registration data. The sub-database in the edge server stores only the registration data having a high probability of being verified in the edge server. When the edge server verifies the detection data acquired by the sensor against the registration data within the sub-database and determines that no registration data matching the detection data exists within the sub-database, the edge server transmits the detection data to the cloud server and requests that the detection data be verified against the registration data within the main database.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: April 23, 2024
    Assignee: Hitachi Kokusai Electric Inc.
    Inventors: Keigo Hasegawa, Hiroto Sasao
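The edge-first verification flow described above can be sketched as follows; the `EdgeServer`/`CloudServer` classes, the `match` predicate, and the dictionary-backed databases are illustrative assumptions rather than the patented design.

```python
# Sketch of edge-first verification with cloud fallback. Class and method names
# are hypothetical; databases are plain dicts {record_id: registration_data}.

class CloudServer:
    def __init__(self, main_database):
        self.main_database = main_database          # all registration data

    def verify(self, detection_data, match):
        for record_id, registration in self.main_database.items():
            if match(detection_data, registration):
                return record_id
        return None

class EdgeServer:
    def __init__(self, sub_database, cloud):
        self.sub_database = sub_database            # subset with high match probability
        self.cloud = cloud

    def verify(self, detection_data, match):
        # Try the local sub-database first.
        for record_id, registration in self.sub_database.items():
            if match(detection_data, registration):
                return record_id
        # No local match: request verification against the main database.
        return self.cloud.verify(detection_data, match)
```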
  • Patent number: 11961268
    Abstract: Methods and devices for encoding a point cloud. More than one frame of reference is identified, and a transform defines the relative motion of a second frame of reference with respect to a first frame of reference. The space is segmented into regions and each region is associated with one of the frames of reference. Local motion vectors within a region are expressed relative to the frame of reference associated with that region. Occupancy of the bitstream is entropy encoded based on predictions determined using the local motion vectors and the transform associated with the attached frame of reference.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: April 16, 2024
    Assignee: BlackBerry Limited
    Inventors: Sébastien Lasserre, David Flynn, Gaëlle Christine Martin-Cocher
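A very rough numerical sketch of the prediction step, assuming a rigid transform (rotation `R`, translation `t`) models the motion of the second frame of reference relative to the first; the ordering of the transform and the local motion vector here is an illustrative choice, not taken from the patent.

```python
# Toy prediction combining a region's local motion vector with the transform of
# the frame of reference the region is attached to. Conventions are illustrative.

import numpy as np

def predict_position(point, region_frame, transform, local_motion_vector):
    """point: (3,) array; transform: (R, t) of the second frame of reference."""
    if region_frame == "second":
        R, t = transform
        return R @ (point + local_motion_vector) + t
    return point + local_motion_vector          # region attached to the first frame

R = np.eye(3)                                   # no rotation in this toy example
t = np.array([0.5, 0.0, 0.0])                   # second frame translated along x
print(predict_position(np.array([1.0, 2.0, 3.0]), "second", (R, t),
                       np.array([0.0, 0.1, 0.0])))
```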
  • Patent number: 11950014
    Abstract: The present invention relates to a method for differentiating between background and foreground in images or films of scenery recorded by an electronic camera. The invention additionally relates to a method for replacing the background in recorded images or films of scenery whilst maintaining the foreground.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: April 2, 2024
    Assignee: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
    Inventors: Wolfgang Vonolfen, Rainer Wollsiefen
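As a point of reference only, a generic background-difference approach to the two tasks named in the abstract might look like the sketch below; the actual patented method is not described here, and the threshold and clean background plate are assumptions.

```python
# Generic illustration of background/foreground differentiation and background
# replacement using a per-pixel difference against a known background plate.
# Arrays are HxWx3 uint8 frames.

import numpy as np

def foreground_mask(frame, background_plate, threshold=30):
    """True where the frame differs sufficiently from the known background."""
    diff = np.abs(frame.astype(np.int16) - background_plate.astype(np.int16))
    return diff.sum(axis=2) > threshold

def replace_background(frame, background_plate, new_background, threshold=30):
    """Keep foreground pixels, substitute everything else with new_background."""
    mask = foreground_mask(frame, background_plate, threshold)
    out = new_background.copy()
    out[mask] = frame[mask]
    return out
```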
  • Patent number: 11948390
    Abstract: The present disclosure provides a dog nose print recognition method and system. The dog nose print recognition method includes: collecting a nose image of a dog, acquiring the nose image, and processing the nose image to obtain a plurality of regional images to be recognized; performing key point detection on the plurality of regional images to be recognized to obtain key points corresponding to the regional images to be recognized, and using the key points to perform alignment processing of the regional images to be recognized to obtain aligned regional images to be recognized; and performing dog nose print feature vector extraction and recognition on the aligned regional images to be recognized, and determining a dog identity recognition result through the dog nose print feature vector extraction and recognition. The system includes modules corresponding to the steps of the method.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: April 2, 2024
    Assignee: XINGCHONG KINGDOM (BEIJING) TECHNOLOGY CO., LTD
    Inventors: Yiduan Wang, Cheng Song, Baoguo Liu, Weipeng Guo
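A pipeline-level sketch of the recognition flow described above; the region detector, keypoint model, alignment routine, embedding network, and gallery matching are placeholder callables, and averaging the per-region features is an assumption of this sketch, not the patent.

```python
# Sketch of the nose-print recognition pipeline: region extraction, keypoint
# detection, alignment, feature extraction, and nearest-gallery matching.
# Every stage is a placeholder supplied by the caller.

import numpy as np

def recognize_dog(nose_image, detect_regions, detect_keypoints, align, embed, gallery):
    """gallery: {dog_id: reference feature vector}."""
    regions = detect_regions(nose_image)                    # regional images to be recognized
    aligned = [align(r, detect_keypoints(r)) for r in regions]
    # Averaging the per-region feature vectors is an illustrative choice.
    features = np.mean([embed(r) for r in aligned], axis=0)
    # Nearest reference vector in the gallery decides the identity.
    return min(gallery, key=lambda dog_id: np.linalg.norm(features - gallery[dog_id]))
```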
  • Patent number: 11945106
    Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
    Type: Grant
    Filed: January 23, 2023
    Date of Patent: April 2, 2024
    Assignee: Google LLC
    Inventors: Michael Quinlan, Sean Kirmani
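A hedged PyTorch sketch of the shared-backbone idea in the abstract above: a dense network trained for one vision task produces a feature tensor, and a small task-specific head consumes those features for a second task. Layer shapes and the choice of segmentation as the second task are arbitrary.

```python
# Shared feature backbone with a task-specific head; sizes are illustrative.

import torch
import torch.nn as nn

dense_network = nn.Sequential(              # trained for the first robot vision task
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

segmentation_head = nn.Conv2d(64, 2, 1)     # hypothetical second task: 2-class segmentation

image = torch.randn(1, 3, 128, 128)         # stand-in for camera image data
with torch.no_grad():
    features = dense_network(image)         # feature values from the trained backbone
    task_output = segmentation_head(features)
print(task_output.shape)                    # torch.Size([1, 2, 128, 128])
```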
  • Patent number: 11935262
    Abstract: A method in which one or more objects of a three-dimensional scene are determined in accordance with provided raw data of the three-dimensional scene representing a predefined environment inside and/or outside the vehicle. A two-dimensional image is determined in accordance with the provided raw data of the three-dimensional scene such that the two-dimensional image depicts the determined objects of the three-dimensional scene on a curved plane. The two-dimensional image has a quantity of pixels, each representing at least a part of one or several of the determined objects of the three-dimensional scene. Data is provided which represents at least one determined field of view of the driver. For at least one of the determined objects, the probability with which the at least one object will be located in the field of view of the driver is determined in accordance with the provided data and the two-dimensional image.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: March 19, 2024
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventors: Florian Bade, Moritz Blume, Martin Buchner, Carsten Isert, Julia Niemann, Michael Wolfram, Joris Wolters
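One plausible way to turn the 2D projection and a field-of-view mask into a probability is a pixel-overlap ratio, sketched below; the patent does not prescribe this exact formula.

```python
# Pixel-overlap approximation: the probability an object lies in the driver's
# field of view is the fraction of its pixels that fall inside a FOV mask.

import numpy as np

def in_view_probability(object_id_image, fov_mask, object_id):
    """object_id_image: HxW array of object ids; fov_mask: HxW boolean array."""
    object_pixels = (object_id_image == object_id)
    total = object_pixels.sum()
    if total == 0:
        return 0.0
    return float((object_pixels & fov_mask).sum()) / float(total)
```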
  • Patent number: 11935283
    Abstract: Disclosed are a cranial CT-based grading method and a corresponding system, which relate to the field of medical imaging. The disclosed cranial CT-based grading method addresses the problems of relatively large subjective disparities and poor operability in eyeball-based ASPECTS assessment. The grading method includes: determining, from to-be-processed multi-frame cranial CT data, the frames where target image slices are located; extracting target areas; performing infarct judgment on each target area included in the target areas to output an infarct judgment outcome regarding that target area; and outputting a grading outcome based on the infarct judgment outcomes regarding all target areas.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: March 19, 2024
    Assignee: UNION STRONG (BEIJING) TECHNOLOGY CO. LTD.
    Inventors: Hailan Jin, Ling Song, Yin Yin, Guangming Yang, Yangyang Yao, Pengxiang Li, Lan Qin
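A sketch of the grading step, assuming the conventional ASPECTS rule of starting at 10 and subtracting one point per region judged infarcted; the slice selection, target-area extraction, and infarct classifier of the patented pipeline are represented only by placeholder inputs.

```python
# Per-region grading under the conventional ASPECTS convention (10 regions,
# one point deducted per infarcted region). judge_infarct is a placeholder
# for the infarct-judgment model described in the abstract.

def aspects_score(target_areas, judge_infarct):
    """target_areas: extracted regional images; judge_infarct: returns True/False."""
    infarct_outcomes = [judge_infarct(area) for area in target_areas]
    return 10 - sum(infarct_outcomes)
```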
  • Patent number: 11914639
    Abstract: This application discloses a multimedia resource matching method performed at a computing device. The method includes: searching for a first media resource set among a multimedia resource set, where first target image frames of all media resources in the first media resource set meet a target condition, and features of the first target image frames match features in image frames of a to-be-matched multimedia resource according to a first matching condition; determining, among the first target image frames, second target image frames whose features match the features in the image frames of the to-be-matched multimedia resource according to a second matching condition; and obtaining matching information of the second target image frames and an identifier of a target media resource among the multimedia resource set, the matching information being used for indicating a total duration and a playback moment of the second target image frames in the target media resource.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: February 27, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xuyuan Xu, Guoping Gong, Tao Wu
  • Patent number: 11915495
    Abstract: A sitting height estimation ECU is mounted on a vehicle and includes: an acquisition unit configured to acquire an image of a driver sitting in a driver's seat; a position detection unit configured to detect a position of a landmark on a face of the driver from the image; a crown estimation unit configured to estimate a position of a crown of the driver on the basis of the position of the landmark; and a sitting height estimation unit configured to estimate a sitting height of the driver on the basis of an estimation result obtained by the crown estimation unit.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: February 27, 2024
    Assignee: Faurecia Clarion Electronics Co., Ltd.
    Inventors: Norikazu Nara, Tetsuro Murakami, Naoto Sakata
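A rough geometric sketch of the two estimation stages, under assumptions the abstract does not spell out: the crown is placed a fixed multiple of the eye-to-chin distance above the eye line, and sitting height follows from the crown's image row via a calibrated millimetres-per-pixel factor and a seat reference row. All constants are illustrative.

```python
# Crown position from facial landmarks, then sitting height from the crown row.
# The crown_ratio, seat reference row, and mm-per-pixel factor are assumptions.

def estimate_crown_y(eye_y, chin_y, crown_ratio=1.1):
    """Image rows grow downward; the crown sits above the eye line."""
    face_height = chin_y - eye_y
    return eye_y - crown_ratio * face_height

def estimate_sitting_height(crown_y, seat_reference_y, mm_per_pixel):
    return (seat_reference_y - crown_y) * mm_per_pixel

crown_y = estimate_crown_y(eye_y=240.0, chin_y=320.0)
print(estimate_sitting_height(crown_y, seat_reference_y=620.0, mm_per_pixel=2.0))
```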
  • Patent number: 11908103
    Abstract: There is included a method and apparatus comprising computer code configured to cause a processor or processors to perform obtaining an input low resolution (LR) image comprising a height, a width, and a number of channels, implementing a feature learning deep neural network (DNN) configured to compute a feature tensor based on the input LR image, generating, by an upscaling DNN, a high resolution (HR) image, having a higher resolution than the input LR image, based on the feature tensor computed by the feature learning DNN, wherein a networking structure of the upscaling DNN differs depending on different scale factors, and wherein a networking structure of the feature learning DNN is a same structure for each of the different scale factors.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: February 20, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Wei Jiang, Wei Wang, Shan Liu
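A hedged PyTorch sketch of the structure described above: one feature-learning network shared across scale factors and an upscaling network whose structure depends on the scale factor (here via `PixelShuffle`). Channel counts and layer depth are arbitrary.

```python
# Shared feature-learning DNN plus a scale-dependent upscaling DNN.

import torch
import torch.nn as nn

feature_dnn = nn.Sequential(                       # same structure for every scale factor
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)

def make_upscaler(scale):                          # structure differs per scale factor
    return nn.Sequential(
        nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
        nn.PixelShuffle(scale),
    )

lr_image = torch.randn(1, 3, 32, 48)               # (batch, channels, height, width)
features = feature_dnn(lr_image)
hr_image = make_upscaler(4)(features)
print(hr_image.shape)                              # torch.Size([1, 3, 128, 192])
```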
  • Patent number: 11908232
    Abstract: A prism of an approximately quadrangle-frustum shape is arranged so that the bottom surface, out of its two parallel surfaces, serves as the placing-surface side for a finger. A first imaging unit arranged below the top surface, which is parallel to the bottom surface, captures an image of the finger transmitted through the top surface. A light source radiates light onto at least one side surface of a first set of side surfaces, out of the two sets of side surfaces of the approximately quadrangle-frustum shape that face each other. A second imaging unit captures the image of the finger transmitted through a second set of side surfaces, out of the two sets of side surfaces. An infrared light source radiates infrared light into the finger so that the infrared light is scattered inside the finger and is received by the imaging unit.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: February 20, 2024
    Assignee: NEC CORPORATION
    Inventor: Teruyuki Higuchi
  • Patent number: 11900707
    Abstract: A skeletal information threshold setting apparatus includes: a joint information input unit that accepts an input of important joints among joints of a subject and a confidence threshold for the important joints; and a threshold setting unit that acquires a confidence threshold for each of multiple joints of the subject, including the important joints, based on the important joints and the confidence threshold for the important joints that were input, and sets the acquired confidence thresholds for the joints as thresholds to be used in making a determination regarding a skeletal estimation result for the subject.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: February 13, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akio Kameda, Megumi Isogai, Hideaki Kimata
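A minimal sketch of the threshold-setting step described above; the relaxed default threshold for non-important joints and the all-joints acceptance rule are assumptions of this sketch.

```python
# Assign a confidence threshold to every joint based on the important joints
# and their input threshold, then judge a skeletal estimation result.

def set_joint_thresholds(all_joints, important_joints, important_threshold, default_threshold=0.3):
    return {joint: (important_threshold if joint in important_joints else default_threshold)
            for joint in all_joints}

def skeleton_is_acceptable(joint_confidences, thresholds):
    """joint_confidences: {joint: confidence from the skeletal estimator}."""
    return all(joint_confidences[j] >= t for j, t in thresholds.items())

thresholds = set_joint_thresholds(
    all_joints=["head", "neck", "right_wrist", "left_wrist", "right_ankle"],
    important_joints={"right_wrist", "left_wrist"},
    important_threshold=0.8,
)
```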
  • Patent number: 11900731
    Abstract: A biometrics imaging device for capturing image data of a body part of a person comprises a visible light sensor for capturing image data of the body part in the visible light spectrum and/or a near infrared light sensor for capturing image data of the body part in the near infrared light spectrum. The biometrics imaging device comprises a time of flight camera for capturing three dimensional image data of the body part. The biometrics imaging device executes a procedure which includes: capturing three dimensional image data of a current body part posture; determining, based on the image data, a difference between a desired body part posture and the current body part posture; providing, based on the determined difference, user guidance to enable the person to adapt the body part posture in the direction of the desired posture; and capturing image data in the visible light spectrum and/or image data in the infrared light spectrum.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: February 13, 2024
    Assignee: QAMCOM INNOVATION LABS AB
    Inventors: Johan Bergqvist, Arnold Herp
  • Patent number: 11887353
    Abstract: The present disclosure relates to deep learning image classification oriented to heterogeneous computing devices. According to embodiments of the present disclosure, the deep learning model can be modeled as an original directed acyclic graph, with nodes representing operators of the deep learning model and directed edges representing data transmission between the operators. Then, a new directed acyclic graph is generated by replacing the directed edges in the original directed acyclic graph with new nodes and adding two directed edges to maintain a topological structure.
    Type: Grant
    Filed: July 18, 2023
    Date of Patent: January 30, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Beibei Zhang, Feng Gao, Mingge Sun, Chu Zheng
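The graph rewrite described above is concrete enough to illustrate directly: each directed edge (u, v) becomes a new node with two directed edges preserving the topology. The node naming scheme is illustrative.

```python
# Replace every directed edge (u, v) of the original DAG with a new node and
# two directed edges (u, e_uv) and (e_uv, v), preserving the topological structure.

def rewrite_dag(nodes, edges):
    """nodes: iterable of operator names; edges: iterable of (u, v) pairs."""
    new_nodes = list(nodes)
    new_edges = []
    for u, v in edges:
        edge_node = f"transfer_{u}_{v}"     # new node representing data transmission
        new_nodes.append(edge_node)
        new_edges.append((u, edge_node))
        new_edges.append((edge_node, v))
    return new_nodes, new_edges

print(rewrite_dag(["conv", "relu", "fc"], [("conv", "relu"), ("relu", "fc")]))
```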
  • Patent number: 11861879
    Abstract: An information processing device is configured to identify a background area of a first image registered from a first account and a background area of a second image registered from a second account, and to output identicalness information indicating whether a user owning the first account and a user owning the second account are identical to each other based on the background area of the first image and the background area of the second image.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: January 2, 2024
    Assignee: RAKUTEN GROUP, INC.
    Inventors: Mitsuru Nakazawa, Takashi Tomooka
  • Patent number: 11842553
    Abstract: Aspects of the technology described herein relate to a system for detecting and reducing wear in industrial equipment. Aspects of the technology use 3D image data from a field inspection of industrial equipment to identify and quantify wear. The wear can be detected by providing the images from the field inspection to a computer classifier for recognition. Aspects of the technology also use machine learning to recommend a change to the operation of the equipment to minimize wear. Such a change could include load shedding, lube oil feed rate changes, prompts for maintenance, etc. Through incorporation of the wear data into the control system, the equipment can automatically change operation to improve wear performance and increase the durability and lifetime of the equipment.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: December 12, 2023
    Assignee: ExxonMobil Technology and Engineering Company
    Inventors: Christopher J. Vander Neut, Matthew L. Dinslage, Thomas A. Schiff
  • Patent number: 11836627
    Abstract: Determining that a motor vehicle driver is using a mobile device while driving a motor vehicle. Multiple images of a driver of a motor vehicle are captured through a side window of the motor vehicle. Positive images show a driver using a mobile device while driving a motor vehicle. Negative images show a driver not using a mobile device while driving a motor vehicle. Multiple training images are selected from both the positive images and the negative images. The selected training images and respective labels, indicating whether the selected training images are positive images or negative images, are input to a machine (e.g., a Convolutional Neural Network (CNN)). The CNN is trained to classify whether a test image, captured through a side window of a motor vehicle, shows a driver using a mobile device while driving the motor vehicle.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: December 5, 2023
    Assignee: REDFLEX TRAFFIC SYSTEMS PTY LTD
    Inventors: Jonathan Devor, Moshe Mikhael Frolov, Igal Muchnik, Herbert Zlotogorski
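A hedged PyTorch sketch of the training setup: a small CNN is fit on labelled through-the-side-window images, positives showing a driver using a mobile device and negatives not. The architecture, optimizer, and hyperparameters are arbitrary stand-ins, not the patented configuration.

```python
# Binary classification training step for positive/negative driver images.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),     # classes: not using / using a mobile device
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (N, 3, H, W) float tensor; labels: (N,) long tensor of 0/1."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

print(train_step(torch.randn(4, 3, 64, 64), torch.tensor([1, 0, 1, 0])))
```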
  • Patent number: 11829153
    Abstract: An apparatus for identifying the state of an object includes a processor configured to: input, every time an image is obtained from a camera, the image into a first classifier to detect, for each of one or more predetermined objects represented in the image, an object region including the object; determine a predicted object region in a subsequent image to be obtained from the camera for an object whose position in the subsequent image is predictable; and input characteristics into a second classifier to identify the state of an object involving time-varying changes in outward appearance. When the object has a predicted object region, the characteristics are obtained from pixel values of the predicted object region in the subsequent image. On the other hand, when the object does not have a predicted object region, the characteristics are obtained from pixel values of the object region detected from the subsequent image.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: November 28, 2023
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Daisuke Hashimoto, Ryusuke Kuroda
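A per-frame sketch of the flow described above: the second (state) classifier receives pixels from the predicted object region when one exists for the object, and otherwise from the region detected in the current image. The helper callables are hypothetical.

```python
# Per-frame state identification using predicted regions where available.

def identify_states(image, detect_regions, predicted_regions, crop, classify_state):
    """predicted_regions: {object_id: region predicted from earlier frames}."""
    states = {}
    for object_id, detected_region in detect_regions(image).items():
        # Prefer the predicted region when the object's position was predictable.
        region = predicted_regions.get(object_id, detected_region)
        states[object_id] = classify_state(crop(image, region))
    return states
```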