Patents Examined by Leon-Viet Q. Nguyen
  • Patent number: 11715229
    Abstract: A moving body includes processing circuitry. The processing circuitry is configured to collect an external image of the moving body from an external sensor and collect information associated with an inner state of the moving body from an internal sensor. The processing circuitry is configured to determine whether or not to adopt the external image collected by the external sensor as a first image for estimating a position of the moving body, based on the information associated with the inner state of the moving body collected by the internal sensor. The processing circuitry is configured to estimate a position of the moving body by comparing the first image and a second image associated with a collection position of the first image.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: August 1, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takuya Miyamoto, Kenichi Shimoyama, Keiko Noguchi
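    Illustrative sketch (not the patented implementation): the gating-and-matching idea above can be pictured as rejecting external images when an internal reading such as angular velocity suggests motion blur, then comparing the adopted first image against stored keyframes tagged with their collection positions. The function names, the threshold, and the SSD comparison are assumptions.
```python
import numpy as np

def adopt_image(angular_velocity, blur_threshold=0.5):
    """Gate on the internal-sensor state: skip frames likely blurred by fast rotation."""
    return abs(angular_velocity) < blur_threshold

def estimate_position(first_image, keyframes):
    """Return the collection position of the best-matching stored keyframe."""
    best_position, best_error = None, float("inf")
    for second_image, collection_position in keyframes:
        error = float(np.mean((first_image - second_image) ** 2))  # simple SSD comparison
        if error < best_error:
            best_position, best_error = collection_position, error
    return best_position

# illustrative usage with random stand-in imagery
keyframes = [(np.random.rand(64, 64), (float(x), 0.0)) for x in range(3)]
frame, gyro_reading = np.random.rand(64, 64), 0.1
if adopt_image(gyro_reading):
    print(estimate_position(frame, keyframes))
```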
  • Patent number: 11714162
    Abstract: An optical tracking system includes optical source devices. The optical source devices are configured to emit optical signals. A control method, suitable for the optical tracking system, includes the following operations. A dimensional scale to be covered by the optical tracking system is obtained. Signal strength of the optical signals provided by the optical source devices is adjusted according to the dimensional scale. The signal strength of the optical signals is positively correlated with the dimensional scale.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: August 1, 2023
    Assignee: HTC Corporation
    Inventors: Mong-Yu Tseng, Sheng-Long Wu
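    Illustrative sketch: the claimed positive correlation between signal strength and dimensional scale reduces to a small mapping such as the one below; the power range and gain are assumed values, not HTC's.
```python
def adjust_signal_strength(dimensional_scale_m, min_power=0.1, max_power=1.0, gain_per_m=0.09):
    """Emitter power grows with the covered dimensional scale, clamped to a
    hardware range (all values are illustrative assumptions)."""
    return max(min_power, min(max_power, min_power + gain_per_m * dimensional_scale_m))

for scale in (2.0, 5.0, 10.0):            # small room ... large hall, in metres
    print(scale, round(adjust_signal_strength(scale), 2))
```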
  • Patent number: 11710298
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an event detector. The methods, systems, and apparatus include actions of obtaining frames of a video, determining whether an object of interest is detected within the frames, determining whether motion is detected within the frames, determining whether the frames correspond to motion by an object of interest, generating a training set that includes labeled inter-frame differences based on whether the frames correspond to motion by an object of interest, and training an event detector using the training set.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: July 25, 2023
    Assignee: ObjectVideo Labs, LLC
    Inventors: Narayanan Ramanathan, Allison Beach
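    Illustrative sketch of the labeling step, with stand-in detectors: each inter-frame difference is labeled positive only when both an object of interest and motion are detected, which is the training-set construction the abstract describes at a high level.
```python
import numpy as np

def build_training_set(frames, detect_object, detect_motion):
    """Label each inter-frame difference by whether it corresponds to motion
    by an object of interest (illustrative, not the patented pipeline)."""
    samples = []
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
        label = 1 if (detect_object(curr) and detect_motion(diff)) else 0
        samples.append((diff, label))
    return samples

# crude stand-ins for the real object and motion detectors
frames = [np.random.randint(0, 256, (48, 48), dtype=np.uint8) for _ in range(5)]
training_set = build_training_set(frames,
                                  detect_object=lambda f: f.mean() > 100,
                                  detect_motion=lambda d: d.mean() > 10)
print(len(training_set), training_set[0][1])
```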
  • Patent number: 11703350
    Abstract: A system for automatically annotating a map includes: a robot; a server operably connected to the robot; file storage configured to store files, the file storage operably connected to the server; an annotations database operably connected to the server, the annotations database comprising map annotations; an automatic map annotation service operably connected to the server, the automatic map annotation service configured to automatically create a map of an item of interest, annotate a map of an item of interest, or both; a queue of annotation requests operably connected to the automatic map annotation service; and a computer operably connected to the server, the computer comprising a graphical user interface (GUI) usable by a human user.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: July 18, 2023
    Assignee: Zebra Technologies Corporation
    Inventors: Levon Avagyan, Jiahao Feng, Alex Henning, Michael Ferguson, Melonee Wise, Derek King
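    Illustrative sketch of how a queue of annotation requests might feed an automatic annotation service; the request fields and the dictionary "database" are assumptions, not Zebra's implementation.
```python
import queue
from dataclasses import dataclass

@dataclass
class AnnotationRequest:
    """A queued request for the automatic map annotation service (fields assumed)."""
    map_id: str
    item_of_interest: str

def annotation_service(requests, annotations_db):
    """Drain the request queue and write one annotation per request into the
    annotations database (a plain dictionary here); illustrative only."""
    while not requests.empty():
        req = requests.get()
        annotations_db.setdefault(req.map_id, []).append(f"annotated: {req.item_of_interest}")

requests = queue.Queue()
requests.put(AnnotationRequest("warehouse-1", "charging dock"))
requests.put(AnnotationRequest("warehouse-1", "aisle 3 endcap"))
db = {}
annotation_service(requests, db)
print(db)
```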
  • Patent number: 11694310
    Abstract: An image processing method includes a first step of acquiring input data including a captured image and optical system information relating to a state of an optical system used for capturing the captured image, and a second step of inputting the input data to a machine learning model and generating an estimated image acquired by sharpening the captured image or by reshaping blurs included in the captured image.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: July 4, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Norihito Hiasa
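    Illustrative sketch of one plausible way to hand both inputs to a model: the optical system information is broadcast into constant image channels and concatenated with the captured image. The channel layout and parameter choices are assumptions; the patent does not specify this format.
```python
import numpy as np

def build_model_input(captured_image, optical_state):
    """Stack the image with per-pixel constant channels encoding the optical
    system's state, so a network can condition its sharpening on it
    (illustrative input format only)."""
    h, w, _ = captured_image.shape
    state_channels = [np.full((h, w, 1), value, dtype=np.float32) for value in optical_state]
    return np.concatenate([captured_image.astype(np.float32)] + state_channels, axis=-1)

image = np.random.rand(32, 32, 3).astype(np.float32)
optical_state = [50.0, 1.8, 0.45]   # assumed: focal length, f-number, focus position
print(build_model_input(image, optical_state).shape)  # (32, 32, 6)
```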
  • Patent number: 11688166
    Abstract: A multi-mode tracking method according to the present disclosure includes receiving sensor signals from a plurality of positioning sensors located on a plurality of sport participants, wherein the sensor signals each include a participant identifier and location data; receiving a sport image captured from a camera located near a playfield, wherein the sport image includes at least a target participant among the plurality of sport participants on the playfield; detecting an occlusion related to the target participant in the sport image; determining the severity of the occlusion on the basis of a sensor signal received from a specific positioning sensor installed on a specific sport participant located in a region of interest related to the occlusion; and determining a location of the target participant on the basis of the severity of the occlusion.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: June 27, 2023
    Assignee: Fitogether Inc.
    Inventors: Jinsung Yoon, Jonghyun Lee
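    Illustrative sketch of the final step, with an assumed fusion rule: the more severe the occlusion, the more the location estimate leans on the wearable positioning sensor rather than the camera.
```python
import numpy as np

def fuse_location(camera_xy, sensor_xy, occlusion_severity):
    """Blend image-based and sensor-based locations; the heavier the occlusion,
    the more weight the positioning sensor receives (illustrative rule)."""
    w = np.clip(occlusion_severity, 0.0, 1.0)   # 0 = no occlusion, 1 = fully occluded
    return (1.0 - w) * np.asarray(camera_xy) + w * np.asarray(sensor_xy)

print(fuse_location([10.0, 22.0], [10.8, 21.5], occlusion_severity=0.7))
```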
  • Patent number: 11688200
    Abstract: Systems and methods for joint feature extraction and quality prediction using a shared machine learning model backbone and a customized training dataset are provided. According to an embodiment, a computer system receives a training dataset including example images each labeled with a particular category of a set of categories, and trains a deep neural network (DNN) based on the training dataset to jointly perform, for an input image, (i) facial feature extraction in accordance with a facial feature extraction algorithm and (ii) quality scoring in accordance with a quality prediction algorithm. In the embodiment, the DNN, once trained with the training dataset labeled using a custom labeling scheme, is used for the facial feature extraction and the quality prediction. The facial feature extraction algorithm and the quality prediction algorithm share a common DNN backbone.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: June 27, 2023
    Assignee: Fortinet, Inc.
    Inventor: Xihua Dong
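    Illustrative sketch of the shared-backbone idea in PyTorch: one backbone feeds both a feature-embedding head and a quality-score head. Layer sizes and the embedding dimension are assumptions, not the patented architecture.
```python
import torch
import torch.nn as nn

class JointFaceModel(nn.Module):
    """One shared backbone, two heads: an identity-feature embedding and a
    scalar quality score (a minimal sketch with assumed layer sizes)."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.feature_head = nn.Linear(32, embedding_dim)                    # facial features
        self.quality_head = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())  # quality score

    def forward(self, x):
        shared = self.backbone(x)
        return self.feature_head(shared), self.quality_head(shared)

features, quality = JointFaceModel()(torch.randn(2, 3, 112, 112))
print(features.shape, quality.shape)   # torch.Size([2, 128]) torch.Size([2, 1])
```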
  • Patent number: 11672255
    Abstract: A method for evaluating the health state of an anatomical element of an animal in a slaughtering plant provided with an image acquisition device. The evaluation method comprises the steps of: verifying the presence of the anatomical element; acquiring the image of the anatomical element; processing the image of the anatomical element through deep learning techniques, generating a lesion image representing lesioned portions of the anatomical element and a number of processed images, each representing a corresponding non-lesioned anatomical area of the animal; for each of the lesioned portions and for each of the non-lesioned anatomical areas, determining a corresponding quantity indicative of the probability that said lesioned portion corresponds to said non-lesioned anatomical area; and determining a score indicative of the health state of the anatomical element, depending on the determined quantities.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: June 13, 2023
    Assignee: FARM4TRADE S.R.L.
    Inventors: Giuseppe Marruchella, Luca Bergamini, Andrea Capobianco Dondona, Ercole Del Negro, Francesco Di Tondo, Angelo Porrello, Simone Calderara
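    Illustrative sketch of the scoring step only: given the per-lesion probabilities of corresponding to each non-lesioned anatomical area, a health score is derived from a weighted lesion burden. The weighting rule is an assumption, not the patented formula.
```python
import numpy as np

def health_score(assignment_probs, lesion_areas, organ_area):
    """assignment_probs[i][j]: probability that lesioned portion i corresponds to
    non-lesioned anatomical area j; lesion_areas[i]: pixel area of portion i.
    Score = 1 - lesion burden weighted by the most likely assignment (illustrative)."""
    probs = np.asarray(assignment_probs)
    confidence = probs.max(axis=1)                    # best-matching area per lesion
    burden = np.sum(confidence * np.asarray(lesion_areas)) / organ_area
    return float(np.clip(1.0 - burden, 0.0, 1.0))

print(health_score([[0.7, 0.2], [0.1, 0.8]], lesion_areas=[400, 150], organ_area=10_000))
```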
  • Patent number: 11675068
    Abstract: A data processing method, a data processing device, and a multi-sensor fusion method are provided for multi-sensor fusion. They group data captured by different sensors across different probe dimensions into a multi-dimensional matrix structure of pixel elements suitable for deep learning, thereby enabling more effective data mining and feature extraction to support stronger environment perception and target detection.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: June 13, 2023
    Assignee: Shanghai YuGan Microelectronics Co., Ltd
    Inventor: Hong Jiang
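    Illustrative sketch of the grouping idea: co-registered per-sensor grids are stacked into one multi-channel matrix of pixel elements that a deep learning model can consume. The channel choice is an assumption.
```python
import numpy as np

def fuse_to_matrix(camera_gray, radar_range, lidar_height):
    """Group per-sensor grids that share a common spatial raster into one
    multi-channel matrix of "pixel elements" (illustrative layout)."""
    channels = [np.asarray(c, dtype=np.float32) for c in (camera_gray, radar_range, lidar_height)]
    assert all(c.shape == channels[0].shape for c in channels), "sensors must be co-registered"
    return np.stack(channels, axis=-1)      # shape: (H, W, num_sensors)

fused = fuse_to_matrix(np.random.rand(64, 64), np.random.rand(64, 64), np.random.rand(64, 64))
print(fused.shape)   # (64, 64, 3) -> ready for a CNN-style feature extractor
```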
  • Patent number: 11669998
    Abstract: Methods and systems are provided for learning a neural network and determining a pose of a vehicle in an environment. A first processor performs a first feature extraction on sensor-based image data to provide a first feature map. The first processor also performs a second feature extraction on aerial image data to provide a second feature map. Both feature maps are correlated to provide a correlation result. The first processor learns a neural network using the correlation result and ground-truth data, wherein each of the first feature extraction and the second feature extraction is learned to extract a portion of features from the respective image data. A geo-tagged second feature map can then be retrieved by an on-board processor of the vehicle, which, together with sensor-based data processed on board by the network trained by the first processor, determines the pose of the vehicle.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: June 6, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Han UL Lee, Brent N. Bacchus
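    Illustrative sketch of the correlation step, with the learned feature extraction omitted: a vehicle-view feature map is slid over a larger geo-tagged aerial feature map and the best-scoring offset is taken as the pose estimate. The brute-force search and plain cross-correlation are stand-ins.
```python
import numpy as np

def correlate_pose(vehicle_feat, aerial_feat):
    """Exhaustively correlate a small vehicle-view feature map against a larger
    geo-tagged aerial feature map; the peak gives the (row, col) offset."""
    vh, vw = vehicle_feat.shape
    ah, aw = aerial_feat.shape
    best, best_score = (0, 0), -np.inf
    for r in range(ah - vh + 1):
        for c in range(aw - vw + 1):
            score = np.sum(aerial_feat[r:r + vh, c:c + vw] * vehicle_feat)  # cross-correlation
            if score > best_score:
                best, best_score = (r, c), score
    return best

aerial = np.random.rand(40, 40)
vehicle = aerial[12:28, 7:23].copy()                    # 16x16 view cut from the map
print(correlate_pose(vehicle, aerial), "expected", (12, 7))
```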
  • Patent number: 11670323
    Abstract: Systems and methods are provided for detecting impairment of an individual. The method involves operating a processor to: receive at least one image associated with the individual; and identify at least one feature in each image. The method further involves operating the processor to, for each feature: generate an intensity representation for that feature; apply at least one impairment analytical model to the intensity representation to determine a respective impairment likelihood; and determine a confidence level for each impairment likelihood based on characteristics associated with at least the applied impairment analytical model and that feature. The method further involves operating the processor to define the impairment of the individual based on at least one impairment likelihood and the respective confidence level.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: June 6, 2023
    Assignee: PredictMedix Inc.
    Inventors: Rahul Kushwah, Sheldon Kales, Nandan Mishra, Himanshu Ujjawal Singh, Saurabh Gupta
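    Illustrative sketch of the final aggregation: per-feature impairment likelihoods are combined with their confidence levels into one determination. The confidence-weighted rule and threshold are assumptions.
```python
def define_impairment(results, threshold=0.5):
    """results: list of (impairment_likelihood, confidence) pairs, one per analysed
    feature. A confidence-weighted average decides the outcome (illustrative rule)."""
    total_conf = sum(conf for _, conf in results)
    if total_conf == 0:
        return False, 0.0
    weighted = sum(lik * conf for lik, conf in results) / total_conf
    return weighted >= threshold, weighted

# e.g. an eye-region model (high confidence) and a gait model (low confidence), both hypothetical
print(define_impairment([(0.82, 0.9), (0.40, 0.3)]))
```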
  • Patent number: 11636332
    Abstract: Described herein are embodiments for a feature-scattering-based adversarial training approach for improving model robustness against adversarial attacks. Conventional adversarial training approaches leverage a supervised scheme, either targeted or non-targeted in generating attacks for training, which typically suffer from issues such as label leaking as noted in recent works. Embodiments of the disclosed approach generate adversarial images for training through feature scattering in the latent space, which is unsupervised in nature and avoids label leaking. More importantly, the presented approaches generate perturbed images in a collaborative fashion, taking the inter-sample relationships into consideration. Extensive experiments on different datasets compared with state-of-the-art approaches demonstrate the effectiveness of the presented embodiments.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: April 25, 2023
    Assignee: Baidu USA LLC
    Inventors: Haichao Zhang, Jianyu Wang
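    Heavily simplified sketch: each image is perturbed to move its latent features away from the clean features, which captures the label-free, feature-space character of the approach but omits the collaborative, inter-sample coupling the embodiments describe. The toy backbone, step size, and initialization are assumptions.
```python
import torch
import torch.nn as nn

# assumed toy backbone; any feature extractor would do
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 32))

def perturb_label_free(x, epsilon=0.03):
    """Push each image's latent features away from its clean features with one
    signed-gradient step -- a simplified, label-free stand-in that avoids label
    leaking but is not the patented feature-scattering procedure."""
    clean_features = backbone(x).detach()
    # start from a tiny random offset so the feature-distance gradient is non-zero
    x_adv = (x + 1e-3 * torch.randn_like(x)).clamp(0.0, 1.0).detach().requires_grad_(True)
    feature_distance = (backbone(x_adv) - clean_features).pow(2).mean()
    feature_distance.backward()
    with torch.no_grad():
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

batch = torch.rand(8, 1, 28, 28)
print(perturb_label_free(batch).shape)   # perturbed images for adversarial training
```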
  • Patent number: 11636612
    Abstract: An AGV navigation device is provided, which includes an RGB-D camera, a plurality of sensors and a processor. When an AGV moves along a target route having a plurality of paths, the RGB-D camera captures the depth and color image data of each path. The sensors (including an IMU and a rotary encoder) record the acceleration, the moving speed, the direction, the rotation angle and the moving distance of the AGV moving along each path. The processor generates training data according to the depth image data, the color image data, the accelerations, the moving speeds, the directions, the moving distances and the rotation angles, and inputs the training data into a machine learning model for deep learning in order to generate a training result. Therefore, the AGV navigation device can realize automatic navigation for AGVs without any positioning technology, and so can reduce the cost of automatic navigation technologies.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: April 25, 2023
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Yong-Ren Li, Chao-Hui Tu, Ching-Tsung Cheng, Ruei-Jhih Hong
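    Illustrative sketch of the training-record layout: one sample per path bundling the RGB-D images with the IMU and encoder readings. The field names are assumptions.
```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PathSample:
    """One training record for a path segment (field names assumed)."""
    depth_image: np.ndarray
    color_image: np.ndarray
    acceleration: float
    speed: float
    direction_deg: float
    rotation_angle_deg: float
    distance_m: float

def build_training_data(paths):
    """Collect per-path sensor readings into model-ready records."""
    return [PathSample(**p) for p in paths]

sample = build_training_data([{
    "depth_image": np.zeros((60, 80), np.float32),
    "color_image": np.zeros((60, 80, 3), np.uint8),
    "acceleration": 0.2, "speed": 0.5, "direction_deg": 90.0,
    "rotation_angle_deg": 15.0, "distance_m": 3.2,
}])[0]
print(sample.distance_m)
```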
  • Patent number: 11632536
    Abstract: A method for generating a three-dimensional (3D) lane model, the method including calculating a free space indicating a driving-allowed area based on a driving image captured from a vehicle camera, generating a dominant plane indicating plane information of a road based on either or both of depth information of the free space and a depth map corresponding to a front of the vehicle, and generating a 3D short-distance road model based on the dominant plane.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: April 18, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hwiryong Jung, Young Hun Sung, KeeChang Lee, Kyungboo Jung
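    Illustrative sketch of the dominant-plane step using a RANSAC fit over 3D road points; RANSAC is a stand-in, since the abstract does not state how the plane is estimated.
```python
import numpy as np

def fit_dominant_plane(points, iterations=200, tol=0.05, rng=np.random.default_rng(0)):
    """RANSAC plane fit over 3D road points; returns (normal, d) for the plane
    n.x + d = 0 with the most inliers (illustrative stand-in for the patent's step)."""
    best_model, best_inliers = None, 0
    for _ in range(iterations):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(normal) < 1e-9:
            continue                                   # degenerate (collinear) sample
        normal = normal / np.linalg.norm(normal)
        d = -normal.dot(p1)
        inliers = int(np.sum(np.abs(points @ normal + d) < tol))
        if inliers > best_inliers:
            best_model, best_inliers = (normal, d), inliers
    return best_model

# synthetic road points: roughly the z = 0 plane plus noise
pts = np.column_stack([np.random.uniform(-5, 5, 500),
                       np.random.uniform(0, 30, 500),
                       np.random.normal(0, 0.02, 500)])
normal, d = fit_dominant_plane(pts)
print(np.round(normal, 2), round(float(d), 3))   # approximately (0, 0, +/-1) and 0
```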
  • Patent number: 11620835
    Abstract: The present disclosure describes a method, an apparatus, and a storage medium for recognizing an obstacle. The method includes acquiring, by a device, point cloud data obtained by scanning surroundings of a target vehicle by a sensor in the target vehicle. The device includes a memory storing instructions and a processor in communication with the memory. The method further includes converting, by the device, the point cloud data into a first image used for showing the surroundings; and recognizing, by the device, from the first image, a first object in the surroundings as an obstacle through a first neural network model.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: April 4, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Ren Chen, Yinjian Sun
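    Illustrative sketch of the conversion step: the point cloud is rasterised into a bird's-eye-view height image that a first neural network model could take as input. The grid extent and resolution are assumptions.
```python
import numpy as np

def point_cloud_to_bev(points, x_range=(-20, 20), y_range=(0, 40), cell=0.5):
    """Rasterise (x, y, z) points into a bird's-eye-view height image that a CNN
    can consume (illustrative conversion; extent and resolution assumed)."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    image = np.zeros((h, w), dtype=np.float32)
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            col = int((x - x_range[0]) / cell)
            row = int((y - y_range[0]) / cell)
            image[row, col] = max(image[row, col], z)   # keep tallest return per cell
    return image

cloud = np.random.uniform([-20, 0, 0], [20, 40, 2], size=(1000, 3))
print(point_cloud_to_bev(cloud).shape)   # (80, 80), ready for the recognition model
```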
  • Patent number: 11610407
    Abstract: A computer implemented method for determining an entry of an occupancy map of a vicinity of a vehicle comprises the following steps carried out by computer hardware components: acquiring first sensor data of a first sensor of the vicinity of the vehicle; acquiring second sensor data of a second sensor of the vicinity of the vehicle; determining a first sensor data portion of the first sensor data which corresponds to a potential object in the vicinity of the vehicle; based on the first sensor data portion, determining a second sensor data portion of the second sensor data which corresponds to a location of the potential object; and determining an entry of the occupancy map based on the first sensor data portion and the second sensor data portion.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: March 21, 2023
    Assignee: Aptiv Technologies Limited
    Inventors: Mateusz Komorkiewicz, Daniel Dworak, Mateusz Wojcik, Filip Ciepiela
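    Illustrative sketch of deriving one occupancy entry from the two portions: a camera detection confidence and the radar returns at the corresponding location are combined into a single value. The combination rule is an assumption.
```python
def occupancy_entry(camera_confidence, radar_returns_at_location, min_returns=3):
    """Combine a camera detection confidence with the number of radar returns that
    fall at the detected object's location into one occupancy probability
    (illustrative rule, not the patented combination)."""
    radar_support = min(1.0, radar_returns_at_location / min_returns)
    return 0.5 * camera_confidence + 0.5 * radar_support

# camera sees a likely object, radar reports 5 returns in the same cell
print(occupancy_entry(camera_confidence=0.8, radar_returns_at_location=5))  # 0.9
```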
  • Patent number: 11610346
    Abstract: A system and method for reconstructing an image of a target object using an iterative reconstruction technique can include a machine learning model as a regularization filter. An image data set for a target object generated using an imaging modality can be received, and an image of the target object can be reconstructed using an iterative reconstruction technique that includes a machine learning model as a regularization filter used in part to reconstruct the image of the target object. The machine learning model can be trained prior to receiving the image data using learning datasets that have image data associated with the target object, where the learning datasets provide objective data for training the machine learning model, and the machine learning model can be included in the iterative reconstruction technique to introduce the object features into the image of the target object being reconstructed.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: March 21, 2023
    Assignee: nView medical Inc.
    Inventors: Cristian Atria, Nisha Ramesh, Dimitri Yatsenko
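    Illustrative sketch of the iteration structure: each pass applies a data-fidelity update followed by a regularization filter, with a simple neighbour average standing in for the trained machine learning model.
```python
import numpy as np

def smoothing_filter(image):
    """Stand-in for the trained regularization model: a plain neighbour average."""
    padded = np.pad(image, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] + image) / 5.0

def reconstruct(measurements, forward, adjoint, iterations=20, step=0.1):
    """Iterative reconstruction: data-fidelity gradient step, then a regularization
    filter (here the stand-in above) on every iteration."""
    x = np.zeros_like(adjoint(measurements))
    for _ in range(iterations):
        x = x - step * adjoint(forward(x) - measurements)   # fidelity update
        x = smoothing_filter(x)                             # regularization filter
    return x

# toy "imaging modality": forward and adjoint operators are both the identity
truth = np.zeros((32, 32)); truth[12:20, 12:20] = 1.0
noisy = truth + 0.1 * np.random.randn(32, 32)
recon = reconstruct(noisy, forward=lambda v: v, adjoint=lambda v: v)
print("noisy error:", round(float(np.abs(noisy - truth).mean()), 3),
      "reconstructed error:", round(float(np.abs(recon - truth).mean()), 3))
```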
  • Patent number: 11610388
    Abstract: The present application discloses a method and an apparatus for detecting wearing of a safety helmet, a device and a storage medium. The method for detecting wearing of a safety helmet includes: acquiring a first image collected by a camera device, where the first image includes at least one human body image; determining the at least one human body image and at least one head image in the first image; determining a human body image corresponding to each head image in the at least one human body image according to an area where the at least one human body image is located and an area where the at least one head image is located; and processing the human body image corresponding to the at least one head image according to a type of the at least one head image.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: March 21, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Mingyuan Mao, Yuan Feng, Ying Xin, Pengcheng Yuan, Bin Zhang, Shufei Lin, Xiaodi Wang, Shumin Han, Yingbo Xu, Jingwei Liu, Shilei Wen, Hongwu Zhang, Errui Ding
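    Illustrative sketch of the association step: each head image is assigned to the body image whose area overlaps it most, using (x1, y1, x2, y2) boxes. The matching rule is an assumption.
```python
def overlap_area(a, b):
    """Intersection area of two (x1, y1, x2, y2) boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def match_heads_to_bodies(head_boxes, body_boxes):
    """For each head box, pick the body box that overlaps it most (illustrative rule)."""
    matches = {}
    for i, head in enumerate(head_boxes):
        areas = [overlap_area(head, body) for body in body_boxes]
        matches[i] = max(range(len(body_boxes)), key=lambda j: areas[j]) if any(areas) else None
    return matches

heads = [(40, 10, 60, 30), (120, 15, 140, 35)]
bodies = [(30, 10, 75, 120), (110, 12, 155, 125)]
print(match_heads_to_bodies(heads, bodies))   # {0: 0, 1: 1}
```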
  • Patent number: 11602132
    Abstract: A system configured to receive video and/or images from an image capture device over a livestock path, generate feature maps from an image of the video by applying at least a first convolutional neural network, slide a window across the feature maps to obtain a plurality of anchor shapes, determine whether each anchor shape contains an object to generate a plurality of regions of interest, each of the plurality of regions of interest being a non-rectangular, polygonal shape, extract feature maps from each region of interest, classify objects in each region of interest, in parallel with classification predict segmentation masks on at least a subset of the regions of interest in a pixel-to-pixel manner, identify individual animals within the objects based on the classifications and the segmentation masks, count the individual animals based on the identification, and provide the count to a digital device for display, processing, and/or reporting.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: March 14, 2023
    Assignee: Sixgill, LLC
    Inventors: Logan Spears, Carlos Anchia, Corey Staten, Wei Xu
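    Illustrative sketch of the identification-and-counting stage only: classified detections with heavily overlapping segmentation masks are collapsed to one individual, and the survivors are counted per class. The threshold and deduplication rule are assumptions.
```python
import numpy as np

def count_animals(detections, overlap_threshold=0.5):
    """detections: list of (class_name, score, boolean_mask). Keep the highest-score
    detection when two masks overlap heavily, then count per class (illustrative
    stand-in for the identification and counting steps)."""
    kept = []
    for cls, score, mask in sorted(detections, key=lambda d: d[1], reverse=True):
        duplicate = any(
            (mask & m).sum() / max(1, (mask | m).sum()) > overlap_threshold
            for _, _, m in kept
        )
        if not duplicate:
            kept.append((cls, score, mask))
    counts = {}
    for cls, _, _ in kept:
        counts[cls] = counts.get(cls, 0) + 1
    return counts

canvas = np.zeros((50, 50), dtype=bool)
pig_a, pig_b = canvas.copy(), canvas.copy()
pig_a[5:20, 5:20] = True
pig_b[6:21, 6:21] = True          # near-duplicate of pig_a
print(count_animals([("pig", 0.9, pig_a), ("pig", 0.8, pig_b)]))   # {'pig': 1}
```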
  • Patent number: 11601775
    Abstract: A method is provided for generating a personalized Head Related Transfer Function (HRTF). The method can include capturing an image of an ear using a portable device, auto-scaling the captured image to determine physical geometries of the ear and obtaining a personalized HRTF based on the determined physical geometries of the ear. In addition, a system and a method in association with the system are also provided for customizing audio experience. Customization of audio experience can be based on derivation of at least one customized audio response characteristic which can be applied to an audio device used by a person. Finally, methods and systems are provided for rendering audio over headphones with head tracking enabled by, for example, exploiting efficiencies in creating databases and filters for use in filtering 3D audio sources for more realistic audio rendering and also allowing greater head movement to enhance the spatial audio perception.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: March 7, 2023
    Assignee: CREATIVE TECHNOLOGY LTD
    Inventors: Teck Chee Lee, Desmond Hii
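    Illustrative sketch of the personalization steps named first in the abstract: a pixel measurement is auto-scaled to physical units, and the nearest stored HRTF is selected from the determined geometry. The reference-object scaling cue and the nearest-neighbour lookup are assumptions, not the patented derivation.
```python
def ear_geometry_mm(pixel_length, reference_pixel_length, reference_mm):
    """Auto-scale a pixel measurement to millimetres using an object of known size
    visible in the same image (the scaling cue is an assumption)."""
    return pixel_length * (reference_mm / reference_pixel_length)

def personalized_hrtf(ear_length_mm, hrtf_table):
    """Pick the stored HRTF whose ear length is closest to the measured one
    (a nearest-neighbour stand-in)."""
    return min(hrtf_table, key=lambda entry: abs(entry["ear_length_mm"] - ear_length_mm))["hrtf_id"]

table = [{"ear_length_mm": 55.0, "hrtf_id": "small"},
         {"ear_length_mm": 62.0, "hrtf_id": "medium"},
         {"ear_length_mm": 68.0, "hrtf_id": "large"}]
length = ear_geometry_mm(pixel_length=410, reference_pixel_length=200, reference_mm=30.0)
print(round(length, 1), personalized_hrtf(length, table))   # 61.5 medium
```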