Patents Examined by Idowu O Osifade
  • Patent number: 11737670
    Abstract: Methods and apparatus for predicting performance of an individual on a task. The method comprises receiving brain imaging data for the individual, wherein the brain imaging data comprises structural brain data; determining values for at least one characteristic of the structural brain data within regions of interest defined for a population of individuals having different performance levels; and predicting, based on the determined values, a performance potential of the individual.
    Type: Grant
    Filed: July 21, 2022
    Date of Patent: August 29, 2023
    Assignee: Voxel AI, Inc.
    Inventors: Benjamin J. A. Gallacher, Douglas J. Cook
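    Illustrative sketch: the region-based prediction described in the abstract could look roughly like the following, assuming the characteristic is the mean of a structural metric within each region of interest and that a simple linear model has been fitted on the reference population (both assumptions, not details from the patent):
      import numpy as np

      def predict_performance(structural_map, roi_masks, weights, bias):
          # structural_map: 3D array of a structural metric (e.g. cortical
          #                 thickness) registered to a common template space.
          # roi_masks:      list of boolean 3D arrays, one per region of interest
          #                 defined from a population with known performance levels.
          # weights, bias:  parameters of a linear model fitted on that population.
          roi_values = np.array([structural_map[mask].mean() for mask in roi_masks])
          # One characteristic value per ROI, combined into a performance score.
          return float(roi_values @ weights + bias)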
  • Patent number: 11734924
    Abstract: Described is a system for onboard, real-time activity detection and classification. During operation, the system detects one or more objects in a scene using a mobile platform and tracks each of the objects as the objects move in the scene to generate tracks for each object. The tracks are transformed using a fuzzy-logic based mapping to semantics that define group activities of the one or more objects in the scene. Finally, a state machine is used to determine whether the defined group activities are normal or abnormal phases of a predetermined group operation.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: August 22, 2023
    Assignee: HRL LABORATORIES, LLC
    Inventors: Leon Nguyen, Deepak Khosla
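    Illustrative sketch: a toy version of the track-to-semantics mapping and the normal/abnormal state machine; the compactness membership function, the "gathering"/"dispersing" labels, and the allowed transitions are invented for illustration, not taken from the patent:
      import math

      def group_semantics(positions, close_thresh=10.0):
          # positions: list of (x, y) track positions for the current frame.
          if len(positions) < 2:
              return "idle"
          dists = [math.dist(a, b) for i, a in enumerate(positions) for b in positions[i + 1:]]
          mean_d = sum(dists) / len(dists)
          # Fuzzy membership in "compact group", clamped to [0, 1].
          compact = max(0.0, min(1.0, 1.0 - mean_d / (2 * close_thresh)))
          return "gathering" if compact > 0.5 else "dispersing"

      # State machine over successive semantic labels: any transition not
      # listed here is flagged as an abnormal phase of the group operation.
      ALLOWED = {("idle", "gathering"), ("gathering", "dispersing"), ("dispersing", "idle")}

      def classify_phase(prev_label, label):
          return "normal" if prev_label == label or (prev_label, label) in ALLOWED else "abnormal"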
  • Patent number: 11727728
    Abstract: A method for monitoring a person performing a physical exercise based on a sequence of image frames showing an exercise activity of the person.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: August 15, 2023
    Assignee: KAIA HEALTH SOFTWARE GMBH
    Inventors: Konstantin Mehl, Maximilian Strobel
  • Patent number: 11727563
    Abstract: Systems, devices, and methods are described for providing patient anatomy models with indications of model accuracy included with the model. Accuracy is determined, for example, by analyzing gradients at tissue boundaries or by analyzing tissue surface curvature in a three-dimensional anatomy model. The determined accuracy is graphically provided to an operator along with the patient model. The overlaid accuracy indications facilitate the operator's understanding of the model, for example by showing areas of the model that may deviate from the modeled patient's actual anatomy.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: August 15, 2023
    Assignee: Smith & Nephew, Inc.
    Inventor: Yangqiu Hu
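    Illustrative sketch: one way to turn intensity gradients at tissue boundaries into an accuracy overlay, assuming a voxel volume and a boolean boundary mask; the min-max normalization is an illustrative choice, not the patent's method:
      import numpy as np

      def boundary_confidence(volume, boundary_mask):
          # volume:        3D array of image intensities.
          # boundary_mask: boolean 3D array marking segmented tissue-boundary voxels.
          gx, gy, gz = np.gradient(volume.astype(float))
          grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
          conf = np.zeros_like(grad_mag)
          g = grad_mag[boundary_mask]
          # Sharper boundaries score near 1; these values can be colour-mapped
          # onto the anatomy model as a graphical accuracy indication.
          conf[boundary_mask] = (g - g.min()) / (g.max() - g.min() + 1e-9)
          return conf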
  • Patent number: 11727562
    Abstract: A computer-implemented image analysis method and system. The method comprises: quantifying one or more features segmented and identified from a medical image of a subject; extracting clinically relevant features from non-image data pertaining to the subject; assessing the features segmented from the medical image and the features extracted from the non-image data with a trained machine learning model; and outputting one or more results of the assessing of the features.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: August 15, 2023
    Assignee: Curvebeam AI Limited
    Inventor: Yu Peng
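    Illustrative sketch: combining quantified image features with clinically relevant non-image features before assessment; the dict-based feature layout and the scikit-learn-style predict() interface are assumptions:
      import numpy as np

      def assess_subject(image_features, clinical_features, model):
          # image_features:    e.g. {"bone_volume_mm3": 1234.0, "cortical_thickness_mm": 1.8}
          # clinical_features: e.g. {"age": 67, "weight_kg": 72}
          # model:             any fitted estimator exposing predict().
          x = np.array(list(image_features.values()) + list(clinical_features.values()),
                       dtype=float).reshape(1, -1)
          return model.predict(x)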
  • Patent number: 11722840
    Abstract: Systems and methods for determining user equipment (UE) locations within a wireless network using reference signals of the wireless network are described. The disclosed systems and methods utilize a plurality of in-phase and quadrature (I/Q) samples generated from signals provided by receive channels associated with two or more antennas of the wireless system. Based on received reference signal parameters, the reference signal within the signals from each receive channel is identified. Based on the identified reference signal from each receive channel, an angle of arrival between a baseline of the two or more antennas and incident energy from the UE to the two or more antennas is determined. That angle of arrival is then used to calculate the location of the UE. The angle of arrival may be a horizontal angle of arrival and/or a vertical angle of arrival.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: August 8, 2023
    Assignee: QUALCOMM Technologies, Inc.
    Inventors: Truman Prevatt, Michael John Buynak
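    Illustrative sketch: the phase-difference form of the angle-of-arrival computation for a two-antenna baseline; isolating the reference signal by plain correlation and the half-wavelength spacing are simplifications, not details from the patent:
      import numpy as np

      def angle_of_arrival(iq_a, iq_b, ref_signal, wavelength, baseline):
          # iq_a, iq_b:  complex I/Q sample vectors from two receive channels.
          # ref_signal:  known reference-signal samples (complex).
          # wavelength:  carrier wavelength in metres.
          # baseline:    antenna spacing in metres (ideally <= wavelength / 2).
          h_a = np.vdot(ref_signal, iq_a)      # correlate channel A against the reference
          h_b = np.vdot(ref_signal, iq_b)      # correlate channel B against the reference
          dphi = np.angle(h_b * np.conj(h_a))  # inter-channel phase difference
          # dphi = 2*pi*baseline*sin(theta)/wavelength, so solve for theta.
          return float(np.arcsin(np.clip(dphi * wavelength / (2 * np.pi * baseline), -1.0, 1.0)))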
  • Patent number: 11715332
    Abstract: A system for eye-tracking according to an embodiment of the present invention includes a data collection unit that acquires face information of a user and location information of the user from an image captured by a photographing device installed at each of one or more points set within a three-dimensional space, and an eye tracking unit that estimates a location of an area gazed at by the user in the three-dimensional space from the face information and the location information and maps spatial coordinates corresponding to the location of the area to a three-dimensional map corresponding to the three-dimensional space.
    Type: Grant
    Filed: July 21, 2022
    Date of Patent: August 1, 2023
    Assignee: VisualCamp Co., Ltd.
    Inventors: Yun Chan Suk, Tae Hee Lee, Seong Dong Choi
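    Illustrative sketch: once an eye position and gaze direction are estimated, mapping the gazed-at area into the three-dimensional map can be approximated by a ray-plane intersection; treating map surfaces as planes is a simplification introduced here, not the patent's method:
      import numpy as np

      def gaze_to_map_point(eye_pos, gaze_dir, plane_point, plane_normal):
          # eye_pos, gaze_dir:         3D eye position and unit gaze direction.
          # plane_point, plane_normal: a surface (e.g. wall or shelf) from the 3D map.
          eye_pos, gaze_dir = np.asarray(eye_pos, float), np.asarray(gaze_dir, float)
          plane_point, plane_normal = np.asarray(plane_point, float), np.asarray(plane_normal, float)
          denom = gaze_dir @ plane_normal
          if abs(denom) < 1e-9:
              return None          # gaze is parallel to the surface
          t = ((plane_point - eye_pos) @ plane_normal) / denom
          return eye_pos + t * gaze_dir if t > 0 else None   # spatial coordinates on the map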
  • Patent number: 11715216
    Abstract: A processor-implemented object tracking method includes: setting a suppressed region in a template image based on a shape of a target box of the template image; refining a template feature map of the template image by suppressing an influence of feature data corresponding to the suppressed region in the template feature map; and tracking an object by determining a bounding box corresponding to the target box in a search image based on the refined template feature map.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: August 1, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ju Hwan Song, Hyunjeong Lee, Changbeom Park, Changyong Son, Byung In Yoo
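    Illustrative sketch: suppressing template features outside a region derived from the target box; zeroing the suppressed features and the fixed margin are illustrative choices rather than the patent's exact refinement:
      import numpy as np

      def refine_template_features(feat, target_box, margin=0.25):
          # feat:       template feature map of shape (C, H, W).
          # target_box: (x0, y0, x1, y1) in feature-map coordinates.
          _, h, w = feat.shape
          x0, y0, x1, y1 = target_box
          dx, dy = margin * (x1 - x0), margin * (y1 - y0)
          keep = np.zeros((h, w), dtype=bool)
          keep[max(0, int(y0 - dy)):min(h, int(y1 + dy) + 1),
               max(0, int(x0 - dx)):min(w, int(x1 + dx) + 1)] = True
          refined = feat.copy()
          refined[:, ~keep] = 0.0   # damp the influence of the suppressed region
          return refined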
  • Patent number: 11704917
    Abstract: In an embodiment, a method for estimating a composition of food includes: receiving a first three-dimensional (3D) image; identifying food in the first 3D image; determining a volume of the identified food based on the first 3D image; and estimating a composition of the identified food using a millimeter-wave radar.
    Type: Grant
    Filed: July 6, 2021
    Date of Patent: July 18, 2023
    Assignee: Infineon Technologies AG
    Inventors: Cynthia Kuo, Maher Matta, Avik Santra
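    Illustrative sketch: the volume step only, computed from a depth image of the identified food; the per-pixel column-sum approximation is an assumption, and the millimetre-wave composition estimate is omitted:
      import numpy as np

      def food_volume_from_depth(depth, food_mask, plate_depth, pixel_area):
          # depth:       2D array of per-pixel camera distances in metres.
          # food_mask:   boolean mask of pixels identified as food.
          # plate_depth: depth of the empty plate/table surface beneath the food.
          # pixel_area:  ground-plane area covered by one pixel, in m^2.
          height = np.clip(plate_depth - depth, 0.0, None)     # food height per pixel
          return float(height[food_mask].sum() * pixel_area)   # approximate volume in m^3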
  • Patent number: 11699219
    Abstract: A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces are detected having threshold skin tone within a scene. Next, based on the detected one or more faces, the scene is segmented into one or more face regions and one or more non-face regions. A model of the one or more faces is constructed based on a depth map and a texture map, the depth map including spatial data of the one or more faces and the texture map including surface characteristics of the one or more faces. One or more images of the scene are captured based on the model. Further, in response to the capture, the one or more face regions are processed to generate a final image.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: July 11, 2023
    Assignee: DUELIGHT LLC
    Inventors: William Guie Rivard, Brian J. Kindle, Adam Barry Feder
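    Illustrative sketch: a face-weighted metering rule in the spirit of the abstract; the target luminance and weighting constants are invented for illustration, and the depth/texture face model is not reproduced:
      import numpy as np

      def face_weighted_exposure(luma, face_mask, target=0.45, face_weight=0.8):
          # luma:      2D luminance image normalized to [0, 1].
          # face_mask: boolean mask of the detected face regions.
          face_mean = luma[face_mask].mean() if face_mask.any() else luma.mean()
          scene_mean = luma[~face_mask].mean() if (~face_mask).any() else face_mean
          metered = face_weight * face_mean + (1 - face_weight) * scene_mean
          return target / max(metered, 1e-6)   # multiplicative exposure correction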
  • Patent number: 11699246
    Abstract: Systems and methods for validating drive pose refinement are provided. In some aspects, a method includes receiving image data that depicts an area of interest, and receiving a plurality of virtual points generated using the image data. The method also includes selecting at least one drive in the area of interest that captures the plurality of virtual points, and generating a refined pose track for each of the at least one drive by applying a drive alignment process to drive data from the at least one drive using the plurality of virtual points. The method further includes comparing the refined pose track to a control pose track generated using control repoints, and generating, based on the comparison, a report that validates the refined pose track.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: July 11, 2023
    Assignee: HERE GLOBAL B.V.
    Inventors: Anish Mittal, David Johnston Lawlor
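    Illustrative sketch: comparing a refined pose track against a control pose track and emitting a small validation report; the error metrics and tolerance are illustrative choices, not the patent's criteria:
      import numpy as np

      def validate_pose_track(refined_xy, control_xy, tolerance_m=0.2):
          # refined_xy, control_xy: (N, 2) arrays of matched pose positions in metres.
          err = np.linalg.norm(refined_xy - control_xy, axis=1)
          p95 = float(np.percentile(err, 95))
          return {
              "mean_error_m": float(err.mean()),
              "p95_error_m": p95,
              "valid": p95 <= tolerance_m,
          }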
  • Patent number: 11688181
    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: June 27, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
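    Illustrative sketch: the duplicate-suppression intent of the fusion stage, written as a greedy IoU merge over per-sensor detections in a common frame; the patent's learned fusion DNN is replaced here by this hand-coded rule:
      def fuse_detections(per_sensor_boxes, iou_thresh=0.5):
          # per_sensor_boxes: one list per sensor of (x0, y0, x1, y1, score) tuples,
          #                   already transformed into a shared coordinate frame.
          def iou(a, b):
              ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
              ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
              inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
              area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
              return inter / (area(a) + area(b) - inter + 1e-9)

          merged = []
          for boxes in per_sensor_boxes:
              for box in sorted(boxes, key=lambda b: -b[4]):
                  # Keep the box only if no already-fused box overlaps it strongly;
                  # otherwise it is treated as the same object seen by another sensor.
                  if all(iou(box, kept) < iou_thresh for kept in merged):
                      merged.append(box)
          return merged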
  • Patent number: 11682209
    Abstract: A computing system identifies broadcast video for a plurality of games in a first league. The broadcast video includes a plurality of video frames. The computing system generates tracking data for each game from the broadcast video of a corresponding game. The computing system enriches the tracking data. The enriching includes merging play-by-play data for the game with the tracking data of the corresponding game. The computing system generates padded tracking data based on the tracking data. The computing system projects player performance in a second league for each player based on the tracking data and the padded tracking data.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: June 20, 2023
    Assignee: STATS LLC
    Inventors: Andrew Patton, Nathan Walker, Matthew Scott, Alex Ottenwess
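    Illustrative sketch: the enrichment step, merging play-by-play events into tracking samples by nearest game-clock time; the field names and the nearest-event rule are assumptions:
      def merge_play_by_play(tracking_rows, pbp_events, max_gap_s=2.0):
          # tracking_rows: list of dicts with a "game_clock" key (seconds).
          # pbp_events:    list of dicts with "game_clock" plus descriptive fields.
          enriched = []
          for row in tracking_rows:
              best = min(pbp_events, default=None,
                         key=lambda e: abs(e["game_clock"] - row["game_clock"]))
              close = best is not None and abs(best["game_clock"] - row["game_clock"]) <= max_gap_s
              enriched.append({**row, "event": best if close else None})
          return enriched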
  • Patent number: 11682103
    Abstract: Systems, computer-implemented methods, apparatus and/or computer program products are provided that facilitate improving the accuracy of global positioning system (GPS) coordinates of indoor photos. The disclosed subject matter further provides systems, computer-implemented methods, apparatus and/or computer program products that facilitate generating exterior photos of structures based on GPS coordinates of indoor photos.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: June 20, 2023
    Assignee: Matterport, Inc.
    Inventors: Gunnar Hovden, Scott Adams
  • Patent number: 11682134
    Abstract: A detection device including: a detector that detects an object from one viewpoint; a reliability calculator that calculates reliability information on the object at the one viewpoint by using a detection result of the detector; and an information calculator that calculates shape information on the object at the one viewpoint by using the detection result of the detector and the reliability information, and calculates texture information on the object at the one viewpoint by using the detection result. The information calculator generates model information on the object at the one viewpoint based on the shape information and the texture information.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: June 20, 2023
    Assignee: NIKON CORPORATION
    Inventors: Takeaki Sugimura, Yoshihiro Nakagawa
  • Patent number: 11676403
    Abstract: In some examples, one or more processors may receive at least one first visible light image and a first thermal image. Further, the processor(s) may generate, from the at least one first visible light image, an edge image that identifies edge regions in the at least one first visible light image. At least one of a lane marker or road edge region may be determined based at least in part on information from the edge image. In addition, one or more first regions of interest in the first thermal image may be determined based on at least one of the lane marker or the road edge region. Furthermore, a gain of a thermal sensor may be adjusted based on the one or more first regions of interest in the first thermal image.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: June 13, 2023
    Assignee: HITACHI ASTEMO, LTD.
    Inventors: Subrata Kumar Kundu, Naveen Kumar Bangalore Ramaiah
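    Illustrative sketch: adjusting thermal-sensor gain from regions of interest chosen around detected lane markers or road edges; the proportional update rule and its constants are invented for illustration:
      import numpy as np

      def adjust_thermal_gain(thermal, rois, current_gain, target_mean=0.5, max_step=0.1):
          # thermal: 2D thermal image normalized to [0, 1].
          # rois:    list of (x0, y0, x1, y1) boxes placed from the visible-light edge image.
          samples = np.concatenate([thermal[y0:y1, x0:x1].ravel() for x0, y0, x1, y1 in rois])
          error = target_mean - samples.mean()
          # Nudge the gain toward the target brightness in the road regions.
          return current_gain * (1.0 + float(np.clip(error, -max_step, max_step)))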
  • Patent number: 11675986
    Abstract: An optical scanner captures a plurality of images from a plurality of image-capture devices. In response to an activation signal, an evaluation phase is executed, and in response to the evaluation phase, an acquisition phase is executed. In the evaluation phase, a first set of images is captured and processed to produce a virtual frame comprising a plurality of regions, with each region containing a reduced-data image frame that is based on a corresponding one of the plurality of images. Also in the evaluation phase, attributes of each of the plurality of regions of the virtual frame are assessed according to first predefined criteria, and operational parameters for the acquisition phase are set based on a result of the assessment. In the acquisition phase, a second set of at least one image is captured via at least one of the plurality of image-capture devices according to the set of operational parameters.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: June 13, 2023
    Assignee: Datalogic IP Tech, S.r.l.
    Inventors: Federico Canini, Davide Bruni, Luca Perugini
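    Illustrative sketch: building a virtual frame of reduced-data regions during the evaluation phase and selecting devices for the acquisition phase; the nearest-neighbour downscaling and the contrast criterion are assumptions, not the patent's predefined criteria:
      import numpy as np

      def reduced_region(img, thumb=(64, 64)):
          # Nearest-neighbour downscale of one capture device's grayscale image.
          ys = np.linspace(0, img.shape[0] - 1, thumb[0]).astype(int)
          xs = np.linspace(0, img.shape[1] - 1, thumb[1]).astype(int)
          return img[np.ix_(ys, xs)]

      def evaluation_phase(images, min_contrast=0.15):
          # images: list of 2D grayscale arrays in [0, 1], one per image-capture device.
          regions = [reduced_region(img) for img in images]
          virtual_frame = np.hstack(regions)          # one region per device
          # Assess each region and keep the devices worth using for acquisition.
          selected = [i for i, r in enumerate(regions) if r.std() >= min_contrast]
          return virtual_frame, selected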
  • Patent number: 11663721
    Abstract: A system for analyzing images is provided. The system includes a computing device having at least one processor in communication with at least one memory device. The at least one processor is programmed to receive an image including a plurality of objects, detect the plurality of objects in the image, determine dependencies between each of the plurality of objects, identify the plurality of objects based, at least in part, on the plurality of dependencies, and determine one or more objects of interest from the plurality of identified objects.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: May 30, 2023
    Assignee: SafeTek, LLC
    Inventors: Michael Graves, Joseph Graves
  • Patent number: 11663806
    Abstract: Various methods for utilizing saliency heatmaps are described. The methods include obtaining image data corresponding to an image of a scene, obtaining a saliency heatmap for the image of the scene based on a saliency network, wherein the saliency heatmap indicates a likelihood of saliency for a corresponding portion of the scene, and manipulating the image data based on the saliency heatmap. In embodiments, the saliency heatmap may be produced using a trained machine learning model. The saliency heatmap may be used for various image processing tasks, such as determining which portion(s) of a scene to base an image capture device's autofocus, auto exposure, and/or white balance operations upon. According to some embodiments, one or more bounding boxes may be generated based on the saliency heatmap, e.g., using an optimization operation, which bounding box(es) may be used to assist or enhance the performance of various image processing tasks.
    Type: Grant
    Filed: April 15, 2022
    Date of Patent: May 30, 2023
    Assignee: Apple Inc.
    Inventors: Vignesh Jagadeesh, Yingjun Bai, Guillaume Tartavel, Gregory Guyomarc'h
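    Illustrative sketch: deriving a single region-of-interest box from a saliency heatmap to seed autofocus or auto exposure; the peak-relative threshold stands in for the patent's optimization operation and is an assumption:
      import numpy as np

      def saliency_bounding_box(heatmap, rel_threshold=0.6):
          # heatmap: 2D array of saliency scores (e.g. from a trained saliency network).
          mask = heatmap >= rel_threshold * heatmap.max()
          ys, xs = np.nonzero(mask)
          # Tight box around the most salient portion of the scene: (x0, y0, x1, y1).
          return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())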
  • Patent number: 11657522
    Abstract: Systems, methods, and other embodiments described herein relate to determining depths of a scene from a monocular image. In one embodiment, a method includes generating depth features from depth data using a sparse auxiliary network (SAN) by i) sparsifying the depth data, ii) applying sparse residual blocks of the SAN to the depth data, and iii) densifying the depth features. The method includes generating a depth map from the depth features and a monocular image that corresponds with the depth data according to a depth model that includes the SAN. The method includes providing the depth map as depth estimates of objects represented in the monocular image.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: May 23, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Adrien David Gaidon
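    Illustrative sketch: the sparsify/densify bracketing around the depth branch; the random subsampling and neighbour-averaging fill below are crude stand-ins for the SAN's learned sparse residual blocks, and the monocular image is not used here:
      import numpy as np

      def sparsify(depth, keep_fraction=0.05, seed=0):
          # Keep a random subset of valid depth points to mimic sparse input.
          rng = np.random.default_rng(seed)
          mask = (depth > 0) & (rng.random(depth.shape) < keep_fraction)
          return np.where(mask, depth, 0.0), mask

      def densify(sparse_depth, known, iters=50):
          # Fill missing depth by repeatedly averaging known 3x3 neighbours.
          filled, known = sparse_depth.copy(), known.copy()
          h, w = filled.shape
          for _ in range(iters):
              padded = np.pad(filled, 1, mode="edge")
              counts = np.pad(known.astype(float), 1, mode="edge")
              neigh_sum = sum(padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
              neigh_cnt = sum(counts[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
              update = ~known & (neigh_cnt > 0)
              filled[update] = (neigh_sum / np.maximum(neigh_cnt, 1))[update]
              known |= update
          return filled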