Patents Examined by Nirav G Patel
  • Patent number: 11055849
    Abstract: A method (400) including: determining (702) a registration function [705, Niirf(T1)] for the particular brain in a coordinate space, determining (706) a registered atlas [708, Ard(T1)] from the registration function and an HCP-MMP1 Atlas (102) containing a standard parcellation scheme, performing (310, 619) diffusion tractography to determine a set [621, DTIp(DTI)] of brain tractography images of the particular brain, for a voxel in a particular parcellation in the registered atlas, determining (1105, 1120) voxel level tractography vectors [1123, Vje, Vjn] showing connectivity of the voxel with voxels in other parcellations, classifying (1124) the voxel based on the probability of the voxel being part of the particular parcellation, and repeating (413) the determining of the voxel level tractography vectors and the classifying of the voxels for parcellations of the HCP-MMP1 Atlas to form a personalised brain atlas [1131, PBs Atlas] containing an adjusted parcellation scheme reflecting the particular brain (Bb).
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: July 6, 2021
    Assignee: Omniscient Neurotechnology Pty Limited
    Inventors: Michael Edward Sughrue, Stephane Philippe Doyen, Charles Teo
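The voxel classification step in this entry can be pictured as a connectivity-fingerprint comparison. The following is a minimal illustrative sketch, not the patented method: voxel-level connectivity vectors (streamline counts to each parcellation) are compared against per-parcellation average profiles, and each voxel is reassigned to the most similar parcellation. The array names and the nearest-centroid rule are assumptions.

```python
# Illustrative sketch only: nearest-centroid reassignment of voxels based on
# tractography-derived connectivity "fingerprints". Not the patented method.
import numpy as np

def classify_voxels(connectivity, initial_labels, n_parcels):
    """connectivity: (n_voxels, n_parcels) streamline counts per voxel.
    initial_labels: (n_voxels,) parcel index from the registered atlas."""
    # Normalise each voxel's connectivity vector into a profile.
    profiles = connectivity / np.maximum(connectivity.sum(axis=1, keepdims=True), 1)
    # Average profile ("centroid") of every parcel under the initial labelling.
    centroids = np.zeros((n_parcels, n_parcels))
    for p in range(n_parcels):
        members = profiles[initial_labels == p]
        if len(members):
            centroids[p] = members.mean(axis=0)
    # Cosine similarity of each voxel profile to each parcel centroid,
    # then reassign each voxel to its most similar parcel.
    sim = profiles @ centroids.T
    sim /= (np.linalg.norm(profiles, axis=1, keepdims=True)
            * np.linalg.norm(centroids, axis=1) + 1e-9)
    return sim.argmax(axis=1)

new_labels = classify_voxels(np.random.poisson(2.0, (1000, 180)),
                             np.random.randint(0, 180, 1000), 180)
```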
  • Patent number: 11048978
    Abstract: Methods and systems for meta-learning are described for automating learning of child tasks with a single neural network. The order in which tasks are learned by the neural network can affect performance of the network, and the meta-learning approach can use a task-level curriculum for multi-task training. The task-level curriculum can be learned by monitoring a trajectory of loss functions during training. The meta-learning approach can learn to adapt task loss balancing weights in the course of training to get improved performance on multiple tasks on real world datasets. Advantageously, learning to dynamically balance weights among different task losses can lead to superior performance over the use of static weights determined by expensive random searches or heuristics. Embodiments of the meta-learning approach can be used for computer vision tasks or natural language processing tasks, and the trained neural networks can be used by augmented or virtual reality devices.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: June 29, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Srivignesh Rajendran, Chen-Yu Lee
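As a rough illustration of dynamically balancing task loss weights from the loss trajectory (the patent's actual curriculum and weighting scheme may differ substantially), the sketch below raises the weight of tasks whose loss has stopped decreasing. The ratio-based score, the softmax normalisation, and all names are assumptions.

```python
# Hedged sketch: give more weight to tasks whose recent loss trajectory shows
# little improvement. Not the Magic Leap meta-learning implementation.
import numpy as np

def update_task_weights(loss_history, temperature=1.0):
    """loss_history: dict task -> list of recent scalar losses (oldest first)."""
    rates = {}
    for task, losses in loss_history.items():
        # A larger ratio means the task is improving more slowly (stalling).
        rates[task] = losses[-1] / losses[0] if len(losses) >= 2 and losses[0] > 0 else 1.0
    tasks = sorted(rates)
    scores = np.array([rates[t] for t in tasks]) / temperature
    weights = np.exp(scores - scores.max())          # softmax over stall scores
    weights = weights / weights.sum() * len(tasks)   # keep the mean weight at 1
    return dict(zip(tasks, weights))

weights = update_task_weights({"depth": [1.0, 0.6], "segmentation": [1.0, 0.95]})
task_losses = {"depth": 0.6, "segmentation": 0.95}
total_loss = sum(weights[t] * task_losses[t] for t in task_losses)
```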
  • Patent number: 11048942
    Abstract: A method and apparatus for detecting a garbage dumping action in real time on a video surveillance system are provided. A change region, which is a motion region, is detected from an input image; joint information including joint coordinates corresponding to a region in which joints exist is generated; and an object held by a person is detected from the image using the change region and the joint information. Then, an action of dumping the object is detected based on the distance between the object and the joint coordinates.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: June 29, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kimin Yun, Yongjin Kwon, Jin Young Moon, Sungchan Oh, Jongyoul Park
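A toy version of the dumping test described above, assuming per-frame object centroids and hand-joint coordinates are already available from a detector and pose estimator; the thresholds and persistence rule are invented for illustration and are not taken from the patent.

```python
# Rough illustration (assumed logic, not the ETRI implementation): flag a
# dumping event when an object that was near a hand joint moves away from all
# hand joints and stays away for several consecutive frames.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def detect_dumping(frames, hold_thresh=30.0, drop_thresh=80.0, persist=5):
    """frames: list of (object_xy, [hand_joint_xy, ...]) per video frame."""
    was_held, away_count = False, 0
    for obj, hands in frames:
        d = min(distance(obj, h) for h in hands)
        if d < hold_thresh:
            was_held, away_count = True, 0       # object is (re)grasped
        elif was_held and d > drop_thresh:
            away_count += 1
            if away_count >= persist:
                return True                      # object left the hands and stayed away
    return False
```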
  • Patent number: 11048929
    Abstract: When detecting object areas, it is possible to appropriately evaluate each detection area regardless of the overlapping relationship and positional relationship between detection areas. A human body detection apparatus obtains an image captured by an imaging unit, detects, as detection areas, predetermined object areas from the captured image, and evaluates an evaluation target detection area by comparing, among the detection areas, the evaluation target detection area and other detection areas.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: June 29, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kazunari Iwamoto
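One plausible reading of "evaluating a target detection area by comparing it with other detection areas" is an overlap-based score; the sketch below uses intersection-over-union, which is an assumption rather than the patented evaluation.

```python
# Illustrative only: score one detection box against the others by how much it
# is overlapped. The IoU criterion and threshold-free scoring are assumptions.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def evaluate(target, others):
    overlaps = [iou(target, o) for o in others]
    return 1.0 - max(overlaps, default=0.0)   # less overlapped -> higher score

score = evaluate((10, 10, 50, 80), [(30, 20, 70, 90), (100, 100, 140, 170)])
```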
  • Patent number: 11037024
    Abstract: An aspect of the present disclosure relates to a network in which humans and AI-based systems work in conjunction to perform tasks such as traffic violation detection, infrastructure monitoring, traffic flow management, and crop monitoring from visual data acquired from numerous data acquisition sources. The system includes a network of electronic mobile devices with AI capabilities, connected to a decentralized network, working towards capturing high-quality data for finding events or objects of interest in the real world, retraining AI models, processing the high volumes of data on decentralized or centralized processing units, and verifying the outputs of the AI models. The system also describes several annotation techniques on smartphones for crowdsourced data labeling for AI training.
    Type: Grant
    Filed: November 29, 2019
    Date of Patent: June 15, 2021
    Inventor: Jayant Ratti
  • Patent number: 11037295
    Abstract: A method for training a computer-implemented machine learning model for detecting irregularities in medical images, the method including: identifying at least one predetermined type of body region (14) depicted in a medical image (10), said body region (14) having a depicted irregularity (12); defining a plurality of image segments (20) each including at least part of the depicted body region (14), wherein a resolution of the image segments (20) is maintained or not reduced by more than 20% compared to the medical image (10); and using said image segments (20) to train a machine learning model to detect similar irregularities (12) in other medical images (10). Further, the invention relates to a use and to systems for training a computer-implemented machine learning model for detecting irregularities in medical images.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: June 15, 2021
    Assignee: OXIPIT, UAB
    Inventors: Jogundas Armaitis, Darius Barušauskas, Jonas Bialopetravičius, Gediminas Pekšys, Naglis Ramanauskas
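To illustrate how image segments might be defined without reducing resolution by more than 20%, here is a hedged sketch that tiles a detected body-region bounding box into overlapping full-resolution crops; the tile size, overlap, and helper names are assumptions, not the patented procedure.

```python
# Sketch under stated assumptions: tile a body region into overlapping crops
# taken at native resolution, so no segment is downscaled at all (well within
# the 20% limit mentioned in the abstract).
def make_segments(region, tile=512, overlap=64):
    """region: (x0, y0, x1, y1) body-region bounding box in original pixels."""
    x0, y0, x1, y1 = region
    step = tile - overlap
    # Tile origins, forcing the last tile to end at the region border so the
    # whole body region is covered.
    ys = sorted(set(list(range(y0, max(y1 - tile, y0 + 1), step)) + [max(y1 - tile, y0)]))
    xs = sorted(set(list(range(x0, max(x1 - tile, x0 + 1), step)) + [max(x1 - tile, x0)]))
    return [(x, y, min(x + tile, x1), min(y + tile, y1)) for y in ys for x in xs]

tiles = make_segments((0, 0, 1024, 1400))   # e.g. a chest region in a radiograph
```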
  • Patent number: 11030485
    Abstract: Embodiments are disclosed of a deep learning-enabled generative sensing and feature regeneration framework which integrates low-end sensors/low-quality data with computational intelligence to attain recognition accuracy on par with that attained with high-end sensors/high-quality data, or to optimize a performance measure for a desired task.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: June 8, 2021
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Lina Karam, Tejas Borkar
  • Patent number: 11026635
    Abstract: Comprehensive systems and methods for managing hair loss are provided which enable an individual experiencing hair loss, and/or the person consulting him or her, to manage it and to determine and efficiently plan any appropriate treatment options. Management of hair loss may comprise quantifying hair loss, determining which hair growth stimulation product or treatment to adopt and the best timing for such products and/or treatments, and tracking and managing the progress of the selected hair growth stimulation product or treatment.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: June 8, 2021
    Assignee: Restoration Robotics, Inc.
    Inventors: Gabriele Zingaretti, Miguel G. Canales, James W. McCollum
  • Patent number: 11017221
    Abstract: A classifier receives a document from a multi-document transaction. The classifier analyzes the document to identify one or more embedded dates in the content of the document and context of one or more positions of the one or more embedded dates in the document. The classifier evaluates each of the one or more embedded dates based on the separate context of each of the one or more positions within the document and a relative age of the one or more embedded dates in view of temporal characteristics of multiple categories of documents of a transaction to select a particular category associated with the document from among the multiple categories. The classifier classifies the document within the transaction as a particular logical type identified by the particular category from among multiple logical types.
    Type: Grant
    Filed: July 1, 2018
    Date of Patent: May 25, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew R. Freed, Corville O. Allen
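A highly simplified sketch of date-driven document categorisation: extract embedded dates, compute the relative age of the newest one, and pick the category whose temporal expectations best fit. The date pattern, the category table, and the "tightest window wins" rule are invented for illustration and are not IBM's classifier.

```python
# Illustrative sketch only; the categories and temporal rules are hypothetical.
import re
from datetime import datetime

DATE_RE = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b")

# Hypothetical expectation: maximum age in days of the newest embedded date.
CATEGORY_MAX_AGE_DAYS = {"closing statement": 7, "loan application": 90, "deed": 36500}

def classify_by_dates(text, today=None):
    today = today or datetime.now()
    dates = [datetime(int(y), int(m), int(d)) for m, d, y in DATE_RE.findall(text)]
    if not dates:
        return None
    newest_age = (today - max(dates)).days
    # Choose the tightest category whose temporal window still admits the date.
    candidates = [(limit, cat) for cat, limit in CATEGORY_MAX_AGE_DAYS.items()
                  if newest_age <= limit]
    return min(candidates)[1] if candidates else None

print(classify_by_dates("Signed on 03/01/2021.", today=datetime(2021, 3, 4)))
```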
  • Patent number: 11017262
    Abstract: A hardware configuration is constructed for calculating, at high speed, the co-occurrence of luminance gradient directions between differing resolutions of a subject image. In an image processing device, a processing line for high-resolution images, a processing line for medium-resolution images, and a processing line for low-resolution images are arranged in parallel, and the luminance gradient directions are extracted for each pixel simultaneously in parallel from images having the three resolutions. Co-occurrence matrix preparation units prepare co-occurrence matrices by using the luminance gradient directions extracted from these images having the three resolutions, and a histogram preparation unit outputs a histogram as an MRCoHOG feature amount by using these matrices. Because the images having the three resolutions are processed concurrently, high-speed processing can be performed, and moving pictures output from a camera can be processed in real time.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: May 25, 2021
    Assignees: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo Yamada, Kazuhiro Kuno, Hakaru Tamukoh, Shuichi Enokida, Shiryu Ooe
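The cross-resolution co-occurrence idea can be approximated in software as follows; this is only a sketch of the feature (quantised gradient directions at two resolutions tallied into a co-occurrence matrix), not the pipelined hardware design the patent describes, and the pooling and binning choices are assumptions.

```python
# Simplified two-resolution co-occurrence of gradient directions, in the
# spirit of MRCoHOG features. Software approximation only.
import numpy as np

def gradient_directions(img, bins=8):
    gy, gx = np.gradient(img.astype(float))
    angle = np.arctan2(gy, gx) % (2 * np.pi)
    return (angle / (2 * np.pi) * bins).astype(int) % bins

def cross_resolution_cooccurrence(img, bins=8):
    high = gradient_directions(img, bins)
    # Low-resolution image: 2x2 average pooling, then upsample by repetition
    # so the two direction maps are pixel-aligned.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    low_img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    low = np.repeat(np.repeat(gradient_directions(low_img, bins), 2, 0), 2, 1)
    cooc = np.zeros((bins, bins), dtype=int)
    np.add.at(cooc, (high[:h, :w].ravel(), low.ravel()), 1)
    return cooc   # cooc[i, j]: high-res direction i co-occurring with low-res j

matrix = cross_resolution_cooccurrence(np.random.rand(64, 64))
```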
  • Patent number: 11012616
    Abstract: An image processing system including a camera, a positioning device, a processor and a display is provided. The camera captures a real view image. The positioning device detects a camera position of the camera. The processor receives a high-precision map and a virtual object, calculates a camera posture of the camera according to the camera position by using simultaneous localization and mapping (SLAM), projects a depth image according to the camera posture and three-dimensional information of the high-precision map, superimposes the depth image and the real view image to generate a stack image, and superimposes the virtual object onto the stack image according to a virtual coordinate of the virtual object to produce a rendered image. The display displays the rendered image.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: May 18, 2021
    Inventors: Shou-Te Wei, Wei-Chih Chen
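The depth-image step can be illustrated with standard pinhole projection: transform high-precision-map points into the camera frame using the SLAM-estimated pose, project them with the intrinsics, and keep the nearest depth per pixel. The matrices and the simple z-buffer loop below are assumptions, not the patented implementation.

```python
# Minimal pinhole-projection sketch (assumed math, not the patented pipeline).
import numpy as np

def project_depth(points_world, R, t, K, height, width):
    """points_world: (N, 3); R, t: camera pose; K: 3x3 intrinsics."""
    cam = (R @ points_world.T + t.reshape(3, 1)).T       # world -> camera frame
    cam = cam[cam[:, 2] > 0]                              # keep points in front
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    depth = np.full((height, width), np.inf)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, z in zip(u[ok], v[ok], cam[ok, 2]):
        depth[vi, ui] = min(depth[vi, ui], z)             # keep nearest surface
    return depth

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
depth = project_depth(np.random.rand(5000, 3) * 10 + [0, 0, 5],
                      np.eye(3), np.zeros(3), K, 480, 640)
```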
  • Patent number: 11010639
    Abstract: An angularly-dependent reflectance of a surface of an object is measured. Images are collected by a sensor at different sensor geometries and different light-source geometries. A point cloud is generated. The point cloud includes a location of a point, spectral band intensity values for the point, an azimuth and an elevation of the sensor, and an azimuth and an elevation of a light source. Raw pixel intensities of the object and surroundings of the object are converted to a surface reflectance of the object using specular array calibration (SPARC) targets. A three-dimensional (3D) location of each point in the point cloud is projected back to each image using metadata from the plurality of images, and spectral band values are assigned to each value in the point cloud, thereby resulting in a multi-angle spectral reflectance data set.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: May 18, 2021
    Assignee: Raytheon Company
    Inventors: John J. Coogan, Stephen J. Schiller
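A sketch of the back-projection and data-assembly step only (the SPARC-based conversion of raw intensities to surface reflectance is not shown): each 3D point is projected into every image and its band values are stored together with the sensor and sun azimuth/elevation. The metadata field names below are assumptions.

```python
# Illustrative assembly of a multi-angle spectral point cloud record; the
# dictionary keys and 3x4 projection-matrix metadata form are hypothetical.
import numpy as np

def project(point, camera):
    """camera: dict with a 3x4 projection matrix 'P'."""
    u, v, w = camera["P"] @ np.append(point, 1.0)
    return int(u / w), int(v / w)

def multi_angle_records(points, images):
    """images: list of dicts with 'pixels' (H, W, bands), 'P', and geometry."""
    records = []
    for p in points:
        for im in images:
            col, row = project(p, im)
            h, w, _ = im["pixels"].shape
            if 0 <= row < h and 0 <= col < w:
                records.append({
                    "xyz": tuple(p),
                    "bands": im["pixels"][row, col].tolist(),
                    "sensor_az_el": im["sensor_az_el"],
                    "sun_az_el": im["sun_az_el"],
                })
    return records
```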
  • Patent number: 11003889
    Abstract: A classifier receives a document and analyzes the document to determine one or more predicted roles of one or more signatories, each predicted role determined based on one or more signature elements in the content of the document executed by the one or more signatories. The classifier evaluates each of the one or more predicted roles in view of a plurality of expected signatory role characteristics of a plurality of categories of documents of a transaction to select a particular category associated with the document from among the plurality of categories. The classifier classifies the document within the transaction as a particular logical type identified by the particular category from among a plurality of logical types for the transaction.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: May 11, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew R. Freed, Corville O. Allen
  • Patent number: 10997440
    Abstract: According to one embodiment, a wearing detection apparatus includes a restraint device, an imaging device, a control unit, and an output device. The restraint device restrains a passenger seated in a seat and includes a strap whose exposed area changes depending on whether the strap is worn. The imaging device captures an image of the restraint device including the strap. The control unit detects, from the image captured by the imaging device, a recognition image corresponding to a recognition member provided in an area that is exposed when the strap is worn. The output device outputs a result of detection of the recognition image.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: May 4, 2021
    Assignee: Toshiba Digital Solutions Corporation
    Inventors: Akifumi Ohno, Tatsuei Hoshino, Tamotsu Masuda, Satoshi Maesugi, Kousuke Imai, Naoto Shinkawa
  • Patent number: 10997685
    Abstract: An object (e.g., a driver's license) is tested for authenticity using imagery captured by a consumer device (e.g., a mobile phone camera). Corresponding data is sent from the consumer device to a remote system, which has secret knowledge about features indicating object authenticity. The phone, or the remote system, discerns the pose of the object relative to the camera from the captured imagery. The remote system tests the received data for the authentication features, and issues an output signal indicating whether the object is authentic. This testing involves modeling the image data that would be captured by the consumer device from an authentic object—based on the object's discerned pose (and optionally based on information about the camera optics), and then comparing this modeled data with the data sent from the consumer device. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: May 4, 2021
    Assignee: Digimarc Corporation
    Inventor: Geoffrey B. Rhoads
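One way to picture the pose-based modelling step is to warp a reference image of an authentic object into the discerned pose and correlate it with the captured region. The sketch below does that with OpenCV homography warping, assuming OpenCV is available; the correlation score is a stand-in for the remote system's secret authentication features and is not Digimarc's actual test.

```python
# Conceptual sketch only: model the expected appearance of an authentic
# document in the captured pose, then compare it with the captured pixels.
import cv2
import numpy as np

def authenticity_score(captured, reference, doc_corners_in_capture):
    """doc_corners_in_capture: 4x2 pixel corners of the document in `captured`."""
    h, w = reference.shape[:2]
    ref_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(ref_corners, np.float32(doc_corners_in_capture))
    size = (captured.shape[1], captured.shape[0])
    modelled = cv2.warpPerspective(reference, H, size)
    mask = cv2.warpPerspective(np.ones((h, w), np.uint8), H, size) > 0
    a = captured[mask].astype(float)
    b = modelled[mask].astype(float)
    # Normalised correlation as a stand-in for the secret feature comparison.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```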
  • Patent number: 10992959
    Abstract: First and second pluralities of residual elements useable to reconstruct first and second respective parts of a representation of a signal are obtained. A transformation operation is performed to generate at least one correlation element. The transformation operation involves at least one residual element in the first plurality and at least one residual element in the second plurality. The at least one correlation element is dependent on an extent of correlation between the at least one residual element in the first plurality and the at least one residual element in the second plurality. The transformation operation is performed prior to the at least one correlation element being encoded.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: April 27, 2021
    Assignee: V-NOVA INTERNATIONAL LIMITED
    Inventor: Ivan Damnjanovic
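A numeric toy example of deriving a correlation element from two sets of residuals before encoding: the element records how strongly the second set can be predicted from the first, and the predictable part is then removed. The least-squares scaling below is an assumption, not the V-Nova transformation operation.

```python
# Simplified numeric illustration, not the patented codec transformation.
import numpy as np

def correlation_element(residuals_a, residuals_b):
    a = residuals_a.astype(float).ravel()
    b = residuals_b.astype(float).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def transform(residuals_a, residuals_b):
    """Return the correlation element and residuals_b with the part that is
    predictable from residuals_a removed; both then go to the encoder."""
    c = correlation_element(residuals_a, residuals_b)
    scale = c * np.linalg.norm(residuals_b) / (np.linalg.norm(residuals_a) + 1e-9)
    decorrelated_b = residuals_b - scale * residuals_a
    return c, decorrelated_b

c, resid = transform(np.random.randn(8, 8), np.random.randn(8, 8))
```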
  • Patent number: 10990827
    Abstract: The invention relates to the area of computer vision and video data analysis, in particular to technologies for searching for required objects or events in analyzed video originally received from a third-party device. An imported video analysis device consists of memory, a database for metadata storage, a graphical user interface, and a data processing module. The data processing module is configured to upload a video in any available format into the memory and to import the uploaded video into the software of the imported video analysis device. The software decompresses and analyzes the imported video to generate metadata characterizing all objects in the video and saves the metadata in the database. The search speed for a required event or object in the imported video received from a third-party device is thereby increased.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: April 27, 2021
    Assignee: OOO ITV Group
    Inventor: Murat K. Altuev
  • Patent number: 10992943
    Abstract: A set of residual elements useable to reconstruct a rendition of a first time sample of a signal is obtained. A set of spatio-temporal correlation elements associated with the first time sample is generated. The set of spatio-temporal correlation elements is indicative of an extent of spatial correlation between a plurality of residual elements and an extent of temporal correlation between first reference data based on the rendition and second reference data based on a rendition of a second time sample of the signal. The set of spatio-temporal correlation elements is used to generate output data.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: April 27, 2021
    Assignee: V-NOVA INTERNATIONAL LIMITED
    Inventor: David Handford
  • Patent number: 10977509
    Abstract: An image processing method implemented by a processor includes receiving an image, acquiring a target image that includes an object from the image, calculating an evaluation score by evaluating a quality of the target image, and detecting the object from the target image based on the evaluation score.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: April 13, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hao Feng, Jae-Joon Han, Changkyu Choi, Chao Zhang, Jingtao Xu, Yanhu Shan, Yaozu An
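The evaluate-then-detect flow can be sketched as scoring the sharpness and exposure of the target image and gating or parameterising the detector with that score; the Laplacian-variance sharpness measure, the weighting, and the threshold are assumptions rather than the patented evaluation.

```python
# Rough sketch: quality-score a target image, then gate/adapt detection.
# The quality criteria and numbers are illustrative assumptions.
import numpy as np

def laplacian_variance(gray):
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def evaluate_quality(gray):
    sharpness = min(laplacian_variance(gray) / 100.0, 1.0)   # arbitrary scale
    exposure = 1.0 - abs(gray.mean() - 128.0) / 128.0
    return 0.5 * sharpness + 0.5 * exposure

def detect(gray, detector, threshold=0.4):
    score = evaluate_quality(gray)
    if score < threshold:
        return []                      # too degraded: skip or request re-capture
    return detector(gray, sensitivity=score)

boxes = detect(np.random.randint(0, 256, (120, 160)).astype(float),
               detector=lambda img, sensitivity: [("face", sensitivity)])
```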
  • Patent number: 10977765
    Abstract: One or more neural networks generate a first vector field from an input image and a reference image. The first vector field is applied to the input image to generate a first warped image. The training of the neural networks is evaluated via one or more objective functions. The neural networks are updated in response to the evaluating. The neural networks generate a second vector field from the input image and the reference image. A number of degrees of freedom in the first vector field is less than a number of degrees of freedom in the second vector field. The second vector field is applied to the input image to generate a second warped image. The neural networks are evaluated via the one or more objective functions, the reference image and the second warped image. The networks are updated in response to the evaluating.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: April 13, 2021
    Assignee: Eagle Technology, LLC
    Inventors: Derek J. Walvoord, Doug W. Couwenhoven
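To make the coarse-to-fine idea concrete, the sketch below replaces the neural networks with stub functions: a low-degree-of-freedom field is applied first, a dense field refines the result, and a simple mean-squared error stands in for the objective functions. Everything here is illustrative, not the Eagle Technology implementation.

```python
# Conceptual two-stage warping sketch; the "networks" are stubs.
import numpy as np

def warp(image, field):
    """field[..., 0/1]: per-pixel (dy, dx) displacement; nearest-neighbour warp."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + field[..., 0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + field[..., 1]).round().astype(int), 0, w - 1)
    return image[src_y, src_x]

def coarse_field(input_img, reference):         # stand-in for network #1
    return np.full(input_img.shape + (2,), 0.5)         # few DOF: one global shift

def fine_field(input_img, reference):           # stand-in for network #2
    return np.random.randn(*input_img.shape, 2) * 0.5   # dense per-pixel field

img = np.random.rand(32, 32)
ref = np.random.rand(32, 32)
warped_once = warp(img, coarse_field(img, ref))                 # stage 1: coarse warp
warped_twice = warp(warped_once, fine_field(warped_once, ref))  # stage 2: refinement
loss = float(np.mean((warped_twice - ref) ** 2))                # objective surrogate
```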