Patents Examined by David F Dunphy
-
Patent number: 11978249
Abstract: A computer-implemented method for identifying features of interest in a data image. The method includes identifying data variations in a data image or set of data images, each data image comprising rendered data, identifying one or more features of interest in the data image or set of data images based on the identified data variations, identifying a feature of interest genus corresponding to each identified feature of interest, reclassifying the rendered data based on each of the identified features of interest genuses so as to eliminate background data in the rendered data thereby producing an eliminated background dataset, and generating a feature of interest map for each identified feature of interest genus. A machine learning method, including a training phase, for automatically identifying features of interest in a data image is further provided.
Type: Grant
Filed: August 24, 2019
Date of Patent: May 7, 2024
Assignee: Fugro N.V.
Inventors: Christine Devine, William Haneberg
-
Patent number: 11972544
Abstract: Optical coherence tomography (OCT) angiography (OCTA) data is generated by one or more machine learning systems to which OCT data is input. The OCTA data is capable of visualization in three dimensions (3D) and can be generated from a single OCT scan. Further, motion artifact can be removed or attenuated in the OCTA data by performing the OCT scans according to special scan patterns and/or capturing redundant data, and by the one or more machine learning systems.
Type: Grant
Filed: May 12, 2021
Date of Patent: April 30, 2024
Assignee: TOPCON CORPORATION
Inventors: Song Mei, Toshihiro Mino, Dawei Li, Zaixing Mao, Zhenguo Wang, Kinpui Chan
-
Patent number: 11961320
Abstract: The invention relates to a method for training a machine learning model to identify a subject having at least one machine readable identifier providing a subject ID, said method comprising: providing a computer vision system with an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier; defining said machine learning model in said computer vision system; capturing a first image using said image capturing system, said first image showing said subject; reading said subject ID using said reader system when capturing said first image, and linking said subject ID with said first image, said linking providing said first image with a linked subject ID, providing a first annotated image; capturing at least one further image showing said subject, linking said linked subject ID to said at least one further image providing at least one further annotated image, and subjecting said first annotat…
Type: Grant
Filed: April 18, 2022
Date of Patent: April 16, 2024
Assignee: KEPLER VISION TECHNOLOGIES B.V.
Inventors: Marc Jean Baptist Van Oldenborgh, Cornelis Gerardus Maria Snoek
-
Patent number: 11954853
Abstract: This disclosure proposes to speed up computation time of a convolutional neural network (CNN) by leveraging information specific to a pre-defined region, such as a breast in mammography and tomosynthesis data. In an exemplary embodiment, a method for an image processing system is provided, comprising, generating an output of a trained convolutional neural network (CNN) of the image processing system based on an input image, including a pre-defined region of the input image as an additional input into at least one of a convolutional layer and a fully connected layer of the CNN to limit computations to input image data inside the pre-defined region; and storing the output and/or displaying the output on a display device.
Type: Grant
Filed: July 21, 2021
Date of Patent: April 9, 2024
Assignee: GE PRECISION HEALTHCARE LLC
Inventors: Sylvain Bernard, Vincent Bismuth
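The region-limited computation described in this abstract can be illustrated with a toy sketch. This is an assumption-laden illustration, not GE's actual implementation: convolution is evaluated only inside the bounding box of a pre-defined region mask, so pixels outside the region (e.g. outside the breast in a mammogram) cost nothing.

```python
import numpy as np

# Hypothetical sketch: restrict a 2D convolution to the bounding box of a
# pre-defined region mask, so computation is spent only inside the region.
def conv2d_in_region(image: np.ndarray, kernel: np.ndarray,
                     region_mask: np.ndarray) -> np.ndarray:
    # Find the tight bounding box of the region of interest.
    ys, xs = np.nonzero(region_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1]
    kh, kw = kernel.shape
    oh, ow = crop.shape[0] - kh + 1, crop.shape[1] - kw + 1
    out = np.zeros_like(image, dtype=float)
    # Valid convolution evaluated only over the cropped region.
    for i in range(oh):
        for j in range(ow):
            out[y0 + i, x0 + j] = np.sum(crop[i:i + kh, j:j + kw] * kernel)
    return out
```

The cost scales with the bounding-box area rather than the full image, which is the speedup the abstract claims; everything outside the box stays zero.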
-
Patent number: 11954942
Abstract: Embodiments relate to a human behavior recognition system using hierarchical class learning considering safety, the human behavior recognition system including a behavior class definer configured to form a plurality of behavior classes by sub-setting a plurality of images each including a subject according to pre-designated behaviors and assign a behavior label to the plurality of images, a safety class definer configured to calculate a safety index for the plurality of images, form a plurality of safety classes by sub-setting the plurality of images based on the safety index, and additionally assign a safety label to the plurality of images, and a trainer configured to train a human recognition model by using the plurality of images defined as hierarchical classes by assigning the behavior label and the safety label as training images.
Type: Grant
Filed: December 30, 2021
Date of Patent: April 9, 2024
Assignee: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Junghyun Cho, Ig Jae Kim, Hochul Hwang
-
Patent number: 11954826
Abstract: This disclosure provides methods, devices, and systems for neural network inferencing. The present implementations more specifically relate to performing inferencing operations on high dynamic range (HDR) image data in a lossless manner. In some aspects, a machine learning system may receive a number (K) of bits of pixel data associated with an input image and subdivide the K bits into a number (M) of partitions based on a number (N) of bits in each operand operated on by an artificial intelligence (AI) accelerator, where N<K. For example, the K bits may represent a pixel value associated with the input image. In some implementations, the AI accelerator may perform an inferencing operation based on a neural network by processing the M partitions, in parallel, as data associated with M channels, respectively, of the input image.
Type: Grant
Filed: July 21, 2021
Date of Patent: April 9, 2024
Assignee: Synaptics Incorporated
Inventor: Karthikeyan Shanmuga Vadivel
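The bit-partitioning scheme in this abstract is concrete enough to sketch. The following is a hypothetical NumPy illustration (the function names and the least-significant-bits-first channel order are assumptions, not taken from the patent): each K-bit pixel is split into M = ceil(K/N) partitions of N bits, which can then be fed to an accelerator as M channels.

```python
import numpy as np

# Hypothetical sketch: losslessly split K-bit HDR pixel values into
# M = ceil(K / N) channels of N-bit operands, and reassemble them.
def partition_pixels(pixels: np.ndarray, k_bits: int, n_bits: int) -> np.ndarray:
    """Split each K-bit pixel into M channels of N-bit values."""
    m = -(-k_bits // n_bits)          # ceil(K / N)
    mask = (1 << n_bits) - 1
    # Channel i holds bits [i*N, (i+1)*N) of each pixel, least significant first.
    channels = [(pixels >> (i * n_bits)) & mask for i in range(m)]
    return np.stack(channels, axis=-1)

def reassemble_pixels(channels: np.ndarray, n_bits: int) -> np.ndarray:
    """Reconstruct the original K-bit values from the M channels."""
    m = channels.shape[-1]
    out = np.zeros(channels.shape[:-1], dtype=np.int64)
    for i in range(m):
        out |= channels[..., i].astype(np.int64) << (i * n_bits)
    return out
```

With 16-bit HDR pixels and an 8-bit accelerator operand width, each pixel becomes M = 2 channels, and the round trip is exact, which is the lossless property the abstract emphasizes.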
-
Patent number: 11948349
Abstract: Provided are a learning method, a learning device, a generative model, and a program that generate an image including high resolution information without adjusting a parameter and largely correcting a network architecture even in a case in which there is a variation of the parts of an image to be input. Only a first image is input to a generator of a generative adversarial network that generates a virtual second image having a relatively high resolution by using the first image having a relatively low resolution, and a second image for learning or the virtual second image and part information of the second image for learning or the virtual second image are input to a discriminator that identifies the second image for learning and the virtual second image.
Type: Grant
Filed: August 12, 2021
Date of Patent: April 2, 2024
Assignee: FUJIFILM Corporation
Inventors: Akira Kudo, Yoshiro Kitamura
-
Patent number: 11950020
Abstract: A method of visualising a meeting between one or more participants on a display includes, in an electronic processing device, the steps of: determining a plurality of signals, each of the plurality of signals being at least partially indicative of the meeting; generating a plurality of features using the plurality of signals, the features being at least partially indicative of the signals; generating at least one of: at least one phase indicator associated with the plurality of features, the at least one phase indicator being indicative of a temporal segmentation of at least part of the meeting; and at least one event indicator associated with the plurality of features, the at least one event indicator being indicative of an event during the meeting. The method also includes the step of causing a representation indicative of the at least one phase indicator and/or the at least one event indicator to be displayed on the display to thereby provide visualisation of the meeting.
Type: Grant
Filed: April 9, 2020
Date of Patent: April 2, 2024
Assignee: Pinch Labs Pty Ltd
Inventors: Christopher Raethke, Saxon Fletcher, Jaco Du Plessis, Andrew Cupper, Iain McCowan
-
Patent number: 11938614
Abstract: The disclosure discloses a control device for a robot to tease a pet and a mobile robot. The primary sensor is configured to continuously collect a preset number of frames of pet motion images in each motion cycle. The state recognizer is configured to judge the matching between the pet motion images continuously collected by the primary sensor and a pre-stored digital image of pet behavior, and then parse a matching result into behavior state parameters of a pet. The behavior interferometer is configured to adjust and control a behavior state of the pet according to the behavior state parameters and an additional road sign image provided by the secondary sensor. The laser projector is configured to project a laser beam to form a structural light spot, so that the pet changes toward the behavior state adjusted by the behavior interferometer.
Type: Grant
Filed: November 9, 2019
Date of Patent: March 26, 2024
Assignee: AMICRO SEMICONDUCTOR CO., LTD.
Inventors: Dengke Xu, Xinqiao Jiang
-
Patent number: 11941899
Abstract: Apparatuses, systems, and techniques generate poses of an object based on image data of the object obtained from a first viewpoint of the object and a second viewpoint of the object. The poses can be evaluated to determine a portion of the image data usable by an estimator to generate a pose of the object.
Type: Grant
Filed: May 26, 2021
Date of Patent: March 26, 2024
Assignee: NVIDIA Corporation
Inventors: Jonathan Tremblay, Fabio Tozeto Ramos, Yuke Zhu, Anima Anandkumar, Guanya Shi
-
Patent number: 11940578
Abstract: An imaging method including: a) acquiring N successive positron emission tomography (PET) low resolution images ?i and simultaneously, N successive Ultrafast Ultrasound Imaging (UUI) images Ui of a moving object; b) determining from each UUI image Ui, the motion vector fields Mi that corresponds to the spatio-temporal geometrical transformation of the motion of the object; c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image Hn+1 obtained by applying several correction iterations to a current estimated high resolution image Hn, n being the number of iterations, starting from an initial estimated high resolution image H1 of the object, each correction iteration including at least: i) warping the estimated high resolution image Hn using the motion vector fields Mi to determine a set of low resolution reference images Lni; ii) determining a differential image Di by difference between each PET image ?i and the corresponding low resolution re…
Type: Grant
Filed: January 28, 2021
Date of Patent: March 26, 2024
Assignees: INSERM (INSTITUT NATIONAL DE LA SANTÉ ET DE LA RECHERCHE MÉDICALE), UNIVERSITÉ DE PARIS, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE—CNRS, ECOLE SUPERIEURE DE PHYSIQUE ET DE CHIMIE INDUSTRIELLES DE LA VILLE DE PARIS
Inventors: Bertrand Tavitian, Mickaël Tanter, Mailyn Perez-Liva, Joaquin Lopez Herraiz, Jean Provost
-
Patent number: 11941887
Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
Type: Grant
Filed: September 13, 2022
Date of Patent: March 26, 2024
Assignee: NVIDIA Corporation
Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
-
Patent number: 11941581
Abstract: The disclosure relates to systems and methods for real-time detection of a very large number of items in a given constrained volume. Specifically, the disclosure relates to systems and methods for retrieving an optimized set of classifiers from a self-updating classifiers' database, configured to selectively and specifically identify products inserted into a cart in real time, from a database comprising a large number of stock-keeping items, whereby the inserted items' captured images serve simultaneously as training dataset, validation dataset and test dataset for the recognition/identification/re-identification of the product.
Type: Grant
Filed: February 9, 2022
Date of Patent: March 26, 2024
Assignee: TRACXPOINT LLC.
Inventors: Moshe Meidar, Gidon Moshkovitz, Edi Bahous, Itai Winkler
-
Patent number: 11941905
Abstract: Systems and methods including one or more processors and one or more non-transitory storage devices storing computing instructions configured to run on the one or more processors and perform acts of receiving one or more digital images; identifying a foreground of the one or more digital images; analyzing the foreground of the one or more digital images to identify a skin region in the foreground of the one or more digital images; when the skin region is identified, clustering a non-skin remainder of the foreground of the one or more digital images into one or more clusters; extracting one or more patches of the one or more digital images from the one or more clusters of the foreground of the one or more digital images; determining one or more scores for the one or more patches of the one or more digital images; and coordinating displaying a patch of the one or more patches on an electronic display based on the one or more scores for the one or more patches. Other embodiments are disclosed herein.
Type: Grant
Filed: July 6, 2021
Date of Patent: March 26, 2024
Assignee: WALMART APOLLO, LLC
Inventors: Qian Li, Samrat Kokkula, Abon Chaudhuri, Ashley Kim, Alessandro Magnani
-
Patent number: 11931909
Abstract: Apparatuses, systems, and techniques generate poses of an object based on data of the object observed from a first viewpoint and a second viewpoint. The poses can be evaluated to determine a portion of the data usable by an estimator to generate a pose of the object.
Type: Grant
Filed: May 26, 2021
Date of Patent: March 19, 2024
Assignee: NVIDIA Corporation
Inventors: Jonathan Tremblay, Fabio Tozeto Ramos, Yuke Zhu, Anima Anandkumar, Guanya Shi
-
Patent number: 11934450
Abstract: A system and method utilizing three-dimensional (3D) data to identify objects. A database of profiles can be created for goods, product, object, or part information by producing object representations that permit rapid, highly-accurate object identification, matching, and obtaining information about the object. The database can be part of a different recognition system than the system used to identify the object. The profiles can be compared to a profile of an unknown object to identify, match, or obtain information about the unknown object, and the profiles can be filtered to identify or match the profiles of known objects to identify and/or gather information about an unknown object. Comparison, filtering, and identification may be performed prior to, subsequent to, or in conjunction with other systems, such as image-based machine learning algorithms.
Type: Grant
Filed: April 12, 2021
Date of Patent: March 19, 2024
Assignee: SKUSUB LLC
Inventors: Keith H. Meany, Matthew Antone
-
Patent number: 11928840
Abstract: A method for analysis of an image comprises: receiving (402) the image to be analyzed; processing (404) the image with a machine-learned model, wherein the machine-learned model is configured to predict at least an intrinsic parameter of the image using at least a first variable of the machine-learned model, wherein the first variable defines a relation between a radial distortion of the image and a focal length of the image; and outputting (406) the intrinsic parameter of the image. Also, methods for forming a 3D reconstruction of a scenery, for training a machine-learned model for analysis of an image and for generating a dataset of images for training a machine-learned model are disclosed.
Type: Grant
Filed: March 13, 2020
Date of Patent: March 12, 2024
Assignee: Meta Platforms, Inc.
Inventors: Yubin Kuang, Pau Gargallo Piracés, Manuel Antonio López Antequera, Roger Marí Molas, Jan Erik Solem
-
Patent number: 11915485
Abstract: A system for monitoring vehicle traffic may include a camera positioned to capture images within a license plate detection zone, the images may represent license plates of vehicles. The system may include an electronic device identification sensor that detects and stores electronic device identifiers of electronic devices located within an electronic device detection zone, and a computing system that detects, using the images, a license plate ID of a vehicle, compare the license plate ID of the vehicle to a database of trusted vehicle license plate IDs, identifies the vehicle as a suspicious vehicle, the identification based at least in part on the comparison of the license plate ID of the vehicle to the database of trusted vehicle license plate IDs, and correlates the license plate ID of the vehicle with at least one of the plurality of stored electronic device identifiers.
Type: Grant
Filed: March 7, 2022
Date of Patent: February 27, 2024
Inventor: William Holloway Petrey, Jr.
-
Patent number: 11915484
Abstract: A method, an apparatus, device and a storage medium for generating a target re-recognition model are provided. The method may include: acquiring a set of labeled samples, a set of unlabeled samples and an initialization model obtained through supervised training; performing feature extraction on each sample in the set of the unlabeled samples by using the initialization model; clustering features extracted from the set of the unlabeled samples by using a clustering algorithm; assigning, for each sample in the set of the unlabeled samples, a pseudo label to the sample according to a cluster corresponding to the sample in a feature space; and mixing a set of samples with a pseudo label and the set of the labeled samples as a set of training samples, and performing supervised training on the initialization model to obtain a target re-recognition model.
Type: Grant
Filed: June 17, 2021
Date of Patent: February 27, 2024
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventors: Zhigang Wang, Jian Wang, Errui Ding, Hao Sun
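The pipeline in this abstract (extract features, cluster them, use cluster ids as pseudo labels, mix with the labeled set) can be sketched in a few lines. The toy k-means below stands in for the unspecified clustering algorithm, and all names are illustrative; feature extraction by the initialization model is assumed to have already happened.

```python
import numpy as np

# Hypothetical sketch of the pseudo-labelling step: cluster features of
# unlabeled samples and use each sample's cluster id as its pseudo label.
def assign_pseudo_labels(features: np.ndarray, k: int,
                         iters: int = 20, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct samples.
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = features[labels == c].mean(axis=0)
    return labels  # pseudo label = cluster id

def mix_training_set(labeled_x, labeled_y, unlabeled_x, pseudo_y):
    # Merge labeled samples with pseudo-labelled ones into one training set.
    x = np.concatenate([labeled_x, unlabeled_x])
    y = np.concatenate([labeled_y, pseudo_y])
    return x, y
```

The mixed set would then drive another round of supervised training on the initialization model, as the abstract describes.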
-
Patent number: 11907336
Abstract: Systems, methods, and computer-readable media are disclosed for visual labeling of training data items for training a machine learning model. Training data items may be generated for training the machine learning model. Visual labels, such as QR codes, may be created for the training data items. The creation of the training data item and the visual label may be automated. The visual labels and the training data items may be combined to obtain a labeled training data item. The labeled training data item may comprise a separator to distinguish the training data item from the visual label. The labeled training data item may be used for training and validation of the machine learning model. The machine learning model may analyze the training data item, attempt to identify the training data item, and compare the identification against the embedded label.Type: Grant
Filed: November 2, 2021
Date of Patent: February 20, 2024
Assignee: SAP SE
Inventors: Ran M. Bittmann, Hans-Martin Ramsl
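The combine-with-separator record described in this abstract can be illustrated abstractly. The sentinel value and helper names below are assumptions (not SAP's format), and a QR payload is modeled as plain bytes: the separator lets a consumer split the record back into the data item and its embedded label, and validation compares a model's prediction against that label.

```python
# Hypothetical record format: data item + separator + visual-label payload.
SEPARATOR = b"\x00LABEL\x00"  # assumed sentinel; any sequence absent from the data works

def combine(item: bytes, label: bytes) -> bytes:
    """Pack a training data item and its label payload into one record."""
    assert SEPARATOR not in item, "separator must not occur inside the data item"
    return item + SEPARATOR + label

def split(record: bytes) -> tuple[bytes, bytes]:
    """Recover (item, label) from a packed record using the separator."""
    item, _, label = record.partition(SEPARATOR)
    return item, label

def validate(predicted_label: bytes, record: bytes) -> bool:
    """Compare a model's identification against the embedded ground-truth label."""
    return predicted_label == split(record)[1]
```

During validation, the model sees only the item half of the record, predicts an identification, and `validate` checks it against the label half, which mirrors the train-and-verify loop the abstract outlines.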