Neural Networks Patents (Class 382/156)
  • Patent number: 12141421
    Abstract: An electronic device and a controlling method thereof are provided. An electronic device includes a memory configured to store at least one instruction and a processor configured to execute the at least one instruction and operate as instructed by the at least one instruction. The processor is configured to: obtain a first image; based on receiving a first user command to correct the first image, obtain a second image by correcting the first image; based on the first image and the second image, train a neural network model; and based on receiving a second user command to correct a third image, obtain a fourth image by correcting the third image using the trained neural network model.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: November 12, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chanwon Seo, Youngeun Lee, Eunseo Kim, Myungjin Eom
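    A minimal PyTorch sketch of the idea in the abstract above (patent 12141421): fit a small network on a single (first image, user-corrected second image) pair, then apply it to a new third image. The network size, optimizer, and the synthetic "correction" are illustrative assumptions, not the patent's implementation.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    first_image = torch.rand(1, 3, 128, 128)       # hypothetical original photo
    second_image = first_image.clamp(0.1, 0.9)     # stand-in for the user's manual correction

    for _ in range(200):                           # train on the (original, corrected) pair
        optimizer.zero_grad()
        loss = F.mse_loss(model(first_image), second_image)
        loss.backward()
        optimizer.step()

    third_image = torch.rand(1, 3, 128, 128)       # new image to be corrected
    fourth_image = model(third_image)              # correction applied by the trained model
    ```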
  • Patent number: 12141952
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for detecting and classifying an exposure defect in an image using neural networks trained via a limited amount of labeled training images. An image may be applied to a first neural network to determine whether the image includes an exposure defect. A detected defective image may be applied to a second neural network to determine an exposure defect classification for the image. The exposure defect classification can include severe underexposure, medium underexposure, mild underexposure, mild overexposure, medium overexposure, severe overexposure, and/or the like. The image may be presented to a user along with the exposure defect classification.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: November 12, 2024
    Assignee: Adobe Inc.
    Inventors: Akhilesh Kumar, Zhe Lin, William Lawrence Marino
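    A hedged sketch of the two-stage pipeline described above (patent 12141952): one network decides whether an exposure defect is present, and a second assigns one of the listed defect classes. Both backbones here are untrained placeholders; the architectures and the 0.5 threshold are assumptions.
    ```python
    import torch
    import torch.nn as nn

    CLASSES = ["severe under", "medium under", "mild under",
               "mild over", "medium over", "severe over"]

    def backbone(out_dim):
        return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, out_dim))

    detector = backbone(1)                  # stage 1: is there an exposure defect?
    classifier = backbone(len(CLASSES))     # stage 2: which defect class?

    image = torch.rand(1, 3, 224, 224)
    if torch.sigmoid(detector(image)).item() > 0.5:
        label = CLASSES[classifier(image).argmax(dim=1).item()]
    else:
        label = "well exposed"
    print(label)
    ```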
  • Patent number: 12136148
    Abstract: A display method, system, device, and related computer programs can present the classification performance of an artificial neural network in a form interpretable by humans. In these methods, systems, devices, and programs, a probability calculator (i.e., a classifier) based on an artificial neural network calculates a classification result for an input image in the form of a probability. The distribution of classification-result probabilities is displayed using at least one display axis of a graph as a probability axis.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: November 5, 2024
    Assignee: K.K. CYBO
    Inventors: Keisuke Goda, Nao Nitta, Takeaki Sugimura
  • Patent number: 12136251
    Abstract: In accordance with one embodiment of the present disclosure, a method includes receiving an input image having an object and a background, intrinsically decomposing the object and the background into an input image data having a set of features, augmenting the input image data with a 2.5D differentiable renderer for each feature of the set of features to create a set of augmented images, and compiling the input image and the set of augmented images into a training data set for training a downstream task network.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: November 5, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Sergey Zakharov, Rares Ambrus, Vitor Guizilini, Adrien Gaidon
  • Patent number: 12131548
    Abstract: Disclosed is a method for training shallow convolutional neural networks for infrared target detection using a two-phase learning strategy that can converge to satisfactory detection performance, even with scale-invariance capability. In the first step, the aim is to ensure that only filters in the convolutional layer produce semantic features that serve the problem of target detection. The L2-norm (Euclidean norm) is used as the loss function for the stable training of semantic filters obtained from the convolutional layers. In the next step, only the decision layers are trained by transferring the weight values in the convolutional layers completely and freezing the learning rate. In this step, unlike the first, the L1-norm (mean-absolute-deviation) loss function is used.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: October 29, 2024
    Assignee: ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI
    Inventors: Engin Uzun, Tolga Aksoy, Erdem Akagunduz
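    A sketch of the two-phase strategy in the abstract above (patent 12131548), assuming PyTorch: phase 1 trains only the convolutional (semantic-feature) layers with an L2 loss; phase 2 transfers and freezes those weights and trains only the decision layers with an L1 loss. The layer sizes and the semantic-feature targets are placeholders, not the patent's actual design.
    ```python
    import torch
    import torch.nn as nn

    backbone = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
    decision = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    x = torch.randn(8, 1, 64, 64)            # hypothetical infrared patches
    target = torch.rand(8, 16, 64, 64)       # hypothetical semantic-feature targets
    label = torch.rand(8, 1)                 # hypothetical detection targets

    # Phase 1: train only the convolutional layers with an L2 (MSE) loss.
    opt1 = torch.optim.Adam(backbone.parameters(), lr=1e-3)
    opt1.zero_grad()
    nn.MSELoss()(backbone(x), target).backward()
    opt1.step()

    # Phase 2: transfer and freeze the convolutional weights, then train only
    # the decision layers with an L1 (mean-absolute-error) loss.
    for p in backbone.parameters():
        p.requires_grad_(False)
    opt2 = torch.optim.Adam(decision.parameters(), lr=1e-3)
    opt2.zero_grad()
    nn.L1Loss()(decision(backbone(x)), label).backward()
    opt2.step()
    ```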
  • Patent number: 12131550
    Abstract: In one example, a method is provided that includes receiving lidar data obtained by a lidar device. The lidar data includes a plurality of data points indicative of locations of reflections from an environment of the vehicle. The method includes receiving images of portions of the environment captured by a camera at different times. The method also includes determining locations in the images that correspond to a data point of the plurality of data points. Additionally, the method includes determining feature descriptors for the locations of the images and comparing the feature descriptors to determine that sensor data associated with at least one of the lidar device, the camera, or a pose sensor is accurate or inaccurate.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: October 29, 2024
    Assignee: Waymo LLC
    Inventors: Colin Braley, Volodymyr Ivanchenko
  • Patent number: 12133030
    Abstract: A method for converting a source video content constrained to a first color space to a video content constrained to a second color space using an artificial intelligence machine-learning algorithm based on a creative profile.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: October 29, 2024
    Assignee: Warner Bros. Entertainment Inc.
    Inventors: Michael Zink, Ha Nguyen
  • Patent number: 12131463
    Abstract: An information processing apparatus calculates an index for a plant growth state with a neural network while using fewer images to train the neural network. The apparatus includes first to N-th image analyzers (N ≥ 2), each analyzing a cultivation area image of a plant cultivation area with a neural network to calculate a state index indicating a growth state of the plant in the cultivation area, and each including a neural network trained using cultivation area images having a predetermined growth index classified into a corresponding class of first to N-th growth index classes. The apparatus also includes a selector that receives an input cultivation area image for which the state index is to be calculated and causes the image analyzer trained on cultivation area images of the same growth index class as the input image to analyze the input cultivation area image.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: October 29, 2024
    Assignee: OMRON Corporation
    Inventors: Xiangyu Zeng, Atsushi Hashimoto
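    A small sketch of the selector-plus-analyzers arrangement described above (patent 12131463), with hypothetical analyzers: the selector classifies the input cultivation-area image into one of the N growth-index classes and routes it to the analyzer trained on that class.
    ```python
    import numpy as np

    class GrowthStateEstimator:
        """Route each cultivation-area image to the analyzer trained on the
        same growth-index class (sketch, not the patent's implementation)."""
        def __init__(self, analyzers, classify_growth_index):
            self.analyzers = analyzers                      # N class-specific models
            self.classify_growth_index = classify_growth_index

        def state_index(self, image):
            k = self.classify_growth_index(image)           # which of the N classes
            return self.analyzers[k](image)                 # that class's analyzer

    # Hypothetical toy usage: 3 classes, each analyzer a simple callable.
    analyzers = [lambda img, s=scale: float(img.mean() * s) for scale in (0.5, 1.0, 2.0)]
    estimator = GrowthStateEstimator(analyzers, lambda img: int(img.mean() * 3) % 3)
    print(estimator.state_index(np.random.rand(64, 64)))
    ```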
  • Patent number: 12124533
    Abstract: Embodiments are generally directed to methods and apparatuses of a spatially sparse convolution module for visual rendering and synthesis. An embodiment of a method for image processing, comprising: receiving an input image by a convolution layer of a neural network to generate a plurality of feature maps; performing spatially sparse convolution on the plurality of feature maps to generate spatially sparse feature maps; and upsampling the spatially sparse feature maps to generate an output image.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: October 22, 2024
    Assignee: INTEL CORPORATION
    Inventors: Anbang Yao, Ming Lu, Yikai Wang, Scott Janus, Sungye Kim
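    A rough, hedged sketch of the flow in the abstract above (patent 12124533). True spatially sparse convolution skips computation at inactive sites; this toy version only masks a dense convolution to zero at inactive sites, so it illustrates the data flow (feature maps, sparse convolution, upsampling) but not the compute savings. The threshold and shapes are assumptions.
    ```python
    import torch
    import torch.nn.functional as F

    def spatially_sparse_conv(feature_maps, weight, bias, threshold=1e-3):
        """Keep convolution outputs only at sites whose activation magnitude
        exceeds a threshold; all other output sites are zeroed."""
        mask = (feature_maps.abs().amax(dim=1, keepdim=True) > threshold).float()
        dense = F.conv2d(feature_maps, weight, bias, padding=1)
        return dense * mask                       # zero out inactive sites

    x = torch.randn(1, 16, 64, 64)                # feature maps from a conv layer
    w = torch.randn(32, 16, 3, 3)
    b = torch.zeros(32)
    sparse_feats = spatially_sparse_conv(x, w, b)
    out = F.interpolate(sparse_feats, scale_factor=2, mode="bilinear", align_corners=False)
    print(out.shape)                              # torch.Size([1, 32, 128, 128])
    ```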
  • Patent number: 12124879
    Abstract: Provided is a control method of a deep neural network (DNN) accelerator for optimized data processing. The control method includes, based on a dataflow and a hardware mapping value of neural network data allocated to a first-level memory, calculating a plurality of offsets representing start components of a plurality of data tiles of the neural network data, based on receiving an update request for the neural network data from a second-level memory, identifying a data type of an update data tile corresponding to the received update request among the plurality of data tiles, identifying one or more components of the update data tile, based on the data type of the update data tile and an offset of the update data tile among the calculated plurality of offsets, and updating neural network data of the identified one or more components between the first-level memory and the second-level memory.
    Type: Grant
    Filed: March 29, 2023
    Date of Patent: October 22, 2024
    Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
    Inventors: William Jinho Song, Bogil Kim, Chanho Park, Semin Koong, Taesoo Lim
  • Patent number: 12124535
    Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
    Type: Grant
    Filed: June 7, 2024
    Date of Patent: October 22, 2024
    Assignee: VIZIT LABS, INC.
    Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
  • Patent number: 12112023
    Abstract: An electronic device and a controlling method thereof are provided. An electronic device includes a memory configured to store at least one instruction and a processor configured to execute the at least one instruction and operate as instructed by the at least one instruction. The processor is configured to: obtain a first image; based on receiving a first user command to correct the first image, obtain a second image by correcting the first image; based on the first image and the second image, train a neural network model; and based on receiving a second user command to correct a third image, obtain a fourth image by correcting the third image using the trained neural network model.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: October 8, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chanwon Seo, Youngeun Lee, Eunseo Kim, Myungjin Eom
  • Patent number: 12106829
    Abstract: The technology disclosed relates to artificial intelligence-based base calling. The technology disclosed relates to accessing a progression of per-cycle analyte channel sets generated for sequencing cycles of a sequencing run, processing, through a neural network-based base caller (NNBC), windows of per-cycle analyte channel sets in the progression for the windows of sequencing cycles of the sequencing run such that the NNBC processes a subject window of per-cycle analyte channel sets in the progression for the subject window of sequencing cycles of the sequencing run and generates provisional base call predictions for three or more sequencing cycles in the subject window of sequencing cycles, from multiple windows in which a particular sequencing cycle appeared at different positions, using the NNBC to generate provisional base call predictions for the particular sequencing cycle, and determining a base call for the particular sequencing cycle based on the plurality of base call predictions.
    Type: Grant
    Filed: July 13, 2023
    Date of Patent: October 1, 2024
    Assignee: Illumina, Inc.
    Inventors: Anindita Dutta, Gery Vessere, Dorna KashefHaghighi, Kishore Jaganathan, Amirali Kia
  • Patent number: 12100196
    Abstract: The present disclosure relates to a system and method of performing quantization of a neural network having multiple layers. The method comprises receiving a floating-point dataset as an input dataset and determining a first shift constant for the first layer of the neural network based on the input dataset. The method also comprises performing quantization for the first layer using the determined shift constant of the first layer. The method further comprises determining a next shift constant for the next layer of the neural network based on the output of the layer previous to the next layer, and performing quantization for the next layer using the determined next shift constant. The method further comprises iterating the steps of determining a shift constant and performing quantization for all layers of the neural network to generate a fixed-point dataset as output.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: September 24, 2024
    Assignee: Blaize, Inc.
    Inventors: Deepak Chandra Bijalwan, Pratyusha Musunuru
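    A sketch of per-layer quantization with shift constants, loosely following the abstract above (patent 12100196). The abstract does not define how a shift constant is computed; this example assumes a power-of-two scale chosen from the data range feeding each layer, which is one common convention, not necessarily the patent's.
    ```python
    import numpy as np

    def shift_constant(data, bits=8):
        """Power-of-two exponent so the largest magnitude fits the fixed-point range."""
        max_val = np.abs(data).max() + 1e-12
        return int(np.ceil(np.log2(max_val / (2 ** (bits - 1) - 1))))

    def quantize(data, shift, bits=8):
        q = np.round(data / (2.0 ** shift))
        return np.clip(q, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1).astype(np.int32)

    def quantize_network(input_data, layers, bits=8):
        """Per-layer quantization: each layer's shift constant is derived from
        the data produced by the previous layer, as in the abstract."""
        x = input_data
        shifts = []
        for layer in layers:                 # each layer: a callable on float data
            s = shift_constant(x, bits)
            shifts.append(s)
            _ = quantize(x, s, bits)         # fixed-point representation of x
            x = layer(x)                     # float output feeding the next layer
        return shifts

    layers = [lambda a, W=np.random.randn(4, 4): np.maximum(a @ W, 0) for _ in range(2)]
    print(quantize_network(np.random.randn(10, 4), layers))
    ```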
  • Patent number: 12093843
    Abstract: Embodiments relate to performing inference, such as object recognition, based on sensory inputs received from sensors and location information associated with the sensory inputs. The sensory inputs describe one or more features of the objects. The location information describes known or potential locations of the sensors generating the sensory inputs. An inference system learns representations of objects by characterizing a plurality of feature-location representations of the objects, and then performs inference by identifying or updating candidate objects consistent with feature-location representations observed from the sensory input data and location information. In one instance, the inference system learns representations of objects for each sensor. The set of candidate objects for each sensor is updated to those consistent with candidate objects for other sensors, as well as the observed feature-location representations for the sensor.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: September 17, 2024
    Assignee: Numenta, Inc.
    Inventors: Jeffrey C. Hawkins, Subutai Ahmad, Yuwei Cui, Marcus Anthony Lewis
  • Patent number: 12087028
    Abstract: A computer-implemented method for place recognition including: obtaining information identifying an image of a first scene; identifying a plurality of pixel clusters in the image; generating a set of feature vectors associated with the pixel clusters; generating a graph of the scene; adding a first edge between a first node and a second node in response to determining that a first property associated with a first pixel cluster is similar to a second property associated with a second pixel cluster; generating a vector representation of the graph; calculating a measure of similarity between the vector representation of the graph and a reference vector representation associated with a second scene; and determining that the first scene and the second scene are associated with a same place in response to determining that the measure of similarity is less than a threshold.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: September 10, 2024
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Chao Zhang, Ignas Budvytis, Stephan Liwicki
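    A compact sketch of the place-recognition pipeline in the abstract above (patent 12087028): build a graph over pixel clusters, add an edge when two clusters have similar properties, turn the graph into a vector, and compare that vector against a reference. The clustering itself, the feature dimensionality, and the thresholds are placeholders.
    ```python
    import numpy as np

    def scene_descriptor(cluster_feats, cluster_props, sim_thresh=0.5):
        """One node per pixel cluster; an edge joins clusters with similar
        properties; the graph is pooled into a single scene vector."""
        n = len(cluster_feats)
        adjacency = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                if np.linalg.norm(cluster_props[i] - cluster_props[j]) < sim_thresh:
                    adjacency[i, j] = adjacency[j, i] = 1.0
        degree = adjacency.sum(axis=1, keepdims=True) + 1.0
        smoothed = (cluster_feats + adjacency @ cluster_feats) / degree
        return smoothed.mean(axis=0)                  # vector representation of the graph

    def same_place(desc_a, desc_b, place_thresh=0.2):
        return np.linalg.norm(desc_a - desc_b) < place_thresh

    feats = np.random.rand(6, 8)      # hypothetical per-cluster feature vectors
    props = np.random.rand(6, 2)      # hypothetical cluster properties (e.g. centroids)
    print(same_place(scene_descriptor(feats, props), scene_descriptor(feats, props)))
    ```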
  • Patent number: 12086703
    Abstract: In some examples, a machine learning model may be trained to denoise an image. In some examples, the machine learning model may identify noise in an image of a sequence based at least in part, on at least one other image of the sequence. In some examples, the machine learning model may include a recurrent neural network. In some examples, the machine learning model may have a modular architecture including one or more building units. In some examples, the machine learning model may have a multi-branch architecture. In some examples, the noise may be identified and removed from the image by an iterative process.
    Type: Grant
    Filed: August 18, 2021
    Date of Patent: September 10, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Bambi L DeLaRosa, Katya Giannios, Abhishek Chaurasia
  • Patent number: 12080050
    Abstract: Methods and systems for determining information for a specimen are provided. One system includes a computer subsystem configured for determining a global texture characteristic of an image of a specimen and one or more local characteristics of a localized area in the image. The system also includes one or more components executed by the computer subsystem. The component(s) include a machine learning model configured for determining information for the specimen based on the global texture characteristic and the one or more local characteristics. The computer subsystem is also configured for generating results including the determined information. The methods and systems may be used for metrology (in which the determined information includes one or more characteristics of a structure formed on the specimen) or inspection (in which the determined information includes a classification of a defect detected on the specimen).
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: September 3, 2024
    Assignee: KLA Corp.
    Inventors: David Kucher, Sophie Salomon, Vijay Ramachandran
  • Patent number: 12073582
    Abstract: Apparatuses and methods train a model and then use the trained model to determine a global three dimensional (3D) position and orientation of a fiducial marker. In the context of an apparatus for training a model, a wider field-of-view sensor is configured to acquire a static image of a space in which the fiducial marker is disposed and a narrower field-of-view sensor is configured to acquire a plurality of images of at least a portion of the fiducial marker. The apparatus also includes a pan-tilt unit configured to controllably alter pan and tilt angles of the narrower field-of-view sensor during image acquisition. The apparatus further includes a control system configured to determine a transformation of position and orientation information determined from the images acquired by the narrower field-of-view sensor to a coordinate system for the space for which the static image is acquired by the wider field-of-view sensor.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: August 27, 2024
    Assignee: THE BOEING COMPANY
    Inventors: David James Huber, Deepak Khosla, Yang Chen, Brandon Courter, Luke Charles Ingram, Jacob Moorman, Scott Rad, Anthony Wayne Baker
  • Patent number: 12072442
    Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: August 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
  • Patent number: 12073304
    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
    Type: Grant
    Filed: June 16, 2023
    Date of Patent: August 27, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Charles Blundell, Oriol Vinyals
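    The attention step in the abstract above (patent 12073304) can be sketched in a few lines: compute an attention weight for the new example against each comparison example, then take the attention-weighted sum of the comparison label vectors as per-label scores. Cosine similarity followed by a softmax is an assumption here; the abstract only specifies "a neural network attention mechanism".
    ```python
    import numpy as np

    def classify_with_comparison_set(new_example, comparison_examples, label_vectors):
        """Attention weights over the comparison set, then a weighted sum of
        the comparison label vectors as label scores for the new example."""
        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        sims = np.array([cosine(new_example, c) for c in comparison_examples])
        attn = np.exp(sims) / np.exp(sims).sum()            # softmax attention weights
        return attn @ np.asarray(label_vectors)             # one score per label

    comparisons = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    labels = [[1, 0], [0, 1]]                                # label vector per comparison example
    print(classify_with_comparison_set(np.array([0.9, 0.1]), comparisons, labels))
    ```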
  • Patent number: 12056920
    Abstract: A method of determining a roadway map includes receiving an image from above a roadway. The method further includes generating a skeletonized map based on the received image, wherein the skeletonized map comprises a plurality of roads. The method includes identifying intersections based on joining of multiple roads of the plurality of roads in the skeletonized map. The method includes partitioning the skeletonized map based on the identified intersections, wherein partitioning the skeletonized map defines a roadway data set and an intersection data set. The method includes analyzing the roadway data set to determine a number of lanes in each roadway of the plurality of roads. The method further includes analyzing the intersection data set to determine lane connections in the identified intersections. The method further includes merging results of the analyzed road data set and the analyzed intersection data set to generate the roadway map.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: August 6, 2024
    Assignee: WOVEN BY TOYOTA, INC.
    Inventor: José Felix Rodrigues
  • Patent number: 12054152
    Abstract: A computer is programmed to determine a training dataset that includes a plurality of images each including a first object and an object label, train a first machine learning program to identify first object parameters of the first objects in the plurality of images based on the object labels and a confidence level based on a standard deviation of a distribution of a plurality of identifications of the first object parameters, receive, from a second machine learning program, a plurality of second images each including a second object identified with a low confidence level, process the plurality of second images with the first machine learning program to identify the second object parameters with a corresponding confidence level that is greater than the low confidence level, and retrain the first machine learning program based on the identified second object parameters.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: August 6, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Gurjeet Singh, Sowndarya Sundar
  • Patent number: 12050976
    Abstract: A method of performing, by an electronic device, a convolution operation at a certain layer in a neural network includes: obtaining N pieces of input channel data; performing a first convolution operation by applying a first input channel data group including K pieces of first input channel data from among the N pieces of input channel data to a first kernel filter group including K first kernel filters; performing a second convolution operation by applying a second input channel data group including K pieces of second input channel data from among the N pieces of input channel data to a second kernel filter group including K second kernel filters; and obtaining output channel data based on the first convolution operation and the second convolution operation, wherein K is a natural number that is less than N.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: July 30, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Tammy Lee
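    A sketch of the grouped convolution described above (patent 12050976) with N = 8 input channels and K = 4 channels per kernel filter group. Combining the two partial results by summation is an assumption; the abstract only says the output channel data is obtained based on the two convolution operations.
    ```python
    import torch
    import torch.nn as nn

    # N = 8 input channels split into two groups of K = 4; each group is
    # convolved with its own K-channel kernel filters.
    N, K = 8, 4
    x = torch.randn(1, N, 32, 32)
    conv_group1 = nn.Conv2d(K, 16, kernel_size=3, padding=1, bias=False)
    conv_group2 = nn.Conv2d(K, 16, kernel_size=3, padding=1, bias=False)

    out = conv_group1(x[:, :K]) + conv_group2(x[:, K:])   # assumed combination by summation
    print(out.shape)                                       # torch.Size([1, 16, 32, 32])
    ```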
  • Patent number: 12045963
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect, via a graphical user interface of a client device, a user selection of an object portrayed within a digital image. The disclosed systems determine, in response to detecting the user selection of the object, a relationship between the object and an additional object portrayed within the digital image. The disclosed systems receive one or more user interactions for modifying the object. The disclosed systems modify the digital image in response to the one or more user interactions by modifying the object and the additional object based on the relationship between the object and the additional object.
    Type: Grant
    Filed: November 23, 2022
    Date of Patent: July 23, 2024
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Zhe Lin, Zhihong Ding, Luis Figueroa, Kushal Kafle
  • Patent number: 12033375
    Abstract: An object identification unit contains an artificial neural network and is designed to identify human faces. For this purpose, a face is divided into a number of triangles. The relative component of the area of each triangle in the total of the areas of all triangles is ascertained in order to determine a rotational angle of the face. The relative component of the area of each triangle in the total of the areas of all triangles is then scaled to a rotation-invariant dimension of the face. The scaled area of the triangles is supplied to the artificial neural network in order to identify a person.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: July 9, 2024
    Assignee: Airbus Defence and Space GmbH
    Inventor: Manfred Hiebl
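    A toy version of the geometric feature extraction in the abstract above (patent 12033375): compute each triangle's share of the total triangulated area and scale it by a rotation-invariant face dimension before feeding it to the neural network. The landmark points, triangulation, and scale factor are hypothetical.
    ```python
    import numpy as np

    def triangle_area(p0, p1, p2):
        # Shoelace formula for the area of a 2-D triangle.
        return 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1]))

    def rotation_invariant_features(landmarks, triangles, face_scale):
        """Relative area of each triangle (its share of the summed area),
        scaled by a rotation-invariant face dimension, as network input."""
        areas = np.array([triangle_area(*landmarks[list(t)]) for t in triangles])
        relative = areas / areas.sum()
        return relative * face_scale

    landmarks = np.random.rand(5, 2) * 100          # hypothetical facial landmark points
    triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]   # hypothetical triangulation
    print(rotation_invariant_features(landmarks, triangles, face_scale=1.0))
    ```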
  • Patent number: 12026621
    Abstract: A computer-implemented method for training a machine-learning network, wherein the network includes receiving input data from a sensor, wherein the input data includes data indicative of an image, wherein the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor, generating an adversarial version of the input data utilizing an optimizer, wherein the adversarial version of the input data utilizes a subset of the input data, parameters associated with the optimizer, and one or more perturbation tiles, determining a loss function value in response to the adversarial version of the input data and a classification of the adversarial version of the input data, determining a perturbation tile in response to the loss function value associated with one or more subsets of the adversarial version of the input data, and outputting a perturbation that includes at least the perturbation tile.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: July 2, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Devin T. Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe J. Cabrita Condessa, Jeremy Kolter
  • Patent number: 12019707
    Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
    Type: Grant
    Filed: January 18, 2024
    Date of Patent: June 25, 2024
    Assignee: VIZIT LABS, INC.
    Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
  • Patent number: 12020414
    Abstract: The present disclosure relates to an object selection system that accurately detects and automatically selects target instances of user-requested objects (e.g., a query object instance) in a digital image. In one or more embodiments, the object selection system can analyze one or more user inputs to determine an optimal object attribute detection model from multiple specialized and generalized object attribute models. Additionally, the object selection system can utilize the selected object attribute model to detect and select one or more target instances of a query object in an image, where the image includes multiple instances of the query object.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: June 25, 2024
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Zhe Lin, Mingyang Ling
  • Patent number: 12019662
    Abstract: A computerized system and methods are provided for the automated extraction of contextually relevant information, and the automatic processing of actionable information from generic document sets. More specifically, automated systems and techniques for the extraction and processing of opportunity documents are provided, which avoid inaccuracies and inefficiencies resulting from conventional and/or human-based document processing techniques.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: June 25, 2024
    Assignee: RedShred LLC
    Inventors: James Michael Kukla, Jeehye Yun
  • Patent number: 12008821
    Abstract: Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first image depicting a first object and a second image depicting a second object, wherein the first object comprises a first feature set and the second object comprises a second feature set. The method can include processing the first image with a machine-learned image transformation model comprising a plurality of model channels to obtain a first channel mapping indicative of a mapping between the plurality of model channels and the first feature set. The method can include processing the second image with the model to obtain a second channel mapping indicative of a mapping between the plurality of model channels and the second feature set. The method can include generating an interpolation vector for a selected feature.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: June 11, 2024
    Assignee: GOOGLE LLC
    Inventors: Wen-Sheng Chu, Abhishek Kumar, Min Jin Chong
  • Patent number: 12002345
    Abstract: Embodiments of the present disclosure relate to a method and an apparatus for alerting users to threats. The apparatus may capture a plurality of signals including at least one of Electro-Magnetic (E-M) signals and sound signals. The E-M signals and sound signals are used to detect objects around the user. A threat to the user is predicted based on the objects around the user and one or more alerts are generated such that the user avoids the threat. The prediction of the threat enables the user to take an action even before the threat has occurred. Also, the alerts are generated based on the prediction such that the user can avoid the threat well in advance of the occurrence of the threat.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: June 4, 2024
    Assignee: Wipro Limited
    Inventors: Shashidhar Soppin, Chandrashekar Bangalore Nagaraj, Manjunath Ramachandra Iyer
  • Patent number: 12002185
    Abstract: A fluorescent single molecule emitter simultaneously transmits its identity, location, and cellular context through its emission patterns. A deep neural network (DNN) performs multiplexed single-molecule analysis to enable retrieving such information with high accuracy. The DNN can extract three-dimensional molecule location, orientation, and wavefront distortion with precision approaching the theoretical limit of the information content of the image, which allows multiplexed measurements through the emission patterns of a single molecule.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: June 4, 2024
    Assignee: Purdue Research Foundation
    Inventors: Peiyi Zhang, Fang Huang, Sheng Liu
  • Patent number: 12001607
    Abstract: An image classification neural network is trained based on images that are presented to an observer as a visual stimulus while collecting neurophysiological signals from a brain of the observer. The neurophysiological signals are processed to identify a neurophysiological event indicative of a detection of a target by the observer in one or more of the images, and the image classification neural network is trained to identify the target in the image based on the identification of the neurophysiological event.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: June 4, 2024
    Assignee: InnerEye Ltd.
    Inventors: Amir B. Geva, Eitan Netzer, Ran El Manor, Sergey Vaisman, Leon Y. Deouell, Uri Antman
  • Patent number: 11989931
    Abstract: An object classification method and apparatus are disclosed. The object classification method includes receiving an input image, storing first feature data extracted by a first feature extraction layer of a neural network configured to extract features of the input image, receiving second feature data from a second feature extraction layer which is an upper layer of the first feature extraction layer, generating merged feature data by merging the first feature data and the second feature data, and classifying an object in the input image based on the merged feature data.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: May 21, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangil Jung, Seungin Park, Byung In Yoo
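    A small PyTorch sketch of the feature-merging idea above (patent 11989931): keep the first extraction layer's output, merge it with the upper layer's output, and classify from the merged features. Spatially average-pooling each feature map before concatenation is a simplification added here; the patent does not prescribe it.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MergedFeatureClassifier(nn.Module):
        """Stores the first (lower) extraction layer's output, merges it with
        the second (upper) layer's output, and classifies from the merge."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.layer1 = nn.Conv2d(3, 16, 3, padding=1)     # first feature extraction layer
            self.layer2 = nn.Conv2d(16, 32, 3, padding=1)    # second (upper) layer
            self.head = nn.Linear(16 + 32, num_classes)

        def forward(self, x):
            f1 = F.relu(self.layer1(x))          # first feature data (stored)
            f2 = F.relu(self.layer2(f1))         # second feature data
            merged = torch.cat([f1.mean(dim=(2, 3)), f2.mean(dim=(2, 3))], dim=1)
            return self.head(merged)             # classify from merged feature data

    logits = MergedFeatureClassifier()(torch.rand(1, 3, 32, 32))   # shape (1, 10)
    ```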
  • Patent number: 11987264
    Abstract: A method and activity recognition system for recognising activities in a surrounding environment for controlling navigation of an autonomous vehicle is disclosed. The activity recognition system receives a first data feed from a neuromorphic event-based camera and a second data feed from a frame-based RGB video camera. The first data feed comprises high-speed temporal information encoding motion associated with change in the surrounding environment at each spatial location, and the second data feed comprises spatio-temporal data providing scene-level contextual information associated with the surrounding environment. An adaptive sampling of the second data feed is performed with respect to a foreground activity rate based on the amount of foreground motion encoded in the first data feed. Further, the activity recognition system recognizes activities associated with at least one object in the surrounding environment by identifying a correlation between the two data feeds using a two-stream neural network model.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: May 21, 2024
    Assignees: Wipro Limited, Indian Institute of Science
    Inventors: Chetan Singh Thakur, Anirban Chakraborty, Sathyaprakash Narayanan, Bibrat Ranjan Pradhan
  • Patent number: 11989916
    Abstract: Embodiments provide an automated approach for generating unbiased synthesized image-label pairs for colorization training of retro photographs. Modern grayscale images with corresponding color images are translated to images with the characteristics of retro photographs, thereby producing training data that pairs images with the characteristics of retro photographs with corresponding color images. This training data can then be employed to train a deep learning model to colorize retro photographs more effectively.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: May 21, 2024
    Assignee: KYOCERA Document Solutions Inc.
    Inventors: Kilho Shin, Dongpei Su
  • Patent number: 11991251
    Abstract: A method may include detecting, within a remote session, a gesture indicative of an intent of a participant in the remote session to share a resource included within content being shared by a first client device participating in the remote session. The resource may be available on a network. In response to detection of the gesture, information for accessing the resource may be extracted from an image of the content. At least a portion of the information may be provided to a second client device participating in the remote session to enable the second device to access the resource. Related systems and articles of manufacture are also provided.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: May 21, 2024
    Inventors: Xuan Liu, Wenshuang Zhang
  • Patent number: 11978239
    Abstract: The disclosure provides a target detection method and apparatus, a model training method and apparatus, a device, and a storage medium. The target detection method includes: obtaining a first image; obtaining a second image corresponding to the first image, the second image belonging to a second domain; and obtaining a detection result corresponding to the second image through a cross-domain image detection model, the detection result including target localization information and target class information of a target object, the cross-domain image detection model including a first network model configured to convert an image from a first domain into an image in the second domain, and a second network model configured to perform region localization on the image in the second domain.
    Type: Grant
    Filed: July 14, 2023
    Date of Patent: May 7, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Ze Qun Jie
  • Patent number: 11965728
    Abstract: An automated method of inspecting a pipe includes: positioning the pipe with respect to a laser scanner using a positioning apparatus; scanning a size of the positioned pipe by the laser scanner; identifying a specification and historical data of the pipe's type by inputting the scanned size to an artificially intelligent module trained through machine learning to match input size data to standardized pipe types and output corresponding specifications and historical data of the pipe types; scanning dimensions of the positioned pipe by the laser scanner using a dimension portion of the identified historical data; comparing the scanned dimensions with standard dimensions from the identified specification; detecting a dimension nonconformity when the scanned dimensions are not within acceptable tolerances of the standard dimensions; and in response to detecting the dimension nonconformity, generating an alert and updating the dimension portion of the identified historical data to reflect the detected dimension nonconformity.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: April 23, 2024
    Assignee: SAUDI ARABIAN OIL COMPANY
    Inventors: Mazin M. Fathi, Yousef Adnan Rayes
  • Patent number: 11967144
    Abstract: Methods, apparatuses and systems directed to pattern identification and pattern recognition. In some particular implementations, the invention provides a flexible pattern recognition platform including pattern recognition engines that can be dynamically adjusted to implement specific pattern recognition configurations for individual pattern recognition applications. In some implementations, the present invention also provides for a partition configuration where knowledge elements can be grouped and pattern recognition operations can be individually configured and arranged to allow for multi-level pattern recognition schemes.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: April 23, 2024
    Assignee: DataShapes, Inc.
    Inventor: Jeffrey Brian Adams
  • Patent number: 11954595
    Abstract: Provided is a method, performed by an electronic device, of recognizing an object included in an image, the method including: extracting first object information from a first object included in a first image, obtaining a learning model for generating an image including a second object from the first object information, generating a second image including the second object by inputting the first object information to the learning model, comparing the first image with the second image, and recognizing the first object as the second object in the first image, based on a result of the comparing.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: April 9, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yehoon Kim, Chanwon Seo
  • Patent number: 11955272
    Abstract: A method for generating an object detector based on deep learning capable of detecting an extended object class is provided. The method is related to generating the object detector based on the deep learning capable of detecting the extended object class, thereby allowing both an object class that has already been trained and an additional object class to be detected. According to the method, it is possible to generate, at a low cost and in a short time, both the training data set necessary for training an object detector capable of detecting the extended object class and the object detector itself.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: April 9, 2024
    Assignee: SUPERB AI CO., LTD.
    Inventor: Kye Hyeon Kim
  • Patent number: 11947668
    Abstract: In some embodiments, an apparatus includes a memory and a processor. The processor is configured to extract a set of features from a potentially malicious file and provide the set of features as an input to a normalization layer of a neural network. The processor is configured to implement the normalization layer by calculating a set of parameters associated with the set of features and normalizing the set of features based on the set of parameters to define a set of normalized features. The processor is further configured to provide the set of normalized features and the set of parameters as inputs to an activation layer of the neural network such that the activation layer produces an output based on the set of normalized features and the set of parameters. The output can be used to produce a maliciousness classification of the potentially malicious file.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: April 2, 2024
    Assignee: Sophos Limited
    Inventor: Richard Harang
  • Patent number: 11948088
    Abstract: Method and apparatus are disclosed for image recognition. The method may include performing a vision task on an image by using a multi-scales capsules network, wherein the multi-scales capsules network includes at least two branches and an aggregation block, each of the at least two branches includes a convolution block, a primary capsules block and a transformation block, and a dimension of capsules of the primary capsules block in each of the at least two branches is different.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: April 2, 2024
    Assignee: Nokia Technologies OY
    Inventor: Tiancai Wang
  • Patent number: 11941794
    Abstract: System and methods and computer program code are provided to perform a commissioning process comprising capturing, using an image capture device, an image of an area containing at least a first fixture, identifying location and positioning information associated with the image, performing image processing of the image to identify a location of the at least first fixture in the image, and converting the location of the at least first fixture in the image into physical coordinate information associated with the at least first fixture.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: March 26, 2024
    Assignee: CURRENT LIGHTING SOLUTIONS, LLC
    Inventors: Glenn Howard Kuenzler, Taylor Apolonius Barto
  • Patent number: 11922662
    Abstract: In one or more implementations, the apparatus, systems and methods disclosed herein are directed to configuring a color measurement device to output color measurements that match the expected output of a different color measurement device. In a particular implementation, a method is provided for matching the color measurements made by a color measurement device to the color measurements made by a target color measurement device by implementing a single step color calibration and conversion process using an Artificial Neural Network (ANN). By way of non-limiting example, the raw counts from the color measurement device are converted to a specific color space, such as L*a*b, directly through an ANN. The ANN is trained to ensure that the color measurement output of the color measurement device matches that of the target color measurement device.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: March 5, 2024
    Assignee: DATACOLOR INC.
    Inventor: Hong Wei
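    A minimal sketch of the single-step calibration-and-conversion described above (patent 11922662): a small network regresses raw device counts directly to the L*a*b values reported by the target instrument. The network width, training length, and synthetic data are placeholders.
    ```python
    import torch
    import torch.nn as nn

    # Single-step calibration + conversion: raw sensor counts -> target L*a*b.
    ann = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3))
    optimizer = torch.optim.Adam(ann.parameters(), lr=1e-3)

    raw_counts = torch.rand(256, 3)              # hypothetical device readings
    target_lab = torch.rand(256, 3) * 100        # hypothetical target-device L*a*b values

    for _ in range(500):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(ann(raw_counts), target_lab)
        loss.backward()
        optimizer.step()
    ```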
  • Patent number: 11908185
    Abstract: Methods, non-transitory computer-readable storage media, and computer or computer systems directed to detecting, analyzing, and tracking roads and grading activity using satellite or aerial imagery in combination with a machine learned model are described.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: February 20, 2024
    Assignee: Metrostudy, Inc.
    Inventors: Corentin Guillo, Sivakumaran Somasundaram
  • Patent number: 11896360
    Abstract: Systems and methods for generating thin slice images from thick slice images are disclosed herein. In some examples, a deep learning system may calculate a residual from a thick slice image and add the residual to the thick slice image to generate a thin slice image. In some examples, the deep learning system includes a neural network. In some examples, the neural network may include one or more levels, where one or more of the levels include one or more blocks. In some examples, each level includes a convolution block and a non-linear activation function block. The levels of the neural network may be in a cascaded arrangement in some examples.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: February 13, 2024
    Assignee: LVIS Corporation
    Inventors: Zhongnan Fang, Akshay S. Chaudhari, Jin Hyung Lee, Brian A. Hargreaves
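    A sketch of the residual formulation in the abstract above (patent 11896360), assuming PyTorch: a cascade of convolution blocks predicts a residual that is added back to the thick-slice input to produce the thin-slice estimate. The channel counts and number of levels are assumptions.
    ```python
    import torch
    import torch.nn as nn

    class ThinSliceNet(nn.Module):
        """Cascaded conv + ReLU levels predict a residual that is added back
        to the thick-slice input to form the thin-slice estimate."""
        def __init__(self, levels=3):
            super().__init__()
            blocks, ch = [], 1
            for _ in range(levels):
                blocks += [nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU()]
                ch = 32
            blocks.append(nn.Conv2d(32, 1, 3, padding=1))
            self.residual = nn.Sequential(*blocks)

        def forward(self, thick_slice):
            return thick_slice + self.residual(thick_slice)   # thin-slice estimate

    thin = ThinSliceNet()(torch.rand(1, 1, 64, 64))            # same spatial size as input
    ```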
  • Patent number: 11893792
    Abstract: Techniques are disclosed for identifying and presenting video content that demonstrates features of a target product. The video content can be accessed, for example, from a media database of user-generated videos that demonstrate one or more features of the target product so that a user can see and hear the product in operation via a product webpage before making a purchasing decision. The product functioning videos supplement any static images of the target product and the textual product description to provide the user with additional context for each of the product's features, depending on the textual product description. The user can quickly and easily interact with the product webpage to access and playback the product functioning video to see and/or hear the product in operation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Gourav Singhal, Sourabh Gupta, Mrinal Kumar Sharma