Neural Networks Patents (Class 382/156)
-
Patent number: 12169955
Abstract: An image acquisition unit 110 acquires a plurality of images. The plurality of images include an object to be inferred. An image cut-out unit 120 cuts out an object region including the object from each of the plurality of images acquired by the image acquisition unit 110. An importance generation unit 130 generates importance information by processing the object region cut out by the image cut-out unit 120. The importance information indicates the importance of the object region when an object inference model is generated, and is generated for each object region, that is, for each image acquired by the image acquisition unit 110. A learning data generation unit 140 stores a plurality of object regions cut out by the image cut-out unit 120 and a plurality of pieces of importance information generated by the importance generation unit 130 in a learning data storage unit 150 as at least a part of the learning data.
Type: Grant
Filed: January 28, 2022
Date of Patent: December 17, 2024
Assignee: NEC CORPORATION
Inventors: Tomokazu Kaneko, Katsuhiko Takahashi, Makoto Terao, Soma Shiraishi, Takami Sato, Yu Nabeto, Ryosuke Sakai
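The workflow in this abstract (crop object regions, score each region's importance, store region plus importance as learning data) can be illustrated with a minimal sketch. This is not NEC's implementation; the `score_importance` callable and the simple dataclass layout are hypothetical stand-ins for the importance generation unit and the learning data storage unit.

```python
# Hypothetical sketch of the crop -> importance -> learning-data pipeline
# described in the abstract (not the patented implementation).
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

@dataclass
class LearningSample:
    region: np.ndarray      # cropped object region (H x W x C)
    importance: float       # importance of this region for model training

def build_learning_data(
    images: List[np.ndarray],
    boxes: List[Tuple[int, int, int, int]],          # (x0, y0, x1, y1) per image
    score_importance: Callable[[np.ndarray], float], # assumed importance model
) -> List[LearningSample]:
    storage: List[LearningSample] = []
    for image, (x0, y0, x1, y1) in zip(images, boxes):
        region = image[y0:y1, x0:x1]                 # "image cut-out unit"
        weight = score_importance(region)            # "importance generation unit"
        storage.append(LearningSample(region, weight))
    return storage

# Example: score each region by a crude contrast proxy (its pixel standard deviation).
samples = build_learning_data(
    images=[np.random.rand(64, 64, 3)],
    boxes=[(8, 8, 40, 40)],
    score_importance=lambda r: float(r.std()),
)
```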
-
Patent number: 12169784
Abstract: Today, artificial neural networks are trained on large sets of manually tagged images. Generally, for better training, the training data should be as large as possible. Unfortunately, manually tagging images is time consuming and susceptible to error, making it difficult to produce the large sets of tagged data used to train artificial neural networks. To address this problem, the inventors have developed a smart tagging utility that uses a feature extraction unit and a fast-learning classifier to learn tags and tag images automatically, reducing the time to tag large sets of data. The feature extraction unit and fast-learning classifiers can be implemented as artificial neural networks that associate a label with features extracted from an image and tag similar features from the image or other images with the same label. Moreover, the smart tagging system can learn from user adjustment to its proposed tagging. This reduces tagging time and errors.
Type: Grant
Filed: August 8, 2022
Date of Patent: December 17, 2024
Assignee: Neurala, Inc.
Inventors: Lucas Neves, Liam Debeasi, Heather Ames Versace, Jeremy Wurbs, Massimiliano Versace, Warren Katz, Anatoli Gorchet
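A minimal sketch of the "fast-learning classifier" idea, assuming a nearest-centroid classifier over feature vectors from some frozen feature extractor; the extractor itself is not shown, and random vectors stand in for CNN features. The `FastTagger` class and its update rule are illustrative assumptions, not Neurala's design.

```python
# Sketch of a fast-learning tagger: nearest-centroid classification over
# extracted feature vectors, with incremental updates from user corrections.
import numpy as np

class FastTagger:
    def __init__(self):
        self.centroids = {}   # tag -> running mean feature vector
        self.counts = {}      # tag -> number of examples seen

    def learn(self, feature: np.ndarray, tag: str) -> None:
        """Incrementally update the centroid for `tag` (also used when a user corrects a tag)."""
        n = self.counts.get(tag, 0)
        old = self.centroids.get(tag, np.zeros_like(feature))
        self.centroids[tag] = (old * n + feature) / (n + 1)
        self.counts[tag] = n + 1

    def tag(self, feature: np.ndarray) -> str:
        """Propose the tag whose centroid is closest to the feature vector."""
        return min(self.centroids, key=lambda t: np.linalg.norm(self.centroids[t] - feature))

tagger = FastTagger()
tagger.learn(np.array([1.0, 0.0]), "cat")
tagger.learn(np.array([0.0, 1.0]), "dog")
print(tagger.tag(np.array([0.9, 0.1])))   # -> "cat"
```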
-
Method and system for a high-frequency attention network for efficient single image super-resolution
Patent number: 12169913
Abstract: Example aspects include techniques for implementing a high-frequency attention network for single image super-resolution. These techniques may include extracting a plurality of features from an original image input into a CNN to generate a feature map, and restoring one or more high-frequency details of the original image via an efficient residual block (ERB) and a high-frequency attention block (HFAB) configured to assign a scaling factor to one or more high-frequency areas. In addition, the techniques may include generating reconstruction input information by performing an element-wise operation on the one or more high-frequency details and cross-connection information from the feature map and performing, by the CNN, an enhancement operation on the reconstruction input information to generate an enhanced image.
Type: Grant
Filed: February 10, 2022
Date of Patent: December 17, 2024
Assignee: LEMON INC.
Inventor: Ding Liu
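A rough PyTorch sketch of the two blocks named in the abstract: an efficient residual block (ERB) and a high-frequency attention block (HFAB) that rescales feature maps with a learned sigmoid gate. The layer shapes, gate design, and the final element-wise combination with the cross-connection are assumptions for illustration, not the patented architecture.

```python
# Illustrative ERB + HFAB modules; sizes and gate design are assumed.
import torch
import torch.nn as nn

class ERB(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)           # residual restoration of detail

class HFAB(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),                 # per-pixel scaling factors in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)           # emphasize high-frequency areas

features = torch.randn(1, 32, 48, 48)     # feature map from the CNN backbone
restored = HFAB(32)(ERB(32)(features))
recon_input = restored + features         # element-wise op with the cross-connection
```
-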
Patent number: 12148146
Abstract: A computer system for mapping coatings to a spatial appearance space may receive coating spatial appearance variables of a target coating from a coating-measurement instrument. The computer system may generate spatial appearance space coordinates for the target coating by mapping each of the coating spatial appearance variables to an individual axis of a multidimensional coordinate system. The computer system may identify particular spatial appearance space coordinates from the identified spatial appearance space coordinates associated with the potentially matching reference coatings that are associated with a smallest spatial-appearance-space distance from the spatial appearance space coordinates of the target coating. Further, the computer system may display a visual interface element indicating a particular reference coating that is associated with the particular spatial appearance space coordinates as a proposed spatial appearance match to the target coating.
Type: Grant
Filed: September 18, 2020
Date of Patent: November 19, 2024
Assignee: PPG Industries Ohio, Inc.
Inventors: Anthony J. Foderaro, Alison M. Norris
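A small sketch of the matching step: each coating's appearance variables become coordinates (one per axis), and the proposed match is the reference coating at the smallest distance from the target. The variable values, reference names, and the Euclidean metric are assumptions for illustration, not PPG's method.

```python
# Nearest reference coating in an assumed 3-axis appearance space.
import numpy as np

target = np.array([0.42, 1.7, 3.1])                 # target coating coordinates
references = {
    "REF-001": np.array([0.40, 1.9, 3.0]),
    "REF-002": np.array([0.90, 1.1, 2.2]),
}

def closest_reference(target, references):
    # smallest spatial-appearance-space distance from the target coordinates
    return min(references, key=lambda k: np.linalg.norm(references[k] - target))

print(closest_reference(target, references))        # -> "REF-001"
```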
-
Patent number: 12141421
Abstract: An electronic device and a controlling method thereof are provided. An electronic device includes a memory configured to store at least one instruction and a processor configured to execute the at least one instruction and operate as instructed by the at least one instruction. The processor is configured to: obtain a first image; based on receiving a first user command to correct the first image, obtain a second image by correcting the first image; based on the first image and the second image, train a neural network model; and based on receiving a second user command to correct a third image, obtain a fourth image by correcting the third image using the trained neural network model.
Type: Grant
Filed: January 22, 2021
Date of Patent: November 12, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Chanwon Seo, Youngeun Lee, Eunseo Kim, Myungjin Eom
-
Patent number: 12141952
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for detecting and classifying an exposure defect in an image using neural networks trained via a limited amount of labeled training images. An image may be applied to a first neural network to determine whether the image includes an exposure defect. A detected defective image may be applied to a second neural network to determine an exposure defect classification for the image. The exposure defect classification can include severe underexposure, medium underexposure, mild underexposure, mild overexposure, medium overexposure, severe overexposure, and/or the like. The image may be presented to a user along with the exposure defect classification.
Type: Grant
Filed: September 30, 2022
Date of Patent: November 12, 2024
Assignee: Adobe Inc.
Inventors: Akhilesh Kumar, Zhe Lin, William Lawrence Marino
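A minimal sketch of the two-stage inference the abstract describes: a binary detector decides whether an image has an exposure defect at all, and a second classifier assigns one of the severity classes. Both models below are untrained placeholders with assumed input sizes, not Adobe's networks.

```python
# Two-stage exposure-defect classification: detect, then classify severity.
import torch
import torch.nn as nn

CLASSES = ["severe underexposure", "medium underexposure", "mild underexposure",
           "mild overexposure", "medium overexposure", "severe overexposure"]

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))          # defect / no defect
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, len(CLASSES)))

def classify_exposure(image: torch.Tensor) -> str:
    with torch.no_grad():
        has_defect = detector(image).argmax(dim=1).item() == 1
        if not has_defect:
            return "well exposed"
        return CLASSES[classifier(image).argmax(dim=1).item()]

print(classify_exposure(torch.randn(1, 3, 32, 32)))
```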
-
Patent number: 12136251
Abstract: In accordance with one embodiment of the present disclosure, a method includes receiving an input image having an object and a background, intrinsically decomposing the object and the background into an input image data having a set of features, augmenting the input image data with a 2.5D differentiable renderer for each feature of the set of features to create a set of augmented images, and compiling the input image and the set of augmented images into a training data set for training a downstream task network.
Type: Grant
Filed: January 19, 2022
Date of Patent: November 5, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Sergey Zakharov, Rares Ambrus, Vitor Guizilini, Adrien Gaidon
-
Patent number: 12136148
Abstract: A display method, system, device, and related computer programs can present the classification performance of an artificial neural network in a form interpretable by humans. In these methods, systems, devices, and programs, a probability calculator (i.e., a classifier) based on an artificial neural network calculates a classification result for an input image in the form of a probability. The distribution of classification-result probabilities is displayed using at least one display axis of a graph as a probability axis.
Type: Grant
Filed: June 17, 2019
Date of Patent: November 5, 2024
Assignee: K.K. CYBO
Inventors: Keisuke Goda, Nao Nitta, Takeaki Sugimura
-
Patent number: 12131548
Abstract: Disclosed is a method for training shallow convolutional neural networks for infrared target detection using a two-phase learning strategy that can converge to satisfactory detection performance, even with scale-invariance capability. In the first step, the aim is to ensure that only filters in the convolutional layer produce semantic features that serve the problem of target detection. The L2-norm (Euclidean norm) is used as the loss function for the stable training of semantic filters obtained from the convolutional layers. In the next step, only the decision layers are trained by transferring the weight values in the convolutional layers completely and freezing the learning rate. In this step, unlike the first, the L1-norm (mean-absolute-deviation) loss function is used.
Type: Grant
Filed: April 15, 2020
Date of Patent: October 29, 2024
Assignee: ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI
Inventors: Engin Uzun, Tolga Aksoy, Erdem Akagunduz
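A sketch of the two-phase idea: phase 1 trains the convolutional (semantic) layers with an L2 loss; phase 2 transfers those weights, holds them fixed, and trains only the decision layers with an L1 loss. The tiny model, the throwaway training head, and the random targets are illustrative assumptions, not the patented detector; freezing is done here by disabling gradients on the convolutional parameters.

```python
# Two-phase training sketch: L2 loss for conv layers, then L1 loss for decision layers only.
import torch
import torch.nn as nn

conv = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 16 * 16, 1))
x, y = torch.randn(4, 1, 16, 16), torch.randn(4, 1)

# Phase 1: train conv layers (here through a throwaway head) with the L2 loss.
opt1 = torch.optim.SGD(list(conv.parameters()) + list(head.parameters()), lr=0.01)
loss1 = nn.MSELoss()(head(conv(x)), y)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Phase 2: transfer and freeze the conv weights, train only the decision layers with the L1 loss.
for p in conv.parameters():
    p.requires_grad = False
opt2 = torch.optim.SGD(head.parameters(), lr=0.01)
loss2 = nn.L1Loss()(head(conv(x)), y)
opt2.zero_grad(); loss2.backward(); opt2.step()
```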
-
Patent number: 12131463
Abstract: An information processing apparatus calculates an index for a plant growth state with a neural network using fewer images to achieve training of the neural network. The apparatus includes first to N-th image analyzers (N ≥ 2) each analyzing a cultivation area image of a plant cultivation area with a neural network to calculate a state index indicating a growth state of the plant in the cultivation area and including the neural network trained using cultivation area images each having a predetermined growth index classified into a corresponding class of first to N-th growth index classes, and a selector receiving an input of a cultivation area image for which the state index is calculated and causing one of the first to N-th image analyzers trained using cultivation area images classified into the same growth index class as the input cultivation area image to analyze the input cultivation area image.
Type: Grant
Filed: June 23, 2020
Date of Patent: October 29, 2024
Assignee: OMRON Corporation
Inventors: Xiangyu Zeng, Atsushi Hashimoto
-
Patent number: 12133030
Abstract: A method for converting a source video content constrained to a first color space to a video content constrained to a second color space using an artificial intelligence machine-learning algorithm based on a creative profile.
Type: Grant
Filed: December 19, 2019
Date of Patent: October 29, 2024
Assignee: Warner Bros. Entertainment Inc.
Inventors: Michael Zink, Ha Nguyen
-
Patent number: 12131550
Abstract: In one example, a method is provided that includes receiving lidar data obtained by a lidar device. The lidar data includes a plurality of data points indicative of locations of reflections from an environment of the vehicle. The method includes receiving images of portions of the environment captured by a camera at different times. The method also includes determining locations in the images that correspond to a data point of the plurality of data points. Additionally, the method includes determining feature descriptors for the locations of the images and comparing the feature descriptors to determine that sensor data associated with at least one of the lidar device, the camera, or a pose sensor is accurate or inaccurate.
Type: Grant
Filed: December 30, 2020
Date of Patent: October 29, 2024
Assignee: Waymo LLC
Inventors: Colin Braley, Volodymyr Ivanchenko
-
Patent number: 12124533
Abstract: Embodiments are generally directed to methods and apparatuses of a spatially sparse convolution module for visual rendering and synthesis. An embodiment of a method for image processing comprises: receiving an input image by a convolution layer of a neural network to generate a plurality of feature maps; performing spatially sparse convolution on the plurality of feature maps to generate spatially sparse feature maps; and upsampling the spatially sparse feature maps to generate an output image.
Type: Grant
Filed: September 23, 2021
Date of Patent: October 22, 2024
Assignee: INTEL CORPORATION
Inventors: Anbang Yao, Ming Lu, Yikai Wang, Scott Janus, Sungye Kim
-
Patent number: 12124879
Abstract: Provided is a control method of a deep neural network (DNN) accelerator for optimized data processing. The control method includes, based on a dataflow and a hardware mapping value of neural network data allocated to a first-level memory, calculating a plurality of offsets representing start components of a plurality of data tiles of the neural network data, based on receiving an update request for the neural network data from a second-level memory, identifying a data type of an update data tile corresponding to the received update request among the plurality of data tiles, identifying one or more components of the update data tile, based on the data type of the update data tile and an offset of the update data tile among the calculated plurality of offsets, and updating neural network data of the identified one or more components between the first-level memory and the second-level memory.
Type: Grant
Filed: March 29, 2023
Date of Patent: October 22, 2024
Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
Inventors: William Jinho Song, Bogil Kim, Chanho Park, Semin Koong, Taesoo Lim
-
Patent number: 12124535
Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
Type: Grant
Filed: June 7, 2024
Date of Patent: October 22, 2024
Assignee: VIZIT LABS, INC.
Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
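A loose sketch of the scoring idea: a candidate image's performance score is a weighted combination of its extracted features, where each weight reflects how strongly varying that feature moved the performance scores of the training images. The weight-estimation rule below (correlation between feature variation and score) and all the array sizes are assumptions for illustration only.

```python
# Weighted feature scoring with weights estimated from feature/score co-variation.
import numpy as np

train_features = np.random.rand(100, 16)          # 100 training images, 16 extracted features
train_scores = np.random.rand(100)                # first image performance scores

# Impact of varying each feature, estimated as its correlation with the scores.
centered_f = train_features - train_features.mean(axis=0)
centered_s = train_scores - train_scores.mean()
weights = centered_f.T @ centered_s / (centered_f.std(axis=0) * centered_s.std() * len(train_scores))

candidate = np.random.rand(16)                    # features extracted from a candidate image
candidate_score = float(candidate @ weights)      # candidate image performance score
record = {"candidate_score": candidate_score}     # record identifying the score
```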
-
Patent number: 12112023
Abstract: An electronic device and a controlling method thereof are provided. An electronic device includes a memory configured to store at least one instruction and a processor configured to execute the at least one instruction and operate as instructed by the at least one instruction. The processor is configured to: obtain a first image; based on receiving a first user command to correct the first image, obtain a second image by correcting the first image; based on the first image and the second image, train a neural network model; and based on receiving a second user command to correct a third image, obtain a fourth image by correcting the third image using the trained neural network model.
Type: Grant
Filed: January 22, 2021
Date of Patent: October 8, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Chanwon Seo, Youngeun Lee, Eunseo Kim, Myungjin Eom
-
Patent number: 12106829
Abstract: The technology disclosed relates to artificial intelligence-based base calling. The technology disclosed relates to accessing a progression of per-cycle analyte channel sets generated for sequencing cycles of a sequencing run, processing, through a neural network-based base caller (NNBC), windows of per-cycle analyte channel sets in the progression for the windows of sequencing cycles of the sequencing run such that the NNBC processes a subject window of per-cycle analyte channel sets in the progression for the subject window of sequencing cycles of the sequencing run and generates provisional base call predictions for three or more sequencing cycles in the subject window of sequencing cycles, from multiple windows in which a particular sequencing cycle appeared at different positions, using the NNBC to generate provisional base call predictions for the particular sequencing cycle, and determining a base call for the particular sequencing cycle based on the plurality of base call predictions.
Type: Grant
Filed: July 13, 2023
Date of Patent: October 1, 2024
Assignee: Illumina, Inc.
Inventors: Anindita Dutta, Gery Vessere, Dorna KashefHaghighi, Kishore Jaganathan, Amirali Kia
-
Patent number: 12100196
Abstract: The present disclosure relates to a system and method of performing quantization of a neural network having multiple layers. The method comprises receiving a floating-point dataset as the input dataset and determining a first shift constant for the first layer of the neural network based on the input dataset. The method also comprises performing quantization for the first layer using the determined shift constant of the first layer. The method further comprises determining a next shift constant for the next layer of the neural network based on the output of a layer previous to the next layer, and performing quantization for the next layer using the determined next shift constant. The method further comprises iterating the steps of determining the shift constant and performing quantization for all layers of the neural network to generate a fixed-point dataset as output.
Type: Grant
Filed: March 21, 2022
Date of Patent: September 24, 2024
Assignee: Blaize, Inc.
Inventors: Deepak Chandra Bijalwan, Pratyusha Musunuru
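A sketch of shift-based fixed-point quantization under stated assumptions: each layer gets a power-of-two scale (a "shift constant") derived from the data feeding it, values are rounded to integers at that scale, and the next layer's shift comes from the previous layer's output. The 8-bit width, the particular shift rule, and the float weights are assumptions, not Blaize's method.

```python
# Per-layer shift constants for fixed-point quantization (assumed 8-bit signed range).
import numpy as np

BITS = 8  # assumed fixed-point width

def shift_constant(data: np.ndarray) -> int:
    max_val = float(np.abs(data).max())
    if max_val == 0.0:
        return 0
    # smallest shift s such that max_val / 2**s fits the signed 8-bit range
    return max(0, int(np.ceil(np.log2(max_val / (2 ** (BITS - 1) - 1)))))

def quantize(data: np.ndarray, shift: int) -> np.ndarray:
    return np.clip(np.round(data / 2 ** shift), -(2 ** (BITS - 1)), 2 ** (BITS - 1) - 1)

x = np.random.randn(4, 64) * 300.0                     # floating-point input dataset
shift0 = shift_constant(x)                             # first shift constant from the input
q0 = quantize(x, shift0)                               # quantized first-layer input
layer1_out = q0 @ np.random.randn(64, 32)              # layer 1 (weights left in float here)
shift1 = shift_constant(layer1_out)                    # next shift from the previous layer's output
q1 = quantize(layer1_out, shift1)                      # iterate per layer -> fixed-point outputs
```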
-
Patent number: 12093843
Abstract: Embodiments relate to performing inference, such as object recognition, based on sensory inputs received from sensors and location information associated with the sensory inputs. The sensory inputs describe one or more features of the objects. The location information describes known or potential locations of the sensors generating the sensory inputs. An inference system learns representations of objects by characterizing a plurality of feature-location representations of the objects, and then performs inference by identifying or updating candidate objects consistent with feature-location representations observed from the sensory input data and location information. In one instance, the inference system learns representations of objects for each sensor. The set of candidate objects for each sensor is updated to those consistent with candidate objects for other sensors, as well as the observed feature-location representations for the sensor.
Type: Grant
Filed: March 11, 2021
Date of Patent: September 17, 2024
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Subutai Ahmad, Yuwei Cui, Marcus Anthony Lewis
-
Patent number: 12087028
Abstract: A computer-implemented method for place recognition including: obtaining information identifying an image of a first scene; identifying a plurality of pixel clusters in the image; generating a set of feature vectors associated with the pixel clusters; generating a graph of the scene; adding a first edge between a first node and a second node in response to determining that a first property associated with a first pixel cluster is similar to a second property associated with a second pixel cluster; generating a vector representation of the graph; calculating a measure of similarity between the vector representation of the graph and a reference vector representation associated with a second scene; and determining that the first scene and the second scene are associated with a same place in response to determining that the measure of similarity is less than a threshold.
Type: Grant
Filed: March 1, 2022
Date of Patent: September 10, 2024
Assignee: Kabushiki Kaisha Toshiba
Inventors: Chao Zhang, Ignas Budvytis, Stephan Liwicki
-
Patent number: 12086703
Abstract: In some examples, a machine learning model may be trained to denoise an image. In some examples, the machine learning model may identify noise in an image of a sequence based, at least in part, on at least one other image of the sequence. In some examples, the machine learning model may include a recurrent neural network. In some examples, the machine learning model may have a modular architecture including one or more building units. In some examples, the machine learning model may have a multi-branch architecture. In some examples, the noise may be identified and removed from the image by an iterative process.
Type: Grant
Filed: August 18, 2021
Date of Patent: September 10, 2024
Assignee: Micron Technology, Inc.
Inventors: Bambi L DeLaRosa, Katya Giannios, Abhishek Chaurasia
-
Patent number: 12080050
Abstract: Methods and systems for determining information for a specimen are provided. One system includes a computer subsystem configured for determining a global texture characteristic of an image of a specimen and one or more local characteristics of a localized area in the image. The system also includes one or more components executed by the computer subsystem. The component(s) include a machine learning model configured for determining information for the specimen based on the global texture characteristic and the one or more local characteristics. The computer subsystem is also configured for generating results including the determined information. The methods and systems may be used for metrology (in which the determined information includes one or more characteristics of a structure formed on the specimen) or inspection (in which the determined information includes a classification of a defect detected on the specimen).
Type: Grant
Filed: December 20, 2021
Date of Patent: September 3, 2024
Assignee: KLA Corp.
Inventors: David Kucher, Sophie Salomon, Vijay Ramachandran
-
Patent number: 12073582
Abstract: Apparatuses and methods train a model and then use the trained model to determine a global three-dimensional (3D) position and orientation of a fiducial marker. In the context of an apparatus for training a model, a wider field-of-view sensor is configured to acquire a static image of a space in which the fiducial marker is disposed and a narrower field-of-view sensor is configured to acquire a plurality of images of at least a portion of the fiducial marker. The apparatus also includes a pan-tilt unit configured to controllably alter pan and tilt angles of the narrower field-of-view sensor during image acquisition. The apparatus further includes a control system configured to determine a transformation of position and orientation information determined from the images acquired by the narrower field-of-view sensor to a coordinate system for the space for which the static image is acquired by the wider field-of-view sensor.
Type: Grant
Filed: October 1, 2021
Date of Patent: August 27, 2024
Assignee: THE BOEING COMPANY
Inventors: David James Huber, Deepak Khosla, Yang Chen, Brandon Courter, Luke Charles Ingram, Jacob Moorman, Scott Rad, Anthony Wayne Baker
-
Patent number: 12072442
Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations, and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
Type: Grant
Filed: November 22, 2021
Date of Patent: August 27, 2024
Assignee: NVIDIA Corporation
Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
-
Patent number: 12073304
Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
Type: Grant
Filed: June 16, 2023
Date of Patent: August 27, 2024
Assignee: DeepMind Technologies Limited
Inventors: Charles Blundell, Oriol Vinyals
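A small sketch of attention-based classification over a comparison set: attention weights come from a softmax over similarities between the new example and each comparison example, and label scores are the attention-weighted sum of the comparison label vectors. Cosine similarity stands in here for the learned neural attention mechanism, and the embeddings are random placeholders.

```python
# Attention-weighted label scores over a comparison set.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

comparison_examples = np.random.rand(5, 8)      # 5 embedded comparison examples
label_vectors = np.eye(3)[[0, 1, 2, 0, 1]]      # one label vector (one-hot) per comparison example
new_example = np.random.rand(8)

sims = np.array([cosine(new_example, c) for c in comparison_examples])
attention = np.exp(sims) / np.exp(sims).sum()   # softmax attention weights
label_scores = attention @ label_vectors        # score per label in the predetermined label set
predicted_label = int(label_scores.argmax())
```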
-
Patent number: 12054152
Abstract: A computer is programmed to determine a training dataset that includes a plurality of images each including a first object and an object label, train a first machine learning program to identify first object parameters of the first objects in the plurality of images based on the object labels and a confidence level based on a standard deviation of a distribution of a plurality of identifications of the first object parameters, receive, from a second machine learning program, a plurality of second images each including a second object identified with a low confidence level, process the plurality of second images with the first machine learning program to identify the second object parameters with a corresponding second confidence level that is greater than a second confidence level, and retrain the first machine learning program based on the identified second object parameters.
Type: Grant
Filed: January 12, 2021
Date of Patent: August 6, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Gurjeet Singh, Sowndarya Sundar
-
Patent number: 12056920
Abstract: A method of determining a roadway map includes receiving an image from above a roadway. The method further includes generating a skeletonized map based on the received image, wherein the skeletonized map comprises a plurality of roads. The method includes identifying intersections based on the joining of multiple roads of the plurality of roads in the skeletonized map. The method includes partitioning the skeletonized map based on the identified intersections, wherein partitioning the skeletonized map defines a roadway data set and an intersection data set. The method includes analyzing the roadway data set to determine a number of lanes in each roadway of the plurality of roads. The method further includes analyzing the intersection data set to determine lane connections in the identified intersections. The method further includes merging results of the analyzed roadway data set and the analyzed intersection data set to generate the roadway map.
Type: Grant
Filed: January 12, 2022
Date of Patent: August 6, 2024
Assignee: WOVEN BY TOYOTA, INC.
Inventor: José Felix Rodrigues
-
Patent number: 12050976
Abstract: A method of performing, by an electronic device, a convolution operation at a certain layer in a neural network includes: obtaining N pieces of input channel data; performing a first convolution operation by applying a first input channel data group including K pieces of first input channel data from among the N pieces of input channel data to a first kernel filter group including K first kernel filters; performing a second convolution operation by applying a second input channel data group including K pieces of second input channel data from among the N pieces of input channel data to a second kernel filter group including K second kernel filters; and obtaining output channel data based on the first convolution operation and the second convolution operation, wherein K is a natural number that is less than N.
Type: Grant
Filed: May 15, 2020
Date of Patent: July 30, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Tammy Lee
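A PyTorch sketch of the channel-group split in this abstract for N = 4 input channels and K = 2: the first K channels go through one group of K filters, the next K through a second group, and the output combines the two partial results. The summation at the end is an assumption; the abstract only says the output is based on both convolution operations.

```python
# Grouped convolution split into two explicit K-channel filter groups (K = 2, N = 4).
import torch
import torch.nn as nn

N, K = 4, 2
x = torch.randn(1, N, 16, 16)                      # N pieces of input channel data

group1 = nn.Conv2d(K, K, kernel_size=3, padding=1) # K first kernel filters
group2 = nn.Conv2d(K, K, kernel_size=3, padding=1) # K second kernel filters

out1 = group1(x[:, :K])                            # first convolution operation
out2 = group2(x[:, K:])                            # second convolution operation
output = out1 + out2                               # output channel data (combination rule assumed)
```

The same channel split is what `nn.Conv2d(N, N, 3, padding=1, groups=N // K)` performs in a single call, except that grouped convolution concatenates the partial outputs along the channel axis instead of summing them.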
-
Patent number: 12045963
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect, via a graphical user interface of a client device, a user selection of an object portrayed within a digital image. The disclosed systems determine, in response to detecting the user selection of the object, a relationship between the object and an additional object portrayed within the digital image. The disclosed systems receive one or more user interactions for modifying the object. The disclosed systems modify the digital image in response to the one or more user interactions by modifying the object and the additional object based on the relationship between the object and the additional object.
Type: Grant
Filed: November 23, 2022
Date of Patent: July 23, 2024
Assignee: Adobe Inc.
Inventors: Scott Cohen, Zhe Lin, Zhihong Ding, Luis Figueroa, Kushal Kafle
-
Patent number: 12033375
Abstract: An object identification unit contains an artificial neural network and is designed to identify human faces. For this purpose, a face is divided into a number of triangles. The relative share of each triangle's area in the total area of all triangles is ascertained in order to determine a rotational angle of the face. The relative share of each triangle's area in the total area of all triangles is then scaled to a rotation-invariant dimension of the face. The scaled areas of the triangles are supplied to the artificial neural network in order to identify a person.
Type: Grant
Filed: April 27, 2021
Date of Patent: July 9, 2024
Assignee: Airbus Defence and Space GmbH
Inventor: Manfred Hiebl
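A small numerical sketch of the descriptor idea: triangle areas from facial landmarks, each expressed as its share of the total area, so the values do not change when the face is rotated or uniformly scaled. The landmark coordinates and the triangulation below are made up for illustration; they are not a real facial landmarking scheme.

```python
# Relative triangle areas as rotation- and scale-invariant features.
import numpy as np

def triangle_area(p0, p1, p2):
    # half the absolute value of the 2D cross product of two edge vectors
    v1, v2 = p1 - p0, p2 - p0
    return 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0]])  # assumed landmark points
triangles = [(0, 1, 3), (1, 2, 3), (2, 0, 3)]                           # assumed triangulation

areas = np.array([triangle_area(*landmarks[list(t)]) for t in triangles])
relative_areas = areas / areas.sum()   # shares of the total area: unchanged under rotation/scaling
# `relative_areas` would be the feature vector supplied to the artificial neural network.
```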
-
Patent number: 12026621
Abstract: A computer-implemented method for training a machine-learning network includes receiving input data from a sensor, wherein the input data includes data indicative of an image and the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor; generating an adversarial version of the input data utilizing an optimizer, wherein the adversarial version of the input data utilizes a subset of the input data, parameters associated with the optimizer, and one or more perturbation tiles; determining a loss function value in response to the adversarial version of the input data and a classification of the adversarial version of the input data; determining a perturbation tile in response to the loss function value associated with one or more subsets of the adversarial version of the input data; and outputting a perturbation that includes at least the perturbation tile.
Type: Grant
Filed: November 30, 2020
Date of Patent: July 2, 2024
Assignee: Robert Bosch GmbH
Inventors: Devin T. Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe J. Cabrita Condessa, Jeremy Kolter
-
Patent number: 12019707
Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
Type: Grant
Filed: January 18, 2024
Date of Patent: June 25, 2024
Assignee: VIZIT LABS, INC.
Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
-
Patent number: 12020414
Abstract: The present disclosure relates to an object selection system that accurately detects and automatically selects target instances of user-requested objects (e.g., a query object instance) in a digital image. In one or more embodiments, the object selection system can analyze one or more user inputs to determine an optimal object attribute detection model from multiple specialized and generalized object attribute models. Additionally, the object selection system can utilize the selected object attribute model to detect and select one or more target instances of a query object in an image, where the image includes multiple instances of the query object.
Type: Grant
Filed: August 15, 2022
Date of Patent: June 25, 2024
Assignee: Adobe Inc.
Inventors: Scott Cohen, Zhe Lin, Mingyang Ling
-
Patent number: 12019662
Abstract: A computerized system and methods are provided for the automated extraction of contextually relevant information, and the automatic processing of actionable information from generic document sets. More specifically, automated systems and techniques for the extraction and processing of opportunity documents are provided, which avoid inaccuracies and inefficiencies resulting from conventional and/or human-based document processing techniques.
Type: Grant
Filed: January 30, 2023
Date of Patent: June 25, 2024
Assignee: RedShred LLC
Inventors: James Michael Kukla, Jeehye Yun
-
Patent number: 12008821
Abstract: Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first image depicting a first object and a second image depicting a second object, wherein the first object comprises a first feature set and the second object comprises a second feature set. The method can include processing the first image with a machine-learned image transformation model comprising a plurality of model channels to obtain a first channel mapping indicative of a mapping between the plurality of model channels and the first feature set. The method can include processing the second image with the model to obtain a second channel mapping indicative of a mapping between the plurality of model channels and the second feature set. The method can include generating an interpolation vector for a selected feature.
Type: Grant
Filed: May 7, 2021
Date of Patent: June 11, 2024
Assignee: GOOGLE LLC
Inventors: Wen-Sheng Chu, Abhishek Kumar, Min Jin Chong
-
Patent number: 12002345
Abstract: Embodiments of the present disclosure relate to a method and an apparatus for alerting users to threats. The apparatus may capture a plurality of signals including at least one of Electro-Magnetic (E-M) signals and sound signals. The E-M signals and sound signals are used to detect objects around the user. A threat to the user is predicted based on the objects around the user, and one or more alerts are generated such that the user avoids the threat. The prediction of the threat enables the user to take an action even before the threat has occurred. Also, the alerts are generated based on the prediction such that the user can avoid the threat well in advance of the occurrence of the threat.
Type: Grant
Filed: August 10, 2020
Date of Patent: June 4, 2024
Assignee: Wipro Limited
Inventors: Shashidhar Soppin, Chandrashekar Bangalore Nagaraj, Manjunath Ramachandra Iyer
-
Patent number: 12002185
Abstract: A fluorescent single-molecule emitter simultaneously transmits its identity, location, and cellular context through its emission patterns. A deep neural network (DNN) performs multiplexed single-molecule analysis to enable retrieving such information with high accuracy. The DNN can extract three-dimensional molecule location, orientation, and wavefront distortion with precision approaching the theoretical limit of the information content of the image, which will allow multiplexed measurements through the emission patterns of a single molecule.
Type: Grant
Filed: June 10, 2019
Date of Patent: June 4, 2024
Assignee: Purdue Research Foundation
Inventors: Peiyi Zhang, Fang Huang, Sheng Liu
-
Patent number: 12001607
Abstract: An image classification neural network is trained based on images that are presented to an observer as a visual stimulus while collecting neurophysiological signals from a brain of the observer. The neurophysiological signals are processed to identify a neurophysiological event indicative of a detection of a target by the observer in one or more of the images, and the image classification neural network is trained to identify the target in the image based on the identification of the neurophysiological event.
Type: Grant
Filed: February 8, 2023
Date of Patent: June 4, 2024
Assignee: InnerEye Ltd.
Inventors: Amir B. Geva, Eitan Netzer, Ran El Manor, Sergey Vaisman, Leon Y. Deouell, Uri Antman
-
Patent number: 11989931
Abstract: An object classification method and apparatus are disclosed. The object classification method includes receiving an input image, storing first feature data extracted by a first feature extraction layer of a neural network configured to extract features of the input image, receiving second feature data from a second feature extraction layer which is an upper layer of the first feature extraction layer, generating merged feature data by merging the first feature data and the second feature data, and classifying an object in the input image based on the merged feature data.
Type: Grant
Filed: March 17, 2022
Date of Patent: May 21, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sangil Jung, Seungin Park, Byung In Yoo
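A sketch of the merge step under stated assumptions: features saved from a lower extraction layer are combined, by concatenation here, with features from an upper layer before classification. The tiny layer sizes and the use of concatenation as the merge operation are assumptions, not the patented design.

```python
# Merge lower-layer and upper-layer feature data before the classifier.
import torch
import torch.nn as nn

class MergedClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())   # first feature extraction layer
        self.layer2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())  # upper feature extraction layer
        self.classifier = nn.Linear((8 + 16) * 8 * 8, num_classes)

    def forward(self, x):
        f1 = self.layer1(x)                   # stored first feature data
        f2 = self.layer2(f1)                  # second feature data from the upper layer
        merged = torch.cat([f1, f2], dim=1)   # merged feature data
        return self.classifier(merged.flatten(start_dim=1))

logits = MergedClassifier()(torch.randn(1, 3, 8, 8))
```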
-
Patent number: 11987264
Abstract: A method and activity recognition system for recognising activities in the surrounding environment for controlling navigation of an autonomous vehicle is disclosed. The activity recognition system receives a first data feed from a neuromorphic event-based camera and a second data feed from a frame-based RGB video camera. The first data feed comprises high-speed temporal information encoding motion associated with change in the surrounding environment at each spatial location, and the second data feed comprises spatio-temporal data providing scene-level contextual information associated with the surrounding environment. An adaptive sampling of the second data feed is performed with respect to the foreground activity rate based on the amount of foreground motion encoded in the first data feed. Further, the activity recognition system recognizes activities associated with at least one object in the surrounding environment by identifying the correlation between both data feeds by using a two-stream neural network model.
Type: Grant
Filed: July 16, 2021
Date of Patent: May 21, 2024
Assignees: Wipro Limited, Indian Institute of Science
Inventors: Chetan Singh Thakur, Anirban Chakraborty, Sathyaprakash Narayanan, Bibrat Ranjan Pradhan
-
Patent number: 11989916
Abstract: Embodiments provide an automated approach for generating unbiased synthesized image-label pairs for colorization training of retro photographs. Modern grayscale images with corresponding color images are translated to images with the characteristics of retro photographs, thereby producing training data that pairs images with the characteristics of retro photographs with corresponding color images. This training data can then be employed to train a deep learning model to colorize retro photographs more effectively.
Type: Grant
Filed: October 11, 2021
Date of Patent: May 21, 2024
Assignee: KYOCERA Document Solutions Inc.
Inventors: Kilho Shin, Dongpei Su
-
Patent number: 11991251
Abstract: A method may include detecting, within a remote session, a gesture indicative of an intent of a participant in the remote session to share a resource included within content being shared by a first client device participating in the remote session. The resource may be available on a network. In response to detection of the gesture, information for accessing the resource may be extracted from an image of the content. At least a portion of the information may be provided to a second client device participating in the remote session to enable the second device to access the resource. Related systems and articles of manufacture are also provided.
Type: Grant
Filed: October 11, 2021
Date of Patent: May 21, 2024
Inventors: Xuan Liu, Wenshuang Zhang
-
Patent number: 11978239
Abstract: The disclosure provides a target detection method and apparatus, a model training method and apparatus, a device, and a storage medium. The target detection method includes: obtaining a first image; obtaining a second image corresponding to the first image, the second image belonging to a second domain; and obtaining a detection result corresponding to the second image through a cross-domain image detection model, the detection result including target localization information and target class information of a target object, the cross-domain image detection model including a first network model configured to convert an image from a first domain into an image in the second domain, and a second network model configured to perform region localization on the image in the second domain.
Type: Grant
Filed: July 14, 2023
Date of Patent: May 7, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Ze Qun Jie
-
Patent number: 11965728
Abstract: An automated method of inspecting a pipe includes: positioning the pipe with respect to a laser scanner using a positioning apparatus; scanning a size of the positioned pipe by the laser scanner; identifying a specification and historical data of the pipe's type by inputting the scanned size to an artificially intelligent module trained through machine learning to match input size data to standardized pipe types and output corresponding specifications and historical data of the pipe types; scanning dimensions of the positioned pipe by the laser scanner using a dimension portion of the identified historical data; comparing the scanned dimensions with standard dimensions from the identified specification; detecting a dimension nonconformity when the scanned dimensions are not within acceptable tolerances of the standard dimensions; and in response to detecting the dimension nonconformity, generating an alert and updating the dimension portion of the identified historical data to reflect the detected dimension nonconformity.
Type: Grant
Filed: April 6, 2021
Date of Patent: April 23, 2024
Assignee: SAUDI ARABIAN OIL COMPANY
Inventors: Mazin M. Fathi, Yousef Adnan Rayes
-
Patent number: 11967144
Abstract: Methods, apparatuses and systems directed to pattern identification and pattern recognition. In some particular implementations, the invention provides a flexible pattern recognition platform including pattern recognition engines that can be dynamically adjusted to implement specific pattern recognition configurations for individual pattern recognition applications. In some implementations, the present invention also provides for a partition configuration where knowledge elements can be grouped and pattern recognition operations can be individually configured and arranged to allow for multi-level pattern recognition schemes.
Type: Grant
Filed: October 1, 2021
Date of Patent: April 23, 2024
Assignee: DataShapes, Inc.
Inventor: Jeffrey Brian Adams
-
Patent number: 11954595
Abstract: Provided is a method, performed by an electronic device, of recognizing an object included in an image, the method including: extracting first object information from a first object included in a first image, obtaining a learning model for generating an image including a second object from the first object information, generating a second image including the second object by inputting the first object information to the learning model, comparing the first image with the second image, and recognizing the first object as the second object in the first image, based on a result of the comparing.
Type: Grant
Filed: August 16, 2019
Date of Patent: April 9, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yehoon Kim, Chanwon Seo
-
Patent number: 11955272
Abstract: A method for generating an object detector based on deep learning capable of detecting an extended object class is provided. The method is related to generating the object detector based on deep learning capable of detecting the extended object class, thereby allowing both an object class that has already been trained and an additional object class to be detected. According to the method, it is possible to generate the training data set necessary for training an object detector capable of detecting the extended object class at a low cost in a short time, and further it is possible to generate the object detector capable of detecting the extended object class at a low cost in a short time.
Type: Grant
Filed: October 27, 2023
Date of Patent: April 9, 2024
Assignee: SUPERB AI CO., LTD.
Inventor: Kye Hyeon Kim
-
Patent number: 11947668
Abstract: In some embodiments, an apparatus includes a memory and a processor. The processor is configured to extract a set of features from a potentially malicious file and provide the set of features as an input to a normalization layer of a neural network. The processor is configured to implement the normalization layer by calculating a set of parameters associated with the set of features and normalizing the set of features based on the set of parameters to define a set of normalized features. The processor is further configured to provide the set of normalized features and the set of parameters as inputs to an activation layer of the neural network such that the activation layer produces an output based on the set of normalized features and the set of parameters. The output can be used to produce a maliciousness classification of the potentially malicious file.
Type: Grant
Filed: October 12, 2018
Date of Patent: April 2, 2024
Assignee: Sophos Limited
Inventor: Richard Harang
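A small numpy sketch of the normalization step: per-sample parameters (here the mean and standard deviation of the feature set) are computed, used to normalize the features, and then passed along with the normalized features to the next layer. The choice of mean/std as the parameter set and the way the activation layer consumes them are assumptions for illustration, not Sophos's implementation.

```python
# Normalization layer that forwards both normalized features and its parameters.
import numpy as np

def normalization_layer(features: np.ndarray):
    mean, std = features.mean(), features.std()
    normalized = (features - mean) / (std + 1e-8)
    return normalized, (mean, std)            # normalized features plus the parameter set

def activation_layer(normalized: np.ndarray, params) -> np.ndarray:
    mean, std = params
    # illustrative use of the parameters: scale a ReLU output by the original spread
    return np.maximum(normalized, 0.0) * std

features = np.random.rand(256)                # features extracted from a potentially malicious file
normalized, params = normalization_layer(features)
output = activation_layer(normalized, params) # output feeding the maliciousness classification
```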
-
Patent number: 11948088
Abstract: A method and apparatus are disclosed for image recognition. The method may include performing a vision task on an image by using a multi-scale capsules network, wherein the multi-scale capsules network includes at least two branches and an aggregation block, each of the at least two branches includes a convolution block, a primary capsules block and a transformation block, and a dimension of capsules of the primary capsules block in each of the at least two branches is different.
Type: Grant
Filed: May 14, 2018
Date of Patent: April 2, 2024
Assignee: Nokia Technologies OY
Inventor: Tiancai Wang
-
Patent number: 11941794
Abstract: Systems, methods, and computer program code are provided to perform a commissioning process comprising capturing, using an image capture device, an image of an area containing at least a first fixture, identifying location and positioning information associated with the image, performing image processing of the image to identify a location of the at least first fixture in the image, and converting the location of the at least first fixture in the image into physical coordinate information associated with the at least first fixture.
Type: Grant
Filed: August 19, 2020
Date of Patent: March 26, 2024
Assignee: CURRENT LIGHTING SOLUTIONS, LLC
Inventors: Glenn Howard Kuenzler, Taylor Apolonius Barto