Neural Networks Patents (Class 382/156)
  • Patent number: 12045963
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect, via a graphical user interface of a client device, a user selection of an object portrayed within a digital image. The disclosed systems determine, in response to detecting the user selection of the object, a relationship between the object and an additional object portrayed within the digital image. The disclosed systems receive one or more user interactions for modifying the object. The disclosed systems modify the digital image in response to the one or more user interactions by modifying the object and the additional object based on the relationship between the object and the additional object.
    Type: Grant
    Filed: November 23, 2022
    Date of Patent: July 23, 2024
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Zhe Lin, Zhihong Ding, Luis Figueroa, Kushal Kafle
  • Patent number: 12033375
    Abstract: An object identification unit contains an artificial neural network and is designed to identify human faces. For this purpose, a face is divided into a number of triangles. The share of each triangle's area in the total area of all triangles is ascertained in order to determine a rotational angle of the face. Each triangle's relative area is then scaled to a rotation-invariant dimension of the face. The scaled areas of the triangles are supplied to the artificial neural network in order to identify a person.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: July 9, 2024
    Assignee: Airbus Defence and Space GmbH
    Inventor: Manfred Hiebl
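
A minimal sketch of the triangle-area normalization described in the entry for patent 12033375 above, assuming the face is already triangulated into 2-D landmark triangles; the landmark coordinates and helper names are illustrative, not taken from the patent:

```python
import numpy as np

def triangle_area(p0, p1, p2):
    """Area of a 2-D triangle via the shoelace formula."""
    return 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p2[0] - p0[0]) * (p1[1] - p0[1]))

def relative_triangle_areas(triangles):
    """Share of each triangle's area in the total area of all triangles.

    The resulting feature vector sums to 1 and is invariant to uniform
    scaling of the face; this is the normalized input the abstract
    describes feeding to the artificial neural network.
    """
    areas = np.array([triangle_area(*t) for t in triangles])
    return areas / areas.sum()

# Hypothetical triangulation of a few facial landmarks (illustrative values).
triangles = [
    ((0.0, 0.0), (1.0, 0.0), (0.5, 0.8)),
    ((0.5, 0.8), (1.0, 0.0), (1.2, 1.0)),
    ((0.0, 0.0), (0.5, 0.8), (-0.2, 1.0)),
]
print(relative_triangle_areas(triangles))  # feature vector for the ANN
```
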
  • Patent number: 12026621
    Abstract: A computer-implemented method for training a machine-learning network includes receiving input data from a sensor, wherein the input data includes data indicative of an image and the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor; generating an adversarial version of the input data utilizing an optimizer, wherein the adversarial version of the input data utilizes a subset of the input data, parameters associated with the optimizer, and one or more perturbation tiles; determining a loss function value in response to the adversarial version of the input data and a classification of the adversarial version of the input data; determining a perturbation tile in response to the loss function value associated with one or more subsets of the adversarial version of the input data; and outputting a perturbation that includes at least the perturbation tile.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: July 2, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Devin T. Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe J. Cabrita Condessa, Jeremy Kolter
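
A hedged sketch of the tiled-perturbation training step described in the entry for patent 12026621 above: a single small tile is repeated across the image and optimized to raise the classification loss. The tile size, the Adam-based ascent, and the toy classifier are assumptions made for illustration only.

```python
import torch

def tiled_adversarial_perturbation(model, x, y, tile=8, steps=10, eps=0.03, lr=0.01):
    """Optimize a small perturbation tile that is repeated across the whole image.

    x: batch of images (N, C, H, W) whose height/width are multiples of `tile`;
    y: integer class labels. Returns the tile and the full-size repeated perturbation.
    """
    n, c, h, w = x.shape
    delta = torch.zeros(1, c, tile, tile, requires_grad=True)   # the perturbation tile
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pert = delta.repeat(1, 1, h // tile, w // tile).clamp(-eps, eps)
        loss = torch.nn.functional.cross_entropy(model(x + pert), y)
        opt.zero_grad()
        (-loss).backward()       # gradient ascent on the loss -> adversarial input
        opt.step()
    tile_out = delta.detach().clamp(-eps, eps)
    return tile_out, tile_out.repeat(1, 1, h // tile, w // tile)

# Toy usage with an illustrative linear classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
tile, perturbation = tiled_adversarial_perturbation(model, x, y)
```
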
  • Patent number: 12019662
    Abstract: A computerized system and methods are provided for the automated extraction of contextually relevant information and the automatic processing of actionable information from generic document sets. More specifically, automated systems and techniques for the extraction and processing of opportunity documents are provided, which avoid inaccuracies and inefficiencies resulting from conventional and/or human-based document processing techniques.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: June 25, 2024
    Assignee: RedShred LLC
    Inventors: James Michael Kukla, Jeehye Yun
  • Patent number: 12019707
    Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
    Type: Grant
    Filed: January 18, 2024
    Date of Patent: June 25, 2024
    Assignee: VIZIT LABS, INC.
    Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
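
A simplified, hedged sketch of the scoring model in the Vizit Labs entry for patent 12019707 above (the same abstract also appears under patents 11880429 and 11768913 below): extracted image features are combined with per-feature weights to produce an image performance score. The linear form and the least-squares fit are stand-ins for the patented weighting, not the actual method.

```python
import numpy as np

def image_performance_score(features, weights, bias=0.0):
    """Score a candidate image as a weighted combination of its features.

    `weights` plays the role of the per-feature weights the abstract derives
    from how much varying each feature moves the training images' scores.
    """
    return float(np.dot(features, weights) + bias)

def fit_feature_weights(train_features, train_scores):
    """Least-squares fit of per-feature weights to known performance scores."""
    w, *_ = np.linalg.lstsq(train_features, train_scores, rcond=None)
    return w

# Illustrative data: 5 training images with 4 extracted features each.
rng = np.random.default_rng(0)
F = rng.normal(size=(5, 4))
scores = rng.uniform(0.0, 1.0, size=5)
w = fit_feature_weights(F, scores)
candidate = rng.normal(size=4)          # features from a candidate image
print(image_performance_score(candidate, w))
```
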
  • Patent number: 12020414
    Abstract: The present disclosure relates to an object selection system that accurately detects and automatically selects target instances of user-requested objects (e.g., a query object instance) in a digital image. In one or more embodiments, the object selection system can analyze one or more user inputs to determine an optimal object attribute detection model from multiple specialized and generalized object attribute models. Additionally, the object selection system can utilize the selected object attribute model to detect and select one or more target instances of a query object in an image, where the image includes multiple instances of the query object.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: June 25, 2024
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Zhe Lin, Mingyang Ling
  • Patent number: 12008821
    Abstract: Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first image depicting a first object and a second image depicting a second object, wherein the first object comprises a first feature set and the second object comprises a second feature set. The method can include processing the first image with a machine-learned image transformation model comprising a plurality of model channels to obtain a first channel mapping indicative of a mapping between the plurality of model channels and the first feature set. The method can include processing the second image with the model to obtain a second channel mapping indicative of a mapping between the plurality of model channels and the second feature set. The method can include generating an interpolation vector for a selected feature.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: June 11, 2024
    Assignee: GOOGLE LLC
    Inventors: Wen-Sheng Chu, Abhishek Kumar, Min Jin Chong
  • Patent number: 12002345
    Abstract: Embodiments of the present disclosure relate to a method and an apparatus for alerting users to threats. The apparatus may capture a plurality of signals including at least one of Electro-Magnetic (E-M) signals and sound signals. The E-M signals and sound signals are used to detect objects around the user. A threat to the user is predicted based on the objects around the user, and one or more alerts are generated so that the user avoids the threat. The prediction enables the user to take action even before the threat has occurred. Also, the alerts are generated based on the prediction so that the user can avoid the threat well in advance of its occurrence.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: June 4, 2024
    Assignee: Wipro Limited
    Inventors: Shashidhar Soppin, Chandrashekar Bangalore Nagaraj, Manjunath Ramachandra Iyer
  • Patent number: 12002185
    Abstract: A fluorescent single-molecule emitter simultaneously transmits its identity, location, and cellular context through its emission patterns. A deep neural network (DNN) performs multiplexed single-molecule analysis to retrieve such information with high accuracy. The DNN can extract three-dimensional molecule location, orientation, and wavefront distortion with precision approaching the theoretical limit of the information content of the image, which allows multiplexed measurements through the emission patterns of a single molecule.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: June 4, 2024
    Assignee: Purdue Research Foundation
    Inventors: Peiyi Zhang, Fang Huang, Sheng Liu
  • Patent number: 12001607
    Abstract: An image classification neural network is trained based on images that are presented to an observer as a visual stimulus while collecting neurophysiological signals from the brain of the observer. The neurophysiological signals are processed to identify a neurophysiological event indicative of a detection of a target by the observer in one or more of the images, and the image classification neural network is trained to identify the target in the image based on the identification of the neurophysiological event.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: June 4, 2024
    Assignee: InnerEye Ltd.
    Inventors: Amir B. Geva, Eitan Netzer, Ran El Manor, Sergey Vaisman, Leon Y. Deouell, Uri Antman
  • Patent number: 11989931
    Abstract: An object classification method and apparatus are disclosed. The object classification method includes receiving an input image, storing first feature data extracted by a first feature extraction layer of a neural network configured to extract features of the input image, receiving second feature data from a second feature extraction layer that is an upper layer of the first feature extraction layer, generating merged feature data by merging the first feature data and the second feature data, and classifying an object in the input image based on the merged feature data.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: May 21, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangil Jung, Seungin Park, Byung In Yoo
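
A minimal PyTorch sketch of the feature-merging idea in the entry for patent 11989931 above: feature data from a lower extraction layer is stored, merged with feature data from an upper layer, and the merged features drive the classifier. The layer sizes and the concatenation-based merge are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MergedFeatureClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())   # first (lower) feature layer
        self.layer2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())  # second (upper) feature layer
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(16 + 32, num_classes)  # operates on the merged features

    def forward(self, x):
        f1 = self.layer1(x)          # store first feature data
        f2 = self.layer2(f1)         # second feature data from the upper layer
        merged = torch.cat([self.pool(f1).flatten(1),
                            self.pool(f2).flatten(1)], dim=1)  # merge
        return self.classifier(merged)

logits = MergedFeatureClassifier()(torch.rand(2, 3, 64, 64))
print(logits.shape)  # (2, 10)
```
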
  • Patent number: 11987264
    Abstract: A method and activity recognition system for recognizing activities in a surrounding environment for controlling navigation of an autonomous vehicle is disclosed. The activity recognition system receives a first data feed from a neuromorphic event-based camera and a second data feed from a frame-based RGB video camera. The first data feed comprises high-speed temporal information encoding motion associated with changes in the surrounding environment at each spatial location, and the second data feed comprises spatio-temporal data providing scene-level contextual information associated with the surrounding environment. Adaptive sampling of the second data feed is performed with respect to a foreground activity rate based on the amount of foreground motion encoded in the first data feed. Further, the activity recognition system recognizes activities associated with at least one object in the surrounding environment by identifying a correlation between the two data feeds using a two-stream neural network model.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: May 21, 2024
    Assignees: Wipro Limited, Indian Institute of Science
    Inventors: Chetan Singh Thakur, Anirban Chakraborty, Sathyaprakash Narayanan, Bibrat Ranjan Pradhan
  • Patent number: 11989916
    Abstract: Embodiments provide an automated approach for generating unbiased synthesized image-label pairs for colorization training of retro photographs. Modern grayscale images with corresponding color images are translated to images with the characteristics of retro photographs, thereby producing training data that pairs images having the characteristics of retro photographs with corresponding color images. This training data can then be employed to train a deep learning model to colorize retro photographs more effectively.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: May 21, 2024
    Assignee: KYOCERA Document Solutions Inc.
    Inventors: Kilho Shin, Dongpei Su
  • Patent number: 11991251
    Abstract: A method may include detecting, within a remote session, a gesture indicative of an intent of a participant in the remote session to share a resource included within content being shared by a first client device participating in the remote session. The resource may be available on a network. In response to detection of the gesture, information for accessing the resource may be extracted from an image of the content. At least a portion of the information may be provided to a second client device participating in the remote session to enable the second device to access the resource. Related systems and articles of manufacture are also provided.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: May 21, 2024
    Inventors: Xuan Liu, Wenshuang Zhang
  • Patent number: 11978239
    Abstract: The disclosure provides a target detection method and apparatus, a model training method and apparatus, a device, and a storage medium. The target detection method includes: obtaining a first image; obtaining a second image corresponding to the first image, the second image belonging to a second domain; and obtaining a detection result corresponding to the second image through a cross-domain image detection model, the detection result including target localization information and target class information of a target object, the cross-domain image detection model including a first network model configured to convert an image from a first domain into an image in the second domain, and a second network model configured to perform region localization on the image in the second domain.
    Type: Grant
    Filed: July 14, 2023
    Date of Patent: May 7, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Ze Qun Jie
  • Patent number: 11965728
    Abstract: An automated method of inspecting a pipe includes: positioning the pipe with respect to a laser scanner using a positioning apparatus; scanning a size of the positioned pipe by the laser scanner; identifying a specification and historical data of the pipe's type by inputting the scanned size to an artificially intelligent module trained through machine learning to match input size data to standardized pipe types and output corresponding specifications and historical data of the pipe types; scanning dimensions of the positioned pipe by the laser scanner using a dimension portion of the identified historical data; comparing the scanned dimensions with standard dimensions from the identified specification; detecting a dimension nonconformity when the scanned dimensions are not within acceptable tolerances of the standard dimensions; and in response to detecting the dimension nonconformity, generating an alert and updating the dimension portion of the identified historical data to reflect the detected dimension nonconformity.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: April 23, 2024
    Assignee: SAUDI ARABIAN OIL COMPANY
    Inventors: Mazin M. Fathi, Yousef Adnan Rayes
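
A tiny sketch of the tolerance check at the heart of the entry for patent 11965728 above: each scanned dimension is compared against the standard value from the identified specification, and any out-of-tolerance dimension is flagged as a nonconformity. The dimension names and tolerance values are illustrative.

```python
def check_dimensions(scanned, standard, tolerance):
    """Flag a nonconformity when any scanned dimension is outside its tolerance band."""
    return {name: (value, standard[name])
            for name, value in scanned.items()
            if abs(value - standard[name]) > tolerance[name]}   # empty dict -> pipe conforms

scanned = {"outer_diameter_mm": 114.6, "wall_thickness_mm": 6.4}
standard = {"outer_diameter_mm": 114.3, "wall_thickness_mm": 6.02}
tolerance = {"outer_diameter_mm": 0.8, "wall_thickness_mm": 0.3}
print(check_dimensions(scanned, standard, tolerance))  # wall thickness out of tolerance
```
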
  • Patent number: 11967144
    Abstract: Methods, apparatuses and systems directed to pattern identification and pattern recognition. In some particular implementations, the invention provides a flexible pattern recognition platform including pattern recognition engines that can be dynamically adjusted to implement specific pattern recognition configurations for individual pattern recognition applications. In some implementations, the present invention also provides for a partition configuration where knowledge elements can be grouped and pattern recognition operations can be individually configured and arranged to allow for multi-level pattern recognition schemes.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: April 23, 2024
    Assignee: DataShapes, Inc.
    Inventor: Jeffrey Brian Adams
  • Patent number: 11955272
    Abstract: A method for generating a deep-learning-based object detector capable of detecting an extended object class is provided. The method generates an object detector that can detect both the object classes it was originally trained on and additional object classes. According to the method, it is possible to generate the training data set necessary for training such an object detector at low cost and in a short time, and further to generate the object detector itself at low cost and in a short time.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: April 9, 2024
    Assignee: SUPERB AI CO., LTD.
    Inventor: Kye Hyeon Kim
  • Patent number: 11954595
    Abstract: Provided is a method, performed by an electronic device, of recognizing an object included in an image, the method including: extracting first object information from a first object included in a first image, obtaining a learning model for generating an image including a second object from the first object information, generating a second image including the second object by inputting the first object information to the learning model, comparing the first image with the second image, and recognizing the first object as the second object in the first image, based on a result of the comparing.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: April 9, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yehoon Kim, Chanwon Seo
  • Patent number: 11948088
    Abstract: Method and apparatus are disclosed for image recognition. The method may include performing a vision task on an image by using a multi-scales capsules network, wherein the multi-scales capsules network includes at least two branches and an aggregation block, each of the at least two branches includes a convolution block, a primary capsules block and a transformation block, and a dimension of capsules of the primary capsules block in each of the at least two branches is different.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: April 2, 2024
    Assignee: Nokia Technologies OY
    Inventor: Tiancai Wang
  • Patent number: 11947668
    Abstract: In some embodiments, an apparatus includes a memory and a processor. The processor is configured to extract a set of features from a potentially malicious file and provide the set of features as an input to a normalization layer of a neural network. The processor is configured to implement the normalization layer by calculating a set of parameters associated with the set of features and normalizing the set of features based on the set of parameters to define a set of normalized features. The processor is further configured to provide the set of normalized features and the set of parameters as inputs to an activation layer of the neural network such that the activation layer produces an output based on the set of normalized features and the set of parameters. The output can be used to produce a maliciousness classification of the potentially malicious file.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: April 2, 2024
    Assignee: Sophos Limited
    Inventor: Richard Harang
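
A speculative sketch of the wiring described in the entry for patent 11947668 above: the normalization layer passes both the normalized features and the parameters it computed (mean and standard deviation here, as an assumption) to the activation layer. The way the toy activation conditions on those parameters is illustrative, not the patented design.

```python
import torch
import torch.nn as nn

class NormalizationLayer(nn.Module):
    def forward(self, features):
        mean = features.mean(dim=1, keepdim=True)
        std = features.std(dim=1, keepdim=True) + 1e-6
        normalized = (features - mean) / std
        return normalized, (mean, std)          # pass the computed parameters onward

class ParamConditionedActivation(nn.Module):
    def forward(self, normalized, params):
        mean, std = params
        # Toy conditioning: scale the activation by the spread of the raw features.
        return torch.relu(normalized) * torch.log1p(std)

class MaliciousnessClassifier(nn.Module):
    def __init__(self, num_features=256):
        super().__init__()
        self.norm = NormalizationLayer()
        self.act = ParamConditionedActivation()
        self.head = nn.Linear(num_features, 1)   # maliciousness score

    def forward(self, file_features):
        normalized, params = self.norm(file_features)
        activated = self.act(normalized, params)
        return torch.sigmoid(self.head(activated))

score = MaliciousnessClassifier()(torch.rand(4, 256))   # features from 4 files
```
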
  • Patent number: 11941794
    Abstract: System and methods and computer program code are provided to perform a commissioning process comprising capturing, using an image capture device, an image of an area containing at least a first fixture, identifying location and positioning information associated with the image, performing image processing of the image to identify a location of the at least first fixture in the image, and converting the location of the at least first fixture in the image into physical coordinate information associated with the at least first fixture.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: March 26, 2024
    Assignee: CURRENT LIGHTING SOLUTIONS, LLC
    Inventors: Glenn Howard Kuenzler, Taylor Apolonius Barto
  • Patent number: 11922662
    Abstract: In one or more implementations, the apparatus, systems and methods disclosed herein are directed to configuring a color measurement device to output color measurements that match the expected output of a different color measurement device. In a particular implementation, a method is provided for matching the color measurements made by a color measurement device to the color measurements made by a target color measurement device by implementing a single-step color calibration and conversion process using an Artificial Neural Network (ANN). By way of non-limiting example, the raw counts from the color measurement device are converted to a specific color space, such as L*a*b, directly through an ANN. The ANN is trained to ensure that the color measurement output of the color measurement device matches the color measurement output of the target color measurement device.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: March 5, 2024
    Assignee: DATACOLOR INC.
    Inventor: Hong Wei
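
A small sketch of the single-step calibration-and-conversion network in the entry for patent 11922662 above: an MLP maps raw sensor counts directly to L*a*b values and is trained against readings from the target instrument. The number of sensor channels, the layer sizes, and the training loop are assumptions.

```python
import torch
import torch.nn as nn

# Assume the device reports raw counts on, say, 16 spectral channels.
ann = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3))          # outputs (L*, a*, b*)

opt = torch.optim.Adam(ann.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# raw_counts: measurements from the device being calibrated;
# target_lab: L*a*b readings of the same samples from the target device.
raw_counts = torch.rand(128, 16)
target_lab = torch.rand(128, 3) * torch.tensor([100.0, 50.0, 50.0])

for _ in range(200):                           # illustrative training loop
    pred = ann(raw_counts)
    loss = loss_fn(pred, target_lab)
    opt.zero_grad()
    loss.backward()
    opt.step()
```
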
  • Patent number: 11908185
    Abstract: Methods, non-transitory computer-readable storage media, and computer or computer systems directed to detecting, analyzing, and tracking roads and grading activity using satellite or aerial imagery in combination with a machine learned model are described.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: February 20, 2024
    Assignee: Metrostudy, Inc.
    Inventors: Corentin Guillo, Sivakumaran Somasundaram
  • Patent number: 11896360
    Abstract: Systems and methods for generating thin slice images from thick slice images are disclosed herein. In some examples, a deep learning system may calculate a residual from a thick slice image and add the residual to the thick slice image to generate a thin slice image. In some examples, the deep learning system includes a neural network. In some examples, the neural network may include one or more levels, where one or more of the levels include one or more blocks. In some examples, each level includes a convolution block and a non-linear activation function block. The levels of the neural network may be in a cascaded arrangement in some examples.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: February 13, 2024
    Assignee: LVIS Corporation
    Inventors: Zhongnan Fang, Akshay S. Chaudhari, Jin Hyung Lee, Brian A. Hargreaves
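
A compact sketch of the residual formulation in the entry for patent 11896360 above: a cascade of convolution-plus-activation levels predicts a residual that is added back to the thick-slice image to give the thin-slice estimate. The depth, channel counts, and use of 2-D rather than 3-D convolutions are simplifying assumptions.

```python
import torch
import torch.nn as nn

class ResidualSliceNet(nn.Module):
    """Cascade of (conv -> nonlinearity) levels that predicts a residual."""
    def __init__(self, levels=3, channels=32):
        super().__init__()
        blocks, in_ch = [], 1
        for _ in range(levels):
            blocks += [nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU()]
            in_ch = channels
        blocks += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, thick_slice):
        residual = self.net(thick_slice)
        return thick_slice + residual      # thin-slice estimate

thin = ResidualSliceNet()(torch.rand(1, 1, 128, 128))
```
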
  • Patent number: 11891002
    Abstract: An apparatus includes a capture device and a processor. The capture device may be configured to generate a plurality of video frames corresponding to users of a vehicle. The processor may be configured to perform operations to detect objects in the video frames, detect users of the vehicle based on the objects detected in the video frames, determine a comfort profile for the users and select a reaction to adjust vehicle components according to the comfort profile of the detected users. The comfort profile may be determined in response to characteristics of the users. The characteristics of the users may be determined by performing the operations on each of the users.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: February 6, 2024
    Assignee: Ambarella International LP
    Inventors: Shimon Pertsel, Patrick Martin
  • Patent number: 11893792
    Abstract: Techniques are disclosed for identifying and presenting video content that demonstrates features of a target product. The video content can be accessed, for example, from a media database of user-generated videos that demonstrate one or more features of the target product so that a user can see and hear the product in operation via a product webpage before making a purchasing decision. The product functioning videos supplement any static images of the target product and the textual product description to provide the user with additional context for each of the product's features, depending on the textual product description. The user can quickly and easily interact with the product webpage to access and playback the product functioning video to see and/or hear the product in operation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Gourav Singhal, Sourabh Gupta, Mrinal Kumar Sharma
  • Patent number: 11880759
    Abstract: Embodiments of an electronic device include an integrated circuit, a reconfigurable stream switch formed in the integrated circuit along with a plurality of convolution accelerators, and a decompression unit coupled to the reconfigurable stream switch. The decompression unit decompresses encoded kernel data in real time during operation of a convolutional neural network.
    Type: Grant
    Filed: February 22, 2023
    Date of Patent: January 23, 2024
    Assignees: STMICROELECTRONICS S.r.l., STMicroelectronics International N.V.
    Inventors: Giuseppe Desoli, Carmine Cappetta, Thomas Boesch, Surinder Pal Singh, Saumya Suneja
  • Patent number: 11880766
    Abstract: An improved system architecture uses a pipeline including a Generative Adversarial Network (GAN) including a generator neural network and a discriminator neural network to generate an image. An input image in a first domain and information about a target domain are obtained. The domains correspond to image styles. An initial latent space representation of the input image is produced by encoding the input image. An initial output image is generated by processing the initial latent space representation with the generator neural network. Using the discriminator neural network, a score is computed indicating whether the initial output image is in the target domain. A loss is computed based on the computed score. The loss is minimized to compute an updated latent space representation. The updated latent space representation is processed with the generator neural network to generate an output image in the target domain.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffrey, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
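
A hedged sketch of the optimization loop in the entry for patent 11880766 above: starting from an encoded latent, the latent is repeatedly updated to minimize a loss built from the discriminator's in-target-domain score, and the generator decodes the result. The toy generator/discriminator and the non-saturating loss are assumptions standing in for the patented pipeline.

```python
import torch
import torch.nn as nn

latent_dim = 64
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 32 * 32), nn.Tanh())   # toy image generator
discriminator = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
                              nn.Linear(128, 1))                    # scores "in target domain?"

# Initial latent space representation from encoding the input image (random stand-in here).
latent = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([latent], lr=0.05)

for _ in range(100):
    image = generator(latent)
    score = discriminator(image)
    # Non-saturating loss: small when the discriminator rates the image as in-domain.
    loss = nn.functional.softplus(-score).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

output_image = generator(latent.detach()).view(3, 32, 32)   # image in the target domain
```
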
  • Patent number: 11880429
    Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
    Type: Grant
    Filed: September 21, 2023
    Date of Patent: January 23, 2024
    Assignee: Vizit Labs, Inc.
    Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
  • Patent number: 11868441
    Abstract: Systems and methods for detecting duplicate frames are provided. An automated duplicate frames detection service may extract one or more frames from content and determine a hamming distance between each of the extracted frames and adjacent frames. In response to determining that the hamming distance is less than a threshold hamming distance, the duplicate frames detection service may identify duplicate frames. In turn, the duplicate frames detection service may determine that the duplicate frames were created without intent in response to determining that the average distance between the one or more duplicate frames meets threshold criteria, and provide an indication of the one or more unintentional duplicate frames to a client device.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: January 9, 2024
    Assignee: NBCUniversal Media, LLC
    Inventors: Michael S. Levin, Christopher Lynn, Alexandra Paige, Constantinos Hoppas, Matthew Nash, Rachel A. Price
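
A small sketch of the Hamming-distance comparison in the entry for patent 11868441 above, using a simple average-hash of each frame as the fingerprint being compared; the hash choice and the threshold value are assumptions.

```python
import numpy as np

def average_hash(frame, size=8):
    """Coarse perceptual hash: block-average, threshold at the mean, flatten to bits."""
    h, w = frame.shape
    small = frame[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    return int(np.count_nonzero(hash_a != hash_b))

def find_duplicate_pairs(frames, threshold=4):
    """Indices of adjacent frames whose hash distance falls below the threshold."""
    hashes = [average_hash(f) for f in frames]
    return [i for i in range(len(frames) - 1)
            if hamming_distance(hashes[i], hashes[i + 1]) < threshold]

frames = [np.random.rand(72, 128) for _ in range(5)]
frames[3] = frames[2].copy()                 # inject a duplicate
print(find_duplicate_pairs(frames))          # -> [2]
```
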
  • Patent number: 11858535
    Abstract: An electronic device and an operating method thereof may be configured to detect input data having a first time interval, detect first prediction data having a second time interval based on the input data using a preset recursive network, and detect second prediction data having a third time interval based on the input data and the first prediction data using the recursive network. The recursive network may include an encoder configured to detect each of a plurality of feature vectors based on at least one of the input data or the first prediction data, an attention module configured to calculate the importance of each of the feature vectors, and a decoder configured to output at least one of the first prediction data or the second prediction data using the feature vectors based on the calculated importances.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: January 2, 2024
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Dongsuk Kum, Sanmin Kim
  • Patent number: 11854226
    Abstract: An object identification method is disclosed. The method includes training a first neural network for a first set of conditions regarding a first plurality of objects, training a second neural network for a second set of conditions regarding a second plurality of objects, receiving a plurality of target images associated with a target set of conditions in which to identify objects, analyzing the plurality of target images using the first and second neural networks to identify objects in the plurality of target images resulting in object identification information, and selecting the first neural network or the second neural network as a preferred neural network for the target set of conditions based on an analysis of the object identification information.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: December 26, 2023
    Assignee: TerraClear Inc.
    Inventors: Brent Ronald Frei, Dwight Galen McMaster, Michael Racine, Jacobus du Preez, William David Dimmit, Isabelle Butterfield, Clifford Holmgren, Dafydd Daniel Rhys-Jones, Thayne Kollmorgen, Vivek Ullal Nayak
  • Patent number: 11854209
    Abstract: Artificial intelligence using a convolutional neural network with a Hough Transform. In an embodiment, a convolutional neural network (CNN) comprises convolution layers, a Hough Transform (HT) layer, and a Transposed Hough Transform (THT) layer, arranged such that at least one convolution layer precedes the HT layer, at least one convolution layer is between the HT and THT layers, and at least one convolution layer follows the THT layer. The HT layer converts its input from a first space into a second space, and the THT layer converts its input from the second space into the first space. The CNN may be applied to an input image to perform semantic image segmentation, so as to produce an output image representing a result of the semantic image segmentation.
    Type: Grant
    Filed: March 20, 2023
    Date of Patent: December 26, 2023
    Assignee: Smart Engines Service, LLC
    Inventors: Alexander Vladimirovich Sheshkus, Dmitry Petrovich Nikolaev, Vladimir L'vovich Arlazarov, Vladimir Viktorovich Arlazarov
  • Patent number: 11847820
    Abstract: The invention relates to a method and system for classifying faces of a Boundary Representation (B-Rep) model using Artificial Intelligence (AI). The method includes extracting topological information corresponding to each of a plurality of data points of a B-Rep model of a product; determining a set of parameters based on the topological information corresponding to each of the plurality of data points; transforming the set of parameters corresponding to each of the plurality of data points of the B-Rep model into a tabular format to obtain a parametric data table; and assigning each of a plurality of faces of the B-Rep model a category from a plurality of categories, based on the parametric data table, using an AI model.
    Type: Grant
    Filed: April 20, 2022
    Date of Patent: December 19, 2023
    Assignee: HCL Technologies Limited
    Inventors: Girish Ramesh Chandankar, Hari Krishnan Elumalai, Pankaj Gupta, Rajesh Chakravarty, Akash Agarwal, Raunaq Pandya, Yaganti Sasidhar Reddy
  • Patent number: 11847563
    Abstract: An auto-rotation module having a single-layer neural network on a user device can convert a document image to a monochrome image having black and white pixels and segment the monochrome image into bounding boxes, each bounding box defining a connected segment of black pixels in the monochrome image. The auto-rotation module can determine textual snippets from the bounding boxes and prepare them into input images for the single-layer neural network. The single-layer neural network is trained to process each input image, recognize a correct orientation, and output a set of results for each input image. Each result indicates a probability associated with a particular orientation. The auto-rotation module can examine the results, determine what degree of rotation is needed to achieve a correct orientation of the document image, and automatically rotate the document image by the degree of rotation needed to achieve the correct orientation of the document image.
    Type: Grant
    Filed: October 21, 2022
    Date of Patent: December 19, 2023
    Assignee: OPEN TEXT SA ULC
    Inventor: Christopher Dale Lund
  • Patent number: 11847760
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: December 19, 2023
    Assignee: Snap Inc.
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
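
A minimal sketch of the conversion idea in the entry for patent 11847760 above: a native network stored with efficient (half-precision) parameters is converted into a more precise (float32) target network for a given client operating environment. The particular precision pair and the tiny model are assumptions.

```python
import copy
import torch
import torch.nn as nn

def convert_for_client(native_model: nn.Module, target_dtype=torch.float32) -> nn.Module:
    """Return a copy of the native model with parameters cast for the target environment."""
    target_model = copy.deepcopy(native_model)
    return target_model.to(dtype=target_dtype)

# Native model kept in compact half precision on the distribution side.
native = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).half()
target = convert_for_client(native, torch.float32)     # version shipped to this client
print(next(native.parameters()).dtype, next(target.parameters()).dtype)
```
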
  • Patent number: 11841458
    Abstract: A neural network is trained to focus on a domain of interest. For example, in a pre-training phase, the neural network is trained using synthetic training data, which is configured to omit or limit content less relevant to the domain of interest, by updating parameters of the neural network to improve the accuracy of predictions. In a subsequent training phase, the pre-trained neural network is trained using real-world training data by updating only a first subset of the parameters associated with feature extraction, while a second subset of the parameters more associated with policies remains fixed.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: December 12, 2023
    Assignee: NVIDIA Corporation
    Inventor: Bernhard Firner
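
A short sketch of the two-phase scheme in the entry for patent 11841458 above: after pre-training on synthetic data, only the feature-extraction parameters are updated on real-world data while the policy parameters stay fixed. The split into `features` and `policy` modules is an illustrative assumption.

```python
import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(64, 128), nn.ReLU())   # feature extraction
        self.policy = nn.Linear(128, 4)                                 # policy head

    def forward(self, x):
        return self.policy(self.features(x))

net = DrivingNet()
# Phase 1 (synthetic data): all parameters trainable - omitted here.

# Phase 2 (real-world data): freeze the policy subset, fine-tune only the feature extractor.
for p in net.policy.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD([p for p in net.parameters() if p.requires_grad], lr=1e-3)

x, target = torch.rand(8, 64), torch.rand(8, 4)
loss = nn.functional.mse_loss(net(x), target)
loss.backward()
optimizer.step()
```
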
  • Patent number: 11836890
    Abstract: An image processing apparatus applies an image to a first learning network model to optimize the edges of the image, applies the image to a second learning network model to optimize the texture of the image, and applies a first weight to the resulting edge-optimized first image and a second weight to the texture-optimized second image, based on information on the edge areas and the texture areas of the image, to acquire an output image.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: December 5, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Cheon Lee, Donghyun Kim, Yongsup Park, Jaeyeon Park, Iljun Ahn, Hyunseung Lee, Taegyoung Ahn, Youngsu Moon, Tammy Lee
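
A numpy sketch of the weighted combination in the entry for patent 11836890 above: the edge-optimized image and the texture-optimized image are blended with weights derived from edge-area information. The per-pixel weight construction is an assumption.

```python
import numpy as np

def blend_outputs(edge_optimized, texture_optimized, edge_map):
    """Per-pixel blend: edge regions favor the edge branch, flat regions the texture branch."""
    w_edge = np.clip(edge_map, 0.0, 1.0)         # first weight, from edge-area information
    w_texture = 1.0 - w_edge                     # second weight, for texture areas
    return w_edge * edge_optimized + w_texture * texture_optimized

h, w = 64, 64
edge_img = np.random.rand(h, w)        # output of the first (edge) learning network model
texture_img = np.random.rand(h, w)     # output of the second (texture) learning network model
edge_map = np.random.rand(h, w)        # e.g. a normalized gradient magnitude
output = blend_outputs(edge_img, texture_img, edge_map)
```
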
  • Patent number: 11830081
    Abstract: Media, methods, and systems are disclosed for applying a computer-implemented model to a table of computed values to identify one or more anomalies. One or more input forms having a plurality of input form field values is received. The input form field values are automatically parsed into a set of computer-generated candidate standard field values. The set of candidate standard field values are automatically normalized into a corresponding set of normalized field values, based on a computer-automated input normalization model. An automated review model controller is applied to automatically identify a review model to apply to the set of normalized field values, based on certain predetermined target field values. The automatically identified review model is then applied to the set of normalized inputs, and in response to detecting an anomaly, a field value is flagged accordingly.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: November 28, 2023
    Assignee: HRB Innovations, Inc.
    Inventors: Zhi Zheng, Jason N. Ward, Benjamin A. Kite
  • Patent number: 11823490
    Abstract: Systems and methods for image processing are described. One or more embodiments of the present disclosure identify a latent vector representing an image of a face, identify a target attribute vector representing a target attribute for the image, generate a modified latent vector using a mapping network that converts the latent vector and the target attribute vector into a hidden representation having fewer dimensions than the latent vector, wherein the modified latent vector is generated based on the hidden representation, and generate a modified image based on the modified latent vector, wherein the modified image represents the face with the target attribute.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: November 21, 2023
    Assignee: ADOBE, INC.
    Inventors: Ratheesh Kalarot, Siavash Khodadadeh, Baldo Faieta, Shabnam Ghadar, Saeid Motiian, Wei-An Lin, Zhe Lin
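
A brief sketch of the mapping network in the entry for patent 11823490 above: the image latent and the target attribute vector are compressed into a hidden representation with fewer dimensions than the latent, and the modified latent is generated from that hidden representation. All dimensions and the residual-style decode are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttributeMappingNetwork(nn.Module):
    def __init__(self, latent_dim=512, attr_dim=16, hidden_dim=128):
        super().__init__()
        assert hidden_dim < latent_dim           # hidden representation has fewer dims than the latent
        self.encode = nn.Sequential(nn.Linear(latent_dim + attr_dim, hidden_dim), nn.ReLU())
        self.decode = nn.Linear(hidden_dim, latent_dim)

    def forward(self, latent, target_attr):
        hidden = self.encode(torch.cat([latent, target_attr], dim=1))
        return latent + self.decode(hidden)      # modified latent built from the hidden representation

mapper = AttributeMappingNetwork()
modified_latent = mapper(torch.randn(1, 512), torch.randn(1, 16))
# `modified_latent` would then be decoded by the face generator to obtain the modified image.
```
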
  • Patent number: 11822620
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to optimizing the accuracy of local feature detection in a variety of physical environments. Homographic adaptation for facilitating personalization of local feature models to specific target environments is formulated in a bilevel optimization framework instead of relying on conventional randomization techniques. Models for extraction of local image features can be adapted according to homography transformations that are determined to be most relevant or optimal for a user's target environment.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: November 21, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Vibhav Vineet, Ondrej Miksik, Vishnu Sai Rao Suresh Lokhande
  • Patent number: 11816871
    Abstract: Methods and devices are provided for processing image data on a sub-frame portion basis using layers of a convolutional neural network. The processing device comprises memory and a processor. The processor is configured to receive frames of image data comprising sub-frame portions, schedule a first sub-frame portion of a first frame to be processed by a first layer of the convolutional neural network when the first sub-frame portion is available for processing, process the first sub-frame portion by the first layer and continue the processing of the first sub-frame portion by the first layer when it is determined that there is sufficient image data available for the first layer to continue processing of the first sub-frame portion. Processing on a sub-frame portion basis continues for subsequent layers such that processing by a layer can begin as soon as sufficient data is available for the layer.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: November 14, 2023
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Tung Chuen Kwong, David Porpino Sobreira Marques, King Chiu Tam, Shilpa Rajagopalan, Benjamin Koon Pan Chan, Vickie Youmin Wu
  • Patent number: 11806175
    Abstract: A system for few-view computed tomography (CT) image reconstruction is described. The system includes a preprocessing module, a first generator network, and a discriminator network. The preprocessing module is configured to apply a ramp filter to an input sinogram to yield a filtered sinogram. The first generator network is configured to receive the filtered sinogram, to learn a filtered back-projection operation and to provide a first reconstructed image as output. The first reconstructed image corresponds to the input sinogram. The discriminator network is configured to determine whether a received image corresponds to the first reconstructed image or a corresponding ground truth image. The generator network and the discriminator network correspond to a Wasserstein generative adversarial network (WGAN). The WGAN is optimized using an objective function based, at least in part, on a Wasserstein distance and based, at least in part, on a gradient penalty.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: November 7, 2023
    Assignee: Rensselaer Polytechnic Institute
    Inventors: Huidong Xie, Ge Wang, Hongming Shan, Wenxiang Cong
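
A condensed sketch of the WGAN-with-gradient-penalty objective referenced in the entry for patent 11806175 above; this is the standard WGAN-GP critic loss rather than the patent's full few-view reconstruction pipeline, and the network shapes are illustrative.

```python
import torch
import torch.nn as nn

def critic_loss_wgan_gp(critic, real, fake, gp_weight=10.0):
    """Wasserstein critic loss with a gradient penalty on interpolated samples."""
    wasserstein = critic(fake).mean() - critic(real).mean()
    eps = torch.rand(real.size(0), 1, 1, 1)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return wasserstein + gp_weight * penalty

critic = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 64), nn.ReLU(), nn.Linear(64, 1))
real = torch.rand(4, 1, 64, 64)          # e.g. ground-truth reconstructions
fake = torch.rand(4, 1, 64, 64)          # generator output for the same sinograms
loss = critic_loss_wgan_gp(critic, real, fake)
loss.backward()
```
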
  • Patent number: 11804029
    Abstract: The present disclosure relates to a hierarchical constraint (HC) based method and system for classifying fine-grained graptolite images. The method includes: constructing a graptolite fossil dataset; extracting features from graptolite images; calculating the similarity between graptolite images and performing weighting according to the genetic relationship among species to obtain a weighted HC loss function (HC-Loss) over all graptolite images; calculating a cross-entropy loss (CE-Loss); taking a weighted sum of the HC-Loss and the CE-Loss as the total loss function in the training stage; and performing model training. The system of the present disclosure includes a processor and a memory.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: October 31, 2023
    Assignees: Nanjing Institute of Geology and Palaeontology, CAS, Tianjin University
    Inventors: Honghe Xu, Yaohua Pan, Zhibin Niu
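
A minimal sketch of the combined objective in the entry for patent 11804029 above: a hierarchy-weighted term and the cross-entropy term are summed with a weighting factor. The particular form of the HC term used here (pairwise feature similarity weighted by shared genus) is an assumption introduced only to make the weighted sum concrete.

```python
import torch
import torch.nn.functional as F

def hc_loss(features, genus_ids):
    """Toy hierarchical-constraint term: pull together features of related genera."""
    sim = F.cosine_similarity(features.unsqueeze(1), features.unsqueeze(0), dim=2)
    related = (genus_ids.unsqueeze(1) == genus_ids.unsqueeze(0)).float()
    return ((1.0 - sim) * related).mean()        # penalize dissimilarity among relatives

def total_loss(logits, labels, features, genus_ids, hc_weight=0.5):
    ce = F.cross_entropy(logits, labels)         # CE-Loss
    hc = hc_loss(features, genus_ids)            # HC-Loss (toy form)
    return ce + hc_weight * hc                   # weighted sum used during training

logits, feats = torch.randn(8, 5), torch.randn(8, 32)
labels, genus = torch.randint(0, 5, (8,)), torch.randint(0, 3, (8,))
print(total_loss(logits, labels, feats, genus))
```
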
  • Patent number: 11804036
    Abstract: A person re-identification method based on perspective-guided multi-adversarial attention is provided. The deep convolutional neural network includes a feature learning module, a multi-adversarial module, and a perspective-guided attention mechanism module. In the multi-adversarial module, each stage of the basic network of the feature learning module is followed by a global pooling layer and a perspective discriminator. The perspective-guided attention mechanism module comprises an attention map generator and the perspective discriminator. Training of the deep convolutional neural network includes learning of the feature learning module, learning of the multi-adversarial module, and learning of the perspective-guided attention mechanism module. The proposed method uses the trained deep convolutional neural network to extract features of the testing images, and uses a Euclidean distance to perform feature matching between images in a query set and images in a gallery set.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: October 31, 2023
    Assignee: WUHAN UNIVERSITY
    Inventors: Bo Du, Fangyi Liu, Mang Ye
  • Patent number: 11780095
    Abstract: A machine learning device that learns an operation of a robot for picking up, by a hand unit, any of a plurality of objects placed in a random fashion, including a bulk-loaded state, includes a state variable observation unit that observes a state variable representing a state of the robot, including data output from a three-dimensional measuring device that obtains a three-dimensional map for each object, an operation result obtaining unit that obtains a result of a picking operation of the robot for picking up the object by the hand unit, and a learning unit that learns a manipulated variable including command data for commanding the robot to perform the picking operation of the object, in association with the state variable of the robot and the result of the picking operation, upon receiving output from the state variable observation unit and output from the operation result obtaining unit.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: October 10, 2023
    Assignees: FANUC CORPORATION, PREFERRED NETWORKS, INC.
    Inventors: Takashi Yamazaki, Takumi Oyama, Shun Suyama, Kazutaka Nakayama, Hidetoshi Kumiya, Hiroshi Nakagawa, Daisuke Okanohara, Ryosuke Okuta, Eiichi Matsumoto, Keigo Kawaai
  • Patent number: 11775838
    Abstract: Techniques for training a machine-learning (ML) model for captioning images are disclosed. A plurality of feature vectors and a plurality of visual attention maps are generated by a visual model of the ML model based on an input image. Each of the plurality of feature vectors corresponds to a different region of the input image. A plurality of caption attention maps are generated by an attention model of the ML model based on the plurality of feature vectors. An attention penalty is calculated based on a comparison between the caption attention maps and the visual attention maps. A loss function is calculated based on the attention penalty. One or both of the visual model and the attention model are trained using the loss function.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: October 3, 2023
    Assignee: Ancestry.com Operations Inc.
    Inventors: Jiayun Li, Mohammad K. Ebrahimpour, Azadeh Moghtaderi, Yen-Yun Yu
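
A short sketch of the attention penalty in the entry for patent 11775838 above: the caption attention maps are compared against the visual model's attention maps and the mismatch is added to the training loss. The mean-squared comparison and the map shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def attention_penalty(caption_attn, visual_attn):
    """Penalize disagreement between caption attention and visual attention maps.

    Both tensors: (steps, num_regions), each row a distribution over image regions.
    """
    return F.mse_loss(caption_attn, visual_attn)

def captioning_loss(word_logits, word_targets, caption_attn, visual_attn, penalty_weight=1.0):
    ce = F.cross_entropy(word_logits, word_targets)            # usual captioning term
    return ce + penalty_weight * attention_penalty(caption_attn, visual_attn)

logits = torch.randn(6, 1000)                                  # per-step vocabulary logits
targets = torch.randint(0, 1000, (6,))
cap_attn = torch.softmax(torch.randn(6, 49), dim=1)            # caption attention over 7x7 regions
vis_attn = torch.softmax(torch.randn(6, 49), dim=1)            # visual model attention maps
print(captioning_loss(logits, targets, cap_attn, vis_attn))
```
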
  • Patent number: 11768913
    Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
    Type: Grant
    Filed: November 3, 2022
    Date of Patent: September 26, 2023
    Assignee: Vizit Labs, Inc.
    Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
  • Patent number: 11763466
    Abstract: A system comprising an encoder neural network, a scene structure decoder neural network, and a motion decoder neural network. The encoder neural network is configured to: receive a first image and a second image; and process the first image and the second image to generate an encoded representation of the first image and the second image. The scene structure decoder neural network is configured to process the encoded representation to generate a structure output characterizing a structure of a scene depicted in the first image. The motion decoder neural network is configured to process the encoded representation to generate a motion output characterizing motion between the first image and the second image.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: September 19, 2023
    Assignee: Google LLC
    Inventors: Cordelia Luise Schmid, Sudheendra Vijayanarasimhan, Susanna Maria Ricco, Bryan Andrew Seybold, Rahul Sukthankar, Aikaterini Fragkiadaki
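
A compact sketch of the architecture in the entry for patent 11763466 above: one encoder processes the image pair, a structure decoder produces a scene-structure output (a per-pixel depth-like map here, as an assumption) and a motion decoder produces a motion output. All layer shapes are illustrative.

```python
import torch
import torch.nn as nn

class TwoFrameSceneNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                    # shared encoding of both frames
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.structure_decoder = nn.Conv2d(64, 1, 3, padding=1)   # e.g. a depth-like map
        self.motion_decoder = nn.Sequential(                      # e.g. global motion parameters
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 6))

    def forward(self, first_image, second_image):
        encoded = self.encoder(torch.cat([first_image, second_image], dim=1))
        return self.structure_decoder(encoded), self.motion_decoder(encoded)

structure, motion = TwoFrameSceneNet()(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(structure.shape, motion.shape)   # (1, 1, 16, 16) and (1, 6)
```
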