Neural Networks Patents (Class 382/156)
-
Patent number: 12260516
Abstract: The present disclosure relates to an image processing method for correcting colors in an input image representing a scene, the image processing method including: processing the input image with a machine learning model, wherein the machine learning model is previously trained to detect a predefined number N>1 of sources of light illuminating the scene and to generate N estimated illuminant images associated respectively to the N sources of light, wherein each estimated illuminant image includes an estimated color of the light emitted by the respective source of light and an estimated contribution image; generating a total illuminant image by using the N estimated illuminant images; and generating an output image by correcting the colors in the input image based on the total illuminant image.
Type: Grant
Filed: October 20, 2021
Date of Patent: March 25, 2025
Assignee: 7 SENSING SOFTWARE
Inventors: Matis Hudon, Elnaz Soleimani
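The per-pixel combination the abstract describes can be sketched in plain Python. This is a minimal illustration, not the patented method: `combine_illuminants` and `correct_colors` are hypothetical names, and the division-based (von Kries-style) correction is an assumption about how the total illuminant image would be applied.

```python
def combine_illuminants(colors, contributions):
    # Total illuminant at each pixel: sum over the N sources of
    # (per-pixel contribution weight) x (estimated RGB of that source).
    h, w = len(contributions[0]), len(contributions[0][0])
    total = [[[0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for color, contrib in zip(colors, contributions):
        for y in range(h):
            for x in range(w):
                for c in range(3):
                    total[y][x][c] += contrib[y][x] * color[c]
    return total

def correct_colors(image, total_illuminant, eps=1e-9):
    # Hypothetical von Kries-style correction: divide each pixel channel
    # by the corresponding channel of the local total illuminant.
    return [[[p / (l + eps) for p, l in zip(px, il)]
             for px, il in zip(row_i, row_l)]
            for row_i, row_l in zip(image, total_illuminant)]
```

With N=2 sources of colors (2, 1, 1) and (0, 1, 0) each contributing weight 0.5 at a pixel, the total illuminant at that pixel is (1.0, 1.0, 0.5).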
-
Patent number: 12260633
Abstract: A defect detection method includes obtaining a first image corresponding to a first area on an object revealing an apparent defect, the first area bounding the area showing the apparent defect; selecting a detection model according to a size of the first image, the detection model selected being one of a group of three models based on dimensions of image, the image being in one of portrait, landscape, or regular square proportions; and detecting and confirming or denying a defect on the first image according to the detection model and outputting a detection result; the use of only three possible AI models reduces the likelihood of mis-determinations. An electronic device and a non-volatile storage medium therein, for performing the above-described method, are also disclosed.
Type: Grant
Filed: May 24, 2022
Date of Patent: March 25, 2025
Assignee: Fulian Precision Electronics (Tianjin) Co., LTD.
Inventors: Fu-Yuan Tan, Ching-Han Cheng
-
Patent number: 12260523
Abstract: An image recognition method includes: receiving an input image of a first quality; extracting an input feature of a second quality of the input image from the input image by inputting the input image to an encoding model in an image recognizing model; and generating a recognition result for the input image based on the input feature.
Type: Grant
Filed: May 12, 2021
Date of Patent: March 25, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Insoo Kim, Seungju Han, Seong-jin Park, Jiwon Baek, Jaejoon Han
-
Patent number: 12254633
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
Type: Grant
Filed: March 18, 2022
Date of Patent: March 18, 2025
Assignee: Adobe Inc.
Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
-
Patent number: 12256179
Abstract: Systems and methods for image processing, and specifically for color reconstruction of an output signal from a multispectral imaging sensor (MIS), are described. Embodiments of the present disclosure receive image sensor data from an image sensor of a camera device; apply a non-linear color space mapping to the image sensor data using a neural network to obtain image data, wherein the non-linear color space mapping comprises a non-negative homogeneous function; and store the image data in a memory of the camera device.
Type: Grant
Filed: November 14, 2022
Date of Patent: March 18, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yotam Ater, Heejin Choi, Natan Bibelnik, Woo-shik Kim, Evgeny Soloveichik, Ortal Glatt
-
Patent number: 12249183
Abstract: The present disclosure discloses an apparatus and a method for detecting a facial pose, an image processing system, and a storage medium. The apparatus comprises: an obtaining unit to obtain at least three keypoints of at least one face from an input image based on a pre-generated neural network, wherein coordinates of the keypoints obtained via a layer in the neural network for obtaining coordinates are three-dimensional coordinates; and a determining unit to determine, for the at least one face, a pose of the face based on the obtained keypoints, wherein the determined facial pose includes at least an angle. According to the present disclosure, the accuracy of the three-dimensional coordinates of the facial keypoints can be improved, thus the detection precision of a facial pose can be improved.
Type: Grant
Filed: March 4, 2022
Date of Patent: March 11, 2025
Assignee: Canon Kabushiki Kaisha
Inventors: Qiao Wang, Deyu Wang, Kotaro Kitajima, Naoko Watazawa, Tsewei Chen, Wei Tao, Dongchao Wen
-
Patent number: 12236665
Abstract: A processor-implemented method with neural network training includes: determining first backbone feature data corresponding to each input data by applying, to a first neural network model, two or more sets of the input data of the same scene, respectively; determining second backbone feature data corresponding to each input data by applying, to a second neural network model, the two or more sets of the input data, respectively; determining projection-based first embedded data and dropout-based first view data from the first backbone feature data; determining projection-based second embedded data and dropout-based second view data from the second backbone feature data; and training either one or both of the first neural network model and the second neural network model based on a loss determined based on a combination of any two or more of the first embedded data, the first view data, the second embedded data, the second view data, and an embedded data clustering result.
Type: Grant
Filed: December 14, 2021
Date of Patent: February 25, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hee Min Choi, Hyoa Kang
-
Patent number: 12229966
Abstract: In part, the disclosure relates to methods and systems suitable for evaluating image data from a patient on a real-time or substantially real-time basis using machine learning (ML) methods and systems. It describes systems and methods for improving diagnostic tools for end users, such as cardiologists and imaging specialists, using machine learning techniques applied to specific problems associated with intravascular images that have polar representations. Further, given the use of rotating probes to obtain image data for OCT, IVUS, and other imaging modalities, dealing with the two coordinate systems associated therewith creates challenges. The present disclosure addresses these and numerous other challenges relating to solving the problem of quickly imaging and diagnosing a patient such that stenting and other procedures may be applied during a single session in the cath lab.
Type: Grant
Filed: January 11, 2023
Date of Patent: February 18, 2025
Assignee: LightLab Imaging, Inc.
Inventors: Shimin Li, Ajay Gopinath, Kyle Savidge
-
Patent number: 12229920
Abstract: Described herein are means for implementing fixed-point image-to-image translation using improved Generative Adversarial Networks (GANs). For instance, an exemplary system is specially configured for implementing a new framework, called a Fixed-Point GAN, which improves upon prior known methodologies by enhancing the quality of the images generated through global, local, and identity transformation. The Fixed-Point GAN as introduced and described herein improves many applications dependent on image-to-image translation, including those in the field of medical image processing for the purposes of disease detection and localization. Other related embodiments are disclosed.
Type: Grant
Filed: September 16, 2021
Date of Patent: February 18, 2025
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Jianming Liang, Zongwei Zhou, Nima Tajbakhsh, Md Mahfuzur Rahman Siddiquee
-
Patent number: 12223429
Abstract: In various examples, a two-dimensional (2D) and three-dimensional (3D) deep neural network (DNN) is implemented to fuse 2D and 3D object detection results for classifying objects. For example, regions of interest (ROIs) and/or bounding shapes corresponding thereto may be determined using one or more region proposal networks (RPNs)—such as an image-based RPN and/or a depth-based RPN. Each ROI may be extended into a frustum in 3D world-space, and a point cloud may be filtered to include only points from within the frustum. The remaining points may be voxelated to generate a volume in 3D world space, and the volume may be applied to a 3D DNN to generate one or more vectors. The one or more vectors, in addition to one or more additional vectors generated using a 2D DNN processing image data, may be applied to a classifier network to generate a classification for an object.
Type: Grant
Filed: December 6, 2023
Date of Patent: February 11, 2025
Assignee: NVIDIA Corporation
Inventors: Innfarn Yoo, Rohit Taneja
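The frustum-filter and voxelation steps described above can be sketched with a simple pinhole projection. This is an illustrative toy, not the patented pipeline: the function names, the focal-length parameter `f`, and the occupancy-set voxel representation are all assumptions.

```python
def frustum_filter(points, roi, f=1.0):
    # Keep 3D points whose pinhole projection (u = f*x/z, v = f*y/z)
    # lands inside the 2D region of interest (u0, v0, u1, v1);
    # points behind the camera (z <= 0) are discarded.
    u0, v0, u1, v1 = roi
    kept = []
    for x, y, z in points:
        if z <= 0:
            continue
        u, v = f * x / z, f * y / z
        if u0 <= u <= u1 and v0 <= v <= v1:
            kept.append((x, y, z))
    return kept

def voxelize(points, origin, size, dims):
    # Bin the surviving points into a dims[0] x dims[1] x dims[2]
    # occupancy grid, represented here as a set of occupied voxel indices.
    ox, oy, oz = origin
    occupied = set()
    for x, y, z in points:
        i = int((x - ox) / size)
        j = int((y - oy) / size)
        k = int((z - oz) / size)
        if 0 <= i < dims[0] and 0 <= j < dims[1] and 0 <= k < dims[2]:
            occupied.add((i, j, k))
    return occupied
```

A point at (5, 0, 2) projects to u = 2.5 and falls outside an ROI of [-1, 1] x [-1, 1], so it is dropped before voxelation.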
-
Patent number: 12217443
Abstract: A depth image generation method, apparatus, storage medium, and electronic device are provided. The method includes: acquiring a plurality of target images; performing multi-stage convolution processing on the plurality of target images through a plurality of convolutional layers in a convolution model to obtain feature map sets respectively outputted by the plurality of convolutional layers; performing view aggregation on a plurality of feature maps in each feature map set respectively to obtain an aggregated feature corresponding to each feature map set; and performing fusion processing on the plurality of obtained aggregated features to obtain a depth image. The plurality of acquired target images are obtained by photographing the target object from different views respectively, so that the plurality of obtained target images include information from different angles, which enriches the information content of the acquired target images.
Type: Grant
Filed: April 6, 2022
Date of Patent: February 4, 2025
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
Inventors: Runze Zhang, Hongwei Yi, Ying Chen, Shang Xu, Yu Wing Tai
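The view-aggregation step above can be sketched with an element-wise reduction across per-view feature maps. This is a deliberately simplified stand-in: the abstract's aggregation is learned, whereas the mean and sum used here (and the names `aggregate_views`/`fuse_stages`, plus the assumption that all stages have equal-length features) are illustrative only.

```python
def aggregate_views(feature_maps):
    # Element-wise mean across the per-view feature maps of one stage;
    # a simple stand-in for the learned view aggregation.
    n = len(feature_maps)
    return [sum(vals) / n for vals in zip(*feature_maps)]

def fuse_stages(aggregated):
    # Crude multi-stage fusion: element-wise sum over the aggregated
    # features of each stage (real pipelines would also upsample).
    return [sum(vals) for vals in zip(*aggregated)]
```

Two views with features [1, 2] and [3, 4] aggregate to [2, 3]; fusing that with a second stage's [1, 1] gives [3, 4].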
-
Patent number: 12210969
Abstract: An image classification system is provided for determining a likely classification of an image using multiple machine learning models that share a base machine learning model. The image classification system may be a browser-based system on a user computing device that obtains multiple machine learning models over a network from a remote system once, stores the models locally in the image classification system, and uses the models multiple times without needing to subsequently request the machine learning models again from the remote system. The image classification system may therefore determine a likely classification associated with an image by running the machine learning models on a user computing device.
Type: Grant
Filed: March 9, 2022
Date of Patent: January 28, 2025
Assignee: Expedia, Inc.
Inventors: Li Wen, Zhanpeng Huo, Jingya Jiang
-
Patent number: 12210835
Abstract: In one embodiment, a method includes accessing an image and a natural-language question regarding the image and extracting, from the image, a first set of image features at a first level of granularity and a second set of image features at a second level of granularity. The method further includes extracting, from the question, a first set of text features at the first level of granularity and a second set of text features at the second level of granularity; generating a first output representing an alignment between the first set of image features and the first set of text features; generating a second output representing an alignment between the second set of image features and the second set of text features; and determining an answer to the question based on the first output and the second output.
Type: Grant
Filed: September 16, 2022
Date of Patent: January 28, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Peixi Xiong, Yilin Shen, Hongxia Jin
-
Patent number: 12211178
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for combining digital images. In particular, in one or more embodiments, the disclosed systems combine latent codes of a source digital image and a target digital image utilizing a blending network to determine a combined latent encoding and generate a combined digital image from the combined latent encoding utilizing a generative neural network. In some embodiments, the disclosed systems determine an intersection face mask between the source digital image and the combined digital image utilizing a face segmentation network and combine the source digital image and the combined digital image utilizing the intersection face mask to generate a blended digital image.
Type: Grant
Filed: April 21, 2022
Date of Patent: January 28, 2025
Assignee: Adobe Inc.
Inventors: Tobias Hinz, Shabnam Ghadar, Richard Zhang, Ratheesh Kalarot, Jingwan Lu, Elya Shechtman
-
Patent number: 12189735
Abstract: Systems and methods for adaptive verification may include a memory and a processor. The memory may be configured to store a plurality of animation templates. The processor may be configured to perform a first challenge process to request a first user image from a first predetermined distance, receive the first user image, request a second user image from a second predetermined distance, receive the second user image, transmit the first user image and the second user image for a verification process, the verification process including identification of one or more user attributes, receive a third user image associated with the one or more user attributes identified during the verification, and display the third user image including an adaptation, wherein the adaptation is generated for at least one of the plurality of animation templates, the adaptation illustrating the one or more user attributes.
Type: Grant
Filed: June 28, 2021
Date of Patent: January 7, 2025
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Laura Lee Boykin, Zainab Zaki, Joanna Weber
-
Patent number: 12190484
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
Type: Grant
Filed: March 15, 2021
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Elya Shechtman
-
Patent number: 12183106
Abstract: An apparatus including a processor caused to receive document images, each including representations of characters. The processor is caused to parse each document image to extract, based on structure type, subsets of characters, to generate a text encoding for that document image. For each document, the processor is caused to extract visual features to generate a visual encoding for that document image, each visual feature associated with a subset of characters. The processor is caused to generate parsed documents, each parsed document uniquely associated with a document image and based on the text and visual encoding for that document image. For each parsed document, the processor is caused to identify sections uniquely associated with section type. The processor is caused to train machine learning models, each machine learning model associated with one section type and trained using a portion of each parsed document associated with that section type.
Type: Grant
Filed: June 28, 2024
Date of Patent: December 31, 2024
Assignee: Greenhouse Software, Inc.
Inventor: Triantafyllos Xylouris
-
Patent number: 12175060
Abstract: An electronic device and a controlling method thereof are provided. An electronic device includes a memory configured to store at least one instruction and a processor configured to execute the at least one instruction and operate as instructed by the at least one instruction. The processor is configured to: obtain a first image; based on receiving a first user command to correct the first image, obtain a second image by correcting the first image; based on the first image and the second image, train a neural network model; and based on receiving a second user command to correct a third image, obtain a fourth image by correcting the third image using the trained neural network model.
Type: Grant
Filed: January 22, 2021
Date of Patent: December 24, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Chanwon Seo, Youngeun Lee, Eunseo Kim, Myungjin Eom
-
Patent number: 12169784
Abstract: Today, artificial neural networks are trained on large sets of manually tagged images. Generally, for better training, the training data should be as large as possible. Unfortunately, manually tagging images is time consuming and susceptible to error, making it difficult to produce the large sets of tagged data used to train artificial neural networks. To address this problem, the inventors have developed a smart tagging utility that uses a feature extraction unit and a fast-learning classifier to learn tags and tag images automatically, reducing the time to tag large sets of data. The feature extraction unit and fast-learning classifiers can be implemented as artificial neural networks that associate a label with features extracted from an image and tag similar features from the image or other images with the same label. Moreover, the smart tagging system can learn from user adjustment to its proposed tagging. This reduces tagging time and errors.
Type: Grant
Filed: August 8, 2022
Date of Patent: December 17, 2024
Assignee: Neurala, Inc.
Inventors: Lucas Neves, Liam Debeasi, Heather Ames Versace, Jeremy Wurbs, Massimiliano Versace, Warren Katz, Anatoli Gorchet
-
Method and system for a high-frequency attention network for efficient single image super-resolution
Patent number: 12169913
Abstract: Example aspects include techniques for implementing a high-frequency attention network for single image super-resolution. These techniques may include extracting a plurality of features from an original image input into a CNN to generate a feature map, and restoring one or more high-frequency details of the original image via an efficient residual block (ERB) and a high-frequency attention block (HFAB) configured to assign a scaling factor to one or more high-frequency areas. In addition, the techniques may include generating reconstruction input information by performing an element-wise operation on the one or more high-frequency details and cross-connection information from the feature map and performing, by the CNN, an enhancement operation on the reconstruction input information to generate an enhanced image.
Type: Grant
Filed: February 10, 2022
Date of Patent: December 17, 2024
Assignee: LEMON INC.
Inventor: Ding Liu
-
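The scale-the-high-frequencies idea can be illustrated on a 1D signal. This is a toy sketch, not the HFAB architecture: the 3-tap moving average used to separate frequencies, the fixed `scale` factor (learned per-area in the abstract), and the function names are all assumptions.

```python
def moving_average(xs):
    # 3-tap low-pass filter with edge replication.
    padded = [xs[0]] + list(xs) + [xs[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(xs))]

def hf_attention(features, scale):
    # Split features into low- and high-frequency parts, then boost the
    # high-frequency residual by the attention scaling factor.
    low = moving_average(features)
    high = [f - l for f, l in zip(features, low)]
    return [l + scale * h for l, h in zip(low, high)]

def reconstruct(hf_out, skip):
    # Element-wise combination with the cross-connection from the feature map.
    return [a + b for a, b in zip(hf_out, skip)]
```

For the spike [0, 3, 0] the low-pass output is [1, 1, 1]; a scale of 2 doubles the residual, giving [-1, 5, -1], so edges and fine detail are sharpened while the smooth base is untouched.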
Patent number: 12169955
Abstract: An image acquisition unit 110 acquires a plurality of images. The plurality of images include an object to be inferred. An image cut-out unit 120 cuts out an object region including the object from each of the plurality of images acquired by the image acquisition unit 110. An importance generation unit 130 generates importance information by processing the object region cut out by the image cut-out unit 120. The importance information indicates the importance of the object region when an object inference model is generated, and is generated for each object region, that is, for each image acquired by the image acquisition unit 110. A learning data generation unit 140 stores a plurality of object regions cut out by the image cut-out unit 120 and a plurality of pieces of importance information generated by the importance generation unit 130 in a learning data storage unit 150 as at least a part of the learning data.
Type: Grant
Filed: January 28, 2022
Date of Patent: December 17, 2024
Assignee: NEC CORPORATION
Inventors: Tomokazu Kaneko, Katsuhiko Takahashi, Makoto Terao, Soma Shiraishi, Takami Sato, Yu Nabeto, Ryosuke Sakai
-
Patent number: 12148146
Abstract: A computer system for mapping coatings to a spatial appearance space may receive coating spatial appearance variables of a target coating from a coating-measurement instrument. The computer system may generate spatial appearance space coordinates for the target coating by mapping each of the coating spatial appearance variables to an individual axis of a multidimensional coordinate system. The computer system may identify particular spatial appearance space coordinates from the identified spatial appearance space coordinates associated with the potentially matching reference coatings that are associated with a smallest spatial-appearance-space distance from the spatial appearance space coordinates of the target coating. Further, the computer system may display a visual interface element indicating a particular reference coating that is associated with the particular spatial appearance space coordinates as a proposed spatial appearance match to the target coating.
Type: Grant
Filed: September 18, 2020
Date of Patent: November 19, 2024
Assignee: PPG Industries Ohio, Inc.
Inventors: Anthony J. Foderaro, Alison M. Norris
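The coordinate mapping and smallest-distance search above reduce to a nearest-neighbor lookup. A minimal sketch, assuming hypothetical axis names and a plain Euclidean metric (the patent does not specify the distance function):

```python
def spatial_coords(variables, axes):
    # Map each measured appearance variable onto its own axis of the
    # multidimensional coordinate system, in a fixed axis order.
    return tuple(variables[a] for a in axes)

def closest_reference(target, references):
    # The reference coating with the smallest spatial-appearance-space
    # (here Euclidean) distance to the target wins.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: dist2(references[name], target))
```

With a target at (1.0, 2.0) and references A at (0, 0) and B at (1.1, 2.1), B is proposed as the match.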
-
Patent number: 12141421
Abstract: An electronic device and a controlling method thereof are provided. An electronic device includes a memory configured to store at least one instruction and a processor configured to execute the at least one instruction and operate as instructed by the at least one instruction. The processor is configured to: obtain a first image; based on receiving a first user command to correct the first image, obtain a second image by correcting the first image; based on the first image and the second image, train a neural network model; and based on receiving a second user command to correct a third image, obtain a fourth image by correcting the third image using the trained neural network model.
Type: Grant
Filed: January 22, 2021
Date of Patent: November 12, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Chanwon Seo, Youngeun Lee, Eunseo Kim, Myungjin Eom
-
Patent number: 12141952
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for detecting and classifying an exposure defect in an image using neural networks trained via a limited amount of labeled training images. An image may be applied to a first neural network to determine whether the image includes an exposure defect. A detected defective image may be applied to a second neural network to determine an exposure defect classification for the image. The exposure defect classification can include severe underexposure, medium underexposure, mild underexposure, mild overexposure, medium overexposure, severe overexposure, and/or the like. The image may be presented to a user along with the exposure defect classification.
Type: Grant
Filed: September 30, 2022
Date of Patent: November 12, 2024
Assignee: Adobe Inc.
Inventors: Akhilesh Kumar, Zhe Lin, William Lawrence Marino
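The two-stage cascade (detect, then classify) can be sketched with a brightness heuristic standing in for both networks. Everything here is an illustrative assumption: the mean-brightness statistic, the thresholds, and the function names replace the trained models the abstract actually describes; only the severity labels come from the abstract.

```python
def mean_brightness(pixels):
    # Average intensity over a 2D grid of [0, 1] pixel values.
    flat = [v for row in pixels for v in row]
    return sum(flat) / len(flat)

def detect_defect(pixels, lo=0.35, hi=0.65):
    # Stage 1 (stand-in for the first network): flag images whose mean
    # brightness falls outside a "well exposed" band.
    m = mean_brightness(pixels)
    return m < lo or m > hi

def classify_defect(pixels):
    # Stage 2 (stand-in for the second network): bucket the defect by
    # severity, using the labels listed in the abstract.
    m = mean_brightness(pixels)
    if m < 0.35:
        if m < 0.10:
            return "severe underexposure"
        if m < 0.20:
            return "medium underexposure"
        return "mild underexposure"
    if m > 0.90:
        return "severe overexposure"
    if m > 0.80:
        return "medium overexposure"
    return "mild overexposure"
```

Only images flagged by stage 1 reach stage 2, mirroring the abstract's flow of applying the second network to detected defective images.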
-
Patent number: 12136251
Abstract: In accordance with one embodiment of the present disclosure, a method includes receiving an input image having an object and a background, intrinsically decomposing the object and the background into an input image data having a set of features, augmenting the input image data with a 2.5D differentiable renderer for each feature of the set of features to create a set of augmented images, and compiling the input image and the set of augmented images into a training data set for training a downstream task network.
Type: Grant
Filed: January 19, 2022
Date of Patent: November 5, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Sergey Zakharov, Rares Ambrus, Vitor Guizilini, Adrien Gaidon
-
Patent number: 12136148
Abstract: A display method, system, device, and related computer programs can present the classification performance of an artificial neural network in a form interpretable by humans. In these methods, systems, devices, and programs, a probability calculator (i.e., a classifier) based on an artificial neural network calculates a classification result for an input image in the form of a probability. The distribution of classification-result probabilities are displayed using at least one display axis of a graph as a probability axis.
Type: Grant
Filed: June 17, 2019
Date of Patent: November 5, 2024
Assignee: K.K. CYBO
Inventors: Keisuke Goda, Nao Nitta, Takeaki Sugimura
-
Patent number: 12131463
Abstract: An information processing apparatus calculates an index for a plant growth state with a neural network using fewer images to achieve training of the neural network. The apparatus includes first to N-th image analyzers (N≥2) each analyzing a cultivation area image of a plant cultivation area with a neural network to calculate a state index indicating a growth state of the plant in the cultivation area and including the neural network trained using cultivation area images each having a predetermined growth index classified into a corresponding class of first to N-th growth index classes, and a selector receiving an input of a cultivation area image for which the state index is calculated and causing one of the first to N-th image analyzers trained using cultivation area images classified into the same growth index class as the input cultivation area image to analyze the input cultivation area image.
Type: Grant
Filed: June 23, 2020
Date of Patent: October 29, 2024
Assignee: OMRON Corporation
Inventors: Xiangyu Zeng, Atsushi Hashimoto
-
Patent number: 12131548
Abstract: Disclosed is a method for training shallow convolutional neural networks for infrared target detection using a two-phase learning strategy that can converge to satisfactory detection performance, even with scale-invariance capability. In the first step, the aim is to ensure that only filters in the convolutional layer produce semantic features that serve the problem of target detection. L2-norm (Euclidian norm) is used as loss function for the stable training of semantic filters obtained from the convolutional layers. In the next step, only the decision layers are trained by transferring the weight values in the convolutional layers completely and freezing the learning rate. In this step, unlike the first, the L1-norm (mean-absolute-deviation) loss function is used.
Type: Grant
Filed: April 15, 2020
Date of Patent: October 29, 2024
Assignee: ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI
Inventors: Engin Uzun, Tolga Aksoy, Erdem Akagunduz
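The two-phase schedule (train everything with L2, then transfer and freeze the convolutional weights and fine-tune the decision layers with L1) can be demonstrated on a toy one-parameter-per-layer model. This is not the patented network: `w_conv` and `w_dec` are scalar stand-ins for the convolutional and decision layers, the hand-written gradients are a simplification, and all names are hypothetical.

```python
def train(samples, w_conv, w_dec, epochs, lr, loss="l2", freeze_conv=False):
    # Toy model y = w_dec * (w_conv * x); w_conv plays the convolutional
    # (semantic) filters, w_dec the decision layer. Gradients are written
    # out by hand; updates are applied sequentially for simplicity.
    for _ in range(epochs):
        for x, t in samples:
            err = w_dec * (w_conv * x) - t
            # dL/dy: 2*err for the L2 loss, sign(err) for the L1 loss.
            g = 2.0 * err if loss == "l2" else (
                1.0 if err > 0 else -1.0 if err < 0 else 0.0)
            if not freeze_conv:
                w_conv -= lr * g * w_dec * x
            w_dec -= lr * g * w_conv * x
    return w_conv, w_dec

# Phase 1: train both "layers" with the L2 (Euclidean norm) loss.
# Phase 2: transfer the conv weight, freeze it, and fine-tune only the
# decision layer with the L1 (mean-absolute-deviation) loss.
```

After phase 1 the product w_conv * w_dec approaches the target mapping; phase 2 then only nudges w_dec, leaving the "semantic filters" untouched.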
-
Patent number: 12133030
Abstract: A method for converting a source video content constrained to a first color space to a video content constrained to a second color space using an artificial intelligence machine-learning algorithm based on a creative profile.
Type: Grant
Filed: December 19, 2019
Date of Patent: October 29, 2024
Assignee: Warner Bros. Entertainment Inc.
Inventors: Michael Zink, Ha Nguyen
-
Patent number: 12131550
Abstract: In one example, a method is provided that includes receiving lidar data obtained by a lidar device. The lidar data includes a plurality of data points indicative of locations of reflections from an environment of the vehicle. The method includes receiving images of portions of the environment captured by a camera at different times. The method also includes determining locations in the images that correspond to a data point of the plurality of data points. Additionally, the method includes determining feature descriptors for the locations of the images and comparing the feature descriptors to determine that sensor data associated with at least one of the lidar device, the camera, or a pose sensor is accurate or inaccurate.
Type: Grant
Filed: December 30, 2020
Date of Patent: October 29, 2024
Assignee: Waymo LLC
Inventors: Colin Braley, Volodymyr Ivanchenko
-
Patent number: 12124879
Abstract: Provided is a control method of a deep neural network (DNN) accelerator for optimized data processing. The control method includes, based on a dataflow and a hardware mapping value of neural network data allocated to a first-level memory, calculating a plurality of offsets representing start components of a plurality of data tiles of the neural network data, based on receiving an update request for the neural network data from a second-level memory, identifying a data type of an update data tile corresponding to the received update request among the plurality of data tiles, identifying one or more components of the update data tile, based on the data type of the update data tile and an offset of the update data tile among the calculated plurality of offsets, and updating neural network data of the identified one or more components between the first-level memory and the second-level memory.
Type: Grant
Filed: March 29, 2023
Date of Patent: October 22, 2024
Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
Inventors: William Jinho Song, Bogil Kim, Chanho Park, Semin Koong, Taesoo Lim
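Offsets that mark the start component of each tile can be computed directly from the tensor and tile shapes. A minimal 2D sketch, assuming row-major storage and perfectly divisible dimensions; the real dataflow/mapping-dependent scheme in the abstract is more general:

```python
def tile_offsets(shape, tile_shape):
    # Flattened (row-major) offset of the start component of every data
    # tile when a rows x cols tensor is cut into tr x tc tiles.
    rows, cols = shape
    tr, tc = tile_shape
    offsets = []
    for r in range(0, rows, tr):
        for c in range(0, cols, tc):
            offsets.append(r * cols + c)
    return offsets
```

A 4x4 tensor cut into 2x2 tiles yields four tiles whose start components sit at flattened offsets 0, 2, 8, and 10; an update request can then be resolved to the components of one tile by its offset alone.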
-
Patent number: 12124533
Abstract: Embodiments are generally directed to methods and apparatuses of spatially sparse convolution module for visual rendering and synthesis. An embodiment of a method for image processing, comprising: receiving an input image by a convolution layer of a neural network to generate a plurality of feature maps; performing spatially sparse convolution on the plurality of feature maps to generate spatially sparse feature maps; and upsampling the spatially sparse feature maps to generate an output image.
Type: Grant
Filed: September 23, 2021
Date of Patent: October 22, 2024
Assignee: INTEL CORPORATION
Inventors: Anbang Yao, Ming Lu, Yikai Wang, Scott Janus, Sungye Kim
-
Patent number: 12124535
Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
Type: Grant
Filed: June 7, 2024
Date of Patent: October 22, 2024
Assignee: VIZIT LABS, INC.
Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
-
Patent number: 12112023
Abstract: An electronic device and a controlling method thereof are provided. An electronic device includes a memory configured to store at least one instruction and a processor configured to execute the at least one instruction and operate as instructed by the at least one instruction. The processor is configured to: obtain a first image; based on receiving a first user command to correct the first image, obtain a second image by correcting the first image; based on the first image and the second image, train a neural network model; and based on receiving a second user command to correct a third image, obtain a fourth image by correcting the third image using the trained neural network model.
Type: Grant
Filed: January 22, 2021
Date of Patent: October 8, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Chanwon Seo, Youngeun Lee, Eunseo Kim, Myungjin Eom
-
Patent number: 12106829
Abstract: The technology disclosed relates to artificial intelligence-based base calling: accessing a progression of per-cycle analyte channel sets generated for sequencing cycles of a sequencing run; processing, through a neural network-based base caller (NNBC), windows of per-cycle analyte channel sets in the progression for the windows of sequencing cycles of the sequencing run, such that the NNBC processes a subject window of per-cycle analyte channel sets and generates provisional base call predictions for three or more sequencing cycles in the subject window; using the NNBC to generate provisional base call predictions for a particular sequencing cycle from the multiple windows in which that cycle appeared at different positions; and determining a base call for the particular sequencing cycle based on the plurality of base call predictions.
Type: Grant
Filed: July 13, 2023
Date of Patent: October 1, 2024
Assignee: Illumina, Inc.
Inventors: Anindita Dutta, Gery Vessere, Dorna KashefHaghighi, Kishore Jaganathan, Amirali Kia
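The final step, combining the provisional calls a cycle received from the several windows in which it appeared, amounts to aggregating multiple predictions into one base call. A minimal majority-vote sketch; the actual NNBC likely combines predicted probabilities rather than discrete votes, so the voting rule here is an assumption.

```python
from collections import Counter

def consensus_base_call(provisional_calls):
    # provisional_calls: base predictions for one sequencing cycle,
    # one from each window in which the cycle appeared; return the
    # most frequent prediction as the final call.
    return Counter(provisional_calls).most_common(1)[0][0]
```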
-
Patent number: 12100196
Abstract: The present disclosure relates to a system and method of performing quantization of a neural network having multiple layers. The method comprises receiving a floating-point dataset as input dataset and determining a first shift constant for the first layer of the neural network based on the input dataset. The method also comprises performing quantization for the first layer using the determined shift constant of the first layer. The method further comprises determining a next shift constant for the next layer of the neural network based on the output of the layer previous to the next layer, and performing quantization for the next layer using the determined next shift constant. The method further comprises iterating the steps of determining shift constants and performing quantization for all layers of the neural network to generate a fixed-point dataset as output.
Type: Grant
Filed: March 21, 2022
Date of Patent: September 24, 2024
Assignee: Blaize, Inc.
Inventors: Deepak Chandra Bijalwan, Pratyusha Musunuru
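The per-layer flow can be sketched with power-of-two shift constants: the first shift is derived from the input dataset, and each later shift from the previous layer's output. The function names and the signed 8-bit range below are illustrative assumptions, not Blaize's scheme.

```python
import numpy as np

def shift_constant(data, bits=8):
    # Smallest power-of-two exponent whose scale fits the largest
    # magnitude in `data` into the signed `bits`-bit integer range.
    max_val = float(np.max(np.abs(data)))
    if max_val == 0.0:
        return 0
    return int(np.ceil(np.log2(max_val / (2 ** (bits - 1) - 1))))

def quantize(data, shift, bits=8):
    # Fixed-point value = round(float / 2**shift), clipped to the range.
    q = np.round(np.asarray(data) / (2.0 ** shift))
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(q, lo, hi).astype(np.int32)
```

Because the scale is a power of two, dequantization is a bit shift rather than a multiply, which is the usual motivation for shift-constant quantization.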
-
Patent number: 12093843
Abstract: Embodiments relate to performing inference, such as object recognition, based on sensory inputs received from sensors and location information associated with the sensory inputs. The sensory inputs describe one or more features of the objects. The location information describes known or potential locations of the sensors generating the sensory inputs. An inference system learns representations of objects by characterizing a plurality of feature-location representations of the objects, and then performs inference by identifying or updating candidate objects consistent with feature-location representations observed from the sensory input data and location information. In one instance, the inference system learns representations of objects for each sensor. The set of candidate objects for each sensor is updated to those consistent with candidate objects for other sensors, as well as the observed feature-location representations for the sensor.
Type: Grant
Filed: March 11, 2021
Date of Patent: September 17, 2024
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Subutai Ahmad, Yuwei Cui, Marcus Anthony Lewis
-
Patent number: 12087028
Abstract: A computer-implemented method for place recognition including: obtaining information identifying an image of a first scene; identifying a plurality of pixel clusters in the image; generating a set of feature vectors associated with the pixel clusters; generating a graph of the scene; adding a first edge between a first node and a second node in response to determining that a first property associated with a first pixel cluster is similar to a second property associated with a second pixel cluster; generating a vector representation of the graph; calculating a measure of similarity between the vector representation of the graph and a reference vector representation associated with a second scene; and determining that the first scene and the second scene are associated with a same place in response to determining that the measure of similarity is less than a threshold.
Type: Grant
Filed: March 1, 2022
Date of Patent: September 10, 2024
Assignee: Kabushiki Kaisha Toshiba
Inventors: Chao Zhang, Ignas Budvytis, Stephan Liwicki
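The edge-adding rule, connecting two pixel clusters whose properties are similar, can be sketched as a thresholded distance between property vectors. The Euclidean metric and the threshold value are assumptions; the patent leaves the similarity test abstract.

```python
import numpy as np

def build_cluster_graph(properties, sim_threshold):
    # Nodes are pixel clusters; an edge joins clusters i and j when
    # their property vectors lie within `sim_threshold` of each other.
    n = len(properties)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(np.asarray(properties[i], dtype=float)
                               - np.asarray(properties[j], dtype=float))
            if d < sim_threshold:
                adj[i, j] = adj[j, i] = True
    return adj
```

The resulting adjacency matrix (together with the clusters' feature vectors) is what a graph-embedding step would then turn into the scene's vector representation.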
-
Patent number: 12086703
Abstract: In some examples, a machine learning model may be trained to denoise an image. In some examples, the machine learning model may identify noise in an image of a sequence based, at least in part, on at least one other image of the sequence. In some examples, the machine learning model may include a recurrent neural network. In some examples, the machine learning model may have a modular architecture including one or more building units. In some examples, the machine learning model may have a multi-branch architecture. In some examples, the noise may be identified and removed from the image by an iterative process.
Type: Grant
Filed: August 18, 2021
Date of Patent: September 10, 2024
Assignee: Micron Technology, Inc.
Inventors: Bambi L DeLaRosa, Katya Giannios, Abhishek Chaurasia
-
Patent number: 12080050
Abstract: Methods and systems for determining information for a specimen are provided. One system includes a computer subsystem configured for determining a global texture characteristic of an image of a specimen and one or more local characteristics of a localized area in the image. The system also includes one or more components executed by the computer subsystem. The component(s) include a machine learning model configured for determining information for the specimen based on the global texture characteristic and the one or more local characteristics. The computer subsystem is also configured for generating results including the determined information. The methods and systems may be used for metrology (in which the determined information includes one or more characteristics of a structure formed on the specimen) or inspection (in which the determined information includes a classification of a defect detected on the specimen).
Type: Grant
Filed: December 20, 2021
Date of Patent: September 3, 2024
Assignee: KLA Corp.
Inventors: David Kucher, Sophie Salomon, Vijay Ramachandran
-
Patent number: 12073582
Abstract: Apparatuses and methods train a model and then use the trained model to determine a global three-dimensional (3D) position and orientation of a fiducial marker. In the context of an apparatus for training a model, a wider field-of-view sensor is configured to acquire a static image of a space in which the fiducial marker is disposed, and a narrower field-of-view sensor is configured to acquire a plurality of images of at least a portion of the fiducial marker. The apparatus also includes a pan-tilt unit configured to controllably alter pan and tilt angles of the narrower field-of-view sensor during image acquisition. The apparatus further includes a control system configured to determine a transformation of position and orientation information determined from the images acquired by the narrower field-of-view sensor to a coordinate system for the space for which the static image is acquired by the wider field-of-view sensor.
Type: Grant
Filed: October 1, 2021
Date of Patent: August 27, 2024
Assignee: THE BOEING COMPANY
Inventors: David James Huber, Deepak Khosla, Yang Chen, Brandon Courter, Luke Charles Ingram, Jacob Moorman, Scott Rad, Anthony Wayne Baker
-
Patent number: 12072442
Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations, and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
Type: Grant
Filed: November 22, 2021
Date of Patent: August 27, 2024
Assignee: NVIDIA Corporation
Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
-
Patent number: 12073304
Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
Type: Grant
Filed: June 16, 2023
Date of Patent: August 27, 2024
Assignee: DeepMind Technologies Limited
Inventors: Charles Blundell, Oriol Vinyals
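The scoring step, attention weights over comparison examples multiplied into their label vectors, can be sketched in a few lines. Cosine similarity stands in for the learned neural attention mechanism here; that substitution is an assumption for illustration.

```python
import numpy as np

def label_scores(new_example, comparison_examples, label_vectors):
    # One attention weight per comparison example: softmax over the
    # cosine similarity between the new example and each comparison
    # example, then label scores as the weighted sum of label vectors.
    x = np.asarray(new_example, dtype=float)
    sims = np.array([
        np.dot(x, c) / (np.linalg.norm(x) * np.linalg.norm(c))
        for c in np.asarray(comparison_examples, dtype=float)
    ])
    e = np.exp(sims - sims.max())
    weights = e / e.sum()
    return weights @ np.asarray(label_vectors, dtype=float)
```

Because the weights sum to one and each label vector is a score distribution, the output is itself a distribution over the predetermined label set.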
-
Patent number: 12056920
Abstract: A method of determining a roadway map includes receiving an image from above a roadway. The method further includes generating a skeletonized map based on the received image, wherein the skeletonized map comprises a plurality of roads. The method includes identifying intersections based on the joining of multiple roads of the plurality of roads in the skeletonized map. The method includes partitioning the skeletonized map based on the identified intersections, wherein partitioning the skeletonized map defines a roadway data set and an intersection data set. The method includes analyzing the roadway data set to determine a number of lanes in each roadway of the plurality of roads. The method further includes analyzing the intersection data set to determine lane connections in the identified intersections. The method further includes merging results of the analyzed roadway data set and the analyzed intersection data set to generate the roadway map.
Type: Grant
Filed: January 12, 2022
Date of Patent: August 6, 2024
Assignee: WOVEN BY TOYOTA, INC.
Inventor: José Felix Rodrigues
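On a one-pixel-wide skeletonized map, intersections can be located where a road pixel has three or more skeleton neighbours. A minimal sketch on a binary grid; a real pipeline would first thin the aerial imagery (e.g. with scikit-image's `skeletonize`, which is an assumed choice, not the patented step).

```python
import numpy as np

def find_intersections(skeleton):
    # Candidate intersection pixels: skeleton pixels with 3+ skeleton
    # neighbours in their 3x3 neighbourhood (border pixels skipped).
    H, W = skeleton.shape
    points = []
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if skeleton[i, j]:
                neighbours = skeleton[i - 1:i + 2, j - 1:j + 2].sum() - 1
                if neighbours >= 3:
                    points.append((i, j))
    return points
```

Pixels adjacent to a true junction can also pass this test, so a practical implementation would merge nearby candidates into a single intersection before partitioning the map.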
-
Patent number: 12054152
Abstract: A computer is programmed to determine a training dataset that includes a plurality of images, each including a first object and an object label; train a first machine learning program to identify first object parameters of the first objects in the plurality of images based on the object labels, and a confidence level based on a standard deviation of a distribution of a plurality of identifications of the first object parameters; receive, from a second machine learning program, a plurality of second images each including a second object identified with a low confidence level; process the plurality of second images with the first machine learning program to identify the second object parameters with a corresponding second confidence level that is greater than the low confidence level; and retrain the first machine learning program based on the identified second object parameters.
Type: Grant
Filed: January 12, 2021
Date of Patent: August 6, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Gurjeet Singh, Sowndarya Sundar
-
Patent number: 12050976
Abstract: A method of performing, by an electronic device, a convolution operation at a certain layer in a neural network includes: obtaining N pieces of input channel data; performing a first convolution operation by applying a first input channel data group including K pieces of first input channel data from among the N pieces of input channel data to a first kernel filter group including K first kernel filters; performing a second convolution operation by applying a second input channel data group including K pieces of second input channel data from among the N pieces of input channel data to a second kernel filter group including K second kernel filters; and obtaining output channel data based on the first convolution operation and the second convolution operation, wherein K is a natural number that is less than N.
Type: Grant
Filed: May 15, 2020
Date of Patent: July 30, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Tammy Lee
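The K-of-N channel grouping described above is the familiar grouped-convolution pattern: each group of K input channels is convolved with its K kernel filters and the per-channel results are combined. Valid padding, stride 1, and summing within a group are assumptions for this sketch.

```python
import numpy as np

def grouped_conv(inputs, kernels, K):
    # inputs: (N, H, W); kernels: (N, kh, kw). Channels are taken K at
    # a time; each group's K per-channel convolutions are summed into
    # one output channel (valid padding, stride 1).
    N, H, W = inputs.shape
    _, kh, kw = kernels.shape
    assert N % K == 0
    out = np.zeros((N // K, H - kh + 1, W - kw + 1))
    for g in range(N // K):
        for c in range(K):
            ch = g * K + c
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    out[g, i, j] += np.sum(
                        inputs[ch, i:i + kh, j:j + kw] * kernels[ch])
    return out
```

With K < N, each output channel touches only K input channels, cutting the multiply count by a factor of N/K relative to a full convolution.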
-
Patent number: 12045963
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect, via a graphical user interface of a client device, a user selection of an object portrayed within a digital image. The disclosed systems determine, in response to detecting the user selection of the object, a relationship between the object and an additional object portrayed within the digital image. The disclosed systems receive one or more user interactions for modifying the object. The disclosed systems modify the digital image in response to the one or more user interactions by modifying the object and the additional object based on the relationship between the object and the additional object.
Type: Grant
Filed: November 23, 2022
Date of Patent: July 23, 2024
Assignee: Adobe Inc.
Inventors: Scott Cohen, Zhe Lin, Zhihong Ding, Luis Figueroa, Kushal Kafle
-
Patent number: 12033375
Abstract: An object identification unit contains an artificial neural network and is designed to identify human faces. For this purpose, a face is divided into a number of triangles. The relative component of the area of each triangle in the total of the areas of all triangles is computed to ascertain a rotational angle of the face. The relative component of the area of each triangle in the total of the areas of all triangles is then scaled to a rotation-invariant dimension of the face. The scaled areas of the triangles are supplied to the artificial neural network in order to identify a person.
Type: Grant
Filed: April 27, 2021
Date of Patent: July 9, 2024
Assignee: Airbus Defence and Space GmbH
Inventor: Manfred Hiebl
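The scale-invariant quantity involved, each triangle's share of the total triangulated area, can be sketched with the shoelace formula. The landmark triangulation itself is assumed given; the coordinates in the usage below are hypothetical.

```python
import numpy as np

def triangle_area(p1, p2, p3):
    # Shoelace formula for the area of a triangle over 2-D landmark points.
    return 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))

def relative_areas(triangles):
    # Share of each triangle in the total area; the shares sum to 1 and
    # are invariant to uniform scaling of the face.
    areas = np.array([triangle_area(*t) for t in triangles])
    return areas / areas.sum()
```

These relative shares change as the face rotates out of plane (foreshortening shrinks some triangles relative to others), which is what makes them usable for estimating a rotational angle.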
-
Patent number: 12026621
Abstract: A computer-implemented method for training a machine-learning network. The method includes receiving input data from a sensor, wherein the input data include data indicative of an image and the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor; generating an adversarial version of the input data utilizing an optimizer, wherein the adversarial version of the input data utilizes a subset of the input data, parameters associated with the optimizer, and one or more perturbation tiles; determining a loss function value in response to the adversarial version of the input data and a classification of the adversarial version of the input data; determining a perturbation tile in response to the loss function value associated with one or more subsets of the adversarial version of the input data; and outputting a perturbation that includes at least the perturbation tile.
Type: Grant
Filed: November 30, 2020
Date of Patent: July 2, 2024
Assignee: Robert Bosch GmbH
Inventors: Devin T. Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe J. Cabrita Condessa, Jeremy Kolter
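Applying a perturbation tile can be sketched as repeating it across the input and clipping to the valid pixel range. The [0, 1] range and single-channel input are assumptions; how the tile itself is optimized against the loss is the substance of the patent and is not shown here.

```python
import numpy as np

def apply_perturbation_tile(image, tile):
    # Repeat the small perturbation tile across the image, crop the
    # tiling to the image size, add it, and clip to valid pixel values.
    H, W = image.shape
    th, tw = tile.shape
    reps = (-(-H // th), -(-W // tw))  # ceiling division per axis
    pert = np.tile(tile, reps)[:H, :W]
    return np.clip(image + pert, 0.0, 1.0)
```

Because one small tile covers inputs of any size, the adversary optimizes far fewer parameters than a full-image perturbation would require.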
-
Patent number: 12019707
Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
Type: Grant
Filed: January 18, 2024
Date of Patent: June 25, 2024
Assignee: VIZIT LABS, INC.
Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran