Learning Method Patents (Class 706/25)
-
Patent number: 12191311
Abstract: A semiconductor device includes a first transistor including a first channel layer of a first conductivity type, a second transistor provided in parallel with the first transistor and including a second channel layer of a second conductivity type, and a third transistor stacked on the first and second transistors. The third transistor may include a gate insulating film including a ferroelectric material. The third transistor may include a third channel layer and a gate electrode that are spaced apart from each other in a thickness direction with the gate insulating film therebetween.
Type: Grant
Filed: December 5, 2023
Date of Patent: January 7, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sangwook Kim, Jinseong Heo, Yunseong Lee, Sanghyun Jo
-
Patent number: 12189697
Abstract: A computing system is disclosed that includes a processor and memory. The memory stores instructions that, when executed by the processor, cause the processor to perform several acts. The acts include receiving, by a generative model, input set forth by a user of a client computing device that is in network communication with the computing system. The acts also include generating, by the generative model, a query based upon the input set forth by the user, and providing the query to a search engine. The acts further include receiving, by the generative model and from the search engine, content identified by the search engine based upon the query. The acts additionally include generating, by the generative model, an output based upon a prompt, where the prompt includes the content identified by the search engine based upon the query. The acts also include transmitting the output to the client computing device for presentment to the user.
Type: Grant
Filed: June 15, 2023
Date of Patent: January 7, 2025
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Zhun Liu, Saksham Singhal, Xia Song, Rahul Lal
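The retrieve-then-generate flow this abstract describes can be sketched in a few lines. The `generate_query`, `search`, and `generate_answer` functions below are hypothetical stand-ins for the generative model and search engine, not APIs from the patent or any real system.

```python
# Hypothetical stand-ins for the generative model and search engine.

def generate_query(user_input: str) -> str:
    # A real generative model would rewrite the user input into a search query.
    return user_input.lower().replace("?", "")

def search(query: str) -> list[str]:
    # A real search engine would return documents matching the query;
    # here a tiny in-memory corpus stands in for it.
    corpus = {"release date": ["The product ships in March."]}
    return [doc for key, docs in corpus.items() if key in query for doc in docs]

def generate_answer(user_input: str, content: list[str]) -> str:
    # The prompt given to the model includes the retrieved content.
    prompt = f"Question: {user_input}\nContext: {' '.join(content)}"
    return content[0] if content else "No information found."

def answer(user_input: str) -> str:
    query = generate_query(user_input)      # model generates a query
    content = search(query)                 # search engine returns content
    return generate_answer(user_input, content)  # model answers from prompt
```

The grounding step is the key design point: the model's output is conditioned on retrieved content rather than on the raw user input alone.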
-
Patent number: 12187389
Abstract: A hybrid personal watercraft combines features of pontoon boats and deck boats in a cost-effective and versatile package. The watercraft includes port and starboard sponsons which combine a pair of outboard flotation cavities. A space below the deck and above the hull bottom creates at least one, and potentially up to three, additional flotation cavities, which may also be used as storage areas accessible by an access door in the bow of the watercraft and/or a set of hatches in the deck. The watercraft may be efficiently produced and assembled from polymer materials, such as thermoplastic polyolefin (TPO).
Type: Grant
Filed: February 21, 2022
Date of Patent: January 7, 2025
Assignee: Polaris Industries Inc.
Inventors: Erik Rogers, Michael T. Yobe
-
Patent number: 12182697
Abstract: A computing device includes one or more processors, a first random access memory (RAM) comprising magnetic random access memory (MRAM), a second random access memory of a type distinct from MRAM, and a non-transitory computer-readable storage medium storing instructions for execution by the one or more processors. The computing device receives first data on which to train an artificial neural network (ANN) and trains the ANN by, using the first RAM comprising the MRAM, performing a first set of training iterations to train the ANN using the first data, and, after performing the first set of training iterations, using the second RAM of the type distinct from MRAM, performing a second set of training iterations to train the ANN using the first data. The computing device stores values for the trained ANN. The trained ANN is configured to classify second data based on the stored values.
Type: Grant
Filed: December 17, 2018
Date of Patent: December 31, 2024
Assignee: Integrated Silicon Solution, (Cayman) Inc.
Inventors: Michail Tzoufras, Marcin Gajek
-
Patent number: 12178616
Abstract: An electronic device according to an example embodiment includes a processor, and a memory operatively connected to the processor and including instructions executable by the processor. When the instructions are executed, the processor is configured to collect an EEG signal measuring brain activity and an fNIRS signal measuring the brain activity, and output a result of determining a type of the brain activity from a trained neural network model using the EEG signal and the fNIRS signal. The neural network model may be trained to extract an EEG feature from the EEG signal, extract an fNIRS feature from the fNIRS signal, extract a fusion feature based on the EEG signal and the fNIRS signal, and output the result of determining the type of the brain activity based on the EEG feature and the fusion feature.
Type: Grant
Filed: October 27, 2022
Date of Patent: December 31, 2024
Assignee: Foundation for Research and Business, Seoul National University of Science and Technology
Inventor: Seong Eun Kim
-
Patent number: 12182704
Abstract: Systems, devices, and methods related to a deep learning accelerator and memory are described. An integrated circuit may be configured with: a central processing unit; a deep learning accelerator configured to execute instructions with matrix operands; random access memory configured to store first instructions of an artificial neural network executable by the deep learning accelerator and second instructions of an application executable by the central processing unit; one or more connections among the random access memory, the deep learning accelerator and the central processing unit; and an input/output interface to an external peripheral bus. While the deep learning accelerator is executing the first instructions to convert sensor data according to the artificial neural network to inference results, the central processing unit may execute the application that uses inference results from the artificial neural network.
Type: Grant
Filed: September 8, 2022
Date of Patent: December 31, 2024
Assignee: Micron Technology, Inc.
Inventors: Poorna Kale, Jaime Cummins
-
Patent number: 12174960
Abstract: The disclosed computer-implemented method for identifying and remediating security threats against graph neural network models may include (i) analyzing an input format for model data utilized by a graph neural network (GNN) model on a target computing system, (ii) generating probing data corresponding to the input format, (iii) querying the GNN model utilizing the probing data, (iv) building, based on a query response output of the GNN model utilizing the probing data, one or more shadow GNN models, (v) verifying a performance metric of the shadow GNN models against a target performance metric associated with the GNN model, and (vi) performing a security action that protects against a potential security threat against the GNN model when the performance metric of the shadow GNN models is similar to the target performance metric associated with the GNN model. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: March 3, 2022
Date of Patent: December 24, 2024
Assignee: GEN DIGITAL INC.
Inventor: Yun Shen
-
Patent number: 12174918
Abstract: A model adapted to a predetermined system is adapted to another system with an environment or an agent similar to that of the predetermined system. Specifically, a first model adapted to a first system that is operated based on a first condition including a specific environment and a specific agent is corrected using a correction model to generate a second model. The second model is adapted to a second system that is operated based on a second condition, where the second condition is partially different from the first condition.
Type: Grant
Filed: September 27, 2018
Date of Patent: December 24, 2024
Assignee: NEC CORPORATION
Inventor: Ryota Higa
-
Patent number: 12175786
Abstract: Embodiments for automatically converting printed documents into electronic format using artificial intelligence techniques disclosed herein include: (i) receiving a plurality of images of documents; (ii) for each received image, using an image classification algorithm to classify the image as one of (a) an image of a first type of document, or (b) an image of a second type of document; (iii) for each image classified as an image of the first type of document, using an object localization algorithm to identify an area of interest in the image; (iv) for an identified area of interest, using an optical character recognition algorithm to extract text from the identified area of interest; and (v) populating a record associated with the document with the extracted text.
Type: Grant
Filed: April 25, 2022
Date of Patent: December 24, 2024
Assignee: Data-Core Systems, Inc.
Inventors: Anshuman Narayan, Jishnu Bhattacharyya, Dhrubajyoti Chakravarty, Pradeep K. Banerjee, Sin-Min Chang
-
Patent number: 12164599
Abstract: Volumetric quantification can be performed for various parameters of an object represented in volumetric data. Multiple views of the object can be generated, and those views provided to a set of neural networks that can generate inferences in parallel. The inferences from the different networks can be used to generate pseudo-labels for the data, for comparison purposes, which enables a co-training loss to be determined for the unlabeled data. The co-training loss can then be used to update the relevant network parameters for the overall data analysis network. If supervised data is also available, then the network parameters can further be updated using the supervised loss.
Type: Grant
Filed: August 9, 2023
Date of Patent: December 10, 2024
Assignee: NVIDIA Corporation
Inventors: Holger Roth, Yingda Xia, Dong Yang, Daguang Xu
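The co-training idea above, in which each network's prediction serves as a pseudo-label for the other, can be illustrated on a single unlabeled sample. The probability vectors and cross-entropy form below are illustrative assumptions, not the patent's exact loss.

```python
import numpy as np

def cross_entropy(probs, label):
    # Cross-entropy of a probability vector against a hard (pseudo-)label.
    return -np.log(probs[label] + 1e-12)

def co_training_loss(p1, p2):
    # Each network's argmax prediction becomes a pseudo-label that
    # supervises the other network on the same unlabeled sample.
    y1, y2 = int(np.argmax(p1)), int(np.argmax(p2))
    return cross_entropy(p1, y2) + cross_entropy(p2, y1)

# Assumed class probabilities from two networks seeing different views.
p1 = np.array([0.7, 0.2, 0.1])  # network on view 1
p2 = np.array([0.6, 0.3, 0.1])  # network on view 2
loss = co_training_loss(p1, p2)
```

When the networks agree, the loss reduces to each network's confidence on the shared label; disagreement produces a large loss, which is what drives the views toward consistency during training.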
-
Patent number: 12165017
Abstract: A machine learning model engine executes a machine learning model that has been trained with training data and processes scoring data to generate predictions. A machine learning model analyzer is configured to evaluate the machine learning model. The machine learning model analyzer determines a plurality of drift metrics for the plurality of input variables to compare the distribution of the training data to the distribution of the scoring data. Each of the plurality of drift metrics is associated with one of the plurality of input variables. The machine learning model analyzer also determines an overall drift metric for the combination of the input variables. The plurality of input variables are weighted in the overall drift metric in accordance with the plurality of feature importances. The machine learning model analyzer generates an alert based on the overall distribution of the training data relative to the overall distribution of the scoring data.
Type: Grant
Filed: October 29, 2020
Date of Patent: December 10, 2024
Assignee: Wells Fargo Bank, N.A.
Inventor: Nathan Grossman
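One common way to realize the per-variable drift metrics and the importance-weighted overall drift described here is the Population Stability Index (PSI). The abstract does not name a specific metric, so the PSI below is an assumed stand-in.

```python
import numpy as np

def psi(train, score, bins=10):
    # Population Stability Index: compares the binned distribution of one
    # input variable in training data vs. scoring data (0 means no drift).
    edges = np.histogram_bin_edges(train, bins=bins)
    t, _ = np.histogram(train, bins=edges)
    s, _ = np.histogram(score, bins=edges)
    t = np.clip(t / t.sum(), 1e-6, None)  # avoid log(0) in empty bins
    s = np.clip(s / s.sum(), 1e-6, None)
    return float(np.sum((s - t) * np.log(s / t)))

def overall_drift(train_data, score_data, importances):
    # Overall drift: each variable's PSI weighted by its feature importance.
    total = sum(importances.values())
    return sum(psi(train_data[f], score_data[f]) * w / total
               for f, w in importances.items())

rng = np.random.default_rng(0)
train = {"income": rng.normal(0, 1, 1000), "age": rng.normal(0, 1, 1000)}
score = {"income": rng.normal(0.5, 1, 1000), "age": rng.normal(0, 1, 1000)}
drift = overall_drift(train, score, {"income": 0.8, "age": 0.2})
```

An alerting rule would then compare `drift` against a threshold; weighting by importance means drift in an influential variable raises the overall metric more than equal drift in a marginal one.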
-
Patent number: 12164059
Abstract: A deep neural network (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: July 15, 2021
Date of Patent: December 10, 2024
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 12165082
Abstract: Hyperparameters for tuning a machine learning system may be optimized using Bayesian optimization with constraints. The hyperparameter optimization may be performed for a received training set and received constraints. Respective probabilistic models for the machine learning system and constraint functions may be initialized, then hyperparameter optimization may include iteratively identifying respective values for hyperparameters using analysis of the respective models performed using an acquisition function implementing entropy search on the respective models, training the machine learning system using the identified values to determine measures of accuracy and constraint metrics, and updating the respective models using the determined measures.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 10, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Giovanni Zappella, Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Philippe Archambeau, Matthias Seeger
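The entropy-search acquisition described above is involved; as a rough stand-in, a constrained random search captures the shape of the problem (optimize an objective subject to constraint metrics). Everything below, including the toy objective, constraint, and search space, is an illustrative assumption, not the patented procedure.

```python
import random

def constrained_search(objective, constraint, space, budget=50, seed=0):
    # Simplified stand-in for constrained hyperparameter optimization:
    # sample candidate configurations, discard those violating the
    # constraint, and return the feasible one with the lowest objective.
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(budget):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        if constraint(cand):            # e.g. a memory or latency limit
            val = objective(cand)       # e.g. validation error
            if val < best_val:
                best, best_val = cand, val
    return best, best_val

space = {"lr": (1e-4, 1e-1), "dropout": (0.0, 0.5)}
best, val = constrained_search(
    objective=lambda c: (c["lr"] - 0.01) ** 2,  # toy validation error
    constraint=lambda c: c["dropout"] < 0.4,    # toy constraint metric
    space=space,
)
```

The Bayesian version in the abstract replaces the blind sampling with probabilistic models of both the objective and the constraints, so each new candidate is chosen where the expected information gain is highest.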
-
Patent number: 12165045
Abstract: Hardware implementations of DNNs and related methods with a variable output data format. Specifically, in the hardware implementations and methods described herein the hardware implementation is configured to perform one or more hardware passes to implement a DNN wherein during each hardware pass the hardware implementation receives input data for a particular layer, processes that input data in accordance with the particular layer (and optionally one or more subsequent layers), and outputs the processed data in a desired format based on the layer, or layers, that are processed in the particular hardware pass. In particular, when a hardware implementation receives input data to be processed, the hardware implementation also receives information indicating the desired format for the output data of the hardware pass, and the hardware implementation is configured to, prior to outputting the processed data, convert the output data to the desired format.
Type: Grant
Filed: September 20, 2018
Date of Patent: December 10, 2024
Assignee: Imagination Technologies Limited
Inventors: Chris Martin, David Hough, Paul Brasnett, Cagatay Dikici, James Imber, Clifford Gibson
-
Patent number: 12154204
Abstract: A method includes obtaining a speech segment. The method also includes generating, using at least one processing device of an electronic device, context-independent features and context-dependent features of the speech segment. The method further includes decoding, using the at least one processing device of the electronic device, a first viseme based on the context-independent features. The method also includes decoding, using the at least one processing device of the electronic device, a second viseme based on the context-dependent features and the first viseme. In addition, the method includes generating, using the at least one processing device of the electronic device, an output viseme based on the first and second visemes, where the output viseme is associated with a visual animation of the speech segment.
Type: Grant
Filed: February 16, 2022
Date of Patent: November 26, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Liang Zhao, Siva Penke
-
Patent number: 12153898
Abstract: Provided is a method and system for weight memory mapping for a streaming operation of giant generative artificial intelligence hardware. A weight memory mapping system may include a weight memory configured to store a weight matrix for a pretrained artificial intelligence model; an input register configured to store a plurality of input data; a first hardware operator configured to process a matrix multiplication operation between the plurality of input data and the weight matrix and to compute a lane-level final sum during the progress of the matrix multiplication operation by reusing a partial sum of the matrix multiplication operation; and a second hardware operator configured to preprocess a next matrix multiplication operation during the progress of the matrix multiplication operation using the final sum.
Type: Grant
Filed: June 14, 2024
Date of Patent: November 26, 2024
Assignee: HyperAccel Co., Ltd.
Inventors: Junsoo Kim, Jung-Hoon Kim, Junseo Cha
-
Patent number: 12149510
Abstract: A system and method are disclosed for providing a private multi-modal artificial intelligence platform. The method includes splitting a neural network into a first client-side network, a second client-side network and a server-side network and sending the first client-side network to a first client. The first client-side network processes first data from the first client, the first data having a first type. The method includes sending the second client-side network to a second client. The second client-side network processes second data from the second client, the second data having a second type. The first type and the second type have a common association. Forward and back propagation occurs between the client-side networks, with their disparate data types, and the server-side network to train the neural network.
Type: Grant
Filed: February 19, 2021
Date of Patent: November 19, 2024
Assignee: TRIPLEBLIND HOLDINGS, INC.
Inventors: Greg Storm, Gharib Gharibi, Riddhiman Das
-
Patent number: 12148419
Abstract: Mechanisms are provided for performing machine learning training of a computer model. A perturbation generator generates modified training data comprising perturbations injected into original training data, where the perturbations cause a data corruption of the original training data. The modified training data is input into a prediction network of the computer model and processed through the prediction network to generate a prediction output. Machine learning training of the prediction network is executed based on the prediction output and the original training data to generate a trained prediction network of a trained computer model. The trained computer model is deployed to an artificial intelligence computing system for performance of an inference operation.
Type: Grant
Filed: December 13, 2021
Date of Patent: November 19, 2024
Assignee: International Business Machines Corporation
Inventors: Xiaodong Cui, Brian E. D. Kingsbury, George Andrei Saon, David Haws, Zoltan Tueske
-
Patent number: 12147901
Abstract: The present disclosure provides a training and application method of a multi-layer neural network model, apparatus and a storage medium. In a forward propagation of the multi-layer neural network model, the number of input feature maps is expanded and a data computation is performed by using the expanded input feature maps.
Type: Grant
Filed: December 19, 2019
Date of Patent: November 19, 2024
Assignee: Canon Kabushiki Kaisha
Inventors: Hongxing Gao, Wei Tao, Tsewei Chen, Dongchao Wen, Junjie Liu
-
Patent number: 12141699
Abstract: The present disclosure relates to systems and methods for providing vector-wise sparsity in neural networks. In some embodiments, an exemplary method for providing vector-wise sparsity in a neural network comprises: dividing a matrix associated with the neural network into a plurality of vectors; selecting a first subset of non-zero elements from the plurality of vectors to form a pruned matrix; and outputting the pruned matrix for executing the neural network using the pruned matrix.
Type: Grant
Filed: July 23, 2020
Date of Patent: November 12, 2024
Assignee: Alibaba Group Holding Limited
Inventors: Maohua Zhu, Tao Zhang, Zhenyu Gu, Yuan Xie
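The pruning step described here, dividing a weight matrix into fixed-length vectors and keeping a subset of elements in each, can be sketched with NumPy. The vector length and keep-count below are arbitrary illustrative choices; magnitude-based selection is an assumed criterion the abstract does not specify.

```python
import numpy as np

def vector_wise_prune(matrix, vector_len=4, keep=1):
    # Divide each row into fixed-length vectors and keep only the
    # `keep` largest-magnitude elements in every vector, zeroing the rest.
    pruned = np.zeros_like(matrix)
    rows, cols = matrix.shape
    for r in range(rows):
        for start in range(0, cols, vector_len):
            vec = matrix[r, start:start + vector_len]
            top = np.argsort(np.abs(vec))[-keep:]   # indices to retain
            pruned[r, start + top] = vec[top]
    return pruned

w = np.array([[0.1, -0.9, 0.3, 0.05, 2.0, -0.2, 0.0, 0.4]])
sparse = vector_wise_prune(w, vector_len=4, keep=1)
```

Because every vector retains the same number of non-zeros, the resulting sparsity pattern is regular, which is what makes it amenable to efficient hardware execution, unlike unstructured element-wise pruning.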
-
Patent number: 12131182
Abstract: Systems and methods of data processing are provided. The method comprises receiving an input data to be processed by a series of operations, identifying a first operation from the series of operations, selecting at least one second operation from the series of operations to be grouped with the first operation based at least in part on an amount of input data and output data of the grouped operations and the capacity of the memory unit, and processing a portion of the input data of the grouped operations. The efficiency of the series of operations can be improved by ensuring that the input data and output data of any operation are both stored in the memory unit.
Type: Grant
Filed: March 22, 2019
Date of Patent: October 29, 2024
Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
Inventors: Zhenjiang Wang, Jianjun Li, Liang Chen, Kun Ling, Delin Li, Chen Sun
-
Patent number: 12131258
Abstract: A method for compressing a deep neural network includes determining a pruning ratio for a channel and a mixed-precision quantization bit-width based on an operational budget of a device implementing the deep neural network. The method further includes quantizing a weight parameter of the deep neural network and/or an activation parameter of the deep neural network based on the quantization bit-width. The method also includes pruning the channel of the deep neural network based on the pruning ratio.
Type: Grant
Filed: September 23, 2020
Date of Patent: October 29, 2024
Assignee: QUALCOMM Incorporated
Inventors: Yadong Lu, Ying Wang, Tijmen Pieter Frederik Blankevoort, Christos Louizos, Matthias Reisser, Jilei Hou
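A minimal sketch of the two operations the abstract combines: channel pruning by ratio and uniform quantization to a bit-width. The L1-norm channel-importance criterion and symmetric quantization scheme are assumptions; the abstract specifies neither.

```python
import numpy as np

def prune_channels(weights, ratio):
    # weights: (out_channels, in_features). Zero out the channels with
    # the smallest L1 norms, as determined by the pruning ratio.
    norms = np.abs(weights).sum(axis=1)
    n_drop = int(len(norms) * ratio)
    drop = np.argsort(norms)[:n_drop]
    kept = weights.copy()
    kept[drop] = 0.0
    return kept

def quantize(weights, bits):
    # Uniform symmetric quantization to the chosen bit-width:
    # round each weight to the nearest representable level.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    return np.round(weights / scale) * scale

w = np.array([[0.5, -0.25], [0.01, 0.02], [1.0, -0.75]])
w = prune_channels(w, ratio=1 / 3)   # zeroes the weakest channel
w = quantize(w, bits=8)              # 8-bit representable values
```

In the patented method, the ratio and bit-width are not fixed by hand as above but chosen jointly from the device's operational budget.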
-
Patent number: 12124958
Abstract: A computer-implemented method for enforcing an idempotent-constrained characteristic during training of a neural network may be provided. The method comprises training of a neural network by minimizing a loss function, wherein the loss function comprises an additional term imposing an idempotence-based regularization to the neural network during the training.
Type: Grant
Filed: January 22, 2020
Date of Patent: October 22, 2024
Assignee: International Business Machines Corporation
Inventors: Antonio Foncubierta Rodriguez, Matteo Manica, Joris Cadow
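An idempotence-based regularizer can be written as a penalty on how far the network is from satisfying f(f(x)) = f(x), added to the task loss. The mean-squared form below is an assumed concrete choice, not the patent's stated formulation.

```python
import numpy as np

def idempotence_penalty(f, x):
    # Regularization term: distance of f from satisfying f(f(x)) == f(x).
    fx = f(x)
    return float(np.mean((f(fx) - fx) ** 2))

relu = lambda v: np.maximum(v, 0.0)   # idempotent: applying twice changes nothing
shift = lambda v: v + 1.0             # not idempotent: each application adds 1

x = np.array([-1.0, 0.5, 2.0])
# During training the total loss would be:
#   total_loss = task_loss + lam * idempotence_penalty(model, x)
```

ReLU is a convenient sanity check because it is exactly idempotent, so the penalty vanishes, while the shift function incurs a constant penalty.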
-
Patent number: 12124963
Abstract: Disclosed is a disentangled personalized federated learning method via consensus representation extraction and diversity propagation. The method includes: receiving, by a current node, the local consensus representation extraction models and unique representation extraction models corresponding to other nodes; extracting, by the current node, representations of its own data by using the unique representation extraction models of the other nodes; calculating first mutual information between the different sets of representation distributions; determining the similarity of the data distributions between the nodes based on the size of the first mutual information; and determining aggregation weights corresponding to the other nodes based on the first mutual information. The current node then obtains the global consensus representation aggregation model corresponding to the current node.
Type: Grant
Filed: June 1, 2024
Date of Patent: October 22, 2024
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Zhenan Sun, Yunlong Wang, Zhengquan Luo, Kunbo Zhang, Qi Li, Yong He
-
Patent number: 12124960
Abstract: An object of the present invention is to provide a learning apparatus and a learning method capable of appropriately learning pieces of data that belong to the same category and are acquired under different conditions. In a learning apparatus according to a first aspect of the present invention, first data and second data are respectively input to a first input layer and a second input layer that are independent of each other, and feature quantities are calculated. Thus, the feature quantity calculation in one of the first and second input layers is not affected by the feature quantity calculation in the other input layer. In addition to feature extraction performed in the input layers, each of a first intermediate feature quantity calculation process and a second intermediate feature quantity calculation process is performed at least once in an intermediate layer that is shared by the first and second input layers.
Type: Grant
Filed: January 13, 2021
Date of Patent: October 22, 2024
Assignee: FUJIFILM Corporation
Inventors: Masaaki Oosake, Makoto Ozeki
-
Patent number: 12124779
Abstract: A method of construction of a feedforward neural network includes a step of initialization of a neural network according to an initial topology, and at least one topological optimization phase, each of which includes: an additive phase including a modification of the network topology by adding at least one node and/or a connection link between the input of a node of a layer and the output of a node of any one of the preceding layers, and/or a subtractive phase including a modification of the network topology by removing at least one node and/or a connection link between two layers. Each topology modification includes the selection of a topology modification among several candidate modifications, based on an estimation of the variation in the network error between the previous topology and each topology modified according to a candidate modification.
Type: Grant
Filed: November 7, 2019
Date of Patent: October 22, 2024
Assignee: ADAGOS
Inventors: Manuel Bompard, Mathieu Causse, Florent Masmoudi, Mohamed Masmoudi, Houcine Turki
-
Patent number: 12124957
Abstract: Provided are an apparatus and method of compressing an artificial neural network. According to the method and the apparatus, an optimal compression rate and an optimal operation accuracy are determined by compressing an artificial neural network, determining a task accuracy of the compressed artificial neural network, and automatically calculating a compression rate and a compression ratio based on the determined task accuracy. The method includes obtaining an initial value of a task accuracy for a task processed by the artificial neural network, compressing the artificial neural network by adjusting weights of connections among layers of the artificial neural network included in information regarding the connections, determining a compression rate for the compressed artificial neural network based on the initial value of the task accuracy and a task accuracy of the compressed artificial neural network, and re-compressing the compressed artificial neural network according to the compression rate.
Type: Grant
Filed: July 29, 2019
Date of Patent: October 22, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Youngmin Oh
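The compress, measure accuracy, adjust rate loop described here can be sketched with a magnitude threshold as the compression knob. The mass-preserved accuracy proxy and the back-off rule below are illustrative assumptions standing in for a real task-accuracy evaluation.

```python
import numpy as np

def compress(weights, threshold):
    # Compress by zeroing small-magnitude connection weights.
    out = weights.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

def auto_compress(weights, evaluate, max_drop=0.02, steps=10):
    # Raise the compression threshold until task accuracy falls more
    # than `max_drop` below the uncompressed baseline, then stop and
    # keep the last acceptable compressed network.
    baseline = evaluate(weights)
    best = weights
    for t in np.linspace(0.0, np.abs(weights).max(), steps):
        cand = compress(weights, t)
        if baseline - evaluate(cand) > max_drop:
            break
        best = cand
    return best

w0 = np.array([0.01, 0.02, 1.0, 0.9])
# Toy accuracy proxy: fraction of total weight magnitude preserved.
keep_mass = lambda w: float(np.abs(w).sum() / np.abs(w0).sum())
best = auto_compress(w0, keep_mass)
```

The loop mirrors the abstract's structure: record the initial accuracy, compress, re-measure, and let the measured accuracy determine how much further compression is applied.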
-
Patent number: 12124956
Abstract: A hardware processor can receive a set of input data individually describing a particular asset associated with an entity. The hardware processor can receive a set of inputs individually responsive to a respective subset of a plurality of queries for a particular user. The hardware processor can generate a predictive model based on the set of input data. The hardware processor can calculate a predictive outcome for the particular user by applying the predictive model to the set of inputs. The hardware processor can identify a target score impacting the predictive outcome for the particular user. The hardware processor can assign a training program to the particular user corresponding to the target score.
Type: Grant
Filed: July 7, 2023
Date of Patent: October 22, 2024
Assignee: Cangrade, Inc.
Inventors: Steven Lehr, Gershon Goren, Liana Epstein
-
Patent number: 12124855
Abstract: The present disclosure relates to a training method for a parameter configuration model, a parameter configuration method, and a parameter configuration device.
Type: Grant
Filed: September 15, 2022
Date of Patent: October 22, 2024
Assignee: SHENZHEN MICROBT ELECTRONICS TECHNOLOGY CO., LTD.
Inventors: Guo Ai, Haifeng Guo, Zuoxing Yang
-
Patent number: 12124955
Abstract: A hardware processor can receive a set of input data individually describing a particular asset associated with an entity. The hardware processor can receive sets of inputs individually responsive to a respective subset of queries. The hardware processor can generate a predictive model using the set of input data. The hardware processor can calculate predictive outcomes individually associated with a respective user by applying the predictive model to each respective set of inputs of the sets of inputs. The hardware processor can generate a list ranked according to the predictive outcomes for the particular asset.
Type: Grant
Filed: June 30, 2023
Date of Patent: October 22, 2024
Assignee: Cangrade, Inc.
Inventors: Steven Lehr, Gershon Goren, Liana Epstein
-
Patent number: 12117917
Abstract: A method of using a computing device to compare performance of multiple algorithms. The method includes receiving, by a computing device, multiple algorithms to assess. The computing device further receives a total amount of resources to allocate to the multiple algorithms. The computing device additionally assigns a fair share of the total amount of resources to each of the multiple algorithms. The computing device still further executes each of the multiple algorithms using the assigned fair share of the total amount of resources. The computing device additionally compares the performance of each of the multiple algorithms based on at least one of multiple hardware relative utility metrics describing a hardware relative utility of any given resource allocation for each of the multiple algorithms.
Type: Grant
Filed: April 29, 2021
Date of Patent: October 15, 2024
Assignee: International Business Machines Corporation
Inventors: Robert Engel, Aly Megahed, Eric Kevin Butler, Nitin Ramchandani, Yuya Jeremy Ong
-
Patent number: 12118056
Abstract: Methods and apparatus for performing matrix transforms within a memory fabric. Various embodiments of the present disclosure are directed to converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein. Exemplary embodiments described herein perform matrix transformations within a memory device that includes a matrix fabric and matrix multiplication unit (MMU). In one exemplary embodiment, the matrix fabric uses a "crossbar" construction of resistive elements. Each resistive element stores a level of impedance that represents the corresponding matrix coefficient value. The crossbar connectivity can be driven with an electrical signal representing the input vector as an analog voltage. The resulting signals can be converted from analog voltages to digital values by an MMU to yield a vector-matrix product. In some cases, the MMU may additionally perform various other logical operations within the digital domain.
Type: Grant
Filed: May 3, 2019
Date of Patent: October 15, 2024
Assignee: Micron Technology, Inc.
Inventor: Fa-Long Luo
-
Patent number: 12118662
Abstract: In an approach to improve the generation of a virtual object in a three-dimensional virtual environment, embodiments of the present invention identify a virtual object to be generated in a three-dimensional virtual environment based on a natural language utterance. Additionally, embodiments generate the virtual object based on a CLIP-guided Generative Latent Space (CLIP-GLS) analysis, and monitor usage of the generated virtual object in the three-dimensional virtual space. Moreover, embodiments infer human perception data from the monitoring, and generate a utility score for the virtual object based on the human perception data.
Type: Grant
Filed: September 19, 2022
Date of Patent: October 15, 2024
Assignee: International Business Machines Corporation
Inventors: Jeremy R. Fox, Martin G. Keen, Alexander Reznicek, Bahman Hekmatshoartabari
-
Patent number: 12112260
Abstract: Disclosed is a method of determining a characteristic of interest relating to a structure on a substrate formed by a lithographic process, the method comprising: obtaining an input image of the structure; and using a trained neural network to determine the characteristic of interest from said input image. Also disclosed is a reticle comprising a target forming feature comprising more than two sub-features each having different sensitivities to a characteristic of interest when imaged onto a substrate to form a corresponding target structure on said substrate. Related methods and apparatuses are also described.
Type: Grant
Filed: May 29, 2019
Date of Patent: October 8, 2024
Assignee: ASML Netherlands B.V.
Inventors: Lorenzo Tripodi, Patrick Warnaar, Grzegorz Grzela, Mohammadreza Hajiahmadi, Farzad Farhadzadeh, Patricius Aloysius Jacobus Tinnemans, Scott Anderson Middlebrooks, Adrianus Cornelis Matheus Koopman, Frank Staals, Brennan Peterson, Anton Bernhard Van Oosten
-
Patent number: 12106218Abstract: Modifying digital content based on predicted future user behavior is provided. Trends in propagation values corresponding to a layer of nodes in an artificial neural network are identified based on measuring the propagation values at each run of the artificial neural network. The trends in the propagation values are forecasted to generate predicted propagation values at a specified future point in time. The predicted propagation values are applied to the layer of nodes in the artificial neural network. Predicted website analytics values corresponding to a set of website variables of interest for the specified future point in time are generated based on running the artificial neural network with the predicted propagation values. A website corresponding to the set of website variables of interest is modified based on the predicted website analytics values corresponding to the set of website variables of interest for the specified future point in time.Type: GrantFiled: February 19, 2018Date of Patent: October 1, 2024Assignee: International Business Machines CorporationInventors: Aaron K. Baughman, Gray F. Cannon, Ryan L. Whitman
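The forecasting step in this abstract can be sketched as a per-node trend extrapolation: propagation values measured at each run are fit with a simple trend, projected to a future time step, and the predicted values are applied back to the layer before re-running the network. A minimal sketch using a linear fit (the helper name and the choice of a linear model are illustrative assumptions, not the patented method):

```python
import numpy as np

# history: runs x nodes matrix of propagation values measured at each run.
# For each node, fit a linear trend over runs and extrapolate it to a
# specified future point in time.
def forecast_propagation(history, future_t):
    t = np.arange(history.shape[0])
    preds = []
    for node_vals in history.T:
        slope, intercept = np.polyfit(t, node_vals, 1)
        preds.append(slope * future_t + intercept)
    return np.array(preds)

history = np.array([[0.10, 0.50],
                    [0.12, 0.48],
                    [0.14, 0.46]])        # three runs, two nodes
predicted = forecast_propagation(history, future_t=5)  # values for run 5
```

The predicted values would then replace the layer's propagation values when the network is run to generate the predicted website analytics.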
-
Patent number: 12100017Abstract: A unified model for a neural network can be used to predict a particular value, such as a customer value. In various instances, customer value may have particular sub-components. Taking advantage of this fact, a specific learning architecture can be used to predict not just customer value (e.g. a final objective) but also the sub-components of customer value. This allows improved accuracy and reduced error in various embodiments.Type: GrantFiled: November 30, 2021Date of Patent: September 24, 2024Assignee: PayPal, Inc.Inventors: Shiwen Shen, Danielle Zhu, Feng Pan
-
Patent number: 12100445Abstract: An interface circuit includes an integrator circuit and a buffer circuit. The integrator circuit is configured to be electrically coupled to a column of memory cells, receive a signal corresponding to a sum of currents flowing through the memory cells of the column, and integrate the signal over time to generate an intermediate voltage. The buffer circuit is electrically coupled to an output of the integrator circuit to receive the intermediate voltage, and is configured to be electrically coupled to a row of further memory cells, generate an analog voltage corresponding to the intermediate voltage, and output the analog voltage to the further memory cells of the row.Type: GrantFiled: July 31, 2023Date of Patent: September 24, 2024Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.Inventor: Mei-Chen Chuang
-
Patent number: 12093813Abstract: Techniques related to compressing a pre-trained dense deep neural network to a sparsely connected deep neural network for efficient implementation are discussed. Such techniques may include iteratively pruning and splicing available connections between adjacent layers of the deep neural network and updating weights corresponding to both currently disconnected and currently connected connections between the adjacent layers.Type: GrantFiled: September 30, 2016Date of Patent: September 17, 2024Assignee: Intel CorporationInventors: Anbang Yao, Yiwen Guo, Yan Li, Yurong Chen
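The prune-and-splice iteration described here can be sketched with a binary connection mask: weak connections are disconnected, but their underlying weights keep receiving updates, so a pruned connection can be spliced back in if its magnitude regrows. A minimal sketch under assumed magnitude thresholds (names and threshold values are hypothetical):

```python
import numpy as np

# A binary mask records which connections between adjacent layers are
# currently active. Pruning zeroes the mask for weak weights; splicing
# re-enables connections whose (still-updated) weights regain magnitude.
def prune_and_splice(weights, mask, t_prune=0.1, t_splice=0.2):
    mag = np.abs(weights)
    mask[mag < t_prune] = 0    # prune currently weak connections
    mask[mag > t_splice] = 1   # splice back connections that regrew
    return mask

weights = np.array([0.05, 0.15, 0.3, -0.25])
mask = np.ones_like(weights)
mask = prune_and_splice(weights, mask)
effective = weights * mask     # only connected weights contribute
```

Because weight updates continue for both connected and disconnected weights, a mistaken prune is recoverable on a later iteration, which is the key difference from one-shot pruning.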
-
Patent number: 12093805Abstract: This disclosure relates to method and system for optimal policy learning and recommendation for distribution task using deep RL model, in applications where the action space has a probability simplex structure. The method includes training a RL agent by defining a policy network for learning the optimal policy using a policy gradient (PG) method, where the policy network comprises an artificial neural network (ANN) with a set of outputs. A continuous action space having a continuous probability simplex structure is defined. The learning of the optimal policy is updated based on one of stochastic and deterministic PG. For stochastic PG, a Dirichlet distribution based stochastic policy parameterized by output of the ANN with an activation function at an output layer of the ANN is selected. For deterministic PG, a soft-max function is selected as activation function at the output layer of the ANN to maintain the probability simplex structure.Type: GrantFiled: March 26, 2021Date of Patent: September 17, 2024Assignee: Tata Consultancy Services LimitedInventors: Avinash Achar, Easwara Subramanian, Sanjay Purushottam Bhat, Vignesh Lakshmanan Kangadharan Palaniradja
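The two policy parameterizations named in the abstract can be sketched side by side: for the stochastic case, network outputs parameterize a Dirichlet distribution whose samples lie on the probability simplex; for the deterministic case, a soft-max maps the outputs directly onto the simplex. A rough illustration (function names and the exp link to positive concentration parameters are assumptions, not the patented architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic PG: treat ANN outputs as (positive) Dirichlet concentration
# parameters and sample an action from the resulting distribution.
def stochastic_action(ann_outputs):
    alpha = np.exp(ann_outputs)   # ensure positive concentrations
    return rng.dirichlet(alpha)   # sample lies on the probability simplex

# Deterministic PG: soft-max at the output layer keeps the action on the
# simplex without sampling.
def deterministic_action(ann_outputs):
    e = np.exp(ann_outputs - ann_outputs.max())
    return e / e.sum()

logits = np.array([1.0, 2.0, 0.5])
a_stoch = stochastic_action(logits)
a_det = deterministic_action(logits)
# Both actions are valid allocations: nonnegative components summing to 1.
```

Either way, the action can be read directly as a distribution of a resource across targets, which is what a distribution task requires.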
-
Patent number: 12093531Abstract: A hardware accelerator is provided. The hardware accelerator includes a first memory; a source address generation unit coupled to the first memory; a data collection unit coupled to the first memory; a first data queue coupled to the data collection unit; a data dispersion unit coupled to the first data queue; a destination address generation unit coupled to the data dispersion unit; an address queue coupled to the destination address generation unit; a second data queue coupled to the data dispersion unit; and a second memory coupled to the second data queue. The hardware accelerator can perform anyone or any combination of tensor stride, tensor reshape and tensor transpose to achieve tensorflow depth-to-space permutation or tensorflow space-to-depth permutation.Type: GrantFiled: October 21, 2021Date of Patent: September 17, 2024Assignee: Cvitek Co. Ltd.Inventors: Wei-Chun Chang, Yuan-Hsiang Kuo, Chia-Lin Lu, Hsueh-Chien Lu
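The depth-to-space permutation the accelerator targets can be expressed as a combination of the three tensor operations the abstract names (stride, reshape, transpose). A NumPy sketch of the standard rearrangement in NCHW layout (a rough illustration of the data movement, not the accelerator's datapath):

```python
import numpy as np

# Depth-to-space with block size r in NCHW layout: groups of r*r channels
# are redistributed into r x r spatial blocks, implemented purely as a
# reshape followed by a transpose followed by a reshape.
def depth_to_space(x, r):
    n, c, h, w = x.shape
    x = x.reshape(n, r, r, c // (r * r), h, w)   # tensor reshape
    x = x.transpose(0, 3, 4, 1, 5, 2)            # tensor transpose
    return x.reshape(n, c // (r * r), h * r, w * r)

x = np.arange(16).reshape(1, 4, 2, 2)   # 4 channels, 2x2 spatial
y = depth_to_space(x, 2)                 # -> 1 channel, 4x4 spatial
```

Space-to-depth is the inverse permutation, realized by the same three primitives applied in reverse.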
-
Patent number: 12093836Abstract: Automatic multi-objective hardware optimization for processing a deep learning network is disclosed. An example of a storage medium includes instructions for obtaining client preferences for a plurality of performance indicators for processing of a deep learning workload; generating a workload representation for the deep learning workload; providing the workload representation to machine learning processing to generate a workload executable, the workload executable including hardware mapping based on the client preferences; and applying the workload executable in processing of the deep learning workload.Type: GrantFiled: December 21, 2020Date of Patent: September 17, 2024Assignee: INTEL CORPORATIONInventors: Mattias Marder, Estelle Aflalo, Avrech Ben-David, Shauharda Khadka, Somdeb Majumdar, Santiago Miret, Hanlin Tang
-
Patent number: 12086572Abstract: Embodiments herein describe techniques for expressing the layers of a neural network in a software model. In one embodiment, the software model includes a class that describes the various functional blocks (e.g., convolution units, max-pooling units, rectified linear units (ReLU), and scaling functions) used to execute the neural network layers. In turn, other classes in the software model can describe the operation of each of the functional blocks. In addition, the software model can include conditional logic for expressing how the data flows between the functional blocks since different layers in the neural network can process the data differently. A compiler can convert the high-level code in the software model (e.g., C++) into a hardware description language (e.g., register transfer level (RTL)) which is used to configure a hardware system to implement a neural network accelerator.Type: GrantFiled: October 17, 2017Date of Patent: September 10, 2024Assignee: XILINX, INC.Inventors: Yongjun Wu, Jindrich Zejda, Elliott Delaye, Ashish Sirasao
-
Patent number: 12088823Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for encoding video comprising a sequence of video frames. In one aspect, a method comprises for one or more of the video frames: obtaining a feature embedding for the video frame; processing the feature embedding using a rate control machine learning model to generate a respective score for each of multiple quantization parameter values; selecting a quantization parameter value using the scores; determining a cumulative amount of data required to represent: (i) an encoded representation of the video frame and (ii) encoded representations of each preceding video frame; determining, based on the cumulative amount of data, that a feedback control criterion for the video frame is satisfied; updating the selected quantization parameter value; and processing the video frame using an encoding model to generate the encoded representation of the video frame.Type: GrantFiled: November 3, 2021Date of Patent: September 10, 2024Assignee: DeepMind Technologies LimitedInventors: Chenjie Gu, Hongzi Mao, Ching-Han Chiang, Cheng Chen, Jingning Han, Ching Yin Derek Pang, Rene Andre Claus, Marisabel Guevara Hechtman, Daniel James Visentin, Christopher Sigurd Fougner, Charles Booth Schaff, Nishant Patil, Alejandro Ramirez Bellido
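The control loop in this abstract has two parts: a learned model scores candidate quantization parameter (QP) values, and a feedback criterion on the cumulative bitstream size can override the selection before the frame is encoded. A schematic sketch (the score dictionary, budget numbers, and step size are illustrative assumptions, not values from the patent):

```python
# The rate control model assigns a score to each candidate QP value;
# the highest-scoring QP is selected for the frame.
def select_qp(scores):
    return max(scores, key=scores.get)

# Feedback control: if the cumulative encoded size of this frame and all
# preceding frames exceeds the running budget, raise the QP (coarser
# quantization produces smaller encoded representations).
def feedback_adjust(qp, cumulative_bits, frames_done, budget_per_frame, step=2):
    if cumulative_bits > frames_done * budget_per_frame:
        qp = qp + step
    return qp

qp = select_qp({22: 0.1, 27: 0.7, 32: 0.2})      # model prefers QP 27
qp = feedback_adjust(qp, cumulative_bits=5_400_000,
                     frames_done=100, budget_per_frame=50_000)
# 5.4M bits used vs a 5.0M running budget: criterion satisfied, QP raised
```

The updated QP is then what the encoding model uses to produce the frame's encoded representation.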
-
Patent number: 12086715Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing sequence modeling tasks using insertions. One of the methods includes receiving a system input that includes one or more source elements from a source sequence and zero or more target elements from a target sequence, wherein each source element is selected from a vocabulary of source elements and wherein each target element is selected from a vocabulary of target elements; generating a partial concatenated sequence that includes the one or more source elements from the source sequence and the zero or more target elements from the target sequence, wherein the source and target elements are arranged in the partial concatenated sequence according to a combined order; and generating a final concatenated sequence that includes a finalized source sequence and a finalized target sequence, wherein the finalized target sequence includes one or more target elements.Type: GrantFiled: May 22, 2023Date of Patent: September 10, 2024Assignee: Google LLCInventors: William Chan, Mitchell Thomas Stern, Nikita Kitaev, Kelvin Gu, Jakob D. Uszkoreit
-
Patent number: 12086713Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for evaluating candidate output sequences using language model neural networks. In particular, an auto-regressive language model neural network is used to generate a candidate output sequence. The same auto-regressive language model neural network is used to evaluate the candidate output sequence to determine rating scores for each of one or more criteria. The rating score(s) are then used to determine whether to provide the candidate output sequence.Type: GrantFiled: July 28, 2022Date of Patent: September 10, 2024Assignee: Google LLCInventors: Daniel De Freitas Adiwardana, Noam M. Shazeer
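The generate-then-self-rate pattern in this abstract can be sketched as a small control loop in which the same model plays both roles: it produces a candidate output sequence, then rates that candidate against each criterion, and the candidate is provided only if every rating score clears its threshold. The stub callables below stand in for one auto-regressive language model used in both roles (all names, scores, and the threshold are hypothetical):

```python
# One model serves as both generator and evaluator; here both roles are
# stubbed with simple callables so the control flow is visible.
def generate_and_filter(generate, rate, prompt, criteria, threshold=0.5):
    candidate = generate(prompt)                       # generation pass
    scores = {c: rate(candidate, c) for c in criteria} # evaluation passes
    if all(s >= threshold for s in scores.values()):
        return candidate, scores                       # provide the output
    return None, scores                                # withhold it

generate = lambda prompt: prompt.upper()               # stand-in generator
rate = lambda text, criterion: 0.9 if text else 0.0    # stand-in rater
out, scores = generate_and_filter(generate, rate, "hello",
                                  ["safety", "quality"])
```

Reusing one network for both passes avoids training and serving a separate reward or critic model.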
-
Patent number: 12086993Abstract: A method for tracking and/or characterizing multiple objects in a sequence of images. The method includes: assigning a neural network to each object to be tracked; providing a memory shared by all neural networks, and designed to map an address vector of address components, via differentiable operations, onto one or multiple memory locations, and to read data from these memory locations or write data into these memory locations; supplying images from the sequence, and/or details of these images, to each neural network; during the processing of each image and/or image detail by one of the neural networks, generating an address vector from at least one processing product of this neural network; based on this address vector, writing at least one further processing product of the neural network into the shared memory, and/or reading out data from this shared memory and further processing the data by the neural network.Type: GrantFiled: March 16, 2022Date of Patent: September 10, 2024Assignee: ROBERT BOSCH GMBHInventor: Cosmin Ionut Bercea
-
Patent number: 12079713Abstract: Methods and apparatus for discriminative semantic transfer and physics-inspired optimization in deep learning are disclosed. A computation training method for a convolutional neural network (CNN) includes receiving a sequence of training images in the CNN of a first stage to describe objects of a cluttered scene as a semantic segmentation mask. The semantic segmentation mask is received in a semantic segmentation network of a second stage to produce semantic features. Using weights from the first stage as feature extractors and weights from the second stage as classifiers, edges of the cluttered scene are identified using the semantic features.Type: GrantFiled: May 3, 2023Date of Patent: September 3, 2024Assignee: Intel CorporationInventors: Anbang Yao, Hao Zhao, Ming Lu, Yiwen Guo, Yurong Chen
-
Patent number: 12079722Abstract: The embodiments of this application provide a method and device for optimizing a neural network. The method includes: binarizing and bit-packing input data of a convolution layer along a channel direction, and obtaining compressed input data; binarizing and bit-packing respectively each convolution kernel of the convolution layer along the channel direction, and obtaining each corresponding compressed convolution kernel; dividing the compressed input data sequentially in a convolutional computation order into blocks of the compressed input data with the same size as each compressed convolution kernel, wherein the data input to a single convolutional computation form a data block; and performing a convolutional computation on each block of the compressed input data and each compressed convolution kernel sequentially, obtaining each convolutional result data, and obtaining multiple output data of the convolution layer according to each convolutional result data.Type: GrantFiled: February 1, 2023Date of Patent: September 3, 2024Assignee: Beijing Tusen Zhitu Technology Co., Ltd.Inventors: Yuwei Hu, Jiangming Jin, Lei Su, Dinghua Li
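The benefit of binarizing and bit-packing along the channel direction is that the inner product at the heart of each convolutional computation collapses to XNOR plus popcount on packed words. A minimal sketch of that reduction (helper names are hypothetical, and real implementations pack into wider words than the bytes used here):

```python
import numpy as np

# Binarize by sign and pack the resulting bits into bytes.
def binarize_pack(x):
    return np.packbits((x >= 0).astype(np.uint8))

# Dot product of two sign-binarized vectors of length n: XNOR the packed
# words (1 where signs agree), popcount, then map the agreement count from
# {0..n} back to the +/-1 dot product range.
def binary_dot(packed_a, packed_b, n):
    xnor = np.invert(packed_a ^ packed_b)
    matches = int(np.unpackbits(xnor)[:n].sum())  # drop packing padding
    return 2 * matches - n

a = np.array([0.5, -1.2, 0.3, 2.0, -0.7, 0.1, -0.4, 0.9])
b = np.array([1.0, -0.5, -0.2, 0.4, 0.6, -0.3, -0.8, 0.2])
approx = binary_dot(binarize_pack(a), binarize_pack(b), n=a.size)
exact = int(np.sign(a) @ np.sign(b))   # same value on the binarized data
```

Sliding this packed dot product over each data block against each compressed kernel yields the convolutional result data the abstract describes, with far less memory traffic than full-precision arithmetic.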
-
Patent number: 12080289Abstract: Disclosed is an electronic apparatus. The electronic apparatus includes: a communication interface, a memory, and a processor connected to the memory and the communication interface, the processor configured to control the electronic apparatus to, based on receiving a speech related to a function of the electronic apparatus, obtain text information corresponding to the received speech, control the communication interface to transmit the obtained text information to a server including a first neural network model corresponding to the function, execute the function based on response information received from the server, and based on identifying that an update period of the first neural network model is greater than or equal to a first threshold period based on the information related to the function of the electronic apparatus, the electronic apparatus may receive the information about the first neural network model from the server and store the information in the memory.Type: GrantFiled: September 27, 2021Date of Patent: September 3, 2024Assignee: SAMSUNG ELECTRONICS CO., LTD.Inventors: Hyeonmok Ko, Dayoung Kwon, Jonggu Kim, Seoha Song, Kyenghun Lee, Hojung Lee, Saebom Jang, Pureum Jung, Changho Paeon, Jiyeon Hong
-
Patent number: 12079695Abstract: A computer-implemented method of generating scale-permuted models can generate models having improved accuracy and reduced evaluation computational requirements. The method can include defining, by a computing system including one or more computing devices, a search space including a plurality of candidate permutations of a plurality of candidate feature blocks, each of the plurality of candidate feature blocks having a respective scale. The method can include performing, by the computing system, a plurality of search iterations by a search algorithm to select a scale-permuted model from the search space, the scale-permuted model based at least in part on a candidate permutation of the plurality of candidate permutations.Type: GrantFiled: October 1, 2020Date of Patent: September 3, 2024Assignee: GOOGLE LLCInventors: Xianzhi Du, Yin Cui, Tsung-Yi Lin, Quoc V. Le, Pengchong Jin, Mingxing Tan, Golnaz Ghiasi, Xiaodan Song