Patents Examined by Kamran Afshar
-
Patent number: 12169763
Abstract: Techniques are disclosed for providing a scalable multi-tenant serving pool for chatbot systems. A query serving system (QSS) receives a request to serve a query for a skillbot. The QSS includes: (i) a plurality of deployments in a serving pool, and (ii) a plurality of deployments in a free pool. The QSS determines whether a first deployment from the plurality of deployments in the serving pool can serve the query based on an identifier of the skillbot. In response to determining that the first deployment cannot serve the query, the QSS selects a second deployment from the plurality of deployments in the free pool to be assigned to the skillbot, and loads a machine-learning model associated with the skillbot into the second deployment, wherein the machine-learning model is trained to serve the query for the skillbot. The query is served using the machine-learning model loaded into the second deployment.
Type: Grant
Filed: April 13, 2021
Date of Patent: December 17, 2024
Assignee: Oracle International Corporation
Inventors: Vishal Vishnoi, Suman Mallapura Somasundar, Xin Xu, Stevan Malesevic
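The abstract describes a routing pattern: look up a deployment already assigned to the skillbot, otherwise promote one from the free pool and load that skillbot's model into it. A minimal Python sketch of that flow, assuming hypothetical `Deployment` and `QueryServingSystem` helpers (the class and method names are illustrative, not from the patent):

```python
class Deployment:
    """A hypothetical serving container that can hold one skillbot model."""
    def __init__(self, name):
        self.name = name
        self.model = None          # machine-learning model currently loaded
        self.skillbot_id = None    # skillbot this deployment is assigned to

    def load_model(self, skillbot_id):
        # Stand-in for fetching and loading the trained model for the skillbot.
        self.skillbot_id = skillbot_id
        self.model = lambda query: f"answer from {skillbot_id}: {query}"

    def serve(self, query):
        return self.model(query)


class QueryServingSystem:
    def __init__(self, serving, free):
        self.serving_pool = {d.skillbot_id: d for d in serving}  # assigned deployments
        self.free_pool = list(free)                              # unassigned deployments

    def serve_query(self, skillbot_id, query):
        # First try a deployment already assigned to this skillbot.
        deployment = self.serving_pool.get(skillbot_id)
        if deployment is None:
            # Otherwise promote a free deployment and load the skillbot's model.
            deployment = self.free_pool.pop()
            deployment.load_model(skillbot_id)
            self.serving_pool[skillbot_id] = deployment
        return deployment.serve(query)


qss = QueryServingSystem(serving=[], free=[Deployment("d0"), Deployment("d1")])
print(qss.serve_query("pizza_bot", "order status"))
```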
-
Patent number: 12169793
Abstract: A system and method for controlling a system, comprising estimating an optimal control policy for the system; receiving data representing sequential states and associated trajectories of the system, comprising off-policy states and associated off-policy trajectories; improving the estimate of the optimal control policy by performing at least one approximate value iteration, comprising: estimating a value of operation of the system dependent on the estimated optimal control policy; using a complex return of the received data, biased by the off-policy states, to determine a bound dependent on at least the off-policy trajectories, and using the bound to improve the estimate of the value of operation of the system according to the estimated optimal control policy; and updating the estimate of the optimal control policy, dependent on the improved estimate of the value of operation of the system.
Type: Grant
Filed: November 16, 2020
Date of Patent: December 17, 2024
Assignee: The Research Foundation for The State University of New York
Inventors: Robert Wright, Lei Yu, Steven Loscalzo
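A toy sketch of one approximate value-iteration sweep on a small tabular problem, where a complex (lambda-weighted, multi-step) return computed from an off-policy trajectory is used as a lower bound that can lift the one-step bootstrapped value estimate. The weighting scheme and the use of `max` as the bound are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

gamma, lam = 0.95, 0.8
V = np.zeros(5)  # value estimate for 5 states under the current policy estimate

# One recorded off-policy trajectory: (state, reward) pairs, then a terminal state.
trajectory = [(0, 1.0), (2, 0.5), (3, 2.0)]
terminal_state = 4

def complex_return(traj, V, t):
    """Lambda-weighted mix of n-step returns starting at step t (TD(lambda)-style)."""
    g, total, weight = 0.0, 0.0, 1.0
    discount = 1.0
    for n, (s, r) in enumerate(traj[t:], start=1):
        g += discount * r
        discount *= gamma
        next_s = traj[t + n][0] if t + n < len(traj) else terminal_state
        n_step = g + discount * V[next_s]
        total += weight * (1 - lam) * n_step if t + n < len(traj) else weight * n_step
        weight *= lam
    return total

# Approximate value-iteration sweep: the complex return computed from the
# off-policy data acts as a bound on the one-step bootstrapped estimate.
for t, (s, r) in enumerate(trajectory):
    next_s = trajectory[t + 1][0] if t + 1 < len(trajectory) else terminal_state
    bootstrapped = r + gamma * V[next_s]
    bound = complex_return(trajectory, V, t)
    V[s] = max(bootstrapped, bound)   # use the bound to improve the estimate

print(V)
```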
-
Patent number: 12165067
Abstract: Systems and methods for anomaly detection in accordance with embodiments of the invention are illustrated. One embodiment includes a method for training a system for detecting anomalous samples. The method draws data samples from a data distribution of true samples and an anomaly distribution and draws a latent sample from a latent space. The method further includes steps for training a generator to generate data samples based on the drawn data samples and the latent sample, and training a cyclic discriminator to distinguish between true data samples and reconstructed samples. A reconstructed sample is generated by the generator based on an encoding of a data sample. The method identifies a set of one or more true pairs, a set of one or more anomalous pairs, and a set of one or more generated pairs. The method trains a joint discriminator to distinguish true pairs from anomalous and generated pairs.
Type: Grant
Filed: June 25, 2020
Date of Patent: December 10, 2024
Assignees: The Board of Trustees of the Leland Stanford Junior University, Ford Global Technologies, LLC
Inventors: Ziyi Yang, Eric Felix Darve, Iman Soltani Bozchalooi
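A compact PyTorch sketch of the joint-discriminator step: build true (sample, encoding) pairs, anomalous pairs, and generated (sample, latent) pairs, then train a discriminator to separate the first set from the other two. Network sizes, the Gaussian stand-ins for the data and anomaly distributions, and the omission of the cyclic discriminator are simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn

d_x, d_z = 4, 2  # data and latent dimensions

generator = nn.Sequential(nn.Linear(d_z, 16), nn.ReLU(), nn.Linear(16, d_x))
encoder = nn.Sequential(nn.Linear(d_x, 16), nn.ReLU(), nn.Linear(16, d_z))
joint_disc = nn.Sequential(nn.Linear(d_x + d_z, 16), nn.ReLU(), nn.Linear(16, 1))

opt = torch.optim.Adam(joint_disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

n = 32
x_true = torch.randn(n, d_x)            # stand-in draw from the true data distribution
x_anom = torch.randn(n, d_x) + 5.0      # stand-in draw from the anomaly distribution
z = torch.randn(n, d_z)                 # latent samples

# Build the three kinds of pairs the joint discriminator sees.
true_pairs = torch.cat([x_true, encoder(x_true)], dim=1)   # (true sample, its encoding)
anom_pairs = torch.cat([x_anom, encoder(x_anom)], dim=1)   # (anomalous sample, its encoding)
gen_pairs = torch.cat([generator(z), z], dim=1)            # (generated sample, its latent)

# Joint discriminator: label true pairs 1, anomalous and generated pairs 0.
logits = joint_disc(torch.cat([true_pairs, anom_pairs, gen_pairs]).detach())
labels = torch.cat([torch.ones(n, 1), torch.zeros(2 * n, 1)])
loss = bce(logits, labels)

opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```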
-
Localization-based test generation for individual fairness testing of artificial intelligence models
Patent number: 12147906
Abstract: Methods, systems, and computer program products for localization-based test generation for individual fairness testing of AI models are provided herein. A computer-implemented method includes obtaining at least one artificial intelligence model and training data related to the at least one artificial intelligence model; identifying one or more boundary regions associated with the at least one artificial intelligence model based at least in part on results of processing at least a portion of the training data using the at least one artificial intelligence model; generating, in accordance with at least one of the one or more identified boundary regions, one or more synthetic data points for inclusion with the training data; and executing one or more fairness tests on the at least one artificial intelligence model using at least a portion of the one or more generated synthetic data points and at least a portion of the training data.
Type: Grant
Filed: April 26, 2021
Date of Patent: November 19, 2024
Assignee: International Business Machines Corporation
Inventors: Diptikalyan Saha, Aniya Aggarwal, Sandeep Hans
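A hedged scikit-learn sketch of the described pipeline: train a classifier, treat training points the model is least certain about as lying in a boundary region, generate synthetic points by perturbing them, then run an individual-fairness check by flipping a protected attribute and comparing predictions. The feature layout, the 0.1 uncertainty band, and the perturbation scale are assumptions for illustration, not the patented method's parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: column 0 is a protected attribute (0/1), columns 1-2 are features.
X = np.column_stack([rng.integers(0, 2, 500), rng.normal(size=(500, 2))])
y = (X[:, 1] + X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# 1. Localize boundary regions: training points the model is least certain about.
proba = model.predict_proba(X)[:, 1]
boundary = X[np.abs(proba - 0.5) < 0.1]

# 2. Generate synthetic test points by perturbing boundary points slightly.
synthetic = boundary + rng.normal(scale=0.05, size=boundary.shape)
synthetic[:, 0] = np.round(np.clip(synthetic[:, 0], 0, 1))  # keep the attribute binary

# 3. Individual fairness test: flipping only the protected attribute
#    should not change the prediction.
flipped = synthetic.copy()
flipped[:, 0] = 1 - flipped[:, 0]
violations = model.predict(synthetic) != model.predict(flipped)
print(f"individual-fairness violations: {violations.sum()} / {len(synthetic)}")
```
-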
Patent number: 12147892
Abstract: Provided is an electronic apparatus. The electronic apparatus includes a memory and a processor. The processor is configured to apply a low rank approximation using a matrix decomposition for a first square matrix among a plurality of square matrices based on parameter values of a deep learning model, and obtain a first approximated matrix and a second approximated matrix for the first square matrix, obtain second approximated matrices for each of a plurality of remaining square matrices other than the first square matrix among the plurality of square matrices, based on the first approximated matrix for the first square matrix, and store the first approximated matrix for the first square matrix and the second approximated matrices for each of the plurality of square matrices in the memory.
Type: Grant
Filed: April 8, 2020
Date of Patent: November 19, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sejung Kwon, Baeseong Park, Dongsoo Lee
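A NumPy sketch of the sharing idea: decompose the first square matrix with a truncated SVD into a first factor A and a second factor B0, then fit only a second factor for each remaining matrix against the shared A, so A is stored once. SVD and least squares are illustrative choices; the abstract does not commit to a particular decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
rank, n = 8, 64
matrices = [rng.normal(size=(n, n)) for _ in range(3)]  # square parameter matrices

# Low-rank decomposition of the first matrix: W0 ~= A @ B0.
U, s, Vt = np.linalg.svd(matrices[0])
A = U[:, :rank] * s[:rank]          # first approximated matrix (stored once)
B = [Vt[:rank, :]]                  # second approximated matrix for W0

# Remaining matrices reuse the shared first factor A: W_i ~= A @ B_i,
# so only the second approximated matrix B_i must be obtained and stored.
for W in matrices[1:]:
    B_i, *_ = np.linalg.lstsq(A, W, rcond=None)
    B.append(B_i)

stored = A.size + sum(b.size for b in B)
print(f"original parameters: {sum(W.size for W in matrices)}, stored: {stored}")
```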
-
Patent number: 12141238
Abstract: Discussed herein are devices, systems, and methods for classification using a clustering autoencoder. A method can include obtaining content to be classified by a deep neural network (DNN) classifier, and operating the DNN classifier to determine a classification of the received content, the DNN classifier including a clustering classification layer that clusters based on a latent feature vector representation of the content, the classification corresponding to one or more clusters that are closest to the latent feature vector, providing the classification and a corresponding confidence.
Type: Grant
Filed: October 27, 2020
Date of Patent: November 12, 2024
Assignee: Raytheon Company
Inventors: Philip A. Sallee, James Mullen
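Once an encoder has mapped content to a latent feature vector, the clustering classification layer reduces to finding the nearest cluster and turning distances into a confidence. A NumPy sketch under simplifying assumptions: the centroids are taken as already learned, and the softmax over negative distances is an illustrative confidence measure rather than the patent's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cluster centroids in latent space, one per class (assumed already learned
# jointly with the autoencoder).
centroids = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])
class_names = ["cat", "dog", "bird"]

def classify(latent):
    """Clustering classification layer: nearest centroid plus a confidence."""
    dists = np.linalg.norm(centroids - latent, axis=1)
    scores = np.exp(-dists)
    confidence = scores / scores.sum()        # softmax over negative distances
    k = int(np.argmin(dists))
    return class_names[k], float(confidence[k])

# Stand-in for the encoder half of the autoencoder producing a latent vector.
latent_vector = rng.normal(loc=[2.8, 3.1], scale=0.1)
print(classify(latent_vector))
```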
-
Patent number: 12136039
Abstract: Some embodiments provide a method for training multiple parameters of a machine-trained (MT) network subject to a sparsity constraint that requires a threshold portion of the parameters to be equal to zero. A first set of the parameters subject to the sparsity constraint are grouped into groups of parameters. For each parameter of a second set of the parameters subject to the sparsity constraint, the method determines an accuracy penalty associated with the parameter being set to zero. For each group of parameters in the first set of parameters, the method determines a minimum accuracy penalty for each possible number of parameters in the group being set to zero. The method uses the determined accuracy penalties to set at least the threshold portion of the parameters to zero.
Type: Grant
Filed: July 7, 2020
Date of Patent: November 5, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig
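A NumPy sketch of the bookkeeping: use squared weight magnitude as a stand-in for the per-parameter accuracy penalty, build each group's table of minimum total penalty for every possible number of zeros, then hand out zeros greedily across groups until the sparsity threshold is met. The penalty proxy and the greedy allocation are assumptions for illustration; the patent does not specify either.

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=(8, 16))          # 8 groups of 16 parameters each
threshold = 0.75                            # fraction of parameters that must be zero

# Per-parameter accuracy penalty: here a simple magnitude-based proxy.
penalty = weights ** 2

# For each group, the minimum total penalty of zeroing k parameters is the sum
# of its k smallest penalties ("min penalty for each possible number of zeros").
sorted_pen = np.sort(penalty, axis=1)
min_penalty_for_k = np.cumsum(sorted_pen, axis=1)   # shape: (groups, group_size)

# Greedily hand out zeros: each step, zero one more parameter in the group whose
# next zero costs the least, until the sparsity threshold is met.
zeros_per_group = np.zeros(len(weights), dtype=int)
target = int(np.ceil(threshold * weights.size))
for _ in range(target):
    next_cost = np.where(zeros_per_group < weights.shape[1],
                         sorted_pen[np.arange(len(weights)),
                                    np.minimum(zeros_per_group, weights.shape[1] - 1)],
                         np.inf)
    g = int(np.argmin(next_cost))
    zeros_per_group[g] += 1

# Apply the chosen pattern: zero the smallest-penalty parameters in each group.
pruned = weights.copy()
for g, k in enumerate(zeros_per_group):
    pruned[g, np.argsort(penalty[g])[:k]] = 0.0

incurred = sum(min_penalty_for_k[g, k - 1] for g, k in enumerate(zeros_per_group) if k > 0)
print("sparsity:", np.mean(pruned == 0), "total accuracy penalty:", round(incurred, 3))
```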
-
Patent number: 12112264
Abstract: A device which comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, and second control lines extending in a second direction across the array of RPU cells. Peripheral circuitry comprising readout circuitry is coupled to the first and second control lines. A control system generates control signals to control the peripheral circuitry to perform a first operation and a second operation on the array of RPU cells. The control signals include a first configuration control signal to configure the readout circuitry to have a first hardware configuration when the first operation is performed on the array of RPU cells, and a second configuration control signal to configure the readout circuitry to have a second hardware configuration, which is different from the first hardware configuration, when the second operation is performed on the array of RPU cells.
Type: Grant
Filed: December 15, 2020
Date of Patent: October 8, 2024
Assignee: International Business Machines Corporation
Inventors: Malte Johannes Rasch, Tayfun Gokmen, Seyoung Kim
-
Patent number: 12106491
Abstract: Embodiments of this application disclose a target tracking method performed at an electronic device. The electronic device obtains a first video stream and detects candidate regions within a current video frame in the first video stream. The electronic device then extracts, from the candidate regions, a deep feature corresponding to each candidate region and calculates a feature similarity between the deep feature of each candidate region and a deep feature of the target detected in a previous video frame. Finally, the electronic device determines, based on the feature similarity corresponding to the candidate region, that the target is detected in the current video frame. Target detection is performed in a range of video frames by using a target detection model, and target tracking is performed based on the deep feature, so that cases such as target tracking drift or loss can be effectively prevented, ensuring the accuracy of target tracking.
Type: Grant
Filed: October 6, 2020
Date of Patent: October 1, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Hao Zhang, Zhiwei Niu
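The core matching step can be sketched in a few lines of NumPy: compare the deep feature of each candidate region against the target's feature from the previous frame and accept the best match above a threshold. Cosine similarity, the 128-dimensional features, and the 0.5 threshold are illustrative assumptions standing in for the CNN embedding and decision rule.

```python
import numpy as np

rng = np.random.default_rng(3)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Deep feature of the target from the previous frame (stand-in for a CNN embedding).
target_feature = rng.normal(size=128)

# Deep features extracted from candidate regions detected in the current frame;
# candidate 2 is made similar to the target on purpose.
candidates = rng.normal(size=(4, 128))
candidates[2] = target_feature + rng.normal(scale=0.1, size=128)

similarities = np.array([cosine(c, target_feature) for c in candidates])
best = int(np.argmax(similarities))

# Accept the match only if the similarity clears a threshold (assumed value),
# otherwise declare the target lost in this frame.
if similarities[best] > 0.5:
    print(f"target tracked in candidate region {best} "
          f"(similarity {similarities[best]:.2f})")
else:
    print("target not found in current frame")
```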
-
Patent number: 12079704
Abstract: A system includes a data collection engine, a plurality of items including radio-frequency identification chips, a plurality of third party data and insight sources, a plurality of interfaces, client devices, a server and method thereof for preventing suicide. The server includes trained machine learning models, business logic and attributes of a plurality of patient events. The data collection engine sends attributes of new patient events to the server. The server can predict an adverse event risk of the new patient events based upon the attributes of the new patient events utilizing the trained machine learning models.
Type: Grant
Filed: October 31, 2022
Date of Patent: September 3, 2024
Assignee: Brain Trust Innovations I, LLC
Inventor: David LaBorde
-
Patent number: 12067484
Abstract: An example method of training a neural network includes defining hardware building blocks (HBBs), neuron equivalents (NEQs), and conversion procedures from NEQs to HBBs; defining the neural network using the NEQs in a machine learning framework; training the neural network on a training platform; and converting the neural network as trained into a netlist of HBBs using the conversion procedures to convert the NEQs in the neural network to the HBBs of the netlist.
Type: Grant
Filed: June 21, 2019
Date of Patent: August 20, 2024
Assignee: XILINX, INC.
Inventors: Yaman Umuroglu, Nicholas Fraser, Michaela Blott, Kristof Denolf, Kornelis A. Vissers
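A plain-Python sketch of the conversion step only: each trained neuron equivalent (here a thresholded weighted sum over binary inputs) is mapped by a conversion procedure into a hardware building block entry in a netlist (here a lookup-table-style block). The NEQ and HBB representations are invented for illustration; the patent does not specify these data structures.

```python
from itertools import product

# Trained "neuron equivalents": binary inputs, integer weights, a threshold.
neqs = [
    {"name": "neq0", "weights": [2, -1, 1], "threshold": 1},
    {"name": "neq1", "weights": [1, 1, -2], "threshold": 0},
]

def neq_to_hbb(neq):
    """Conversion procedure: enumerate the NEQ's truth table and emit a LUT block."""
    n = len(neq["weights"])
    table = {}
    for bits in product([0, 1], repeat=n):
        activation = sum(w * b for w, b in zip(neq["weights"], bits))
        table[bits] = int(activation >= neq["threshold"])
    return {"type": "LUT", "name": neq["name"], "inputs": n, "truth_table": table}

# Convert the trained network into a netlist of hardware building blocks.
netlist = [neq_to_hbb(neq) for neq in neqs]
for hbb in netlist:
    print(hbb["name"], hbb["type"], f"{hbb['inputs']}-input,",
          sum(hbb["truth_table"].values()), "minterms set")
```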
-
Patent number: 12067485
Abstract: Methods, systems, and non-transitory computer readable media are provided for long short-term memory (LSTM) anomaly detection for multi-sensor equipment monitoring. A method includes training a LSTM recurrent neural network (RNN) model for semiconductor processing fault detection. The training includes generating training data for the LSTM RNN model and providing the training data to train the LSTM RNN model on first training input and first target output to generate a trained LSTM RNN model for the semiconductor processing fault detection. The training data includes the first training input and the first target output based on normal runs of manufacturing processes of semiconductor processing equipment. Another method includes providing input based on runs of manufacturing processes of semiconductor processing equipment to a trained LSTM RNN model; obtaining one or more outputs from the trained LSTM RNN model; and using the one or more outputs for semiconductor processing fault detection.
Type: Grant
Filed: September 24, 2019
Date of Patent: August 20, 2024
Assignee: Applied Materials, Inc.
Inventors: Sima Didari, Tianqing Liao, Harikrishnan Rajagopal
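A compact PyTorch sketch of the overall pattern: train a small LSTM to predict the next multi-sensor reading from normal runs only, then flag a run when its prediction error is far above what normal runs produce. The model size, the synthetic sensor traces, and the 3x-max-error threshold are assumptions for illustration, not the patented training or detection procedure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_sensors, seq_len = 3, 20

# Synthetic sensor traces standing in for runs of the manufacturing process.
def make_run(anomaly=False):
    t = torch.linspace(0, 6.28, seq_len + 1)
    run = torch.stack([torch.sin(t + p) for p in range(n_sensors)], dim=1)
    run += 0.02 * torch.randn_like(run)
    if anomaly:
        run[12:, 1] += 1.5          # sensor 1 drifts mid-run
    return run

normal_runs = torch.stack([make_run() for _ in range(64)])  # training data: normal runs only

class Predictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, 32, batch_first=True)
        self.head = nn.Linear(32, n_sensors)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)

model = Predictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Training input: readings up to step t; target output: reading at step t+1.
for _ in range(200):
    pred = model(normal_runs[:, :-1])
    loss = nn.functional.mse_loss(pred, normal_runs[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

def run_error(run):
    with torch.no_grad():
        return float(nn.functional.mse_loss(model(run[None, :-1]), run[None, 1:]))

# Fault detection: errors well above those seen on normal runs indicate anomalies.
threshold = 3 * max(run_error(r) for r in normal_runs)
for label, run in [("normal", make_run()), ("faulty", make_run(anomaly=True))]:
    err = run_error(run)
    print(f"{label}: error {err:.4f} -> {'FAULT' if err > threshold else 'ok'}")
```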
-
Patent number: 12067479
Abstract: Systems and methods for heterogenous hardware acceleration are disclosed. The systems and methods can include a neural network processing unit comprising compute tiles. Each of a first set of the compute tiles can include a first tensor array configured to support operations in a first number format. Each of a second set of the compute tiles can include a second tensor array configured to support operations in a second number format, the second number format supporting a greater range or a greater precision than the first number format, and a de-quantizer configured to convert data in the first number format to data in the second number format. The systems and methods can include neural network processing units, multi-chip hardware accelerators and distributed hardware accelerators including low-precision components for performing inference tasks and high-precision components for performing training tasks.
Type: Grant
Filed: October 25, 2019
Date of Patent: August 20, 2024
Assignee: T-Head (Shanghai) Semiconductor Co., Ltd.
Inventor: Liang Han
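A NumPy sketch of the de-quantizer idea: a low-precision tile operates on int8 data with a scale factor, and the de-quantizer converts that result back to float32 so a high-precision tile can continue with training-grade arithmetic. Symmetric quantization and the max-based scale are illustrative assumptions rather than the patented number formats.

```python
import numpy as np

rng = np.random.default_rng(4)

def quantize(x):
    """Map float32 data into the low-precision (int8) number format."""
    scale = np.abs(x).max() / 127.0
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8), scale

def dequantize(q, scale):
    """De-quantizer: convert low-precision results back to the high-precision format."""
    return q.astype(np.float32) * scale

activations = rng.normal(size=(4, 8)).astype(np.float32)
weights = rng.normal(size=(8, 8)).astype(np.float32)

# Low-precision tile: int8 matmul accumulated in int32 (inference-style compute).
qa, sa = quantize(activations)
qw, sw = quantize(weights)
int_result = qa.astype(np.int32) @ qw.astype(np.int32)

# De-quantizer hands the result to a high-precision tile for training-style compute.
approx = dequantize(int_result, sa * sw)
exact = activations @ weights
print("max abs error after de-quantization:", float(np.abs(approx - exact).max()))
```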
-
Patent number: 12056604
Abstract: Layers of a deep neural network (DNN) are partitioned into stages using a profile of the DNN. Each of the stages includes one or more of the layers of the DNN. The partitioning of the layers of the DNN into stages is optimized in various ways including optimizing the partitioning to minimize training time, to minimize data communication between worker computing devices used to train the DNN, or to ensure that the worker computing devices perform an approximately equal amount of the processing for training the DNN. The stages are assigned to the worker computing devices. The worker computing devices process batches of training data using a scheduling policy that causes the workers to alternate between forward processing of the batches of the DNN training data and backward processing of the batches of the DNN training data. The stages can be configured for model parallel processing or data parallel processing.
Type: Grant
Filed: June 29, 2018
Date of Patent: August 6, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Vivek Seshadri, Amar Phanishayee, Deepak Narayanan, Aaron Harlap, Nikhil Devanur Rangarajan
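A plain-Python sketch of one of the optimizations mentioned: split the profiled per-layer compute times into contiguous stages so that the slowest stage (the pipeline bottleneck) is as small as possible, then assign one stage per worker. The binary search over the bottleneck limit is a standard linear-partition technique used here for illustration, not necessarily the patent's algorithm.

```python
# Per-layer compute times (ms) from profiling the DNN, and the worker count.
layer_times = [4, 9, 3, 7, 6, 2, 8, 5]
num_workers = 3

def partition(times, limit):
    """Greedily pack contiguous layers into stages whose total time <= limit."""
    stages, current = [[]], 0.0
    for t in times:
        if current + t > limit and stages[-1]:
            stages.append([])
            current = 0.0
        stages[-1].append(t)
        current += t
    return stages

# Binary search for the smallest feasible bottleneck time.
lo, hi = max(layer_times), sum(layer_times)
while lo < hi:
    mid = (lo + hi) // 2
    if len(partition(layer_times, mid)) <= num_workers:
        hi = mid
    else:
        lo = mid + 1

stages = partition(layer_times, lo)
for worker, stage in enumerate(stages):
    print(f"worker {worker}: layer times {stage}, stage time {sum(stage)} ms")
```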
-
Patent number: 12045725
Abstract: Some embodiments provide a method for training a network including layers that each includes multiple nodes. The method identifies a set of related layers of the network. Each node in one of the related layers has corresponding nodes in each of the other related layers. Each set of corresponding nodes receives a same set of inputs and applies different sets of weights to the inputs to generate an output. The method identifies an element-wise addition layer including nodes that each add outputs of a different set of corresponding nodes from the related layers to generate a sum. The method uses a set of outputs generated by the nodes of each related layer to determine batch normalization parameters specific to each layer of the set of related layers. The method uses data generated by the element-wise addition layer to determine batch normalization parameters for the set of related layers.
Type: Grant
Filed: July 7, 2020
Date of Patent: July 23, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig
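A NumPy sketch mirroring the two uses of statistics described above: per-layer batch-normalization parameters computed from each related layer's own outputs, and parameters for the set of related layers computed from the element-wise addition layer's data. Plain mean/variance statistics without learned scale and shift, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
batch, features = 32, 6

# Two related layers: same inputs, different weights, outputs added element-wise.
inputs = rng.normal(size=(batch, features))
w_a = rng.normal(size=(features, features))
w_b = rng.normal(size=(features, features))

out_a = inputs @ w_a          # outputs of related layer A
out_b = inputs @ w_b          # outputs of related layer B
summed = out_a + out_b        # element-wise addition layer

def batch_norm_params(x):
    """Per-feature batch-normalization statistics from a batch of outputs."""
    return x.mean(axis=0), x.var(axis=0)

# Parameters specific to each related layer, from that layer's own outputs...
mean_a, var_a = batch_norm_params(out_a)
mean_b, var_b = batch_norm_params(out_b)
# ...and parameters for the set of related layers, from the addition layer's data.
mean_sum, var_sum = batch_norm_params(summed)

normalized_sum = (summed - mean_sum) / np.sqrt(var_sum + 1e-5)
print("per-layer means:", mean_a.round(2), mean_b.round(2))
print("addition-layer normalized variance:", normalized_sum.var(axis=0).round(3))
```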
-
Patent number: 12039432
Abstract: An artificial neural network (ANN) apparatus can include processing component circuitry that receives one or more linear inputs and removes linearity from them based on an S-shaped saturating activation function that generates a continuous non-linear output. The neurons of the ANN comprise digital bit-wise components configured to transform the linear inputs into the continuous non-linear output.
Type: Grant
Filed: March 18, 2020
Date of Patent: July 16, 2024
Assignee: Infineon Technologies AG
Inventor: Andrew Stevens
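A piecewise-linear "hard sigmoid" is a common way to realize an S-shaped saturating activation with cheap digital operations (shifts, adds, clamps). The fixed-point format and the particular approximation below are illustrative assumptions, not the patented circuit.

```python
import numpy as np

FRAC_BITS = 8                       # Q*.8 fixed point: 256 represents 1.0
ONE = 1 << FRAC_BITS

def hard_sigmoid_fixed(x_fixed):
    """S-shaped saturating activation using only shifts, adds, and clamps.

    Approximates sigmoid(x) by clamp(x/4 + 0.5, 0, 1) in fixed point.
    """
    y = (x_fixed >> 2) + (ONE >> 1)   # x/4 + 0.5
    return np.clip(y, 0, ONE)         # saturate at 0 and 1

# Linear neuron pre-activations in fixed point, spanning -4.0 .. 4.0.
x = np.round(np.linspace(-4, 4, 9) * ONE).astype(np.int32)
y = hard_sigmoid_fixed(x)

for xi, yi in zip(x, y):
    print(f"x = {xi / ONE:+.2f}  ->  activation = {yi / ONE:.3f}")
```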
-
Patent number: 12033053
Abstract: Embodiments of the invention may execute a NN by executing sub-tensor columns, each sub-tensor column including computations from portions of the layers of the NN, and each sub-tensor column performing computations entirely within a first layer of cache (e.g. L2 in one embodiment) and saving its output entirely within a second layer of cache (e.g. L3 in one embodiment). Embodiments may include partitioning the execution of the NN into sub-tensor columns, each sub-tensor column including computations from portions of layers of the NN, each sub-tensor column performing computations entirely within a first layer of cache and saving its output entirely within a second layer of cache.
Type: Grant
Filed: November 23, 2022
Date of Patent: July 9, 2024
Assignee: NEURALMAGIC, INC.
Inventors: Alexander Matveev, Nir Shavit, Govind Ramnarayan
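A NumPy sketch of the idea with two 1-D convolution layers: instead of computing each layer over the whole tensor, compute the final output in narrow column slices, each slice pulling only the small input window it depends on, so the working set per column is bounded and can stay cache-resident. The kernel sizes, column width, and 1-D setting are illustrative simplifications of the patent's sub-tensor columns.

```python
import numpy as np

rng = np.random.default_rng(6)

k1 = rng.normal(size=3)   # kernel of layer 1 (valid 1-D convolution)
k2 = rng.normal(size=3)   # kernel of layer 2
x = rng.normal(size=64)   # input signal

def conv_valid(sig, ker):
    # np.convolve flips the kernel; flip it back to get a correlation-style conv.
    return np.convolve(sig, ker[::-1], mode="valid")

# Reference: whole-tensor execution, layer by layer (large intermediate buffer).
reference = conv_valid(conv_valid(x, k1), k2)

# Sub-tensor-column execution: compute the final output in narrow column slices.
col_width = 8
out_len = len(x) - 4            # two valid convs with kernel 3 shrink length by 4
columns = []
for start in range(0, out_len, col_width):
    stop = min(start + col_width, out_len)
    x_slice = x[start:stop + 4]             # input window needed for this column
    h_slice = conv_valid(x_slice, k1)       # portion of layer 1 for this column
    columns.append(conv_valid(h_slice, k2)) # portion of layer 2 = output column

tiled = np.concatenate(columns)
print("matches whole-tensor execution:", np.allclose(tiled, reference))
```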
-
Patent number: 12020163
Abstract: A method includes receiving a request to solve a problem defined by input information and applying a neural network to generate an answer to the problem. The neural network includes an input level, a manager level including a first manager, a worker level including first and second workers, and an output level. Applying the neural network includes implementing the input level to provide a piece of input information to the first manager; implementing the first manager to delegate portions of the piece of information to the first and second workers; implementing the first worker to operate on its portion of information to generate a first output; implementing the second worker to operate on its portion of information to generate a second output; and implementing the output level to generate the answer to the problem, using the first and second outputs. The method also includes transmitting a response comprising the answer.
Type: Grant
Filed: February 4, 2020
Date of Patent: June 25, 2024
Assignee: Bank of America Corporation
Inventors: Garrett Thomas Botkin, Matthew Bruce Murray
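A plain-Python sketch of the level structure only: the input level hands a piece of the input to a manager, the manager delegates portions to two workers, and the output level combines their outputs into the answer. The splitting and combining rules here are placeholders; in the patent these operations are carried out by the trained network.

```python
class Worker:
    def __init__(self, name):
        self.name = name

    def operate(self, portion):
        # Stand-in for a trained worker operating on its portion of information.
        return f"{self.name}:{portion.upper()}"


class Manager:
    def __init__(self, workers):
        self.workers = workers

    def delegate(self, piece):
        # Split the piece of information into one portion per worker.
        size = (len(piece) + len(self.workers) - 1) // len(self.workers)
        portions = [piece[i:i + size] for i in range(0, len(piece), size)]
        return [w.operate(p) for w, p in zip(self.workers, portions)]


def solve(problem):
    manager = Manager([Worker("w1"), Worker("w2")])   # manager level / worker level
    piece = problem.strip()                           # input level -> first manager
    outputs = manager.delegate(piece)                 # first and second outputs
    return " | ".join(outputs)                        # output level -> answer


print(solve("route this request"))
```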
-
Patent number: 12020160
Abstract: A method, computer program product and system for generating a neural network. Initial neural networks are prepared, each of which includes an input layer containing one or more input nodes, a middle layer containing one or more middle nodes, and an output layer containing one or more output nodes. A new neural network is generated that includes a new middle layer containing one or more middle nodes based on the middle nodes of the middle layers of the initial neural networks.
Type: Grant
Filed: January 19, 2018
Date of Patent: June 25, 2024
Assignee: International Business Machines Corporation
Inventor: Takeshi Inagaki
-
Patent number: 12014262
Abstract: Disclosed herein are apparatus, method, and computer-readable storage device embodiments for implementing deconvolution via a set of convolutions. An embodiment includes a convolution processor that includes hardware implementing logic to perform at least one algorithm comprising a convolution algorithm. The at least one convolution processor may be further configured to perform operations including performing a first convolution and outputting a first deconvolution segment as a result of performing the first convolution. The at least one convolution processor may be further configured to perform a second convolution and output a second deconvolution segment as a result of performing the second convolution.
Type: Grant
Filed: October 3, 2019
Date of Patent: June 18, 2024
Assignee: SYNOPSYS, INC.
Inventors: Tom Michiels, Thomas Julian Pennello
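A 1-D, stride-2 NumPy example of the general idea: a transposed convolution (deconvolution) can be produced by running ordinary convolutions with the even- and odd-indexed sub-kernels and interleaving the resulting segments. The dimensions and stride are illustrative; the patent covers the hardware scheme in general.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])          # input feature map
k = np.array([1.0, 2.0, 3.0, 4.0])     # deconvolution kernel
stride = 2

# Reference: direct transposed convolution (deconvolution) with stride 2.
out_len = stride * (len(x) - 1) + len(k)
direct = np.zeros(out_len)
for m, xm in enumerate(x):
    direct[stride * m:stride * m + len(k)] += xm * k

# Deconvolution as a set of convolutions: one convolution per output phase,
# each using a sub-kernel, and each producing one "deconvolution segment".
segment_even = np.convolve(x, k[0::2])   # first convolution  -> even output positions
segment_odd = np.convolve(x, k[1::2])    # second convolution -> odd output positions

combined = np.zeros(out_len)
combined[0::2] = segment_even
combined[1::2] = segment_odd

print("segments match direct deconvolution:", np.allclose(combined, direct))
```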