Patents Examined by Kamran Afshar
-
Patent number: 11688160
Abstract: A method of generating training data for training a neural network, a method of training a neural network and of using a neural network for autonomous operations, and related devices and systems. In one aspect, a neural network for autonomous operation of an object in an environment is trained. Policy values are generated based on a sample data set. An approximate action-value function is generated from the policy values. A set of approximated policy values is generated using the approximate action-value function for all states in the sample data set for all possible actions. A training target for the neural network is calculated based on the approximated policy values. A training error is calculated as the difference between the training target and the policy value for the corresponding state-action pair in the sample data set. At least some of the parameters of the neural network are updated to minimize the training error.
Type: Grant
Filed: January 15, 2019
Date of Patent: June 27, 2023
Assignee: Huawei Technologies Co., Ltd.
Inventor: Hengshuai Yao
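The target-and-error computation described in the abstract of patent 11688160 can be sketched in a hedged, tabular form. The function name, the dictionary-of-arrays representation of the action-value function, and the discount factor are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def training_errors(q, samples, gamma=0.9):
    # For each (state, action, reward, next_state) sample, the training
    # target is r + gamma * max over all actions of Q(next_state, a'),
    # i.e. the best approximated policy value at the next state; the
    # training error is the target minus the stored value for (state, action).
    errs = []
    for s, a, r, s2 in samples:
        target = r + gamma * np.max(q[s2])
        errs.append(target - q[s][a])
    return errs

q = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.5])}
errs = training_errors(q, [(0, 0, 1.0, 1)])   # one sampled transition
```

In a full training loop, these errors would drive gradient updates to the network parameters rather than a tabular store.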
-
Patent number: 11681924
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes receiving training data; training a neural network on the training data, wherein the neural network is configured to: receive a network input, convert the network input into a latent representation of the network input, and process the latent representation to generate a network output from the network input, and wherein training the neural network on the training data comprises training the neural network on a variational information bottleneck objective that encourages, for each training input, the latent representation generated for the training input to have low mutual information with the training input while the network output generated for the training input has high mutual information with the target output for the training input.
Type: Grant
Filed: December 18, 2020
Date of Patent: June 20, 2023
Assignee: Google LLC
Inventor: Alexander Amir Alemi
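A hedged sketch of a variational information bottleneck penalty of the kind this abstract describes. The diagonal-Gaussian encoder and the closed-form KL term are standard VIB assumptions, not details taken from the patent:

```python
import numpy as np

def vib_objective(task_loss, mu, log_var, beta=1e-3):
    # KL(N(mu, diag(exp(log_var))) || N(0, I)) in closed form; adding it
    # to the task loss pressures the latent representation toward low
    # mutual information with the input while the task loss keeps the
    # output predictive of the target.
    kl = 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0)
    return task_loss + beta * kl

# With a standard-normal latent (mu = 0, log_var = 0) the KL term vanishes.
loss = vib_objective(0.7, mu=np.zeros(4), log_var=np.zeros(4))
```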
-
Patent number: 11675999
Abstract: The machine learning device comprises a training part configured to train a machine learning model used in a vehicle; and a detecting part configured to detect replacement of a vehicle part mounted in the vehicle. The training part is configured to retrain the machine learning model using training data sets corresponding to a vehicle part after replacement when a vehicle part relating to input data of the machine learning model is replaced.
Type: Grant
Filed: August 18, 2021
Date of Patent: June 13, 2023
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Daiki Yokoyama, Tomohiro Kaneko
-
Patent number: 11669558
Abstract: A computer-implemented technique generates a dense embedding vector that provides a distributed representation of input text. The technique includes: generating an input term-frequency (TF) vector of dimension g that includes frequency information relating to frequency of occurrence of terms in an instance of input text; using a TF-modifying component to modify the term-specific frequency information in the input TF vector by respective machine-trained weighting factors, to produce an intermediate vector of dimension g; using a projection component to project the intermediate vector of dimension g into an embedding vector of dimension k, where k is less than g. Both the TF-modifying component and the projection component use respective machine-trained neural networks. An application performs any of a retrieval-based function, a recognition-based function, a recommendation-based function, a classification-based function, etc. based on the embedding vector.
Type: Grant
Filed: March 28, 2019
Date of Patent: June 6, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yan Wang, Ye Wu, Houdong Hu, Surendra Ulabala, Vishal Thakkar, Arun Sacheti
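The three-stage pipeline in patent 11669558 (TF vector of dimension g, learned per-term reweighting, projection down to k < g) can be sketched with random stand-ins for the machine-trained weights and projection matrix; everything here besides the pipeline shape is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
g, k = 1000, 64                    # vocabulary size g, embedding size k (k < g)
w = rng.random(g)                  # stand-in for the machine-trained per-term weights
P = rng.standard_normal((g, k))    # stand-in for the trained projection matrix

def embed(tf_vec):
    intermediate = tf_vec * w      # TF-modifying step (still dimension g)
    return intermediate @ P        # projection step: dimension g -> dimension k

tf_vec = np.zeros(g)
tf_vec[[3, 17, 42]] = 1.0          # three terms occur once each
e = embed(tf_vec)
```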
-
Patent number: 11663602
Abstract: Various methods, apparatuses, and media for implementing a fraud machine learning model execution module are provided. A processor generates a plurality of machine learning models. The processor generates historical aggregate data based on prior transaction activities of a customer from a plurality of databases for transactions. The processor also tracks activities of the customer during a new transaction authorization process and generates transaction data; integrates the transaction data with the historical aggregate data; executes each of said machine learning models using the integrated transaction data and the historical aggregate data to generate a fraud score and stores the fraud score into the memory; and determines whether the new transaction is fraudulent based on the generated fraud score.
Type: Grant
Filed: May 15, 2019
Date of Patent: May 30, 2023
Assignee: JPMORGAN CHASE BANK, N.A.
Inventors: Faeiz Hindi, Ramana Nallajarla, Sambasiva R. Vadlamudi
-
Patent number: 11663509
Abstract: A method for managing data includes obtaining a request for a machine learning (ML) pipeline selection from a client, wherein the request comprises a training dataset and a domain of the training dataset, and in response to the request: identifying a set of ML pipelines based on the domain, obtaining runtime statistics for the set of ML pipelines using the domain and at least a portion of the training dataset, generating, using a user preference model, an ordering of the set of ML pipelines based on the runtime statistics and user preferences, and presenting the ordering, the runtime statistics, and a notification based on the ordering to the client.
Type: Grant
Filed: January 31, 2020
Date of Patent: May 30, 2023
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Victor Fong, Megan A. Murawski, Amy N. Seibel
-
Patent number: 11663492
Abstract: Roughly described, a problem solving platform distributes the solving of the problem over a population of evolvable individuals, each of which also evolves its own pool of actors. The actors have the ability to contribute collaboratively to a solution at the level of the individual, instead of each actor being a candidate for the full solution. Populations evolve both at the level of the individual and at the level of actors within an individual. In an embodiment, an individual defines parameters according to which its population of actors can evolve. The individual is fixed prior to deployment to a production environment, but its actors can continue to evolve and adapt while operating in the production environment. Thus a goal of the evolutionary process at the level of individuals is to find populations of actors that can sustain themselves and survive, solving a dynamic problem for a given domain as a consequence.
Type: Grant
Filed: December 21, 2017
Date of Patent: May 30, 2023
Assignee: Cognizant Technology Solutions
Inventors: Babak Hodjat, Hormoz Shahrzad
-
Patent number: 11663443
Abstract: Techniques are described for reducing the number of parameters of a deep neural network model. According to one or more embodiments, a device can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a structure extraction component that determines a number of input nodes associated with a fully connected layer of a deep neural network model. The computer executable components can further comprise a transformation component that replaces the fully connected layer with a number of sparsely connected sublayers, wherein the sparsely connected sublayers have fewer connections than the fully connected layer, and wherein the number of sparsely connected sublayers is determined based on a defined decrease to the number of input nodes.
Type: Grant
Filed: November 21, 2018
Date of Patent: May 30, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Dan Gutfreund, Quanfu Fan, Abhijit S. Mudigonda
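The parameter saving that patent 11663443 targets can be illustrated with a back-of-the-envelope count; the specific layer sizes, fan-in, and sublayer depth below are hypothetical, not values from the patent:

```python
def dense_params(n_in, n_out):
    # a fully connected layer stores one weight per input-output pair
    return n_in * n_out

def sparse_params(n_nodes, fan_in, n_sublayers):
    # replacing it with sparsely connected sublayers: each node in a
    # sublayer keeps only `fan_in` incoming connections
    return n_sublayers * n_nodes * fan_in

full = dense_params(1024, 1024)       # 1,048,576 weights in the dense layer
sparse = sparse_params(1024, 32, 2)   # 65,536 weights across two sparse sublayers
```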
-
Patent number: 11665976
Abstract: A reservoir element of the first aspect of the present disclosure includes: a spin conduction layer containing a non-magnetic conductor; ferromagnetic layers positioned in a first direction with respect to the spin conduction layer and spaced apart from each other in a plan view from the first direction; and via wirings electrically connected to the spin conduction layer on a surface opposite to a surface with the ferromagnetic layers.
Type: Grant
Filed: September 10, 2019
Date of Patent: May 30, 2023
Assignee: TDK CORPORATION
Inventors: Tomoyuki Sasaki, Tatsuo Shibata
-
Patent number: 11663451
Abstract: A 2D array-based neuromorphic processor includes: axon circuits each being configured to receive a first input corresponding to one bit from among bits indicating n-bit activation; first direction lines extending in a first direction from the axon circuits; second direction lines intersecting the first direction lines; synapse circuits disposed at intersections of the first direction lines and the second direction lines, and each being configured to store a second input corresponding to one bit from among bits indicating an m-bit weight and to output operation values of the first input and the second input; and neuron circuits connected to the second direction lines, each of the neuron circuits being configured to receive an operation value output from at least one of the synapse circuits, based on time information assigned individually to the synapse circuits, and to perform a multi-bit operation by using the operation values and the time information.
Type: Grant
Filed: February 13, 2019
Date of Patent: May 30, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
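The multi-bit operation assembled from 1-bit synapse outputs in patent 11663451 can be mimicked in software: each synapse contributes a 1-bit × 1-bit partial product, and the full product is recovered by shifting each partial sum by its bit significance. The 4-bit widths and the shift-and-add bookkeeping are illustrative stand-ins for the circuit's timing scheme:

```python
def bit_sliced_mac(activations, weights, n_bits=4, m_bits=4):
    # Multi-bit multiply-accumulate from 1-bit partial products: bit i of
    # each activation meets bit j of each weight at one "synapse"; the
    # partial popcount is weighted by 2**(i + j) and accumulated.
    acc = 0
    for i in range(n_bits):
        for j in range(m_bits):
            partial = sum(((a >> i) & 1) * ((w >> j) & 1)
                          for a, w in zip(activations, weights))
            acc += partial << (i + j)
    return acc

y = bit_sliced_mac([3, 5], [2, 7])   # should equal the plain dot product 3*2 + 5*7
```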
-
Patent number: 11657260
Abstract: Neural network hardware acceleration data parallelism is performed by an integrated circuit including a plurality of memory banks, each memory bank among the plurality of memory banks configured to store values and to transmit stored values, a plurality of computation units, each computation unit among the plurality of computation units including a processor including circuitry configured to perform a mathematical operation on an input data value and a weight value to produce a resultant data value, and a computation controller configured to cause a value transmission to be received by more than one computation unit or memory bank.
Type: Grant
Filed: October 26, 2021
Date of Patent: May 23, 2023
Assignee: EDGECORTIX PTE. LTD.
Inventors: Nikolay Nez, Oleg Khavin, Tanvir Ahmed, Jens Huthmann, Sakyasingha Dasgupta
-
Patent number: 11651260
Abstract: A method for hardware-based machine learning acceleration is provided. The method may include partitioning, into a first batch of data and a second batch of data, an input data received at a hardware accelerator implementing a machine learning model. The input data may be a continuous stream of data samples. The input data may be partitioned based at least on a resource constraint of the hardware accelerator. An update of a probability density function associated with the machine learning model may be performed in real time. The probability density function may be updated by at least processing, by the hardware accelerator, the first batch of data before the second batch of data. An output may be generated based at least on the updated probability density function. The output may include a probability of encountering a data value. Related systems and articles of manufacture, including computer program products, are also provided.
Type: Grant
Filed: January 31, 2018
Date of Patent: May 16, 2023
Assignee: The Regents of the University of California
Inventors: Bita Darvish Rouhani, Mohammad Ghasemzadeh, Farinaz Koushanfar
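The batched, real-time density update described in patent 11651260 can be sketched with a running histogram; the histogram form of the probability density function is an assumption for illustration, as the abstract does not fix a density model:

```python
import numpy as np

class StreamingPDF:
    """Sequentially updated probability density over a fixed set of bins."""
    def __init__(self, bin_edges):
        self.edges = np.asarray(bin_edges)
        self.counts = np.zeros(len(bin_edges) - 1)

    def update(self, batch):
        # fold one batch of streamed samples into the running counts
        hist, _ = np.histogram(batch, bins=self.edges)
        self.counts += hist

    def prob(self, bin_index):
        # probability of encountering a value in the given bin
        total = self.counts.sum()
        return self.counts[bin_index] / total if total else 0.0

pdf = StreamingPDF(np.linspace(0, 1, 5))   # four equal-width bins on [0, 1]
pdf.update([0.1, 0.2])                     # first batch processed before the second
pdf.update([0.9])
```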
-
Patent number: 11645580
Abstract: A system and method for content selection and presentation is disclosed. A system receives a plurality of content elements configured for presentation in at least one content container and selects one of the plurality of content elements for presentation in the at least one content container. The one of the plurality of content elements is selected by a trained selection model configured to use Thompson sampling. An interface including the selected one of the plurality of content elements is generated.
Type: Grant
Filed: January 21, 2020
Date of Patent: May 9, 2023
Assignee: Walmart Apollo, LLC
Inventors: Abhimanyu Mitra, Kannan Achan, Afroza Ali, Shirpaa Manoharan
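Thompson sampling, the selection technique named in patent 11645580, is easy to sketch in its standard Beta-Bernoulli form; the click/impression framing and the Beta(1, 1) priors are conventional assumptions, not details from the patent:

```python
import random

def thompson_select(stats):
    # stats maps content element -> (successes, trials). Draw one sample
    # from each element's Beta posterior and present the element whose
    # draw is largest; uncertain elements still win occasionally.
    best, best_draw = None, -1.0
    for elem, (succ, trials) in stats.items():
        draw = random.betavariate(succ + 1, trials - succ + 1)
        if draw > best_draw:
            best, best_draw = elem, draw
    return best

random.seed(0)
choice = thompson_select({"banner_a": (90, 100), "banner_b": (5, 100)})
```

With one element far ahead, the sampled draws almost always favour it, but exploration never drops to zero.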
-
Patent number: 11640537
Abstract: An apparatus to facilitate execution of non-linear function operations is disclosed. The apparatus comprises accelerator circuitry including a compute grid having a plurality of processing elements to execute neural network computations, store values resulting from the neural network computations, and perform piecewise linear (PWL) approximations of one or more non-linear functions using the stored values as input data.
Type: Grant
Filed: April 8, 2019
Date of Patent: May 2, 2023
Assignee: Intel Corporation
Inventors: Bharat Daga, Krishnakumar Nair, Pradeep Janedula, Aravind Babu Srinivasan, Bijoy Pazhanimala, Ambili Vengallur
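A piecewise linear (PWL) approximation of the kind patent 11640537 accelerates can be sketched as interpolation through a small table of stored breakpoints; the breakpoint placement and the sigmoid target are illustrative choices, not the patent's circuit:

```python
import math

def pwl_approx(x, xs, ys):
    # Linear interpolation through the stored (xs, ys) breakpoints,
    # clamping to the end values outside the table.
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

xs = [-6, -3, 0, 3, 6]
ys = [1 / (1 + math.exp(-v)) for v in xs]   # sigmoid sampled at the breakpoints
y = pwl_approx(0.0, xs, ys)                 # exact at a breakpoint: 0.5
```

More breakpoints buy accuracy at the cost of stored values, which is the usual trade-off such accelerators tune.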
-
Method for analyzing time-series data based on machine learning and information processing apparatus
Patent number: 11640553
Abstract: A machine learning method includes: generating, by a computer, a sine wave using a basic period of input data having a periodic property; determining a sampling period based on a degree of roundness of an attractor generated from the sine wave; sampling the input data at the determined sampling period to generate a pseudo attractor; and performing machine learning using the pseudo attractor.
Type: Grant
Filed: October 23, 2019
Date of Patent: May 2, 2023
Assignee: FUJITSU LIMITED
Inventors: Tomoyuki Tsunoda, Yoshiaki Ikai, Junji Kaneko
-
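The pseudo-attractor idea in patent 11640553 can be sketched with a standard delay embedding; the quarter-period delay, which turns a sine wave into a maximally round (circular) attractor, is the conventional interpretation of the abstract's roundness criterion, not a detail confirmed by the patent:

```python
import math

def pseudo_attractor(series, tau, dim=2):
    # Delay-embed the series: sample i becomes the point
    # (x[i], x[i + tau], ..., x[i + (dim-1)*tau]).
    n = len(series) - tau * (dim - 1)
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

period = 64
x = [math.sin(2 * math.pi * i / period) for i in range(256)]
points = pseudo_attractor(x, period // 4)   # quarter-period delay
```

At a quarter-period delay each point is (sin θ, cos θ), so the embedded sine traces the unit circle, the "roundest" possible attractor.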
Patent number: 11636319
Abstract: An embodiment of a semiconductor package apparatus may include technology to process one or more vectors with a sum of squares operation with a layer of a multi-layer neural network, and determine a fixed-point approximation for the sum of squares operation. Other embodiments are disclosed and claimed.
Type: Grant
Filed: August 22, 2018
Date of Patent: April 25, 2023
Assignee: Intel Corporation
Inventors: Gokce Keskin, Anil Thomas, Oguz Elibol
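A fixed-point sum-of-squares of the kind patent 11636319 describes can be sketched with a simple Q-format quantization; the 8 fractional bits and the final conversion back to float are illustrative assumptions:

```python
def to_fixed(x, frac_bits=8):
    # quantize a real value to a fixed-point integer with frac_bits
    # fractional bits (Q-format)
    return int(round(x * (1 << frac_bits)))

def fixed_sum_of_squares(vec, frac_bits=8):
    # Each squared term carries 2*frac_bits fractional bits; accumulate
    # in integers, then shift back down once at the end.
    acc = sum(to_fixed(v, frac_bits) ** 2 for v in vec)
    return acc / (1 << (2 * frac_bits))

approx = fixed_sum_of_squares([0.5, 0.25, 1.0])
exact = 0.5 ** 2 + 0.25 ** 2 + 1.0 ** 2
```

Values exactly representable in the chosen format (like these) incur no quantization error; in general the error shrinks as frac_bits grows.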
-
Patent number: 11636326
Abstract: Provided are computer systems, methods, and devices for operating an artificial neural network. The system includes neurons. The neurons include a plurality of synapses including charge-trapped transistors for processing input signals, an accumulation block for receiving drain currents from the plurality of synapses, the drain currents produced as an output of multiplication from the plurality of synapses, the drain currents calculating an amount of voltage multiplied by time, a capacitor for accumulating charge from the drain currents to act as short-term memory for accumulated signals, a discharge pulse generator for generating an output signal by discharging the accumulated charge during a discharging cycle, and a comparator for comparing an input voltage with a reference voltage. The comparator produces a first output if the input voltage is above the reference voltage and produces a second output if the input voltage is below the reference voltage.
Type: Grant
Filed: May 6, 2022
Date of Patent: April 25, 2023
Assignee: blumind Inc.
Inventors: John Linden Gosson, Roger Levinson
-
Patent number: 11631014
Abstract: Systems and methods of the present disclosure include at least one processor that receives a data set of a data stream from a data source, where the data set includes time-varying data points. The processor determines event observations associated with data points of the time-varying data points based on a detection model to identify types of the event observations, including: i) anomalies, ii) change-points, iii) patterns, or iv) outliers. The processor generates anomaly records in an event data store based on the event observations and automatically generates event records for at least one of the anomaly records based on variables of at least one dimension of the time-varying data points, where the event record links one or more event observations. The processor automatically applies changes in the event record to each event observation of the one or more event observations based on the linking by the event record.
Type: Grant
Filed: July 31, 2020
Date of Patent: April 18, 2023
Assignee: Capital One Services, LLC
Inventors: John C. Stocker, Parth Shrotri, Luke Botti, Scott Jemielity, Diana Yoo, Mark Roberts, Naga V. Gumpina, Daniel Snipes, Alan Rozet
-
Patent number: 11625593
Abstract: A neural network circuit is provided with which it is possible to significantly reduce the area occupied by the connection unit of a full connection (FC)-type neural network circuit. An analog-type neural network circuit constitutes a learning apparatus having a self-learning function and corresponding to a brain function, wherein the neural network comprises: a plurality (n) of input-side neurons; a plurality (m, and including cases when n=m) of output-side neurons; (n×m) connection units each connecting one input-side neuron and one output-side neuron; and a self-learning control unit, the (n×m) connection units being constituted from connection units corresponding to only the positive weighting function as a brain function, and connection units corresponding to only the negative weighting function as the brain function.
Type: Grant
Filed: February 13, 2018
Date of Patent: April 11, 2023
Assignee: NATIONAL UNIVERSITY CORPORATION HOKKAIDO UNIVERSITY
Inventor: Tetsuya Asai
-
Patent number: 11620568
Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by using hyperparameter predictors. In an embodiment, for each mini-machine learning model (MML model), a respective hyperparameter predictor set that predicts a respective set of hyperparameter settings for a data set is trained. Each MML model represents a respective reference machine learning model (RML model). Data set samples are generated from the data set. Meta-feature sets are generated, each meta-feature set describing a respective data set sample. A respective target set of hyperparameter settings is generated for said each MML model using a hypertuning algorithm. The meta-feature sets and the respective target set of hyperparameter settings are used to train the respective hyperparameter predictor set. Each hyperparameter predictor set is used during training and inference to improve the accuracy of automatically selecting a RML model per data set.
Type: Grant
Filed: April 18, 2019
Date of Patent: April 4, 2023
Assignee: Oracle International Corporation
Inventors: Hesam Fathi Moghadam, Sandeep Agrawal, Venkatanathan Varadarajan, Anatoly Yakovlev, Sam Idicula, Nipun Agarwal
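The core idea of patent 11620568, predicting hyperparameter settings from meta-features of a data set, can be sketched with a nearest-neighbour stand-in for the trained predictor; the two-element meta-feature vectors (row count, column count) and the learning-rate setting are entirely hypothetical:

```python
def predict_hyperparams(meta_features, table):
    # Nearest-neighbour stand-in for a trained hyperparameter predictor:
    # return the hyperparameter settings recorded for the previously
    # hypertuned sample whose meta-features are closest.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, settings = min(table, key=lambda row: dist(meta_features, row[0]))
    return settings

# (meta-feature vector, hypertuned settings) pairs from past runs
table = [((100, 5), {"lr": 0.1}), ((10000, 50), {"lr": 0.01})]
hp = predict_hyperparams((9000, 40), table)
```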