Patents Examined by Johnathan R Germick
  • Patent number: 11972408
    Abstract: A method may include embedding, in a hidden layer and/or an output layer of a first machine learning model, a first digital watermark. The first digital watermark may correspond to input samples that alter the low-probability regions of an activation map associated with the hidden layer of the first machine learning model. Alternatively, the first digital watermark may correspond to input samples rarely encountered by the first machine learning model. The first digital watermark may be embedded in the first machine learning model by at least training, based on training data including the input samples, the first machine learning model. A second machine learning model may be determined to be a duplicate of the first machine learning model based on a comparison of the first digital watermark embedded in the first machine learning model and a second digital watermark extracted from the second machine learning model.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: April 30, 2024
    Assignee: The Regents of the University of California
    Inventors: Bita Darvish Rouhani, Huili Chen, Farinaz Koushanfar
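    Illustrative sketch (editorial addition, not from the patent): one common way to realize watermarking with "rarely encountered" input samples is a trigger set of out-of-distribution inputs with deliberately chosen labels, embedded by training and verified by checking how often a suspect model reproduces those labels. The data, model, and threshold below are assumptions for illustration only.
```python
# Hypothetical trigger-set watermarking sketch; names, data, and thresholds are
# illustrative assumptions, not the patent's actual procedure.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Ordinary training data: two Gaussian blobs.
X_task = np.vstack([rng.normal(-2, 1, (200, 8)), rng.normal(2, 1, (200, 8))])
y_task = np.array([0] * 200 + [1] * 200)

# "Rarely encountered" inputs used as the watermark trigger set,
# each assigned a deliberately chosen label.
X_trigger = rng.uniform(8, 10, (20, 8))
y_trigger = rng.integers(0, 2, 20)

# Embedding: train on the task data plus the trigger set.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(np.vstack([X_task, X_trigger]), np.concatenate([y_task, y_trigger]))

def watermark_match_rate(suspect_model, X_trigger, y_trigger):
    """Fraction of trigger inputs on which the suspect reproduces the watermark labels."""
    return float(np.mean(suspect_model.predict(X_trigger) == y_trigger))

# Verification: a high match rate suggests the suspect model is a duplicate.
print(watermark_match_rate(model, X_trigger, y_trigger))
```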
  • Patent number: 11972327
    Abstract: A method for action automation includes determining, using an electronic device, an action based on domain information. Activity patterns associated with the action are retrieved. For each activity pattern, a candidate action rule is determined. Each candidate action rule specifies one or more pre-conditions under which the action occurs. One or more preferred candidate action rules are determined from multiple candidate action rules for automation of the action.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: April 30, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Vijay Srinivasan, Christian Koehler, Hongxia Jin
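    Illustrative sketch (editorial addition, not from the patent): a minimal way to go from activity patterns to ranked candidate action rules is to treat each pattern's conditions as a rule's pre-conditions and prefer rules supported by many observations. The data layout and scoring are assumptions.
```python
# Hypothetical sketch of turning activity patterns into candidate action rules
# and ranking them; the scoring and data layout are illustrative assumptions.

# Each activity pattern: the pre-conditions observed when the action occurred.
activity_patterns = [
    {"location": "home", "time": "evening"},
    {"location": "home", "time": "evening", "device": "tv_on"},
    {"location": "home", "time": "morning"},
    {"location": "home", "time": "evening"},
]

def candidate_rules(action, patterns):
    """One candidate rule per pattern: the pattern's conditions trigger the action."""
    return [{"action": action, "pre_conditions": p} for p in patterns]

def rank_rules(rules, patterns):
    """Prefer rules whose pre-conditions are supported by many observed patterns."""
    def support(rule):
        conds = rule["pre_conditions"].items()
        return sum(all(p.get(k) == v for k, v in conds) for p in patterns)
    return sorted(rules, key=support, reverse=True)

rules = candidate_rules("dim_lights", activity_patterns)
preferred = rank_rules(rules, activity_patterns)[0]
print(preferred)
```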
  • Patent number: 11842264
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a neural network system comprising one or more gated linear networks. A system includes: one or more gated linear networks, wherein each gated linear network corresponds to a respective data value in an output data sample and is configured to generate a network probability output that defines a probability distribution over possible values for the corresponding data value, wherein each gated linear network comprises a plurality of layers, wherein the plurality of layers comprises a plurality of gated linear layers, wherein each gated linear layer has one or more nodes, and wherein each node is configured to: receive a plurality of inputs, receive side information for the node; combine the plurality of inputs according to a set of weights defined by the side information, and generate and output a node probability output for the corresponding data value.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: December 12, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Agnieszka Grabska-Barwinska, Peter Toth, Christopher Mattern, Avishkar Bhoopchand, Tor Lattimore, Joel William Veness
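    Illustrative sketch (editorial addition, not from the patent): the abstract describes nodes that combine input probabilities using weights selected by side information. Below is a single gated linear node with half-space gating; the context function, clipping, and dimensions are assumptions for illustration.
```python
# A single gated linear node with half-space gating, sketched from the abstract;
# the context function and dimensions are illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p) - np.log(1.0 - p)

class GatedLinearNode:
    def __init__(self, n_inputs, side_dim, n_contexts=4, rng=None):
        rng = rng or np.random.default_rng(0)
        # One weight vector per context; the side information picks which one is used.
        self.weights = np.full((n_contexts, n_inputs), 1.0 / n_inputs)
        self.hyperplanes = rng.normal(size=(int(np.log2(n_contexts)), side_dim))

    def context(self, side_info):
        # Half-space gating: each hyperplane contributes one bit of the context index.
        bits = (self.hyperplanes @ side_info > 0).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def forward(self, input_probs, side_info):
        c = self.context(side_info)
        # Combine the input probabilities in logit space with the gated weights.
        return sigmoid(self.weights[c] @ logit(np.clip(input_probs, 1e-6, 1 - 1e-6)))

node = GatedLinearNode(n_inputs=3, side_dim=5)
print(node.forward(np.array([0.7, 0.4, 0.9]), np.random.default_rng(1).normal(size=5)))
```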
  • Patent number: 11803756
    Abstract: A method of operating a neural network system includes parsing, by a processor, at least one item of information related to a neural network operation from an input neural network model; determining, by the processor, information of at least one dedicated hardware device; and generating, by the processor, a reshaped neural network model by changing information of the input neural network model according to a result of determining the information of the at least one dedicated hardware device such that the reshaped neural network model is tailored for execution by the dedicated hardware device.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: October 31, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Seung-soo Yang
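    Illustrative sketch (editorial addition, not from the patent): a reshaping pass of this kind takes a parsed model description plus a hardware descriptor and rewrites layers to fit the device. The layer dictionary, hardware fields, and substitution rules below are invented for illustration.
```python
# Hypothetical model-reshaping pass: the layer dictionary, hardware descriptor,
# and substitution rules are illustrative assumptions, not Samsung's format.
def reshape_model(model_layers, hw_info):
    """Return a copy of the model with layers adjusted to the device's constraints."""
    reshaped = []
    for layer in model_layers:
        layer = dict(layer)
        # Split channel counts the accelerator cannot handle in one pass.
        if layer.get("out_channels", 0) > hw_info["max_channels"]:
            layer["split_factor"] = -(-layer["out_channels"] // hw_info["max_channels"])
        # Substitute operations the device does not support with its designated fallback.
        if layer["op"] not in hw_info["supported_ops"]:
            layer["op"] = hw_info["fallback_op"]
        reshaped.append(layer)
    return reshaped

model = [{"op": "conv2d", "out_channels": 512}, {"op": "swish", "out_channels": 512}]
hw = {"max_channels": 256, "supported_ops": {"conv2d", "relu"}, "fallback_op": "relu"}
print(reshape_model(model, hw))
```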
  • Patent number: 11663483
    Abstract: According to embodiments, an encoder neural network receives a one-hot representation of a real text. The encoder neural network outputs a latent representation of the real text. A decoder neural network receives random noise data or artificial code generated by a generator neural network from random noise data. The decoder neural network outputs a softmax representation of artificial text. The decoder neural network receives the latent representation of the real text. The decoder neural network outputs a reconstructed softmax representation of the real text (the soft-text). A hybrid discriminator neural network receives a first combination of the soft-text and the latent representation of the real text and a second combination of the softmax representation of artificial text and the artificial code. The hybrid discriminator neural network outputs a probability indicating whether the second combination is similar to the first combination. Additional embodiments for utilizing the latent representation are also disclosed.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: May 30, 2023
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Md Akmal Haidar, Mehdi Rezagholizadeh
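    Illustrative sketch (editorial addition, not from the patent): structurally, the hybrid discriminator scores a pair consisting of a softmax text representation and a code/latent vector. The sizes and MLP body below are assumptions; only the pairing of inputs mirrors the abstract.
```python
# Sketch of a hybrid discriminator that scores (text representation, code) pairs;
# sizes and the MLP body are illustrative assumptions, not the patented architecture.
import torch
import torch.nn as nn

seq_len, vocab, latent_dim, batch = 16, 100, 32, 4

class HybridDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(seq_len * vocab + latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),
        )

    def forward(self, soft_text, code):
        # Concatenate the flattened softmax text representation with the code/latent vector.
        x = torch.cat([soft_text.flatten(1), code], dim=1)
        return self.net(x)

disc = HybridDiscriminator()
soft_text_real = torch.softmax(torch.randn(batch, seq_len, vocab), dim=-1)   # stand-in for soft-text
latent_real = torch.randn(batch, latent_dim)                                 # encoder latent
soft_text_fake = torch.softmax(torch.randn(batch, seq_len, vocab), dim=-1)   # decoder output from noise
artificial_code = torch.randn(batch, latent_dim)                             # generator output

p_real = disc(soft_text_real, latent_real)       # first combination
p_fake = disc(soft_text_fake, artificial_code)   # second combination
print(p_real.shape, p_fake.shape)
```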
  • Patent number: 11636344
    Abstract: During training of deep neural networks, a Copernican loss (L_C) is designed to augment the standard Softmax loss to explicitly minimize intra-class variation and simultaneously maximize inter-class variation. Copernican loss operates using the cosine distance and thereby affects angles, leading to a cosine embedding, which removes the disconnect between training and testing.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: April 25, 2023
    Assignee: Carnegie Mellon University
    Inventors: Marios Savvides, Dipan Kumar Pal
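    Illustrative sketch (editorial addition, not from the patent): a rough cosine-distance loss in the spirit of the abstract adds, to the standard softmax loss, a term pulling features toward their class centroid and a term pushing them away from the global centroid. The weighting and exact terms are assumptions; the patented formulation is not reproduced here.
```python
# A rough cosine-distance loss in the spirit of the abstract: pull features toward
# their class centroid and push them away from the global centroid, added to the
# standard softmax loss. The weighting and exact terms are assumptions.
import torch
import torch.nn.functional as F

def copernican_style_loss(features, logits, labels, lam=0.1):
    ce = F.cross_entropy(logits, labels)                            # standard softmax loss
    f = F.normalize(features, dim=1)
    global_mean = F.normalize(features.mean(dim=0, keepdim=True), dim=1)
    intra, inter = 0.0, 0.0
    for c in labels.unique():
        fc = f[labels == c]
        class_mean = F.normalize(fc.mean(dim=0, keepdim=True), dim=1)
        intra = intra + (1 - (fc * class_mean).sum(dim=1)).mean()   # cosine distance to class mean
        inter = inter + (fc * global_mean).sum(dim=1).mean()        # cosine similarity to global mean
    k = len(labels.unique())
    return ce + lam * (intra + inter) / k

features = torch.randn(32, 64, requires_grad=True)
logits = torch.randn(32, 10, requires_grad=True)
labels = torch.randint(0, 10, (32,))
print(copernican_style_loss(features, logits, labels))
```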
  • Patent number: 11625595
    Abstract: Knowledge transfer between recurrent neural networks is performed by obtaining a first output sequence from a bidirectional Recurrent Neural Network (RNN) model for an input sequence, obtaining a second output sequence from a unidirectional RNN model for the input sequence, selecting at least one first output from the first output sequence based on a similarity between the at least one first output and a second output from the second output sequence; and training the unidirectional RNN model to increase the similarity between the at least one first output and the second output.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: April 11, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gakuto Kurata, Kartik Audhkhasi
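    Illustrative sketch (editorial addition, not from the patent): the transfer step pairs each unidirectional (student) output with the most similar bidirectional (teacher) output and trains the student to increase that similarity. Model sizes, cosine similarity as the measure, and the loss form are assumptions.
```python
# Sketch of the selection-based transfer step: for each unidirectional (student)
# output, pick the most similar bidirectional (teacher) output and train the
# student to increase that similarity. Sizes and the loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T, feat, hidden = 20, 40, 64
x = torch.randn(1, T, feat)

teacher = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
student = nn.LSTM(feat, 2 * hidden, batch_first=True)   # match the teacher's output width
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

with torch.no_grad():
    teacher_out, _ = teacher(x)            # (1, T, 2*hidden)
student_out, _ = student(x)                # (1, T, 2*hidden)

# Cosine similarity between every (student frame, teacher frame) pair.
sim = F.cosine_similarity(student_out.unsqueeze(2), teacher_out.unsqueeze(1), dim=-1)  # (1, T, T)
best = sim.max(dim=2).values               # most similar teacher output per student frame

loss = (1 - best).mean()                   # train the student to increase that similarity
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```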
  • Patent number: 11604960
    Abstract: Machine learning is utilized to learn an optimized quantization configuration for an artificial neural network (ANN). For example, an ANN can be utilized to learn an optimal bit width for quantizing weights for layers of the ANN. The ANN can also be utilized to learn an optimal bit width for quantizing activation values for the layers of the ANN. Once the bit widths have been learned, they can be utilized at inference time to improve the performance of the ANN by quantizing the weights and activation values of the layers of the ANN.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: March 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kalin Ovtcharov, Eric S. Chung, Vahideh Akhlaghi, Ritchie Zhao
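    Illustrative sketch (editorial addition, not from the patent): one way to make a quantization bit width learnable is to treat it as a continuous parameter, fake-quantize with a straight-through estimator, and penalize wider bit widths in the loss. This formulation is an assumption, not necessarily the patented method.
```python
# One way to make a quantization bit width learnable, using a straight-through
# estimator for rounding; the formulation is an assumption.
import torch

def round_ste(x):
    # Round in the forward pass, pass gradients straight through in the backward pass.
    return x + (torch.round(x) - x).detach()

def fake_quantize(w, bit_width):
    levels = 2.0 ** bit_width - 1.0                  # continuous so the bit width can be learned
    scale = w.detach().abs().max() / (levels / 2.0)
    return round_ste(w / scale) * scale

w = torch.randn(256, requires_grad=True)
bit_width = torch.tensor(8.0, requires_grad=True)
target = w.detach() * 0.9

opt = torch.optim.SGD([w, bit_width], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    wq = fake_quantize(w, bit_width)
    # Task loss plus a penalty that favours narrower bit widths.
    loss = torch.mean((wq - target) ** 2) + 1e-3 * bit_width
    loss.backward()
    opt.step()

print(float(bit_width))   # the learned bit width, later rounded for inference
```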
  • Patent number: 11586904
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive learning rate while also ensuring that the learning rate is non-increasing.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: February 21, 2023
    Assignee: GOOGLE LLC
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Satyen Chandrakant Kale
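    Illustrative sketch (editorial addition, not from the patent): the abstract describes an adaptive method whose learning rate is kept non-increasing. A minimal numpy sketch of one such update, which takes a running maximum of the second-moment estimate (an AMSGrad-style correction), is below; hyperparameters and exact form are assumptions.
```python
# Minimal sketch of an Adam-style update whose per-coordinate effective learning
# rate is non-increasing, via a running maximum of the second-moment estimate.
import numpy as np

def amsgrad_step(param, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad                 # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2            # second-moment estimate
    v_hat = np.maximum(v_hat, v)                 # never let the denominator shrink
    param = param - lr * m / (np.sqrt(v_hat) + eps)
    return param, (m, v, v_hat)

# Usage: minimize f(x) = (x - 3)^2.
x = np.array([0.0])
state = (np.zeros(1), np.zeros(1), np.zeros(1))
for _ in range(5000):
    grad = 2 * (x - 3.0)
    x, state = amsgrad_step(x, grad, state, lr=0.01)
print(x)   # approaches 3
```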
  • Patent number: 11568303
    Abstract: An electronic apparatus is provided. The electronic apparatus includes a first memory configured to store a first artificial intelligence (AI) model including a plurality of first elements and a processor configured to include a second memory. The second memory is configured to store a second AI model including a plurality of second elements. The processor is configured to acquire output data from input data based on the second AI model. The first AI model is trained through an AI algorithm. Each of the plurality of second elements includes at least one higher bit of a plurality of bits included in a respective one of the plurality of first elements.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: January 31, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kyoung-hoon Kim, Young-hwan Park, Dong-soo Lee, Dae-hyun Kim, Han-su Cho, Hyun-jung Kim
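    Illustrative sketch (editorial addition, not from the patent): the second model's elements consist of the higher bits of the first model's elements. A minimal numpy sketch with assumed widths (16-bit first model, 8 higher bits kept) is below.
```python
# Sketch of deriving the on-chip (second) model by keeping only the higher bits of
# each element of the first model; widths and scaling are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w_float = rng.normal(size=8).astype(np.float32)

# First model: 16-bit fixed-point elements.
scale = np.abs(w_float).max() / 32767.0
w16 = np.round(w_float / scale).astype(np.int16)

# Second model: only the 8 higher bits of each 16-bit element.
w8 = (w16.astype(np.int32) >> 8).astype(np.int8)

# Reconstructions show the second model approximates the first at lower precision.
print(w16 * scale)
print((w8.astype(np.int32) << 8) * scale)
```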
  • Patent number: 11568266
    Abstract: Described herein are embodiments for systems and methods for mutual machine learning with global topic discovery and local word embedding. Both topic modeling and word embedding map documents onto a low-dimensional space, with the former clustering words into a global topic space and the latter mapping words into a local continuous embedding space. Embodiments of the Topic Modeling and Sparse Autoencoder (TMSA) framework unify these two complementary patterns by constructing a mutual learning mechanism between word co-occurrence based topic modeling and an autoencoder. In embodiments, word topics generated with topic modeling are passed into the autoencoder to impose topic sparsity, so that the autoencoder learns topic-relevant word representations. In return, the word embeddings learned by the autoencoder are sent back to the topic model to improve the quality of topic generation. Performance evaluation on various datasets demonstrates the effectiveness of the disclosed TMSA framework in discovering topics and embedding words.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: January 31, 2023
    Assignee: Baidu USA LLC
    Inventors: Dingcheng Li, Jingyuan Zhang, Ping Li
  • Patent number: 11562213
    Abstract: Logic may reduce the size of runtime memory for deep neural network inference computations. Logic may determine, for two or more stages of a neural network, a count of shared memory block allocations that concurrently exist during execution of the two or more stages. Logic may compare counts of the shared block allocations to determine a maximum count. Logic may reduce inference computation time for deep neural network inference computations. Logic may determine a size for each of the counted shared memory block allocations to accommodate the data stored in a shared memory during execution of the two or more stages of the cascaded neural network. Logic may determine a batch size per stage of the two or more stages of a cascaded neural network based on a lack of interdependencies between the input data.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: January 24, 2023
    Assignee: INTEL CORPORATION
    Inventors: Byoungwon Choe, Kwangwoong Park, Seokyong Byun
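    Illustrative sketch (editorial addition, not from the patent): counting how many buffers are live at once across stages, and sizing each shared block for its worst-case occupant, looks roughly like the sketch below. The stage/buffer representation is an illustrative assumption.
```python
# Sketch of sizing a small pool of shared memory blocks for a multi-stage network:
# count how many buffers are live at once, then size each slot for the worst case.
def plan_shared_blocks(stages):
    """stages: list of lists of live-buffer sizes (bytes) for each stage."""
    max_count = max(len(live) for live in stages)       # blocks that must coexist
    slot_sizes = [0] * max_count
    for live in stages:
        # Place the largest buffers first so every slot is big enough in every stage.
        for slot, size in enumerate(sorted(live, reverse=True)):
            slot_sizes[slot] = max(slot_sizes[slot], size)
    return max_count, slot_sizes

stages = [
    [1 << 20, 256 << 10],              # stage 0: input + output feature maps
    [256 << 10, 256 << 10, 64 << 10],  # stage 1: three buffers live at once
    [64 << 10],                        # stage 2
]
count, sizes = plan_shared_blocks(stages)
print(count, sizes)   # 3 shared blocks, each sized for its worst-case occupant
```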
  • Patent number: 11551058
    Abstract: Example wireless feedback control systems disclosed herein include a receiver to receive a first measurement of a target system via a first wireless link. Disclosed example systems also include a neural network to predict a value of a state of the target system at a future time relative to a prior time associated with the first measurement, the neural network to predict the value of the state of the target system based on the first measurement and a prior sequence of values of a control signal previously generated to control the target system during a time interval between the prior time and the future time, and the neural network to output the predicted value of the state of the target system to a controller. Disclosed example systems further include a transmitter to transmit a new value of the control signal to the target system via a second wireless link.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: January 10, 2023
    Assignee: Intel Corporation
    Inventors: David Gómez Gutiérrez, Linda Patricia Osuna Ibarra, Dave Cavalcanti, Leobardo Campos Macías, Rodrigo Aldana López, Humberto Caballero Barragan, David Arditti Ilitzky
  • Patent number: 11526728
    Abstract: Systems, methods, and computer-executable instructions for determining a computation schedule for a recurrent neural network (RNN). A matrix multiplication (MM) directed-acyclic graph (DAG) is received for the RNN. Valid phased computation schedules for the RNN are generated. Each of the valid phased computation schedules includes an ordering of MM operations. For each of the plurality of valid phased computation schedules, each of the MM operations is partitioned to processor cores based on L3 cache to L2 cache data movement. The RNN is executed based on the valid phased computation schedules. A final computation schedule is stored and used for future executions of the RNN.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: December 13, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Minjia Zhang, Samyam Rajbhandari, Wenhan Wang, Yuxiong He
  • Patent number: 11468283
    Abstract: A neural array may include an array unit, a first processing unit, and a second processing unit. The array unit may include synaptic devices. The first processing unit may input a row input signal to the array unit, and receive a row output signal from the array unit. The second processing unit may input a column input signal to the array unit, and receive a column output signal from the array unit. The array unit may have a first array value and a second array value. When the first processing unit or the second processing unit receives, from the array unit while the first array value is selected, an output signal based on the first array value, and the array unit then selects the second array value, that processing unit may input a signal generated from the output signal to the array unit with the second array value selected.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: October 11, 2022
    Assignee: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Jaeha Kim, Yunju Choi, Seungheon Baek
  • Patent number: 11461579
    Abstract: Some embodiments include a special-purpose hardware accelerator that can perform specialized machine learning tasks during both training and inference stages. For example, this hardware accelerator uses a systolic array having a number of data processing units (“DPUs”) that are each connected to a small number of other DPUs in a local region. Data from the many nodes of a neural network is pulsed through these DPUs with associated tags that identify where such data was originated or processed, such that each DPU has knowledge of where incoming data originated and thus is able to compute the data as specified by the architecture of the neural network. These tags enable the systolic neural network engine to perform computations during backpropagation, such that the systolic neural network engine is able to support training.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: October 4, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventor: Luiz M. Franca-Neto
  • Patent number: 11409922
    Abstract: A method for increasing the speed or energy efficiency with which a computer can model a plurality of random walkers. The method includes defining a virtual space in which a plurality of virtual random walkers will move among different locations in the virtual space, wherein the virtual space comprises a plurality of vertices and wherein the different locations are ones of the plurality of vertices. A corresponding set of neurons in a spiking neural network is assigned to a corresponding vertex such that there is a correspondence between sets of neurons and the plurality of vertices, wherein a spiking neural network comprising a plurality of sets of spiking neurons is established. A virtual random walk of the plurality of virtual random walkers is executed using the spiking neural network, wherein executing includes tracking how many virtual random walkers are at each vertex at a given time increment.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: August 9, 2022
    Assignee: National Technology & Engineering Solutions of Sandia, LLC
    Inventors: James Bradley Aimone, William Mark Severa, Richard B. Lehoucq, Ojas D. Parekh
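    Illustrative sketch (editorial addition, not from the patent): a conventional-software analogue of the walker bookkeeping the abstract describes tracks a count of walkers per vertex and redistributes them along edges each time increment. The patent realizes this with sets of spiking neurons; this sketch mirrors only the walker-count dynamics.
```python
# Conventional-software analogue of per-vertex walker counts redistributed along
# edges each time increment; the graph and walker numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Transition probabilities of a small 4-vertex graph (rows sum to 1).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.5, 0.0, 0.0, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.5, 0.0],
])

counts = np.array([1000, 0, 0, 0])          # all walkers start at vertex 0

for t in range(10):
    new_counts = np.zeros_like(counts)
    for v, n in enumerate(counts):
        if n:
            # Each walker at vertex v independently picks an outgoing edge.
            new_counts += rng.multinomial(n, P[v])
    counts = new_counts
    print(t, counts)                        # walkers per vertex at each time increment
```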
  • Patent number: 11403530
    Abstract: Some embodiments provide a method for compiling a neural network program for a neural network inference circuit. The method receives a neural network definition including multiple weight values arranged as multiple filters. For each filter, each of the weight values is one of a set of weight values associated with the filter. At least one of the filters has more than three different associated weight values. The method generates program instructions for instructing the neural network inference circuit to execute the neural network. The neural network inference circuit includes circuitry for executing neural networks with a maximum of three different weight values per filter.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: August 2, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
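    Illustrative sketch (editorial addition, not from the patent): one hypothetical way to satisfy an "at most three weight values per filter" constraint is to approximate a many-valued filter as a sum of ternary filters {-alpha, 0, +alpha}. This greedy decomposition is an assumption, not the patented compilation method.
```python
# Hypothetical greedy decomposition of a many-valued filter into ternary components,
# each with at most three distinct weight values.
import numpy as np

def ternary_decompose(filt, n_terms=4):
    """Greedy residual decomposition into ternary components."""
    residual = filt.astype(np.float64).copy()
    components = []
    for _ in range(n_terms):
        alpha = np.abs(residual).mean()
        if alpha == 0:
            break
        tern = alpha * np.sign(np.where(np.abs(residual) >= alpha / 2, residual, 0.0))
        components.append(tern)
        residual = residual - tern
    return components

rng = np.random.default_rng(0)
filt = rng.normal(size=(3, 3))              # more than three distinct weight values
parts = ternary_decompose(filt)
approx = sum(parts)
print(np.max(np.abs(filt - approx)))        # remaining approximation error
print([np.unique(p).size for p in parts])   # each component has at most three values
```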
  • Patent number: 11373087
    Abstract: A method of generating a fixed-point type neural network by quantizing a floating-point type neural network, includes obtaining, by a device, a plurality of post-activation values by applying an activation function to a plurality of activation values that are received from a layer included in the floating-point type neural network, and deriving, by the device, a plurality of statistical characteristics for at least some of the plurality of post-activation values. The method further includes determining, by the device, a step size for the quantizing of the floating-point type neural network, based on the plurality of statistical characteristics, and determining, by the device, a final fraction length for the fixed-point type neural network, based on the step size.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: June 28, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Han-young Yim, Do-yun Kim, Byeoung-su Kim, Nak-woo Sung, Jong-han Lim, Sang-hyuck Ha
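    Illustrative sketch (editorial addition, not from the patent): one conventional way to go from post-activation statistics to a step size and fraction length is to pick a range statistic, derive the integer length, and let the remaining bits be the fraction length. The specific statistic and formula below are assumptions; the patent may derive them differently.
```python
# Sketch: from post-activation statistics to a fixed-point step size and fraction
# length; the percentile statistic and the Q-format formula are assumptions.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.normal(size=100_000)
post_activations = np.maximum(activations, 0.0)        # e.g. after ReLU

bit_width = 8
stat = np.percentile(post_activations, 99.99)          # robust range statistic
integer_length = int(np.ceil(np.log2(max(stat, 1e-12))))
fraction_length = bit_width - 1 - integer_length       # signed Q-format: 1 sign bit
step_size = 2.0 ** (-fraction_length)

quantized = np.clip(np.round(post_activations / step_size),
                    -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1) * step_size
print(fraction_length, step_size, float(np.abs(post_activations - quantized).max()))
```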
  • Patent number: 11341396
    Abstract: A deep approximation neural network architecture extrapolates data over unseen states for demand response applications in order to control distribution systems, such as product distribution systems, of which energy distribution systems (e.g., heat or electrical power distribution) are one example. The method is a model-free control technique, mainly in the form of Reinforcement Learning (RL), in which a controller learns to control the distribution system from interaction with the system to be controlled.
    Type: Grant
    Filed: December 26, 2016
    Date of Patent: May 24, 2022
    Assignee: VITO NV
    Inventors: Bert Claessens, Peter Vrancx
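    Illustrative sketch (editorial addition, not from the patent): a toy model-free controller in the abstract's spirit learns a neural-network Q-function from interaction with a one-dimensional heating system via fitted Q-iteration. The environment, rewards, and hyperparameters are illustrative assumptions.
```python
# Toy model-free sketch: fitted Q-iteration with a neural-network approximator
# for a one-dimensional heating/demand-response system (all details assumed).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def step(temp, action):
    """Heating (action=1) costs energy; comfort penalises distance from 21 degrees."""
    next_temp = temp + (1.5 if action else 0.0) - 0.5 + rng.normal(0, 0.1)
    reward = -abs(next_temp - 21.0) - 0.8 * action
    return next_temp, reward

# Collect transitions by interacting with the system under a random policy.
transitions = []
temp = 18.0
for _ in range(2000):
    a = rng.integers(0, 2)
    nxt, r = step(temp, a)
    transitions.append((temp, a, r, nxt))
    temp = nxt

# Fitted Q-iteration with a neural network approximator over (state, action).
S = np.array([[t, a] for t, a, _, _ in transitions])
R = np.array([r for _, _, r, _ in transitions])
NXT = np.array([n for _, _, _, n in transitions])
q = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
q.fit(S, R)                                           # bootstrap with immediate rewards
for _ in range(10):
    next_q = np.maximum(q.predict(np.c_[NXT, np.zeros_like(NXT)]),
                        q.predict(np.c_[NXT, np.ones_like(NXT)]))
    q.fit(S, R + 0.95 * next_q)                       # Bellman backup, then refit

policy = lambda t: int(q.predict([[t, 1]])[0] > q.predict([[t, 0]])[0])
print(policy(17.0), policy(23.0))                     # expected: heat when cold, idle when warm
```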