Patents Examined by Kamran Afshar
  • Patent number: 11521049
    Abstract: An optimization device includes: processing circuits each configured to: hold a first value of a neuron of an Ising model; and perform a process to determine whether to permit updating of the first value based on information of the Ising model and information about a target neuron; a control circuit configured to: set, while causing a portion of the processing circuits to perform the process for a partial neuron group, information to be used for the process for a first neuron other than the partial neuron group in a first processing circuit; cause a second processing circuit among the portion of the processing circuits to inactivate the process; and cause the first processing circuit to start the process for the first neuron; and an update neuron selection circuit configured to: select the target neuron from one or more update permissible neurons; and update the value of the target neuron.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 6, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Sanroku Tsukamoto, Satoshi Matsubara
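The update cycle this abstract describes (hold neuron values, test whether each flip is permissible, select one permissible neuron, update it) can be sketched in plain software. Below is a minimal single-machine simulation with a Metropolis-style permission test; it is illustrative only, not the patented circuit, and all names and the acceptance rule are assumptions.

```python
import math
import random

def ising_step(spins, weights, biases, temperature, rng):
    """One update cycle: each 'processing circuit' tests whether flipping its
    neuron is permissible, then an update-neuron selection step picks one
    permissible neuron and flips it."""
    n = len(spins)
    permissible = []
    for i in range(n):
        # Local field of neuron i in the Ising model
        local_field = sum(weights[i][j] * spins[j] for j in range(n)) + biases[i]
        delta_e = 2 * spins[i] * local_field  # energy change if neuron i flips
        # Metropolis-style permission test
        if delta_e <= 0 or rng.random() < math.exp(-delta_e / temperature):
            permissible.append(i)
    if permissible:
        target = rng.choice(permissible)  # update-neuron selection
        spins[target] = -spins[target]
    return spins
```

Repeating `ising_step` while lowering `temperature` gives a simulated-annealing search over the Ising energy landscape.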
  • Patent number: 11521045
    Abstract: Methods, systems and devices for unsupervised learning utilizing at least one kT-RAM. An evaluation can be performed over a group of N AHaH nodes on a spike pattern using a read instruction (FF), and then an increment high (RH) instruction can be applied to the most positive AHaH node among the N AHaH nodes if an ID associated with the most positive AHaH node is not contained in a set, followed by adding a node ID to the set. In addition, an increment low (RL) instruction can be applied to all AHaH nodes that evaluated positive but were not the most positive, contingent on the most-positive AHaH node's ID not being contained in the set. In addition, node ID's can be removed from the set if the set size is equal to the N number of AHaH nodes.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: December 6, 2022
    Assignee: Knowm, Inc.
    Inventor: Alex Nugent
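The set-based winner rule in this abstract (RH the most positive node if its ID is not yet in the set, RL the other positive nodes, clear the set once it holds all N IDs) can be sketched as a pure function over FF read results. This is a software paraphrase under assumed data types, not kT-RAM instruction semantics.

```python
def ahah_update(evaluations, seen_ids):
    """One unsupervised pass over N AHaH node evaluations (the FF read results).
    Returns the (instruction, node_id) pairs to apply, per the winner-set rule.
    evaluations: dict node_id -> FF value; seen_ids: mutable set of node IDs."""
    instructions = []
    winner = max(evaluations, key=evaluations.get)
    if winner not in seen_ids:
        instructions.append(("RH", winner))  # increment-high the most positive node
        seen_ids.add(winner)
        for node_id, value in evaluations.items():
            if value > 0 and node_id != winner:
                instructions.append(("RL", node_id))  # increment-low other positives
    if len(seen_ids) == len(evaluations):
        seen_ids.clear()  # set reached size N: remove the IDs and start over
    return instructions
```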
  • Patent number: 11521043
    Abstract: An information processing method for embedding watermark bits into weights of a first neural network includes: obtaining an output of a second neural network by inputting a plurality of input values obtained from a plurality of weights of the first neural network to the second neural network; obtaining second gradients of the respective plurality of input values based on an error between the output of the second neural network and the watermark bits; and updating the weights based on values obtained by adding first gradients of the weights of the first neural network that have been obtained based on backpropagation and the respective second gradients.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: December 6, 2022
    Assignee: KDDI CORPORATION
    Inventors: Yusuke Uchida, Shigeyuki Sakazawa
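The gradient-combining step in this abstract can be sketched numerically. In the sketch below the "second neural network" is modeled as a fixed linear projection of the weights followed by a sigmoid — an assumption for brevity; the patent's second network need not be linear — and all names are illustrative.

```python
import numpy as np

def watermark_gradient_step(weights, watermark_bits, embed_matrix, task_grads,
                            lr=0.01, lam=0.01):
    """One update that adds the watermark regularizer's gradients (the 'second
    gradients') to the first network's task gradients."""
    x = embed_matrix @ weights            # input values derived from the weights
    y = 1.0 / (1.0 + np.exp(-x))          # second network's output
    err = y - watermark_bits              # cross-entropy gradient w.r.t. x
    second_grads = embed_matrix.T @ err   # second gradients, mapped back to weights
    return weights - lr * (task_grads + lam * second_grads)
```

Iterating this step drives the projected, squashed weights toward the watermark bits while the task gradient continues normal training.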
  • Patent number: 11521108
    Abstract: Emails or other communications are labeled with a category label such as “spam” or “good” without using confidential or Personally Identifiable Information (PII). The category label is based on features of the emails such as metadata that do not contain PII. Graphs of inferred relationships between email features and category labels are used to assign labels to emails and to features of the emails. The labeled emails are used as a training dataset for training a machine learning model (“MLM”). The MLM model identifies unwanted emails such as spam, bulk email, phishing email, and emails that contain malware.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: December 6, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yi Luo, Weisheng Li, Sharada Shirish Acharya, Mainak Sen, Ravi Kiran Reddy Poluri, Christian Rudnick
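The "graphs of inferred relationships between email features and category labels" can be illustrated with a simple bipartite label-propagation sketch: seed labels flow from emails to their non-PII features and back. This is one plausible reading of the abstract, not the patented algorithm; the scoring scheme is an assumption.

```python
def propagate_labels(emails, seed_labels, rounds=3):
    """Bipartite label propagation between emails and their non-PII features.
    emails: dict email_id -> set of feature strings.
    seed_labels: dict email_id -> +1.0 ('good') or -1.0 ('spam')."""
    email_score = dict(seed_labels)
    for _ in range(rounds):
        # A feature's score is the mean score of labeled emails carrying it.
        feat_votes = {}
        for eid, feats in emails.items():
            if eid in email_score:
                for f in feats:
                    feat_votes.setdefault(f, []).append(email_score[eid])
        feat_score = {f: sum(v) / len(v) for f, v in feat_votes.items()}
        # An unlabeled email's score is the mean of its features' scores;
        # seed emails keep their labels.
        for eid, feats in emails.items():
            if eid not in seed_labels:
                scores = [feat_score[f] for f in feats if f in feat_score]
                if scores:
                    email_score[eid] = sum(scores) / len(scores)
    return email_score
```

The resulting labels would then serve as the training dataset for the downstream machine learning model.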
  • Patent number: 11513869
    Abstract: A system for returning synthetic database query results. The system may include a memory unit for storing instructions, and a processor configured to execute the instructions to perform operations comprising: receiving a query input by a user at a user interface; determining, based on natural language processing, a type of the query input; determining, based on the received query input and a database language interpreter, an output data format; returning, based on a generation model and the output data format, a result of the query input; providing, to a plurality of training models and based on the determined query type, the query input and the result; and training the training models, based on the query input and the result.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: November 29, 2022
    Assignee: Capital One Services, LLC
    Inventors: Jeremy Goodsitt, Austin Walters, Vincent Pham, Fardin Abdi Taghi Abad
  • Patent number: 11507872
    Abstract: A hybrid quantum-classical (HQC) computing system, including a quantum computing component and a classical computing component, computes the inverse of a Boolean function for a given output. The HQC computing system translates a set of constraints into interactions between quantum spins; forms, from the interactions, an Ising Hamiltonian whose ground state encodes a set of states of a specific input value that are consistent with the set of constraints; performs, on the quantum computing component, a quantum optimization algorithm to generate an approximation to the ground state of the Ising Hamiltonian; and measures the approximation to the ground state of the Ising Hamiltonian, on the quantum computing component, to obtain a plurality of input bits which are a satisfying assignment of the set of constraints.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: November 22, 2022
    Assignee: Zapata Computing, Inc.
    Inventors: Yudong Cao, Jonathan P. Olson, Eric R. Anschuetz
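The pipeline in this abstract (constraints → Ising interactions → ground state → satisfying assignment) can be illustrated classically by brute force, with exhaustive search standing in for the quantum optimization step. The constraint encoding below (the output of `x0 AND x1` forced to 1) is a toy example of my own, not the patent's translation scheme.

```python
import itertools

def ising_energy(spins, h, J):
    """Energy of an Ising Hamiltonian H = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in J)
    return e

def ground_state(n, h, J):
    """Brute-force stand-in for the quantum optimization step: the ground
    state encodes the input bits consistent with the constraints."""
    return min(itertools.product((-1, 1), repeat=n),
               key=lambda s: ising_energy(s, h, J))
```

With spin +1 read as bit 1, fields `h = [-1, -1]` and coupling `J = {(0, 1): -1}` make the all-ones state the unique minimum, i.e. the satisfying assignment of the AND constraint.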
  • Patent number: 11507879
    Abstract: A method, system, and non-transitory computer readable medium for vector representation of a sequence of items, including training a sequence using a first distributed representation such that a new distributed representation is produced in which the vector entries of each item that correspond to the class of the item to be explained are amplified to create dominant dimensions, and the vector entries that do not correspond to that class are fractionalized, so that the dominant dimensions carry higher absolute values than the fractionalized entries and are thereby emphasized.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: November 22, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Oded Shmueli
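The amplify/fractionalize operation on vector entries is straightforward to sketch. The scale factors below are arbitrary illustrative choices, not values from the patent.

```python
import numpy as np

def emphasize_dimensions(vec, class_dims, amplify=3.0, fraction=0.2):
    """Amplify the entries in the dimensions tied to the class being explained
    and shrink (fractionalize) the rest, so the dominant dimensions end up with
    higher absolute values."""
    out = np.array(vec, dtype=float)
    mask = np.zeros(len(out), dtype=bool)
    mask[class_dims] = True
    out[mask] *= amplify    # dominant dimensions
    out[~mask] *= fraction  # fractionalized entries
    return out
```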
  • Patent number: 11501130
    Abstract: A memory-centric neural network system and operating method thereof includes: a processing unit; semiconductor memory devices coupled to the processing unit, the semiconductor memory devices containing instructions executed by the processing unit; a weight matrix constructed with rows and columns of memory cells, inputs of the memory cells of a same row being connected to one of axons, outputs of the memory cells of a same column being connected to one of neurons; timestamp registers registering timestamps of the axons and the neurons; and a lookup table containing adjusting values indexed in accordance with the timestamps, wherein the processing unit updates the weight matrix in accordance with the adjusting values.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: November 15, 2022
    Assignee: SK hynix Inc.
    Inventors: Kenneth C. Ma, Dongwook Suh
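The timestamp-indexed weight adjustment (an STDP-style rule) can be sketched as a lookup on the spike-time difference between an axon and a neuron. The clamping behavior and table values are assumptions for illustration.

```python
def stdp_update(weight, axon_time, neuron_time, lookup):
    """Adjust one synaptic weight from the registered timestamps: the time
    difference indexes a lookup table of adjusting values.
    lookup: dict mapping integer time differences to weight adjustments."""
    dt = neuron_time - axon_time
    dt = max(min(dt, max(lookup)), min(lookup))  # clamp to the table's range
    return weight + lookup[dt]
```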
  • Patent number: 11501131
    Abstract: A memory-centric neural network system and operating method thereof includes: a processing unit; semiconductor memory devices coupled to the processing unit, the semiconductor memory devices containing instructions executed by the processing unit; a weight matrix constructed with rows and columns of memory cells, inputs of the memory cells of a same row being connected to one of axons, outputs of the memory cells of a same column being connected to one of neurons; timestamp registers registering timestamps of the axons and the neurons; and a lookup table containing adjusting values indexed in accordance with the timestamps, wherein the processing unit updates the weight matrix in accordance with the adjusting values.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: November 15, 2022
    Assignee: SK hynix Inc.
    Inventors: Kenneth C. Ma, Dongwook Suh
  • Patent number: 11494620
    Abstract: A method of computer processing is disclosed comprising receiving a data packet at a processing node of a neural network, performing a calculation on the data packet at the processing node to create a processed data packet, attaching a tag to the processed data packet, transmitting the processed data packet from the processing node to a receiving node during a systolic pulse, receiving the processed data packet at the receiving node, performing a clockwise convolution and a counter-clockwise convolution on the processed data packet, performing an adding function and a sigmoid function, and backpropagating results of the sigmoid function to each of the processing nodes that originally processed the data packet.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: November 8, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventor: Luiz M. Franca-Neto
  • Patent number: 11488000
    Abstract: The present disclosure provides an operation apparatus and method for an acceleration chip for accelerating a deep neural network algorithm. The apparatus comprises a vector addition processor module, a vector function value arithmetic unit, and a vector multiplier-adder module, wherein the three modules execute a programmable instruction and interact with each other to calculate the values of neurons, the network output result of a neural network, and the variation amount of a synaptic weight representing the interaction strength of the neurons on an input layer with the neurons on an output layer; and the three modules are all provided with an intermediate value storage region and perform read and write operations on a primary memory.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: November 1, 2022
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Zhen Li, Shaoli Liu, Shijin Zhang, Tao Luo, Cheng Qian, Yunji Chen, Tianshi Chen
  • Patent number: 11481092
    Abstract: An intelligent workspace is disclosed. In example embodiments, methods and systems for operating the intelligent workspace on an application of a computing device are disclosed. The workspace includes various tools utilizing user behavioral analytics and user role information for dynamically operating on applications like procurement applications. The systems and methods reduce operational time of the user and enhance the user experience.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: October 25, 2022
    Assignee: GLOBAL EPROCURE
    Inventor: Kabir Roy
  • Patent number: 11481611
    Abstract: Provided are embodiments of a multi-task learning system with hardware acceleration that includes a resistive random access memory crossbar array. Aspects of the invention include an input layer that has one or more input layer nodes for performing one or more tasks of the multi-task learning system, a hidden layer that has one or more hidden layer nodes, and a shared hidden layer that has one or more shared hidden layer nodes which represent a parameter, wherein the shared hidden layer nodes are coupled to each of the one or more hidden layer nodes of the hidden layer.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: October 25, 2022
    Assignee: International Business Machines Corporation
    Inventors: Takashi Ando, Reinaldo Vega, Hari Mallela
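The shared-hidden-layer topology (one set of shared parameters feeding every task's head) can be sketched as a forward pass; in hardware the shared weights would live in the RRAM crossbar, but the sketch below is plain software with an assumed tanh non-linearity and illustrative names.

```python
import numpy as np

def multitask_forward(x, w_shared, task_heads):
    """Forward pass with one shared hidden layer feeding a per-task head.
    w_shared: the shared-layer weight matrix (the crossbar-held parameters).
    task_heads: dict task_name -> head weight matrix."""
    hidden = np.tanh(w_shared @ x)  # shared representation used by every task
    return {task: w @ hidden for task, w in task_heads.items()}
```

Because every task reads the same `hidden` vector, gradients from all tasks would update `w_shared` jointly — the essence of multi-task learning.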
  • Patent number: 11481631
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using embedding functions with a deep network. One of the methods includes receiving an input comprising a plurality of features, wherein each of the features is of a different feature type; processing each of the features using a respective embedding function to generate one or more numeric values, wherein each of the embedding functions operates independently of each other embedding function, and wherein each of the embedding functions is used for features of a respective feature type; processing the numeric values using a deep network to generate a first alternative representation of the input, wherein the deep network is a machine learning model composed of a plurality of levels of non-linear operations; and processing the first alternative representation of the input using a logistic regression classifier to predict a label for the input.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: October 25, 2022
    Assignee: Google LLC
    Inventors: Gregory S. Corrado, Kai Chen, Jeffrey A. Dean, Gary R. Holt, Julian P Grady, Sharat Chikkerur, David W. Sculley, II
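The three-stage pipeline in this abstract (per-type embedding functions → deep stack of non-linear levels → logistic regression) can be sketched as a single prediction function. The ReLU non-linearity and concatenation of embedded values are my assumptions; the patent only requires independent per-type embedding functions and non-linear levels.

```python
import numpy as np

def predict(features, embedders, deep_layers, lr_weights, lr_bias):
    """features: list of (feature_type, value) pairs.
    embedders: dict feature_type -> embedding function returning a vector.
    deep_layers: weight matrices for the stacked non-linear levels.
    lr_weights, lr_bias: the logistic regression classifier."""
    # Each embedding function operates independently on its own feature type.
    embedded = np.concatenate([embedders[ftype](value)
                               for ftype, value in features])
    rep = embedded
    for w in deep_layers:
        rep = np.maximum(w @ rep, 0.0)  # one non-linear level (ReLU)
    score = lr_weights @ rep + lr_bias
    return 1.0 / (1.0 + np.exp(-score))  # predicted label probability
```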
  • Patent number: 11481598
    Abstract: A computer-implemented method for creating an auto-scaled predictive analytics model includes determining, via a processor, whether a queue size of a service master queue is greater than zero. Responsive to determining that the queue size is greater than zero, the processor fetches a count of requests in a plurality of requests in the service master queue and a type for each of the requests. The processor derives a value for time required for each of the requests and retrieves a number of available processing nodes based on the time required for each of the requests. The processor then auto-scales a processing node number responsive to determining that a total execution time for all of the requests in the plurality of requests exceeds a predetermined time value and outputs an auto-scaled predictive analytics model based on the processing node number and queue size.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: October 25, 2022
    Assignee: International Business Machines Corporation
    Inventors: Mahadev Khapali, Shashank V. Vagarali
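The auto-scaling decision (fetch request counts and types, derive per-request times, scale nodes when total execution time exceeds the threshold) can be sketched as a small function. The proportional scale-up rule is my simplification of the abstract's logic; names are illustrative.

```python
import math

def autoscale(queue, time_per_type, max_total_time, current_nodes):
    """queue: list of request-type strings in the service master queue.
    time_per_type: derived time required for each request type.
    Returns the (possibly auto-scaled) processing node number."""
    if not queue:  # queue size is zero: leave the node count alone
        return current_nodes
    total_time = sum(time_per_type[req_type] for req_type in queue)
    if total_time / current_nodes > max_total_time:
        # Scale up so the work fits within the predetermined time value.
        return math.ceil(total_time / max_total_time)
    return current_nodes
```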
  • Patent number: 11481637
    Abstract: An electronic device that includes a controller functional block and a computational functional block having one or more computational elements performs operations associated with a training operation for a generative adversarial network, the generative adversarial network including a generative network and a discriminative network. The controller functional block determines one or more characteristics of the generative adversarial network. Based on the one or more characteristics, the controller functional block configures the one or more computational elements to perform processing operations for each of the generative network and the discriminative network during the training operation for the generative adversarial network.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: October 25, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Nicholas P. Malaya
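One way to picture the controller's configuration step is splitting the computational elements between the generative and discriminative networks based on a characteristic such as their parameter counts. This proportional heuristic is purely illustrative — the patent does not specify the allocation rule.

```python
def allocate_elements(n_elements, gen_params, disc_params):
    """Split computational elements between generator and discriminator in
    proportion to their parameter counts, keeping at least one element each."""
    total = gen_params + disc_params
    gen_share = max(1, round(n_elements * gen_params / total))
    gen_share = min(gen_share, n_elements - 1)  # leave >= 1 for the discriminator
    return gen_share, n_elements - gen_share
```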
  • Patent number: 11468333
    Abstract: System-level occupancy counting in a lighting system configured to obtain indicator data of an RF spectrum signal generated at a number of times in an area. At each respective one of the number of times, based on results of application of heuristic algorithm coefficients, the lighting system generates an indicator data metric value for each of the indicator data for the respective time. The lighting system processes each indicator data metric value to compute a plurality of metric values for the respective time and combines the plurality of metric values to compute an output metric value for each of a plurality of probable numbers of occupants in the area for the respective time. The lighting system determines an occupancy count in the area at the respective time based on the computed output metric value.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: October 11, 2022
    Assignee: ABL IP HOLDING LLC
    Inventors: Min-Hao Michael Lu, Michael Miu, Eric J. Johnson
  • Patent number: 11468300
    Abstract: A circuit structure and a driving method thereof, a neural network are disclosed. The circuit structure includes at least one circuit unit, each circuit unit includes a first group of resistive switching devices and a second group of resistive switching devices, the first group of resistive switching devices includes a resistance gradual-change device, the second group of resistive switching devices includes a resistance abrupt-change device, the first group of resistive switching devices and the second group of resistive switching devices are connected in series, in a case that no voltage is applied, a resistance value of the first group of resistive switching devices is larger than a resistance value of the second group of resistive switching devices.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: October 11, 2022
    Assignee: Tsinghua University
    Inventors: Xinyi Li, Huaqiang Wu, Sen Song, Qingtian Zhang, Bin Gao, He Qian
  • Patent number: 11461703
    Abstract: Methods and systems for selecting and performing group actions include selecting parameters for an approximated action-value function, which determines a reward value associated with a particular group action taken from a particular state, using a determinant of a parameter matrix for the action-value function. A group action is selected using the approximated action-value function and the selected parameters. Agents are triggered to perform respective tasks in the group action.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: October 4, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Takayuki Osogami, Rudy R. Harry Putra
  • Patent number: 11455526
    Abstract: According to an embodiment, a neural network device includes: a plurality of cores, each executing computation and processing of a partial component in a neural network; and a plurality of routers transmitting data output from each core to one of the plurality of cores such that computation and processing are executed according to the structure of the neural network. Each of the plurality of cores outputs at least one of forward data and backward data, propagated through the neural network in the forward direction and the backward direction, respectively. Each of the plurality of routers is included in one of a plurality of partial regions, each being a forward region or a backward region. A router included in the forward region and a router included in the backward region transmit the forward data and the backward data, respectively, to other routers in the same partial regions.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: September 27, 2022
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kumiko Nomura, Takao Marukame, Yoshifumi Nishi