Patents Examined by Sehwan Kim
  • Patent number: 11132616
    Abstract: A characteristic value estimation device has a sensor data input unit to input sensor data detected by one or more sensors, a model input unit to input a first calculation model, a model learning unit to perform learning on a second calculation model, a model switch to select either the first calculation model or the second calculation model, a predictive value calculation unit to calculate an error of the selected calculation model, a probability distribution correction unit to correct the probability distribution of an uncertain parameter, a virtual sensor value estimation unit to estimate sensor data of a virtual sensor, a characteristic value distribution estimation unit to estimate a precise characteristic value distribution, which is a detailed distribution of the characteristic value, based on the sensor data of the virtual sensor and the sensor data of the sensor, and a reliability calculation unit to calculate a reliability of the precise characteristic value distribution.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: September 28, 2021
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Mikito Iwamasa, Takuro Moriyama, Tomoshi Otsuki
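    Illustrative sketch: a minimal Python example, not the patented device, of two ideas from the abstract above: switching between a first (fixed) calculation model and a second (learned) calculation model based on recent error, and estimating the reading of a virtually arranged sensor from real sensor data. The models, the interpolation rule, and the data are assumptions made for illustration.
      import numpy as np

      def first_model(x):            # stand-in for a supplied first calculation model
          return 2.0 * x + 1.0

      def fit_second_model(xs, ys):  # stand-in for a second, learned calculation model
          a, b = np.polyfit(xs, ys, 1)
          return lambda x: a * x + b

      def select_model(models, xs, ys):
          # pick the calculation model with the smaller mean absolute error on recent data
          errors = [np.mean(np.abs(m(xs) - ys)) for m in models]
          return models[int(np.argmin(errors))]

      def virtual_sensor(model, sensor_a, sensor_b):
          # estimate the reading of a sensor placed (virtually) between two real sensors
          return 0.5 * (model(sensor_a) + model(sensor_b))

      if __name__ == "__main__":
          xs = np.linspace(0.0, 10.0, 50)
          ys = 2.1 * xs + 0.8 + np.random.normal(0.0, 0.1, xs.size)  # noisy observations
          second = fit_second_model(xs, ys)
          chosen = select_model([first_model, second], xs, ys)
          print("virtual sensor estimate:", virtual_sensor(chosen, 3.0, 5.0))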
  • Patent number: 11093854
    Abstract: The present disclosure provides an emoji recommendation method and device.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: August 17, 2021
    Assignee: BEIJING XINMEI HUTONG TECHNOLOGY CO., LTD.
    Inventors: Xin Gao, Li Zhou, Xinyong Hu
  • Patent number: 11074494
    Abstract: In one respect, there is provided a system for classifying an instruction sequence with a machine learning model. The system may include at least one processor and at least one memory. The memory may include program code that provides operations when executed by the at least one processor. The operations may include: processing an instruction sequence with a trained machine learning model configured to detect one or more interdependencies amongst a plurality of tokens in the instruction sequence and determine a classification for the instruction sequence based on the one or more interdependencies amongst the plurality of tokens; and providing, as an output, the classification of the instruction sequence. Related methods and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: July 27, 2021
    Assignee: Cylance Inc.
    Inventors: Xuan Zhao, Matthew Wolff, John Brock, Brian Wallace, Andy Wortman, Jian Luan, Mahdi Azarafrooz, Andrew Davis, Michael Wojnowicz, Derek Soeder, David Beveridge, Eric Petersen, Ming Jin, Ryan Permeh
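    Illustrative sketch: a small recurrent classifier, not Cylance's model, showing how a trained network can capture interdependencies amongst tokens of an instruction sequence and output a classification; the vocabulary size, dimensions, and the two classes are assumed.
      import torch
      import torch.nn as nn

      class InstructionSequenceClassifier(nn.Module):
          def __init__(self, vocab_size=256, embed_dim=32, hidden_dim=64, num_classes=2):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, embed_dim)
              self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
              self.head = nn.Linear(hidden_dim, num_classes)

          def forward(self, token_ids):               # token_ids: (batch, seq_len)
              embedded = self.embed(token_ids)
              _, (hidden, _) = self.lstm(embedded)    # final hidden state summarizes the sequence
              return self.head(hidden[-1])            # class logits per instruction sequence

      if __name__ == "__main__":
          model = InstructionSequenceClassifier()
          tokens = torch.randint(0, 256, (4, 20))     # a batch of 4 token sequences
          print(model(tokens).shape)                  # torch.Size([4, 2])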
  • Patent number: 11049045
    Abstract: A classification apparatus includes: a calculation unit that outputs, as a classification result, results of classification by each of a plurality of classifiers with respect to learning data formed of data of at least two classes at a learning time and calculates a combination result value obtained by linear combination, using a combination coefficient, of results of classification by each of the plurality of classifiers with respect to the learning data to output the calculated combination result value as the classification result at a classification time; an extraction unit that extracts a correct solution class and an incorrect solution class for each of the classifiers from the classification result; a difference calculation unit that calculates a difference between the correct solution class and the incorrect solution class for each of the classifiers; a conversion unit that calculates a feature vector using the calculated difference for each of the classifiers; and a combination coefficient setting unit.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: June 29, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Kotaro Funakoshi, Naoto Iwahashi
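    Illustrative sketch: a minimal Python example, not Honda's apparatus, of combining several classifiers by a linear combination with a combination coefficient, using the difference between the correct solution class and the best incorrect solution class of each classifier as the feature that sets the coefficients; the scores and the weighting rule are invented.
      import numpy as np

      def combine(scores, coeffs):
          # scores: (n_classifiers, n_classes); coeffs: (n_classifiers,)
          return coeffs @ scores          # linear combination of per-class scores

      def margins(scores, true_class):
          correct = scores[:, true_class]
          incorrect = np.delete(scores, true_class, axis=1).max(axis=1)
          return correct - incorrect      # per-classifier difference (feature vector)

      def set_coefficients(scores, true_class):
          feats = margins(scores, true_class)
          weights = np.maximum(feats, 1e-6)   # favour classifiers with larger margins
          return weights / weights.sum()

      if __name__ == "__main__":
          scores = np.array([[0.7, 0.2, 0.1],     # classifier 1
                             [0.4, 0.5, 0.1],     # classifier 2
                             [0.6, 0.3, 0.1]])    # classifier 3
          coeffs = set_coefficients(scores, true_class=0)
          combined = combine(scores, coeffs)
          print("coefficients:", coeffs, "predicted class:", int(np.argmax(combined)))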
  • Patent number: 10990879
    Abstract: A method includes obtaining a symbolic AI model, where the symbolic AI model is configured to produce an outcome state responsive to an input based on events. The method also includes obtaining a first scenario and a second scenario, where the first scenario causes the failure of a condition associated with a norm of the symbolic AI model and the second scenario satisfies the condition associated with the norm of the symbolic AI model. The method also includes obtaining a failure penalty value and determining a first outcome state based on the symbolic AI model, the first scenario, and the failure penalty value. The method also includes determining a second outcome state based on the symbolic AI model and the second scenario. The method also includes determining an outcome score based on the first outcome state and the second outcome state.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: April 27, 2021
    Assignee: Digital Asset Capital, Inc.
    Inventor: Edward Hunter
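    Illustrative sketch: a toy rendering, not the patented system, of a norm's condition as a predicate over a scenario, with a failure penalty applied to the failing scenario and an outcome score computed from the two outcome states; the condition, base value, and penalty are assumptions.
      def outcome_state(scenario, condition, base_value=100.0, failure_penalty=0.0):
          # apply the penalty only when the scenario fails the norm's condition
          return base_value if condition(scenario) else base_value - failure_penalty

      def outcome_score(first_state, second_state):
          return second_state - first_state   # how much satisfying the condition is worth

      if __name__ == "__main__":
          condition = lambda s: s["payment"] >= s["amount_due"]      # example norm
          failing = {"payment": 40, "amount_due": 100}               # first scenario: fails
          passing = {"payment": 100, "amount_due": 100}              # second scenario: passes
          first = outcome_state(failing, condition, failure_penalty=25.0)
          second = outcome_state(passing, condition)
          print("outcome score:", outcome_score(first, second))      # 25.0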
  • Patent number: 10963817
    Abstract: Certain aspects involve training tree-based machine-learning models for computing predicted responses and generating explanatory data for the models. For example, independent variables having relationships with a response variable are identified. Each independent variable corresponds to an action or observation for an entity. The response variable has outcome values associated with the entity. Splitting rules are used to generate the tree-based model, which includes decision trees for determining relationships between independent variables and a predicted response associated with the response variable. The tree-based model is iteratively adjusted to enforce monotonicity with respect to representative response values of the terminal nodes. For instance, one or more decision trees are adjusted such that one or more representative response values are modified and a monotonic relationship exists between each independent variable and the response variable.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: March 30, 2021
    Assignee: EQUIFAX INC.
    Inventors: Lewis Jordan, Matthew Turner, Finto Antony
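    Illustrative sketch: a minimal Python example, not Equifax's procedure, of adjusting the representative response values of terminal nodes so the predicted response is monotonic in an independent variable; a binned-mean model stands in for a decision tree, and the cumulative-maximum adjustment is one simple way to enforce monotonicity.
      import numpy as np

      def fit_binned_tree(x, y, n_bins=5):
          # crude stand-in for a decision tree: equal-width bins with mean response per bin
          edges = np.linspace(x.min(), x.max(), n_bins + 1)
          node_values = np.array([y[(x >= lo) & (x <= hi)].mean()
                                  for lo, hi in zip(edges[:-1], edges[1:])])
          return edges, node_values

      def enforce_monotonicity(node_values):
          # adjust representative values so each node is >= every node to its left
          return np.maximum.accumulate(node_values)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = rng.uniform(0, 10, 500)
          y = 0.8 * x + rng.normal(0, 2.0, x.size)        # noisy but monotone relationship
          edges, values = fit_binned_tree(x, y)
          print("raw node values:      ", np.round(values, 2))
          print("monotone node values: ", np.round(enforce_monotonicity(values), 2))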
  • Patent number: 10922604
    Abstract: In one respect, there is provided a system for training a neural network adapted for classifying one or more instruction sequences. The system may include at least one processor and at least one memory. The memory may include program code which when executed by the at least one processor provides operations including: training, based at least on training data, a machine learning model to detect one or more predetermined interdependencies amongst a plurality of tokens in the training data; and providing the trained machine learning model to enable classification of one or more instruction sequences. Related methods and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: February 16, 2021
    Assignee: Cylance Inc.
    Inventors: Xuan Zhao, Matthew Wolff, John Brock, Brian Wallace, Andy Wortman, Jian Luan, Mahdi Azarafrooz, Andrew Davis, Michael Wojnowicz, Derek Soeder, David Beveridge, Eric Petersen, Ming Jin, Ryan Permeh
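    Illustrative sketch: a small, self-contained training loop, not Cylance's pipeline, that fits a recurrent classifier on synthetic token sequences so that it learns interdependencies amongst tokens; the synthetic data, labels, and hyperparameters are assumptions.
      import torch
      import torch.nn as nn

      class SeqClassifier(nn.Module):
          def __init__(self, vocab=64, embed=16, hidden=32, classes=2):
              super().__init__()
              self.embed = nn.Embedding(vocab, embed)
              self.rnn = nn.GRU(embed, hidden, batch_first=True)
              self.head = nn.Linear(hidden, classes)

          def forward(self, tokens):
              _, hidden = self.rnn(self.embed(tokens))
              return self.head(hidden[-1])

      if __name__ == "__main__":
          torch.manual_seed(0)
          tokens = torch.randint(0, 64, (128, 30))
          labels = (tokens[:, 0] < 32).long()          # synthetic rule the model must learn
          model, loss_fn = SeqClassifier(), nn.CrossEntropyLoss()
          optim = torch.optim.Adam(model.parameters(), lr=1e-2)
          for epoch in range(20):                      # short training loop
              optim.zero_grad()
              loss = loss_fn(model(tokens), labels)
              loss.backward()
              optim.step()
          print("final loss:", float(loss))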
  • Patent number: 10909460
    Abstract: An apparatus includes a processor to: provide a set of feature routines to a set of processor cores to detect features of a data set distributed thereamong; generate metadata indicative of the detected features; generate context data indicative of contextual aspects of the data set; provide the metadata and context data to each processor core, and distribute a set of suggestion models thereamong to enable derivation of a suggested subset of data preparation operations to be performed on the data set; transmit indications of the suggested subset to a viewing device, and receive therefrom indications of a selected subset of data preparation operations to be performed; compare the selected and suggested subsets; and in response to differences therebetween, re-train at least one suggestion model of the set of suggestion models based at least on the combination of the metadata, context data and selected subset.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: February 2, 2021
    Assignee: SAS INSTITUTE INC.
    Inventors: Nancy Anne Rausch, Roger Jay Barney, John P. Trawinski
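    Illustrative sketch: a toy Python example, not the SAS implementation, of deriving a suggested subset of data preparation operations from metadata about a data set, comparing it with the user-selected subset, and flagging the suggestion model for retraining when they differ; the metadata fields and suggestion rules are invented.
      def suggest_operations(metadata):
          suggested = set()
          if metadata.get("missing_ratio", 0.0) > 0.05:
              suggested.add("impute_missing")
          if metadata.get("has_text_columns"):
              suggested.add("standardize_case")
          if metadata.get("numeric_skew", 0.0) > 1.0:
              suggested.add("log_transform")
          return suggested

      def needs_retraining(suggested, selected):
          # any difference between suggested and selected operations triggers retraining
          return suggested != set(selected)

      if __name__ == "__main__":
          metadata = {"missing_ratio": 0.12, "has_text_columns": True, "numeric_skew": 0.3}
          suggested = suggest_operations(metadata)
          selected = {"impute_missing", "standardize_case", "drop_duplicates"}
          print("suggested:", suggested)
          print("retrain suggestion model:", needs_retraining(suggested, selected))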
  • Patent number: 10832150
    Abstract: A method and system are provided for retraining an analytic model. The method includes building, by a processor, a Markov chain for the analytic model. The Markov chain has only two states that consist of an alarm state and a no alarm state. The method further includes updating, by the processor, the Markov chain with observed states, for each of a plurality of timestamps evaluated during a burn-in period. The method also includes updating, by the processor, state transition probabilities within the Markov chain, for each of a plurality of timestamps evaluated after the burn-in period. The method additionally includes generating, by the processor, a signal for causing the model to be retrained, responsive to any of the state transition probabilities representing a probability of greater than 0.5 of seeing the alarm state in a previous interval and again in a current interval.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul M. J. Barry, Cormac Cummins, Ian Manning, Vinh Tuan Thai
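    Illustrative sketch: a minimal Python rendering, not IBM's implementation, of the two-state (alarm / no alarm) Markov chain: transitions are counted after a burn-in period, and a retraining signal fires when the probability of seeing the alarm state again, given an alarm in the previous interval, exceeds 0.5; the burn-in length and observation sequence are assumed.
      from collections import defaultdict

      def retrain_signal(observations, burn_in=10, threshold=0.5):
          counts = defaultdict(int)                       # (prev_state, cur_state) -> count
          for t in range(1, len(observations)):
              if t < burn_in:
                  continue                                # burn-in: ignore early transitions
              counts[(observations[t - 1], observations[t])] += 1
          alarm_total = counts[("alarm", "alarm")] + counts[("alarm", "no_alarm")]
          if alarm_total == 0:
              return False
          p_alarm_again = counts[("alarm", "alarm")] / alarm_total
          return p_alarm_again > threshold                # signal that the model should be retrained

      if __name__ == "__main__":
          obs = ["no_alarm"] * 12 + ["alarm", "alarm", "alarm", "no_alarm", "alarm", "alarm"]
          print("retrain:", retrain_signal(obs))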
  • Patent number: 10817294
    Abstract: A block coordinate descent method, system, and computer program product for partitioning a global feature matrix into blocks, with each node receiving a number of blocks equal to the total number of blocks divided by the number of nodes, selecting, at each node, a subset of the blocks, and, in one of the nodes, launching a thread to simultaneously update a closed-form solution by minimizing a single coordinate in one of the blocks.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: October 27, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Liana Liyow Fong, Wei Tan, Michael Witbrock, Lingfei Wu
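    Illustrative sketch: a single-node Python version of block coordinate descent for least squares, not the patented distributed method: coordinates are partitioned into blocks and each coordinate within a block is updated with its closed-form minimizer; the objective, block count, and data are assumptions.
      import numpy as np

      def block_coordinate_descent(X, y, n_blocks=4, sweeps=20):
          n_features = X.shape[1]
          w = np.zeros(n_features)
          blocks = np.array_split(np.arange(n_features), n_blocks)   # partition of coordinates
          residual = y - X @ w
          for _ in range(sweeps):
              for block in blocks:                 # in the patent, blocks are spread over nodes
                  for j in block:
                      xj = X[:, j]
                      # closed-form minimizer of the least-squares objective in coordinate j alone
                      delta = xj @ residual / (xj @ xj)
                      w[j] += delta
                      residual -= delta * xj       # keep the residual consistent with w
          return w

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          X = rng.normal(size=(200, 8))
          true_w = rng.normal(size=8)
          y = X @ true_w + rng.normal(scale=0.01, size=200)
          print("max error:", np.abs(block_coordinate_descent(X, y) - true_w).max())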
  • Patent number: 10789540
    Abstract: Generate an automorphism of the problem graph, determine an embedding of the automorphism to the hardware graph and modify the embedding of the problem graph into the hardware graph to correspond to the embedding of the automorphism to the hardware graph. Determine an upper-bound on the required chain strength. Calibrate and record properties of the component of a quantum processor with a digital processor, query the digital processor for a range of properties. Generate a bit mask and change the sign of the bias of individual qubits according to the bit mask before submitting a problem to a quantum processor, apply the same bit mask to the bit result. Generate a second set of parameters of a quantum processor from a first set of parameters via a genetic algorithm.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: September 29, 2020
    Assignee: D-WAVE SYSTEMS INC.
    Inventors: Andrew D. King, Robert B. Israel, Paul I. Bunyk, Kelly T. R. Boothby, Steven P. Reinhardt, Aidan P. Roy, James A. King, Trevor M. Lanting, Abraham J. Evert
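    Illustrative sketch: a minimal Python example, not D-Wave's implementation, of the bit-mask step described above: the signs of individual qubit biases are flipped according to a random mask before solving, and the same mask is applied to the returned spins; a brute-force Ising solver stands in for the quantum processor.
      import itertools
      import numpy as np

      def ising_energy(spins, h, J):
          return float(h @ spins + spins @ J @ spins)

      def brute_force_ground_state(h, J):
          best, best_e = None, float("inf")
          for spins in itertools.product([-1, 1], repeat=len(h)):
              s = np.array(spins)
              e = ising_energy(s, h, J)
              if e < best_e:
                  best, best_e = s, e
          return best

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          n = 6
          h = rng.normal(size=n)
          J = np.triu(rng.normal(size=(n, n)), k=1)
          mask = rng.choice([-1, 1], size=n)              # the random bit mask
          h_masked = mask * h                             # change the sign of individual biases
          J_masked = np.outer(mask, mask) * J             # couplings transform consistently
          masked_solution = brute_force_ground_state(h_masked, J_masked)
          solution = mask * masked_solution               # apply the same mask to the result
          assert np.isclose(ising_energy(solution, h, J),
                            ising_energy(brute_force_ground_state(h, J), h, J))
          print("ground state:", solution)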
  • Patent number: 10679142
    Abstract: Disclosed is a guidance technique that can be applied to guide search and analysis of stored data by a user. The technique can include inputting from a user a portion of a search query expressed in a pipelined search language, at a system for indexing and searching machine data. The system generates and outputs search guidance for the user as the user builds the search query, by applying the portion of the query to an operation flow model, where the operation flow model represents a plurality of searches performable by the system. The operation flow model has been generated based on multi-user historical search data and includes a plurality of states, each representing a different group of related commands of the pipelined search language.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: June 9, 2020
    Assignee: SPLUNK INC.
    Inventor: Archana Sulochana Ganapathi
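    Illustrative sketch: a toy Python version, not Splunk's operation flow model, that learns command-to-command transitions from historical pipelined searches and suggests likely next commands for a partially built query; the example searches and history are invented.
      from collections import Counter, defaultdict

      def build_flow_model(historical_searches):
          transitions = defaultdict(Counter)
          for search in historical_searches:
              commands = [stage.strip().split()[0] for stage in search.split("|")]
              for prev, nxt in zip(commands, commands[1:]):
                  transitions[prev][nxt] += 1
          return transitions

      def suggest_next(flow_model, partial_query, k=2):
          last_command = partial_query.rstrip("| ").split("|")[-1].strip().split()[0]
          return [cmd for cmd, _ in flow_model[last_command].most_common(k)]

      if __name__ == "__main__":
          history = [
              "search error | stats count by host | sort -count",
              "search error | stats count by source | head 10",
              "search failed login | stats count by user | sort -count",
          ]
          model = build_flow_model(history)
          print(suggest_next(model, "search error | stats count by host |"))  # ['sort', 'head']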
  • Patent number: 10635974
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural programming. One of the methods includes processing a current neural network input using a core recurrent neural network to generate a neural network output; determining, from the neural network output, whether or not to end a currently invoked program and to return to a calling program from the set of programs; determining, from the neural network output, a next program to be called; determining, from the neural network output, contents of arguments to the next program to be called; receiving a representation of a current state of the environment; and generating a next neural network input from an embedding for the next program to be called and the representation of the current state of the environment.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: April 28, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Scott Ellison Reed, Joao Ferdinando Gomes de Freitas
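    Illustrative sketch: one step of a core recurrent network in the spirit of the abstract above, not DeepMind's model: the input is built from a program embedding and an environment-state representation, and the output is used to decide whether to return, which program to call next, and the contents of its arguments; all dimensions and the environment encoding are assumptions.
      import torch
      import torch.nn as nn

      class CoreStep(nn.Module):
          def __init__(self, n_programs=8, embed=16, env_dim=8, hidden=32, arg_dim=4):
              super().__init__()
              self.program_embed = nn.Embedding(n_programs, embed)
              self.cell = nn.LSTMCell(embed + env_dim, hidden)
              self.end_head = nn.Linear(hidden, 1)        # probability of returning to the caller
              self.prog_head = nn.Linear(hidden, n_programs)
              self.arg_head = nn.Linear(hidden, arg_dim)  # arguments for the next program call

          def forward(self, program_id, env_state, hc):
              x = torch.cat([self.program_embed(program_id), env_state], dim=-1)
              h, c = self.cell(x, hc)
              end_prob = torch.sigmoid(self.end_head(h))
              next_program = torch.argmax(self.prog_head(h), dim=-1)
              args = self.arg_head(h)
              return end_prob, next_program, args, (h, c)

      if __name__ == "__main__":
          core = CoreStep()
          hc = (torch.zeros(1, 32), torch.zeros(1, 32))
          program_id, env = torch.tensor([0]), torch.zeros(1, 8)
          for _ in range(3):                               # call a few programs in sequence
              end_prob, program_id, args, hc = core(program_id, env, hc)
              print(float(end_prob), int(program_id))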
  • Patent number: 10599976
    Abstract: Provided are a computer program product, a learning apparatus and a learning method. The method includes calculating, by a processor, a first propagation value that is propagated from a propagation source node to a propagation destination node in a neural network including a plurality of nodes, based on node values of the propagation source node at a plurality of time points and a weight corresponding to passage of time points based on a first attenuation coefficient. The method further includes updating, by the processor, a first update parameter, which is used for updating the first attenuation coefficient, by using the first propagation value. The method also includes updating, by the processor, the first attenuation coefficient by using the first update parameter and an error of the node value of the propagation destination node.
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: March 24, 2020
    Assignee: International Business Machines Corporation
    Inventor: Takayuki Osogami
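    Illustrative sketch: a minimal numerical example, not IBM's method, in which the propagation value is an attenuation-weighted sum of past source-node values, an update parameter (here taken to be the sensitivity of the propagation value to the attenuation coefficient) is recomputed each step, and the attenuation coefficient is adjusted using that parameter and the destination node's error; the linear prediction, learning rate, and data are assumptions.
      import numpy as np

      def propagation_value(source_history, attenuation):
          # weight each past source-node value by attenuation ** (elapsed time points)
          ages = np.arange(len(source_history), 0, -1)
          return float(np.sum((attenuation ** ages) * source_history))

      def update_parameter(source_history, attenuation):
          # sensitivity of the propagation value to the attenuation coefficient
          ages = np.arange(len(source_history), 0, -1)
          return float(np.sum(ages * (attenuation ** (ages - 1)) * source_history))

      def training_step(source_history, destination_value, attenuation, weight=0.7, lr=0.005):
          prop = propagation_value(source_history, attenuation)
          param = update_parameter(source_history, attenuation)
          error = destination_value - weight * prop      # error at the propagation destination node
          attenuation += lr * error * weight * param     # gradient step on the squared error
          return float(np.clip(attenuation, 0.0, 0.999))

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          history = rng.uniform(0.5, 1.5, size=10)        # fixed, positive source-node history
          target = 0.7 * propagation_value(history, 0.8)  # destination value generated with 0.8
          attenuation = 0.3
          for _ in range(300):
              attenuation = training_step(history, target, attenuation)
          print("recovered attenuation coefficient:", round(attenuation, 3))   # converges near 0.8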