Patents Examined by Ying Yu Chen
  • Patent number: 11806551
    Abstract: A treatment planning prediction method to predict a Dose-Volume Histogram (DVH) or Dose Distribution (DD) for patient data using a machine-learning computer framework is provided, with the key inclusion of a Planning Target Volume (PTV)-only treatment plan in the framework. A dosimetric parameter, obtained from a prediction of the PTV-only treatment plan, is used as an additional input to the framework. The method outputs a Dose-Volume Histogram and/or a Dose Distribution for the patient that incorporates the prediction of the PTV-only treatment plan. The method avoids the complicated process of quantifying anatomical features and directly harnesses the inherent correlation between the PTV-only plan and the clinical plan in the dose domain, providing a more robust and efficient solution to the important DVH prediction problem in treatment planning and plan quality assurance.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: November 7, 2023
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Yong Yang, Lei Xing, Ming Ma
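    Illustrative sketch (not from the patent; array shapes, feature names, and the use of least squares are assumptions): feeding a predicted PTV-only plan DVH into a model as extra dosimetric inputs alongside other patient features when regressing the clinical DVH.
      import numpy as np

      rng = np.random.default_rng(0)
      n_patients, n_bins = 64, 50                       # assumed: 50 dose bins per DVH
      ptv_only_dvh = rng.random((n_patients, n_bins))   # prediction of the PTV-only plan
      anatomy_feats = rng.random((n_patients, 8))       # other patient inputs (placeholder)
      clinical_dvh = rng.random((n_patients, n_bins))   # training target: clinical plan DVH

      # The dosimetric parameters from the PTV-only prediction become extra inputs.
      X = np.hstack([ptv_only_dvh, anatomy_feats])
      # Ordinary least squares stands in for the machine-learning framework.
      W, *_ = np.linalg.lstsq(X, clinical_dvh, rcond=None)
      print((X @ W).shape)                              # predicted clinical DVHs: (64, 50)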
  • Patent number: 11803731
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting a neural network to perform a particular machine learning task while satisfying a set of constraints.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: October 31, 2023
    Assignee: Google LLC
    Inventor: Gabriel Mintzer Bender
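    Illustrative sketch (not from the patent; the candidate names, metrics, and latency budget are invented for illustration): selecting, from a pool of candidate networks, the most accurate one whose measured cost satisfies a constraint.
      candidates = [
          {"name": "net_a", "accuracy": 0.76, "latency_ms": 12.0},
          {"name": "net_b", "accuracy": 0.79, "latency_ms": 25.0},
          {"name": "net_c", "accuracy": 0.78, "latency_ms": 14.5},
      ]
      LATENCY_BUDGET_MS = 15.0   # assumed constraint

      feasible = [c for c in candidates if c["latency_ms"] <= LATENCY_BUDGET_MS]
      best = max(feasible, key=lambda c: c["accuracy"])
      print(best["name"])        # net_c: best accuracy within the constraint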
  • Patent number: 11797838
    Abstract: Systems and methods for generating embeddings for nodes of a corpus graph are presented. The embeddings correspond to aggregated embedding vectors for nodes of the corpus graph. Without processing the entire corpus graph to generate all aggregated embedding vectors, a relevant neighborhood of nodes within the corpus graph is identified for a target node of the corpus graph. Based on embedding information of the target node's immediate neighbors, and on neighborhood embedding information from the target node's relevant neighborhood, an aggregated embedding vector can be generated for the target node that comprises both an embedding vector portion corresponding to the target node and a neighborhood embedding vector portion corresponding to embedding information of the relevant neighborhood of the target node. Utilizing both portions of the aggregated embedding vector leads to improved content recommendation to a user in response to a query.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: October 24, 2023
    Assignee: Pinterest, Inc.
    Inventors: Jurij Leskovec, Chantat Eksombatchai, Ruining He, Kaifeng Chen, Rex Ying
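    Illustrative sketch (not Pinterest's implementation; the graph, dimensions, and mean aggregation are assumptions): an aggregated embedding vector built by concatenating a target node's own embedding with an embedding aggregated from a small relevant neighborhood.
      import numpy as np

      rng = np.random.default_rng(1)
      embeddings = {n: rng.random(16) for n in range(6)}   # base embedding per node
      neighborhoods = {0: [1, 2, 3], 1: [0, 4], 2: [0, 5]} # assumed relevant neighborhoods

      def aggregated_embedding(node):
          hood = neighborhoods.get(node, [])
          hood_vec = np.mean([embeddings[n] for n in hood], axis=0) if hood else np.zeros(16)
          # Target-node portion plus neighborhood portion, as two halves of one vector.
          return np.concatenate([embeddings[node], hood_vec])

      print(aggregated_embedding(0).shape)                 # (32,)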
  • Patent number: 11797826
    Abstract: A system is provided for classifying an instruction sequence with a machine learning model. The system may include at least one processor and at least one memory. The memory may include program code that provides operations when executed by the at least one processor. The operations may include: processing an instruction sequence with a trained machine learning model configured to detect one or more interdependencies amongst a plurality of tokens in the instruction sequence and determine a classification for the instruction sequence based on the one or more interdependencies amongst the plurality of tokens; and providing, as an output, the classification of the instruction sequence. Related methods and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: October 24, 2023
    Assignee: Cylance Inc.
    Inventors: Xuan Zhao, Matthew Wolff, John Brock, Brian Wallace, Andy Wortman, Jian Luan, Mahdi Azarafrooz, Andrew Davis, Michael Wojnowicz, Derek Soeder, David Beveridge, Eric Petersen, Ming Jin, Ryan Permeh
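    Illustrative sketch (not the patented model; the token embeddings and classifier weights are random placeholders): scaled dot-product self-attention capturing interdependencies among instruction tokens, followed by a pooled score as the classification.
      import numpy as np

      rng = np.random.default_rng(2)
      tokens = rng.random((20, 32))     # 20 instruction tokens, 32-dim embeddings (placeholder)

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      attn = softmax(tokens @ tokens.T / np.sqrt(tokens.shape[1]))   # token interdependencies
      contextual = attn @ tokens                                     # tokens mixed with their context
      w = rng.normal(size=32)                                        # placeholder classifier weights
      score = 1.0 / (1.0 + np.exp(-(contextual.mean(axis=0) @ w)))   # e.g. probability of "malicious"
      print(round(float(score), 3))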
  • Patent number: 11783160
    Abstract: Various systems, devices, and methods for operating on a data sequence. A system includes a set of circuits that form an input layer to receive a data sequence; first hardware computing units to transform the data sequence, the first hardware computing units being connected using a set of randomly selected weights, with a first hardware computing unit to: receive an input from a second hardware computing unit, determine a weight of the connection between the first and second hardware computing units using an identifier of the second hardware computing unit and a fixed random weight generator, and operate on the input using the weight to determine a state of the first hardware computing unit; and second hardware computing units to operate on states of the first hardware computing units to generate an output based on the data sequence.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: October 10, 2023
    Assignee: Intel Corporation
    Inventors: Phil Knag, Gregory Kengho Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Ram Kumar Krishnamurthy
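    Illustrative sketch (an interpretation, not Intel's circuit; the mixing constant and weight range are assumptions): a connection weight regenerated on demand from the sending unit's identifier and a fixed random generator, so the weight never has to be stored.
      import numpy as np

      def fixed_random_weight(sender_id: int, receiver_id: int) -> float:
          # Deterministic seed from the unit identifiers; the same ids always give the same weight.
          seed = (sender_id * 0x9E3779B1 + receiver_id) & 0xFFFFFFFF
          return float(np.random.default_rng(seed).uniform(-1.0, 1.0))

      # The receiving unit recomputes the weight each time an input arrives.
      state = 0.0
      for sender in (3, 7, 11):
          state += fixed_random_weight(sender, receiver_id=42) * 1.0   # input value 1.0
      print(round(state, 4))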
  • Patent number: 11775832
    Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a data modifier configured to receive input data and weight values of a neural network. The data modifier may include an input data modifier configured to modify the received input data and a weight modifier configured to modify the received weight values. The aspects may further include a computing unit configured to calculate one or more groups of output data based on the modified input data and the modified weight values.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: October 3, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Yifan Hao, Yunji Chen, Qi Guo, Tianshi Chen
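    Illustrative sketch (an assumption, not the disclosed device; the particular modification shown, zeroing small magnitudes, is only a stand-in for whatever modification the device applies): an input-data modifier and a weight modifier feeding a computing unit.
      import numpy as np

      def modify(values, threshold=0.1):
          # Placeholder modification rule: zero out small-magnitude entries.
          return np.where(np.abs(values) < threshold, 0.0, values)

      rng = np.random.default_rng(3)
      input_data = rng.normal(size=(4, 8))
      weights = rng.normal(size=(8, 3))

      output = modify(input_data) @ modify(weights)   # computing unit on modified operands
      print(output.shape)                             # (4, 3)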
  • Patent number: 11775807
    Abstract: An artificial neural network (ANN) system includes a processor, a virtual overflow detection circuit and a data format controller. The processor performs node operations with respect to a plurality of nodes included in each layer of an ANN to obtain a plurality of result values of the node operations and performs a quantization operation on the obtained plurality of result values based on a k-th fixed-point format for a current quantization of each layer to obtain a plurality of quantization values. The virtual overflow detection circuit generates virtual overflow information indicating a distribution of valid bit numbers of the obtained plurality of quantization values. The data format controller determines a (k+1)-th fixed-point format for a next quantization of each layer based on the generated virtual overflow information. Overflow and/or underflow are prevented efficiently by controlling the fixed-point format using the virtual overflow information.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: October 3, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae-Gon Kim, Kyoung-Young Kim, Do-Yun Kim, Jun-Seok Park, Sang-Hyuck Ha
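    Illustrative sketch (an assumption, not Samsung's circuit; the bit widths and update rule are invented here): building a histogram of valid-bit counts from a layer's quantized outputs and using it to choose the next fixed-point format, shrinking the fraction when the top bit is in use and widening it when there is unused headroom.
      import numpy as np

      TOTAL_BITS = 8                                         # assumed fixed-point width

      def valid_bits(q):
          return abs(int(q)).bit_length()

      def next_frac_bits(quantized, frac_bits):
          counts = np.bincount([valid_bits(q) for q in quantized], minlength=TOTAL_BITS + 1)
          if counts[TOTAL_BITS] > 0:                           # top bit used: overflow risk
              return max(frac_bits - 1, 0)
          if counts[:TOTAL_BITS - 1].sum() == len(quantized):  # unused headroom: underflow risk
              return min(frac_bits + 1, TOTAL_BITS - 1)
          return frac_bits

      values = np.random.default_rng(4).normal(scale=2.0, size=256)   # a layer's result values
      frac = 4
      q = np.clip(np.round(values * (1 << frac)),
                  -(1 << (TOTAL_BITS - 1)), (1 << (TOTAL_BITS - 1)) - 1).astype(int)
      print("fractional bits for the next quantization:", next_frac_bits(q, frac))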
  • Patent number: 11763139
    Abstract: A neuromorphic chip includes synaptic cells including respective resistive devices, axon lines, dendrite lines and switches. The synaptic cells are connected to the axon lines and dendrite lines to form a crossbar array. The axon lines are configured to receive input data and to supply the input data to the synaptic cells. The dendrite lines are configured to receive output data and to supply the output data via one or more respective output lines. A given one of the switches is configured to connect an input terminal to one or more input lines and to changeably connect its one or more output terminals to a given one or more axon lines.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: September 19, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Atsuya Okazaki, Masatoshi Ishii, Junka Okazawa, Kohji Hosokawa, Takayuki Osogami
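    Illustrative sketch (an assumption, not IBM's chip): a crossbar of resistive synaptic cells modelled as a conductance matrix, with axon lines driving the rows and each dendrite line summing the currents of its column.
      import numpy as np

      rng = np.random.default_rng(5)
      conductance = rng.uniform(0.0, 1e-3, size=(16, 8))   # synaptic cells: 16 axons x 8 dendrites
      axon_inputs = rng.choice([0.0, 1.0], size=16)        # input data applied to the axon lines

      dendrite_outputs = conductance.T @ axon_inputs       # per-dendrite current summation
      print(dendrite_outputs.shape)                        # (8,)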
  • Patent number: 11755883
    Abstract: A computer-implemented method for performing computer vision with reduced computational cost and improved accuracy can include obtaining, by a computing system including one or more computing devices, input data comprising an input tensor having one or more dimensions, providing, by the computing system, the input data to a machine-learned convolutional attention network, the machine-learned convolutional attention network including two or more network stages, and, in response to providing the input data to the machine-learned convolutional attention network, receiving, by the computing system, a machine-learning prediction from the machine-learned convolutional attention network. The convolutional attention network can include at least one attention block, wherein the attention block includes a relative attention mechanism, the relative attention mechanism including the sum of a static convolution kernel with an adaptive attention matrix.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: September 12, 2023
    Assignee: GOOGLE LLC
    Inventors: Zihang Dai, Hanxiao Liu, Mingxing Tan, Quoc V. Le
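    Illustrative sketch (simplified, an assumption rather than the patent's exact formulation; lengths and dimensions are placeholders): attention logits formed as the sum of an adaptive, input-dependent matrix and a static kernel indexed only by relative position.
      import numpy as np

      rng = np.random.default_rng(6)
      L, d = 10, 16                                    # sequence length and head width (assumed)
      q, k = rng.random((L, d)), rng.random((L, d))
      static_kernel = rng.random(2 * L - 1)            # one value per relative offset

      adaptive = q @ k.T / np.sqrt(d)                  # adaptive attention matrix
      relative = np.array([[static_kernel[i - j + L - 1] for j in range(L)] for i in range(L)])
      logits = adaptive + relative                     # sum of static kernel and adaptive matrix
      weights = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
      print(weights.shape)                             # (10, 10)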
  • Patent number: 11755916
    Abstract: An improved computer implemented method and corresponding systems and computer readable media for improving performance of a deep neural network are provided to mitigate effects related to catastrophic forgetting in neural network learning. In an embodiment, the method includes storing, in memory, logits of a set of samples from a previous set of tasks (D1); and maintaining classification information from the previous set of tasks by utilizing the logits for matching during training on a new set of tasks (D2).
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: September 12, 2023
    Assignee: ROYAL BANK OF CANADA
    Inventors: Yanshuai Cao, Ruitong Huang, Junfeng Wen
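    Illustrative sketch (an assumption, much simpler than the patented method; the toy linear model and trade-off weight are placeholders): logits of D1 samples are stored, and training on D2 adds a logit-matching penalty so the old classification behaviour is retained.
      import numpy as np

      rng = np.random.default_rng(7)
      d1_inputs = rng.random((32, 10))                      # samples kept from the previous tasks (D1)
      W = rng.random((10, 5))                               # toy linear "network"
      d1_logits = d1_inputs @ W                             # logits stored before training on D2

      def loss(W_new, d2_x, d2_y, match_weight=1.0):
          task = np.mean((d2_x @ W_new - d2_y) ** 2)             # loss on the new tasks (D2)
          match = np.mean((d1_inputs @ W_new - d1_logits) ** 2)  # keep the stored logits
          return task + match_weight * match

      d2_x, d2_y = rng.random((32, 10)), rng.random((32, 5))
      print(round(loss(W + 0.01, d2_x, d2_y), 4))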
  • Patent number: 11734548
    Abstract: The present disclosure provides an integrated circuit chip device and a related product. The integrated circuit chip device includes: a primary processing circuit and a plurality of basic processing circuits. The primary processing circuit or at least one of the plurality of basic processing circuits includes a compression mapping circuit configured to perform compression on data of a neural network operation. The technical solution provided by the present disclosure has the advantages of a reduced amount of computation and low power consumption.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: August 22, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11734545
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
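    Illustrative sketch (simplified from the abstract; layer sizes and the omission of normalization are assumptions): an inverted residual block with thin linear bottlenecks at the input and output, an expanded depthwise-separable middle, and a residual shortcut between the bottlenecks.
      import torch
      import torch.nn as nn

      class InvertedResidual(nn.Module):
          def __init__(self, channels=16, expansion=6):
              super().__init__()
              hidden = channels * expansion
              self.block = nn.Sequential(
                  nn.Conv2d(channels, hidden, 1, bias=False),   # expand the thin bottleneck
                  nn.ReLU6(inplace=True),
                  nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),  # depthwise
                  nn.ReLU6(inplace=True),
                  nn.Conv2d(hidden, channels, 1, bias=False),   # project back to a thin, linear bottleneck
              )

          def forward(self, x):
              return x + self.block(x)                          # residual shortcut between bottlenecks

      print(InvertedResidual()(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 16, 32, 32])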
  • Patent number: 11733970
    Abstract: An artificial intelligence system includes a neural network layer including an arithmetic operation circuit that performs an arithmetic operation of a sigmoid function. The arithmetic operation circuit includes a first circuit configured to perform an exponent arithmetic operation using Napier's constant e as the base and to output a first calculation result when the exponent in the exponent arithmetic operation is a negative number, wherein the absolute value of the exponent is used in the exponent arithmetic operation, and a second circuit configured to subtract the first calculation result obtained by the first circuit from 1 and output the subtracted value.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: August 22, 2023
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Masanori Nishizawa
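    Illustrative sketch (an interpretation only, not the patented hardware): evaluating a sigmoid so that the exponential only ever sees the absolute value of the exponent, with the result for negative inputs obtained by subtracting from 1.
      import math

      def sigmoid(x: float) -> float:
          s = 1.0 / (1.0 + math.exp(-abs(x)))      # exponent arithmetic on |x| only
          return s if x >= 0 else 1.0 - s          # second step: subtract the result from 1

      print(round(sigmoid(-2.0), 6), round(1.0 / (1.0 + math.exp(2.0)), 6))   # both 0.119203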
  • Patent number: 11727285
    Abstract: A method and system for managing a dataset. An artificial intelligence (AI) model is to be used on the dataset. A data mask describes the labeling status of the data items in the dataset. A loop is repeated until patience parameters are satisfied. The loop comprises receiving trusted labels provided by trusted labelers; updating the data mask; training the AI model on the subset of labelled data items; cloning the trained AI model into a local AI model on processing nodes; creating and chunking a randomized unlabeled subset into data subsets for dispatching to the processing nodes; receiving an indication that predicted label answers have been inferred by the processing nodes using the local AI model; and computing a model uncertainty measurement from statistical analysis of the predicted label answers. The patience parameters include one or more of a threshold value on the model uncertainty measurement and information gain between different training cycles.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: August 15, 2023
    Assignee: ServiceNow Canada Inc.
    Inventors: Frédéric Branchaud-Charron, Parmida Atighehchian, Jan Freyberg, Lorne Schell
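    Illustrative sketch (an assumption, far simpler than the patented pipeline; the centroid "model", uncertainty statistic, and threshold are stand-ins): the overall shape of the loop, training on the labelled subset tracked by a data mask, scoring the unlabelled items, and stopping once a model-uncertainty measurement meets a patience threshold.
      import numpy as np

      rng = np.random.default_rng(8)
      data = rng.random((200, 6))
      label_mask = np.zeros(len(data), dtype=bool)    # data mask: which items are labelled
      label_mask[:10] = True                          # initial trusted labels

      UNCERTAINTY_PATIENCE = 0.25                     # assumed patience parameter
      for cycle in range(20):
          centre = data[label_mask].mean(axis=0)      # stand-in for training the model
          scores = data[~label_mask] @ centre         # stand-in for predicted label answers
          uncertainty = float(np.std(scores))         # stand-in for the statistical analysis
          if uncertainty < UNCERTAINTY_PATIENCE:
              break
          # Ask a trusted labeler about the most unusual item and update the mask.
          pick = np.flatnonzero(~label_mask)[int(np.argmax(np.abs(scores - scores.mean())))]
          label_mask[pick] = True
      print("training cycles:", cycle + 1, "final uncertainty:", round(uncertainty, 3))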
  • Patent number: 11727263
    Abstract: A processor implemented method to update a sentence generation model includes: generating a target sentence corresponding to a source sentence using a first decoding model; calculating reward information associated with the target sentence using a second decoding model configured to generate a sentence in an order different from an order of the sentence generated by the first decoding model; and generating an updated sentence generation model by resetting a weight of respective nodes in the first decoding model based on the calculated reward information.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: August 15, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hoshik Lee, Hwidong Na
  • Patent number: 11727264
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received as a result of the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining, from the pseudo-count for the first observation, an exploration reward bonus that incentivizes the agent to explore the environment; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: August 15, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
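    Illustrative sketch (an assumption; a real pseudo-count comes from a density model rather than a table, and the bonus coefficient is invented): combining the actual reward with an exploration bonus that shrinks as the pseudo-count for an observation grows.
      from collections import defaultdict
      import math

      pseudo_counts = defaultdict(float)      # stand-in for a density-model pseudo-count
      BONUS_SCALE = 0.1                       # assumed coefficient

      def combined_reward(observation, actual_reward):
          pseudo_counts[observation] += 1.0
          bonus = BONUS_SCALE / math.sqrt(pseudo_counts[observation])   # rarer observations earn more
          return actual_reward + bonus

      print(combined_reward("state_a", 1.0))  # 1.1 on the first visit
      print(combined_reward("state_a", 1.0))  # ~1.0707 on the second visit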
  • Patent number: 11720796
    Abstract: A method includes maintaining respective episodic memory data for each of multiple actions; receiving a current observation characterizing a current state of an environment being interacted with by an agent; processing the current observation using an embedding neural network in accordance with current values of parameters of the embedding neural network to generate a current key embedding for the current observation; for each action of the plurality of actions: determining the p nearest key embeddings in the episodic memory data for the action to the current key embedding according to a distance measure, and determining a Q value for the action from the return estimates mapped to by the p nearest key embeddings in the episodic memory data for the action; and selecting, using the Q values for the actions, an action from the multiple actions as the action to be performed by the agent.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: August 8, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Benigno Uria-Martínez, Alexander Pritzel, Charles Blundell, Adrià Puigdomènech Badia
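    Illustrative sketch (an assumption, simplified from the abstract; the memory contents, the value of p, and the inverse-distance weighting are placeholders): per-action episodic memories of key embeddings and return estimates, with each Q value formed from the p nearest keys to the current embedding.
      import numpy as np

      rng = np.random.default_rng(9)
      P = 3                                                   # number of nearest keys used
      # Episodic memory per action: (key embeddings, return estimates).
      memory = {a: (rng.random((50, 8)), rng.random(50)) for a in ("left", "right")}

      def q_value(action, current_key):
          keys, returns = memory[action]
          dists = np.linalg.norm(keys - current_key, axis=1)  # distance measure
          nearest = np.argsort(dists)[:P]                     # p nearest key embeddings
          weights = 1.0 / (dists[nearest] + 1e-3)             # closer memories count more
          return float(np.average(returns[nearest], weights=weights))

      current_key = rng.random(8)                             # would come from the embedding network
      print(max(memory, key=lambda a: q_value(a, current_key)))   # greedy action choice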
  • Patent number: 11715025
    Abstract: A method for time series analysis of time-oriented usage data pertaining to computing resources of a computing system. A method embodiment commences upon collecting time series datasets, individual ones of the time series datasets comprising time-oriented usage data of a respective individual computing resource. A plurality of prediction models are trained using portions of time-oriented data. The trained models are evaluated to determine quantitative measures pertaining to predictive accuracy. One of the trained models is selected and then applied over another time series dataset of the individual resource to generate a plurality of individual resource usage predictions. The individual resource usage predictions are used to calculate seasonally-adjusted resource usage demand amounts over a future time period. The resource usage demand amounts are compared to availability of the resource to form a runway that refers to a future time period when the resource is predicted to be demanded to its capacity.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: August 1, 2023
    Assignee: Nutanix, Inc.
    Inventors: Jianjun Wen, Abhinay Nagpal, Himanshu Shukla, Binny Sher Gill, Cong Liu, Shuo Yang
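    Illustrative sketch (an assumption, far simpler than the patented system and without seasonal adjustment; the usage series, candidate models, and capacity are invented): several candidate predictors are trained on usage history, the most accurate on held-out data is kept, and its forecast is compared against capacity to find the runway.
      import numpy as np

      rng = np.random.default_rng(10)
      usage = 50 + 0.8 * np.arange(120) + rng.normal(0, 2, 120)   # one resource's daily usage
      train, holdout = usage[:100], usage[100:]

      def fit_poly(degree):
          coeffs = np.polyfit(np.arange(100), train, degree)
          return lambda t: np.polyval(coeffs, t)

      models = {d: fit_poly(d) for d in (1, 2, 3)}                # candidate prediction models
      errors = {d: np.mean((m(np.arange(100, 120)) - holdout) ** 2) for d, m in models.items()}
      best = models[min(errors, key=errors.get)]                  # most accurate on held-out data

      CAPACITY = 260.0                                            # assumed resource capacity
      future = np.arange(120, 120 + 365)
      over = np.flatnonzero(best(future) >= CAPACITY)
      print("runway (days):", int(over[0]) if over.size else "beyond the forecast horizon")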
  • Patent number: 11710046
    Abstract: A method of generating a question-answer learning model through adversarial learning may include: sampling a latent variable based on constraints in an input passage; generating an answer based on the latent variable; generating a question based on the answer; and machine-learning the question-answer learning model using a dataset of the generated question and answer, wherein the constraints are controlled so that the latent variable is present in a data manifold while increasing a loss of the question-answer learning model.
    Type: Grant
    Filed: November 29, 2019
    Date of Patent: July 25, 2023
    Inventors: Dong Hwan Kim, Woo Tae Jeong, Seanie Lee, Gilje Seong
  • Patent number: 11709895
    Abstract: Systems, apparatuses, and methods are provided for identifying a corresponding string stored in memory based on an incomplete input string. A system can analyze and produce phonetic and distance metrics for a plurality of strings stored in memory by comparing the plurality of strings to an incomplete input string. These similarity metrics can be used as the input to a machine learning model, which can quickly and accurately provide a classification. This classification can be used to identify a string stored in memory that corresponds to the incomplete input string.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: July 25, 2023
    Assignee: Visa International Service Association
    Inventors: Pranjal Singh, Soumyajyoti Banerjee
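    Illustrative sketch (an assumption, not the patented system; the phonetic code is a crude placeholder and a fixed weighting stands in for the trained classifier): computing simple edit-distance and phonetic similarity features between an incomplete input string and each stored candidate, then scoring the candidates.
      import difflib

      def phonetic_code(s: str) -> str:
          # Crude placeholder for a phonetic metric: keep the first letter, drop vowels and spaces.
          s = s.upper()
          return s[:1] + "".join(c for c in s[1:] if c not in "AEIOU ")

      def features(partial: str, candidate: str):
          edit_sim = difflib.SequenceMatcher(None, partial.upper(), candidate.upper()).ratio()
          phon_sim = difflib.SequenceMatcher(None, phonetic_code(partial), phonetic_code(candidate)).ratio()
          return edit_sim, phon_sim

      stored = ["SAN FRANCISCO", "SANTA FE", "SACRAMENTO"]   # strings stored in memory
      partial = "SAN FRAN"                                   # incomplete input string
      best = max(stored, key=lambda c: 0.6 * features(partial, c)[0] + 0.4 * features(partial, c)[1])
      print(best)                                            # SAN FRANCISCO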