Patents by Inventor Xiangqian Hu

Xiangqian Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240092729
    Abstract: The present invention relates to a method for purifying key intermediates of Citalopram, i.e. 4-[4-(dimethylamino)-1-(4-fluorophenyl)-1-hydroxybutyl]-3-hydroxymethylbenzonitrile and a salt thereof. The method comprises dissolving crude 4-[4-(dimethylamino)-1-(4-fluorophenyl)-1-hydroxybutyl]-3-hydroxymethylbenzonitrile (the compound of formula I containing a formaldehyde impurity) in an organic solvent, adding a washing solution, controlling the temperature, stirring, allowing the mixture to stand so that the layers separate, and removing the aqueous layer to obtain a purified organic solution of 4-[4-(dimethylamino)-1-(4-fluorophenyl)-1-hydroxybutyl]-3-hydroxymethylbenzonitrile. The method effectively removes aldehyde-containing impurities from the intermediate, is simple to operate, uses inexpensive raw materials under mild conditions, and is suitable for large-scale industrial production.
    Type: Application
    Filed: December 28, 2021
    Publication date: March 21, 2024
    Applicants: ZHEJIANG HUAHAI PHARMACEUTICAL CO., LTD., Zhejiang Huahai LiCheng Pharmaceutical Co., Ltd.
    Inventors: Jian ZHANG, Jichao WANG, Xiangqian YOU, Liangwei QIAN, Tao ZHOU, Jiaxing HU, Wenfeng HUANG
  • Patent number: 11403527
    Abstract: A computing device trains a neural network machine learning model. A forward propagation of a first neural network is executed. A backward propagation of the first neural network is executed from a last layer to a last convolution layer to compute a gradient vector. A discriminative localization map is computed for each observation vector with the computed gradient vector using a discriminative localization map function. An activation threshold value is selected for each observation vector from at least two different values based on a prediction error of the first neural network. A biased feature map is computed for each observation vector based on the activation threshold value selected for each observation vector. A masked observation vector is computed for each observation vector using the biased feature map. A forward and a backward propagation of a second neural network are executed for a predefined number of iterations using the masked observation vector.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: August 2, 2022
    Assignee: SAS Institute Inc.
    Inventors: Xinmin Wu, Yingjian Wang, Xiangqian Hu
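    The masking pipeline described in the abstract above can be sketched in a few lines. The following is a minimal NumPy illustration, not the patented implementation: the Grad-CAM-style channel weighting, the two candidate threshold values, and the error cutoff are assumptions made for the example.
    ```python
    import numpy as np

    def discriminative_localization_map(activations, gradients):
        """Grad-CAM-style map for one observation.

        activations, gradients: arrays of shape (channels, H, W) taken from the
        last convolution layer during the forward and backward passes.
        """
        # Channel weights: global-average-pooled gradients.
        weights = gradients.mean(axis=(1, 2))                        # (channels,)
        # Weighted sum of activation maps, rectified and scaled to [0, 1].
        cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
        return cam / (cam.max() + 1e-12)

    def masked_observation(x, cam, prediction_error,
                           thresholds=(0.2, 0.5), error_cutoff=0.1):
        """Mask one input with a biased feature map.

        The activation threshold is picked per observation from two candidate
        values according to the first network's prediction error (assumed rule).
        """
        threshold = thresholds[1] if prediction_error > error_cutoff else thresholds[0]
        biased_feature_map = (cam >= threshold).astype(x.dtype)      # (H, W) in {0, 1}
        return x * biased_feature_map                                # broadcast over channels

    # Toy usage with random arrays standing in for real network outputs.
    rng = np.random.default_rng(0)
    activations = rng.normal(size=(8, 16, 16))
    gradients = rng.normal(size=(8, 16, 16))
    x = rng.normal(size=(3, 16, 16))                                 # one observation (C, H, W)
    cam = discriminative_localization_map(activations, gradients)
    x_masked = masked_observation(x, cam, prediction_error=0.25)
    print(x_masked.shape, float((x_masked != 0).mean()))
    ```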
  • Publication number: 20220114449
    Abstract: A computing device trains a neural network machine learning model. A forward propagation of a first neural network is executed. A backward propagation of the first neural network is executed from a last layer to a last convolution layer to compute a gradient vector. A discriminative localization map is computed for each observation vector with the computed gradient vector using a discriminative localization map function. An activation threshold value is selected for each observation vector from at least two different values based on a prediction error of the first neural network. A biased feature map is computed for each observation vector based on the activation threshold value selected for each observation vector. A masked observation vector is computed for each observation vector using the biased feature map. A forward and a backward propagation of a second neural network are executed for a predefined number of iterations using the masked observation vector.
    Type: Application
    Filed: October 13, 2021
    Publication date: April 14, 2022
    Inventors: Xinmin Wu, Yingjian Wang, Xiangqian Hu
  • Patent number: 11195084
    Abstract: A computing device trains a neural network machine learning model. A forward propagation of a first neural network is executed. A backward propagation of the first neural network is executed from a last layer to a last convolution layer of a plurality of convolutional layers to compute a gradient vector for first weight values of the last convolution layer using observation vectors. A discriminative localization map is computed for each observation vector with the gradient vector using a discriminative localization map function. A forward and a backward propagation of a second neural network are executed to compute a second weight value for each neuron of the second neural network using the discriminative localization map computed for each observation vector. The forward and the backward propagation of the second neural network are repeated for a predefined number of iterations.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: December 7, 2021
    Assignee: SAS Institute Inc.
    Inventors: Xinmin Wu, Yingjian Wang, Xiangqian Hu
  • Patent number: 10956835
    Abstract: A computing device compresses a gradient boosting tree predictive model. A gradient boosting tree predictive model is trained using a plurality of observation vectors. Each observation vector includes an explanatory variable value of an explanatory variable and a response variable value for a response variable. The gradient boosting tree predictive model is trained to predict the response variable value of each observation vector based on a respective explanatory variable value of each observation vector. The trained gradient boosting tree predictive model is compressed using a compression model with a predefined penalty constant value and with a predefined array of coefficients to reduce a number of trees of the trained gradient boosting tree predictive model. The compression model minimizes a sparsity norm loss function. The compressed, trained gradient boosting tree predictive model is output for predicting a new response variable value from a new observation vector.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: March 23, 2021
    Assignee: SAS Institute Inc.
    Inventors: Rui Shi, Guixian Lin, Xiangqian Hu, Yan Xu
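    As a rough illustration of the compression idea in the abstract above, the sketch below refits the per-tree predictions of a scikit-learn gradient boosting model with an L1 (lasso) penalty and drops trees whose coefficients shrink to zero. The choice of Lasso as the sparsity-norm compression model, the penalty value, and the use of an intercept to absorb the model's initial prediction are assumptions for the example, not the patented method.
    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import Lasso

    X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)
    gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0).fit(X, y)

    # Column j holds stage j's contribution: learning_rate * (tree j's prediction).
    tree_preds = np.column_stack(
        [gbm.learning_rate * stage[0].predict(X) for stage in gbm.estimators_]
    )

    # L1-penalized refit of the tree weights; a larger alpha prunes more trees.
    lasso = Lasso(alpha=0.5, max_iter=50_000).fit(tree_preds, y)
    kept = np.flatnonzero(lasso.coef_)
    print(f"kept {kept.size} of {tree_preds.shape[1]} trees")

    # The compressed model: intercept plus the weighted sum of the surviving trees.
    compressed_pred = lasso.intercept_ + tree_preds[:, kept] @ lasso.coef_[kept]
    print("RMSE of compressed model on training data:",
          float(np.sqrt(np.mean((compressed_pred - y) ** 2))))
    ```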
  • Patent number: 10762390
    Abstract: Machine-learning models and behavior can be visualized. For example, a machine-learning model can be taught using a teaching dataset. A test input can then be provided to the machine-learning model to determine a baseline confidence-score of the machine-learning model. Next, weights for elements in the teaching dataset can be determined. An analysis dataset can be generated that includes a subset of the elements that have corresponding weights above a predefined threshold. For each overlapping element in both the analysis dataset and the test input, (i) a modified version of the test input can be generated that excludes the overlapping element, and (ii) the modified version of the test input can be provided to the machine-learning model to determine an effect of the overlapping element on the baseline confidence-score. A graphical user interface can be generated that visually depicts the test input and various elements' effects on the baseline confidence-score.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: September 1, 2020
    Assignee: SAS Institute Inc.
    Inventors: Aysu Ezen Can, Ning Jin, Ethem F. Can, Xiangqian Hu, Saratendu Sethi
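    The abstract above outlines a perturbation-style explanation loop. The sketch below applies the same idea to a toy text classifier: coefficient magnitudes stand in for the element weights, and each overlapping token is removed from the test input to measure its effect on the baseline confidence score. The model choice, the 0.1 weight threshold, and whitespace tokenization are assumptions for the example, not the patented system.
    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Teach a toy sentiment model (the "teaching dataset").
    train_texts = ["great product, works well", "terrible quality, broke fast",
                   "excellent value and support", "awful purchase, waste of money"]
    train_labels = [1, 0, 1, 0]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_texts, train_labels)

    # Element weights: absolute classifier coefficients per vocabulary token.
    vocab = model.named_steps["tfidfvectorizer"].get_feature_names_out()
    weights = np.abs(model.named_steps["logisticregression"].coef_[0])
    analysis_set = {tok for tok, w in zip(vocab, weights) if w > 0.1}   # threshold assumed

    # Baseline confidence score for the test input.
    test_input = "great value but awful support"
    baseline = model.predict_proba([test_input])[0, 1]

    # For each element present in both the analysis set and the test input,
    # drop it and measure the shift in the confidence score.
    effects = {}
    for token in set(test_input.split()) & analysis_set:
        modified = " ".join(t for t in test_input.split() if t != token)
        effects[token] = baseline - model.predict_proba([modified])[0, 1]

    print(f"baseline confidence: {baseline:.3f}")
    for token, delta in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
        print(f"  removing '{token}' changes the confidence by {delta:+.3f}")
    ```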
  • Publication number: 20200027028
    Abstract: A computing device compresses a gradient boosting tree predictive model. A gradient boosting tree predictive model is trained using a plurality of observation vectors. Each observation vector includes an explanatory variable value of an explanatory variable and a response variable value for a response variable. The gradient boosting tree predictive model is trained to predict the response variable value of each observation vector based on a respective explanatory variable value of each observation vector. The trained gradient boosting tree predictive model is compressed using a compression model with a predefined penalty constant value and with a predefined array of coefficients to reduce a number of trees of the trained gradient boosting tree predictive model. The compression model minimizes a sparsity norm loss function. The compressed, trained gradient boosting tree predictive model is output for predicting a new response variable value from a new observation vector.
    Type: Application
    Filed: March 11, 2019
    Publication date: January 23, 2020
    Inventors: Rui Shi, Guixian Lin, Xiangqian Hu, Yan Xu
  • Patent number: 10311128
    Abstract: A computing device computes a quantile value. A maximum value and a minimum value are computed for unsorted variable values to compute an upper bin value and a lower bin value for each bin of a plurality of bins. A frequency counter is computed for each bin by reading the unsorted variable values a second time. A bin number and a cumulative rank value are computed for a quantile. When an estimated memory usage value exceeds a predefined memory size constraint value, a subset of the plurality of bins is split into a plurality of new bins, the frequency counter is recomputed for each bin, and the bin number and the cumulative rank value are recomputed. Frequency data is computed using the frequency counters. The quantile value is computed using the frequency data and the cumulative rank value for the quantile and output.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: June 4, 2019
    Assignee: SAS Institute Inc.
    Inventors: Xinmin Wu, Xiangqian Hu, Tao Wang, Xunlei Wu
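    As a rough illustration of the multi-pass binning idea above, the sketch below estimates a quantile of unsorted values without sorting them: one pass finds the range, and each subsequent pass counts values per bin and then splits only the bin that holds the target rank. The bin count, the convergence tolerance, and the rule of always refining a single bin (rather than reacting to a memory-usage estimate) are simplifying assumptions for the example.
    ```python
    import numpy as np

    def binned_quantile(values, q, n_bins=64, tol=1e-6, max_rounds=32):
        """Estimate the q-quantile of unsorted values by iterative bin refinement."""
        values = np.asarray(values, dtype=float)
        lo, hi = values.min(), values.max()                 # pass 1: range of the data
        target_rank = q * (values.size - 1)
        rank_below = 0.0                                    # cumulative rank left of `lo`

        for _ in range(max_rounds):
            if hi - lo <= tol:
                break
            edges = np.linspace(lo, hi, n_bins + 1)
            counts, _ = np.histogram(values, bins=edges)    # one more pass over the data
            cum = rank_below + np.cumsum(counts)
            bin_idx = int(np.searchsorted(cum, target_rank, side="right"))
            bin_idx = min(bin_idx, n_bins - 1)
            rank_below = rank_below if bin_idx == 0 else float(cum[bin_idx - 1])
            lo, hi = edges[bin_idx], edges[bin_idx + 1]     # split only the bin that matters

        return 0.5 * (lo + hi)

    rng = np.random.default_rng(1)
    data = rng.normal(size=100_000)
    print(binned_quantile(data, 0.95), np.quantile(data, 0.95))
    ```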
  • Publication number: 20190156153
    Abstract: Machine-learning models and behavior can be visualized. For example, a machine-learning model can be taught using a teaching dataset. A test input can then be provided to the machine-learning model to determine a baseline confidence-score of the machine-learning model. Next, weights for elements in the teaching dataset can be determined. An analysis dataset can be generated that includes a subset of the elements that have corresponding weights above a predefined threshold. For each overlapping element in both the analysis dataset and the test input, (i) a modified version of the test input can be generated that excludes the overlapping element, and (ii) the modified version of the test input can be provided to the machine-learning model to determine an effect of the overlapping element on the baseline confidence-score. A graphical user interface can be generated that visually depicts the test input and various elements' effects on the baseline confidence-score.
    Type: Application
    Filed: April 13, 2018
    Publication date: May 23, 2019
    Applicant: SAS Institute Inc.
    Inventors: Aysu Ezen Can, Ning Jin, Ethem F. Can, Xiangqian Hu, Saratendu Sethi
  • Publication number: 20190129919
    Abstract: A computing device computes a quantile value. A maximum value and a minimum value are computed for unsorted variable values to compute an upper bin value and a lower bin value for each bin of a plurality of bins. A frequency counter is computed for each bin by reading the unsorted variable values a second time. A bin number and a cumulative rank value are computed for a quantile. When an estimated memory usage value exceeds a predefined memory size constraint value, a subset of the plurality of bins is split into a plurality of new bins, the frequency counter is recomputed for each bin, and the bin number and the cumulative rank value are recomputed. Frequency data is computed using the frequency counters. The quantile value is computed using the frequency data and the cumulative rank value for the quantile and output.
    Type: Application
    Filed: September 25, 2018
    Publication date: May 2, 2019
    Inventors: Xinmin Wu, Xiangqian Hu, Tao Wang, Xunlei Wu
  • Patent number: 10127192
    Abstract: A computing device computes a quantile value. A maximum value and a minimum value are computed for unsorted variable values. An upper bin value and a lower bin value are computed for each bin of a plurality of bins using the maximum and minimum values. A frequency counter is computed for each bin by reading the unsorted variable values a second time. Each frequency counter is a count of the variable values within a respective bin. A bin number and a cumulative rank value are computed for a quantile. The bin number identifies the specific bin within which the quantile value associated with the quantile is located. The cumulative rank value identifies a cumulative rank for the quantile value associated with the quantile. Frequency data is computed using the frequency counters. The quantile value is computed using the frequency data and the cumulative rank value for the quantile and output.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: November 13, 2018
    Assignee: SAS Institute Inc.
    Inventors: Xiangqian Hu, Xinmin Wu, Tao Wang, Xunlei Wu
  • Patent number: 9703852
    Abstract: In accordance with the teachings described herein, systems and methods are provided for estimating or determining quantiles for data stored in a distributed system. In one embodiment, an instruction is received to estimate or determine a specified quantile for a variate in a set of data stored at a plurality of nodes in the distributed system. A plurality of data bins for the variate are defined that are each associated with a different range of data values in the set of data. Lower and upper quantile bounds for each of the plurality of data bins are determined based on the total number of data values that fall within each of the plurality of data bins. The specified quantile is then estimated or determined from the one of the plurality of data bins that is identified, based on the lower and upper quantile bounds, as including the specified quantile.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: July 11, 2017
    Assignee: SAS Institute Inc.
    Inventors: Guy Blanc, Georges H. Guirguis, Xiangqian Hu, Guixian Lin, Scott Pope
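    A compact way to see the abstract above is as a counts-only exchange: each node reports per-bin frequencies for a shared set of bins, and the coordinator derives lower and upper quantile bounds per bin from the aggregated counts. In the sketch below the "nodes" are just local arrays, and the single counting pass plus linear interpolation inside the identified bin are assumptions for the example, not the patented procedure.
    ```python
    import numpy as np

    def distributed_quantile(node_values, q, n_bins=128):
        """Estimate the q-quantile of data spread across several nodes,
        exchanging only per-bin counts rather than raw data."""
        # First round-trip: collect each node's local min/max to build shared bins.
        lo = min(float(np.min(v)) for v in node_values)
        hi = max(float(np.max(v)) for v in node_values)
        edges = np.linspace(lo, hi, n_bins + 1)

        # Second round-trip: each node counts its values per shared bin.
        total_counts = sum(np.histogram(v, bins=edges)[0] for v in node_values)
        n = int(total_counts.sum())

        # Lower/upper cumulative-fraction bounds for each bin.
        upper = np.cumsum(total_counts) / n
        lower = np.concatenate(([0.0], upper[:-1]))

        # The bin whose bounds bracket q contains the quantile; interpolate inside it.
        b = int(np.searchsorted(upper, q, side="left"))
        b = min(b, n_bins - 1)
        frac = 0.0 if upper[b] == lower[b] else (q - lower[b]) / (upper[b] - lower[b])
        return edges[b] + frac * (edges[b + 1] - edges[b])

    rng = np.random.default_rng(2)
    nodes = [rng.exponential(size=30_000) for _ in range(4)]   # data held on 4 "nodes"
    print(distributed_quantile(nodes, 0.5), np.quantile(np.concatenate(nodes), 0.5))
    ```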
  • Publication number: 20160350396
    Abstract: In accordance with the teachings described herein, systems and methods are provided for estimating or determining quantiles for data stored in a distributed system. In one embodiment, an instruction is received to estimate or determine a specified quantile for a variate in a set of data stored at a plurality of nodes in the distributed system. A plurality of data bins for the variate are defined that are each associated with a different range of data values in the set of data. Lower and upper quantile bounds for each of the plurality of data bins are determined based on the total number of data values that fall within each of the plurality of data bins. The specified quantile is then estimated or determined from the one of the plurality of data bins that is identified, based on the lower and upper quantile bounds, as including the specified quantile.
    Type: Application
    Filed: July 15, 2016
    Publication date: December 1, 2016
    Applicant: SAS Institute Inc.
    Inventors: Guy Blanc, Georges H. Guirguis, Xiangqian Hu, Guixian Lin, Scott Pope
  • Patent number: 9495426
    Abstract: Techniques for providing interactive decision trees are described. For example, a system is provided that stores data related to a decision tree, wherein the data includes one or more data structures and one or more portions of code. The system receives input corresponding to an interaction request associated with a modification to the decision tree. The system determines whether the modification requires multiple-processing iterations of the distributed data set. The system generates an application layer modified decision tree when the modification requires no multiple-processing iterations of the distributed data set. The system facilitates server layer modification of the decision tree when the modification requires multiple-processing iterations of the distributed data set. The system generates a representation of the application layer modified decision tree or the server layer modified decision tree.
    Type: Grant
    Filed: July 2, 2015
    Date of Patent: November 15, 2016
    Assignee: SAS Institute Inc.
    Inventors: Xiangxiang Meng, Rajendra Singh, Xiangqian Hu, Duane Hamilton, Robert Wayne Thompson
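    The application-layer versus server-layer decision described above is essentially a dispatch rule. The sketch below stubs out both layers and routes edit requests according to whether they would need passes over the distributed data; the action names, the two action sets, and the stub classes are hypothetical illustrations rather than the system's actual interfaces.
    ```python
    from dataclasses import dataclass, field

    # Edits that can be applied to the cached tree structure and node statistics.
    APP_LAYER_ACTIONS = {"prune_node", "rename_node", "collapse_subtree"}
    # Edits that require one or more passes over the distributed data set.
    SERVER_LAYER_ACTIONS = {"split_node", "change_split_variable", "retrain_subtree"}

    @dataclass
    class InteractionRequest:
        action: str
        node_id: int
        detail: dict = field(default_factory=dict)

    class CachedTree:
        """Application-layer copy of the tree: structure plus per-node statistics."""
        def __init__(self):
            self.log = []
        def apply_local(self, request):
            self.log.append(("app-layer", request.action, request.node_id))
            return self.log[-1]

    class TreeServer:
        """Stand-in for the server layer that can iterate over the distributed data."""
        def __init__(self):
            self.log = []
        def modify_tree(self, request):
            self.log.append(("server-layer", request.action, request.node_id))
            return self.log[-1]

    def handle_request(request, cached_tree, server):
        """Route an interactive edit to the cheapest layer that can serve it."""
        if request.action in APP_LAYER_ACTIONS:
            # No pass over the distributed data is needed; edit the cached tree.
            return cached_tree.apply_local(request)
        if request.action in SERVER_LAYER_ACTIONS:
            # Requires processing iterations of the distributed data set.
            return server.modify_tree(request)
        raise ValueError(f"unknown action: {request.action!r}")

    tree, server = CachedTree(), TreeServer()
    print(handle_request(InteractionRequest("prune_node", 7), tree, server))
    print(handle_request(InteractionRequest("split_node", 3), tree, server))
    ```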
  • Publication number: 20160048566
    Abstract: Techniques for providing interactive decision trees are described. For example, a system is provided that stores data related to a decision tree, wherein the data includes one or more data structures and one or more portions of code. The system receives input corresponding to an interaction request associated with a modification to the decision tree. The system determines whether the modification requires multiple-processing iterations of the distributed data set. The system generates an application layer modified decision tree when the modification requires no multiple-processing iterations of the distributed data set. The system facilitates server layer modification of the decision tree when the modification requires multiple-processing iterations of the distributed data set. The system generates a representation of the application layer modified decision tree or the server layer modified decision tree.
    Type: Application
    Filed: July 2, 2015
    Publication date: February 18, 2016
    Inventors: Xiangxiang Meng, Rajendra Singh, Xiangqian Hu, Duane Hamilton, Robert Wayne Thompson
  • Publication number: 20140351196
    Abstract: Systems and methods for determining an optimal splitting scheme for a node in a classification decision tree. A computing system may receive input data related to a decision tree to be generated from a data set. The input data identifies a target attribute of the data set and a set of candidate attributes of the data set to be used as nodes in the decision tree. The computing system may determine, using a clustering algorithm and the set of candidate attributes, a number of potential splitting schemes to be used to split a node in the decision tree. The computing system may calculate a splitting measurement for each of the plurality of potential splitting schemes. The computing system may select an optimal splitting scheme from the plurality of potential splitting schemes for each node in the decision tree based on the splitting measurement.
    Type: Application
    Filed: May 21, 2014
    Publication date: November 27, 2014
    Applicant: SAS Institute Inc.
    Inventors: Xiangqian Hu, Xunlei Wu, Xiangxiang Meng, Oliver Schabenberger
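    The sketch below illustrates the clustering-driven split search described above for a single numeric candidate attribute and a binary target: each k-means clustering of the attribute values defines one candidate splitting scheme, and the scheme with the best splitting measurement is kept. Using k-means as the clustering algorithm, Gini impurity as the splitting measurement, and k in {2, 3, 4} are assumptions for the example, not necessarily the patented choices.
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def gini(labels):
        """Gini impurity of a set of target labels."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def weighted_child_gini(partition, y):
        """Splitting measurement: size-weighted Gini impurity of the child nodes."""
        n = y.size
        return sum((y[partition == c].size / n) * gini(y[partition == c])
                   for c in np.unique(partition))

    def best_split_for_attribute(x, y, candidate_k=(2, 3, 4)):
        """Try one clustering-based splitting scheme per k; keep the best one."""
        best = None
        for k in candidate_k:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
                x.reshape(-1, 1))
            score = weighted_child_gini(labels, y)
            if best is None or score < best[0]:
                best = (score, k, labels)
        return best

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])   # candidate attribute
    y = (x > 2.5).astype(int)                                            # target attribute
    score, k, _ = best_split_for_attribute(x, y)
    print(f"best scheme: {k}-way split with weighted Gini {score:.3f}")
    ```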
  • Publication number: 20140330826
    Abstract: Systems and methods for data reduction of a data set are described. A computing system may group data points in a data set into a number of data point bubbles represented by a number of representative points. A data point bubble may include one or more data points from the data set and a representative point from the data set. The computing system may calculate a cluster assignment for the representative point by executing a clustering algorithm using the number of representative points.
    Type: Application
    Filed: May 5, 2014
    Publication date: November 6, 2014
    Applicant: SAS Institute Inc.
    Inventors: Xiangqian Hu, Xunlei Wu, Xiangxiang Meng, Oliver Schabenberger
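    As a rough illustration of the data-reduction idea above, the sketch below builds "bubbles" greedily: each still-unassigned point becomes a representative and absorbs its unassigned neighbors within a radius, then k-means runs only on the representatives and every point inherits its bubble's cluster assignment. The greedy construction, the radius value, and the use of k-means are assumptions for the example, not the patented method.
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def build_bubbles(points, radius):
        """Greedy data-point-bubble construction: each unassigned point becomes a
        representative and absorbs every still-unassigned point within `radius`."""
        n = points.shape[0]
        bubble_of = np.full(n, -1)          # bubble index for every data point
        reps = []                           # indices of the representative points
        for i in range(n):
            if bubble_of[i] != -1:
                continue
            d = np.linalg.norm(points - points[i], axis=1)
            members = np.flatnonzero((d <= radius) & (bubble_of == -1))
            bubble_of[members] = len(reps)
            reps.append(i)
        return np.array(reps), bubble_of

    rng = np.random.default_rng(4)
    # Three Gaussian blobs; clustering runs only on the much smaller set of representatives.
    points = np.vstack([rng.normal(c, 0.3, size=(2000, 2)) for c in ((0, 0), (4, 0), (2, 4))])
    reps, bubble_of = build_bubbles(points, radius=0.15)

    rep_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points[reps])
    point_labels = rep_labels[bubble_of]    # every point inherits its bubble's cluster

    print(f"{points.shape[0]} points reduced to {reps.size} representatives")
    print(np.bincount(point_labels))
    ```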