Patents by Inventor Saratendu Sethi

Saratendu Sethi has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10275690
    Abstract: A computing device automatically classifies an observation vector. (a) A converged classification matrix is computed that defines a label probability for each observation vector. (b) The value of the target variable associated with a maximum label probability value is selected for each observation vector. Each observation vector is assigned to a cluster. A distance value is computed between observation vectors assigned to the same cluster. An average distance value is computed for each observation vector. A predefined number of observation vectors are selected that have minimum values for the average distance value. The supervised data is updated to include the selected observation vectors with the value of the target variable selected in (b). The selected observation vectors are removed from the unlabeled subset. (a) and (b) are repeated. The value of the target variable for each observation vector is output to a labeled dataset.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: April 30, 2019
    Assignee: SAS INSTITUTE INC.
    Inventors: Xu Chen, Saratendu Sethi
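
The selection loop in this abstract lends itself to a short sketch. Below is a minimal Python illustration of the idea, with scikit-learn's LabelPropagation standing in for the converged classification matrix computation and KMeans for the clustering step; the function name, parameters, and library choices are assumptions for illustration, not the patented method itself.

```python
# Illustrative sketch of the iterative labeling loop in patent 10275690.
# LabelPropagation and KMeans are stand-ins; names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
from sklearn.semi_supervised import LabelPropagation

def self_train(X, y, n_clusters=3, n_promote=5, n_rounds=3):
    """X: observation vectors; y uses -1 for unlabeled entries."""
    y = y.copy()
    for _ in range(n_rounds):
        # (a) Converged classification matrix: label probability per vector.
        model = LabelPropagation().fit(X, y)
        # (b) Select the target value with the maximum label probability.
        predicted = model.transduction_

        # Assign every observation vector to a cluster.
        clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        # Average distance of each vector to vectors in the same cluster.
        dist = pairwise_distances(X)
        avg_dist = np.array([
            dist[i, clusters == clusters[i]].mean() for i in range(len(X))
        ])

        # Promote the unlabeled vectors with the smallest average distance
        # into the supervised data, using the labels selected in (b).
        unlabeled = np.flatnonzero(y == -1)
        if unlabeled.size == 0:
            break
        promote = unlabeled[np.argsort(avg_dist[unlabeled])[:n_promote]]
        y[promote] = predicted[promote]
    # Output the target value for every observation vector.
    return LabelPropagation().fit(X, y).transduction_
```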
  • Publication number: 20190050368
    Abstract: A computing device automatically classifies an observation vector. A label set defines permissible values for a target variable. Supervised data includes a labeled subset that has one of the permissible values. A converged classification matrix is computed based on the supervised data and an unlabeled subset using a prior class distribution matrix that includes a row for each observation vector. Each column is associated with a single permissible value of the label set. A cell value in each column is a likelihood that each associated permissible value of the label set occurs based on prior class distribution information. The value of the target variable is selected using the converged classification matrix. A weighted classification label distribution matrix is computed from the converged classification matrix. The value of the target variable for each observation vector of the plurality of observation vectors is output to a labeled dataset.
    Type: Application
    Filed: October 17, 2018
    Publication date: February 14, 2019
    Inventors: Xu Chen, Saratendu Sethi
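
The prior-seeded computation described above resembles classical graph label propagation with a prior term. The sketch below iterates F ← αSF + (1−α)P to convergence, where P is the prior class distribution matrix (one row per observation vector, one column per permissible label value); the update rule, affinity choice, and all names are illustrative assumptions rather than the patent's exact formulation.

```python
# Hedged sketch for publication 20190050368: label propagation seeded
# with a prior class distribution matrix P. Standard update rule; the
# patent's actual formulation may differ.
import numpy as np
from sklearn.metrics import pairwise_distances

def propagate_with_prior(X, P, alpha=0.9, tol=1e-6, max_iter=500):
    # Row-normalized Gaussian affinity as the propagation matrix S.
    W = np.exp(-pairwise_distances(X) ** 2)
    np.fill_diagonal(W, 0.0)
    S = W / W.sum(axis=1, keepdims=True)

    F = P.copy()
    for _ in range(max_iter):
        F_next = alpha * S @ F + (1 - alpha) * P
        if np.abs(F_next - F).max() < tol:  # converged classification matrix
            break
        F = F_next
    labels = F.argmax(axis=1)  # value of the target variable per vector
    return F, labels
```

Row-normalizing the converged matrix F would give one plausible reading of the weighted classification label distribution matrix the abstract mentions.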
  • Publication number: 20190034766
    Abstract: A computing device automatically classifies an observation vector. (a) A converged classification matrix is computed that defines a label probability for each observation vector. (b) The value of the target variable associated with a maximum label probability value is selected for each observation vector. Each observation vector is assigned to a cluster. A distance value is computed between observation vectors assigned to the same cluster. An average distance value is computed for each observation vector. A predefined number of observation vectors are selected that have minimum values for the average distance value. The supervised data is updated to include the selected observation vectors with the value of the target variable selected in (b). The selected observation vectors are removed from the unlabeled subset. (a) and (b) are repeated. The value of the target variable for each observation vector is output to a labeled dataset.
    Type: Application
    Filed: August 22, 2018
    Publication date: January 31, 2019
    Inventors: Xu Chen, Saratendu Sethi
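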
  • Publication number: 20190034558
    Abstract: Recurrent neural networks (RNNs) can be visualized. For example, a processor can receive vectors indicating values of nodes in a gate of a RNN. The values can result from processing data at the gate during a sequence of time steps. The processor can group the nodes into clusters by applying a clustering method to the values of the nodes. The processor can generate a first graphical element visually indicating how the respective values of the nodes in a cluster changed during the sequence of time steps. The processor can also determine a reference value based on multiple values for multiple nodes in the cluster, and generate a second graphical element visually representing how the respective values of the nodes in the cluster each relate to the reference value. The processor can cause a display to output a graphical user interface having the first graphical element and the second graphical element.
    Type: Application
    Filed: September 21, 2018
    Publication date: January 31, 2019
    Applicants: SAS Institute Inc., North Carolina State University
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis
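
As a rough illustration of this visualization, the sketch below clusters a gate's node-activation trajectories and renders the two described graphical elements: the raw trajectories per cluster, and their deviation from a per-cluster reference value (here the cluster mean). The plotting layout and the choice of KMeans are assumptions, not the patented GUI.

```python
# Illustrative sketch of the RNN-gate view in publication 20190034558.
# Random data stands in for real gate activations.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

def visualize_gate(values, n_clusters=4):
    """values: array of shape (n_nodes, n_time_steps) from one RNN gate."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(values)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    for c in range(n_clusters):
        cluster = values[labels == c]
        reference = cluster.mean()  # reference value for the cluster
        for row in cluster:
            ax1.plot(row, color=f"C{c}", alpha=0.4)  # first element
            ax2.plot(row - reference, color=f"C{c}", alpha=0.4)  # second
    ax1.set(title="Node values per time step", xlabel="time step")
    ax2.set(title="Deviation from cluster reference", xlabel="time step")
    plt.show()

visualize_gate(np.random.default_rng(0).normal(size=(32, 20)))
```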
  • Patent number: 10192001
    Abstract: Convolutional neural networks can be visualized. For example, a graphical user interface (GUI) can include a matrix of symbols indicating feature-map values that represent a likelihood of a particular feature being present or absent in an input to a convolutional neural network. The GUI can also include a node-link diagram representing a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network, a second row of symbols representing a hidden layer of the feed forward neural network, and a third row of symbols representing an output layer of the feed forward neural network. Lines between the rows of symbols can represent connections between nodes in the input layer, the hidden layer, and the output layer of the feed forward neural network.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: January 29, 2019
    Assignees: SAS INSTITUTE INC., NORTH CAROLINA STATE UNIVERSITY
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
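
The two visual components described here, a feature-map symbol matrix and a feed-forward node-link diagram, can be mocked up in a few lines. The sketch below uses random stand-in data; layer sizes, marker encodings, and layout are invented for illustration.

```python
# Rough mock-up of the visual encoding in patent 10192001. Random data
# stands in for a trained network's feature maps.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
feature_maps = rng.random((8, 12))  # rows: filters, cols: positions

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
# Symbol matrix: marker size encodes how likely the feature is present.
rows, cols = np.indices(feature_maps.shape)
ax1.scatter(cols, rows, s=200 * feature_maps, marker="s")
ax1.set(title="Feature-map symbol matrix", xlabel="position", ylabel="filter")

# Node-link diagram: input, hidden, and output rows with connecting lines.
layers = [6, 4, 3]  # nodes per layer (illustrative sizes)
for y, n in enumerate(layers):
    xs = np.linspace(0, 1, n)
    ax2.scatter(xs, [y] * n, s=120)
    if y > 0:
        for x0 in np.linspace(0, 1, layers[y - 1]):
            for x1 in xs:
                ax2.plot([x0, x1], [y - 1, y], lw=0.3, color="gray")
ax2.set(title="Feed forward node-link diagram", yticks=range(3),
        yticklabels=["input", "hidden", "output"])
plt.show()
```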
  • Patent number: 10191921
    Abstract: A system provides image search results based on a query that includes an attribute or an association and a concept identifier. The query is input into a trained query model to define a search syntax for the query. The search syntax is submitted to an expanded annotated image database that includes a concept image of a concept identified by the concept identifier with a plurality of attributes associated with the concept and a plurality of associations associated with the concept. A query result is received based on matching the defined search syntax to one or more of the attributes or one or more of the associations. The query result includes the concept image of the concept associated with the matched one or more of the attributes or one or more of the associations. The concept image included in the received query result is presented in a display.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: January 29, 2019
    Assignee: SAS Institute Inc.
    Inventors: Ethem F. Can, Richard Welland Crowell, Samuel Paul Leeman-Munk, Jared Peterson, Saratendu Sethi
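
A toy version of this attribute/association lookup is sketched below. A list of hypothetical ConceptRecord entries stands in for the expanded annotated image database, and plain membership testing stands in for the trained query model's search syntax.

```python
# Toy sketch of the concept-image search in patent 10191921. All records,
# names, and paths here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ConceptRecord:
    concept_id: str
    image_path: str                       # the concept image
    attributes: set[str] = field(default_factory=set)
    associations: set[str] = field(default_factory=set)

DATABASE = [
    ConceptRecord("dog", "images/dog.png", {"furry", "four-legged"}, {"leash"}),
    ConceptRecord("cat", "images/cat.png", {"furry", "whiskered"}, {"yarn"}),
]

def search(concept_id: str, term: str) -> list[str]:
    """Match a query term against a concept's attributes or associations."""
    return [
        rec.image_path
        for rec in DATABASE
        if rec.concept_id == concept_id
        and (term in rec.attributes or term in rec.associations)
    ]

print(search("dog", "furry"))  # -> ['images/dog.png']
```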
  • Patent number: 10048826
    Abstract: Interactive visualizations of a convolutional neural network are provided. For example, a graphical user interface (GUI) can include a matrix having symbols indicating feature-map values that represent likelihoods of particular features being present or absent at various locations in an input to a convolutional neural network. Each column in the matrix can have feature-map values generated by convolving the input to the convolutional neural network with a respective filter for identifying a particular feature in the input. The GUI can detect, via an input device, an interaction indicating that the columns in the matrix are to be combined into a particular number of groups. Based on the interaction, the columns can be clustered into the particular number of groups using a clustering method. The matrix in the GUI can then be updated to visually represent each respective group of columns as a single column of symbols within the matrix.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: August 14, 2018
    Assignees: SAS INSTITUTE INC., NORTH CAROLINA STATE UNIVERSITY
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
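
The column-grouping interaction reduces to clustering the matrix's columns and collapsing each cluster into one averaged column, as the sketch below shows. KMeans is one plausible clustering method; the patent does not commit to a specific one, so treat the details as assumptions.

```python
# Minimal sketch of the column grouping in patent 10048826.
import numpy as np
from sklearn.cluster import KMeans

def group_columns(feature_matrix, n_groups):
    """feature_matrix: (positions, filters); returns (positions, n_groups)."""
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(feature_matrix.T)
    return np.column_stack([
        feature_matrix[:, labels == g].mean(axis=1) for g in range(n_groups)
    ])

matrix = np.random.default_rng(2).random((16, 10))  # 10 filter columns
print(group_columns(matrix, 3).shape)               # -> (16, 3)
```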
  • Publication number: 20180095632
    Abstract: Interactive visualizations of a convolutional neural network are provided. For example, a graphical user interface (GUI) can include a matrix having symbols indicating feature-map values that represent likelihoods of particular features being present or absent at various locations in an input to a convolutional neural network. Each column in the matrix can have feature-map values generated by convolving the input to the convolutional neural network with a respective filter for identifying a particular feature in the input. The GUI can detect, via an input device, an interaction indicating that the columns in the matrix are to be combined into a particular number of groups. Based on the interaction, the columns can be clustered into the particular number of groups using a clustering method. The matrix in the GUI can then be updated to visually represent each respective group of columns as a single column of symbols within the matrix.
    Type: Application
    Filed: October 3, 2017
    Publication date: April 5, 2018
    Applicants: SAS Institute Inc., North Carolina State University
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
  • Publication number: 20180096078
    Abstract: Convolutional neural networks can be visualized. For example, a graphical user interface (GUI) can include a matrix of symbols indicating feature-map values that represent a likelihood of a particular feature being present or absent in an input to a convolutional neural network. The GUI can also include a node-link diagram representing a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network, a second row of symbols representing a hidden layer of the feed forward neural network, and a third row of symbols representing an output layer of the feed forward neural network. Lines between the rows of symbols can represent connections between nodes in the input layer, the hidden layer, and the output layer of the feed forward neural network.
    Type: Application
    Filed: October 4, 2017
    Publication date: April 5, 2018
    Applicants: SAS Institute Inc., North Carolina State University
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
  • Publication number: 20180096241
    Abstract: Deep neural networks can be visualized. For example, first values for a first layer of nodes in a neural network, second values for a second layer of nodes in the neural network, and/or third values for connections between the first layer of nodes and the second layer of nodes can be received. A quilt graph can be output that includes (i) a first set of symbols having visual characteristics representative of the first values and representing the first layer of nodes along a first axis; (ii) a second set of symbols having visual characteristics representative of the second values and representing the second layer of nodes along a second axis; and/or (iii) a matrix of blocks between the first axis and the second axis having visual characteristics representative of the third values and representing the connections between the first layer of nodes and the second layer of nodes.
    Type: Application
    Filed: May 2, 2017
    Publication date: April 5, 2018
    Inventors: Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, Saratendu Sethi, James Allen Cox, Lawrence E. Lewis, Samuel Paul Leeman-Munk
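
A quilt graph of this kind can be approximated with an image plot for the connection blocks and marginal symbols for the two node layers. The sketch below uses random activations and weights; sizes and color map are illustrative.

```python
# Rough sketch of the quilt graph in publication 20180096241. Random
# values stand in for a real network's node values and weights.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
layer1, layer2 = rng.random(6), rng.random(4)   # node values per layer
weights = rng.normal(size=(len(layer2), len(layer1)))

fig, ax = plt.subplots(figsize=(5, 4))
ax.imshow(weights, cmap="coolwarm")             # matrix of blocks
# Node symbols along each axis, sized by node value.
ax.scatter(range(len(layer1)), [-0.8] * len(layer1), s=300 * layer1,
           clip_on=False)
ax.scatter([-0.8] * len(layer2), range(len(layer2)), s=300 * layer2,
           clip_on=False)
ax.set(title="Quilt graph", xlabel="layer 1 nodes", ylabel="layer 2 nodes")
plt.show()
```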
  • Patent number: 9934462
    Abstract: Deep neural networks can be visualized. For example, first values for a first layer of nodes in a neural network, second values for a second layer of nodes in the neural network, and/or third values for connections between the first layer of nodes and the second layer of nodes can be received. A quilt graph can be output that includes (i) a first set of symbols having visual characteristics representative of the first values and representing the first layer of nodes along a first axis; (ii) a second set of symbols having visual characteristics representative of the second values and representing the second layer of nodes along a second axis; and/or (iii) a matrix of blocks between the first axis and the second axis having visual characteristics representative of the third values and representing the connections between the first layer of nodes and the second layer of nodes.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: April 3, 2018
    Assignee: SAS INSTITUTE INC.
    Inventors: Christopher Graham Healey, Samuel Paul Leeman-Munk, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, Saratendu Sethi, James Allen Cox, Lawrence E. Lewis
  • Publication number: 20170371856
    Abstract: Various embodiments are generally directed to systems for summarizing data visualizations (i.e., images of data visualizations), such as a graph image, for instance. Some embodiments are particularly directed to a personalized graph summarizer that analyzes a data visualization, or image, to detect pre-defined patterns within the data visualization, and produces a textual summary of the data visualization based on the pre-defined patterns detected within the data visualization. In various embodiments, the personalized graph summarizer may include features to adapt to the preferences of a user for generating an automated, personalized computer-generated narrative. For instance, additional pre-defined patterns may be created for detection and/or the textual summary may be tailored based on user preferences. In some such instances, one or more of the user preferences may be automatically determined by the personalized graph summarizer without requiring the user to explicitly indicate them.
    Type: Application
    Filed: June 22, 2017
    Publication date: December 28, 2017
    Applicant: SAS Institute Inc.
    Inventors: Ethem F. Can, Richard W. Crowell, James Tetterton, Jared Peterson, Saratendu Sethi
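
The pattern-detection core of this summarizer can be sketched as a table of predicate tests over a data series, each mapped to a phrase. The patterns, thresholds, and phrasing below are invented for illustration, and the personalization component is omitted.

```python
# Hedged sketch of the pattern-to-text idea in publication 20170371856.
import numpy as np

PATTERNS = {
    "a rising trend": lambda y: np.polyfit(range(len(y)), y, 1)[0] > 0.1,
    "a falling trend": lambda y: np.polyfit(range(len(y)), y, 1)[0] < -0.1,
    "a sharp spike": lambda y: y.max() > y.mean() + 2 * y.std(),
}

def summarize(series: np.ndarray) -> str:
    """Detect pre-defined patterns and stitch them into a textual summary."""
    found = [name for name, test in PATTERNS.items() if test(series)]
    if not found:
        return "The series shows no notable patterns."
    return "The series shows " + " and ".join(found) + "."

print(summarize(np.array([1, 1, 1, 1, 9, 1, 1, 1])))
# -> The series shows a sharp spike.
```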
  • Patent number: 9704097
    Abstract: Training data for training a neural network usable for electronic sentiment analysis can be automatically constructed. For example, an electronic communication usable for training the neural network and including multiple characters can be received. A sentiment dictionary including multiple expressions mapped to multiple sentiment values representing different sentiments can be received. Each expression in the sentiment dictionary can be mapped to a corresponding sentiment value. An overall sentiment for the electronic communication can be determined using the sentiment dictionary. Training data usable for training the neural network can be automatically constructed based on the overall sentiment of the electronic communication. The neural network can be trained using the training data. A second electronic communication including an unknown sentiment can be received. At least one sentiment associated with the second electronic communication can be determined using the neural network.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: July 11, 2017
    Assignees: SAS INSTITUTE INC., NORTH CAROLINA STATE UNIVERSITY
    Inventors: Ravinder Devarajan, Jordan Riley Benson, David James Caira, Saratendu Sethi, James Allen Cox, Christopher G. Healey, Gowtham Dinakaran, Kalpesh Padia
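
The bootstrapping loop described here, dictionary scoring to auto-label text and then training a network on those labels, is sketched below. The dictionary, example texts, and the small MLPClassifier are stand-ins for the patent's sentiment dictionary and neural network.

```python
# Compact sketch of the training-data construction in patent 9704097.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

SENTIMENT_DICT = {"great": 1, "love": 1, "terrible": -1, "awful": -1}

def dictionary_label(text: str) -> int:
    """Overall sentiment: sign of the summed expression scores."""
    score = sum(SENTIMENT_DICT.get(w, 0) for w in text.lower().split())
    return 1 if score > 0 else -1 if score < 0 else 0

texts = ["great service, love it", "terrible and awful support",
         "love this great tool", "awful experience"]
labels = [dictionary_label(t) for t in texts]  # auto-constructed labels

vec = CountVectorizer()
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(vec.fit_transform(texts), labels)
# Classify a new communication with unknown sentiment.
print(net.predict(vec.transform(["what a great day"])))
```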
  • Publication number: 20160350651
    Abstract: Training data for training a neural network usable for electronic sentiment analysis can be automatically constructed. For example, an electronic communication usable for training the neural network and including multiple characters can be received. A sentiment dictionary including multiple expressions mapped to multiple sentiment values representing different sentiments can be received. Each expression in the sentiment dictionary can be mapped to a corresponding sentiment value. An overall sentiment for the electronic communication can be determined using the sentiment dictionary. Training data usable for training the neural network can be automatically constructed based on the overall sentiment of the electronic communication. The neural network can be trained using the training data. A second electronic communication including an unknown sentiment can be received. At least one sentiment associated with the second electronic communication can be determined using the neural network.
    Type: Application
    Filed: December 11, 2015
    Publication date: December 1, 2016
    Inventors: Ravinder Devarajan, Jordan Riley Benson, David James Caira, Saratendu Sethi, James Allen Cox, Christopher G. Healey, Gowtham Dinakaran, Kalpesh Padia
  • Publication number: 20160350664
    Abstract: The results of electronic narrative analytics can be visualized. For example, an electronic communication that includes multiple narratives can be received. Each narrative can be segmented into respective blocks of characters. Multiple sentiments associated with the respective blocks of characters can be determined. Multiple sentiment patterns can be determined based on the multiple sentiments. The multiple sentiment patterns can be categorized into multiple sentiment pattern groups. Also, multiple semantic tags associated with the multiple sentiment patterns can be determined. Further, the multiple narratives can be categorized into multiple topic sets. A graphical user interface can be displayed visually indicating at least a portion of: the multiple sentiments, the multiple sentiment pattern groups, the multiple semantic tags, or the multiple topic sets.
    Type: Application
    Filed: June 8, 2016
    Publication date: December 1, 2016
    Inventors: Ravinder Devarajan, Jordan Riley Benson, David James Caira, Saratendu Sethi, James Allen Cox, Christopher G. Healey, Gowtham Dinakaran, Kalpesh Padia, Shaoliang Nie
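
A toy version of the segmentation-and-grouping pipeline is sketched below: each narrative is split into blocks, each block gets a dictionary-based sentiment, and narratives sharing a block-level sentiment pattern are grouped together. Semantic tagging and topic sets are omitted, and the dictionary is invented.

```python
# Toy sketch of the narrative analytics in publication 20160350664.
from collections import defaultdict

SENTIMENT_DICT = {"good": 1, "happy": 1, "bad": -1, "sad": -1}

def block_sentiments(narrative: str) -> tuple[int, ...]:
    """Segment a narrative into blocks and score a sentiment per block."""
    blocks = [b.strip() for b in narrative.split(".") if b.strip()]
    return tuple(
        max(-1, min(1, sum(SENTIMENT_DICT.get(w, 0) for w in b.lower().split())))
        for b in blocks
    )

narratives = ["Good start. Sad ending.", "Happy day. Bad night.",
              "Bad start. Good ending."]
groups = defaultdict(list)  # sentiment pattern -> narratives sharing it
for n in narratives:
    groups[block_sentiments(n)].append(n)
for pattern, members in groups.items():
    print(pattern, members)
```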
  • Publication number: 20160350644
    Abstract: The results of electronic sentiment analysis can be visualized. For example, multiple sentiments expressed in an electronic communication can be determined using a neural network. Each sentiment of the multiple sentiments can include a positive sentiment, a neutral sentiment, or a negative sentiment. A transition between at least two sentiments of the multiple sentiments can be determined. The transition can indicate a change between the at least two sentiments occurring over a period of time. A graphical user interface visually indicating the transition between the at least two sentiments can be displayed on a timeline. The timeline can include a timeframe associated with multiple segments of the electronic communication.
    Type: Application
    Filed: December 11, 2015
    Publication date: December 1, 2016
    Inventors: Ravinder Devarajan, Jordan Riley Benson, David James Caira, Saratendu Sethi, James Allen Cox, Christopher G. Healey, Gowtham Dinakaran, Kalpesh Padia
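
The timeline view reduces to a step chart of per-segment sentiments with markers at each transition, as sketched below. The hard-coded segment scores stand in for the neural network outputs the patent describes.

```python
# Minimal sketch of the sentiment timeline in publication 20160350644.
import matplotlib.pyplot as plt

segments = [1, 1, 0, -1, -1, 0, 1]  # +1 positive, 0 neutral, -1 negative
transitions = [i for i in range(1, len(segments))
               if segments[i] != segments[i - 1]]

fig, ax = plt.subplots(figsize=(6, 2.5))
ax.step(range(len(segments)), segments, where="post")
for t in transitions:
    ax.axvline(t, ls="--", color="gray")  # mark each sentiment transition
ax.set(yticks=[-1, 0, 1], yticklabels=["negative", "neutral", "positive"],
       xlabel="segment on timeline", title="Sentiment transitions")
plt.show()
```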
  • Patent number: 9460071
    Abstract: In a computing device that defines a rule for natural language processing of text, annotated text is selected from a first document of a plurality of annotated documents. An entity rule type is selected from a plurality of entity rule types. An argument of the selected entity rule type is identified. A value for the identified argument is randomly selected based on the selected annotated text to generate a rule instance. The generated rule instance is applied to remaining documents of the plurality of annotated documents. A rule performance measure is computed based on application of the generated rule instance. The generated rule instance and the computed rule performance measure are stored for application to other documents.
    Type: Grant
    Filed: April 21, 2015
    Date of Patent: October 4, 2016
    Assignee: SAS Institute Inc.
    Inventors: Viswanath Avasarala, David Styles, James Tetterton, Richard Crowell, Saratendu Sethi
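
The induction loop here can be sketched directly: pick an entity rule type, fill its argument with a value drawn at random from an annotated document, apply the resulting rule instance to the remaining documents, and compute a performance measure. The regex rule type and mini corpus below are invented examples.

```python
# Hedged sketch of the rule-induction loop in patent 9460071.
import random
import re

DOCS = [  # (text, set of annotated entity spans)
    ("Dr. Smith saw the patient", {"Smith"}),
    ("Dr. Jones called Dr. Smith", {"Jones", "Smith"}),
    ("The patient thanked Dr. Jones", {"Jones"}),
]
RULE_TYPES = {"title_prefix": r"Dr\.\s+({value})"}  # argument: {value}

random.seed(0)
seed_text, seed_entities = DOCS[0]
value = random.choice(sorted(seed_entities))  # randomly selected argument
rule = RULE_TYPES["title_prefix"].format(value=value)  # rule instance

hits = correct = 0
for text, entities in DOCS[1:]:               # apply to remaining documents
    for match in re.finditer(rule, text):
        hits += 1
        correct += match.group(1) in entities
precision = correct / hits if hits else 0.0   # rule performance measure
print(rule, precision)
```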
  • Publication number: 20160078014
    Abstract: In a computing device that defines a rule for natural language processing of text, annotated text is selected from a first document of a plurality of annotated documents. An entity rule type is selected from a plurality of entity rule types. An argument of the selected entity rule type is identified. A value for the identified argument is randomly selected based on the selected annotated text to generate a rule instance. The generated rule instance is applied to remaining documents of the plurality of annotated documents. A rule performance measure is computed based on application of the generated rule instance. The generated rule instance and the computed rule performance measure are stored for application to other documents.
    Type: Application
    Filed: April 21, 2015
    Publication date: March 17, 2016
    Inventors: Viswanath Avasarala, David Styles, James Tetterton, Richard Crowell, Saratendu Sethi