Patents by Inventor Samuel Paul Leeman-Munk

Samuel Paul Leeman-Munk has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10324983
    Abstract: Recurrent neural networks (RNNs) can be visualized. For example, a processor can receive vectors indicating values of nodes in a gate of an RNN. The values can result from processing data at the gate during a sequence of time steps. The processor can group the nodes into clusters by applying a clustering method to the values of the nodes. The processor can generate a first graphical element visually indicating how the respective values of the nodes in a cluster changed during the sequence of time steps. The processor can also determine a reference value based on multiple values for multiple nodes in the cluster, and generate a second graphical element visually representing how the respective values of the nodes in the cluster each relate to the reference value. The processor can cause a display to output a graphical user interface having the first graphical element and the second graphical element.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: June 18, 2019
    Assignees: SAS INSTITUTE INC., NORTH CAROLINA STATE UNIVERSITY
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis
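The clustering step this abstract describes can be illustrated with a small, dependency-free sketch. It assumes plain k-means over each node's activation sequence across time steps, with a cluster's reference value taken as the mean of all its values; the function names and toy data are illustrative, not taken from the patent.

```python
import random

def kmeans(series, k, iters=20, seed=0):
    """Group per-node activation sequences (lists of equal length) into
    k clusters with a basic k-means loop."""
    random.seed(seed)
    centroids = random.sample(series, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in series:
            dists = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(s)
        # recompute each centroid as the column-wise mean of its cluster
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters, centroids

def cluster_reference(cluster):
    """Reference value for a cluster: mean over all node values and time steps."""
    vals = [v for s in cluster for v in s]
    return sum(vals) / len(vals)

# Two nodes hover near 0, two near 1, over three time steps.
series = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.1], [1.0, 1.0, 1.0], [0.9, 1.0, 1.0]]
clusters, centroids = kmeans(series, 2)
```

A real visualization would then draw each cluster's activation curves (the first graphical element) and each node's deviation from `cluster_reference` (the second).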
  • Patent number: 10235622
    Abstract: A computing device identifies a pattern in a dataset. A first neural network model is executed using data points as input to input nodes of the first neural network model to generate first output node data. A second neural network model is executed using the first output node data as input to input nodes of the second neural network model to generate second output node data. The second output node data includes a plurality of output values for each x-value of the data points. For each x-value, an output value of the plurality of output values is associated with a single pattern type of a plurality of predefined pattern types. For each pattern type of the plurality of predefined pattern types, a start time and a stop time are identified when the output value for the associated pattern type exceeds a predefined pattern window threshold value.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: March 19, 2019
    Assignee: SAS INSTITUTE INC.
    Inventors: Stuart Andrew Hunt, Samuel Paul Leeman-Munk, Richard Welland Crowell
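The final step of this abstract, turning a per-pattern score sequence into start/stop windows wherever the score exceeds a threshold, can be sketched in a few lines. This assumes the second network has already produced one score per x-value for a given pattern type; the function name is illustrative.

```python
def pattern_windows(scores, threshold):
    """Return (start, stop) index pairs for each maximal run of scores
    strictly above the threshold."""
    windows = []
    start = None
    for t, s in enumerate(scores):
        if s > threshold and start is None:
            start = t                       # window opens
        elif s <= threshold and start is not None:
            windows.append((start, t - 1))  # window closes
            start = None
    if start is not None:
        windows.append((start, len(scores) - 1))
    return windows
```

For example, `pattern_windows([0.1, 0.8, 0.9, 0.2, 0.7, 0.1], 0.5)` yields `[(1, 2), (4, 4)]`: two detected occurrences of the pattern type.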
  • Publication number: 20190034558
    Abstract: Recurrent neural networks (RNNs) can be visualized. For example, a processor can receive vectors indicating values of nodes in a gate of an RNN. The values can result from processing data at the gate during a sequence of time steps. The processor can group the nodes into clusters by applying a clustering method to the values of the nodes. The processor can generate a first graphical element visually indicating how the respective values of the nodes in a cluster changed during the sequence of time steps. The processor can also determine a reference value based on multiple values for multiple nodes in the cluster, and generate a second graphical element visually representing how the respective values of the nodes in the cluster each relate to the reference value. The processor can cause a display to output a graphical user interface having the first graphical element and the second graphical element.
    Type: Application
    Filed: September 21, 2018
    Publication date: January 31, 2019
    Applicants: SAS Institute Inc., North Carolina State University
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis
  • Patent number: 10192001
    Abstract: Convolutional neural networks can be visualized. For example, a graphical user interface (GUI) can include a matrix of symbols indicating feature-map values that represent a likelihood of a particular feature being present or absent in an input to a convolutional neural network. The GUI can also include a node-link diagram representing a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network, a second row of symbols representing a hidden layer of the feed forward neural network, and a third row of symbols representing an output layer of the feed forward neural network. Lines between the rows of symbols can represent connections between nodes in the input layer, the hidden layer, and the output layer of the feed forward neural network.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: January 29, 2019
    Assignees: SAS INSTITUTE INC., NORTH CAROLINA STATE UNIVERSITY
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
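The node-link diagram this abstract describes has a simple underlying data model: one row of symbols per layer, and a line for every connection between adjacent layers. A minimal sketch, with hypothetical node labels standing in for the GUI's symbols:

```python
def node_link_diagram(layer_sizes):
    """Build the rows of node labels and the fully connected edge list
    for a feed-forward network with the given layer sizes."""
    rows = [[f"L{i}N{j}" for j in range(n)] for i, n in enumerate(layer_sizes)]
    # one edge per pair of nodes in adjacent rows
    edges = [(a, b) for r0, r1 in zip(rows, rows[1:]) for a in r0 for b in r1]
    return rows, edges

# Input layer of 3 nodes, hidden layer of 2, output layer of 1.
rows, edges = node_link_diagram([3, 2, 1])
```

A renderer would draw each row of `rows` as a row of symbols and each pair in `edges` as a connecting line.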
  • Patent number: 10191921
    Abstract: A system provides image search results based on a query that includes an attribute or an association and a concept identifier. The query is input into a trained query model to define a search syntax for the query. The search syntax is submitted to an expanded annotated image database that includes a concept image of a concept identified by the concept identifier with a plurality of attributes associated with the concept and a plurality of associations associated with the concept. A query result is received based on matching the defined search syntax to one or more of the attributes or one or more of the associations. The query result includes the concept image of the concept associated with the matched one or more of the attributes or one or more of the associations. The concept image included in the received query result is presented in a display.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: January 29, 2019
    Assignee: SAS Institute Inc.
    Inventors: Ethem F. Can, Richard Welland Crowell, Samuel Paul Leeman-Munk, Jared Peterson, Saratendu Sethi
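The matching step this abstract describes, a query combining a concept identifier with an attribute or association, run against an annotated image database, can be sketched with plain dictionaries. The database contents and field names here are hypothetical, not from the patent.

```python
# Hypothetical expanded annotated image database: each concept image
# carries a set of attributes and a set of associations.
DATABASE = [
    {"concept": "dog", "image": "dog.png",
     "attributes": {"furry", "four-legged"}, "associations": {"bone", "leash"}},
    {"concept": "car", "image": "car.png",
     "attributes": {"metal", "four-wheeled"}, "associations": {"road", "fuel"}},
]

def search(concept, term):
    """Return concept images whose concept matches the identifier and whose
    attributes or associations contain the query term."""
    return [
        entry["image"] for entry in DATABASE
        if entry["concept"] == concept
        and (term in entry["attributes"] or term in entry["associations"])
    ]
```

In the patented system the query would first pass through a trained query model to produce the search syntax; this sketch skips straight to the lookup.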
  • Patent number: 10048826
    Abstract: Interactive visualizations of a convolutional neural network are provided. For example, a graphical user interface (GUI) can include a matrix having symbols indicating feature-map values that represent likelihoods of particular features being present or absent at various locations in an input to a convolutional neural network. Each column in the matrix can have feature-map values generated by convolving the input to the convolutional neural network with a respective filter for identifying a particular feature in the input. The GUI can detect, via an input device, an interaction indicating that the columns in the matrix are to be combined into a particular number of groups. Based on the interaction, the columns can be clustered into the particular number of groups using a clustering method. The matrix in the GUI can then be updated to visually represent each respective group of columns as a single column of symbols within the matrix.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: August 14, 2018
    Assignees: SAS INSTITUTE INC., NORTH CAROLINA STATE UNIVERSITY
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
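The column-grouping step in this abstract can be illustrated with a small sketch. The patent only says "a clustering method"; this version uses simple agglomerative merging (repeatedly fuse the two closest column groups) as a stand-in, and represents each merged group by its column-wise mean.

```python
def merge_columns(matrix, k):
    """Merge the columns of `matrix` (a list of rows) until only k remain;
    each surviving column is the mean of its merged group."""
    cols = [list(c) for c in zip(*matrix)]
    groups = [[c] for c in cols]

    def mean_col(group):
        return [sum(vals) / len(vals) for vals in zip(*group)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(mean_col(a), mean_col(b)))

    while len(groups) > k:
        # find and merge the closest pair of column groups
        pairs = [(dist(groups[i], groups[j]), i, j)
                 for i in range(len(groups)) for j in range(i + 1, len(groups))]
        _, i, j = min(pairs)
        groups[i] = groups[i] + groups.pop(j)

    merged = [mean_col(g) for g in groups]
    return [list(row) for row in zip(*merged)]  # back to row-major, k columns
```

Given a feature-map matrix with two pairs of near-identical columns, `merge_columns(matrix, 2)` collapses it to two representative columns, which is what the updated GUI matrix would display.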
  • Publication number: 20180211153
    Abstract: A computing device identifies a pattern in a dataset. A first neural network model is executed using data points as input to input nodes of the first neural network model to generate first output node data. A second neural network model is executed using the first output node data as input to input nodes of the second neural network model to generate second output node data. The second output node data includes a plurality of output values for each x-value of the plurality of data points. For each x-value, an output value of the plurality of output values is associated with a single pattern type of a plurality of predefined pattern types. For each pattern type of the plurality of predefined pattern types, a start time and a stop time is identified when the output value for the associated pattern type exceeds a predefined pattern window threshold value.
    Type: Application
    Filed: July 25, 2017
    Publication date: July 26, 2018
    Inventors: Stuart Andrew Hunt, Samuel Paul Leeman-Munk, Richard Welland Crowell
  • Publication number: 20180096078
    Abstract: Convolutional neural networks can be visualized. For example, a graphical user interface (GUI) can include a matrix of symbols indicating feature-map values that represent a likelihood of a particular feature being present or absent in an input to a convolutional neural network. The GUI can also include a node-link diagram representing a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network, a second row of symbols representing a hidden layer of the feed forward neural network, and a third row of symbols representing an output layer of the feed forward neural network. Lines between the rows of symbols can represent connections between nodes in the input layer, the hidden layer, and the output layer of the feed forward neural network.
    Type: Application
    Filed: October 4, 2017
    Publication date: April 5, 2018
    Applicants: SAS Institute Inc., North Carolina State University
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
  • Publication number: 20180095632
    Abstract: Interactive visualizations of a convolutional neural network are provided. For example, a graphical user interface (GUI) can include a matrix having symbols indicating feature-map values that represent likelihoods of particular features being present or absent at various locations in an input to a convolutional neural network. Each column in the matrix can have feature-map values generated by convolving the input to the convolutional neural network with a respective filter for identifying a particular feature in the input. The GUI can detect, via an input device, an interaction indicating that the columns in the matrix are to be combined into a particular number of groups. Based on the interaction, the columns can be clustered into the particular number of groups using a clustering method. The matrix in the GUI can then be updated to visually represent each respective group of columns as a single column of symbols within the matrix.
    Type: Application
    Filed: October 3, 2017
    Publication date: April 5, 2018
    Applicants: SAS Institute Inc., North Carolina State University
    Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
  • Publication number: 20180096241
    Abstract: Deep neural networks can be visualized. For example, first values for a first layer of nodes in a neural network, second values for a second layer of nodes in the neural network, and/or third values for connections between the first layer of nodes and the second layer of nodes can be received. A quilt graph can be output that includes (i) a first set of symbols having visual characteristics representative of the first values and representing the first layer of nodes along a first axis; (ii) a second set of symbols having visual characteristics representative of the second values and representing the second layer of nodes along a second axis; and/or (iii) a matrix of blocks between the first axis and the second axis having visual characteristics representative of the third values and representing the connections between the first layer of nodes and the second layer of nodes.
    Type: Application
    Filed: May 2, 2017
    Publication date: April 5, 2018
    Inventors: Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, Saratendu Sethi, James Allen Cox, Lawrence E. Lewis, Samuel Paul Leeman-Munk
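The quilt graph this abstract describes places one layer's nodes along one axis, the next layer's nodes along the other, and a matrix of blocks for the connection weights between them. A text-only sketch of that layout, with numeric values standing in for the visual characteristics (color, shading) a real rendering would use:

```python
def quilt_rows(first_layer, second_layer, weights):
    """ASCII sketch of a quilt graph: first-layer node values down the left
    axis, second-layer node values across the top axis, and the connection
    weight matrix filling the space between them."""
    header = "      " + " ".join(f"{v:5.2f}" for v in second_layer)
    rows = [header]
    for value, weight_row in zip(first_layer, weights):
        rows.append(f"{value:5.2f} " + " ".join(f"{w:5.2f}" for w in weight_row))
    return rows

# Two first-layer nodes, one second-layer node, and a 2x1 weight matrix.
for line in quilt_rows([1.0, 2.0], [3.0], [[0.5], [0.25]]):
    print(line)
```

Unlike a node-link drawing, the quilt layout keeps dense inter-layer connections readable, since every weight gets its own matrix cell rather than a crossing line.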
  • Patent number: 9934462
    Abstract: Deep neural networks can be visualized. For example, first values for a first layer of nodes in a neural network, second values for a second layer of nodes in the neural network, and/or third values for connections between the first layer of nodes and the second layer of nodes can be received. A quilt graph can be output that includes (i) a first set of symbols having visual characteristics representative of the first values and representing the first layer of nodes along a first axis; (ii) a second set of symbols having visual characteristics representative of the second values and representing the second layer of nodes along a second axis; and/or (iii) a matrix of blocks between the first axis and the second axis having visual characteristics representative of the third values and representing the connections between the first layer of nodes and the second layer of nodes.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: April 3, 2018
    Assignee: SAS INSTITUTE INC.
    Inventors: Christopher Graham Healey, Samuel Paul Leeman-Munk, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, Saratendu Sethi, James Allen Cox, Lawrence E. Lewis
  • Patent number: 9595002
    Abstract: Electronic communications can be normalized using a neural network. For example, a noncanonical communication that includes multiple terms can be received. The noncanonical communication can be preprocessed by (I) generating a vector including multiple characters from a term of the multiple terms; and (II) repeating a substring of the term in the vector such that a last character of the substring is positioned in a last position in the vector. The vector can be transmitted to a neural network configured to receive the vector and generate multiple probabilities based on the vector. A normalized version of the noncanonical communication can be determined using one or more of the multiple probabilities generated by the neural network. Whether the normalized version of the noncanonical communication should be outputted can also be determined using at least one of the multiple probabilities generated by the neural network.
    Type: Grant
    Filed: June 7, 2016
    Date of Patent: March 14, 2017
    Assignee: SAS INSTITUTE INC.
    Inventors: Samuel Paul Leeman-Munk, James Allen Cox
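The preprocessing step this abstract describes, building a fixed-length character vector in which a trailing substring of the term is repeated so the term's last character also occupies the vector's last position, can be sketched as follows. This is one plausible reading of the abstract; the padding character and function name are illustrative.

```python
def char_vector(term, length, pad="_"):
    """Fixed-length character vector: the term fills the vector from the
    left, and a trailing substring of the term is repeated at the end so
    the term's final character lands in the last position."""
    if len(term) >= length:
        # truncate, but keep the term's last character in the last slot
        return list(term[:length - 1] + term[-1])
    vec = list(term) + [pad] * (length - len(term))
    tail = term[-(length - len(term)):]  # substring to repeat at the end
    vec[length - len(tail):] = list(tail)
    return vec
```

For example, `char_vector("hello", 8)` gives the characters of `"hellollo"`: the term at the front, and its last three characters repeated so `o` sits in the final position, giving the network a consistent view of word endings regardless of term length.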
  • Patent number: 9552547
    Abstract: Electronic communications can be normalized using neural networks. For example, an electronic representation of a noncanonical communication can be received. A normalized version of the noncanonical communication can be determined using a normalizer including a neural network. The neural network can receive a single vector at an input layer of the neural network and transform an output of a hidden layer of the neural network into multiple values that sum to a total value of one. Each value of the multiple values can be a number between zero and one and represent a probability of a particular character being in a particular position in the normalized version of the noncanonical communication. The neural network can determine the normalized version of the noncanonical communication based on the multiple values. Whether the normalized version should be output can be determined based on a result from a flagger including another neural network.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: January 24, 2017
    Assignees: SAS INSTITUTE INC., NORTH CAROLINA STATE UNIVERSITY
    Inventors: Samuel Paul Leeman-Munk, Wookhee Min, Bradford Wayne Mott, James Curtis Lester, II, James Allen Cox
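The output transformation this abstract describes, mapping a hidden layer's output to per-position values between zero and one that sum to one, is the standard softmax; picking the most probable character per position then yields the normalized string. A minimal sketch, with an illustrative two-letter alphabet:

```python
import math

def softmax(scores):
    """Map raw scores to probabilities in (0, 1) that sum to one."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decode(position_scores, alphabet):
    """For each output position, pick the character with the highest
    probability under the softmax of that position's scores."""
    out = []
    for scores in position_scores:
        probs = softmax(scores)
        out.append(alphabet[probs.index(max(probs))])
    return "".join(out)
```

In the patented system a second "flagger" network then decides whether the decoded normalization should be output at all; that gate is omitted here.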
  • Publication number: 20160350652
    Abstract: A neural network can be used to determine edit operations for normalizing an electronic communication. For example, an electronic representation of multiple characters that form a noncanonical communication can be received. It can be determined that the noncanonical communication is mapped to at least two canonical terms in a database. A recurrent neural network can be used to determine one or more edit operations usable to convert the noncanonical communication into a normalized version of the noncanonical communication. In some examples, the one or more edit operations can include inserting a character into the noncanonical communication, deleting the character from the noncanonical communication, or replacing the character with another character in the noncanonical communication. The noncanonical communication can be transformed into the normalized version of the noncanonical communication by performing the one or more edit operations.
    Type: Application
    Filed: December 14, 2015
    Publication date: December 1, 2016
    Inventors: Wookhee Min, Samuel Paul Leeman-Munk, Bradford Wayne Mott, James Curtis Lester, II, James Allen Cox
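The final step this abstract describes, transforming a noncanonical communication by performing insert, delete, and replace edit operations, can be sketched with a simple applier. The edit-tuple format here is illustrative; in the patented system a recurrent neural network would predict the operations.

```python
def apply_edits(text, edits):
    """Apply (op, index, char) edit operations in order: 'insert' places
    char at index, 'delete' removes the character at index (char ignored),
    and 'replace' overwrites the character at index."""
    chars = list(text)
    for op, i, ch in edits:
        if op == "insert":
            chars.insert(i, ch)
        elif op == "delete":
            del chars[i]
        elif op == "replace":
            chars[i] = ch
    return "".join(chars)

# Normalize the noncanonical "gr8" to "great".
edits = [("replace", 2, "e"), ("insert", 3, "a"), ("insert", 4, "t")]
```

Here `apply_edits("gr8", edits)` returns `"great"`.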
  • Publication number: 20160350650
    Abstract: Electronic communications can be normalized using neural networks. For example, an electronic representation of a noncanonical communication can be received. A normalized version of the noncanonical communication can be determined using a normalizer including a neural network. The neural network can receive a single vector at an input layer of the neural network and transform an output of a hidden layer of the neural network into multiple values that sum to a total value of one. Each value of the multiple values can be a number between zero and one and represent a probability of a particular character being in a particular position in the normalized version of the noncanonical communication. The neural network can determine the normalized version of the noncanonical communication based on the multiple values. Whether the normalized version should be output can be determined based on a result from a flagger including another neural network.
    Type: Application
    Filed: November 10, 2015
    Publication date: December 1, 2016
    Inventors: Samuel Paul Leeman-Munk, Wookhee Min, Bradford Wayne Mott, James Curtis Lester, II, James Allen Cox
  • Publication number: 20160350646
    Abstract: Electronic communications can be normalized using a neural network. For example, a noncanonical communication that includes multiple terms can be received. The noncanonical communication can be preprocessed by (I) generating a vector including multiple characters from a term of the multiple terms; and (II) repeating a substring of the term in the vector such that a last character of the substring is positioned in a last position in the vector. The vector can be transmitted to a neural network configured to receive the vector and generate multiple probabilities based on the vector. A normalized version of the noncanonical communication can be determined using one or more of the multiple probabilities generated by the neural network. Whether the normalized version of the noncanonical communication should be outputted can also be determined using at least one of the multiple probabilities generated by the neural network.
    Type: Application
    Filed: June 7, 2016
    Publication date: December 1, 2016
    Applicant: SAS Institute Inc.
    Inventors: Samuel Paul Leeman-Munk, James Allen Cox