Patents by Inventor Christopher Graham Healey
Christopher Graham Healey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 10324983
Abstract: Recurrent neural networks (RNNs) can be visualized. For example, a processor can receive vectors indicating values of nodes in a gate of an RNN. The values can result from processing data at the gate during a sequence of time steps. The processor can group the nodes into clusters by applying a clustering method to the values of the nodes. The processor can generate a first graphical element visually indicating how the respective values of the nodes in a cluster changed during the sequence of time steps. The processor can also determine a reference value based on multiple values for multiple nodes in the cluster, and generate a second graphical element visually representing how the respective values of the nodes in the cluster each relate to the reference value. The processor can cause a display to output a graphical user interface having the first graphical element and the second graphical element.
Type: Grant
Filed: September 21, 2018
Date of Patent: June 18, 2019
Assignees: SAS Institute Inc., North Carolina State University
Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis
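As a rough illustration of the cluster-and-reference workflow this abstract describes, the Python sketch below groups synthetic RNN gate activations with k-means and plots each cluster's node trajectories alongside their deviation from the cluster mean. The synthetic data, the choice of k-means, and the use of the cluster mean as the reference value are illustrative assumptions, not details drawn from the patent.

```python
# Minimal sketch: cluster RNN gate activations over time and compare each
# cluster's nodes to a per-cluster reference value (here, the cluster mean).
# All data is synthetic; this is not the patented implementation.
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
num_nodes, num_steps = 64, 20
# activations[i, t] = value of gate node i at time step t (synthetic stand-in)
activations = rng.normal(size=(num_nodes, num_steps)).cumsum(axis=1)

# Group nodes whose activation trajectories behave similarly.
num_clusters = 4
labels = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(activations)

fig, axes = plt.subplots(num_clusters, 2, figsize=(10, 8), sharex=True)
for c in range(num_clusters):
    members = activations[labels == c]
    reference = members.mean(axis=0)          # reference value per time step

    # First graphical element: how each node's value changed over time.
    axes[c, 0].plot(members.T, alpha=0.3)
    axes[c, 0].set_ylabel(f"cluster {c}")

    # Second graphical element: how each node relates to the reference value.
    axes[c, 1].plot((members - reference).T, alpha=0.3)
    axes[c, 1].axhline(0.0, color="black", linewidth=1)

axes[0, 0].set_title("node values per time step")
axes[0, 1].set_title("deviation from cluster reference")
plt.tight_layout()
plt.show()
```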
Publication number: 20190034558
Abstract: Recurrent neural networks (RNNs) can be visualized. For example, a processor can receive vectors indicating values of nodes in a gate of an RNN. The values can result from processing data at the gate during a sequence of time steps. The processor can group the nodes into clusters by applying a clustering method to the values of the nodes. The processor can generate a first graphical element visually indicating how the respective values of the nodes in a cluster changed during the sequence of time steps. The processor can also determine a reference value based on multiple values for multiple nodes in the cluster, and generate a second graphical element visually representing how the respective values of the nodes in the cluster each relate to the reference value. The processor can cause a display to output a graphical user interface having the first graphical element and the second graphical element.
Type: Application
Filed: September 21, 2018
Publication date: January 31, 2019
Applicants: SAS Institute Inc., North Carolina State University
Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis
Patent number: 10192001
Abstract: Convolutional neural networks can be visualized. For example, a graphical user interface (GUI) can include a matrix of symbols indicating feature-map values that represent a likelihood of a particular feature being present or absent in an input to a convolutional neural network. The GUI can also include a node-link diagram representing a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network, a second row of symbols representing a hidden layer of the feed forward neural network, and a third row of symbols representing an output layer of the feed forward neural network. Lines between the rows of symbols can represent connections between nodes in the input layer, the hidden layer, and the output layer of the feed forward neural network.
Type: Grant
Filed: October 4, 2017
Date of Patent: January 29, 2019
Assignees: SAS Institute Inc., North Carolina State University
Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
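The node-link portion of this abstract can be sketched in a few lines of Python. The layer sizes and weights below are synthetic placeholders, and the matplotlib layout is one possible rendering rather than the patented interface.

```python
# Minimal sketch of the node-link diagram described above: three rows of
# symbols (input, hidden, output layers) with lines for the connections
# between them. Layer sizes and weights are assumed, synthetic values.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
layer_sizes = [8, 5, 3]                      # input, hidden, output (assumed)
weights = [rng.normal(size=(layer_sizes[i], layer_sizes[i + 1]))
           for i in range(len(layer_sizes) - 1)]

def layer_positions(n, y):
    """Spread n node symbols evenly along a horizontal row at height y."""
    return [((j + 0.5) / n, y) for j in range(n)]

rows = [layer_positions(n, y) for n, y in zip(layer_sizes, [0.0, 0.5, 1.0])]

fig, ax = plt.subplots(figsize=(6, 4))
# Lines between rows represent connections; line width encodes |weight|.
for w, src, dst in zip(weights, rows, rows[1:]):
    for i, (x0, y0) in enumerate(src):
        for j, (x1, y1) in enumerate(dst):
            ax.plot([x0, x1], [y0, y1], color="gray",
                    linewidth=0.5 + 2.0 * abs(w[i, j]) / np.abs(w).max(),
                    alpha=0.4, zorder=1)
# Rows of symbols for the input, hidden, and output layers.
for row in rows:
    xs, ys = zip(*row)
    ax.scatter(xs, ys, s=200, zorder=2)
ax.set_yticks([0.0, 0.5, 1.0])
ax.set_yticklabels(["input", "hidden", "output"])
ax.set_xticks([])
plt.show()
```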
Patent number: 10048826
Abstract: Interactive visualizations of a convolutional neural network are provided. For example, a graphical user interface (GUI) can include a matrix having symbols indicating feature-map values that represent likelihoods of particular features being present or absent at various locations in an input to a convolutional neural network. Each column in the matrix can have feature-map values generated by convolving the input to the convolutional neural network with a respective filter for identifying a particular feature in the input. The GUI can detect, via an input device, an interaction indicating that the columns in the matrix are to be combined into a particular number of groups. Based on the interaction, the columns can be clustered into the particular number of groups using a clustering method. The matrix in the GUI can then be updated to visually represent each respective group of columns as a single column of symbols within the matrix.
Type: Grant
Filed: October 3, 2017
Date of Patent: August 14, 2018
Assignees: SAS Institute Inc., North Carolina State University
Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
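A minimal sketch of the column-grouping step described above, assuming a synthetic feature-map matrix and standing in for the GUI interaction with a plain function argument; the helper name group_feature_map_columns is hypothetical.

```python
# Cluster the columns of a feature-map matrix into k groups and collapse each
# group into a single representative column (here, the group mean).
import numpy as np
from sklearn.cluster import KMeans

def group_feature_map_columns(feature_maps: np.ndarray, k: int) -> np.ndarray:
    """feature_maps: (locations, filters); returns (locations, k)."""
    # One sample per column, i.e. per filter's response pattern over locations.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feature_maps.T)
    grouped = np.column_stack([feature_maps[:, labels == c].mean(axis=1)
                               for c in range(k)])
    return grouped

rng = np.random.default_rng(2)
feature_maps = rng.random((49, 32))          # e.g. 7x7 locations, 32 filters
print(group_feature_map_columns(feature_maps, k=5).shape)   # (49, 5)
```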
Publication number: 20180096078
Abstract: Convolutional neural networks can be visualized. For example, a graphical user interface (GUI) can include a matrix of symbols indicating feature-map values that represent a likelihood of a particular feature being present or absent in an input to a convolutional neural network. The GUI can also include a node-link diagram representing a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network, a second row of symbols representing a hidden layer of the feed forward neural network, and a third row of symbols representing an output layer of the feed forward neural network. Lines between the rows of symbols can represent connections between nodes in the input layer, the hidden layer, and the output layer of the feed forward neural network.
Type: Application
Filed: October 4, 2017
Publication date: April 5, 2018
Applicants: SAS Institute Inc., North Carolina State University
Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
Publication number: 20180095632
Abstract: Interactive visualizations of a convolutional neural network are provided. For example, a graphical user interface (GUI) can include a matrix having symbols indicating feature-map values that represent likelihoods of particular features being present or absent at various locations in an input to a convolutional neural network. Each column in the matrix can have feature-map values generated by convolving the input to the convolutional neural network with a respective filter for identifying a particular feature in the input. The GUI can detect, via an input device, an interaction indicating that the columns in the matrix are to be combined into a particular number of groups. Based on the interaction, the columns can be clustered into the particular number of groups using a clustering method. The matrix in the GUI can then be updated to visually represent each respective group of columns as a single column of symbols within the matrix.
Type: Application
Filed: October 3, 2017
Publication date: April 5, 2018
Applicants: SAS Institute Inc., North Carolina State University
Inventors: Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
Publication number: 20180096241
Abstract: Deep neural networks can be visualized. For example, first values for a first layer of nodes in a neural network, second values for a second layer of nodes in the neural network, and/or third values for connections between the first layer of nodes and the second layer of nodes can be received. A quilt graph can be output that includes (i) a first set of symbols having visual characteristics representative of the first values and representing the first layer of nodes along a first axis; (ii) a second set of symbols having visual characteristics representative of the second values and representing the second layer of nodes along a second axis; and/or (iii) a matrix of blocks between the first axis and the second axis having visual characteristics representative of the third values and representing the connections between the first layer of nodes and the second layer of nodes.
Type: Application
Filed: May 2, 2017
Publication date: April 5, 2018
Inventors: Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, Saratendu Sethi, James Allen Cox, Lawrence E. Lewis, Samuel Paul Leeman-Munk
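A quilt graph of the kind this abstract describes can be approximated as follows. The node values, connection weights, and the specific visual encodings (marker size for node values, color for connection weights) are illustrative assumptions rather than details from the application.

```python
# Minimal quilt-graph sketch: one layer of node symbols along the x axis, the
# next layer along the y axis, and a matrix of blocks between them colored by
# connection weight. All values are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
layer_a = rng.random(10)                     # node values, first layer
layer_b = rng.random(6)                      # node values, second layer
weights = rng.normal(size=(len(layer_b), len(layer_a)))   # connections

fig, ax = plt.subplots(figsize=(6, 4))
# Matrix of blocks: color encodes the connection weight between node pairs.
im = ax.imshow(weights, cmap="coolwarm", aspect="auto")
# First layer of node symbols along the x axis; symbol size encodes node value.
ax.scatter(np.arange(len(layer_a)), np.full(len(layer_a), -0.9),
           s=300 * layer_a, clip_on=False)
# Second layer of node symbols along the y axis.
ax.scatter(np.full(len(layer_b), -0.9), np.arange(len(layer_b)),
           s=300 * layer_b, clip_on=False)
ax.set_xlabel("layer 1 nodes")
ax.set_ylabel("layer 2 nodes")
fig.colorbar(im, ax=ax, label="connection weight")
plt.show()
```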
Patent number: 9934462
Abstract: Deep neural networks can be visualized. For example, first values for a first layer of nodes in a neural network, second values for a second layer of nodes in the neural network, and/or third values for connections between the first layer of nodes and the second layer of nodes can be received. A quilt graph can be output that includes (i) a first set of symbols having visual characteristics representative of the first values and representing the first layer of nodes along a first axis; (ii) a second set of symbols having visual characteristics representative of the second values and representing the second layer of nodes along a second axis; and/or (iii) a matrix of blocks between the first axis and the second axis having visual characteristics representative of the third values and representing the connections between the first layer of nodes and the second layer of nodes.
Type: Grant
Filed: May 2, 2017
Date of Patent: April 3, 2018
Assignee: SAS Institute Inc.
Inventors: Christopher Graham Healey, Samuel Paul Leeman-Munk, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, Saratendu Sethi, James Allen Cox, Lawrence E. Lewis