Patents by Inventor Gal Novik
Gal Novik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230325628
Abstract: Causal explanations of outputs of a neural network can be learned from an attention layer in the neural network. The neural network may compute an output variable by processing a variable set including one or more input variables. An attention matrix may be computed by the attention layer in an abductive inference for which a new variable set including the input variables and the output variable is input into the neural network. Causal relationships among the variables in the new variable set may be determined based on the attention matrix and illustrated in a causal graph. A tree structure may be generated based on the causal graph. An input variable may be identified using the tree structure and determined to be the reason why the neural network computed the output variable. An explanation of the causal relationship between the input variable and the output variable can be generated and provided.
Type: Application
Filed: May 30, 2023
Publication date: October 12, 2023
Inventors: Shami Nisimov, Raanan Yonatan Yehezkel Rohekar, Yaniv Gurwicz, Guy Koren, Gal Novik
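The abstract above describes reading causal structure off an attention matrix and tracing it back to a single explanatory input. A minimal sketch of that idea in Python; the function name `explain_from_attention`, the threshold value, and the ranking heuristic are illustrative assumptions for this sketch, not the patented method:

```python
import numpy as np

def explain_from_attention(attention, var_names, output_var, threshold=0.2):
    """Derive a candidate causal graph and a one-line explanation from an
    attention matrix. attention[i, j] is read as how strongly variable j
    influences variable i; entries below `threshold` are pruned."""
    n = len(var_names)
    assert attention.shape == (n, n)
    # Prune weak attention entries to obtain a candidate causal adjacency.
    adjacency = attention * (attention >= threshold)
    out_idx = var_names.index(output_var)
    # Surviving parents of the output variable, ranked by attention weight.
    parents = [(var_names[j], adjacency[out_idx, j])
               for j in np.argsort(-adjacency[out_idx])
               if adjacency[out_idx, j] > 0 and j != out_idx]
    if not parents:
        return None
    cause, weight = parents[0]
    return f"{output_var} is most strongly explained by {cause} (weight {weight:.2f})"
```

The real method builds a full causal graph and a tree structure before selecting the explanatory input; this sketch only keeps the ranking step.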
-
Publication number: 20230316589
Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
Type: Application
Filed: March 28, 2023
Publication date: October 5, 2023
Applicant: Intel Corporation
Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
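The transform-plus-quantization pipeline described above can be sketched with an orthonormal DCT as the data transform and uniform rounding as the lossy quantization step. Both concrete choices are assumptions made for illustration; the abstract does not fix a particular transform:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (a stand-in for "a data transform").
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def compress(block, step=0.5):
    """Lossy compression: transform a square block, then quantize the
    coefficients to integers (the information-losing step)."""
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T
    return np.round(coeffs / step).astype(np.int32)

def decompress(q, step=0.5):
    """Dequantize and apply the inverse (transposed) orthonormal transform."""
    d = dct_matrix(q.shape[0])
    return d.T @ (q.astype(np.float64) * step) @ d
```

A smaller `step` trades compression ratio for reconstruction accuracy, which is the tunable-loss behavior such a scheme exposes.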
-
Patent number: 11698930
Abstract: Various embodiments are generally directed to techniques for determining artificial neural network topologies, such as by utilizing probabilistic graphical models. Some embodiments are particularly related to determining neural network topologies by bootstrapping a graph, such as a probabilistic graphical model, into a multi-graphical model, or graphical model tree. Various embodiments may include logic to determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly from the dataset with replacement between drawings. In some embodiments, logic may partition a graph into multiple subgraph sets based on each of the sample sets. In several embodiments, the multiple subgraph sets may be scored, such as with Bayesian statistics, and selected amongst as part of determining a topology for a neural network.
Type: Grant
Filed: June 21, 2018
Date of Patent: July 11, 2023
Assignee: Intel Corporation
Inventors: Yaniv Gurwicz, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Guy Koren, Gal Novik
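The bootstrap-and-partition procedure can be illustrated as follows. The correlation-threshold partition and the toy scoring function are stand-ins invented for this sketch; the patent's scoring is Bayesian and its partitioning operates on a probabilistic graphical model:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_sets(data, num_sets):
    """Draw sample sets from `data` (rows = samples) with replacement
    between drawings, one set per bootstrap round."""
    n = data.shape[0]
    return [data[rng.integers(0, n, size=n)] for _ in range(num_sets)]

def partition_variables(sample, threshold=0.5):
    """Partition variables into subgraph sets: connected components of the
    graph linking variables with |correlation| above `threshold`."""
    corr = np.abs(np.corrcoef(sample, rowvar=False))
    n = corr.shape[0]
    labels = list(range(n))
    def find(i):                      # union-find with path compression
        while labels[i] != i:
            labels[i] = labels[labels[i]]
            i = labels[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                labels[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

def score_partition(partition):
    """Toy stand-in for a Bayesian score: prefer fewer, larger subgraphs."""
    return -len(partition)
```

Running `partition_variables` on each bootstrap set and keeping the highest-scoring partition mirrors the "score and select amongst" step of the abstract.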
-
Publication number: 20230117143
Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
Type: Application
Filed: November 8, 2022
Publication date: April 20, 2023
Applicant: Intel Corporation
Inventors: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
-
Patent number: 11620766
Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer.
Type: Grant
Filed: June 10, 2021
Date of Patent: April 4, 2023
Assignee: Intel Corporation
Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
-
Patent number: 11501152
Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
Type: Grant
Filed: July 26, 2017
Date of Patent: November 15, 2022
Assignee: Intel Corporation
Inventors: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
-
Publication number: 20220027704
Abstract: Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a plurality of weights for a layer of a convolutional neural network (CNN) comprising a plurality of kernels; organize the plurality of weights into a plurality of clusters for the plurality of kernels; and apply a K-means compression algorithm to each of the plurality of clusters. Other embodiments are also disclosed and claimed.
Type: Application
Filed: July 2, 2021
Publication date: January 27, 2022
Applicant: Intel Corporation
Inventors: Yonatan Glesner, Gal Novik, Dmitri Vainbrand, Gal Leibovich
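The per-kernel weight clustering described above can be sketched with a tiny 1-D K-means: each kernel's weights are grouped into k clusters and replaced by their cluster centroids, so the layer only needs to store a small codebook plus per-weight indices. Function names and the choice of k are illustrative assumptions:

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Tiny 1-D K-means: returns centroids and each value's cluster index."""
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            members = values[assign == c]
            if members.size:
                centroids[c] = members.mean()
    return centroids, assign

def quantize_layer(kernels, k=4):
    """Replace each kernel's weights by their K-means centroids, so at most
    k distinct values remain per kernel."""
    out = []
    for kernel in kernels:
        flat = kernel.ravel()
        centroids, assign = kmeans_1d(flat, k)
        out.append(centroids[assign].reshape(kernel.shape))
    return out
```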
-
Publication number: 20210350585
Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
Type: Application
Filed: June 10, 2021
Publication date: November 11, 2021
Applicant: Intel Corporation
Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
-
Patent number: 11055604
Abstract: Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a plurality of weights for a layer of a convolutional neural network (CNN) comprising a plurality of kernels; organize the plurality of weights into a plurality of clusters for the plurality of kernels; and apply a K-means compression algorithm to each of the plurality of clusters. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: September 12, 2017
Date of Patent: July 6, 2021
Assignee: Intel Corporation
Inventors: Yonatan Glesner, Gal Novik, Dmitri Vainbrand, Gal Leibovich
-
Patent number: 11037330
Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: April 8, 2017
Date of Patent: June 15, 2021
Assignee: Intel Corporation
Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
-
Patent number: 11010658
Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being a remainder of nodes in the temporary DAG after removing nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of sets of parentless nodes in the combined descendant and ancestor DAGs before returning.
Type: Grant
Filed: December 22, 2017
Date of Patent: May 18, 2021
Assignee: Intel Corporation
Inventors: Guy Koren, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Gal Novik
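One step of the recursion described above, splitting a DAG into a node's descendant set and the remaining ancestor nodes, can be sketched as follows. The dict-of-children DAG encoding and function names are assumptions for illustration; the full method recurses on both sets, merges the resulting DAGs, and inserts latent layers:

```python
def descendants(dag, node):
    """All nodes reachable from `node` in a DAG given as {node: [children]}."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        for child in dag.get(n, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def split(dag, node):
    """Partition the DAG's nodes into the descendant node set of `node`
    and the remainder (the ancestor node sets of the abstract)."""
    desc = descendants(dag, node)
    ancestors = set(dag) | {c for cs in dag.values() for c in cs}
    ancestors -= desc | {node}
    return desc, ancestors
```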
-
Publication number: 20190102673
Abstract: Methods and apparatus relating to online activation compression with K-means are described. In one embodiment, logic (e.g., in a processor) compresses one or more activation functions for a convolutional network based on non-uniform quantization. The non-uniform quantization for each layer of the convolutional network is performed offline, and an activation function for a specific layer of the convolutional network is quantized during runtime. Other embodiments are also disclosed and claimed.
Type: Application
Filed: September 29, 2017
Publication date: April 4, 2019
Applicant: Intel Corporation
Inventors: Gal Leibovich, Gal Novik, Yonatan Glesner
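The offline/online split described above can be sketched as a calibration-time K-means codebook per layer plus a cheap runtime nearest-centroid lookup. The function names, quantile initialization, and choice of k are illustrative assumptions:

```python
import numpy as np

def fit_codebook(calib_acts, k=4, iters=25):
    """Offline phase: fit a non-uniform codebook for one layer by running
    1-D K-means over calibration activations."""
    cb = np.quantile(calib_acts, np.linspace(0, 1, k))  # spread initial centroids
    for _ in range(iters):
        idx = np.argmin(np.abs(calib_acts[:, None] - cb[None, :]), axis=1)
        for c in range(k):
            if np.any(idx == c):
                cb[c] = calib_acts[idx == c].mean()
    return np.sort(cb)

def quantize_online(acts, codebook):
    """Runtime phase: snap each activation to its nearest codebook entry
    using bin edges halfway between adjacent centroids."""
    edges = (codebook[1:] + codebook[:-1]) / 2
    return codebook[np.searchsorted(edges, acts)]
```

Because the codebook is fitted offline, the runtime cost per activation is a single binary search rather than a K-means iteration.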
-
Publication number: 20190080222
Abstract: Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a plurality of weights for a layer of a convolutional neural network (CNN) comprising a plurality of kernels; organize the plurality of weights into a plurality of clusters for the plurality of kernels; and apply a K-means compression algorithm to each of the plurality of clusters. Other embodiments are also disclosed and claimed.
Type: Application
Filed: September 12, 2017
Publication date: March 14, 2019
Inventors: Yonatan Glesner, Gal Novik, Dmitri Vainbrand, Gal Leibovich
-
Publication number: 20190042917
Abstract: Various embodiments are generally directed to techniques for determining artificial neural network topologies, such as by utilizing probabilistic graphical models. Some embodiments are particularly related to determining neural network topologies by bootstrapping a graph, such as a probabilistic graphical model, into a multi-graphical model, or graphical model tree. Various embodiments may include logic to determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly from the dataset with replacement between drawings. In some embodiments, logic may partition a graph into multiple subgraph sets based on each of the sample sets. In several embodiments, the multiple subgraph sets may be scored, such as with Bayesian statistics, and selected amongst as part of determining a topology for a neural network.
Type: Application
Filed: June 21, 2018
Publication date: February 7, 2019
Applicant: Intel Corporation
Inventors: Yaniv Gurwicz, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Guy Koren, Gal Novik
-
Publication number: 20190042911
Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being a remainder of nodes in the temporary DAG after removing nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of sets of parentless nodes in the combined descendant and ancestor DAGs before returning.
Type: Application
Filed: December 22, 2017
Publication date: February 7, 2019
Inventors: Guy Koren, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Gal Novik
-
Publication number: 20180322385
Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
Type: Application
Filed: July 26, 2017
Publication date: November 8, 2018
Applicant: Intel Corporation
Inventors: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
-
Publication number: 20180293758
Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
Type: Application
Filed: April 8, 2017
Publication date: October 11, 2018
Applicant: Intel Corporation
Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
-
Patent number: 9547664
Abstract: The present invention extends to methods, systems, and computer program products for selecting candidate records for deduplication from a table. A table can be processed to compute an inverse index for each field of the table. A deduplication algorithm can traverse the inverse indices in accordance with a flexible user-defined policy to identify candidate records for deduplication. Both exact matches and approximate matches can be found.
Type: Grant
Filed: May 1, 2014
Date of Patent: January 17, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yaron Zinar, Efim Hudis, Yifat Orlin, Gal Novik, Yuri Gurevich, Gad Peleg
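The inverse-index candidate selection can be sketched as below for exact matches. The `policy` here is simplified to a list of fields whose exact match makes two records deduplication candidates; the patent also covers approximate matching, which this sketch omits:

```python
from collections import defaultdict
from itertools import combinations

def build_inverse_indices(table, fields):
    """One inverted index per field: field value -> set of row ids holding it."""
    indices = {f: defaultdict(set) for f in fields}
    for row_id, record in enumerate(table):
        for f in fields:
            indices[f][record[f]].add(row_id)
    return indices

def candidate_pairs(indices, policy):
    """Traverse the inverted indices; any two rows sharing a value in a
    policy field become a candidate pair for deduplication."""
    pairs = set()
    for f in policy:
        for rows in indices[f].values():
            pairs.update(combinations(sorted(rows), 2))
    return pairs
```

Because only rows that collide in some index are compared, the quadratic all-pairs comparison over the whole table is avoided.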
-
Publication number: 20140236907
Abstract: The present invention extends to methods, systems, and computer program products for selecting candidate records for deduplication from a table. A table can be processed to compute an inverse index for each field of the table. A deduplication algorithm can traverse the inverse indices in accordance with a flexible user-defined policy to identify candidate records for deduplication. Both exact matches and approximate matches can be found.
Type: Application
Filed: May 1, 2014
Publication date: August 21, 2014
Applicant: Microsoft Corporation
Inventors: Yaron Zinar, Efim Hudis, Yifat Orlin, Gal Novik, Yuri Gurevich, Gad Peleg
-
Patent number: 8719236
Abstract: The present invention extends to methods, systems, and computer program products for selecting candidate records for deduplication from a table. A table can be processed to compute an inverse index for each field of the table. A deduplication algorithm can traverse the inverse indices in accordance with a flexible user-defined policy to identify candidate records for deduplication. Both exact matches and approximate matches can be found.
Type: Grant
Filed: August 23, 2012
Date of Patent: May 6, 2014
Assignee: Microsoft Corporation
Inventors: Yaron Zinar, Efim Hudis, Yifat Orlin, Gal Novik, Yuri Gurevich, Gad Peleg