Patents by Inventor Gal Novik

Gal Novik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230325628
    Abstract: Causal explanations of outputs of a neural network can be learned from an attention layer in the neural network. The neural network may compute an output variable by processing a variable set including one or more input variables. An attention matrix may be computed by the attention layer in an abductive inference for which a new variable set including the input variables and the output variable is input into the neural network. Causal relationships between the variables in the new variable set may be determined based on the attention matrix and illustrated in a causal graph. A tree structure may be generated based on the causal graph. An input variable may be identified using the tree structure and determined to be the reason why the neural network computed the output variable. An explanation of the causal relationship between the input variable and the output variable can be generated and provided.
    Type: Application
    Filed: May 30, 2023
    Publication date: October 12, 2023
    Inventors: Shami Nisimov, Raanan Yonatan Yehezkel Rohekar, Yaniv Gurwicz, Guy Koren, Gal Novik
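    The abstract above (publication 20230325628) outlines a concrete flow: read the attention matrix computed over the combined variable set, derive a causal graph from it, and trace back from the output variable to the input variable that best explains it. The minimal sketch below illustrates only that flow, not the claimed method; the 0.2 attention threshold, the greedy backward walk, and all function and variable names are assumptions made for this example.

    import numpy as np

    def explain_output(attention, names, output_idx, threshold=0.2):
        """attention[i, j] ~ how strongly variable i attends to variable j."""
        n = attention.shape[0]
        # Keep only edges whose attention weight clears the threshold ("causal graph").
        edges = {(i, j) for i in range(n) for j in range(n)
                 if i != j and attention[i, j] >= threshold}
        # Greedily trace the strongest retained edge from the output variable
        # back toward an input variable (a simple tree-like walk).
        current, path, visited = output_idx, [output_idx], {output_idx}
        while True:
            parents = [j for (i, j) in edges if i == current and j not in visited]
            if not parents:
                break
            current = max(parents, key=lambda j: attention[current, j])
            visited.add(current)
            path.append(current)
        return (f"'{names[output_idx]}' is most strongly explained by '{names[path[-1]]}' "
                "(path: " + " <- ".join(names[k] for k in path) + ")")

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        scores = rng.random((4, 4))                      # stand-in attention matrix
        print(explain_output(scores, ["age", "income", "zip", "approved"], output_idx=3))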
  • Publication number: 20230316589
    Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: March 28, 2023
    Publication date: October 5, 2023
    Applicant: Intel Corporation
    Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
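    Publication 20230316589 above (and its granted and earlier-published counterparts later in this list) describes a lossy scheme built from two ingredients: a data transform applied to CNN layer data, followed by quantization of the transform coefficients. The sketch below shows that general shape under stated assumptions; the orthonormal DCT-II basis and the 8-bit uniform quantizer are illustrative choices, not necessarily what the claims cover.

    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis as an n x n matrix."""
        k = np.arange(n)
        basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        basis[0] *= 1 / np.sqrt(2)
        return basis * np.sqrt(2 / n)

    def compress_block(block, bits=8):
        """Transform one square block of layer data, then quantize the coefficients."""
        D = dct_matrix(block.shape[0])
        coeffs = D @ block @ D.T                          # 2-D transform
        scale = np.abs(coeffs).max() / (2 ** (bits - 1) - 1) or 1.0
        return np.round(coeffs / scale).astype(np.int8), scale   # compact payload

    def decompress_block(q, scale):
        D = dct_matrix(q.shape[0])
        return D.T @ (q.astype(np.float32) * scale) @ D   # inverse transform

    if __name__ == "__main__":
        tile = np.random.default_rng(1).standard_normal((8, 8)).astype(np.float32)
        q, s = compress_block(tile)
        print("max reconstruction error:", float(np.abs(tile - decompress_block(q, s)).max()))

    The loss comes entirely from rounding the coefficients to int8; storing the int8 array plus one scale per block is what makes the representation smaller than the original float data.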
  • Patent number: 11698930
    Abstract: Various embodiments are generally directed to techniques for determining artificial neural network topologies, such as by utilizing probabilistic graphical models, for instance. Some embodiments are particularly related to determining neural network topologies by bootstrapping a graph, such as a probabilistic graphical model, into a multi-graphical model, or graphical model tree. Various embodiments may include logic to determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly from the dataset with replacement between drawings. In some embodiments, logic may partition a graph into multiple subgraph sets based on each of the sample sets. In several embodiments, the multiple subgraph sets may be scored, such as with Bayesian statistics, and selected amongst as part of determining a topology for a neural network.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: July 11, 2023
    Assignee: Intel Corporation
    Inventors: Yaniv Gurwicz, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Guy Koren, Gal Novik
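    Patent 11698930 above describes drawing bootstrap sample sets with replacement, partitioning a graph into subgraph sets per sample, scoring those sets with Bayesian statistics, and selecting among them. The rough sketch below mirrors that loop under explicit assumptions: a Gaussian BIC score and a fixed list of candidate partitions stand in for the patent's scoring and partitioning, and every name and the candidate list are made up for illustration.

    import numpy as np

    def gaussian_bic(data, partition):
        """BIC of a model in which the variable blocks in `partition` are independent."""
        n = data.shape[0]
        loglik, n_params = 0.0, 0
        for block in partition:
            x = data[:, block]
            cov = np.atleast_2d(np.cov(x, rowvar=False) + 1e-6 * np.eye(len(block)))
            _, logdet = np.linalg.slogdet(cov)
            loglik += -0.5 * n * (logdet + len(block) * np.log(2 * np.pi) + len(block))
            n_params += len(block) * (len(block) + 1) // 2
        return loglik - 0.5 * n_params * np.log(n)

    def select_partition(data, candidates, n_bootstrap=20, seed=0):
        """Score each candidate subgraph set on bootstrap samples; keep the best."""
        rng = np.random.default_rng(seed)
        scores = np.zeros(len(candidates))
        for _ in range(n_bootstrap):
            sample = data[rng.integers(0, len(data), size=len(data))]   # with replacement
            scores += [gaussian_bic(sample, p) for p in candidates]
        return candidates[int(np.argmax(scores))]

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        a = rng.standard_normal((500, 2))
        b = a @ np.array([[1.0, 0.5], [0.5, 1.0]]) + 0.1 * rng.standard_normal((500, 2))
        data = np.hstack([a, b])                       # columns 2-3 depend on columns 0-1
        candidates = [[[0, 1, 2, 3]], [[0, 1], [2, 3]], [[0], [1], [2], [3]]]
        print("selected subgraph set:", select_partition(data, candidates))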
  • Publication number: 20230117143
    Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
    Type: Application
    Filed: November 8, 2022
    Publication date: April 20, 2023
    Applicant: Intel Corporation
    Inventors: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
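    The abstract above (publication 20230117143, also granted as patent 11501152 below) ends with converting a recursively learned generative model into a discriminative one. The sketch below illustrates one possible reading under explicit assumptions: the generative structure is encoded as a nested grouping of input variables, and it is "converted" by deriving feed-forward layer widths from that grouping; the nested-list encoding, the width rule, and the random-weight MLP are illustrative only, not the patented conversion.

    import numpy as np

    def generative_to_widths(group, widths=None, depth=0):
        """Count one latent node per group at each depth of the nested structure."""
        if widths is None:
            widths = {}
        widths[depth] = widths.get(depth, 0) + 1
        for child in group:
            if isinstance(child, list):
                generative_to_widths(child, widths, depth + 1)
        return widths

    def build_discriminative(n_inputs, group, n_classes, seed=0):
        """Random-weight MLP whose hidden widths mirror the generative structure."""
        per_depth = generative_to_widths(group)
        hidden = [per_depth[d] for d in sorted(per_depth, reverse=True)]  # deepest first
        sizes = [n_inputs] + hidden + [n_classes]
        rng = np.random.default_rng(seed)
        return [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes[:-1], sizes[1:])]

    def forward(layers, x):
        for w in layers[:-1]:
            x = np.maximum(x @ w, 0.0)                 # ReLU hidden layers
        return x @ layers[-1]                          # class scores

    if __name__ == "__main__":
        structure = [[0, 1], [2, 3, 4]]                # root group with two sub-groups
        net = build_discriminative(n_inputs=5, group=structure, n_classes=3)
        print("layer shapes:", [w.shape for w in net])
        print("class scores:", forward(net, np.ones(5)))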
  • Patent number: 11620766
    Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: April 4, 2023
    Assignee: Intel Corporation
    Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
  • Patent number: 11501152
    Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: November 15, 2022
    Assignee: Intel Corporation
    Inventors: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
  • Publication number: 20220027704
    Abstract: Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic to determine a plurality of weights for a layer of a convolutional neural network (CNN) comprising a plurality of kernels; organize the plurality of weights into a plurality of clusters for the plurality of kernels; and apply a K-means compression algorithm to each of the plurality of clusters. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: July 2, 2021
    Publication date: January 27, 2022
    Applicant: Intel Corporation
    Inventors: Yonatan Glesner, Gal Novik, Dmitri Vainbrand, Gal Leibovich
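    Publication 20220027704 above (granted as patent 11055604 below) describes organizing a convolutional layer's weights into clusters for its kernels and applying K-means compression to each cluster. A minimal sketch of that idea follows; the hand-rolled one-dimensional K-means, the per-output-channel grouping, and the k=4 codebook size are assumptions made for this example.

    import numpy as np

    def kmeans_1d(values, k, iters=25, seed=0):
        """Tiny 1-D K-means: returns k centroids and each value's assignment."""
        rng = np.random.default_rng(seed)
        centroids = rng.choice(values, size=k, replace=False)
        for _ in range(iters):
            assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
            for j in range(k):
                if np.any(assign == j):
                    centroids[j] = values[assign == j].mean()
        assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        return centroids, assign

    def compress_layer(weights, k=4):
        """weights: (out_channels, in_channels, kh, kw); one small codebook per kernel."""
        compressed = np.empty_like(weights)
        for out_c in range(weights.shape[0]):
            flat = weights[out_c].ravel()
            centroids, assign = kmeans_1d(flat, k)
            compressed[out_c] = centroids[assign].reshape(weights[out_c].shape)
        return compressed

    if __name__ == "__main__":
        w = np.random.default_rng(7).standard_normal((8, 3, 3, 3)).astype(np.float32)
        w_q = compress_layer(w, k=4)
        print("distinct values in first kernel:", len(np.unique(w_q[0])))
        print("mean absolute error:", float(np.abs(w - w_q).mean()))

    Because each kernel ends up with at most k distinct weight values, the layer can be stored as small integer codes plus a per-kernel codebook, which is where the compression comes from.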
  • Publication number: 20210350585
    Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: June 10, 2021
    Publication date: November 11, 2021
    Applicant: Intel Corporation
    Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
  • Patent number: 11055604
    Abstract: Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic to determine a plurality of weights for a layer of a convolutional neural network (CNN) comprising a plurality of kernels; organize the plurality of weights into a plurality of clusters for the plurality of kernels; and apply a K-means compression algorithm to each of the plurality of clusters. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: July 6, 2021
    Assignee: Intel Corporation
    Inventors: Yonatan Glesner, Gal Novik, Dmitri Vainbrand, Gal Leibovich
  • Patent number: 11037330
    Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 8, 2017
    Date of Patent: June 15, 2021
    Assignee: Intel Corporation
    Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
  • Patent number: 11010658
    Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being the remainder of nodes in the temporary DAG after removing nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of the sets of parentless nodes in the combined descendant and ancestor DAGs before returning.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: May 18, 2021
    Assignee: Intel Corporation
    Inventors: Guy Koren, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Gal Novik
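    The recursion described in patent 11010658 above (also published as 20190042911 below) splits a DAG into a descendant set and ancestor sets, recurses on each, merges the results, and adds a latent node per ancestor set as a parent of the parentless nodes. The heavily simplified sketch below keeps only that skeleton; the split rule (descendants of the lexicographically first node), the dropping of cross-split edges, and the naming scheme are simplifying assumptions, not the patented procedure.

    def descendants(dag, root):
        """All nodes reachable from `root` by following child edges in the DAG."""
        seen, stack = set(), [root]
        while stack:
            node = stack.pop()
            for child, parents in dag.items():
                if node in parents and child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

    def build(dag, prefix="h"):
        """dag: {node: set of parent nodes}. Returns a deepened DAG with latent nodes."""
        if len(dag) <= 1:
            return dict(dag)
        pivot = min(dag)                               # trivial split rule for the sketch
        desc = descendants(dag, pivot) | {pivot}
        ancestors = set(dag) - desc
        if not ancestors:                              # nothing left to split off
            return dict(dag)
        merged = {}
        for part, tag in ((desc, "d"), (ancestors, "a")):
            sub = {n: dag[n] & part for n in sorted(part)}   # cross-split edges dropped
            merged.update(build(sub, prefix=prefix + tag))
        latent = prefix + "_latent"                    # one latent node for the ancestor set
        merged[latent] = set()
        for node in list(merged):
            if not merged[node] and node != latent:
                merged[node] = {latent}                # latent node parents the parentless nodes
        return merged

    if __name__ == "__main__":
        # x0 -> x1 -> x3 and x2 -> x3; x2 falls outside x0's descendant set.
        dag = {"x0": set(), "x1": {"x0"}, "x2": set(), "x3": {"x1", "x2"}}
        for node, parents in sorted(build(dag).items()):
            print(node, "<-", sorted(parents))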
  • Publication number: 20190102673
    Abstract: Methods and apparatus relating to online activation compression with K-means are described. In one embodiment, logic (e.g., in a processor) compresses one or more activation functions for a convolutional network based on non-uniform quantization. The non-uniform quantization for each layer of the convolutional network is performed offline, and an activation function for a specific layer of the convolutional network is quantized during runtime. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: Intel Corporation
    Inventors: Gal Leibovich, Gal Novik, Yonatan Glesner
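    Publication 20190102673 above separates the work into an offline step (fit a non-uniform quantization per layer from calibration data) and a runtime step (quantize the layer's activations against it). The sketch below mirrors that offline/online split; a percentile-based codebook stands in for the K-means step named in the abstract, and all names and the 16-level setting are assumptions.

    import numpy as np

    def fit_codebook(calibration_acts, levels=16):
        """Offline: non-uniform quantization levels matched to the activation distribution."""
        return np.percentile(calibration_acts, np.linspace(0.0, 100.0, levels))

    def quantize_runtime(acts, codebook):
        """Online: snap each activation to the nearest codebook level."""
        idx = np.argmin(np.abs(acts[..., None] - codebook), axis=-1)
        return codebook[idx].astype(acts.dtype), idx.astype(np.uint8)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        calibration = np.maximum(rng.standard_normal(10_000), 0)   # ReLU-like activations
        codebook = fit_codebook(calibration, levels=16)            # computed once, offline
        live = np.maximum(rng.standard_normal((2, 4)), 0)          # activations at runtime
        q_acts, codes = quantize_runtime(live, codebook)
        print("codebook:", np.round(codebook, 3))
        print("quantized activations:", np.round(q_acts, 3))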
  • Publication number: 20190080222
    Abstract: Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic to determine a plurality of weights for a layer of a convolutional neural network (CNN) comprising a plurality of kernels; organize the plurality of weights into a plurality of clusters for the plurality of kernels; and apply a K-means compression algorithm to each of the plurality of clusters. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: September 12, 2017
    Publication date: March 14, 2019
    Inventors: Yonatan Glesner, Gal Novik, Dmitri Vainbrand, Gal Leibovich
  • Publication number: 20190042917
    Abstract: Various embodiments are generally directed to techniques for determining artificial neural network topologies, such as by utilizing probabilistic graphical models, for instance. Some embodiments are particularly related to determining neural network topologies by bootstrapping a graph, such as a probabilistic graphical model, into a multi-graphical model, or graphical model tree. Various embodiments may include logic to determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly from the dataset with replacement between drawings. In some embodiments, logic may partition a graph into multiple subgraph sets based on each of the sample sets. In several embodiments, the multiple subgraph sets may be scored, such as with Bayesian statistics, and selected amongst as part of determining a topology for a neural network.
    Type: Application
    Filed: June 21, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Yaniv Gurwicz, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Guy Koren, Gal Novik
  • Publication number: 20190042911
    Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being the remainder of nodes in the temporary DAG after removing nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of the sets of parentless nodes in the combined descendant and ancestor DAGs before returning.
    Type: Application
    Filed: December 22, 2017
    Publication date: February 7, 2019
    Inventors: Guy Koren, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Gal Novik
  • Publication number: 20180322385
    Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
    Type: Application
    Filed: July 26, 2017
    Publication date: November 8, 2018
    Applicant: Intel Corporation
    Inventors: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
  • Publication number: 20180293758
    Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: April 8, 2017
    Publication date: October 11, 2018
    Applicant: Intel Corporation
    Inventors: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
  • Patent number: 9547664
    Abstract: The present invention extends to methods, systems, and computer program products for selecting candidate records for deduplication from a table. A table can be processed to compute an inverse index for each field of the table. A deduplication algorithm can traverse the inverse indices in accordance with a flexible user-defined policy to identify candidate records for deduplication. Both exact matches and approximate matches can be found.
    Type: Grant
    Filed: May 1, 2014
    Date of Patent: January 17, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yaron Zinar, Efim Hudis, Yifat Orlin, Gal Novik, Yuri Gurevich, Gad Peleg
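    Patent 9547664 above (and the related publication and patent below) builds an inverse index per table field and traverses the indices under a user-defined policy to propose candidate duplicate records. The sketch below reduces that to its simplest form: one inverted index per field, a lowercase/strip normalization standing in for approximate matching, and a single "at least N matching fields" threshold standing in for the policy; all of these are assumptions for illustration.

    from collections import defaultdict
    from itertools import combinations

    def build_indices(table, fields):
        """One inverted index per field: normalized value -> set of row ids."""
        indices = {f: defaultdict(set) for f in fields}
        for row_id, record in enumerate(table):
            for f in fields:
                key = str(record[f]).strip().lower()   # normalization ~ approximate match
                indices[f][key].add(row_id)
        return indices

    def candidate_pairs(indices, min_fields=2):
        """Policy reduced to a threshold: pairs colliding in at least `min_fields` indices."""
        hits = defaultdict(int)
        for index in indices.values():
            for rows in index.values():
                for a, b in combinations(sorted(rows), 2):
                    hits[(a, b)] += 1
        return [pair for pair, count in hits.items() if count >= min_fields]

    if __name__ == "__main__":
        table = [
            {"name": "Ada Lovelace", "city": "London", "email": "ada@example.com"},
            {"name": "ada lovelace ", "city": "London", "email": "ada@example.com"},
            {"name": "Alan Turing", "city": "London", "email": "alan@example.com"},
        ]
        idx = build_indices(table, fields=["name", "city", "email"])
        print("candidate duplicate pairs:", candidate_pairs(idx, min_fields=2))

    Lowering min_fields surfaces more candidate pairs at the cost of more false positives, which is the kind of trade-off a flexible policy would control.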
  • Publication number: 20140236907
    Abstract: The present invention extends to methods, systems, and computer program products for selecting candidate records for deduplication from a table. A table can be processed to compute an inverse index for each field of the table. A deduplication algorithm can traverse the inverse indices in accordance with a flexible user-defined policy to identify candidate records for deduplication. Both exact matches and approximate matches can be found.
    Type: Application
    Filed: May 1, 2014
    Publication date: August 21, 2014
    Applicant: Microsoft Corporation
    Inventors: Yaron Zinar, Efim Hudis, Yifat Orlin, Gal Novik, Yuri Gurevich, Gad Peleg
  • Patent number: 8719236
    Abstract: The present invention extends to methods, systems, and computer program products for selecting candidate records for deduplication from a table. A table can be processed to compute an inverse index for each field of the table. A deduplication algorithm can traverse the inverse indices in accordance with a flexible user-defined policy to identify candidate records for deduplication. Both exact matches and approximate matches can be found.
    Type: Grant
    Filed: August 23, 2012
    Date of Patent: May 6, 2014
    Assignee: Microsoft Corporation
    Inventors: Yaron Zinar, Efim Hudis, Yifat Orlin, Gal Novik, Yuri Gurevich, Gad Peleg