Patents by Inventor Itamar Arel
Itamar Arel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11263524
Abstract: Embodiments described herein cover a hierarchical machine learning system with a separated perception subsystem (that includes a hierarchy of nodes having at least a first layer and a second layer) and application subsystem. In one example embodiment a first node in the first layer receives a first input and processes at least a portion of the first input to generate a first feature vector. A second node in the second layer processes a second input comprising at least a portion of the first feature vector to generate a second feature vector. The first node generates a first sparse feature vector from the first feature vector and/or the second node generates a second sparse feature vector from the second feature vector. A third node of the perception subsystem then processes at least one of the first sparse feature vector or the second sparse feature vector to determine an output.
Type: Grant
Filed: November 20, 2018
Date of Patent: March 1, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Itamar Arel, Joshua Benjamin Looks
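As a rough illustration of the hierarchy described above, the sketch below wires a first-layer node, a second-layer node, and a third combining node together in NumPy. The layer sizes, the random linear nodes, and the top-k `sparsify` helper are assumptions for illustration, not the patented design.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(v, k=4):
    """Keep the k largest-magnitude elements, zero the rest (illustrative rule)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

W1 = rng.standard_normal((16, 32))  # first-layer node (stand-in weights)
W2 = rng.standard_normal((8, 16))   # second-layer node, fed a portion of the first feature vector
W3 = rng.standard_normal((4, 24))   # third node combining both sparse vectors

x = rng.standard_normal(32)             # first input
f1 = np.tanh(W1 @ x)                    # first feature vector
f2 = np.tanh(W2 @ f1)                   # second feature vector
s1, s2 = sparsify(f1), sparsify(f2)     # first and second sparse feature vectors
output = W3 @ np.concatenate([s1, s2])  # third node's output
print(output.shape)                     # (4,)
```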
-
Patent number: 11205122
Abstract: Some embodiments described herein cover a machine learning architecture with a separated perception subsystem and application subsystem. These subsystems can be co-trained. In one example embodiment, a data item is received and information from the data item is processed by a first node to generate a sparse feature vector. A second node processes the sparse feature vector to determine an output. A relevancy rating associated with the output is determined. A determination is made as to whether to update the first node based on update criteria associated with the first node, wherein the update criteria comprise a relevancy criterion and a novelty criterion. The second node is updated based on the relevancy rating.
Type: Grant
Filed: July 17, 2018
Date of Patent: December 21, 2021
Assignee: Apprente LLC
Inventors: Itamar Arel, Joshua Benjamin Looks
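As a loose illustration of the gated co-training loop the abstract outlines, the sketch below updates a stand-in perception node only when illustrative relevancy and novelty criteria are both met, and always updates the application node in proportion to the relevancy rating. The thresholds, the placeholder updates, and the random relevancy signal are assumptions, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(1)

RELEVANCY_THRESHOLD, NOVELTY_THRESHOLD = 0.5, 1.0  # illustrative criteria
W = rng.standard_normal((8, 16)) * 0.1  # first node (perception); feature kept dense for brevity
V = rng.standard_normal((3, 8)) * 0.1   # second node (application)
recent_features = []

for _ in range(5):
    x = rng.standard_normal(16)              # incoming data item
    f = np.tanh(W @ x)                       # feature vector from the first node
    y = V @ f                                # output from the second node
    relevancy = float(rng.random())          # stand-in for a task-derived relevancy rating
    novelty = min((np.linalg.norm(f - g) for g in recent_features), default=np.inf)
    # Update the first node only when both the relevancy and novelty criteria are met.
    if relevancy > RELEVANCY_THRESHOLD and novelty > NOVELTY_THRESHOLD:
        W += 0.01 * np.outer(f, x)           # placeholder perception update
        recent_features.append(f)
    # The second node is updated based on the relevancy rating.
    V += 0.01 * relevancy * np.outer(y, f)   # placeholder application update
```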
-
Patent number: 10573296
Abstract: A synthetic training data item comprising a first sequence of symbols that represent a synthetic sentence output by a simulator is received. The synthetic training data item is processed using a machine learning model, which outputs a second sequence of symbols that represent the synthetic sentence. The synthetic training data item is modified by replacing the first sequence of symbols with the second sequence of symbols. A statistically significant mismatch exists between the first sequence of symbols and a third sequence of symbols that would be output by an acoustic model that processes a set of acoustic features that represent an utterance of the synthetic sentence, and no statistically significant mismatch exists between the second sequence of symbols and the third sequence of symbols. The modified synthetic training data item may be used to train a second machine learning model that processes data output by the acoustic model.
Type: Grant
Filed: December 10, 2018
Date of Patent: February 25, 2020
Assignee: Apprente LLC
Inventors: Itamar Arel, Joshua Benjamin Looks, Ali Ziaei, Michael Lefkowitz
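The abstract describes swapping the simulator's symbol sequence for the sequence a trained model emits, so that downstream training sees symbols closer to what an acoustic model would actually produce. Below is a minimal, hypothetical sketch of that substitution step; `TrainingItem` and `mapping_model` are invented names for illustration.

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class TrainingItem:
    symbols: List[str]   # symbol sequence representing the synthetic sentence
    transcript: str

def mapping_model(symbols: List[str]) -> List[str]:
    """Stand-in for the model that re-emits symbols resembling acoustic-model output."""
    return [s.lower() for s in symbols]  # trivial placeholder transformation

item = TrainingItem(symbols=["HH", "AH", "L", "OW"], transcript="hello")
# Replace the simulator's first symbol sequence with the model's second sequence.
modified = replace(item, symbols=mapping_model(item.symbols))
print(modified)
```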
-
Patent number: 10559299
Abstract: A synthetic training data item comprising a first sequence of symbols that represent a synthetic sentence output by a simulator is received. The synthetic training data item is processed using a machine learning model, which outputs a second sequence of symbols that represent the synthetic sentence. The synthetic training data item is modified by replacing the first sequence of symbols with the second sequence of symbols. A statistically significant mismatch exists between the first sequence of symbols and a third sequence of symbols that would be output by an acoustic model that processes a set of acoustic features that represent an utterance of the synthetic sentence, and no statistically significant mismatch exists between the second sequence of symbols and the third sequence of symbols. The modified synthetic training data item may be used to train a second machine learning model that processes data output by the acoustic model.
Type: Grant
Filed: June 10, 2019
Date of Patent: February 11, 2020
Assignee: Apprente LLC
Inventors: Itamar Arel, Joshua Benjamin Looks, Ali Ziaei, Michael Lefkowitz
-
Publication number: 20190279105
Abstract: Embodiments described herein cover a hierarchical machine learning system with a separated perception subsystem (that includes a hierarchy of nodes having at least a first layer and a second layer) and application subsystem. In one example embodiment a first node in the first layer receives a first input and processes at least a portion of the first input to generate a first feature vector. A second node in the second layer processes a second input comprising at least a portion of the first feature vector to generate a second feature vector. The first node generates a first sparse feature vector from the first feature vector and/or the second node generates a second sparse feature vector from the second feature vector. A third node of the perception subsystem then processes at least one of the first sparse feature vector or the second sparse feature vector to determine an output.
Type: Application
Filed: November 20, 2018
Publication date: September 12, 2019
Inventors: Itamar Arel, Joshua Benjamin Looks
-
Patent number: 10325223
Abstract: Some embodiments described herein cover a machine learning architecture with a separated perception subsystem and application subsystem. These subsystems can be co-trained to yield a lifelong learning system that can capture spatio-temporal regularities in its inputs. In one example embodiment a first node of the machine learning architecture receives a data item. The first node receives a first feature vector that was generated at a first time. The first node processes information from at least a portion of the data item and at least a portion of the first feature vector at a second time to generate a second feature vector. The first node generates a sparse feature vector from the second feature vector, wherein a majority of feature elements in the sparse feature vector have a value of zero. A second node of the machine learning architecture then processes the sparse feature vector to determine a first output.
Type: Grant
Filed: February 6, 2018
Date of Patent: June 18, 2019
Assignee: Apprente, Inc.
Inventors: Itamar Arel, Joshua Benjamin Looks
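To illustrate the temporal aspect, the toy loop below has a node combine the current data item with the feature vector it generated at the previous time step, sparsify the result so a majority of elements are zero, and pass it to a second node. All matrices, sizes, and the top-k rule are illustrative assumptions rather than details from the patent.

```python
import numpy as np

rng = np.random.default_rng(2)

def sparsify(v, k=3):
    """Zero all but the k strongest elements, so most entries are zero."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

W_in = rng.standard_normal((12, 20))   # maps the data item
W_rec = rng.standard_normal((12, 12))  # maps the previously generated feature vector
W_out = rng.standard_normal((5, 12))   # second node

f_prev = np.zeros(12)                  # feature vector from the earlier time step
for _ in range(3):
    x = rng.standard_normal(20)                 # incoming data item
    f = np.tanh(W_in @ x + W_rec @ f_prev)      # combine data item with prior feature vector
    s = sparsify(f)                             # sparse feature vector (mostly zeros)
    y = W_out @ s                               # second node's output
    f_prev = f
```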
-
Patent number: 10210861
Abstract: In one embodiment synthetic training data items are generated, each comprising a) a textual representation of a synthetic sentence and b) one or more transcodes of the synthetic sentence comprising one or more actions and one or more entities associated with the one or more actions. For each synthetic training data item, the textual representation of the synthetic sentence is converted into a sequence of phonemes that represent the synthetic sentence. A first machine learning model is then trained as a transcoder that determines transcodes comprising actions and associated entities from sequences of phonemes, wherein the training is performed using a first training dataset comprising the plurality of synthetic training data items that comprise a) sequences of phonemes that represent synthetic sentences and b) transcodes of the synthetic sentences. The transcoder may be used in a conversational agent.
Type: Grant
Filed: September 28, 2018
Date of Patent: February 19, 2019
Assignee: Apprente, Inc.
Inventors: Itamar Arel, Joshua Benjamin Looks, Ali Ziaei, Michael Lefkowitz
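To make the data layout concrete, here is a toy version of the synthetic training pair the abstract describes: a textual sentence is converted into a phoneme sequence and paired with transcodes (actions plus associated entities). The phoneme dictionary and transcode schema are illustrative assumptions, not the ones used in the patent.

```python
# Toy grapheme-to-phoneme table; a real system would use a full lexicon or G2P model.
PHONEMES = {"add": ["AE", "D"], "a": ["AH"], "burger": ["B", "ER", "G", "ER"]}

def to_phonemes(sentence: str):
    """Convert a textual sentence into a flat phoneme sequence."""
    return [p for word in sentence.lower().split() for p in PHONEMES.get(word, ["<unk>"])]

synthetic_item = {
    "text": "add a burger",
    "transcodes": [{"action": "ADD_ITEM", "entities": {"item": "burger"}}],
}
training_example = {
    "phonemes": to_phonemes(synthetic_item["text"]),  # transcoder input
    "transcodes": synthetic_item["transcodes"],       # transcoder target
}
print(training_example)
# A sequence model (the "transcoder") would then be trained on many such pairs.
```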
-
Patent number: 10162794
Abstract: Embodiments described herein cover a hierarchical machine learning system with a separated perception subsystem (that includes a hierarchy of nodes having at least a first layer and a second layer) and application subsystem. In one example embodiment a first node in the first layer receives a first input and processes at least a portion of the first input to generate a first feature vector. A second node in the second layer processes a second input comprising at least a portion of the first feature vector to generate a second feature vector. The first node generates a first sparse feature vector from the first feature vector and/or the second node generates a second sparse feature vector from the second feature vector. A third node of the perception subsystem then processes at least one of the first sparse feature vector or the second sparse feature vector to determine an output.
Type: Grant
Filed: March 7, 2018
Date of Patent: December 25, 2018
Assignee: Apprente, Inc.
Inventors: Itamar Arel, Joshua Benjamin Looks
-
Patent number: 10055685
Abstract: Some embodiments described herein cover a machine learning architecture with a separated perception subsystem and application subsystem. These subsystems can be co-trained. In one example embodiment, a data item is received and information from the data item is processed by a first node to generate a first feature vector comprising a plurality of features, each of the plurality of features having a similarity value representing a similarity to one of a plurality of centroids. The first node selects a subset of the features from the first feature vector, the subset containing one or more features that have highest similarity values. The first node generates a second feature vector from the first feature vector by replacing similarity values of features in the first feature vector that are not in the subset with zeros. A second node then processes the second feature vector to determine an output.
Type: Grant
Filed: October 16, 2017
Date of Patent: August 21, 2018
Assignee: Apprente, Inc.
Inventors: Itamar Arel, Joshua Benjamin Looks
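This abstract is explicit enough to sketch directly: compute one similarity value per centroid, keep the features with the highest similarities, and zero the rest. The snippet below does that in NumPy, with cosine similarity and k=3 chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

centroids = rng.standard_normal((10, 16))  # plurality of centroids (illustrative count)
x = rng.standard_normal(16)                # information from the data item

# First feature vector: one similarity value per centroid (cosine similarity here,
# as an assumption; the abstract does not fix the similarity measure).
sims = centroids @ x / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(x))

k = 3                                      # keep the k highest similarity values
top = np.argsort(sims)[-k:]
sparse = np.zeros_like(sims)
sparse[top] = sims[top]                    # second feature vector: zeros elsewhere
print(sparse)
```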
-
Publication number: 20170213150
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning using a partitioned reinforcement learning input state space (RL input state space). One of the methods includes maintaining data defining a plurality of partitions of a space of reinforcement learning (RL) input states, each partition corresponding to a respective supervised learning model; obtaining a current state representation that represents a current state of the environment; for the current state representation and for each action in the set of actions, identifying a respective partition and processing the action and the current state representation using the supervised learning model that corresponds to the respective partition to generate a respective current value function estimate; and selecting an action to be performed by the computer-implemented agent in response to the current state representation using the respective current value function estimates.
Type: Application
Filed: July 8, 2016
Publication date: July 27, 2017
Applicant: Osaro, Inc.
Inventors: Itamar Arel, Michael Kahane, Khashayar Rohanimanesh
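As a rough sketch of partition-based value estimation, the code below routes each (state, action) pair to one of several stand-in supervised models and picks the action with the highest estimate. The partition hash, the linear models, and greedy selection are assumptions for illustration, not details from the application.

```python
import numpy as np

rng = np.random.default_rng(4)

N_PARTITIONS, N_ACTIONS, STATE_DIM = 4, 3, 8  # illustrative sizes

# One linear "supervised learning model" per partition (placeholder weights).
models = [rng.standard_normal(STATE_DIM + N_ACTIONS) for _ in range(N_PARTITIONS)]

def partition_of(state, action):
    """Toy partition function over (state, action) pairs."""
    return (int(abs(state.sum() * 1000)) + action) % N_PARTITIONS

def value_estimate(state, action):
    """Process the action and state with the model for the identified partition."""
    p = partition_of(state, action)
    features = np.concatenate([state, np.eye(N_ACTIONS)[action]])
    return float(models[p] @ features)

def select_action(state):
    """Select an action using the per-action value function estimates (greedy here)."""
    estimates = [value_estimate(state, a) for a in range(N_ACTIONS)]
    return int(np.argmax(estimates))

print(select_action(rng.standard_normal(STATE_DIM)))
```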
-
Patent number: 9536191
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning using confidence scores. One of the methods includes receiving a current observation; for each of multiple actions: determining a respective value function estimate that is an estimate of a return resulting from the agent performing the action in response to the current observation, determining a respective confidence score that is a measure of confidence that the respective value function estimate for the action is an accurate estimate of the return that will result from the agent performing the action in response to the current observation, adjusting the respective value function estimate for the action using the respective confidence score for the action to determine a respective adjusted value function estimate; and selecting an action to be performed by the agent in response to the current observation using the respective adjusted value function estimates.
Type: Grant
Filed: November 25, 2015
Date of Patent: January 3, 2017
Assignee: Osaro, Inc.
Inventors: Itamar Arel, Michael Kahane, Khashayar Rohanimanesh
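The abstract leaves the adjustment rule unspecified; one plausible reading is to shrink low-confidence value estimates toward a conservative baseline before acting greedily. The sketch below implements that reading as an assumption, not the patented formula.

```python
import numpy as np

rng = np.random.default_rng(5)

N_ACTIONS = 4

# Placeholder estimates; in the described system these come from learned models.
value_estimates = rng.standard_normal(N_ACTIONS)  # estimated return per action
confidence_scores = rng.random(N_ACTIONS)         # confidence in each estimate, in [0, 1]

# Adjust each estimate using its confidence score: low-confidence estimates are
# pulled toward a pessimistic baseline (one illustrative choice among many).
baseline = value_estimates.min()
adjusted = confidence_scores * value_estimates + (1 - confidence_scores) * baseline

action = int(np.argmax(adjusted))                 # select an action using adjusted estimates
print(action)
```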