Patents by Inventor Patrick Haffner
Patrick Haffner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11606463
Abstract: A virtual assistant system for communicating with customers uses human intelligence to correct any errors in the system AI, while collecting data for machine learning and future improvements for more automation. The system may use a modular design, with separate components for carrying out different system functions and sub-functions, and with frameworks for selecting the component best able to respond to a given customer conversation.
Type: Grant
Filed: March 31, 2020
Date of Patent: March 14, 2023
Assignee: Interactions LLC
Inventors: Yoryos Yeracaris, Michael Johnston, Ethan Selfridge, Phillip Gray, Patrick Haffner
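The abstract's core loop — automate when confident, escalate to a human otherwise, and log the human's correction as training data — can be sketched as below. This is an illustration under stated assumptions, not the patented implementation; the class, function, and intent names are all hypothetical.

```python
# Hypothetical sketch of human-in-the-loop routing: a modular component
# answers when confident, otherwise a human agent corrects the AI and the
# correction is logged for future machine learning.

class KeywordComponent:
    """Toy stand-in for one modular system component."""
    def __init__(self, keyword, intent):
        self.keyword, self.intent = keyword, intent

    def interpret(self, utterance):
        # Return (intent, confidence); confident only on a keyword hit.
        if self.keyword in utterance:
            return (self.intent, 0.9)
        return ("unknown", 0.1)

def route_turn(utterance, components, human_agent, training_log, threshold=0.7):
    """Pick the most confident component; fall back to a human below threshold."""
    intent, confidence = max(
        (c.interpret(utterance) for c in components), key=lambda r: r[1]
    )
    if confidence < threshold:
        intent = human_agent(utterance)           # human corrects the AI
        training_log.append((utterance, intent))  # data for machine learning
    return intent
```

A framework like this keeps each component independently replaceable, which matches the modular design the abstract describes.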
-
Publication number: 20210201911
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Application
Filed: March 15, 2021
Publication date: July 1, 2021
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
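The challenge-response flow this abstract describes can be sketched in a few lines: generate a per-request text challenge, then compare the "dynamic image feature" observed during the utterance against an enrolled template. Real feature extraction from video (lip, head, and eyebrow movement patterns) is out of scope here; the word list, threshold, and feature vectors are hypothetical stand-ins.

```python
import math
import random

# Illustrative sketch, not the patented method: a unique text challenge
# per verification request, plus a template match on the dynamic feature.

WORDS = ["blue", "river", "seven", "answer", "maple", "orbit"]

def generate_text_challenge(rng, n_words=3):
    """Every verification request gets its own random phrase."""
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled_template, observed_feature, threshold=0.8):
    """Accept only if the observed dynamic feature matches the enrollment."""
    return cosine_similarity(enrolled_template, observed_feature) >= threshold
```

Tying acceptance to a challenge generated fresh for each request is what defeats replayed recordings: a stale video cannot match a phrase it never contained.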
-
Patent number: 10950237
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Grant
Filed: November 30, 2015
Date of Patent: March 16, 2021
Assignee: Nuance Communications, Inc.
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
-
Patent number: 10726833
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
Type: Grant
Filed: May 21, 2018
Date of Patent: July 28, 2020
Assignee: Nuance Communications, Inc.
Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
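The combine-then-tune idea in this abstract can be illustrated with the simplest possible model family: linearly interpolate unigram language models from existing domains, then grid-search the interpolation weight on the small in-domain sample. Toy unigram tables stand in for full speech recognition models, and the floor values are arbitrary assumptions.

```python
import math

# Sketch only: combine existing domain models by linear interpolation and
# tune the mixture weight on scarce in-domain data, rather than training
# a new domain-specific model from scratch.

def interpolate(models, weights, floor=1e-6):
    """Weighted mixture of unigram probability tables."""
    vocab = set().union(*models)
    return {w: sum(lam * m.get(w, floor) for lam, m in zip(weights, models))
            for w in vocab}

def log_likelihood(model, words):
    return sum(math.log(model.get(w, 1e-9)) for w in words)

def tune_weights(model_a, model_b, dev_words):
    """Grid-search the two-model interpolation weight on in-domain dev data."""
    best_weights, best_ll = None, -math.inf
    for i in range(11):
        lam = i / 10
        mixed = interpolate([model_a, model_b], (lam, 1 - lam))
        ll = log_likelihood(mixed, dev_words)
        if ll > best_ll:
            best_weights, best_ll = (lam, 1 - lam), ll
    return best_weights
```

Even a handful of in-domain words is enough to pick a mixture weight, which is the point: tuning needs far less data than training a model outright.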
-
Publication number: 20180268810
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
Type: Application
Filed: May 21, 2018
Publication date: September 20, 2018
Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
-
Patent number: 9978363
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
Type: Grant
Filed: June 12, 2017
Date of Patent: May 22, 2018
Assignee: Nuance Communications, Inc.
Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
-
Patent number: 9858345
Abstract: A method and apparatus for using a classifier for processing a query are disclosed. For example, the method receives a query from a user, and processes the query to locate one or more documents in accordance with a search engine having a discriminative classifier, wherein the discriminative classifier is trained with a plurality of artificial query examples. The method then presents a result of the processing to the user.
Type: Grant
Filed: September 19, 2016
Date of Patent: January 2, 2018
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Ilija Zeljkovic, Srinivas Bangalore, Patrick Haffner, Jay Wilpon
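The notable idea here is training a query classifier without real query logs: short "artificial queries" are synthesized from each document's own words. The sketch below illustrates that idea under assumptions of mine; it substitutes a generative add-one-smoothed classifier for the discriminative classifier the abstract names, and every function and label is hypothetical.

```python
import itertools
import math
from collections import Counter

# Sketch: synthesize artificial query examples from documents, then train
# a classifier that maps a user query to the most likely document/class.
# (A generative stand-in for the abstract's discriminative classifier.)

def make_artificial_queries(docs, query_len=2):
    """Every query_len-word combination of a document becomes one query."""
    examples = []
    for label, text in docs.items():
        for combo in itertools.combinations(text.split(), query_len):
            examples.append((list(combo), label))
    return examples

class QueryClassifier:
    """Toy add-one-smoothed classifier trained on artificial queries."""
    def train(self, examples):
        self.counts = {}
        for words, label in examples:
            self.counts.setdefault(label, Counter()).update(words)
        self.vocab = {w for c in self.counts.values() for w in c}

    def predict(self, query_words):
        def log_score(label):
            counter = self.counts[label]
            total = sum(counter.values())
            return sum(math.log((counter[w] + 1) / (total + len(self.vocab)))
                       for w in query_words)
        return max(self.counts, key=log_score)
```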
-
Publication number: 20170345418
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
Type: Application
Filed: June 12, 2017
Publication date: November 30, 2017
Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
-
Patent number: 9728183
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
Type: Grant
Filed: November 10, 2015
Date of Patent: August 8, 2017
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
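The pooling step the abstract describes reduces a variable-length sequence of per-frame features to one fixed-length segment vector per strategy; each pooled vector would then feed its own segmental classifier, and varying the pooling operation is what diversifies the ensemble. A minimal sketch, with illustrative function names:

```python
# Sketch of temporal pooling: frames in, one fixed-length vector out per
# pooling strategy, regardless of how many frames the segment contains.

def mean_pool(frames):
    """Average each feature dimension over all frames in the segment."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

def max_pool(frames):
    """Keep each feature dimension's maximum over the segment."""
    return [max(col) for col in zip(*frames)]

def segment_vectors(frames, strategies):
    """Apply every pooling strategy; one segment-level vector per strategy."""
    return {name: fn(frames) for name, fn in strategies.items()}
```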
-
Patent number: 9679561
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
Type: Grant
Filed: March 28, 2011
Date of Patent: June 13, 2017
Assignee: Nuance Communications, Inc.
Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
-
Patent number: 9680877
Abstract: A system to detect anomalies in internet protocol (IP) flows uses a set of machine-learning (ML) rules that can be applied in real time at the IP flow level. A communication network has a large number of routers equipped with flow monitoring capability. A flow collector collects flow data from the routers throughout the communication network and provides them to a flow classifier. At the same time, a limited number of locations in the network monitor data packets and generate alerts based on packet data properties. The packet alerts and the flow data are provided to a machine learning system that detects correlations between the packet-based alerts and the flow data to thereby generate a series of flow-level alerts. These rules are provided to the flow time classifier. Over time, the new packet alerts and flow data are used to provide updated rules generated by the machine learning system.
Type: Grant
Filed: December 15, 2015
Date of Patent: June 13, 2017
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Nicholas Duffield, Patrick Haffner, Balachander Krishnamurthy, Haakon Andreas Ringberg
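The correlation idea can be sketched simply: packet monitors cover only a few locations, so their alerts label flow records, and a flow-level rule is learned that can then run network-wide in real time. The single-feature threshold rule ("decision stump") and the field names below are hypothetical simplifications, not the patented ML system.

```python
# Sketch: transfer packet-level alert knowledge to the flow level by
# labeling flows with packet alerts, learning a flow-only rule, and then
# classifying new flows without any packet inspection.

def label_flows(flows, alert_ips):
    """Flows from sources flagged by packet-level alerts become positives."""
    return [(flow, flow["src"] in alert_ips) for flow in flows]

def learn_stump(labeled, feature):
    """Pick the threshold on one flow feature that best matches the labels."""
    best_threshold, best_accuracy = None, -1.0
    for t in sorted({flow[feature] for flow, _ in labeled}):
        accuracy = sum((flow[feature] >= t) == y
                       for flow, y in labeled) / len(labeled)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = t, accuracy
    return best_threshold

def classify_flow(flow, feature, threshold):
    """The learned rule needs flow data only, so it can run everywhere."""
    return flow[feature] >= threshold
```

Re-running the learning step as new packet alerts arrive is what keeps the flow-level rules current, matching the abstract's update loop.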
-
Publication number: 20170004216
Abstract: A method and apparatus for using a classifier for processing a query are disclosed. For example, the method receives a query from a user, and processes the query to locate one or more documents in accordance with a search engine having a discriminative classifier, wherein the discriminative classifier is trained with a plurality of artificial query examples. The method then presents a result of the processing to the user.
Type: Application
Filed: September 19, 2016
Publication date: January 5, 2017
Inventors: Ilija Zeljkovic, Srinivas Bangalore, Patrick Haffner, Jay Wilpon
-
Patent number: 9449100
Abstract: A method and apparatus for using a classifier for processing a query are disclosed. For example, the method receives a query from a user, and processes the query to locate one or more documents in accordance with a search engine having a discriminative classifier, wherein the discriminative classifier is trained with a plurality of artificial query examples. The method then presents a result of the processing to the user.
Type: Grant
Filed: August 4, 2014
Date of Patent: September 20, 2016
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Ilija Zeljkovic, Srinivas Bangalore, Patrick Haffner, Jay Wilpon
-
Patent number: 9349102
Abstract: A traffic classifier has a plurality of binary classifiers, each associated with one of a plurality of calibrators. Each calibrator is trained to translate an output score of its associated binary classifier into an estimated class probability value using a fitted logistic curve; each estimated class probability value indicates the probability that the packet flow on which the output score is based belongs to the traffic class of the binary classifier associated with that calibrator. A classifier training system is configured to generate training data based on network information gained using flow and packet sampling methods. In some embodiments, the classifier training system is configured to generate reduced training data sets, one for each traffic class, by removing training data related to traffic not associated with that traffic class.
Type: Grant
Filed: September 14, 2012
Date of Patent: May 24, 2016
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Subhabrata Sen, Nicholas Duffield, Patrick Haffner, Jeffrey Erman, Yu Jin
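The calibration step described here is logistic-curve score calibration (commonly known as Platt scaling): a raw classifier score s is mapped to a probability 1 / (1 + exp(-(a*s + b))) with fitted parameters. A minimal sketch, where the tiny batch-gradient fit is an illustrative stand-in for a proper maximum-likelihood solver:

```python
import math

# Sketch of per-classifier calibration: fit a logistic curve that turns
# a binary classifier's raw output score into a class probability.

def calibrated_prob(score, a, b):
    """Fitted logistic curve translating a raw score into a probability."""
    return 1.0 / (1.0 + math.exp(-(a * score + b)))

def fit_logistic(scores, labels, lr=0.1, steps=2000):
    """Fit curve parameters (a, b) by minimizing negative log-likelihood."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = calibrated_prob(s, a, b)
            grad_a += (p - y) * s
            grad_b += (p - y)
        a -= lr * grad_a / len(scores)
        b -= lr * grad_b / len(scores)
    return a, b
```

Calibrated probabilities make scores from different binary classifiers directly comparable, which is what a one-vs-rest traffic classifier needs before it can pick the most likely class.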
-
Patent number: 9323745
Abstract: Disclosed are systems, methods, and computer-readable media for performing translations from a source language to a target language. The method comprises receiving a source phrase, generating a target bag of words based on a global lexical selection of words that loosely couples the source words/phrases and target words/phrases, and reconstructing a target phrase or sentence by considering all permutations of words with a conditional probability greater than a threshold.
Type: Grant
Filed: July 21, 2014
Date of Patent: April 26, 2016
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Srinivas Bangalore, Patrick Haffner, Stephan Kanthak
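The two-stage idea in this abstract can be sketched directly: first keep every target word whose conditional probability given the whole source sentence exceeds a threshold (global lexical selection), then recover word order by scoring permutations of the bag with a target language model. The probability tables below are toy, hand-set values, and a bigram model is my illustrative choice of scorer.

```python
import itertools

# Sketch: bag-of-words lexical selection followed by permutation search.

P_TARGET_GIVEN_SOURCE = {  # toy p(target word appears | source sentence)
    "the": 0.9, "cat": 0.8, "sleeps": 0.7, "dog": 0.1,
}
BIGRAM = {("<s>", "the"): 0.5, ("the", "cat"): 0.4, ("cat", "sleeps"): 0.3}

def select_bag(threshold=0.5):
    """Global lexical selection: keep words above the probability threshold."""
    return [w for w, p in P_TARGET_GIVEN_SOURCE.items() if p > threshold]

def best_order(bag, unseen=1e-4):
    """Score every permutation of the bag under the target bigram model."""
    def score(seq):
        total, prev = 1.0, "<s>"
        for w in seq:
            total *= BIGRAM.get((prev, w), unseen)
            prev = w
        return total
    return max(itertools.permutations(bag), key=score)
```

Decoupling "which words" from "what order" is the loose coupling the abstract mentions: selection looks at the whole source sentence at once, and ordering is left entirely to the target-side model.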
-
Publication number: 20160105462
Abstract: A system to detect anomalies in internet protocol (IP) flows uses a set of machine-learning (ML) rules that can be applied in real time at the IP flow level. A communication network has a large number of routers equipped with flow monitoring capability. A flow collector collects flow data from the routers throughout the communication network and provides them to a flow classifier. At the same time, a limited number of locations in the network monitor data packets and generate alerts based on packet data properties. The packet alerts and the flow data are provided to a machine learning system that detects correlations between the packet-based alerts and the flow data to thereby generate a series of flow-level alerts. These rules are provided to the flow time classifier. Over time, the new packet alerts and flow data are used to provide updated rules generated by the machine learning system.
Type: Application
Filed: December 15, 2015
Publication date: April 14, 2016
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Nicholas Duffield, Patrick Haffner, Balachander Krishnamurthy, Haakon Andreas Ringberg
-
Publication number: 20160078869
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Application
Filed: November 30, 2015
Publication date: March 17, 2016
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
-
Publication number: 20160063991
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
Type: Application
Filed: November 10, 2015
Publication date: March 3, 2016
Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
-
Patent number: 9258217
Abstract: A system to detect anomalies in internet protocol (IP) flows uses a set of machine-learning (ML) rules that can be applied in real time at the IP flow level. A communication network has a large number of routers that can be equipped with flow monitoring capability. A flow collector collects flow data from the routers throughout the communication network and provides them to a flow classifier. At the same time, a limited number of locations in the network monitor data packets and generate alerts based on packet data properties. The packet alerts and the flow data are provided to a machine learning system that detects correlations between the packet-based alerts and the flow data to thereby generate a series of flow-level alerts. These rules are provided to the flow time classifier. Over time, the new packet alerts and flow data are used to provide updated rules generated by the machine learning system.
Type: Grant
Filed: September 28, 2009
Date of Patent: February 9, 2016
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Nicholas Duffield, Patrick Haffner, Balachander Krishnamurthy, Haakon Andreas Ringberg
-
Patent number: 9218815
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Grant
Filed: November 24, 2014
Date of Patent: December 22, 2015
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky