Patents by Inventor Patrick Haffner
Patrick Haffner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9208778
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
Type: Grant
Filed: November 10, 2014
Date of Patent: December 8, 2015
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
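The PIU-SCU pipeline in this abstract can be illustrated with a minimal sketch. All function names, the windowed-mean frame feature, and the logistic stand-in classifier are illustrative assumptions, not taken from the patent; the point is the structure: one pooling interface unit per strategy, one segmental classification unit per PIU, and an ensemble that averages the diversified PIU-SCU scores.

```python
import math
import statistics

def extract_frame_features(signal, frame_size=3):
    # Hypothetical frame processor unit: emit one time-dependent
    # feature (the windowed mean) per frame of the input.
    return [sum(signal[i:i + frame_size]) / frame_size
            for i in range(len(signal) - frame_size + 1)]

# Pooling interface units: each pools the frame features with a
# different temporal strategy, which is what diversifies the ensemble.
POOLING_OPS = {
    "mean": statistics.mean,
    "max": max,
    "stdev": statistics.pstdev,
}

def logistic_scu(feature, weight=1.0, bias=0.0):
    # Stand-in segmental classification unit: maps a pooled feature
    # to a class score in (0, 1).
    return 1.0 / (1.0 + math.exp(-(weight * feature + bias)))

def ensemble_score(signal):
    # Each PIU feeds its dedicated SCU; the PIU-SCU outputs are then
    # combined (here, simply averaged) into one phonetic class score.
    frames = extract_frame_features(signal)
    scores = [logistic_scu(pool(frames)) for pool in POOLING_OPS.values()]
    return sum(scores) / len(scores)
```

A real system would use trained classifiers per pooling strategy; the averaging step stands in for the ensemble combination the abstract describes.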
-
Publication number: 20150081302
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Application
Filed: November 24, 2014
Publication date: March 19, 2015
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
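The control flow here (unique challenge per request, then verification against a dynamic image feature) can be sketched briefly. The hash-based challenge selection, the per-frame lip-opening values, and the distance threshold are all illustrative assumptions; the patent does not specify these mechanisms.

```python
import hashlib
import math

def text_challenge(request_id, phrases):
    # Pick a challenge phrase deterministically from the request id,
    # so each request gets its own challenge (hypothetical scheme).
    digest = int(hashlib.sha256(request_id.encode()).hexdigest(), 16)
    return phrases[digest % len(phrases)]

def movement_distance(observed, enrolled):
    # Compare dynamic image features -- e.g. per-frame lip-opening
    # measurements -- against the speaker's enrolled movement pattern.
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(observed, enrolled)))

def verify_speaker(observed, enrolled, threshold=0.5):
    # Accept the speaker when the observed movement pattern is close
    # enough to the enrolled one (threshold is an assumed parameter).
    return movement_distance(observed, enrolled) <= threshold
```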
-
Publication number: 20150058012
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
Type: Application
Filed: November 10, 2014
Publication date: February 26, 2015
Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
-
Patent number: 8935188
Abstract: In one embodiment, the present disclosure is a method and apparatus for classifying applications using the collective properties of network traffic. In one embodiment, a method for classifying traffic in a communication network includes receiving a traffic activity graph, the traffic activity graph comprising a plurality of nodes interconnected by a plurality of edges, where each of the nodes represents an endpoint associated with the communication network and each of the edges represents traffic between a corresponding pair of the nodes, generating an initial set of inferences as to an application class associated with each of the edges, based on at least one measured statistic related to at least one traffic flow in the communication network, and refining the initial set of inferences based on a spatial distribution of the traffic flows, to produce a final traffic activity graph.
Type: Grant
Filed: August 17, 2010
Date of Patent: January 13, 2015
Assignees: AT&T Intellectual Property I, L.P.; Regents of the University of Minnesota
Inventors: Nicholas Duffield, Patrick Haffner, Yu Jin, Subhabrata Sen, Zhi-Li Zhang
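The two-stage process in this abstract (initial per-edge inference from a measured statistic, then spatial refinement over the graph) can be sketched as follows. The port-to-class lookup and the neighbour-majority vote are illustrative stand-ins for whatever statistic and refinement rule an actual system would use.

```python
from collections import Counter

def initial_inference(edges, port_to_class):
    # Initial per-edge guess from a measured flow statistic
    # (here, a hypothetical server-port lookup).
    return {edge: port_to_class.get(port, "unknown")
            for edge, port in edges.items()}

def refine(labels):
    # Spatial refinement: relabel "unknown" edges using the majority
    # class among edges that share an endpoint with them.
    refined = dict(labels)
    for edge, label in labels.items():
        if label != "unknown":
            continue
        a, b = edge
        neighbours = [labels[e] for e in labels
                      if e != edge and (a in e or b in e)
                      and labels[e] != "unknown"]
        if neighbours:
            refined[edge] = Counter(neighbours).most_common(1)[0][0]
    return refined
```

The refined labelling plays the role of the "final traffic activity graph" in the abstract.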
-
Publication number: 20140372404
Abstract: A method and apparatus for using a classifier for processing a query are disclosed. For example, the method receives a query from a user, and processes the query to locate one or more documents in accordance with a search engine having a discriminative classifier, wherein the discriminative classifier is trained with a plurality of artificial query examples. The method then presents a result of the processing to the user.
Type: Application
Filed: August 4, 2014
Publication date: December 18, 2014
Inventors: Ilija Zeljkovic, Srinivas Bangalore, Patrick Haffner, Jay Wilpon
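The key idea, training a classifier on artificial query examples rather than on real user queries, can be sketched minimally. Generating queries as word subsets of each document and scoring by term overlap are illustrative assumptions; the patent's actual query-synthesis and classifier are not specified here.

```python
from collections import defaultdict
from itertools import combinations

def artificial_queries(doc_words, min_len=2):
    # Synthesize training queries from a document's own words,
    # useful when no real user queries exist yet (illustrative scheme).
    return [" ".join(c) for n in range(min_len, len(doc_words) + 1)
            for c in combinations(doc_words, n)]

def train(docs):
    # docs: {doc_id: [words]} -> per-document word counts accumulated
    # over all artificial queries.
    model = defaultdict(lambda: defaultdict(int))
    for doc_id, words in docs.items():
        for query in artificial_queries(words):
            for w in query.split():
                model[doc_id][w] += 1
    return model

def classify(model, query):
    # Pick the document whose artificial queries best cover the
    # user's query terms (a crude stand-in for a trained classifier).
    scores = {d: sum(counts.get(w, 0) for w in query.split())
              for d, counts in model.items()}
    return max(scores, key=scores.get)
```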
-
Publication number: 20140358537
Abstract: Disclosed herein are systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech. The disclosure includes recognizing received speech with a collection of domain-specific speech recognizers, determining a speech recognition confidence for each of the speech recognition outputs, selecting speech recognition candidates based on a respective speech recognition confidence for each speech recognition output, and combining selected speech recognition candidates to generate text based on the combination.
Type: Application
Filed: August 14, 2014
Publication date: December 4, 2014
Inventors: Mazin Gilbert, Srinivas Bangalore, Patrick Haffner, Robert Bell
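The select-then-combine steps in this abstract can be sketched with a simplified, ROVER-style word-level vote. The confidence threshold and the assumption of roughly aligned hypotheses are illustrative simplifications; real systems align hypotheses before voting.

```python
def select_candidates(outputs, threshold=0.5):
    # outputs: list of (hypothesis_words, confidence) pairs, one per
    # domain-specific recognizer; keep only the confident ones.
    return [(words, conf) for words, conf in outputs if conf >= threshold]

def combine(candidates):
    # Confidence-weighted vote per word position (a simplified
    # ROVER-style combination; assumes hypotheses are aligned).
    length = max(len(words) for words, _ in candidates)
    result = []
    for i in range(length):
        votes = {}
        for words, conf in candidates:
            if i < len(words):
                votes[words[i]] = votes.get(words[i], 0.0) + conf
        result.append(max(votes, key=votes.get))
    return " ".join(result)
```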
-
Patent number: 8897500
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Grant
Filed: May 5, 2011
Date of Patent: November 25, 2014
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
-
Patent number: 8886533
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
Type: Grant
Filed: October 25, 2011
Date of Patent: November 11, 2014
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
-
Publication number: 20140330552
Abstract: Disclosed are systems, methods, and computer-readable media for performing translations from a source language to a target language. The method comprises receiving a source phrase, generating a target bag of words based on a global lexical selection of words that loosely couples the source words/phrases and target words/phrases, and reconstructing a target phrase or sentence by considering all permutations of words with a conditional probability greater than a threshold.
Type: Application
Filed: July 21, 2014
Publication date: November 6, 2014
Inventors: Srinivas Bangalore, Patrick Haffner, Stephan Kanthak
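The two phases named in this abstract, global lexical selection into a target bag of words and then reconstruction by permutation, can be sketched on a toy example. The lexicon entries, the bigram scores, and the 0.5 threshold are invented for illustration only.

```python
from itertools import permutations

# Hypothetical lexicon: P(target word | source word present in the sentence).
LEXICON = {
    "je": {"i": 0.9}, "mange": {"eat": 0.8},
    "une": {"an": 0.6, "a": 0.3}, "pomme": {"apple": 0.9},
}
# Hypothetical bigram scores used to order the bag of words.
BIGRAMS = {("i", "eat"): 0.9, ("eat", "an"): 0.8, ("an", "apple"): 0.9}

def target_bag(source_words, threshold=0.5):
    # Global lexical selection: collect every target word whose
    # conditional probability given the source exceeds the threshold.
    bag = set()
    for s in source_words:
        for t, p in LEXICON.get(s, {}).items():
            if p > threshold:
                bag.add(t)
    return bag

def reconstruct(bag):
    # Reconstruct a sentence by scoring permutations of the bag
    # with the bigram model and keeping the best one.
    def score(seq):
        return sum(BIGRAMS.get(pair, 0.0) for pair in zip(seq, seq[1:]))
    return " ".join(max(permutations(sorted(bag)), key=score))
```

Note the loose coupling: selection looks only at which source words are present, not at their positions; word order is recovered entirely in the reconstruction step.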
-
Patent number: 8880614
Abstract: A method and apparatus for providing protection for mail servers in packet networks are disclosed. For example, the present method detects that a mail server is reaching its processing limit. The method then selectively limits connections to the mail server from a plurality of source nodes based on a spam index associated with each of the source nodes.
Type: Grant
Filed: November 13, 2006
Date of Patent: November 4, 2014
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Albert G. Greenberg, Patrick Haffner, Subhabrata Sen, Oliver Spatscheck
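The admission policy described here, limit connections by spam index only as the server nears its processing limit, can be sketched in a few lines. The 0-to-1 spam-index scale, the 80% load trigger, and the cutoff value are assumed parameters, not from the patent.

```python
def admit_connection(source, load, capacity, spam_index, cutoff=0.7):
    # spam_index: per-source score in [0, 1], higher = spammier
    # (hypothetical scale). Under light load, admit every source;
    # as the mail server nears its processing limit, selectively
    # refuse connections from the spammiest sources.
    if load < 0.8 * capacity:
        return True
    return spam_index.get(source, 1.0) < cutoff
```

Unknown sources default to the worst index here, a conservative choice under overload; a real deployment would tune that default.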
-
Patent number: 8812321
Abstract: Disclosed herein are systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech. The disclosure includes recognizing received speech with a collection of domain-specific speech recognizers, determining a speech recognition confidence for each of the speech recognition outputs, selecting speech recognition candidates based on a respective speech recognition confidence for each speech recognition output, and combining selected speech recognition candidates to generate text based on the combination.
Type: Grant
Filed: September 30, 2010
Date of Patent: August 19, 2014
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Mazin Gilbert, Srinivas Bangalore, Patrick Haffner, Robert Bell
-
Patent number: 8799279
Abstract: A method and apparatus for using a classifier for processing a query are disclosed. For example, the method receives a query from a user, and processes the query to locate one or more documents in accordance with a search engine having a discriminative classifier, wherein the discriminative classifier is trained with a plurality of artificial query examples. The method then presents a result of the processing to the user.
Type: Grant
Filed: December 31, 2008
Date of Patent: August 5, 2014
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Ilija Zeljkovic, Srinivas Bangalore, Patrick Haffner, Jay Wilpon
-
Patent number: 8788258
Abstract: Disclosed are systems, methods, and computer-readable media for performing translations from a source language to a target language. The method comprises receiving a source phrase, generating a target bag of words based on a global lexical selection of words that loosely couples the source words/phrases and target words/phrases, and reconstructing a target phrase or sentence by considering all permutations of words with a conditional probability greater than a threshold.
Type: Grant
Filed: March 15, 2007
Date of Patent: July 22, 2014
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Srinivas Bangalore, Patrick Haffner, Stephan Kanthak
-
Publication number: 20130103402
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
Type: Application
Filed: October 25, 2011
Publication date: April 25, 2013
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
-
Publication number: 20130013542
Abstract: A traffic classifier has a plurality of binary classifiers, each associated with one of a plurality of calibrators. Each calibrator is trained to translate an output score of the associated binary classifier into an estimated class probability value using a fitted logistic curve, each estimated class probability value indicating a probability that the packet flow on which the output score is based belongs to the traffic class associated with the binary classifier associated with the calibrator. The classifier training system is configured to generate training data based on network information gained using flow and packet sampling methods. In some embodiments, the classifier training system is configured to generate reduced training data sets, one for each traffic class, reducing the training data related to traffic not associated with the traffic class.
Type: Application
Filed: September 14, 2012
Publication date: January 10, 2013
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Subhabrata Sen, Nicholas Duffield, Patrick Haffner, Jeffrey Erman, Yu Jin
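The calibration step this abstract describes, a fitted logistic curve mapping a raw binary-classifier score to a class probability, is a Platt-scaling-style construction and can be sketched directly. The curve parameters below are assumed to be already fitted; the traffic classes and scores are invented for illustration.

```python
import math

class Calibrator:
    # Logistic curve (parameters assumed pre-fitted on held-out data)
    # that translates a raw binary-classifier score into an estimated
    # class probability, as in Platt scaling.
    def __init__(self, a=-1.5, b=0.0):
        self.a, self.b = a, b

    def probability(self, score):
        return 1.0 / (1.0 + math.exp(self.a * score + self.b))

def classify_flow(flow_scores, calibrators):
    # flow_scores: {traffic_class: raw score from that class's binary
    # classifier}. Calibrate each score, then pick the most probable class.
    probs = {cls: calibrators[cls].probability(s)
             for cls, s in flow_scores.items()}
    return max(probs, key=probs.get), probs
```

Calibrating each binary classifier separately is what makes the one-score-per-class outputs comparable when choosing the final label.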
-
Patent number: 8311956
Abstract: A traffic classifier has a plurality of binary classifiers, each associated with one of a plurality of calibrators. Each calibrator is trained to translate an output score of the associated binary classifier into an estimated class probability value using a fitted logistic curve, each estimated class probability value indicating a probability that the packet flow on which the output score is based belongs to the traffic class associated with the binary classifier associated with the calibrator. The classifier training system is configured to generate training data based on network information gained using flow and packet sampling methods. In some embodiments, the classifier training system is configured to generate reduced training data sets, one for each traffic class, reducing the training data related to traffic not associated with the traffic class.
Type: Grant
Filed: August 11, 2009
Date of Patent: November 13, 2012
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Subhabrata Sen, Nicholas Duffield, Patrick Haffner, Jeffrey Erman, Yu Jin
-
Publication number: 20120281885
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Application
Filed: May 5, 2011
Publication date: November 8, 2012
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
-
Publication number: 20120253799
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
Type: Application
Filed: March 28, 2011
Publication date: October 4, 2012
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
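The combine-then-tune idea in this abstract can be sketched with unigram language models: interpolate two existing domain models and tune the interpolation weight on a small in-domain sample, too small to train a fresh model from. The unigram representation, the grid search, and the smoothing floor are illustrative simplifications.

```python
import math

def interpolate(models, weights):
    # Combine existing domain models (here, unigram probability dicts)
    # into one model; 1e-6 is an assumed smoothing floor.
    vocab = set().union(*models)
    return {w: sum(wt * m.get(w, 1e-6) for wt, m in zip(weights, models))
            for w in vocab}

def log_likelihood(model, words):
    return sum(math.log(model.get(w, 1e-6)) for w in words)

def tune(models, sample, steps=11):
    # Tune the interpolation weight on the small in-domain sample by
    # grid search over [0, 1], maximizing the sample's log-likelihood.
    best_w, best_ll = 0.0, float("-inf")
    for i in range(steps):
        w = i / (steps - 1)
        ll = log_likelihood(interpolate(models, [w, 1.0 - w]), sample)
        if ll > best_ll:
            best_w, best_ll = w, ll
    return best_w
```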
-
Publication number: 20120084086
Abstract: Disclosed herein are systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech. The disclosure includes recognizing received speech with a collection of domain-specific speech recognizers, determining a speech recognition confidence for each of the speech recognition outputs, selecting speech recognition candidates based on a respective speech recognition confidence for each speech recognition output, and combining selected speech recognition candidates to generate text based on the combination.
Type: Application
Filed: September 30, 2010
Publication date: April 5, 2012
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Mazin Gilbert, Srinivas Bangalore, Patrick Haffner, Robert Bell
-
Method and Apparatus for Classifying Applications Using the Collective Properties of Network Traffic
Publication number: 20120047096
Abstract: In one embodiment, the present disclosure is a method and apparatus for classifying applications using the collective properties of network traffic. In one embodiment, a method for classifying traffic in a communication network includes receiving a traffic activity graph, the traffic activity graph comprising a plurality of nodes interconnected by a plurality of edges, where each of the nodes represents an endpoint associated with the communication network and each of the edges represents traffic between a corresponding pair of the nodes, generating an initial set of inferences as to an application class associated with each of the edges, based on at least one measured statistic related to at least one traffic flow in the communication network, and refining the initial set of inferences based on a spatial distribution of the traffic flows, to produce a final traffic activity graph.
Type: Application
Filed: August 17, 2010
Publication date: February 23, 2012
Inventors: Nicholas Duffield, Patrick Haffner, Yu Jin, Subhabrata Sen, Zhi-Li Zhang